DifferentialTesting


Installation

  1. Get Docker for your OS.
  2. Install Python 3.
  3. Install dnspython.
  4. Install named-compilezone.
    • On Windows, download the latest 64-bit BIND zip (BIND9.x.zip-win 64-bit) and unzip it. The unzipped directory should contain the named-compilezone.exe executable.
    • On other OSes, if named-compilezone cannot be installed successfully, download BIND9.x.tar.xz and decompress it.
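
A quick way to confirm the prerequisites are in place (an optional convenience sketch, not part of the repository's scripts; it assumes docker and named-compilezone should be on your PATH):

    import importlib.util
    import shutil

    # Report whether each prerequisite above is reachable from this environment.
    checks = {
        "docker": shutil.which("docker") is not None,
        "dnspython": importlib.util.find_spec("dns") is not None,
        "named-compilezone": shutil.which("named-compilezone") is not None,
    }
    for tool, ok in checks.items():
        print(f"{tool}: {'found' if ok else 'MISSING'}")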

Running Tests

Please note:

  • All commands mentioned in this file must be run from the DifferentialTesting directory and not from the repository root.
  • At least 16 GB of RAM is recommended for testing all eight implementations using Docker.

1. Docker Image Generation

Generate Docker images for the implementations using:

python3 Scripts/generate_docker_images.py 
CLICK to show all command-line options
usage: generate_docker_images.py [-h] [-l] [-b] [-n] [-k] [-p] [-c] [-y] [-m] [-t] [-e]

optional arguments:
-h, --help    show this help message and exit
-l, --latest  Build the images using latest code. (default: False)
-b            Disable Bind. (default: False)
-n            Disable Nsd. (default: False)
-k            Disable Knot. (default: False)
-p            Disable PowerDns. (default: False)
-c            Disable CoreDns. (default: False)
-y            Disable Yadifa. (default: False)
-m            Disable MaraDns. (default: False)
-t            Disable TrustDns. (default: False)
-e            Disable Technitium. (default: False)
  • By default, the images are built using the implementations' code as of around October 1st, 2020 (check the Readme for details). Pass the -l flag to use the latest code, but some images may fail to build if upstream dependencies or build steps have changed. Technitium is always built from the latest commit, irrespective of the -l flag.
  • Without the -l flag, the built images are tagged oct; for example, the built Bind image would be bind:oct. With the -l flag, the tag is latest. (A quick verification sketch follows this list.)
  • Note: Each Docker image consumes ~ 1-2 GB of disk space.
  • Est. Time: ~ 30 mins.
  • Expected Output: Docker images for the implementations.
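
To confirm the builds succeeded, the images can be listed with a short script like the one below (a sketch; only bind:oct is confirmed by the text above, so the remaining image names are assumptions following the same naming pattern):

    import subprocess

    # Check that each implementation's image exists with the expected tag.
    # Technitium is always built from the latest commit, so its tag may differ.
    TAG = "oct"  # use "latest" if the images were built with -l
    for image in ["bind", "nsd", "knot", "powerdns", "coredns",
                  "yadifa", "maradns", "trustdns", "technitium"]:
        image_id = subprocess.run(["docker", "images", "-q", f"{image}:{TAG}"],
                                  capture_output=True, text=True).stdout.strip()
        print(f"{image}:{TAG}", "OK" if image_id else "missing")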

2. Test Organization

Use either Zen-generated tests or custom tests to test the implementations.

Using Zen Tests

A. Using Pre-generated Tests from the Ferret Dataset
  • Clone the dataset repository as Results directory using
    git clone https://github.com/dns-groot/FerretDataset.git Results
  • Proceed to Step 3 to run the tests
B. Using Tests from Test Generation Module
  • Move the generated tests (Results directory) from the TestGenerator directory to the DifferentialTesting directory
  • Translate Zen tests with integer labels to English labels
    • Translate valid zone file tests using either the installed named-compilezone:

      python3 Scripts/translate_tests.py Results/ValidZoneFileTests

      or the named-compilezone executable directly:

      python3 Scripts/translate_tests.py Results/ValidZoneFileTests -c <path to the named-compilezone executable>
    • Translate invalid zone files using

      python3 Scripts/zone_translator.py Results/InvalidZoneFileTests
  • Est. Time: ~ 5 mins.
  • Expected Output:
    • For valid zone file tests, the translate_tests.py script creates three directories in the ValidZoneFileTests directory:
      • ZoneFiles directory with all the zone files translated to English labels and formatted with named-compilezone (a standalone formatting example follows this list).
      • Queries directory with the queries corresponding to each zone file.
      • TestsTotalInfo directory with all the information regarding a test in a single JSON file, for easy debugging.
    • For invalid zone files, the zone_translator.py script creates a ZoneFiles directory in each of the subdirectories (FalseCond_1, FalseCond_2, ...).
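
For reference, named-compilezone can also be run standalone to canonicalize a single zone file, which is the formatting step the translation script relies on (a sketch with hypothetical file names; -i none skips post-load integrity checks):

    import subprocess

    # Format raw.txt into formatted.txt, treating campus.edu. as the origin.
    subprocess.run(["named-compilezone", "-i", "none", "-o", "formatted.txt",
                    "campus.edu.", "raw.txt"], check=True)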

Using Custom Tests

  • Create a directory CustomTests (or Results) with a sub-directory ZoneFiles inside it.

  • Place the test zone files (.txt files in BIND zone-file format, using fully qualified domain names) in the ZoneFiles directory.

  • If you don't have any specific queries to test on these zone files, treat them as invalid zone files and proceed to Step 3, following the steps for testing with invalid zone files.

  • If you have queries, then for each test zone file (foo.txt) in ZoneFiles, create an identically named JSON file (foo.json) with the queries in a Queries directory (a sibling of ZoneFiles). A sketch for generating such files follows the format below.

    CLICK to reveal the queries format in foo.json
    [
        {
            "Query": {
                "Name": "campus.edu.",
                "Type": "SOA"
            }
        },
        {
            "Query": {
                "Name": "host1.campus.edu.",
                "Type": "A"
            }
        }
    ]
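
    If the zone files parse cleanly, query files in this format can also be generated mechanically, for example with dnspython (a sketch, not part of the repository; it emits one query per record in the zone and assumes the Queries directory already exists):

        import json

        import dns.rdatatype
        import dns.zone

        def queries_for_zone(zone_path, origin):
            """Build one {"Query": ...} entry per record in the zone file."""
            zone = dns.zone.from_file(zone_path, origin=origin, relativize=False)
            queries = []
            for name, node in zone.nodes.items():
                for rdataset in node.rdatasets:
                    queries.append({"Query": {"Name": str(name),
                                              "Type": dns.rdatatype.to_text(rdataset.rdtype)}})
            return queries

        # Hypothetical paths matching the layout described above.
        with open("CustomTests/Queries/foo.json", "w") as fh:
            json.dump(queries_for_zone("CustomTests/ZoneFiles/foo.txt",
                                       "campus.edu."), fh, indent=4)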

3. Testing Implementations

Test an implementation by comparing its responses against the expected responses from the Ferret Dataset, or compare multiple implementations' responses against each other in a differential testing setup.
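
At its core, each comparison sends the same query to every implementation under test and flags any disagreement. A minimal sketch of that per-query check with dnspython (hypothetical host ports; the actual scripts manage the containers and ports for you):

    import dns.message
    import dns.query

    def responses_differ(qname, qtype, port_a, port_b, host="127.0.0.1"):
        """Compare rcode and answer sections from two servers for one query
        (a simplified notion of response equality, for illustration)."""
        query = dns.message.make_query(qname, qtype)
        resp_a = dns.query.udp(query, host, port=port_a, timeout=3)
        resp_b = dns.query.udp(query, host, port=port_b, timeout=3)
        answers_a = sorted(str(rrset) for rrset in resp_a.answer)
        answers_b = sorted(str(rrset) for rrset in resp_b.answer)
        return resp_a.rcode() != resp_b.rcode() or answers_a != answers_b

    # e.g., two implementations exposed on hypothetical host ports 8000 and 8200:
    # print(responses_differ("host1.campus.edu.", "A", 8000, 8200))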

Testing with Valid Zone Files

Run the testing script from the DifferentialTesting directory as a Python module using:

usage: python3 -m Scripts.test_with_valid_zone_files [-h] [-path DIRECTORY_PATH]
                                                     [-id {1,2,3,4,5}] [-r START END] [-b]
                                                     [-n] [-k] [-p] [-c] [-y] [-m] [-t] [-e] [-l]

Runs tests with valid zone files on different implementations.
Either compares responses from multiple implementations with each other or uses an
expected response to flag differences (only when one implementation is passed for testing).

optional arguments:
  -h, --help            show this help message and exit
  -path DIRECTORY_PATH  The path to the directory containing ZoneFiles and either Queries or
                        ExpectedResponses directories.
                        (default: Results/ValidZoneFileTests/)
  -id {1,2,3,4,5}       Unique id for all the containers (useful when running comparison in
                        parallel). (default: 1)
  -r START END          The range of tests to compare. (default: All tests)
  -b                    Disable Bind. (default: False)
  -n                    Disable Nsd. (default: False)
  -k                    Disable Knot. (default: False)
  -p                    Disable PowerDns. (default: False)
  -c                    Disable CoreDns. (default: False)
  -y                    Disable Yadifa. (default: False)
  -m                    Disable MaraDns. (default: False)
  -t                    Disable TrustDns. (default: False)
  -e                    Disable Technitium. (default: False)
  -l, --latest          Test using latest image tag. (default: False)
  • Arguments -r and -id can be used to parallelize testing.

    CLICK to reveal details
    • Please note: Parallelize with caution, as each run can spin up as many as eight containers. Do not parallelize if the RAM is less than 64 GB when testing all eight implementations.
    • If there are 12,700 tests, then they can be split three-way as:
      python3 -m Scripts.test_with_valid_zone_files -id 1 -r 0    4000
      python3 -m Scripts.test_with_valid_zone_files -id 2 -r 4000 8000
      python3 -m Scripts.test_with_valid_zone_files -id 3 -r 8000 13000
      
  • The default host ports used for testing are [8000, 8100, ..., 8700]*id; these can be changed by modifying the get_ports function in the Python script before running it (the documented default is sketched after this list).

  • Est. Time: ~ 36 hours (😞) with no parallelization for the 12,673 Zen-generated tests. Yadifa slows down testing significantly because it does not reload the next zone file quickly, and the script has to wait a few seconds each time that happens.

  • Expected Output: Creates a Differences directory in the input directory to store the responses for each query where the implementations returned different responses.
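
  For reference, the documented default port scheme corresponds to something like the following (a reconstruction for illustration; the authoritative version is the get_ports function inside the script):

      def get_ports(container_id):
          """Default host ports [8000, 8100, ..., 8700] scaled by the run id,
          so parallel runs (-id 1, 2, ...) use disjoint port ranges."""
          return [(8000 + 100 * i) * container_id for i in range(8)]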

Testing with Invalid Zone Files

  • Only four implementations (Bind, Nsd, Knot, PowerDNS) are supported, as these have a mature zone-file preprocessor available.

  • First, run the preprocessor_checks.py script to check all the zone files with each implementation's preprocessor.

    python3 Scripts/preprocessor_checks.py
    CLICK to show all command-line options
    usage: preprocessor_checks.py [-h] [-path DIRECTORY_PATH] [-id {1,2,3,4,5}]
                                  [-b] [-n] [-k] [-p] [-l]
    
    optional arguments:
    -h, --help            show this help message and exit
    -path DIRECTORY_PATH  The path to the directory containing ZoneFiles; looks for ZoneFiles
                          directory recursively. (default: Results/InvalidZoneFileTests/)
    -id {1,2,3,4,5}       Unique id for all the containers (default: 1)
    -b                    Disable Bind. (default: False)
    -n                    Disable Nsd. (default: False)
    -k                    Disable Knot. (default: False)
    -p                    Disable PowerDns. (default: False)
    -l, --latest          Test using latest image tag. (default: False)
    

    Creates a PreprocessorOutputs directory and records whether each implementation's preprocessor accepts or rejects each zone file, along with the explanation for any rejection.
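
    Conceptually, each preprocessor check runs the implementation's zone checker on the file and records the verdict. A local stand-in using BIND's named-checkzone (an illustration only; the script itself drives the preprocessors inside the Docker containers):

        import subprocess

        def bind_accepts(origin, zone_path):
            """Return (accepted, explanation) for one zone file, per BIND."""
            result = subprocess.run(["named-checkzone", origin, zone_path],
                                    capture_output=True, text=True)
            return result.returncode == 0, result.stdout + result.stderr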

  • Run the testing script from the DifferentialTesting directory as a Python module using:

    python3 -m Scripts.test_with_invalid_zone_files
    CLICK to show all command-line options
    usage: python3 -m Scripts.test_with_invalid_zone_files [-h] [-path DIRECTORY_PATH]
                                                           [-id {1,2,3,4,5}] [-b] [-n] [-k] [-p] [-l]
    
    Runs tests with invalid zone files on different implementations.
    Generates queries using GRoot equivalence classes.
    Either compares responses from multiple implementations with each other or uses an
    expected response to flag differences (only when one implementation is passed for testing).
    
    optional arguments:
    -h, --help            show this help message and exit
    -path DIRECTORY_PATH  The path to the directory containing ZoneFiles and PreprocessorOutputs
                          directories; looks for those two directories recursively
                          (default: Results/InvalidZoneFileTests/)
    -id {1,2,3,4,5}       Unique id for all the containers (default: 1)
    -b                    Disable Bind. (default: False)
    -n                    Disable Nsd. (default: False)
    -k                    Disable Knot. (default: False)
    -p                    Disable PowerDns. (default: False)
    -l, --latest          Test using latest image tag. (default: False)
    
  • Est. Time: ~ 4 hours for the 900 Zen-generated invalid zone files with a maximum length of 4.

  • Expected Output: Creates two directories:
    • EquivalenceClassNames directory to store the query equivalence class names generated by GRoot for each of the test zone files.
    • Differences directory to store the responses for each query where the implementations returned different responses.

4. Triaging

Since there are often many more test failures than bugs (for example, a single bug can cause multiple tests to fail), we triage the tests in the Differences directory by creating a hybrid fingerprint for each test, which combines information from the test's path in the Zen model (if available) with the results of differential testing, and then grouping the tests by fingerprint for user inspection.
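
The grouping step then amounts to bucketing tests by their fingerprint. A simplified sketch of the idea (the tuple below is an illustrative stand-in; the real fingerprint construction lives in Scripts/triaging):

    from collections import defaultdict

    def group_by_fingerprint(tests):
        """tests: iterable of (test_name, model_case, partition), where the
        partition groups implementations that returned identical responses."""
        groups = defaultdict(list)
        for name, model_case, partition in tests:
            fingerprint = (model_case,
                           tuple(sorted(tuple(sorted(group)) for group in partition)))
            groups[fingerprint].append(name)
        return groups

    # Example: the first two (hypothetical) tests share a fingerprint.
    tests = [
        ("test_001", "case_17", [{"bind", "nsd"}, {"knot"}]),
        ("test_002", "case_17", [{"bind", "nsd"}, {"knot"}]),
        ("test_003", "case_03", [{"bind"}, {"nsd", "knot"}]),
    ]
    for fingerprint, names in group_by_fingerprint(tests).items():
        print(len(names), names)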

Fingerprint and group the tests using:

python3 -m Scripts.triaging

CLICK to show all command-line options
usage: python3 -m Scripts.triaging [-h] [-path DIRECTORY_PATH]

Fingerprint and group the tests that resulted in differences based on the model case (for valid zone
files) as well as the unique implementations in each group from the responses.
For invalid zone files, the tests are already separated into different directories based on the
condition violated. Therefore, only the unique implementations in each group are used.

optional arguments:
  -h, --help            show this help message and exit
  -path DIRECTORY_PATH  The path to the directory containing Differences directory.
                        Searches recursively (default: Results/)
  • Est Time: ~ 2 mins.
  • Expected Output:
    • Creates a Fingerprints.json file in each directory that contains a Differences directory.
    • Fingerprints.json has a Summary section, which lists how many tests are in each group, and a Details section, which lists the tests in each group.
    • When all 8 implementations are tested using the oct-tagged Docker images on the 12,673 tests generated with length limit 4, Ferret found more than one response in roughly 8,200 tests. Fingerprinting and grouping these tests with the above command resulted in roughly 75 unique fingerprints. For 24 of these fingerprints there is only one test with that fingerprint, while one fingerprint has roughly 1,890 tests.
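
To skim the output, the Summary section can be sorted by group size (a sketch; the exact schema of Fingerprints.json is an assumption based on the description above):

    import json

    # Print the largest fingerprint groups first, assuming Summary maps each
    # fingerprint to its test count.
    with open("Results/ValidZoneFileTests/Fingerprints.json") as fh:
        fingerprints = json.load(fh)
    for fingerprint, count in sorted(fingerprints["Summary"].items(),
                                     key=lambda item: item[1], reverse=True)[:5]:
        print(count, fingerprint)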