Note: This documentation is for the legacy command-line tool odasa.
The final version was released in May 2020. Support for this tool expires in May 2021.

For documentation on the new generation CodeQL CLI, see CodeQL CLI.
In particular, you may find the Notes for legacy QL CLI users useful in planning your migration.



This topic describes how to set up regression testing of custom queries using the qltest command, which can help ensure that your queries behave as expected before using them in Semmle Core analysis.

Task overview

Semmle analysis uses a simple test framework to provide automated regression testing of queries. You can use this to define regression tests for any query.

Setting up tests for custom queries

You can use the qltest command to test one or more custom queries that are located in a specified directory or within its subdirectories. When testing queries, you must ensure that you include the following files:

  • The queries to test, written in QL. Each query can be specified either by a QL reference file (a .qlref file) that defines the location of the query to run, or by including the query itself (a .ql file). If you use a .qlref file, you must make sure that the location of the .ql file is defined relative to the .qlref file. 
  • The source code that you want to extract and run your queries against. This should consist of files containing examples of the code that the queries are designed to identify. The files should use conventional extensions for the language so that the qltest command can identify how to process the code for the test.
  • Optionally, you may also include XML files. They will be extracted if present, but will not affect the choice of extractor to use or the library to import.
  • Optionally, the results you expect when you run the queries on the source code. If you include the results, they must be in a file with the extension .expected, and the format of the output must match the format generated by qltest and runQuery. You should only include one .expected file per query, and the base file name of each .expected file must match the file name of the .ql file that it corresponds to. If you don't include an .expected file, the test will fail but will generate an .actual results file that can be used in future tests, if necessary. Further details are included below.
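Put together, a minimal test directory might look like the following sketch. All names here (MyQuery, Sample.java, the directory layout) are hypothetical examples, not names from the tool itself:

```shell
# Sketch of a minimal test directory for one custom query.
# All file and directory names below are hypothetical.
mkdir -p tests/custom-query-tests

# QL reference file: the path inside it is relative to the .qlref
# file and uses forward slashes on every operating system.
printf '../../queries/custom/java/MyQuery.ql\n' \
  > tests/custom-query-tests/MyQuery.qlref

# Sample source code for the query to run against; the conventional
# .java extension tells qltest how to process the code.
printf 'class Sample { }\n' > tests/custom-query-tests/Sample.java

# Expected results: same base name as the query, .expected extension.
printf '| | stmt | example message |\n' \
  > tests/custom-query-tests/MyQuery.expected
```

The important conventions are the matching base names (MyQuery.qlref, MyQuery.expected) and the relative, forward-slash path inside the .qlref file.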

Testing queries

After you have set up your test directory, and ensured that all of the necessary files are included, you can test your custom queries using the following workflow:

  1. Run the qltest command to test the queries of interest. For example, if you have developed several custom queries in the tests\custom-query-tests directory and you want to test them all, you would run: odasa qltest tests\custom-query-tests. If you only want to test a single query from that directory, specify the individual file. For example: odasa qltest tests\custom-query-tests\custom-query1.ql
  2. Review the command-line output of the qltest tool and check that the correct queries were tested.
  3. The test results are reported in the .actual results file, which has the same format as an .expected results file. If you have defined an .expected file and the .actual file is an exact match, qltest reports that the test was successful. If the output does not match the .expected file, review the source code to check that you have included all of the correct files, and check the query to ensure that the select statement defines the output correctly.
  4. If you haven't specified an .expected file, then the test will fail. Review the .actual file and if the output matches your expectations, then change the file extension to .expected. If the output is unexpected, carry out the checks outlined in step 3 above.
  5. Run qltest again and check that the query passes the test.
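The pass/fail decision in step 3 is an exact comparison of the two files. You can reproduce the same check yourself with standard tools; the file names and contents below are made up for illustration:

```shell
# qltest passes a test only when the .actual output is an exact
# match for the .expected file; cmp performs the same byte-for-byte
# comparison. (File names and contents are hypothetical.)
printf '| | stmt | example message |\n' > custom-query1.expected
printf '| | stmt | example message |\n' > custom-query1.actual

if cmp -s custom-query1.expected custom-query1.actual; then
  echo "outputs match: test would pass"
else
  echo "outputs differ: test would fail"
fi
```

Running a diff like this on a failing test is often the quickest way to see exactly which result rows changed.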


The following short example shows how the qltest tool works. In this example, you test a QL query that looks for unnecessary if statements in some Java code. All details are included below so that you can try the example yourself. 

  1. Using a QL plugin or extension for your IDE, you can develop a custom query. For example:

    Example query, EmptyThen.ql
    import java
    from IfStmt ifstmt
    where ifstmt.getThen() instanceof EmptyStmt
    select ifstmt, "This if statement has an empty then."
  2. Save this query to a file named EmptyThen.ql in a directory with your other custom queries (for example: queries/custom/java/EmptyThen.ql).
  3. Create a directory to contain the test files associated with EmptyThen.ql (for example: tests/java/EmptyThen). Create a .qlref file in this directory and define the location of the query you want to test in this file. The location should be relative to the .qlref file and defined using forward slashes (/) on all operating systems. In this example, if the tests and queries directories share the same parent directory, the .qlref file should contain: ../../../queries/custom/java/EmptyThen.ql
  4. Before you can run the regression test, you need some Java code to run the query against. Save the following code sample to a Java source file in the test directory (for example: tests/java/EmptyThen):

    Example: Java source file
    class Test {
      public void problem(String arg) {
        if (arg.isEmpty());  // stray semicolon: the "then" branch is an empty statement
            System.out.println("Empty argument");
      }
      public void good(String arg) {
        if (arg.isEmpty()) {
            System.out.println("Empty argument");
        }
      }
    }
  5. Now that you have specified which query to test and the source files to analyze, you can run the qltest command using: odasa qltest tests\java\EmptyThen
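The steps above can be sketched as shell commands. The Java file name Test.java is an assumption (chosen to match the class name); the other paths come from the example, and the stray semicolon in problem() is what gives the query an empty then branch to report:

```shell
# Recreate the example layout from steps 1-4. The Java file name
# (Test.java) is an assumption based on the class name; other paths
# come from the example itself.
mkdir -p queries/custom/java tests/java/EmptyThen

cat > queries/custom/java/EmptyThen.ql <<'EOF'
import java
from IfStmt ifstmt
where ifstmt.getThen() instanceof EmptyStmt
select ifstmt, "This if statement has an empty then."
EOF

# The .qlref path is relative to the .qlref file; forward slashes
# are used on all operating systems.
printf '../../../queries/custom/java/EmptyThen.ql\n' \
  > tests/java/EmptyThen/EmptyThen.qlref

cat > tests/java/EmptyThen/Test.java <<'EOF'
class Test {
  public void problem(String arg) {
    if (arg.isEmpty());  // stray semicolon: empty "then" branch
        System.out.println("Empty argument");
  }
  public void good(String arg) {
    if (arg.isEmpty()) {
        System.out.println("Empty argument");
    }
  }
}
EOF
```

With this layout in place, odasa qltest tests\java\EmptyThen runs the test.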

Test output

Initial output of qltest command
Extracting sample code for tests...

[1/1] Extraction finished tests\java\EmptyThen (extraction: 13.9s)
Running tests...
[1/1] tests\java\EmptyThen\EmptyThen.qlref: FAILED (compilation: 3.6s, execution: 252ms, total: 4.2s)
Expected output file does not exist (tests\java\EmptyThen.expected)
Actual output:
| | stmt | This if statement has an empty then. |

Tests complete.
Some tests failed!
    tests\java\EmptyThen.qlref: Test failure (Expected output does not match)
0 tests passed, 1 test failed

Lines 5 and 6 of the command-line output above report that the test failed because the expected output file is missing; there is nothing to compare the actual test output against. Lines 7 and 8 report the actual output found when the query was run against the source code stored in the test directory. You should review this output and verify that it is as expected for this query and the code example. If the actual output matches your expectations, you can simply copy this information into an .expected file in the test directory (alternatively, rename the EmptyThen.actual file generated by the qltest tool to EmptyThen.expected).
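Promoting the actual output is a single rename. The sketch below simulates the .actual file that a first run leaves behind, then renames it; the directory and contents mirror the example above:

```shell
# Simulate the .actual file a first qltest run leaves behind, then
# rename it to .expected so the next run has a baseline to compare
# against. (Directory and contents mirror the example above.)
mkdir -p tests/java/EmptyThen
printf '| | stmt | This if statement has an empty then. |\n' \
  > tests/java/EmptyThen/EmptyThen.actual
mv tests/java/EmptyThen/EmptyThen.actual \
   tests/java/EmptyThen/EmptyThen.expected
```

Only rename the file after you have checked that every row in it is genuinely expected; otherwise you bake a wrong result into the baseline.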

Example: expected output file EmptyThen.expected
| | stmt | This if statement has an empty then. |

If you run the test again it should now succeed, and you should observe the following output in the command-line:

Example output: regression test set up correctly
Extracting sample code for tests...
[1/1] Extraction finished in tests\java\EmptyThen (extraction: 12.1s)
Running tests...
[1/1] tests\java\EmptyThen\EmptyThen.ql: OK (compilation: 3.2s, execution: 320ms, total: 3.8s)
Tests complete.
1 test passed, 0 tests failed

Debugging tests

In the example above, the initial test failed due to the lack of an .expected file in the test directory. In some cases, the reason for a query failing the test may be less straightforward to diagnose, and you may wish to investigate the effects of changing your test query. When a test fails, it leaves behind a test database in a new directory named <test-name>.testproj, where <test-name> is the name of your test directory. You can import this directly into an IDE that has a QL plugin or extension installed for further investigation. For further information on importing files generated by qltest into QL for Eclipse, see Importing QL test files in the QL for Eclipse help.

What next?

You may want to look at the qltest reference topic to find out more about the additional options that you can use to fine-tune testing. For example, you can:

  • Specify extra compiler options to use during the extraction of source code.
  • Split a large test into a number of smaller parts (slicing). This is not usually used during manual testing but can be helpful if you want to integrate tests into a concurrent build system.
  • Use the JUnit results format to allow interpretation of the results by Jenkins or other continuous build software.
