This topic describes how to set up regression testing of custom queries using the qltest command, which can help ensure that your queries behave as expected before using them in Semmle Core analysis.
Semmle analysis uses a simple test framework to provide automated regression testing of queries. You can use this to define regression tests for any query.
Setting up tests for custom queries
You can use the qltest command to test one or more custom queries that are located in a specified directory or within its subdirectories. When testing queries, you must ensure that you include the following files:
- The queries to test, written in QL. Each query can be specified either by a QL reference file (a .qlref file), which defines the location of a query to run, or you can include the query itself (a .ql file). If you are using a .qlref file, you must make sure that the location of the .ql file is defined relative to the .qlref file.
- The source code that you want to extract and run your queries against. This should consist of files containing examples of the code that the queries are designed to identify. The files should use conventional extensions for the language so that the qltest command can identify how to process the code for the test.
- Optionally, you may also include XML files. They will be extracted if present, but will not affect the choice of extractor to use or the library to import.
- Optionally, the results you expect when you run the queries on the source code. If you include the results, they must be in a file with the extension .expected, and the format of the output must match the format generated by runQuery. You should only include one .expected file per query, and the base file name of each .expected file must match the file name of the .ql file that it corresponds to. If you don't include an .expected file, the test will fail but will generate an .actual results file that can be used in future tests, if necessary. Further details are included below.
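Putting these requirements together, a test directory might be laid out as follows. The file names here are purely illustrative; only the extensions and the matching base names matter:

```
tests/custom-query-tests/
├── CustomQuery1.qlref      # points to the .ql file, relative to this file
├── CustomQuery1.expected   # expected results (optional on the first run)
├── Test.java               # sample code the query is designed to identify
└── data.xml                # optional XML file, extracted if present
```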
After you have set up your test directory, and ensured that all of the necessary files are included, you can test your custom queries using the following workflow:
- Run the qltest command to test the queries of interest. For example, if you have developed several custom queries, which are located in the tests\custom-query-tests directory, and you want to test them all, you would run: odasa qltest tests\custom-query-tests. If you only want to test a single query from that directory, the individual file must be specified. For example: odasa qltest tests\custom-query-tests\custom-query1.ql
- Review the command-line output of the qltest tool and check that the correct queries were tested.
- The test results are reported in the .actual results file, which has the same format as an .expected results file. If you have defined an .expected file, and the .actual file is an exact match, then qltest will report that the test was successful. If the output does not match .expected, then review the source code to check that you have included all of the correct files. Also check the query to ensure that the select statement defines the output correctly.
- If you haven't specified an .expected file, then the test will fail. Review the .actual file and, if the output matches your expectations, change the file extension to .expected. If the output is unexpected, carry out the checks outlined in step 3 above. Then run qltest again and check that the query passes the test.
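A possible end-to-end session for this workflow, using the hypothetical custom-query1.ql example from step 1 (ren is the Windows rename command; use mv on Unix-like systems):

```
REM First run: fails because no .expected file exists yet,
REM but writes custom-query1.actual alongside the test files.
odasa qltest tests\custom-query-tests\custom-query1.ql

REM After reviewing custom-query1.actual, promote it to the expected output.
ren tests\custom-query-tests\custom-query1.actual custom-query1.expected

REM Second run: the test should now pass.
odasa qltest tests\custom-query-tests\custom-query1.ql
```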
The following short example shows how the qltest tool works. In this example, you will test a QL query that looks for unnecessary if statements in some Java code. All details are included below so that you can try the example yourself.
Using a QL plugin or extension for your IDE, you can develop a custom query. For example:
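The query text itself does not appear in this extract; a minimal EmptyThen query, assuming the standard Java QL library (which provides the IfStmt and EmptyStmt classes), might look like this:

```ql
import java

// Find 'if' statements whose 'then' branch is an empty statement,
// i.e. the 'if' has no effect.
from IfStmt i
where i.getThen() instanceof EmptyStmt
select i, "This 'if' statement is redundant: its 'then' branch is empty."
```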
- Save this query to a file named EmptyThen.ql in a directory with your other custom queries.
- Create a directory to contain the test files associated with this query (for example: tests/java/EmptyThen). Create a .qlref file in this directory and define the location of the query you want to test in this file. The location should be relative to the .qlref file and defined using forward slashes (/) on all operating systems. In this example, if the tests and queries directories share the same parent directory, the .qlref file should contain:
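The contents of the .qlref file are not shown in this extract. Assuming the query was saved as queries/EmptyThen.ql and the test directory is tests/java/EmptyThen (both paths are illustrative), the relative path from the .qlref file would be:

```
../../../queries/EmptyThen.ql
```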
Before you can run the regression test, you need some Java code to run the query against. Save the following code sample to a file named Test.java in the test directory (for example: tests/java/EmptyThen).
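The code sample is not reproduced in this extract; a minimal fixture along these lines (the class and method names are illustrative) contains the empty then branch that the query should flag:

```java
public class Test {
    // A redundant 'if' statement: its 'then' branch is empty,
    // so the EmptyThen query should report it.
    static int increment(int i) {
        if (i > 10)
            ;   // empty 'then' branch
        return i + 1;
    }

    public static void main(String[] args) {
        System.out.println(increment(3));
    }
}
```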
- Now that you have specified which query to test and the source files to analyze, you can run the qltest command:
odasa qltest tests\java\EmptyThen
Lines 5 and 6 of the command-line output above report that the test failed because the expected output file is missing: there is nothing to compare the actual test output against. Lines 7 and 8 report the actual output found when the query was run against the source code stored in the test directory. You should review this output and verify that it is as expected for this query and the Test.java code example. If the actual output matches your expectations, then you can simply copy this information into an .expected file in the test directory (alternatively, rename the EmptyThen.actual file generated by the qltest tool to EmptyThen.expected).
If you run the test again it should now succeed, and you should observe the following output in the command-line:
In the example above, the initial test failed due to the lack of an .expected file in the test directory. In some cases, the reason for a query failing the test may be less straightforward to diagnose, and you may want to investigate the effects of changing your test query. When a test fails, it leaves behind a test database in a new directory whose name is based on <test-name>, where <test-name> is the name of your test directory. You can import this database directly into an IDE that has a QL plugin or extension installed for further investigation. For information on importing files generated by qltest into QL for Eclipse, see Importing QL test files in the QL for Eclipse help.
You may want to look at the qltest reference topic to find out more about the additional options that you can use to fine-tune testing. For example, you can:
- Specify extra compiler options to use during the extraction of source code.
- Split a large test into a number of smaller parts (slicing). This is not usually used during manual testing but can be helpful if you want to integrate tests into a concurrent build system.
- Use the JUnit results format to allow interpretation of the results by Jenkins or other continuous build software.