[Estimated Reading Time: 6 minutes]

Many years ago I created Smoketest and had a lot of fun doing so. But recently I ran into a problem when trying to use it for creating tests for my code that I am currently preparing for packaging. The problem was that it was just too darned ‘heavy’.

In particular, it made extensive use of the very code I was trying to test with it. In the past I admit I had just lived with this, developing all sorts of coping strategies. But in trying to automate my CI/CD pipelines these headaches grew to migraine levels and enough was enough.

Initially I set about creating a much more basic and crude set of tests for my lowest level libraries, thinking that if I could just get test coverage in place at that level, then once all the dependencies that Smoketest needed were themselves covered I could start using Smoketest for those higher-level, more complex testing needs.

Because of the code it needed to test, this basic testing approach had to be dependency-free, relying solely on the Delphi RTL and portable across all Delphi versions from 7 to 10.3. These constraints actually made the job easier as they limited options and kept things very basic indeed.

In fact, I had something that did this job in a little over half a day, even with support for outputting results in xUnit 2 format! I then realised that what I had come up with needed only a little more work to become the basis for what I have decided will be Smoketest 2.0, available now on GitHub.


Weighing in at a smidgen over 40KB of code, Smoketest 2.0 is an absolute featherweight compared to the original at 450KB+ of code. In fact, the original is actually embarrassingly bloated, tipping the scales at a whopping 2-3 MEGAbytes with all the resources for the built-in UI runner taken into account! I know that this isn’t necessarily important or even significant, but it’s an interesting data point nonetheless.

The first thing to get out of the way is that Smoketest 2.0 has absolutely nothing in common with the previous Smoketest framework other than the name. Any existing Smoketest tests you have will need to be re-purposed. I have the same problem.

Being so completely different, I wrestled with whether to keep the name and initially had a new name in mind. Eventually, however, I concluded that this was the genesis of a total Smoketest replacement, and hence a completely new version of Smoketest rather than something actually separate and new.

Everything you really need to know is in the README.md for the project on GitHub, though the Getting Started section is a bit of a tease. It references duget, which has yet to be revealed to the world (coming soon, I promise).

For now, those instructions don’t make much sense, but fortunately, because of the dependency-free nature of Smoketest 2.0, using it is actually super-easy. Barely an inconvenience. [You really should stop using that or Ryan George will be on your back! – Ed]

The REAL ‘Getting Started’ Guide

Clone the repo somewhere convenient and add the src folder to the path of any project (or globally). I use scope resolving declarations to elevate types, consts, etc. throughout the framework where needed, so:

To write a test suite you need only add Deltics.Smoketest to your program uses clause and call the appropriate methods on the TestRun object that this introduces. At a minimum this means making at least one call to TestRun.Test, passing a test class (or array of test classes) to be executed.
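As a minimal sketch (the MinimalTests program and the MyTests unit with its TMyTests class are illustrative names of my own, not part of the framework):

  {$apptype CONSOLE}

  program MinimalTests;

    uses
      Deltics.Smoketest,
      MyTests in 'MyTests.pas';

  begin
    // Run one test class; an array of test classes may also be passed
    TestRun.Test(TMyTests);
  end.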

To implement a test class that you can pass to the TestRun you derive a class from the TTest class. Again, this needs only Deltics.Smoketest present in your uses clause. The tests performed by a test class will be any published, parameter-less methods.
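A minimal test class might then look like this (again, the unit, class and method names are illustrative):

  unit MyTests;

  interface

    uses
      Deltics.Smoketest;

    type
      // Any published, parameter-less method on a TTest
      //  subclass is executed as a test
      TMyTests = class(TTest)
        procedure AdditionWorks;
      end;

  implementation

    procedure TMyTests.AdditionWorks;
    begin
      // Assert(test name, pass/fail condition, optional failure message)
      Assert('1 + 1 = 2', 1 + 1 = 2, 'Addition is broken!');
    end;

  end.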

To write tests in your test methods, use the Assert method inherited from the TTest class. AssertException and AssertBaseException are also provided for super-advanced exception handling tests.

Examples for all three of these are provided in the README.

Output of results to file is handled automatically by Smoketest upon completion of the test run, if the appropriate format option is provided on the command line. Different output formats are supported by implementations of a ResultsWriter class, each registered in the Smoketest framework with a corresponding format identifier. To output in a particular format you simply pass a command line option to the test run whose name is the format identifier and whose value is the filename to hold the results in that format.

At the time of writing only one format is supported: xUnit 2.x. The format identifier for this is xunit2, so to emit test results in this format you would run your test suite with a command line similar to:

mytests -xunit2:test-results.xml

The xUnit2 results writer supports appending results. If the filename specified already exists then the writer will assume that you wish to add the results of this run to those already in the file and will add them as a new <assembly>. Whether this makes sense may vary from test suite to test suite, but if you want a separate file for each test run’s results then you will need to contrive some way of specifying unique filenames to your test runs.
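If you drive your test runs from a script, one way to contrive unique filenames is to bake a timestamp into the name. A POSIX-shell sketch (on Windows a batch file could do similar with %DATE% and %TIME%; the mytests executable name is from the example above):

```shell
# Build a unique results filename for this run,
#  e.g. test-results-20240101-120000.xml
results="test-results-$(date +%Y%m%d-%H%M%S).xml"
echo "$results"

# The actual test run would then be invoked as:
# ./mytests -xunit2:"$results"
```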


Below is an example of a test suite taken from one of the packages I am currently, uh, packaging. It simply packages up the include file that I use across all my projects for detecting and adapting to various Delphi versions. This test suite is therefore a little unusual in that the tests are testing $DEFINE symbols and the only ‘code’ under test is the include file.

First, the test suite:

{$apptype CONSOLE}

{$i deltics.inc}

program Test;

  uses
    SysUtils,
    Deltics.Smoketest,
    TestConsts in 'TestConsts.pas',
    TestCore in 'TestCore.pas',
    TestVersionDefines in 'TestVersionDefines.pas';

begin
  TestRun.Environment := 'Delphi ' + Uppercase(DELPHI_VERSION);
  try
    TestRun.Test(TCoreFunctionality);
    TestRun.Test(TVersionDefines, UpperCase(DELPHI_VERSION));

  except
    on e: Exception do
      WriteLn('ERROR: ' + e.Message);
  end;
end.

The Environment property set in this example is used in the test result output. A Name property can also be set. Both are optional and default values will be used if not provided: Name defaults to the executable filename and Environment to ‘Test’.

These values are fixed once the first test has been performed in a run.

Then the suite runs two test classes. The second is run with a NamePrefix parameter, a prefix applied to every test name produced by that test class. This discriminates between results where the same names are used for different runs of the same tests. In this case the test suite will be executed for multiple Delphi versions (separate compilations and executions), so the Delphi version is a useful prefix.

As well as making it easier for humans to interpret results, this enables the test tracking performed by Azure DevOps to identify when a particular test that was broken has now become fixed.

For the tests themselves, here’s an example from the same test suite:

  unit TestCore;

{$i deltics.inc}

interface

  uses
    Deltics.Smoketest;

  type
    TCoreFunctionality = class(TTest)
      procedure DelphiVersionIsIdentified;
      procedure DelphiVersionIsTheExpectedVersion;
    end;



implementation

  uses
    SysUtils,
    TestConsts;


  procedure TCoreFunctionality.DelphiVersionIsIdentified;
  begin
    Assert('UNKNOWN_COMPILER_VERSION is not defined',
           {$ifdef UNKNOWN_COMPILER_VERSION} FALSE {$else} TRUE {$endif},
           'UNKNOWN_COMPILER_VERSION *is* defined!');
  end;


  procedure TCoreFunctionality.DelphiVersionIsTheExpectedVersion;
  var
    expectedVersion: String;
  begin
    if NOT TestRun.HasCmdLineOption('delphiVersion', expectedVersion) then
    begin
      Console.WriteLn('ERROR: -delphiVersion:<version> command line parameter missing');
      TestRun.Abort;
    end;

    if NOT Assert('Expected version = Detected version',
                  SameText(expectedVersion, DELPHI_VERSION),
                  Format('Expected version ''%s'', Detected version %s', [expectedVersion, DELPHI_VERSION])) then
      TestRun.Abort;
  end;


end.

This is a slightly clunky example since what is being tested here are the settings of compiler $DEFINE symbols, but the principle should be clear.

For a test, you write an Assert() that has at least two parameters. The first is the name of the test to which the assert applies. This is used in the test output to identify the individual test.

The second parameter is a boolean – if this is TRUE then the test is considered to have passed. If FALSE then it failed.

The third parameter is optional and may be specified to provide a message that will be attached to the test result. In most cases results writers will output this message only for tests that fail or end in an error state.

In the example above, the second test method demonstrates a couple of additional features of the framework.

The first is a Delphi version agnostic mechanism for interrogating command line switches and obtaining values from them. These are expected to be of the form:

-<optionName>[:<value>]
or -<optionName>[=<value>]
or -<optionName>
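All three forms are handled by the same HasCmdLineOption call seen in the earlier example. A small sketch, assuming the two-argument form matches all three variants with an empty value for the bare form (the ‘output’ option name here is illustrative):

  var
    filename: String;
  begin
    // TRUE if -output, -output:<value> or -output=<value> was specified;
    //  any value supplied is returned in 'filename'
    if TestRun.HasCmdLineOption('output', filename) then
      WriteLn('Results file: ' + filename);
  end;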

The second additional feature is the ability to abort a test run.

A third feature is that an assert also returns its pass/fail value to the caller, which may be used to direct conditional flow where appropriate. In this case the result of the test is also used to determine whether or not to abort the remaining test run.

An aborted test run still actually visits all remaining tests, but every test that follows is recorded with an outcome of “skipped” rather than pass or fail. Even raising an exception will not prevent subsequent tests from running; it only cuts short the remaining tests in the current test method.


I will be providing more documentation at some point as the project grows. In the meantime, for a more detailed example you can peruse the self-test project in the repo. Be aware that this also takes advantage of some sneaky stuff to access methods designed and intended for use specifically (and only) in self-test scenarios.