[Estimated Reading Time: 5 minutes]

Some minor improvements to the Smoketest framework went live earlier this week. Some cosmetic, some functional.

More than Skin Deep

The most subtle change in this release takes the form of additional documentation added to the comments in the implementation itself.

I hope people find that useful. 🙂

Just a Prettier Face

On the cosmetic front, console output from the test run is now improved [subjective opinion only, ymmv – Ed].

The first thing taken care of was the display of test outcomes with overridden expectations. This is only relevant to the special case of the self-tests provided with the framework: when a self-test is expected to fail and does indeed fail, this is now reported as a PASS in the console output. Correct outcomes were already handled by the results writers, so this was just an inconsistency that needed tidying up.

In more general changes, output from tests in different test classes is now separated visually by a single blank line and each test class is introduced. Test outcomes recorded by the methods in that class are then output with a leading +, resulting in a pleasing indentation of those test results relative to the containing class.

Any warnings output by the test run are not indented or prefixed, helping them stand out from the test outcomes.

One of the warnings now output covers the case of a test method that performs no tests; that is, no test results are recorded during the execution of that method.

WARNING: Test method <className>.<methodName> did not perform any tests

A similar warning is emitted if a test class fails to implement any test methods at all:

WARNING: Test class <className> implements no test methods

In both cases the warning has no impact on any test results written by any writer and in fact is not even detectable by a results writer. These warnings are emitted solely by the test runner.

For the first of these warnings a mechanism is provided to suppress it if required. Which brings us to the functional changes.

NOT Just a Prettier Face

Any test method that deliberately and knowingly performs no tests can call TestRun.PerformsNoTests to suppress the no tests warning.
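To illustrate, a scaffolding method might look something like the following sketch. Only the TestRun.PerformsNoTests call is taken from the framework itself; the class name, base class, and member names here are all illustrative assumptions:

```pascal
type
  TScaffoldedTests = class(TTest)   // 'TTest' base class name is an assumption
  published
    procedure CaptureInitialState;  // scaffolding: performs no tests
    procedure StateIsUnchanged;     // an ordinary test method
  end;

procedure TScaffoldedTests.CaptureInitialState;
begin
  // Capture data that later test methods will compare against
  fInitialCount := SomeSubject.Count;   // illustrative members only

  // Declare that this method intentionally records no test results,
  // suppressing the 'did not perform any tests' warning
  TestRun.PerformsNoTests;
end;
```

Without that final call, the test runner would flag CaptureInitialState with the warning shown earlier.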

Why would you do this?

I provided this because I ran into a scenario, when expanding the self-tests, where I wanted to scaffold some test data before subsequent test methods were executed. Rather than overloading a test method with this scaffolding in addition to performing an initial test, I decided that such things should be handled by dedicated scaffolding methods.

Such methods still need to be published as test methods for the test runner to be able to discover and execute them.

The specific example, if you are interested, is the CaptureStats method of the TCoreFunctionalityTests in the tests\selftest project.

Currently this sort of thing relies on a predictable order of execution based on the declared order of methods in the test class declaration. This makes me nervous and may be revisited to find a more robust solution, but it does the job that is needed for now.

A bigger, but somewhat related, change is that Setup and Teardown methods are now supported.


Set Me Up and Tear Me Down…

Setup and Teardown rely on test methods in a class having special names. Methods with these names are removed from the list of test methods executed as part of the test class and are instead invoked at specific points during test execution for that class:

Method Name       When The Method Is Called
---------------   -------------------------------------------------
SetupTest         ONCE per test class run, before any test methods
                   in the class

SetupMethod       ONCE PER TEST METHOD, before each and every
                   test method in the class

TeardownMethod    ONCE PER TEST METHOD, after each and
                   every test method in the class

TeardownTest      ONCE per test class run, after all test methods
                   in a class

These methods do not have to be ‘paired up’. You can have Setup methods without corresponding Teardown methods, and/or vice-versa. When any of these methods is present a line is output to the console to indicate its execution.

There is nothing currently to stop you from writing tests in these methods; it’s just not a good idea to do so. It would be possible to abend a test run if test results were detected from these methods, and this may be added in the future, or at least a WARNING emitted when this occurs.

It also doesn’t matter where these methods are declared, relative to any other test methods. They do still need to be published methods but can appear anywhere in the list of such methods. As long as they are named with the correct names as described above, they will be identified and invoked by the test runner at the appropriate points, automatically.

NOTE: If an exception escapes from any of these methods it will be caught and logged to the console but the test runner will then carry on with the test run as normal.
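In outline, a test class using all four of these methods might be declared as follows. The special method names are as described in the table above; the class name, base class, and ordinary test method are illustrative assumptions only:

```pascal
type
  TWidgetTests = class(TTest)  // 'TTest' base class name is an assumption
  published
    // Special names, removed from the test list and invoked
    // automatically by the test runner at the appropriate points:
    procedure SetupTest;       // ONCE, before any test method in the class
    procedure SetupMethod;     // before each and every test method
    procedure TeardownMethod;  // after each and every test method
    procedure TeardownTest;    // ONCE, after all test methods in the class

    // Ordinary test methods, declared in any order relative to the above:
    procedure WidgetIsInitialised;
  end;
```

Note that the Setup and Teardown methods must still be published, like any other test method, for the runner to discover them.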

For an example of these methods in action you can again consult the self tests, specifically in this case the TCommandLineHandlingTests in Test.CommandLineHandling.

For some idea of the impact of these changes, here’s a capture of a test run of those self tests:

Smoketest test run: Name=selftest.exe, Environment=Delphi XE4
Writers supported: xunit2

Executing tests in TCoreFunctionalityTests:
+ This test passed: PASSED
+ This test failed: PASSED
+ ThisTestWillThrowAnException test 1: PASSED
+ ThisTestWillThrowAnException test 1 threw expected exception [Exception: This exception was deliberately raised]
+ 3 tests recorded to this point: PASSED
+ 1 test passed at this point: PASSED
+ 1 test failed at this point: PASSED
+ 1 test error at this point: PASSED 

Executing tests in TExceptionHandlingTests:
+ EDivByZero caught by AssertException(EDivByZero): PASSED
+ EDivByZero caught by AssertBaseException(Exception): PASSED
+ EDivByZero not caught by Assert(Exception) causes test to fail: PASSED
+ Unexpected Exception raised causes test to fail: PASSED 

Executing tests in TCommandLineHandlingTests:
Performing setup for test
Performing setup for method:SwitchPresentWithNoValueIsHandledCorrectly
+ Present -switch is identified: PASSED
+ Present -switch has no value: PASSED
Performing setup for method:SwitchPresentWithValueIsHandledCorrectly
+ -mode switch is identified: PASSED
+ -mode value is 'level=42': PASSED
Performing setup for method:SwitchNotPresentIsHandledCorrectly
+ Missing -lever is not identified: PASSED
+ Missing -lever has no value: PASSED
Performing setup for method:QuotedValueIsUnquotedCorrectly
+ Quoted value is unquoted correctly: PASSED
+ Performing teardown for test 

Total Tests = 18, Passed = 18, Failed = 0, Skipped = 0, Errors = 0

None of these changes has any impact on the results writers, which continue to function as before.

One small change which has had a minor impact on results writers is that the Runtime property on the test run, which records the elapsed time of the overall test run, is now reported in milliseconds rather than seconds.

The existing xUnit2 results writer has already been updated to reflect this and so provides more ‘accurate’ timings in the results that it produces.


More To Come

As part of the implementation of duget I am going through my existing libraries of code that I rely on extensively, re-organising them for packaging with duget. Part of this involves migrating the tests to Smoketest 2.0 (or adding them where not already present).

This is driving further improvements to Smoketest 2.0, with a lot of current work in the Api for expressing Asserts, though it’s all still quite fluid.

This will be a significant addition to the framework and I anticipate a bump to 2.1.0 once things have settled down and I have a clear direction for the future evolution of this aspect of the framework.