Writing tests in Smoketest is intended to enable a test developer to write tests that describe themselves, without requiring the developer to add this “narrative” by hand. To see this in action, I thought I would compare some simple DUnit tests with their equivalents in the Smoketest framework.
For this exercise we shall consider the test for a function that splits a string based on some delimiting character. The prototype for the function is:
class function WIDE.Split(const aString: UnicodeString; const aChar: WideChar; var aParts: TWideStringArray): Boolean;
In any successful call to this function there are a number of things that need to be checked to ensure correct behaviour.
First, the function should return TRUE only if one or more instances of aChar are found in the string. Second, where TRUE has been returned, the number of entries in the aParts array needs to be correct. And finally, each item in that aParts array needs to be what we would expect.
A DUnit test for this might look something like this:
  var
    ReturnValue: Boolean;
    aString: UnicodeString;
    aChar: WideChar;
    aParts: TWideStringArray;
  begin
    aString := 'left*mid-left*middle*mid-right*right';
    aChar   := '*';

    ReturnValue := WIDE.Split(aString, aChar, aParts);

    CheckEquals(ReturnValue, TRUE);
    CheckEquals(Length(aParts), 5);
    CheckEquals(aParts[0], 'left');
    CheckEquals(aParts[1], 'mid-left');
    CheckEquals(aParts[2], 'middle');
    CheckEquals(aParts[3], 'mid-right');
    CheckEquals(aParts[4], 'right');
  end;
First of all, it’s worth mentioning here that the IDE support for DUnit was actually counter-productive in this case. The wizard fails to recognise a class function and creates swathes of boiler-plate test code for setting up and tearing down tests, instantiating the class in order to (incorrectly) call the function via an instance.
But apart from that, once all the extraneous and erroneous code has been cleaned out, we can get on with writing the test itself. And as written above, this all looks fine, right?
Worse, as long as all the tests pass there is no reason to suspect that this test is actually completely and utterly wrong. The CheckEquals() method places particular significance on the order of the two parameters identifying the values that are supposed to be equal, and without inspecting the parameter list for CheckEquals() this significance is not immediately obvious.
When the two values are equal (and the test passes) this doesn’t matter, but if you get the order wrong then in the event of a failure the test report will itself be misleading, reporting the actual value as the expected value and vice versa. The correct DUnit test should be written:
  var
    ReturnValue: Boolean;
    aString: UnicodeString;
    aChar: WideChar;
    aParts: TWideStringArray;
  begin
    aString := 'left*mid-left*middle*mid-right*right';
    aChar   := '*';

    ReturnValue := WIDE.Split(aString, aChar, aParts);

    CheckEquals(TRUE, ReturnValue);
    CheckEquals(5, Length(aParts));
    CheckEquals('left', aParts[0]);
    CheckEquals('mid-left', aParts[1]);
    CheckEquals('middle', aParts[2]);
    CheckEquals('mid-right', aParts[3]);
    CheckEquals('right', aParts[4]);
  end;
And still, the only information coming from DUnit about these tests will be that an actual value either did or did not equal its expected value. If the test developer wishes to add some descriptive information about a test they must supply it as a msg parameter to each check:
  CheckEquals(TRUE, ReturnValue, 'Split()');
  CheckEquals(5, Length(aParts), 'No. of parts');
  CheckEquals('left', aParts[0], 'aParts');
  CheckEquals('mid-left', aParts[1], 'aParts');
  etc
This always felt backwards to me with DUnit and was one of the primary reasons for creating Smoketest and taking an entirely different approach which more closely resembles the language we would use when describing our expected test outcomes.
Let’s look at the equivalent tests in Smoketest:
  Test('Split()').Expect(ReturnValue).IsTRUE;
  Test.Expect(Length(aParts)).Equals(5);
  Test('aParts').Expect(aParts[0]).Equals('left');
  Test('aParts').Expect(aParts[1]).Equals('mid-left');
  Test('aParts').Expect(aParts[2]).Equals('middle');
  Test('aParts').Expect(aParts[3]).Equals('mid-right');
  Test('aParts').Expect(aParts[4]).Equals('right');
This is perhaps a little more verbose but to my mind reads far more naturally as an expression of our test expectations. We say what it is we are testing (if it needs spelling out), identify where the value we are testing comes from, and then say how it should meet our expectations.
NOTE: The use of indexing syntax on the Test() expression is optional but facilitates labelling tests where tests are being applied iteratively from some collection of test vectors. For example if we had declared our expected resulting parts in a VECTORS array:
  const
    VECTORS: array[0..4] of String = ('left', 'mid-left', 'middle', 'mid-right', 'right');
  ..
  for i := 0 to High(VECTORS) do
    Test('aParts')[i].Expect(aParts[i]).Equals(VECTORS[i]);
With the context provided by the interfaces returned at each step along the way, the test developer is guided toward writing tests that are appropriate to the values being tested.
But there is another advantage to the way that tests work in Smoketest as compared with DUnit.
Fail Early. But Not Too Early
In DUnit, each CheckEquals() must pass if the checks that follow it are to be performed at all. Sometimes this is desirable: if you are testing that you have an object reference before going on to check other properties of that object, there is little point in proceeding, since those subsequent tests are simply going to fail.
This might be described as “Failing Early“.
But in other cases, the manner in which subsequent tests fail could provide useful diagnostic information to explain the initial failure.
Consider a hypothetical situation where a developer has identified a potential optimisation in the Split() function. They make their change and run the tests, but as a result of their change the Split() function creates the wrong number of items in the aParts array.
As a result, in DUnit this part of the test will fail and cause the test method to halt.
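As a hedged illustration (not actual DUnit output), the first failing check aborts the whole test method, so nothing after it runs:

```delphi
  CheckEquals(TRUE, ReturnValue, 'Split()');
  CheckEquals(5, Length(aParts), 'No. of parts');  // fails here and halts the method
  // none of the checks on the individual aParts items below are ever executed
  CheckEquals('left', aParts[0], 'aParts');
```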
If – say – the Split() function is creating only 4 items in the aParts array, then what those 4 parts contain could provide useful information that will help the developer realise their mistake. With DUnit they won’t get this information.
With Smoketest – by default – the test of the number of items in aParts will fail, but the remaining tests will still be applied. These will either produce garbage results, crash, or halt with an ERangeCheck exception (if the tests are compiled with range checking enabled) only on reaching the test of the fifth, non-existent item in the aParts array. As a result we might see the following in our test results:
  No. of parts - FAILED   Expected: 5            Actual: 4
  aParts       - FAILED   Expected: 'left'       Actual: 'left*m'
  aParts       - FAILED   Expected: 'mid-left'   Actual: 'id-left*mi'
  aParts       - FAILED   Expected: 'middle'     Actual: 'ddle*mid'
  aParts       - FAILED   Expected: 'mid-right'  Actual: 'dle*mid-'
  ERANGECHECK EXCEPTION
This is not the actual output, just a representation of it. And the data is entirely hypothetical, of course: it is not intended to indicate any particular type of error that might exist in a function such as Split(), only to demonstrate that “fail early” is not always the most helpful strategy, especially when testing.
We should always of course “Fix the first problem“, but sometimes the consequential problems help us identify what that first problem is.
With Smoketest you can get DUnit-like fail-early behaviour if you want it. And more.
To make the test method halt if the number of items in the aParts array is not what we expect, we simply add a qualification to the relevant test, indicating that this result is a required outcome:
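A sketch of how that qualification might look, chaining the IsRequired qualifier (described below) onto the items-count test in the fluent style shown earlier:

```delphi
  Test('No. of parts').Expect(Length(aParts)).Equals(5).IsRequired;
```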
As explained in an earlier post, an IsRequired result will halt the current test method if the test fails. IsCritical can be used to halt an entire test case, and IsShowStopper will halt the entire test run.
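Assuming the same chaining syntax, the other two severities would be applied in the same way (a sketch, not taken from the Smoketest sources):

```delphi
  // failure abandons the whole test case, not just the current method:
  Test('No. of parts').Expect(Length(aParts)).Equals(5).IsCritical;

  // failure abandons the entire test run:
  Test('No. of parts').Expect(Length(aParts)).Equals(5).IsShowStopper;
```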
But so far all we have really seen is how Smoketest does what DUnit also does, just differently. Now for something completely different.
With DUnit the number and type of tests you can perform is fairly limited. The CheckEquals() method is called upon to carry a great deal of the burden of testing, often hiding the detail of a test in the expression used to calculate a result passed to that CheckEquals() method as a boolean.
Imagine a scenario where a test was interested only in whether or not some value exceeded some threshold amount but was not concerned with the precise value. In other words, that some value was greater than some other value.
In DUnit you would write this test as follows:
CheckEquals(TRUE, value > limit);
And in the event of a test failure you get the not very helpful report that TRUE was expected, not FALSE. So you are forced to add some narrative to describe the test, to endow it with a meaning that is not apparent from the test itself:
  CheckEquals(TRUE, value > limit, 'Value is greater than Threshold');
In Smoketest, because test expectations are specific and appropriate to the type of value being tested, there is far greater diversity and richness of expression in the tests available, enabling tests to be written in a way that describe themselves. This limit test for example would be written in Smoketest using an Integer expectation:
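A sketch of how that limit test might read (the GreaterThan expectation name is an assumption, extrapolated from the fluent style shown earlier rather than taken from the Smoketest sources):

```delphi
  Test('value').Expect(value).GreaterThan(limit);
```

On failure, a test written this way can report both the actual value and the limit it was compared against, with no separate description needed.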
Not only does Smoketest guide us toward writing a more appropriate test, but since the test describes itself the result is now actually more compact than DUnit, where the test has to be described separately from, and in addition to, the test itself (and then only if the test developer could be bothered to add that description in the first place).
As I mentioned, in this particular case the DUnit IDE wizard created a whole lot of boilerplate code for setting up and tearing down this test case that was wholly inappropriate on this occasion. But there are times when you need such housekeeping and this is what I shall cover in my next Smoketest post.