A Thought on your TDD (Test-Driven Development) Strategy

TDD can feel wet and scratchy at first, but it will serve you well if you apply it with understanding.

Test-Driven Development (TDD) can be a valuable part of your project strategy. It is a means of knowing that a given piece of software (your end product) performs according to a specific specification (your suite of unit tests should form part of that specification), and it can help focus a developer’s mental energy on meeting that goal. However, it can also bog your team down in detail and make-work if it is not used wisely.

To illustrate this last point: a project I was on recently used TDD religiously, throughout. We had several disparate groups who met daily for our “scrum” update, and from these meetings it became clear that any code change, no matter how small, could be expected to take many days before all of the unit tests passed again.

The problem was that our TDD had no real direction. Each of us was simply told to “write tests” for anything we added or changed. Thus a developer would comb through his code for any detail that could be “verified”: assert that this flag was true, how many results were returned, which strings were present, and so on. When T-SQL code was added to create a database view, for example, unit tests were added to verify the number of lines of T-SQL, that it contained a given keyword, and so on. When another bit of SQL was added later, all of those earlier unit tests failed (even though the view itself was still valid): every such detail had to be accounted for and updated.
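To make the distinction concrete, here is a minimal sketch in Python (with an in-memory SQLite database standing in for the real T-SQL environment; all names and the view itself are invented for illustration) of a brittle, detail-level test alongside a behavioral one:

```python
import sqlite3
import unittest

VIEW_SQL = """
CREATE VIEW active_customers AS
SELECT id, name FROM customers WHERE active = 1;
"""

class BrittleViewTests(unittest.TestCase):
    # Fragile: any cosmetic edit to the SQL text breaks these,
    # even when the view still works perfectly.
    def test_line_count(self):
        self.assertEqual(len(VIEW_SQL.strip().splitlines()), 2)

    def test_contains_keyword(self):
        self.assertIn("WHERE", VIEW_SQL)

class BehavioralViewTests(unittest.TestCase):
    # Robust: verifies what the view actually does, so the SQL can be
    # reformatted or rewritten freely without touching the tests.
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE customers (id INTEGER, name TEXT, active INTEGER)")
        self.db.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                            [(1, "Ann", 1), (2, "Bob", 0)])
        self.db.execute(VIEW_SQL)

    def test_view_returns_only_active_customers(self):
        rows = self.db.execute("SELECT name FROM active_customers").fetchall()
        self.assertEqual(rows, [("Ann",)])
```

The brittle tests above are exactly the kind that failed en masse on that project; the behavioral one survives any change that preserves the functionality.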

A huge amount of time was being wasted.

It’s crucial to ask oneself: “What are these tests supposed to achieve?”

Your work as a developer is to implement functionality within your product. Do you really care about every detail of how it was implemented? Do you really need to test for every artifact of the implementation? What if the developer finds a superior way to implement it that achieves the same functionality? Do you really want him to have to rewrite a huge battery of unit tests?

And, if your developer is going through the tests, method-by-method, editing them to get them to pass, are they really serving their true purpose — which is to double-check that the functionality is actually working?

I submit this: a golden aspect of TDD is that you can edit your code and know that your unit tests will confirm, at the press of a button, that there are no regressions. That is, if they are properly designed. By not being continually sidetracked into re-editing your tests, you can stay focused on your real code.

If the one overriding goal of your software development work is to produce a product that works (and I hope that it is), then you really cannot afford to get bogged down in extraneous detail. You must move forward, solidly, or perish, no matter how large your team. Even IBM and Microsoft have been ground down by excessive code complexity and detail-work, to the point where progress slows to a standstill. To keep others from coming to eat your lunch, your software has to evolve, to improve, to become ever more solid; to do this you have to make real, steady forward progress, not just write a bazillion unit tests for the sake of saying you “use TDD”.

Suggestion: forge your goals for TDD, and discuss them with your team leaders. Know how much time is being consumed by writing and running tests (which means tracking this time individually). And talk through and understand, together, how best to use TDD to meet your goals. Use it where it makes sense; let it go where it does not!

Your specification (your System Requirements Document) specifies what your software is going to do. Make your unit tests mirror a part of that specification. Link to them directly from your requirements document, and state at the top of each unit-test source file which section it tests and how.
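As a sketch of what that linkage might look like (in Python; the section number, requirement wording, and all names here are hypothetical, purely for illustration):

```python
"""Unit tests for order export.

Covers System Requirements Document section 4.2, "Order Export"
(a hypothetical section, for illustration): the export must contain
one entry per completed order, sorted by date.
"""
import unittest

def export_completed(orders):
    # Illustrative implementation of the requirement described above.
    completed = [o for o in orders if o["status"] == "completed"]
    return sorted(completed, key=lambda o: o["date"])

class OrderExportTests(unittest.TestCase):
    """SRD 4.2: one entry per completed order, sorted by date."""

    def test_only_completed_orders_are_exported(self):
        orders = [{"status": "completed", "date": "2012-05-02"},
                  {"status": "open", "date": "2012-05-01"}]
        self.assertEqual(len(export_completed(orders)), 1)

    def test_exported_orders_are_sorted_by_date(self):
        orders = [{"status": "completed", "date": "2012-05-02"},
                  {"status": "completed", "date": "2012-05-01"}]
        dates = [o["date"] for o in export_completed(orders)]
        self.assertEqual(dates, ["2012-05-01", "2012-05-02"])
```

With the spec section named in both the module docstring and the test-class docstring, anyone reading a failure report can trace it straight back to the requirement it protects.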

The purpose of software is to accomplish a specified functionality. Thus your tests should serve the purpose of verifying, to the maximum extent possible, that that functionality is indeed accomplished. But they should do this in the simplest and most concise way, and avoid duplication. Only test for the correct end result, not the steps to get there (with important exceptions, for a later article). Factor out as much as possible of the infrastructure code (e.g. setup code) and share it amongst the team. If your API changes, then yes, you can expect a lot of rewriting of tests. But if a simple change to optimize the implementation necessitates a massive number of test changes, that is a red flag that you may be getting bogged down in unnecessary detail!
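Both of those points can be sketched in a few lines (in Python; all names here are hypothetical): the setup is factored into a shared base class the whole team can reuse, and the test asserts only the required end result:

```python
import unittest

# Shared infrastructure, factored out once so that every test module
# can reuse it instead of duplicating its own setup.
# (All names are hypothetical, for illustration.)
class SampleDataTestCase(unittest.TestCase):
    def setUp(self):
        self.raw_items = [3, 1, 2, 3]

def dedupe_and_sort(items):
    # Implementation detail: this could just as well be a hand-rolled
    # loop; the test below would not care.
    return sorted(set(items))

class DedupeTests(SampleDataTestCase):
    def test_result_is_sorted_and_unique(self):
        # Asserts only the required end result, not the steps taken
        # to get there, so the implementation is free to change.
        self.assertEqual(dedupe_and_sort(self.raw_items), [1, 2, 3])
```

If `dedupe_and_sort` is later rewritten for speed, this test keeps passing; a test that asserted, say, that a `set` was built internally would have to be rewritten alongside it.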

On a different project we were writing low-level code that had to work on myriad platforms: versions of Windows or Unix, 32- versus 64-bit, environments with various components already installed or not, and so on. For this we used virtual machines (VMs), VMware in this case. One VM would represent a specific platform: for example, one for Windows XP 32-bit as a bare install, another for Windows 8 beta 64-bit with .NET 4.5 already installed, and so on. One lovely thing about these VMs is that you can deploy them and drive much of the work using Windows PowerShell, which in turn is easily callable from your own code. Ideally you set this functionality up so that it can be invoked via a right-click on a project within Visual Studio.

Thus, instead of spending days setting up for, running, and checking the results of each of this plethora of tests, we could define the whole suite up front, right-click on our C# project when done coding, and select “Test this!”, and it would send the compiled code out to the proper test VM (or set of VMs) on the dedicated test boxes and deliver back the results (“Go”, or “No-go”). To keep things singing along, I dedicated a box just to running these VMs, one with a PCIe SSD and oodles of RAM. I could open a Remote Desktop Connection (RDC) to it and see at a glance what was running and what the results were. No manual file-copying, no setting configuration values by hand.
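The wiring for that kind of “Test this!” flow can be quite thin. The sketch below (Python; the script name, parameters, and host names are all assumptions, not the project’s actual setup) just builds the PowerShell invocation that would ship a build to a target VM and reduces the outcome to Go/No-go:

```python
import subprocess

def build_test_command(build_output, vm_name, host="testbox01"):
    # Constructs the PowerShell invocation that would copy the build
    # to the target VM and launch the test suite there.
    # (Script and parameter names are hypothetical.)
    return [
        "powershell.exe", "-NoProfile",
        "-File", "deploy-and-test.ps1",
        "-BuildPath", build_output,
        "-VMName", vm_name,
        "-HostName", host,
    ]

def run_tests_on_vm(build_output, vm_name):
    # In the setup described above, the whole suite boiled down to a
    # simple "Go" / "No-go" answer.
    result = subprocess.run(build_test_command(build_output, vm_name))
    return "Go" if result.returncode == 0 else "No-go"
```

The point is not the particular commands but the shape: one entry point, parameterized by build and platform, that any tool (or a Visual Studio right-click) can call.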

Along with that, I strongly suggest that you look into continuous integration and integrate it into your build process.

Note: do not use your version control system as a backup for half-finished source code; builds (and tests) run from it! The exception is if you can thoughtfully parcel your code into appropriate branches and merge back into your main line only when it is ready to submit to QA.

In summary, pay heed to your process. Watch out for the trap that ensnares many teams: getting bogged down trying to meet the needs of the tools, of the processes (like TDD or bug tracking), and of the paperwork. When your developers start to sense that your process is weighing down their productivity (as measured by the actual, real-world functionality that is evolving, the kind your customers will actually see), it is time to seriously re-examine your whole process.


James W. Hurst

