Test-Driven Development (TDD) has a genuine contribution to make to a project’s management strategy. It is a means by which one can know that a given piece of software (your end product) performs according to a specific specification (the suite of unit tests), and it can help focus a developer’s mental energies on meeting that specific goal. However, it can also bog your team down in detail if it is not used wisely.
To illustrate this last point: a project I was on recently used TDD throughout, religiously. We had several disparate groups who met daily for our “scrum” update, and from these meetings it became clear that for any code change, no matter how small (even a one-line tweak), one could expect many days of work to get all of the unit tests to pass.
The problem was that we had no real direction to our TDD. Each of us was told to “write tests” for anything we added or changed. Thus, a developer would wind up scouring his code for any detail that could be “verified”: assert that this flag was true, that flag was false, how many results came back, which strings were present, and so on. Upon adding T-SQL code to create a database view, for example, unit tests were added to verify the number of lines of T-SQL and that it contained such-and-such a keyword. Add another bit of SQL, and all of those previous unit tests fail: every detail has to be accounted for and updated.
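To make the anti-pattern concrete, here is a minimal sketch of what those tests amounted to, translated into Python (the view, the table, and all names here are hypothetical stand-ins for illustration, not our actual code). Note that both assertions inspect the *text* of the SQL, not what it does:

```python
import unittest

# Hypothetical SQL under test -- a stand-in for the real view script.
CREATE_VIEW_SQL = """
CREATE VIEW active_customers AS
SELECT id, name FROM customers WHERE active = 1
""".strip()


class BrittleSqlTests(unittest.TestCase):
    """Tests that pin down implementation details rather than behavior."""

    def test_line_count(self):
        # Breaks the moment anyone reformats or extends the script,
        # even if the view still returns exactly the right rows.
        self.assertEqual(len(CREATE_VIEW_SQL.splitlines()), 2)

    def test_contains_keyword(self):
        # Passes for *any* statement mentioning the keyword -- it proves
        # nothing about whether the view actually works.
        self.assertIn("CREATE VIEW", CREATE_VIEW_SQL)
```

Both tests pass today, and both will fail after any harmless rewrite of the script — which is exactly the treadmill we were on.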
A huge amount of time was being wasted. Still is.
It is vital to ask oneself: “What are these tests supposed to achieve?” Your work is to implement functionality within your end product. Do you really care about every detail of how that was done? Do you really need to test for every artifact of the implementation? What if the developer finds a superior way to implement it that achieves the same functionality? Do you really want him to have to rewrite a huge battery of unit tests?
And if your developer is going through the tests, method by method, editing them to get them to pass, are they really serving their true purpose, which is to double-check that the functionality actually works?
If the one overriding goal of your software development work is to produce a product that works (and I hope that it is), then you really cannot afford to get bogged down in detail. You must move forward, solidly, or perish, no matter how large your team. Even IBM and Microsoft have been ground down by excessive code and detail-work: progress grinds to a standstill, and younger, more agile competitors come to eat your lunch. Software has to evolve, to improve, to become ever more solid, and to do this you have to make real, steady forward progress, not just write a zillion unit tests for the sake of saying you “use TDD”.
Suggestion: forge your goals for TDD, and discuss them with your team leaders. Know how much time is being consumed by writing and running tests (which means tracking that time individually). Then talk through, and understand together, how best to use TDD to meet your goals. Use it where it makes sense; let it go where it does not!
The purpose of software is to accomplish a specified functionality. Thus your tests should serve the purpose of verifying, to the maximum extent possible, that that functionality is indeed accomplished. But they should do this in the simplest and most concise way possible, avoiding duplication. Test only for the correct end result, not the steps taken to get there. Factor out as much of the test-infrastructure code as possible and share it amongst the team. If your API changes, then yes, you can expect a lot of rewriting of tests. But if a simple change to optimize the implementation necessitates a massive number of test changes, that is a red flag that you may be coming down with test-itis!
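A minimal sketch of what testing only the end result can look like, again using Python with an in-memory SQLite database as a hypothetical stand-in (the schema, view, and data are invented for illustration). The test asserts solely on the rows the view returns, so the SQL behind it can be rewritten or optimized freely without touching the test:

```python
import sqlite3
import unittest

# Hypothetical schema, data, and view -- illustration only.
SETUP = """
CREATE TABLE customers (id INTEGER, name TEXT, active INTEGER);
INSERT INTO customers VALUES (1, 'Ada', 1), (2, 'Bob', 0);
CREATE VIEW active_customers AS
    SELECT id, name FROM customers WHERE active = 1;
"""


class ActiveCustomersViewTest(unittest.TestCase):
    """Verifies the observable result of the view, not its SQL text."""

    def test_returns_only_active_customers(self):
        db = sqlite3.connect(":memory:")
        db.executescript(SETUP)
        rows = db.execute("SELECT id, name FROM active_customers").fetchall()
        # Only the end result matters: the one active customer comes back.
        self.assertEqual(rows, [(1, "Ada")])
```

Reformat the view, add comments to it, or rewrite it as a different query entirely — as long as it still returns the right rows, this test keeps passing.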
On a different project, one that I consider to have been quite successful, we were writing low-level code that had to work on myriad platforms: versions of Windows or Unix, 32- versus 64-bit, environments with various components already installed (or not), and so on. For this we used virtual machines (VMs), VMware in this case. One VM would represent a specific platform: one for Windows XP 32-bit as a bare install, another for Windows 8 beta 64-bit with .NET 4.5 already installed, and so forth. One lovely thing about these VMs is that you can deploy them and do a lot of work using Windows PowerShell, which in turn is easily callable from C# or a product like RobotMind, and those in turn can be invoked via a right-click on a project within Visual Studio. Thus, instead of spending days setting up, running, and checking the results of this plethora of tests, we could define the whole set up front, right-click on our C# project when done coding, and select “Test this!”. It would send the compiled code out to the proper test VM (or set of VMs) on the dedicated test boxes and deliver back the results (“Go” or “No-go”). To keep things singing along, I dedicated a box just to running these VMs, one with a PCIe SSD and oodles of RAM. I could open a Remote Desktop Connection (RDC) to it and see at a glance what was running and what the results were. No manual file-copying, no setting configuration values by hand.
Along with that, I strongly suggest that you look into continuous integration. And to integrate it into your build process, I suggest you carefully consider it in the context of your chosen version-control tool. I have found that you don’t necessarily want to automatically build everything that gets checked in, the moment it is checked in. If you do, then everyone is afraid to tinker, or to check in partial results at the end of the day.
Instead, if your version-control tool gives you the ability to attach a “label” or “tag” to a given set of check-ins, then you can use that to signal to your continuous-build tool what to check out, build, and run tests on. This way, you can merrily check in your work at the end of the day, even if it does not pass tests. If your workstation goes down overnight, or something else happens, your work is safely stored within the code repository. And it does not “break the build”, because you did not label it as “Known Good” (or whatever nomenclature you decide to use). Your build server, when it runs nightly or continuously, can simply check out the branch currently labelled “Known Good”, build it, and run the suite of tests. CruiseControl is probably the best-known product in this space; I have used it in the past, and it worked well for us. FinalBuilder is another very powerful product that merits a careful look. Most recently I grabbed and deployed TeamCity (from JetBrains) and was absolutely delighted at how fast it was to fire up.
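As a sketch of that tag-driven checkout step, here is roughly what a build server might do before building, written in Python and shelling out to git (git, the tag name “Known-Good”, and the helper names are all assumptions for illustration; your own version-control tool may attach labels differently):

```python
import subprocess


def run_git(repo, *args):
    """Run a git command in `repo` and return its stdout, stripped."""
    result = subprocess.run(
        ["git", "-C", repo, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def checkout_known_good(repo, tag="Known-Good"):
    """Resolve the commit the tag points at and check it out (detached HEAD),
    as a nightly build server might do before building and running tests.

    Later, unlabelled check-ins are simply ignored, so work-in-progress
    commits never "break the build"."""
    sha = run_git(repo, "rev-list", "-n", "1", tag)  # works for annotated tags too
    run_git(repo, "checkout", "--quiet", sha)
    return sha
```

The build script then compiles and tests whatever `checkout_known_good` leaves in the working tree; developers move the tag forward only when they consider a check-in good.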
In summary, pay heed to your process. Watch out for the trap that ensnares many teams: getting bogged down trying to meet the needs of the tools, the processes (like TDD or bug-tracking), and the paperwork. When your developers start to sense that your process is weighing down their productivity (as measured by the actual, real-world functionality that is evolving, the kind your customers will actually see), it is time to seriously re-examine your whole process.