Test Driven Development
When you code, alternate these activities:
- add a test, get it to fail, and write code to pass the test (DoSimpleThings, CodeUnitTestFirst)
- remove duplication (OnceAndOnlyOnce, DontRepeatYourself, ThreeStrikesAndYouAutomate)
- Think about what you want to do.
- Think about how to test it.
- Write a small test. Think about the desired API.
- Write just enough code to fail the test (a stub that compiles but does not yet work).
- Run and watch the test fail. (The test-runner, if you're using something like JUnit, shows the "Red Bar".) Now you know that your test is going to be executed.
- Write just enough code to pass the test (and pass all your previous tests).
- Run and watch all of the tests pass. (The test-runner, if you're using JUnit, etc., shows the "Green Bar".) If it doesn't pass, you did something wrong; fix it now, since it's got to be something you just wrote.
- If you have any duplicate logic or inexpressive code, refactor to remove duplication and increase expressiveness -- this includes reducing coupling and increasing cohesion.
- Run the tests again. You should still have the Green Bar. If you get the Red Bar, then you made a mistake in your refactoring. Fix it now and re-run.
- Repeat the steps above until you can't find any more tests that drive writing new code. (A minimal sketch of one such cycle in JUnit follows this list.)
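To make the cycle concrete, here is a minimal sketch of one red/green pass, assuming JUnit 4; the Stack class and its single method are invented for this illustration, not taken from any text above:

  import org.junit.Test;
  import static org.junit.Assert.*;

  public class StackTest {

      // Step 1: a small test that expresses the desired API.
      @Test
      public void newStackIsEmpty() {
          assertTrue(new Stack().isEmpty());
      }

      // Step 2 would be a stub that compiles but fails ("Red Bar"),
      // e.g. isEmpty() returning false. Step 3 is just enough code
      // to pass ("Green Bar"):
      static class Stack {
          boolean isEmpty() { return true; }
      }
  }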
Systems created using CodeUnitTestFirst, RelentlessTesting, and AcceptanceTests might just be better designed than traditional systems. Such systems would also certainly support CodeUnitTestFirst better when we build on top of them. But because so danged many existing systems were not built that way, we are a little blind to what we could have available on the shelf.
A list of ways that test-first programming can affect design:
- Re-use is good. Test-first code is born with two clients, not one. This makes adding a third client twice as easy.
- Refactoring test-first code results in equally tested code, permitting more aggressive refactorings (RefactorMercilessly). Cruft is not allowed, and code is generally in better shape to accept more refactorings.
- When paying attention during all of the little steps, you may discover patterns in your code.
- Test code is easy to write. It's usually a couple of calls to the server object, then a list of assertions. Writing the easy code first makes writing the hard code easy.
- DesignPatterns may be introduced incrementally, not added all in a bunch up front.
- Test-first code is written interface first. You think of the simplest interface that can show function.
- Code tends to be less coupled. Effective unit tests only test one thing. To do this you have to move the irrelevant portions out of the way (e.g., MockObjects). This forces out what might be a poor design choice. (See the sketch after this list.)
- UnitTests stand as canonical & tested documentation for objects' usage. Developers read them, and do what they do in production code to the same objects. This keeps projects annealed and on track.
- When the developer has to write tests for what he is going to do, he is far less likely to add extraneous capabilities. This really puts a damper on developer-driven scope creep.
- Test First Design forces you to really think about what you are going to do. It gets you away from "It seemed like a good idea at the time" programming.
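The coupling bullet above can be made concrete with a hand-rolled stub. This is only a sketch with invented names (Greeter, Clock), assuming JUnit 4:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class GreeterTest {

      // The irrelevant portion (the system clock) is moved behind an interface.
      interface Clock { int hourOfDay(); }

      static class Greeter {
          private final Clock clock;
          Greeter(Clock clock) { this.clock = clock; }
          String greeting() {
              return clock.hourOfDay() < 12 ? "Good morning" : "Good afternoon";
          }
      }

      @Test
      public void greetsByTimeOfDay() {
          // One-line stubs stand in for the real clock, so the test
          // exercises only the greeting logic.
          assertEquals("Good morning", new Greeter(() -> 9).greeting());
          assertEquals("Good afternoon", new Greeter(() -> 15).greeting());
      }
  }

Forcing the dependency through a constructor parameter is exactly the design pressure the bullet describes: the test will not let you reach for a global clock.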
I have been working my way through Kent's TDD book for a while now, and applying the principles quite rigorously. I am a real dullard sometimes, because it takes me a horribly long time to understand even simple stuff. I had probably been applying TDD for more than a week before I realized why it works so well (at least for me). There are three parts to this:
- The tests (obviously) help find bugs in the application code, and the micro-steps taken with TDD mean that any "bugs" are in the very code I have just been writing, and hence in code for which I still have a relevant mental model in my small brain.
- By doing the absolutely simplest thing in the application code in order to get each test to run, I often have small AhaMoments, where I see that I am writing such overly concrete code (i.e., just enough to get the tests to work) that the tests (even though they run) cannot possibly be adequate to cover the "real" requirements. So to justify writing more abstract application code, I need to add more test cases that demand that abstraction, and this forces me to explore the requirement further. Therefore, the application code actually helps me debug the tests. That is, since the tests are the specification, feedback from the application code helps me debug the specification. (A small sketch of this follows the list.)
- As I have these "aha!" moments (mentioned in 2 above) I follow Kent's practice of adding them to the ToDoList. It took my stupid head quite some time to realize that the TODO list is actually a list of Micro-Stories, which I constantly prioritize (since I am the customer at this level). Following AlistairCockburn's insight that Stories are promises to have a conversation with the customer, I see, then, that the Micro-Stories in the TODO list are a promise to have a conversation with myself and (here is the weird bit) to have a conversation with the code (since it gives me feedback - it tells me things - and TDD tunes me in to listening to the code).
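To illustrate the second point, here is a sketch (invented names, JUnit 4 assumed) of overly concrete code being forced toward abstraction by a second test case:

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class AdditionTest {

      static class Calculator {
          // While only testTwoPlusTwo existed, "return 4;" passed.
          // Adding testOnePlusTwo is what forced this generalization.
          int add(int a, int b) { return a + b; }
      }

      @Test
      public void testTwoPlusTwo() {
          assertEquals(4, new Calculator().add(2, 2));
      }

      @Test
      public void testOnePlusTwo() {
          assertEquals(3, new Calculator().add(1, 2));
      }
  }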
Test Driven Development (TDD) by KentBeck, ISBN 0321146530. Mailing list: http://groups.yahoo.com/group/testdrivendevelopment
Test Driven Development (TDD) by DavidAstels?, ISBN 0131016490, another book.
JohnRusk worries that one danger of TestDrivenDevelopment is that developers may not take that step that you take. I.e., developers may stay with overly concrete code that satisfies the tests but not the "real" requirements. To look at it another way, I have always felt that it was dangerous to approach design with (only) particular test cases in mind, since it's usually necessary to think about boundary cases and other unusual cases. How does XP address that danger? By encouraging developers to write sufficiently comprehensive tests? Or by relying on developers to take that step which you mention, which is saying, "OK, this actually passes my tests, but it's not really adequate for the real world because...."?

XP addresses that danger with PairProgramming. When obvious boundary cases are overlooked by the programmer driving the keyboard, the programmer acting as navigator points out the oversight. This is an excellent example of a case where a single practice is, by itself, insufficient to reasonably guarantee success but, in combination with a complementary practice, provides excellent results.

TestFirst is a cool way to program source code. XP extends TestFirst to all scales of the project. One tests the entire project by frequently releasing it and collecting feedback.

Q: What "real" requirements can you not test?
- Those requirements which are not stated.
- Requirements which require highly specialized, unaffordable, or non-existent support hardware to test. E.g., with hardware such as the Intellasys SEAforth chips, it's possible to generate pulses on an I/O pin as narrow as 6ns under software control. To see these pulses, you need an oscilloscope with a bandwidth no less than 166MHz, and to make sure their waveform is accurate, you need a bandwidth no less than 500MHz, with 1GHz strongly preferred. However, at 1GHz bandwidths, you're looking at incredibly expensive sampling oscilloscopes. Thus, if you cannot afford this kind of hardware, you pretty much have to take it on faith that things are OK. Then, you need some means of feeding this waveform data back to the test framework PC (which may or may not be the PC running the actual test), which adds to the cost. (A quick arithmetic check follows this list.)
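For what it's worth, the 166MHz figure is consistent with the simple reciprocal rule of thumb; this reading is our inference, not anything stated in the comment above:

  % Minimum bandwidth to resolve a pulse of width T, by the reciprocal rule of thumb:
  \[ f_{\min} \approx \frac{1}{T} = \frac{1}{6\,\mathrm{ns}} \approx 167\,\mathrm{MHz} \]
  % Reproducing the waveform's shape needs a few multiples of this
  % (500 MHz is near the third harmonic), hence the larger figures above.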
Note: here is the link to the pdf file on the yahoo groups area: I found this unexpectedly awkward to locate by searching, so I thought I'd drop the link in here. -- DavidPlumpton
I've sometimes been an advocate of test driven development, but my enthusiasm has dropped after I've noticed that CodeUnitTestFirst goes heavily against prototyping. I prefer a style of coding where the division of responsibility between units, and the interfaces between them, are still in flux at the beginning of the development cycle; writing a test for those before the actual code is written will seriously hinder the speed of development and almost certainly end up testing the wrong thing.

Agreed. Please see my blog entry about adapting TDD for mere mortals at http://agileskills2.org/blog/2010/02/07/tdd-adapted-for-mere-mortals/ -- KentTong

Assuming you're not shipping the prototype, there's nothing particularly in conflict. The prototype is in itself a sort of design test. The trouble with prototypes is that they have this habit of becoming the product...
- [RE: prototypes becoming "the" product. Arrg.. I agree. No reason for it to happen. TDD makes rewriting slop as clean code absurdly easy]
Note that some TDDers abuse MockObjects. Dynamic mock systems like http://classmock.sf.net can make this too easy. A TDD design should be sufficiently decoupled that its native objects work fine as stubs and test resources. They help to test other objects without runaway dependencies. One should mock only the few remaining things which are too hard to adapt to testing, such as random numbers or filesystem errors. Some of us disagree with that view; see http://www.mockobjects.com/files/mockrolesnotobjects.pdf for an alternative. (A small sketch of mocking only the hard-to-control parts follows.)
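A sketch of the "mock only the hard parts" point, with invented names (ConfigLoader, FileSource) and JUnit 4 assumed; the filesystem error is the one thing faked, because it is genuinely hard to provoke on demand:

  import org.junit.Test;
  import java.io.IOException;
  import static org.junit.Assert.assertEquals;

  public class ConfigLoaderTest {

      interface FileSource { String read(String path) throws IOException; }

      static class ConfigLoader {
          private final FileSource files;
          ConfigLoader(FileSource files) { this.files = files; }
          String load(String path) {
              try { return files.read(path); }
              catch (IOException e) { return "<defaults>"; }
          }
      }

      @Test
      public void fallsBackToDefaultsOnIoError() {
          // A one-line hand-rolled fake provokes the error path on demand.
          ConfigLoader loader =
              new ConfigLoader(path -> { throw new IOException("disk gone"); });
          assertEquals("<defaults>", loader.load("app.conf"));
      }
  }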
I'd like to revisit a comment that JohnRusk made above: It seems to me that one danger of TestDrivenDevelopment is that developers may _not_ take that step that you take. I.e., developers may stay with overly concrete code that satisfies the tests but not the "real" requirements. See FakeItUntilYouMakeIt. (A small sketch below shows one way to make the missing boundary cases explicit.)
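One way to take "that step" is to turn the suspected gaps into explicit boundary tests. A sketch, with an invented AgeValidator and an invented 0..130 rule, assuming JUnit 4:

  import org.junit.Test;
  import static org.junit.Assert.*;

  public class AgeValidatorTest {

      static class AgeValidator {
          boolean isValid(int age) { return age >= 0 && age <= 130; }
      }

      @Test
      public void coversTheBoundaries() {
          AgeValidator v = new AgeValidator();
          assertFalse(v.isValid(-1));   // just below the lower bound
          assertTrue(v.isValid(0));     // the lower bound itself
          assertTrue(v.isValid(130));   // the upper bound itself
          assertFalse(v.isValid(131));  // just above the upper bound
      }
  }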
The principles of TDD can be applied quite well to analysis and design, also. There's a tutorial on Test-driven Analysis & Design at http://www.parlezuml.com/tutorials/tdad/index_files/frame.htm which neatly introduces the ideas. -- DaveChan
"Roman Numerals" is often held up as a good sample project to learn TDD. I know TDD but I'm bad at math,so I tried the project,and put its results here: -- PhlIp
<shameless plug> Up to date info on tools and practices at http://testdriven.com -- DavidVydra
I found GNU/Unix command line option parsing to be a good TDD exercise as well. My results here (a hedged sketch of such a first test follows the link):
- http://home.comcast.net/~pholser/software/pholser-getopts.zip (3.5Mb download; 4.7Mb unpacked)
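For flavor, here is the kind of first test one might write for such an exercise. The GetOpts API below is invented for this sketch (it is not the API in the linked download), JUnit 4 assumed:

  import org.junit.Test;
  import java.util.HashSet;
  import java.util.Set;
  import static org.junit.Assert.*;

  public class GetOptsTest {

      static class GetOpts {
          private final Set<String> flags = new HashSet<String>();
          GetOpts(String[] args) {
              // Just enough parsing to pass the first test: bare flags only.
              for (String arg : args)
                  if (arg.startsWith("-")) flags.add(arg.substring(1));
          }
          boolean has(String flag) { return flags.contains(flag); }
      }

      @Test
      public void recognizesASingleFlag() {
          GetOpts opts = new GetOpts(new String[] { "-v" });
          assertTrue(opts.has("v"));
          assertFalse(opts.has("x"));
      }
  }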
OrganicTesting of the TgpMethodology is one way to practice TestDrivenDevelopment. Organic Testing is an AutomatedTests methodology that resembles both UnitTest and IntegrationTest. Like UnitTests, they are run by the developers whenever they want (before check-in). Unlike UnitTests, only the framework for the test is provided by the programmers, while the actual data of the test is given by BusinessProfessionals. Like IntegrationTests, each run activates the whole software (or a whole module); a rough sketch of the programmer/business split follows. -- OriInbar
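A rough sketch of that split, assuming JUnit 4. The programmers own the harness and the DiscountRule name (both invented here); the table of cases is the part a BusinessProfessional would supply (inline here, but in practice it might be a spreadsheet export):

  import org.junit.Test;
  import static org.junit.Assert.assertEquals;

  public class DiscountRuleTest {

      static class DiscountRule {
          int discountPercent(int orderTotal) { return orderTotal >= 100 ? 10 : 0; }
      }

      @Test
      public void matchesTheBusinessTable() {
          // { order total, expected discount % } -- the business-supplied data.
          int[][] cases = { { 50, 0 }, { 99, 0 }, { 100, 10 }, { 250, 10 } };
          DiscountRule rule = new DiscountRule();
          for (int[] c : cases)
              assertEquals("total=" + c[0], c[1], rule.discountPercent(c[0]));
      }
  }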
Another reference: article "Improving Application Quality Using Test-Driven Development" from Methods & Tools http://www.methodsandtools.com/archive/archive.php?id=20
But what about SecureDesign? SecurityAsAnAfterthought? is a bad idea, and it seems that test-driven development (and a number of other agile processes, though perhaps not all) has a bad habit of ignoring security except as test cases, which isn't always the best way to approach the problem. -- KyleMaxwell
Hmmmm. Seems to me that TDD deals with security (as well as things like performance) just like any other functional requirement. You have a story (or task) explicitly stating what functionality is needed (e.g., user needs 2 passwords to login, which are stored in an LDAP server; or algorithm needs to perform 5000 calculations per second). You then write tests that will verify functionality. And then you write the functionality itself. And unlike traditional development, you now have regression tests to make sure that this functionality never gets broken. (I.e., if, due to subsequent coding, the security code gets broken, or the algorithm performance drops off, you'll have a broken test to alert you of that.) -- DavidRosenstrauch

One of the reasons that security is hard is that security is not just a piece of functionality. Okay, there are things like passwords which are security features, but the rest of the code also has to not have security holes, which are NegativeRequirements?; i.e., code must not do X. This is obvious in the case of 'must not overflow buffer', which TDD does address, but is less obvious in things like 'component X should not be able to affect component Y'. How do you test that? (I probably read this in 'Security Engineering' by Ross Anderson, which is now free on the web.) -- AlexBurr?

In the case of "Component A must not affect Component B", how would you evaluate this without test-driven development? If you can't formally define this requirement, then TDD is no better or worse than hoping for the best. (One answer in this case may be rule-based formal validation of the system, which is easy enough to plug into a TDD framework.) -- JevonWright?
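Where a NegativeRequirement can be pinned down at all, it can sometimes be given an executable form. A sketch of the 'must not overflow buffer' case, with an invented FixedBuffer class, assuming JUnit 4:

  import org.junit.Test;

  public class FixedBufferTest {

      static class FixedBuffer {
          private final byte[] data;
          private int used;
          FixedBuffer(int capacity) { data = new byte[capacity]; }
          void append(byte[] input) {
              if (used + input.length > data.length)
                  throw new IllegalArgumentException("input would overflow buffer");
              System.arraycopy(input, 0, data, used, input.length);
              used += input.length;
          }
      }

      // The negative requirement, stated as a test: oversized input must be
      // rejected, never silently written past the end.
      @Test(expected = IllegalArgumentException.class)
      public void mustNotAcceptInputLargerThanCapacity() {
          new FixedBuffer(4).append(new byte[8]);
      }
  }

Of course this only covers the parts of 'must not do X' that can be stated formally; the broader point above (unstated requirements stay untested) still stands.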
Some interesting info on test driven development from E. Dijkstra:
- http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1012.PDF
- (above is from http://www.cs.utexas.edu/users/EWD/index10xx.html)
IEEE Software will publish a special issue on Test-Driven Development in July 2007. For more information, see IeeeSoftwareSpecialIssueOnTestDrivenDevelopment
Found this useful illustration of TDD in .NET (Flash screen cast) http://www.parlezuml.com/tutorials/tdd.html
See TestFirstUserInterfaces, CalvinAndHobbesDiscussTdd, TestDrivenAnalysisAndDesign, TestDrivenDevelopmentaPracticalGuide, TestDrivenDevelopmentChallenges, TestDrivenDesignPhaseShift, PowerOfTdd, IeeeSoftwareSpecialIssueOnTestDrivenDevelopment, BehaviorDrivenDevelopment, TestFoodPyramid