Extreme Programming in Agile Development
The primary driver of any software project should be the problem the software is meant to solve. If the design of the program grows too large and expensive, it may itself become the driver of the project, either because it is too unwieldy to change or because too much has been invested in it. Similarly, a requirements-gathering process may become the main driver of the project. Extreme Programming was designed with this observation in mind: it insists on keeping the software problem to be solved at the center of the development effort.
Extreme Programming maintains that tests should be written before the code and that the code is then written to pass those tests. In addition, the client whose problem is to be solved defines the criteria from which Acceptance Tests are created. The Extreme Programming aim of maintaining tight feedback and iteration cycles among test, code, and design offers a viewpoint from which requirements can be dispensed with entirely. The framers of the software product simply have to create sets of tests that the software must satisfy: tests to code against and tests that define acceptance criteria (these can even extend to integration, system, and performance tests).
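As a sketch of this test-first cycle (the function name and scenario here are hypothetical, not from the source), the tests below would be written first and fail, and the implementation is then the minimal code added to make them pass:

```python
# Hypothetical test-first example: the two tests are written before any
# implementation exists and define what the code must do.

def invoice_total(line_items):
    """Sum of quantity * unit_price over (quantity, unit_price) pairs.

    This is the minimal implementation written to make the tests pass.
    """
    return sum(qty * price for qty, price in line_items)

# Tests written before invoice_total() existed:
def test_empty_invoice_totals_zero():
    assert invoice_total([]) == 0

def test_totals_quantity_times_price():
    assert invoice_total([(2, 5.0), (1, 3.0)]) == 13.0
```

The point of the discipline is the ordering: each test initially fails, and only code that some test demands gets written.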
The classical view of Software Testing maintains that tests are designed to verify that the software satisfies (or fails to satisfy) given requirements. In fact, some Automated Test Management Suites are built on this premise (e.g. Mercury Test Director). This easy-to-understand, easy-to-implement concept of a one-to-one relationship between requirements and tests is unfortunately invalid. Software tests are designed around models of the actual software. Similarly, requirements define a model that the software is supposed to adhere to.
When a test is said to fail, does this mean that the software does not meet the requirement, or does it mean that the test is not an accurate model of the software? Have we found a software bug or a testing bug? We can push this further: is the requirement an accurate reflection of the problem to be solved? Requirements are often changed when it is discovered that they do not add up to what the client wanted, i.e. requirements bugs have been found. On the other hand, the software may be satisfying the client’s needs properly despite failing tests aimed at establishing that it satisfies the requirement. These are not matters of bad test design; they reflect the fact that the claimed one-to-one relationship between tests and requirements is illusory.
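A small hypothetical illustration of a "testing bug": the code below is arguably correct, but one test's model of it is too strict, demanding exact floating-point equality, so a failure of that test reveals a flaw in the test rather than in the software.

```python
def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# A test whose model of the software is too strict: it demands exact
# floating-point equality, so it can fail even though average() is
# behaving correctly. Such a failure is a testing bug, not a software bug.
def strict_test():
    return average([0.1, 0.2, 0.3]) == 0.2

# A repaired test that models floating-point arithmetic honestly,
# comparing within a tolerance instead of exactly.
def tolerant_test():
    return abs(average([0.1, 0.2, 0.3]) - 0.2) < 1e-9
```

The software never changed between the two tests; only the test's model of it did.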
Relationship A, between the software and the problem being addressed, is the key relationship to be tested. Extreme Programming advocates keeping this relationship as focused as possible by doing unit testing with as tight an incremental test-code-design feedback cycle as is feasible. The models of the software and of the client’s problem are in this way kept as close to the actual phenomena being modeled as possible.
Relationship B, between the requirements and the tests, is addressed in acceptance testing. However, it is possible to drop the requirements component and the model of the problem that the requirements are based on. Instead, Extreme Programming advocates a single model of the relationship between the software and the problem to be solved, a model that is built through unit tests and validated through Acceptance Tests.
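A minimal sketch of this idea, with a hypothetical scenario and names: instead of writing a prose requirement, the client's stated criterion ("orders of 100 units or more get a 10% discount") is encoded directly as executable Acceptance Tests.

```python
# Hypothetical acceptance-test example: the client's criterion is
# expressed as tests rather than as a written requirement.

def order_price(units, unit_price):
    """Price an order, applying the client's bulk-discount rule."""
    total = units * unit_price
    if units >= 100:
        total *= 0.90  # 10% bulk discount, per the client's criterion
    return total

# Acceptance tests encoding the client's stated criterion directly:
def test_bulk_discount_applies_at_100_units():
    assert abs(order_price(100, 1.0) - 90.0) < 1e-9

def test_no_discount_below_100_units():
    assert order_price(99, 1.0) == 99.0
```

Here the tests themselves are the model of the problem: there is no separate requirements document for the software, or the tests, to drift away from.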