Friday, May 09, 2008

CHAPTER 6: Quality and Testing

6. Quality and Testing

6.1 Internal and external quality of a system

Defining quality in software development means considering the perspectives of the two main stakeholders involved in a software project: the customer and the programmer.

If we define software quality from the customer's perspective, we realize that quality is what the real user sees and feels when interacting with the software. Mary and Tom Poppendieck call it perceived integrity; Martin Fowler and Kent Beck call it the external quality of a system.

A wonderful example regarding the external quality of a system is given in “Lean Software Development: An Agile Toolkit” [], written by Mary and Tom Poppendieck, about how everyone needs to contribute to quality and how different people see quality differently:

Walt Disney designed Disneyland as a giant stage where several hundred actors make it their job to be sure every guest has a wonderful time. One guest's requirements for having a wonderful time are quite different from the next, and the actors are supposed to figure out exactly what each guest thinks a quality experience should be and make sure he or she has it.

Quality at Disneyland

At Disneyland, even the tram drivers are actors. A friend told me the story of a tram driver who noticed a small girl crying on her way back to the Disneyland hotel. He asked her why she was crying and found out that the crowd around Mickey Mouse was too large, so the girl had not been able to talk to Mickey.

The driver called ahead, and when the tram arrived at the hotel, there was Mickey Mouse, waiting to meet it. The girl was thrilled, and the driver had done his job of making sure she had a quality experience.

—Mary

Looking at a software product only from the programmer's point of view means looking mainly at maintainability and extensibility, not merely at whether the program respects design patterns or object-oriented principles. A software program is of high quality, or is considered to have a good design or architecture, if it can be modified, adapted and extended with ease.

Martin Fowler and Kent Beck describe internal quality:

… internal quality. This reflects the quality of the internals of the system: how well it is designed, how good the internal tests are, and so on. This is a very dangerous lever to play with. If you allow internal quality to drop you'll get a small immediate increase in speed, rapidly followed by a much bigger decrease in speed. As a result you must keep an eagle eye on this lever and make sure it is always up as far as it can go. Nothing kills speed more effectively than poor internal quality.

[Planning Extreme Programming]

Mary and Tom Poppendieck extend the concept of internal quality to conceptual integrity:

Conceptual integrity means that a system's central concepts work together as a smooth, cohesive whole. The components match and work well together; the architecture achieves an effective balance between flexibility, maintainability, efficiency, and responsiveness.

[Mary and Tom Poppendieck, 2003]

6.2 Automated tests

It is not testing alone that leads to high quality but the constant focus on keeping quality at its highest, all the time. Traditional processes tend to confuse quality with testing, and although a lot of time is allocated for testing, the testing phase is left to be the last stage in the process.

Agile processes have developed techniques to maintain focus on high quality all the time, the most important being continuous integration. At the base of these techniques stand automated tests: acceptance tests (defined with the customer) and unit tests that are run frequently, giving the programmers feedback about the quality status of the system. Keeping all tests running at 100%, and having good automated test coverage of the system, focuses the team on quality and detects breakage early, so it can be fixed fast before it eats through the system.

Automated tests, continuous integration, refactoring and customer involvement, combined with the focus on quality that needs to be delivered at the end of each iteration, ensure that the system’s integrity is built in and maintained all the time, from the beginning to the end.

6.3 Test Driven Development

Ron Jeffries expresses the Test Driven Development paradox very well:

Writing code and tests is faster than writing just the code, if the code has to work.

Now let’s see if this is true:

The mini TDD experiment

To demonstrate the concepts around test-first approaches and test driven development, we will run a small experiment. We assume that we are programmers and we need to code a function that divides two positive numbers. For this experiment we will compare the traditional and the TDD approaches.

Approach #1. Code and fix

As programmers, for a simple division we will write the following “pseudo” code:

Function Divide(No1, No2)
    Return No1/No2

For this very simple method, let’s assume we needed 5 seconds to write it. Now let’s test if it works. First we try 6 and 2, expecting 3. It works. Let’s try another combination: 1 and 2, expecting 0.5. It works. Now let’s try 8 and 0. An error just occurred. This means we need to modify the program to display a message to the user that the second number cannot be 0:

Function Divide(No1, No2)
    If No2 = 0 then display message “Division by 0 cannot be performed”
    Else Return No1/No2

Now let’s test our function again: 6 and 2 gives 3, good; 1 and 2 gives 0.5, as expected; 8 and 0 displays the message “Division by 0 cannot be performed”, as expected. Now our program works fine.

Assuming that manual testing is slow and each combination of numbers needs about 10 seconds, a testing session takes 30 seconds. The total time in which we developed the code was: 5 seconds to write the function, 30 seconds to test it and see it has problems with division by 0, about another 5 to correct the function and 30 seconds to test it again and make sure it works: in total 5+30+5+30=70 seconds, a minute and 10 seconds.

Approach #2: Test Driven development

In test driven development, there is a series of steps to write a piece of code, starting with an automated test written first and ending with making that test succeed by writing the code that it tests. Let’s see how it goes:

Function TestNormalDivision()
    Expect 3 as a result of Divide(6,2)

The code above compares the expected value with the value returned by our (yet unwritten) code, and if they do not match, it fails.

One very important step now is to make sure our test really tests something and does not pass every time, no matter what the code under test does. For this we need to make sure that when it should fail, it fails. So we write the following function:

Function Divide(No1, No2)
    Return 0

Now we run the test, and it fails, saying: expected 3 but the result was 0. So now we modify the function to make the test pass:

Function Divide(No1, No2)
    Return 3

Now we run the test again: 1 test succeeded. Excellent. Now let’s see if it works for 1 and 2, so we update the test:

Function TestNormalDivision()
    Expect 3 as a result of Divide(6,2)
    Expect 0.5 as a result of Divide(1,2)

We run the test. Failure. Oooh, we just realize the mistake we made (the code always returns 3) and modify the Divide function:

Function Divide(No1, No2)
    Return No1/No2

Running the tests now passes all our expectations. But then we wonder: what would happen if we used 8 and 0? Let’s add a new test to the test suite (now we have two) and make sure that if there is a division by 0, the user is notified:

Function TestDivisionByZero()
    Expect message “Division by 0 cannot be performed” displayed as a result of Divide(8,0)

We run the test. It fails. Now we modify our function to make it work:

Function Divide(No1, No2)
    If No2 = 0 then display message “Division by 0 cannot be performed”
    Else Return No1/No2

Running all our tests, we discover that they all succeed.
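
For readers who prefer real code over pseudocode, here is a minimal sketch, in Python’s standard unittest style, of where the experiment ends up. Returning the error message instead of displaying it is our assumption, made so the behavior can be asserted in a test:

import unittest

def divide(no1, no2):
    # Guard against division by zero, as in the pseudocode above.
    # Returning the message (rather than displaying it) keeps the
    # example self-contained and testable.
    if no2 == 0:
        return "Division by 0 cannot be performed"
    return no1 / no2

class DivisionTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(divide(6, 2), 3)
        self.assertEqual(divide(1, 2), 0.5)

    def test_division_by_zero(self):
        self.assertEqual(divide(8, 0), "Division by 0 cannot be performed")

if __name__ == "__main__":
    unittest.main()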

How much time did we need to write this code? We needed 5 seconds to write the first test, 5 seconds to make sure it fails, 1 second to run the test (testing is now done by the computer, so we assume it is at least 10 times faster than manual testing), 5 seconds to modify the code to make the test pass, 1 second to run the test, another 5 seconds to extend the test to verify the 1 and 2 combination, 1 second to see that the test fails, 5 seconds to modify the function and 1 second to see it working, another 5 seconds to write the second test and 2 seconds to see the first test pass but the second fail, then 5 seconds to complete the code and another two to run the 2 tests and make sure everything works. Wow, a long way: 5+5+1+5+1+5+1+5+1+5+2+5+2 = 43 seconds.

Using both approaches, we ended up with the same code. The amount of code written for the second approach is bigger than for the first, since we have both the code and the tests. The amount of time needed for the second approach was arguably smaller than for the first, which leads us to Ron Jeffries’s conclusion: to obtain good code, writing tests and code is faster than writing code alone. The main advantage is that we use computer power rather than human power to do the testing, so we are much faster. We can then run the automated tests over and over again, and it will take 2 seconds to see if they pass; manually it would take 30 seconds to do the same thing.

Let’s go further with our experiment, assuming that we now need to extend the program to also do additions, subtractions and multiplications.

Approach #1. Since these operations are not affected by 0 (though we test that anyway), the code written first will work, so it would take about 5 seconds to write each method, and testing each with 3 combinations of numbers would take about 30 seconds. The time needed would be 5+30+5+30+5+30 = 105 seconds, or 1 minute and 45 seconds. Testing the whole program (the 3 new methods and the division method) would take us 4*30 = 120 seconds, which is 2 minutes.

Approach #2. TDD

Each of these operations, just as above, will need only one test, checking 3 combinations. Let’s say it takes 10 seconds to write a test method like this:

Function TestMultiplication()
    Expect 0 as a result of Multiply(6,0)
    Expect 3 as a result of Multiply(3,1)
    Expect -9 as a result of Multiply(3,-3)

Then we’d have to make sure it fails: 5 seconds, plus 1 second to run the test; then we’d write the code to make it pass: 5 seconds, and 1 second to make sure it works. So it takes about 10+5+1+5+1 = 22 seconds for each new function, resulting in 3*22 = 66 seconds, or 1 minute and 6 seconds, to write the new functions. Testing all the code would mean running 5 test methods (2 for division and 1 for each of the other three), which would run in 5 seconds.
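
In the same Python sketch style, the multiplication test and the function that satisfies it might look like this (multiply and the class name are ours, mirroring the pseudocode above):

import unittest

def multiply(no1, no2):
    return no1 * no2

class MultiplicationTests(unittest.TestCase):
    def test_multiplication(self):
        # The same three combinations checked in the pseudocode above.
        self.assertEqual(multiply(6, 0), 0)
        self.assertEqual(multiply(3, 1), 3)
        self.assertEqual(multiply(3, -3), -9)

if __name__ == "__main__":
    unittest.main()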

Tests and code, faster than just code

Comparing the times needed to test our incredibly simple system, 2 minutes vs. 5 seconds, shows us that not only is the code written faster (70+105=175 seconds vs. 43+66=109 seconds), but making sure it works requires far less time with the TDD approach. And the second big advantage: it can be done by a computer.

Using a continuous integration machine that downloads the program sources, runs the test suite and then sends us an email telling us what happened means 5 seconds for the machine and 0 seconds on my side to test the whole system. Using the first approach, it would take me 2 minutes to make sure the whole system works. I could delegate the responsibility of testing the whole system to the testing team, but then the feedback time, telling me whether the system works as a whole or not, increases rapidly to days or weeks, and by that time I would be doing something else.
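
As a sketch of such a continuous integration step, assuming a Subversion repository and a Python unittest suite (the email notification is replaced here by a simple printout):

import subprocess

# Fetch the latest sources; svn stands in for whatever version control is used.
subprocess.run(["svn", "update"], check=True)

# Run the whole automated test suite.
result = subprocess.run(
    ["python", "-m", "unittest", "discover"],
    capture_output=True, text=True,
)

if result.returncode != 0:
    # A real CI machine would email the team here.
    print("Tests failed:\n" + result.stdout + result.stderr)
else:
    print("All tests pass.")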

Scalability

In the 3rd phase of our little experiment, we analyze what would happen if our system had 400 functions instead of 4. Using the first approach, a full test would need about 12,000 seconds (that is over 3 hours), while using TDD and the automated test suite it would take about 500 seconds or, better said, less than 10 minutes. This simple sample shows the scalability of TDD compared with traditional coding approaches. The testing team could work in parallel to some extent; but then again, I could also set my integration machine to divide the tests and run them in parallel.

With this very small experiment, we showed how test driven development is, compared with just coding:

- faster to develop

- faster to test the whole system and give feedback

- scalable

Tests as documentation

Another advantage of the method described above is that the automated tests can act as very good documentation of the code written. In traditional approaches, documenting things that can easily be deduced from the automated tests, like how a function works, would increase the development time even more. After all, just reading:

Function TestDivisionByZero()
    Expect message “Division by 0 cannot be performed” displayed as a result of Divide(8,0)

tells me, or someone new to the project, that if you try to perform a division by 0, the system will display an error message on the screen.

Embrace change: how?

Having a system with 4, 400, 1,000 … 100,000 methods doesn’t really comfort me when it comes to making a change in it. If I change one tiny piece of code somewhere, could I break something in another part of the system? And if I do, how can I know fast enough to be able to either correct it or reverse my changes?

To get the feedback from the code telling me if and where I’ve broken some existing functionality, I would normally need to retest the whole system. For a 4-method system, that would take 2 minutes, but for a more realistic system, it might take hours, days or even weeks. So the courage to change decreases as the system gets bigger, thus shortening the life of the system. When a system is too rigid and can no longer adapt to the changes in the market, it is bound to die.

Having a full automated regression test suite that runs very fast and can be run very often means fast feedback. Fast feedback means changes are less risky and can be made more easily and faster, thus extending the life of a system.

Design advantages

Another advantage of writing automated tests for the code is that the resulting code tends to be very loosely coupled, and thus better designed. Test driven development also tends to eliminate “partially completed code”, encouraging less code to be written, as the programmer is more focused on what is really needed, thus decreasing the amount of code and its complexity.

At a macro level, the fact that changes, even to the architecture, are much easier to perform when using TDD combined with aggressive refactoring allows the programmers to continuously upgrade the design and update the architecture. Since changes are easy to make, evolutionary design is encouraged, with a much smaller need to build a flexible architecture up front, following the YAGNI principle from XP.

How do we achieve external quality at 100% all the time?

After each iteration, a potentially shippable system is delivered, so quality is at 100% every 2-4 weeks, maintaining the system at the highest level throughout its lifecycle.

Frequent releases that allow the customer representatives to see the system all the time keep the project on the right path and at the right perceived quality. When the customer sees the software, he can very easily spot things that are not in accordance with what he thought they would be, and together these issues can be fixed in the next iteration. The focus on quality in agile development is mainly expressed by the iterative and incremental development model, which allows the stakeholders in a project to identify problems early and correct them as soon as they are found, thus maintaining the system at the highest quality.

As a small sample, consider one frequent non-functional requirement: the system must be fast. In traditional testing techniques, the test and development teams had to figure out by themselves what “fast” means. This is a very difficult requirement to deal with, because it is almost impossible to measure. Presenting the system to the customer after an iteration might let him say: “I want that report to be faster. At this time it needs about 10 seconds. Can you make it faster? Let’s say 2-4 seconds.” Suddenly, there is a clear definition of where “fast” applies, allowing the team to focus on solving a concrete problem. This simple sample shows how perceived quality is maintained by the continuous collaboration and feedback between the customer and the team.
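
Once the customer has quantified “fast”, the requirement can even be pinned down as an automated test. A minimal sketch, assuming a hypothetical generate_report() function standing in for the real report code:

import time
import unittest

def generate_report():
    # Placeholder for the real report generation code.
    time.sleep(0.1)

class ReportPerformanceTest(unittest.TestCase):
    def test_report_is_fast_enough(self):
        start = time.time()
        generate_report()
        elapsed = time.time() - start
        # The customer asked for 2-4 seconds; we assert the upper bound.
        self.assertLessEqual(elapsed, 4.0)

if __name__ == "__main__":
    unittest.main()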

Who is involved in quality, how and when?

Everyone, all the time. In traditional processes, the responsibility for quality is mainly delegated to testing teams that must make sure the code is of high quality. Agile thinking makes quality a collective responsibility of the customer, the developers and the testers, all the time, from the first to the last minute of a project.

The customer is involved in quality by defining acceptance tests. The developers are involved by helping the customer write those tests and by writing unit tests for all the production code they write. The testers are involved by helping the developers automate the acceptance (customer) tests and by extending the suite of automated tests.

Manual vs. automated testing

Manual testing is not forgotten, although in agile methodologies a great emphasis is put on automated tests. Manual testing is still performed. Agile testing is about balancing the need to automate when it is beneficial against relying on manual testing when it is more efficient than writing automated tests.

How do we achieve internal quality?

Simplicity, unit tests running at 100%, refactoring, YAGNI, collective ownership and continuous integration, combined with good coding techniques, are all practices that as a whole are aimed at keeping code quality at its highest, all the time.

Regarding code quality and the simplicity principle, Ron Jeffries writes:

Everything we write must:

  1. Run all the tests
  2. Express every idea we need to express
  3. Say everything once and only once
  4. Have a minimum number of classes and methods, consistent with the above

[Ron Jeffries, 2001]

Friday, May 02, 2008

CHAPTER 5: Managing and Organizing

5. Managing and Organizing contents:

5.1 A small agile process practice sample
5.2 Iterative and Incremental process
5.3 Adaptive planning strategy
5.4 Evolutionary design strategy
5.5 Fast delivery strategy
5.6 People first strategy

5. Managing and Organizing

5.1 A small agile process practice sample

Let’s consider that we have a new client who wants a new software product to manage and track his sales and customers. He wants the system to go live in 2 months. We meet and establish what is required, then plan to meet the deadline.

After the first meetings we come up with the following list of the features he thinks he wants, which we then estimate:

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)
  4. Sales reports (3p)
  5. Client activity management (3p)
  6. User management (2p)
  7. Sales workflow (3p)

Now, together with the customer, we make the first plan, dividing the work into two iterations:

Iteration #1: 10p

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)

Iteration #2: 11p

  1. Sales reports (3p)
  2. Client activity management (3p)
  3. User management (2p)
  4. Sales workflow (3p)

As you can see, estimates are given in an abstract measuring unit: points. We, as the development team, know we can deliver about 10.5-11 points per iteration. Once we have a plan to deliver the 21 points of features in 2 months, we start working on the first iteration, at the end of which we deliver features 1, 2 and 3.

We now show the customer the first 3 features implemented, but he suddenly realizes that he needs more than what was planned for the release. We start by adding what he wants to the list of features; after that, each of the new features is estimated:

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)
  4. Sales reports (3p)
  5. Client activity management (3p)
  6. User management (2p)
  7. Sales workflow (3p)
  8. Activity calendar (3p)
  9. Forecast reports (3p)
  10. Document templates and document merging (3p)

With the client, we realize that we cannot deliver everything within the 2 months term, but we decide that we can still make a delivery after the two months, and we plan another delivery after that, which will include the other features that will make the system more complete. The new plan looks like this:

Release #1: 2 months

Iteration #1: 10p

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)

Iteration #2: 11p

  1. Sales reports (3p)
  2. Client activity management (3p)
  3. User management (2p)
  4. Forecast reports (3p)

Release #2

Iteration #3: 9p

  1. Sales workflow (3p)
  2. Activity calendar (3p)
  3. Document templates and document merging (3p)

As you can see, the customer considered it more important to have the forecast reports in the first delivery, so he moved the forecast reports into the second iteration and the sales workflow to the 3rd iteration.

After the second iteration is over, we have 7 of the 10 features finished, in two one-month iterations, delivering on time what the customer wanted and deploying it to be used by his end users.

At the beginning of the 3rd iteration, the customer realizes that he wants a few more things, and he would like the delivery of the complete system in another two months, so he can fit it into his budget. He adds a few new features, so the list now looks like this:

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)
  4. Sales reports (3p)
  5. Client activity management (3p)
  6. User management (2p)
  7. Sales workflow (3p)
  8. Activity calendar (3p)
  9. Forecast reports (3p)
  10. Document templates and document merging (3p)
  11. Contact communication management (3p)
  12. Microsoft Outlook integration (2p)

The plan now becomes:

Release #1: 2 months

Iteration #1: 10p

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)

Iteration #2: 11p

  1. Sales reports (3p)
  2. Client activity management (3p)
  3. User management (2p)
  4. Forecast reports (3p)

Release #2

Iteration #3: 9p

  1. Sales workflow (3p)
  2. Activity calendar (3p)
  3. Document templates and document merging (3p)

Iteration #4: 5p

  1. Contact communication management (3p)
  2. Microsoft Outlook integration (2p)

After finishing and showing the client the resulting product, now having 10 features out of 12 implemented, he realizes that there is one more feature he’d like to have, and that it can be implemented in the last iteration: sales processes. The list of features becomes:

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)
  4. Sales reports (3p)
  5. Client activity management (3p)
  6. User management (2p)
  7. Sales workflow (3p)
  8. Activity calendar (3p)
  9. Forecast reports (3p)
  10. Document templates and document merging (3p)
  11. Contact communication management (3p)
  12. Microsoft Outlook integration (2p)
  13. Sales Processes (5p)

And the updated plan:

Release #1: 2 months

Iteration #1: 10p

  1. Client management (3p)
  2. Product management (3p)
  3. Sales leads management (4p)

Iteration #2: 11p

  1. Sales reports (3p)
  2. Client activity management (3p)
  3. User management (2p)
  4. Forecast reports (3p)

Release #2

Iteration #3: 9p

  1. Sales workflow (3p)
  2. Activity calendar (3p)
  3. Document templates and document merging (3p)

Iteration #4: 10p

  1. Contact communication management (3p)
  2. Microsoft Outlook integration (2p)
  3. Sales processes (5p)

At the beginning of the 4th iteration the client says he’d like one more feature, which is estimated by the team at 3 points. Adding that feature would mean missing the second release target, so after weighing the options he decides to drop it.

After the 4th iteration the final product is delivered. The sample above cannot show by any means how every project develops, but it shows the agile process at work: iteration by iteration, planning, adapting and delivering a product incrementally to the end customer.

The evolving process of gathering the client’s requirements can be followed in the successive feature lists above, growing from 7 to 13 features.

A burn down chart would show, iteration by iteration, the number of remaining points to be implemented.
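
Reconstructed from the plans above (the numbers below are an inference from the sample’s own figures), the data behind such a chart would be:

  Iteration      Points delivered   Points added afterwards    Points remaining
  (start)        -                  21 (initial list)          21
  Iteration #1   10                 9 (features 8-10)          20
  Iteration #2   11                 5 (features 11-12)         14
  Iteration #3   9                  5 (feature 13)             10
  Iteration #4   10                 -                          0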

5.2 Iterative and Incremental process

Iterations are the base of stability in agile methodologies. They are the ground for learning and adapting through customer feedback, and the foundation of the whole adaptive approach used in agile processes.

Iteration properties:

1. Iterations give a close deadline, increasing focus and the need to organize inside the team

Since iterations are kept small (1-4 weeks), the deadline is always very close and, as Ken Schwaber explains, the team must focus much better to deliver what is planned by that deadline. A big problem with traditional approaches is that focus increases only as the deadline approaches:

Everyone in the development group had a lot to accomplish, so why wasn’t the whole department hard at work at 9 a.m.? The vice president observed that the team usually didn’t feel any pressure until three months before the release date and that members of the team started developing in earnest only during the last two months of the release cycle. Assignments at the task level, assignment of individuals to multiple teams, and particularly the waterfall approach all led everyone to feel isolated from the reality of the release during the first three or four months. During the last two months, the developers tried to make up for what they hadn’t completed in the first four months.

[Ken Schwaber, Agile Project Management with SCRUM, Service 1st sample]

2. Iterations are a good instrument for planning and tracking progress

The plan can easily be readapted to the customer’s needs at the end of each iteration, when a piece of the final product is shown to the customer, thus demonstrating progress.

3. Iterations minimize risks

Agile thinking minimizes risk because it always focuses on the most important and most valuable features for the customer, developing them first. When they are delivered, the priorities of the remaining features might have changed, so a new plan is made, reprioritizing and delivering what is important now. The same reprioritization is used at each following iteration to plan what to do next.

4. Iterations are a good instrument for managing changes in software

At the end of each iteration, direction can be changed if it needs to be, and plans adapted to the new needs.

5. Iterations are a good instrument to build trust

By seeing the software after each iteration, the customer watches the product grow as planned and progress being made, thus increasing his confidence in the development team and in the delivery of the product.

6. Iterations allow learning and adapting

Because the customer sees the results in iterations, he can better express his needs, learning about what can be done and learning about costs. On the other hand, the developers can learn what the customer’s needs are as the project is being developed.

7. Iterations are a good instrument for development

At the end of an iteration, a set of features must be shown working. This shifts the focus of the developers, who might be tempted to develop a system horizontally, layer by layer, from data access to business layer to presentation layer, assembling the whole system at the end, towards a vertical approach, where the focus is put on delivering working features, developed vertically across all layers and delivered one by one.

8. Iterations build confidence and motivation in the team

One very important aspect in development, especially in the early stages of a project is an early victory. By delivering the first iteration, the team starts to see something positive happening and starts to build confidence that it will win. With each new iteration, a new battle is won, getting closer and closer to winning the war.

9. Iterations bring honesty

Since iterations are short, and after each iteration, the customer sees the real product developed, delays are surfaced very early, not allowing them to grow into a huge problem.

10. Iterations are a good instrument to increase quality

At the end of each iteration, a potentially shippable product must be shown to the customer. This focuses the team on keeping quality high, never leaving bugs and quality problems unhandled.

5.3 Adaptive Planning strategy

Kent Beck defines planning as:

Plans are not predictions of the future. At best, they express everything you know today about what might happen tomorrow. Their uncertainty doesn't negate their value. Plans help you coordinate with other teams. Plans give you a place to start. Plans help everyone on the team make choices aligned with the team's goals.

And he defines the planning strategy:

We will plan by quickly making an overall plan, then refining it further and further on shorter and shorter time horizons—years, months, weeks, days. We will make the plan quickly and cheaply, so there will be little inertia when we must change it. […]

The strategy for the team is to invest as little as possible to put the most valuable functionality into production as quickly as possible, but only in conjunction with the programming and design strategies designed to reduce risk. In the light of the technology and business lessons of this first system, it becomes clear to Business what is now the most valuable functionality, and the team quickly puts this into production. And so on.

[Kent Beck, 1999]

The big, high level, direction and goal establishing plan

Many critics of agile methodologies claim that they advocate going directly into developing code, without any vision of the whole product to be developed, randomly implementing and delivering features that have no direction, which can lead to a never ending development process with no finished product ever being shipped. The truth is quite the opposite. Agile methodologies do emphasize the need for an overall vision of the product, but that vision need not be complete from the beginning and can be shaped and changed later according to the needs of the software buyer.

In all agile methodologies, software is delivered to the end users in releases, which usually take from 1 to 6 months, depending on the specifics of the methodology and of the project. These releases are divided into iterations that deliver potentially releasable software to the customers, but usually the results of the iterations are there to demonstrate progress, maintain a high level of quality and, most of all, collect feedback from the customer representatives who supervise the development.

A Scrum project starts with a vision of the system to be developed. The vision might be vague at first, perhaps stated in market terms rather than system terms, but it will become clearer as the project moves forward. The Product Owner is responsible to those funding the project for delivering the vision in a manner that maximizes their ROI. The Product Owner formulates a plan for doing so that includes a Product Backlog. The Product Backlog is a list of functional and nonfunctional requirements that, when turned into functionality, will deliver this vision. The Product Backlog is prioritized so that the items most likely to generate value are top priority and is divided into proposed releases. The prioritized Product Backlog is a starting point, and the contents, priorities, and grouping of the Product Backlog into releases usually changes the moment the project starts—as should be expected. Changes in the Product Backlog reflect changing business requirements and how quickly or slowly the Team can transform Product Backlog into functionality.

[Ken Schwaber – Agile Project Management with SCRUM]

Since software is delivered to the end users every few months, it is very important that at least the first release is planned, defining what will be developed in it. In XP this is called release planning, in SCRUM it is done through the Product Backlog, and in Crystal it is done by defining a product plan.

After defining the high level features of the product, the developers estimate them, mostly in terms of weeks and months, giving the customer, or Product Owner as he is called in SCRUM, the cost of each feature and the possibility to act on those costs: saying which features are the most important, which can be left for a later release, and which can be dropped. After the customer prioritizes the features, they are divided into iterations, with the most important in the first iteration, the next most important in the second, and so on until the end of that release.
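
As an illustration only (none of this code is prescribed by any of the methodologies above), iteration planning by customer priority within a known velocity can be sketched as follows; the feature names and estimates are taken from the sample in section 5.1:

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    points: int    # developer estimate
    priority: int  # set by the customer; lower means more important

def plan_iteration(backlog: list[Feature], velocity: int) -> list[Feature]:
    """Greedily fill one iteration with the highest-priority features
    that still fit within the team's velocity."""
    planned, used = [], 0
    for feature in sorted(backlog, key=lambda f: f.priority):
        if used + feature.points <= velocity:
            planned.append(feature)
            used += feature.points
    return planned

backlog = [
    Feature("Client management", 3, 1),
    Feature("Product management", 3, 2),
    Feature("Sales leads management", 4, 3),
    Feature("Sales reports", 3, 4),
]

# With a velocity of about 10 points, the first three features fill iteration #1.
print([f.name for f in plan_iteration(backlog, velocity=10)])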

The main advantage of the agile approach is that the product plan is very flexible, allowing it to start at a high level and letting the customers complete it as the program is being developed.

The small, detailed, short cycle plan

Once the high level plan is defined, it is time to start working on the first iteration. An iteration is a small project that goes through all the phases used in traditional processes: requirements analysis, design, implementation and testing. The result of an iteration must be a potentially releasable piece of software, delivering “running, tested features”.

The first step in an iteration is the iteration planning meeting, best exemplified by the SCRUM sprint planning meeting, which has inspired the planning sessions used in other agile methodologies, like the planning game in Extreme Programming or the blitz planning sessions used in Crystal. The main purpose of this meeting is to obtain the details of the user stories or backlog items; based on these details the team breaks them into programming tasks, which are estimated again, and based on these estimates (which are now much closer to reality) the team can commit to a list of features that will be developed in the current iteration:

At the start of an iteration, the team reviews what it must do. It then selects what it believes it can turn into an increment of potentially shippable functionality by the end of the iteration. The team is then left alone to make its best effort for the rest of the iteration. At the end of the iteration, the team presents the increment of functionality it built so that the stakeholders can inspect the functionality and timely adaptations to the project can be made.

[Ken Schwaber]

In SCRUM, the sprint planning meeting is a one day session where the customer representatives and the team of developers all get together, divided into two time boxed sessions of 4 hours each. In the first part the details are exposed; in the second part the developers estimate and commit.

In XP, where user stories define the functionality to be developed, each story is discussed with the customer to get its details; the programmers then disaggregate the story into smaller programming tasks, and for each of these tasks a programmer accepts the responsibility to develop it, re-estimating the time needed in order to make sure he does not over-commit. As a sample, a user story like “the user can search for contracts” could be divided into the following list of tasks:

  • build search page
  • build advanced search page
  • display results screen
  • build the SQL database query for basic searches
  • build the SQL database query for advanced searches
  • document new functionality in help system and user's guide

Ron Jeffries recommends that the programmers commit to user stories rather than tasks, so that they commit to full pieces of functionality and have a much clearer focus.

The most important aspect of agile planning techniques is that planning is a continuous collaboration and adaptation activity between the customer representatives and the team of programmers, where the customers define what needs to be done with help from the programmers, and the developers estimate the cost and commit based on their own estimates.

5.4 Evolutionary design strategy

In agile methodologies, the design strategy starts from two fundamental lessons learned while developing software:

  1. Complexity is very hard to manage
  2. Irreversible decisions are to be avoided in design

One of the big problems in software development is managing complexity. The bigger a system becomes, the harder it is to change it, to add new functionality or to maintain it. Starting from this assumption, agilists have built a series of practices and tools that enable developers to manage the growing complexity in a software project more easily, and also help them take decisions that can easily be reversed.

Simplicity is at the very core of every agile methodology, as stated in the agile manifesto principles: “Simplicity--the art of maximizing the amount of work not done--is essential.” One of the principles, promoted especially by Extreme Programming, is the YAGNI principle, or “You Ain’t Gonna Need It”. It stems from the fact that in many projects, when a design is created before any code is developed, the urge to make that design very flexible is very strong. Often, those designs with flexibility built into them ended up with lots of parts never used, adding only to the complexity of the whole system. In other cases, the “bullet proof” designs only contributed to making simple things harder to put into practice.

Traditional processes presume that the whole architecture and design of the software is developed before any code is written. Although the idea sounds good, in practice it has proven almost impossible to foresee all the needs of a software project. In other cases this technique, also known as BDUF (Big Design Up Front), failed even when a very good design was defined, because it was impossible to code. Martin Fowler states that “UML-like design can look very good on paper yet be seriously flawed when you have to program the thing”, continuing: “even skilled designers, such as I consider myself to be, are often surprised when we turn such designs into software”. One of the biggest problems with BDUF is that it is very vulnerable to changing requirements, which again exposes the problem of building extra flexibility into the design that ends up never used.

So if the design is not done up front, is it dropped altogether, with software developed in a “cowboy programming” manner? Martin Fowler, one of the most respected authors when it comes to design, architecture, patterns and UML, wrote an article about evolutionary design called “Is Design Dead?”, in which he explains how XP has built methods that allow design to be performed continuously, by constantly focusing on the architecture and refactoring it to the current needs, little by little, all the time.

Martin Fowler explains that XP has developed enabling practices such as test driven development, refactoring and continuous integration. Full automated test suites enable refactoring of the existing code to be performed aggressively and continuously, making the code simpler and easier to handle. Continuous integration is the technique that keeps the team in sync all the time, and since focus is built in through the iterations, daily meetings and reflective workshops, the code can be kept at the highest level of quality all the time, respecting the simplicity and YAGNI principles. This expresses how XP followers understand and support the agile principle “Continuous attention to technical excellence and good design enhances agility.”

Evolving an architecture is not the same as just letting the system grow in any shape that's convenient for the moment. Growing an architecture takes thought and discipline. We keep the architecture optimized for the current conditions. When those conditions change we refactor the architecture to keep it optimized.

[Robert C. Martin blog]

Even in the agile community there are continuous debates over the evolving architecture concept, although most of the agile methodologies seem to favor and encourage this practice. One of the best known criticisms of this approach is expressed in Matt Stephens and Doug Rosenberg’s book “Extreme Programming Refactored: The Case Against XP” [], where the authors publicly criticize the evolving architecture technique promoted by XP, favoring an up-front design technique which is nevertheless quite different from BDUF, because it concentrates mostly on smaller pieces of the project, like an iteration, rather than on the whole project.

It is very important for a team using agile approaches to find the method that suits it best, either choosing to design a whole piece of the project in one go, taking care not to fall into the BDUF trap, or choosing the evolutionary design practice, making sure they still see the whole, as recommended by the lean principles, and refactor effectively.

Modeling is encouraged by agilists all the time, from CRC cards to the system metaphor and UML sketches drawn on the whiteboard, following the principle of sufficiency. Agile modeling concentrates on the techniques that bring value to the project and the customer, and eliminates modeling (just like documentation) done without a very clear purpose, which becomes a burden over time.

5.5 Fast delivery strategy

One of the most important aspects of agile thinking is that the result of an iteration must be working, tested software, and nothing else. There have been cases where organizations believed they had adopted agile practices using iterative and incremental processes, but they had only masked the old waterfall model under an iterative one, building the requirements in the first few iterations, the design in the next ones, and so on. This breaks one of the most fundamental rules of agile and lean thinking: delivering valuable software as soon as possible.

Not delivering, at the end of an iteration, a partially implemented part of the whole system with a small list of features completed, especially in the first stages of development, can become a major drawback in agile development. Not delivering software at the end of an iteration minimizes the feedback that can be obtained from the customer, feedback that is at the base of the learning and adapting process sustained by the agile movement. Not delivering software early also fails to focus the team, makes collaboration between team members less necessary, and does not contribute to building the trust between the software buyer and the software vendor that is essential in agile development.

Another very important agile rule for early delivery is that the features planned for one iteration must be delivered at 100%: defined, implemented, tested and ready for production. It is much better to have 8 features out of 10 planned developed at 100% than 10 out of 10 at 80%.

Agile methodologies have built-in practices to keep the focus, all the time, on the most important thing to do: the customer prioritizes the features, a very clear goal is planned for the end of the increment, which is usually very close, and the manager’s main purpose is to eliminate obstacles from the team’s delivery path and keep the team focused, in daily meetings designed exactly for this purpose.

In order to deliver fast, the team must always know where it is in development. The purpose of planning is to give the team a direction to follow and to reassure them, by letting them know at all times where they are through comparing reality with the plan. If reality and the plan differ, then either the team is off track or the plan is no longer valid. Agile, adaptive planning activities make sure the plan is continuously updated and adapted, so it is always as needed.

In agile methodologies, the tracking activities that compare reality with the plans are very transparent: big visible charts on the walls in XP, or burn down charts showing how much of the Product Backlog has been implemented and delivered in each iteration. Serving the same purpose, the burn down chart shows how the number of features varies over the iterations and how many of them are implemented, iteration after iteration.

When it comes to code, fast delivery can only be achieved by keeping the code in very good shape at all times. Complex and unhealthy code has been proven to slow a team down drastically as the amount of code grows; that is why in all agile methodologies the code must be developed at the highest quality. The emphasis on automated tests tends to enforce loose coupling in the code, while simplicity and refactoring decrease complexity, continuously improving the design.

5.6 People first strategy

Building a team of highly capable and motivated individuals who work very effectively together is not an easy task; however, agile methodologies offer several tools and practices:

a) People build software, not processes

The focus must be put on people: letting them organize themselves and letting them participate in taking decisions, the most important tool for this being letting the people estimate their own tasks rather than having them estimated for them.

“Individuals and interactions over processes and tools” is the first value in the agile manifesto, showing that it is not the tools and the religious following of a process that produce valuable software for the customer, over and over again, but the people developing it, self organizing and collaborating all the time.

Building a team is more important than building the environment. Many managers make the mistake of building the environment first and expecting the team to gel automatically. Instead, work to create the team and let them configure the environment on the basis of need.

[Robert C. Martin, 2005]

b) Motivation, leadership and self organization as tools of management

Management can usually work with the team of programmers in two ways: telling the programmers what to do, when and how, or telling the programmers what needs to be done and letting them decide how long it will take and how it will be implemented. Agile methodologies use the second method: people are put first and given responsibilities and rights, letting them self organize, and this method has proven to be extremely successful both in software and outside it.

The big difference between self organized teams and teams that are organized by upper management is motivation. In self organized teams, highly motivated individuals collaborate very well, focusing on what is important for the whole team. In teams that are told what to do and how to do it, motivation is very low and people find it hard to work together, having a more individualistic attitude, because even if they do want to collaborate, the possibilities are very restricted.

Ken Schwaber states that in agile methodologies, teams are driven to collaborate better and to organize themselves at a higher level, because the deadline is always very close and they need to stay focused. He says:

In most circumstances, management imposes a deadline and tells the workers what to complete by that deadline. This violates the rule of common sense: “You can tell me what to do or how to do it, but you can’t tell me both”. With agile processes the length of the iteration imposes the deadline. … The team selects how much work they can perform within the iteration, and makes commitments for it. Nothing demotivates a team as much as someone else making commitments for it. Nothing motivates a team as much as accepting the responsibility for fulfilling commitments that it made itself.

In agile teams, the project manager’s main role is to make sure nothing stands in the team’s way, so that they can organize themselves and build valuable software. These teams of programmers work together all the time: decisions are taken together, design is done together, and testing is done by the testers together with the programmers. The team is allowed to decide how to best use its resources to achieve the goal, without intervention from upper management on who does what, who tests, who designs, who implements.

d) People are not pluggable and compatible units that can easily be replaced

Alistair Cockburn continues the idea by showing that people are not plug compatible units, as some of the traditional processes seem to believe and enforce. In his paper “Characterizing people as non-linear, first order components in software development”, he shows what led to the belief that software developers are replaceable resources that come in different shapes: analysts, coders, testers, managers and so on:

In the title, I refer to people as components. That is how people are treated in the process/methodology design literature. The mistake in this approach is that people are highly variable and non-linear, with unique success and failure modes. Those factors are first order, not negligible factors. Failure of process and methodology designers to account for them contributes to the sort of unplanned project trajectories that we often see.

e) The developers have the power to take decisions

Many organizations claim that their biggest asset is their people; however, when it comes to actually proving that their processes are people oriented, it most often turns out that the power to take decisions is in the hands of a very few in upper management. If the people do not have the power to take decisions even in their own area of competency, the technical environment, then how is this “people oriented process” really people oriented?

Ron Jeffries summarizes the need to empower the team in an article called “Making the date”, in which he emphasizes that if the developers are responsible for making the deadline, then they should have the power to do so; otherwise the responsibility for not making a deadline is by no means theirs:

If the primary issue is to "make the date", and in my experience it usually is, then whoever has that responsibility needs to have the authority to apply people and resources, and to set the detailed objectives and goals. Unless we plan to give our developers the authority to hire and fire, to buy computers, to bring in contractors, we'd better not imagine that they can be responsible for the date. Unless we plan to give our developers the authority to decide which features will be delivered on the date and which ones will be deferred until later, we'd better not suppose that they can be responsible for the date. They can't: they don't have enough authority to steer.

To deliver the best possible combination of features by a given date, there must be control over the resources, and over the feature list. There's no way out of this. If you go to the store with a huge shopping list and twenty dollars, you need the authority to go to the money machine for more cash, or the authority to make changes to the list. And shopping is a lot easier than software development.

Agile processes emphasize that the programmers and the technical staff must take ALL the technical decisions. Management in agile thinking is done in a very different manner than in traditional methodologies, shifting from the command and control model to a self organizing one.

The developers take a very active part in planning, designing, implementing and testing the software they produce. Agile thinking emphasizes that, when planning, the estimates are made by those actually doing the work. This is the most important aspect of giving power to the people: being able to estimate how long their own work will take. It is a tremendously motivating tool.