Throughout this chapter I will try to show briefly how the software industry went from one extreme to the other in the processes used to develop software. If at first chaos was the main problem, the industry then swung straight to the other extreme, adopting extremely heavyweight "waterfall" processes, under which software was very hard to develop because there were too many rules. As an example, at the beginning of Mary and Tom Poppendieck’s book “Lean Software Development: An Agile Toolkit” the following story is told:
Jim Johnson, chairman of the Standish Group, told an attentive audience the story of how Florida and Minnesota each developed its Statewide Automated Child Welfare Information System (SACWIS). In Florida, system development started in 1990 and was estimated to take 8 years and to cost $32 million. As Johnson spoke in 2002, Florida had spent $170 million and the system was estimated to be completed in 2005 at the cost of $230 million. Meanwhile, Minnesota began developing essentially the same system in
1999 and completed it in early 2000 at the cost of $1.1 million. That's a productivity difference of over 200:1. Johnson credited Minnesota's success to a standardized infrastructure, minimized requirements, and a team of eight capable people.
In many organizations, the way projects are organized and managed largely determines the end result. A lack of good organization is a “cancer” in any business, ending sooner or later in failure. So what can we do as software developers?
History
At the beginning of the software development industry, due to a lack of guidance and maturity, most projects were developed without any clear process. The difference was made by the people who developed the projects and the decisions they took, without any outside guidance.
As the software industry grew, the number and complexity of projects increased, and soon enough the need for a software process to follow started to emerge. Many projects were developed in completely chaotic conditions, and most of them ended up as failures. Project managers wanted more visibility into their projects, and this need for predictability led them to look at other engineering disciplines and adapt management practices from there.
At the time, most other industries had already been using predictable processes for decades, in which everything was carefully planned from the beginning, then designed, implemented, and tested before being deployed.
In 1970, Winston Royce published a paper describing what came to be known as the “waterfall model”, in which a project is developed in several stages in a strict order, no stage starting before the previous one is complete. The waterfall model was soon adopted by many software organizations, and it was a massive step forward from the chaos that had characterized the software industry.
The waterfall model
The waterfall model presumes four consecutive phases:
1. requirements & analysis,
2. design,
3. implementation, and
4. testing.
The process takes its followers from one phase to the next sequentially; each phase must be finished before the next can start.
The first phase involves gathering the requirements from the customers and is finished only after these requirements are analyzed and approved by the client, usually by signing off a requirements specification document that is carved in stone before the next phase, designing the software, can start. In the design phase the whole system is designed and architected down to the smallest detail before any code is written. This approach is often called big design up front, and it aims to find all the risks involved and solve all the problems before moving on. The design is a very detailed plan telling the programmers how the requirements should be implemented.
Following the detailed design, the programmers implement the software. Usually the implementation is split into many different components that are integrated after they have been built. When all the components have been built and integrated to form a whole, the next phase can start: testing and debugging.
The resulting system is then thoroughly tested, and the bugs found are fixed so that the software can be released. When this phase ends, the software is ready and can be shipped and/or deployed to the client.
By the early ’80s, many software developers had realized that waterfall-based processes have some very serious problems, especially on projects where predictability is hard to achieve and many changes are needed late in development, when they are extremely expensive.
Risks and smells associated with the waterfall model
The waterfall model seems very good, predictive and easy to grasp and follow, but practice has proved it has some serious risks associated with it. The fundamental idea behind waterfall is that finding flaws earlier in the process is more economical and thus less risky. Finding a design flaw during the design phase is the least expensive option, while finding it in a later phase such as implementation or testing can generate serious problems and derail the project. In practice, however, finding the flaws up front is very hard, even for very experienced designers such as Martin Fowler, who said:
Even skilled designers, such as I consider myself to be, are often surprised when we turn such designs into software
[Martin Fowler, 2005].
Big design up front is driven by the belief that risks can be hunted down and eliminated early, and a very good design produced. But …
…the designs produced, can in many cases look very good on paper, or as UML designs, but be seriously flawed when you actually have to program the thing
[Martin Fowler, 2005].
The most serious risk of the waterfall model is that it is very change averse. Since it is based on the presumption that everything can be predicted, changes are very hard to make once the process has started, and extremely costly in the later phases.
In many cases there is also a lot of overhead associated with the process. Lots of documentation is usually produced: to plan, to track, to demonstrate progress, to test, and so on. Producing the documents can become a purpose in itself, with people creating very professional documents, very detailed plans, and eye-catching charts that have a lot of time invested in them but have no direct business value for the client, or whose value is below their cost.
In any business process there are documents, reports, and charts that genuinely help with organizing and managing, but it is also very important to constantly keep an eye on the amount of documentation, the time spent in meetings, and, basically, anything else that neither delivers direct value to the client nor keeps the process able to deliver running, tested features.
Being fundamentally a sequential process, which presumes that one phase must be completely finished before the next can be started, the waterfall model propagates delays from one phase to another, every delay adding to the overall length of the project.
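To make that point concrete, here is a minimal sketch in Python (not from any of the books quoted here; the phase durations and slippages are invented purely for illustration) of how, in a strictly sequential plan, every slipped phase pushes the final delivery date back by the full amount of the slip:

    # Toy model of a strictly sequential (waterfall-style) schedule.
    # Phase names follow the four phases described above; the durations
    # and delays are hypothetical numbers, in weeks.
    phases = [
        ("requirements & analysis", 8),
        ("design", 12),
        ("implementation", 20),
        ("testing", 10),
    ]
    delays = {"design": 4, "implementation": 6}  # assumed slippages

    planned_end = sum(duration for _, duration in phases)
    actual_end = 0
    for name, duration in phases:
        # A phase cannot start before the previous one finishes,
        # so its own delay simply shifts everything that follows.
        actual_end += duration + delays.get(name, 0)
        print(f"{name:25s} finishes in week {actual_end}")

    print(f"planned {planned_end} weeks, actual {actual_end} weeks, "
          f"slip {actual_end - planned_end} weeks")

Running it shows the project finishing in week 60 instead of week 50: the two slips of 4 and 6 weeks simply add up, because no later phase can absorb them by starting earlier.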
Although the risks had been visible since the late ’70s and early ’80s, in the mid ’80s the US Department of Defense adopted waterfall processes as its standard for developing software. The standard was later extended to NATO, and most Western European armies adopted it as well. With the military making waterfall a standard, the entire industry was influenced, and the waterfall model reached new heights in adoption and promotion.
The need to evolve
In the early ’90s, a number of developers started, independently of each other, to develop new processes as a response both to chaos and to the flaws of waterfall processes.
In their quest to find an answer to the problems associated with chaos and with the heavyweight processes used throughout the industry, these developers started to question the very foundation of the waterfall model: its inspiration from other industries. Martin Fowler, one of the most respected authors in the programming field, wrote in an article called “The New Methodology” []:
When you build a bridge, the cost of the design effort is about 10% of the job, with the rest being construction. In software the amount of time spent in coding is much, much less. McConnell suggests that for a large project, only 15% of the project is code and unit test, an almost perfect reversal of the bridge building ratios. Even if you lump all testing as part of the construction, the design is still 50% of the work. This raises a very important question about the nature of design in software compared to its role in other branches of engineering.
On the other hand, Craig Larman says that software development is more like new product development, where there are lots of uncertainties, than like product manufacturing, which is much more straightforward because it means doing something that has been done before.
Mary and Tom Poppendieck continue the idea that software is a development activity, very different from production in other industries, and make the following comparison between development and production:
Think of development as creating a recipe and production as following the recipe. … Developing a recipe is a learning process involving trial and error. You would not expect an expert chef's first attempt at a new dish to be the last attempt. In fact, the whole idea of developing a recipe is to try many variations on a theme and discover the best dish
The idea that software development is, by its nature, a continuous learning and adapting process rather than a predictive one sits at the base of many of the new processes that started to be developed in the early ’90s and which, in 2001, were named agile.
On the other hand, some people coming from manufacturing showed that it was not the inspiration from other industries that was wrong, but the fact that the software industry did not notice that those industries had dropped waterfall-based processes 10-20 years earlier, in favor of concurrent and lean development. Mary Poppendieck states:
I had been out of the software development industry for a half dozen years, and I was appalled at what I found when I returned. Between PMI (Project Management Institute) and CMM (Capability Maturity Model) certification programs, a heavy emphasis on process definition and detailed, front-end planning seemed to dominate everyone's perception of best practices. Worse, the justification for these approaches was the lean manufacturing movement I knew so well.
I was keenly aware that the success of lean manufacturing rested on a deep understanding of what creates value, why rapid flow is essential, and how to release the brainpower of the people doing the work. In the prevailing focus on process and planning I detected a devaluation of these key principles. I heard, for example, that detailed process definitions were needed so that "anyone can program," while lean manufacturing focused on building skill in frontline people and having them define their own processes.
I heard that spending a lot of time and getting the requirements right upfront was the way to do things "right the first time." I found this curious. I knew that the only way that my code would work the first time I tried to control a machine was to build a complete simulation program and test the code to death. I knew that every product that was delivered to our plant came with a complete set of tests, and "right the first time" meant passing each test every step of the way. You could be sure that next month a new gizmo or tape length would be needed by marketing, so the idea of freezing a product configuration before manufacturing was simply unheard of.
Even worse, it was shown that Royce’s 1970 paper, where the waterfall model was first described, actually presented the model in order to point out its risks, and the author urged software developers to use a more iterative approach, which he considered better. Reading that paper as a recommendation of waterfall is perhaps the biggest misunderstanding in software development.
Conclusion
Clearly, good software cannot be developed over the long term in chaotic conditions; at the same time, waterfall processes are too heavyweight and in many cases overkill. The business world we now live in, where dramatic changes happen very fast in the market, requires that software development come up with a new, evolved approach: one where software can be delivered faster, on less stable requirements, and where the direction can be changed mid-way if needed.
Agile methodologies have the same goal as all the other processes: producing software. However, the problems they aim to solve are very different from those addressed by traditional processes: fast delivery in changing, unpredictable conditions, and adapting to change rather than trying to predict the future.
http://en.wikipedia.org/wiki/Waterfall_process
http://www.martinfowler.com/articles/newMethodology.html
Friday, March 21, 2008
The agile mini book - a 6 week series
About 2 years ago, I spoke with someone about writing an agile mini book. I wrote the book, but things changed and it wasn't published. Now I've decided to publish it chapter by chapter, week by week, so for the next 6 weeks this is what you can expect:
CH1: Problems and causes
CH2: Agile methodologies
2.1 Introduction
2.2 Agile manifesto and principles
2.3 Agile methodologies, practices, properties and tools
2.4 XP
2.5 SCRUM
2.6 Crystal Clear
2.7 Lean Software Development
2.8 AMDD
2.9 DSDM
2.10 FDD
2.11 Adaptive Software Development
CH3: Communicating
3.1 Introduction
3.2 Communicating with the customer
3.2.1 Requirements
3.2.2 Set based development
3.2.3 Feedback
3.2.4 Showing progress
3.3 Communicating inside the team
3.4 Improving communication: quality vs quantity
3.5 Collaborating
CH4: Learning and adapting
4.1 Introduction
4.2 Circle of life - learning and adapting
4.3 Reflective improvement
CH5: Managing and organizing
5.1 A small agile process practice sample
5.2 Iterative and Incremental process
5.3 Adaptive planning strategy
5.4 Evolutionary design strategy
5.5 Fast delivery strategy
5.6 People first strategy
CH6: Quality and testing
6.1 Internal and external quality of a system
6.2 Automated tests and manual testing
6.3 Test Driven Development
I hope you'll enjoy reading the 80 pages. Each chapter will also be available as a free PDF download.
Thanks,
Dan