Software Development Life Cycle Models with Waterfall Model 2018
Every program, and every piece of software, has a life cycle. It doesn’t matter how large or small the program is, or how many people are working on the project—all programs go through the same steps:
1. Conception
2. Requirements gathering and analysis
3. Design
4. Coding and debugging
5. Testing
6. Release
7. Maintenance
Your program may compress some of these steps, or combine two or more steps into a single piece of work, but all programs go through all steps of the life cycle. In this blog, we explore the top software development life cycle models, starting with the waterfall model.
Although every program has a life cycle, many different process variations encompass these steps. Every development process, however, is a variation on two fundamental types. In the first type, the project team will generally do a complete life cycle—at least steps 2 through 7—before they go back and start on the next version of the product.
In the second type, which is more prevalent now, the project team will generally do a partial life cycle—usually steps 3 through 5—and iterate through those steps several times before proceeding to the release step.
These two general process types can be implemented using two classes of project management models. These are traditional plan-driven models and the newer agile development models.
In plan-driven models, the methodology tends to be stricter in terms of process steps and when releases happen. Plan-driven models have more clearly defined phases, and more requirements for sign-off on completion of a phase before moving on to the next phase.
Plan-driven models require more documentation at each phase and verification of completion of each work product. These tend to work well for large contracts for new software with well-defined deliverables.
The agile models are inherently incremental and make the assumption that small, frequent releases produce a more robust product than larger, less frequent ones. Phases in agile models tend to blur together more than in plan-driven models, and there tends to be less documentation of work products required, the basic idea being that code is what is being produced, so developer efforts should focus there.
See the Agile Manifesto web page at http://agilemanifesto.org to get a good feel for the agile development model and goals.
This blog takes a look at several software lifecycle models, both plan-driven and agile, and compares them. There is no one best process for developing software.
Each project must decide on the model that works best for its particular application and base that decision on the project domain, the size of the project, the experience of the team, and the timeline of the project. But first, we have to look at the four factors, or variables, that all software development projects have in common.
The Four Variables
The four variables of software development projects are as follows:
1. Cost is probably the most constrained; you can’t spend your way to quality or to an on-time schedule, and as a developer, you have very limited control over cost.
Cost can influence the size of the team or, less often, the types of tools available to the team. For small companies and startups, cost also influences the environment where the developers will work.
2. Time is your delivery schedule and is unfortunately many times imposed on you from the outside. For example, most consumer products (be they hardware or software) will have a delivery date somewhere between August and October in order to hit the holiday buying season. You can’t move Christmas.
If you’re late, the only way to fix your problem is to drop features or accept lower quality, neither of which is pretty. Time is also where Brooks’s law gets invoked: adding programmers to a late project just makes it later.
3. Quality is the number and severity of defects you’re willing to release with. You can make short-term gains in delivery schedules by sacrificing quality, but the cost is enormous: it will take more time to fix the next release, and your credibility is pretty well shot.
4. Features (also called scope) are what the product actually does. This is what developers should always focus on. It’s the most important of the variables from the customer’s perspective and is also the one you as a developer have the most control over.
Controlling scope allows you to provide managers and customers control over quality, time, and cost. If the developers don’t have control over the feature set for each release, then they are likely to blow the schedule. This is why developers should do the estimates for software work products.
A Model That’s Not a Model at All: Code and Fix
The first model of software development we’ll talk about isn’t really a model at all. But it is what most of us do when we’re working on small projects by ourselves, or maybe with a single partner. It’s the code and fix model.
The code and fix model is often used in lieu of actual project management. In this model, there are no formal requirements, no required documentation, and no quality assurance or formal testing, and release is haphazard at best. Don’t even think about effort estimates or schedules when using this model.
Code and fix says: take a minimal amount of time to understand the problem and then start coding. Compile your code and try it out. If it doesn’t work, fix the first problem you see and try again. Continue this type-compile-run-fix cycle until the program does what you want with no fatal errors, and then ship it.
Every programmer knows this model. We’ve all used it way more than once, and it actually works in certain circumstances for quick, disposable tasks. For example, it works well for proof-of-concept programs. There’s no maintenance involved, and the model works well for small, single-person programs. It is, however, a very dangerous model for any other kind of program.
With no real mention of configuration management, little in the way of testing, no architectural planning, and probably little more than a desk check of the program for a code review, this model is good for quick and dirty prototypes and really nothing more. Software created using this model will be small, short on user interface niceties, and idiosyncratic.
That said, code and fix is a terrific way to do quick and dirty prototypes and short, one-off programs. It’s useful to validate architectural decisions and to show a quick version of a user interface design. Use it to understand the larger problem you’re working on.
Cruising over the Waterfall
The first and most traditional of the plan-driven process models is the waterfall model. Described by Winston Royce in 1970, it addresses all of the standard lifecycle phases.
It progresses nicely through requirements gathering and analysis, to architectural design, detailed design, coding, debugging, integration and system testing, release, and maintenance. It requires detailed documentation at each stage, along with reviews, archiving of the documents, sign-offs at each process phase, configuration management, and close management of the entire project. It’s an exemplar of the plan-driven process.
Why the Waterfall Model Doesn’t Work
There are two fundamental and related problems with the waterfall model that hamper its acceptance and make it very difficult to implement. First, it generally requires that you finish phase N before you continue on to phase N+1.
In the simplest example, this means you must nail down all your requirements before you start your architectural design, and finish your coding and debugging before you start anything but unit testing.
In theory, this is great. You’ll have a complete set of requirements, you’ll understand exactly what the customer wants and everything the customer wants, so you can then confidently move on to designing the system.
In practice, though, this never happens. I’ve never worked on a project where all the requirements were nailed down at the beginning of the work. I’ve never seen a project where big things didn’t change somewhere during development. So, finishing one phase before the other begins is problematic.
The second problem with the waterfall is that, as stated, it has no provision for backing up. It is fundamentally based on an assembly-line mentality for developing software. The nice little diagram shows no way to go back and rework your design if you find a problem during implementation.
This is similar to the first problem above. The implications are that you really have to nail down one phase and review everything in detail before you move on.
In practice, this is just not practical. The world doesn’t work this way. You never know everything you need to know at exactly the time you need to know it. This is why software is a wicked problem.
Most organizations that implement the waterfall model modify it to have the ability to back up one or more phases so that missed requirements or bad design decisions can be fixed. This helps and generally makes the waterfall model usable, but the requirement to update all the involved documentation when you do back up makes even this version problematic.
All this being said, the waterfall is a terrific theoretical model. It isolates the different phases of the life cycle and forces you to think about what you really do need to know before you move on.
It’s also a good way to start thinking about very large projects; it gives managers a warm fuzzy because it lets them think they know what’s going on (they don’t, but that’s another story). It's also a good model for inexperienced teams working on a well-defined, new project because it leads them through the life cycle.
The best practice is to iterate and deliver incrementally, treating each iteration as a closed-end “mini-project,” including complete requirements, design, coding, integration, testing, and internal delivery. On the iteration deadline, deliver the (fully-tested, fully-integrated) system thus far to internal stakeholders. Solicit their feedback on that work, and fold that feedback into the plan for the next iteration.
Although the waterfall model is a great theoretical model, it fails to recognize that all the requirements aren’t typically known in advance and that mistakes will be made in the architectural design, detailed design, and coding. Iterative process models make this required change in process steps more explicit and create process models that build products a piece at a time.
In most iterative process models, you’ll take the known requirements—a snapshot of the requirements at some time early in the process—and prioritize them, typically based on the customer’s ranking of what features are most important to deliver first. Notice also that this is the first time we’ve got the customer involved except at the beginning of the whole development cycle.
You then pick the highest priority requirements and plan a series of iterations, where each iteration is a complete project. For each iteration, you’ll add a set of the next highest priority requirements (including some you or the customer may have discovered during the previous iteration) and repeat the project.
By doing a complete project with a subset of the requirements every time at the end of each iteration, you end up with a complete, working, and robust product, albeit with fewer features than the final product will have.
According to Tom DeMarco, these iterative processes follow one basic rule: “Your project, the whole project, has a binary deliverable. On the scheduled completion day, the project has either delivered a system that is accepted by the user, or it hasn’t. Everyone knows the result on that day.
The object of building a project model is to divide the project into component pieces, each of which has this same characteristic: each activity must be defined by a deliverable with objective completion criteria. The deliverables are demonstrably done or not done.”
So, what happens if you estimate wrong? What if you decide to include too many new features in an iteration? What if there are unexpected delays?
Well, if it looks as if you won’t make your iteration deadline, there are only two realistic alternatives: move the deadline or remove features. We’ll come back to this problem later when we talk about estimation and scheduling.
The key to iterative development is “live a balanced life—learn some and think some and draw and paint and sing and dance and play and work every day some,” or in the software development world, analyze some and design some and code some and test some every day. We’ll revisit this idea when we talk about the agile development models later in this blog.
Evolving the Iterative Model
A traditional way of implementing the iterative model is known as evolutionary prototyping. In evolutionary prototyping, one prioritizes requirements as they are received and produces a succession of increasingly feature-rich versions of the product. Each version is refined using customer feedback and the results of integration and system testing.
This is an excellent model for an environment of changing or ambiguous requirements or a poorly understood application domain. This is the model that evolved into the modern agile development processes.
Evolutionary prototyping recognizes that it’s very hard to plan the full project from the start and that feedback is a critical element of good analysis and design. It’s somewhat risky from a scheduling point of view, but when compared to any variation of the waterfall model, it has a very good track record.
Evolutionary prototyping provides improved progress visibility for both the customer and project management. It also provides good customer and end-user input to product requirements and does a good job of prioritizing those requirements.
On the downside, evolutionary prototyping leads to the danger of unrealistic schedules, budget overruns, and overly optimistic progress expectations. These can happen because the limited number of requirements implemented in a prototype can give the impression of real progress for a small amount of work.
On the flip side, putting too many requirements in a single prototype can result in schedule slippages because of overly optimistic estimation. This is a tricky balance to maintain.
Because the design evolves over time as the requirements change, there is the possibility of a bad design, unless there’s the provision of re-designing—something that becomes harder and harder to do as the project progresses and your customer is more heavily invested in a particular version of the product.
There is also the possibility of low maintainability, again because the design and code evolve as requirements change. This may lead to lots of re-work, a broken schedule, and increased difficulty in fixing bugs post-release.
Evolutionary prototyping works best with tight, experienced teams who have worked on several projects together. This type of cohesive team is productive and dexterous, able to focus on each iteration and usually producing the coherent, extensible designs that a series of prototypes requires. This model is not generally recommended for inexperienced teams.
Risk: The Problem with Plan-Driven Models
Risk is the most basic problem in software. It manifests itself in many ways: schedule slips, project cancelation, increased defect rates, misunderstanding of the business problem, false feature richness (you’ve added features the customer really doesn’t want or need), and staff turnover.
Managing risk is a very difficult and time-consuming management problem. Minimizing and handling risk are the key areas of risk management. Agile methodologies seek to minimize risk by controlling the four variables of software development.
Agile methods recognize that to minimize risk, developers need to control as many of the variables as possible, but they especially need to control the scope of the project.
Agile uses the metaphor of “learning to drive.” Learning to drive is not pointing the car in the right direction. It’s pointing the car, constantly paying attention, and making the constant minor corrections necessary to keep the car on the road.
In programming, the only constant is change. If you pay attention and cope with change as it occurs, you can keep the cost of change manageable.
Starting in the mid-1990s, a group of process mavens began advocating a new model for software development. As opposed to the heavyweight plan-driven models mentioned earlier and espoused by groups like the Software Engineering Institute (SEI) at Carnegie Mellon University, this new process model was lightweight.
It required less documentation and fewer process controls. It was targeted at small- to medium-sized software projects and smaller teams of developers. It was intended to allow these teams of developers to quickly adjust to changing requirements and customer demands, and it proposed to release completed software much more quickly than the plan-driven models. It was, in a word, agile.
Agile development works from the proposition that the goal of any software development project is working code. Because the focus is on working software, the development team should spend most of its time writing code, not writing documents. This gives these processes the name lightweight.
Lightweight methodologies have several characteristics: they tend to emphasize writing tests before code, frequent product releases, significant customer involvement in development, common code ownership, and refactoring—rewriting code to make it simpler and easier to maintain. Lightweight methodologies also suffer from several myths.
The two most pernicious are probably that lightweight processes are only good for very small projects and that you don’t have any process discipline in a lightweight project. Both of these are incorrect.
The truth is that lightweight methodologies have been successfully used in many small- and medium-sized projects—say, up to about 500,000 lines of code. They have also been used in very large projects.
These types of projects can nearly always be organized as a set of smaller projects that hang together and provide services to the single large product at the end.
Lightweight processes can be used on the smaller projects quite easily. Lightweight methodologies also require process discipline, especially at the beginning of a project when initial requirements and an iteration cycle are created, and in the test-driven-development used as the heart of the coding process.
The rest of this blog describes the agile values and principles, then looks at two lightweight/agile methodologies, eXtreme Programming (XP) and Scrum, and finally talks about an interesting variant: lean software development.
Agile Values and Principles
In early 2001 a group of experienced and innovative developers met in Snowbird, Utah to talk about the state of the software development process. All of them were dissatisfied with traditional plan-driven models and had been experimenting with new lightweight development techniques.
Out of this meeting came the Agile Manifesto. The original description proposed by the group included two parts: values (the manifesto itself) and principles. The values are as follows:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
The idea behind the manifesto is that although the authors understood the value of the latter items in each value, they preferred to think and work on the former items. Those things—individuals, working software, collaboration, and responding to change—are the most important and valuable ideas in getting a software product out the door.
The principles run as follows
1. Our highest priority is to satisfy the customer through the early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is a face-to-face conversation.
7. Working software is the primary way to measure progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective and then tunes and adjusts its behavior accordingly.
eXtreme Programming (XP)
Kent Beck and Ward Cunningham created XP around 1995. XP is a “lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to develop software.”
XP relies on the following four fundamental ideas:
Heavy customer involvement: XP requires that a customer representative be part of the development team and be on site at all times. The customer representative works with the team to define the content of each iteration of the product and also creates all the acceptance tests for each interim release.
Continuous unit testing (also known as test-driven development, or TDD): XP calls for developers to write the unit tests for any new feature before any of the code is written. The tests will, of course, initially all fail, but they give the developer a clear metric for success: when all the unit tests pass, you’ve finished implementing the feature.
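As a concrete sketch of that red-green cycle, here's a hypothetical feature (the `word_count` function and its tests are invented for illustration, not something XP prescribes) whose tests were written first and whose implementation is "done" once they pass:

```python
import unittest

# In TDD the tests below exist first and fail; this implementation
# was then written to make them pass.
def word_count(text):
    """Count whitespace-separated words in text."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

    def test_counts_whitespace_separated_words(self):
        self.assertEqual(word_count("analyze design code test"), 4)

# Run the suite; in TDD you do this after every small change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWordCount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

When all the tests in the suite pass (`result.wasSuccessful()` is true), the feature is finished; any new behavior starts with a new failing test.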
Pair programming: XP requires that all code be written by pairs of developers. In a nutshell, pair programming puts two programmers—a driver and a navigator—at a single computer. The driver actually writes the code while the navigator watches, catching typos, making suggestions, thinking about design and testing, and so on.
The pair switches places periodically (every 30 minutes or so, or when one of them thinks he has a better way of implementing a piece of code).
Pair programming works on the “two heads are better than one” theory. Although a pair of programmers isn’t quite as productive as two individual programmers when it comes to a number of lines of code written per unit of time, their code usually contains fewer defects, and they have a set of unit tests to show that it works.
This makes them more productive overall. Pair programming also provides the team an opportunity to refactor existing code—to re-design it to make it as simple as possible while still meeting the customer’s requirements.
Pair programming is not exclusive to XP, but XP was the first discipline to use it exclusively. In fact, XP uses pair programming so exclusively that no code written by just a single person is typically allowed into the product.
Short iteration cycles and frequent releases: XP typically uses release cycles in the range of just a few weeks or months, and each release is composed of several iterations, each on the order of three to five weeks. The combination of frequent releases and an on-site customer representative allows the XP team to get immediate feedback on new features and to uncover design and requirements issues early.
XP also requires constant integration and building of the product. Whenever a programming pair finishes a feature or task and it passes all their unit tests, they immediately integrate and build the entire product. They then use all the unit tests as a regression test suite to make sure the new feature hasn’t broken anything already checked in.
If it does break something, they fix it immediately. So, in an XP project, integrations and builds can happen several times a day. This process gives the team a good feel for where they are in the release cycle every day and gives the customer a completed build on which to run the acceptance tests.
The Four Basic Activities
XP describes four activities that are the bedrock of the discipline. Designing: Design while you code. “Designing is creating a structure that organizes the logic in the system. Good design organizes the logic so that a change in one part of the system doesn’t always require a change in another part of the system. Good design ensures that every piece of logic in the system has one and only one home. Good design puts the logic near the data it operates on. Good design allows the extension of the system with changes in only one place.”
Coding: The code is where the knowledge of the system resides, so it’s your main activity. The fundamental difference between plan-driven models and agile models is this emphasis on the code. In a plan-driven model, the emphasis is on producing a set of work products that together represent the entire work of the project, with the code being just one of the work products.
In agile methodologies, the code is the sole deliverable and so the emphasis is placed squarely there; in addition, by structuring the code properly and keeping comments up to date, the code becomes documentation for the project.
Testing: The tests tell you when you’re done coding. Test-driven development is crucial to the idea of managing change. XP depends heavily on writing unit tests before writing the code that they test and on using an automated testing framework to run all the unit tests whenever changes are integrated.
Listening: To your partner and to the customer. In any given software development project, there are two types of knowledge. The customer has knowledge of the business application being written and what it is supposed to do.
This is the domain knowledge of the project. The developers have knowledge of the target platform, the programming language(s), and the implementation issues. This is the technical knowledge of the project. The customer doesn’t know the technical side, and the developers don’t have the domain knowledge, so listening—on both sides—is a key activity in developing the product.
Implementing XP: The 12 Practices
We (finally) get to the implementation of XP. Here are the rules that every XP team follows during their project. The rules may vary depending on the team and the project, but in order to call yourselves an XP team, you need to do some form of these things. The practices described here draw on everything previously described: the four values, the 12 principles, and the four activities. This is really XP.
The planning game: Develop the scope of the next release by combining business priorities and technical estimates. The customer and the development team need to decide on the stories (read: features) that will be included in the next release, the priority of each story, and when the release needs to be done.
The developers are responsible for breaking the stories up into a set of tasks and for estimating the duration of each task. The sum of the durations tells the team what they really think they can get done before the release delivery date. If necessary, stories are moved out of a release if the numbers don’t add up.
Notice that estimation is the responsibility of the developers and not the customer or the manager. In XP only the developers do the estimation.
Small releases: Put a simple system into production quickly and then release new versions on a very short cycle. Each release has to make sense from a business perspective, so release size will vary.
It’s far better to plan releases in durations of a month or two rather than 6 or 12 months. The longer a release is, the harder it is to estimate.
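The planning game's arithmetic can be sketched in a few lines. The story names and day estimates below are made-up examples, and real XP estimation is a conversation between the customer and the developers, not a script, but the sketch shows how summed task estimates decide what moves out of a release:

```python
def plan_release(stories, capacity_days):
    """Greedy planning-game sketch: keep customer-prioritized stories
    whose summed developer estimates fit the capacity; move the rest out."""
    planned, moved_out, used = [], [], 0
    for name, estimate in stories:  # already sorted by customer priority
        if used + estimate <= capacity_days:
            planned.append(name)
            used += estimate
        else:
            moved_out.append(name)
    return planned, moved_out

# Hypothetical developer estimates (in days), highest priority first.
stories = [("login", 3), ("search", 5), ("reports", 8), ("export", 2)]
planned, moved_out = plan_release(stories, capacity_days=10)
```

With a 10-day capacity, "reports" doesn't fit and is moved to a later release; the team commits only to what the numbers say they can finish.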
The metaphor: “A simple shared story of how the whole system works.” The metaphor replaces your architecture. It needs to be a coherent explanation of the system that is decomposable into smaller bits—stories. Stories should always be expressed in the vocabulary of the metaphor, and the language of the metaphor should be common to both the customer and the developers.
Simple design: Keep the design as simple as you can each day, and re-design often to keep it simple.
According to Beck, a simple design (1) runs all the unit tests, (2) has no duplicated code, (3) expresses what each story means in the code, and (4) has the fewest number of classes and methods that make sense to implement the stories so far.
Testing: Programmers constantly write unit tests. Tests must all pass before integration. Beck takes the hard line that “any program feature without an automated test simply doesn’t exist.” Although this works for most acceptance tests and should certainly work for all unit tests, the rule breaks down in some instances, notably in testing the user interface in a GUI.
Even this can be made to work automatically if your test framework can handle the events generated by a GUI interaction. Beyond this, having a good set of written instructions will normally fill the bill.
Refactoring: Restructure the system “without changing its behavior” to make it simpler—removing redundancy, eliminating unnecessary layers of code, or adding flexibility. The key to refactoring is to identify areas of code that can be made simpler and to do it while you’re there. Refactoring is closely related to collective ownership and simple design.
Collective ownership gives you permission to change the code, and a simple design imposes on you the responsibility to make the change when you see it needs to be made.
Pair programming: Two programmers at one machine must write all production code in an XP project. Any code written alone is thrown away. Pair programming is a dynamic process. You may change partners as often as you change the tasks you implement.
This has the effect of reinforcing collective ownership by spreading the knowledge of the entire system around the entire team. And it avoids the “beer truck problem,” where the person who knows everything gets hit by a beer truck and thus sets the project schedule back months.
Collective ownership: The team owns everything, implying that anyone can change anything at any time. In some places this is known as “ego-less programming.” Programmers need to buy into the idea that anyone can change their code and that collective ownership extends from code to the entire project; it’s a team project, not an individual one.
Continuous integration: Integrate and build every time a task is finished, possibly several times a day (as long as the tests all pass). This helps to isolate problems in the code base; if you’re integrating a single task change, then the most likely place to look for a problem is right there.
40-hour week: Work a regular 40-hour week. Never work overtime two weeks in a row. The XP philosophy has a lot in common with many of Tom DeMarco’s Peopleware arguments. People are less productive working 60 or 70 hours a week than working 40 hours. When you’re working excessive amounts of overtime, several things happen.
Because you don’t have time to do chores and things related to your “life,” you do them during the workday. Constantly being under deadline pressure and never getting a sustained break also means you get tired and then make more mistakes, which somebody then needs to fix.
But being in control of the project and working 40 hours a week (give or take a few) leaves you with time for a life, time to relax and recharge, and time to focus on your work during the workday—making you more productive, not less.
On-site customer: A customer is part of the team, is on-site, writes and executes functional tests, and helps clarify requirements. The customer’s ability to give immediate feedback to changes in the system also increases the team’s confidence that they’re building the right system every day.
Coding standards: The team has them, follows them, and uses them to improve communication. Because of collective code ownership, the team must have coding standards and everyone must adhere to them.
Without a sensible set of coding guidelines, refactoring would take much, much longer, and developers would be less willing to change the code. Notice that I said sensible. Your coding standards should make your code easier to read and maintain; they shouldn’t constrict creativity.
Scrum software development life cycle model
The second agile methodology we’ll look at is Scrum. Scrum derives its name from rugby, where a Scrum is a means of restarting play after a rules infraction.
A scrum uses the eight forwards on a rugby team (out of 15 players in the rugby union form of the game) to attempt to (re)gain control of the ball and move it forward towards the opposing goal line.
The idea in the agile Scrum methodology is that a small team is unified around a single goal and gets together for sprints of development that move them towards that goal.
Scrum is, in fact, older than XP, with the original process management idea coming from Takeuchi and Nonaka’s 1986 paper, “The New New Product Development Game.” The first use of the term Scrum is attributed to DeGrace and Stahl’s 1990 book Wicked Problems, Righteous Solutions.
Scrum is a variation on the iterative development approach and incorporates many of the features of XP. Scrum is more of a management approach than XP and doesn’t define many of the detailed development practices (like pair programming or test-driven development) that XP does, although most Scrum projects will use these practices.
Scrum uses teams of typically no more than ten developers. Just like other agile methodologies, Scrum emphasizes the efficacy of small teams and collective ownership.
Scrum defines three roles in a development project. The first is the product owner, the person who generates the requirements for the product and prioritizes them. The requirements normally take the form of user stories—features that can be summarized by sentences like “As a <type of user>, I want to <do or create something>, so that <some value is created>.”
These user stories turn into one or more tasks that suggest how to create the feature. The product owner adds the user stories to the product backlog and prioritizes them.
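As a sketch, the story-to-backlog flow above can be modeled in a few lines of Python; the fields, the example stories, and the priority numbers are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    # "As a <type of user>, I want to <do something>, so that <value>."
    role: str
    action: str
    value: str
    priority: int = 0            # set by the product owner; higher = more valuable
    tasks: list = field(default_factory=list)  # filled in by the development team

    def summary(self):
        return f"As a {self.role}, I want to {self.action}, so that {self.value}."

# The product owner adds stories to the product backlog and prioritizes them.
product_backlog = [
    UserStory("shopper", "save items to a wish list", "I can buy them later", priority=3),
    UserStory("store admin", "export sales reports", "I can track revenue", priority=5),
]
# Sorting by descending priority points the team at the most valuable work.
product_backlog.sort(key=lambda s: s.priority, reverse=True)
print(product_backlog[0].summary())
```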
This points the development team towards the most valuable work. The product owner is also charged with making sure that the team understands the requirements behind the user stories. Once the team completes a user story, they have added value to the end product.
Scrum projects are facilitated by a Scrum master whose job it is to manage the backlogs, run the daily Scrum meetings, coach the team, and protect the team from outside influences during the sprint.
The Scrum master may or may not be a developer but they are an expert in the Scrum process and are the go-to person for questions on Scrum. The Scrum master is emphatically not the manager of the team. Scrum teams are teams of equals and arrive at decisions by consensus.
Besides the product owner and the Scrum master, everyone else involved in the project is on the development team. The development team itself is self-organizing; the members of the Scrum team decide among themselves who will work on what user stories and tasks, assume collective ownership of the project, and decide on the development process they’ll use during the sprint.
The entire team is dedicated to the goal of delivering a working product at the end of every sprint. This organization is reinforced every day at the Scrum meeting.
Scrum is characterized by the sprint, an iteration of between one and four weeks. Sprints are time-boxed in that they are of a fixed duration and the output of a sprint is what work the team can accomplish during the sprint. The delivery date for the sprint does not move out. This means that sometimes a sprint can finish early, and sometimes a sprint will finish with less functionality than was proposed. A sprint always delivers a usable product.
Scrum requirements are encapsulated in two backlogs. The product backlog is the prioritized list of all the requirements for the project; the product owner creates it. The product owner prioritizes the product backlog, and the development team breaks the high-priority user stories into tasks and estimates them. This list of tasks becomes the sprint backlog.
The sprint backlog is the prioritized list of user stories for the current sprint. Once the sprint starts, only the development team may add tasks to the sprint backlog—these are usually bugs found during testing. No outside entity may add items to the sprint backlog, only to the product backlog.
One important thing about the Scrum process is that in most Scrum teams, the sprint backlog is visual. It’s represented on a board using either Post-It notes or index cards, with one note or card per task; it may also be an online virtual board. For example, see Jira or Pivotal Tracker.
This task board always has at least three columns: ToDo, InProgress, and Done. The task board provides a visual representation of where the team is during the sprint.
At a glance, any team member can see what tasks are currently available to be picked up, which are actively being worked on, and which are finished and integrated into the product baseline. This visual task board is similar to the Kanban board that we’ll talk about later.
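A minimal sketch of such a task board, with the columns as keys and the cards as lists; the column labels and task names are invented:

```python
# A task board as a mapping from column (state) to task cards.
board = {"ToDo": ["write login page", "add search"],
         "InProgress": ["fix checkout bug"],
         "Done": []}

def move(board, task, src, dst):
    """Move a card from one column to another, as you would on the wall."""
    board[src].remove(task)
    board[dst].append(task)

move(board, "fix checkout bug", "InProgress", "Done")
move(board, "write login page", "ToDo", "InProgress")
print(board)
```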
Before the first sprint starts, Scrum has an initial planning phase that creates the product backlog of initial requirements, decides on an architecture for implementing the requirements, divides the user stories into prioritized groups for the sprints, and breaks the first set of user stories into tasks to be estimated and assigned.
The team stops adding tasks when their estimates occupy all the time allowed for the sprint. Tasks in a sprint should not be longer than one day of effort. If a task is estimated to take more than one day of effort, it is successively divided into two or more tasks until each task’s effort is the appropriate length.
This rule comes from the observation that humans are terrible at doing exact estimations of large tasks and that estimating task efforts in weeks or months is basically just a guess. So, breaking tasks down into smaller pieces gives the team a more reliable estimate for each.
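The one-day rule can be sketched as a simple recursive split; the task name, the estimate, and the mechanical halving are illustrative assumptions, since real teams split tasks along natural seams rather than exactly in half:

```python
def split_task(name, estimate_days, max_days=1.0):
    """Recursively halve a task until every piece fits within one day."""
    if estimate_days <= max_days:
        return [(name, estimate_days)]
    half = estimate_days / 2
    return (split_task(name + ".a", half, max_days) +
            split_task(name + ".b", half, max_days))

# A three-day task becomes four sub-day pieces.
print(split_task("build report screen", 3))
```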
Sprints have a daily Scrum meeting, which is a stand-up meeting of 15–30 minutes duration (the shorter, the better) where the entire team discusses sprint progress. The daily Scrum meeting allows the team to share information and track sprint progress.
By having daily Scrum meetings, any slip in the schedule or any problems in implementation are immediately obvious and can then be addressed by the team at once.
“The Scrum master ensures that everyone makes progress, records the decisions made at the meeting and tracks action items, and keeps the Scrum meetings short and focused.”
At the Scrum meeting, each team member answers the following three questions in turn:
What tasks have you finished since the last Scrum meeting?
Is anything getting in the way of your finishing your tasks?
What tasks are you planning to do between now and the next Scrum meeting?
Discussions other than responses to these three questions are deferred to other meetings. This meeting type has several effects. It allows the entire team to visualize progress towards the sprint and project completion every day. It reinforces team spirit by sharing progress—everyone can feel good about tasks completed.
And finally, the Scrum meeting verbalizes problems, which can then be solved by the entire team.
Some Scrum teams have a meeting in the middle of the sprint called story time. In story time, the team and the product owner look at the product backlog and begin prioritizing the user stories in it and breaking the high-priority stories into tasks that will become part of the next sprint’s backlog. Story time is optional but has the advantage of preparing for the planning meeting for the next sprint.
At the end of the sprint, two things happen. First, the current version of the product is released to the product owner, who may perform acceptance testing on it. This usually takes the form of a demo of the product near the end of the last day of the sprint. This meeting and demo are called the sprint review.
After the sprint review, the team will wrap up the sprint with a sprint retrospective meeting. In the sprint retrospective, the team looks back on the just completed sprint, congratulates themselves on jobs well done, and looks for areas in which they can improve performance for the next sprint. These meetings typically don’t last long (an hour or so) but are a valuable idea that brings closure to and marks the end of the current sprint.
At the start of the next sprint, another planning meeting is held where the Scrum master and the team re-prioritize the product backlog and create a backlog for the new sprint.
With most Scrum teams, estimates of tasks become better as the project progresses, primarily because the team now has data on how they’ve done estimating on previous sprints.
This effect in Scrum is called velocity; the productivity of the team can actually increase during the project as they gel as a team and get better at estimating tasks. This planning meeting is also where the organization can decide whether the project is finished—or whether to finish the project at all.
After the last scheduled development sprint, a final sprint may be done to bring further closure to the project. This sprint implements no new functionality but prepares the final deliverable for product release. It fixes any remaining bugs, finishes documentation, and generally productizes the code.
Any requirements left in the product backlog are transferred to the next release. A Scrum retrospective is held before the next sprint begins to ponder the previous sprint and see whether any process improvements can be made. Scrum is a project management methodology and is typically silent on development processes.
Despite this, Scrum teams typically use many of the practices described earlier in the XP practices section. Common code ownership, pair programming, small releases, simple design, test-driven development, continuous integration, and coding standards are all common practices in Scrum projects.
Lean Software Development
Lean software development is not really an agile methodology. It’s more of a philosophy from which most agile methodologies, like XP and Scrum, draw guidance and inspiration.
Lean software development comes from the just-in-time manufacturing processes that were introduced in Japan in the 1970s and then made their way around the world in the 1980s and 1990s, encouraged by the 1990 publication of The Machine That Changed the World by Womack et al. Just-in-time manufacturing evolved first into lean manufacturing and then into lean product management systems throughout the 1990s.
The publication of Poppendieck & Poppendieck’s Lean Software Development: An Agile Toolkit in 2003 marked the movement of lean into the agile development community.
Lean software development is a set of principles designed to improve productivity, quality, and customer satisfaction. Lean wants to eliminate from your process anything that doesn’t add value to the product. These non-value-adding parts of your process are called waste. Lean also emphasizes that the team should only be working on activities that add value to the product right now.
The Poppendiecks transformed the lean principles that started at Toyota into seven key principles for software development:
1. Eliminate Waste
2. Build Quality In
3. Create Knowledge
4. Defer Commitment
5. Deliver Fast
6. Respect People
7. Optimize the Whole
We’ll go through each of these principles briefly to illustrate how they apply to software development and how agile methodologies make use of them.
Principle 1 Eliminate Waste
Lean software development wants you to eliminate anything that doesn’t add value to the product. Things that don’t add value are, as mentioned, waste. Obviously, in any kind of production or development environment you want to eliminate waste. Waste costs you money and time. The question here, in a software development project, is “What is waste?”
Some things are obviously wasteful: too many meetings (some will say meetings of any kind), too much documentation, unnecessary features, and unclear or rapidly changing requirements. But there are others: partially written code, code for “future features” that the customer may never use, defects in your code and other quality issues, excessive task switching (you as a developer are assigned several tasks and have to keep switching between them), and too many features or tasks for a given iteration or release.
All these things constitute waste, and in a lean environment, the team’s job is to minimize waste in all forms. Only by doing that can the team increase productivity and deliver working code fast.
The next question is “How do we eliminate waste?” A general way to focus on waste in a project and work towards eliminating all forms of it is to consider the team’s development process and how it’s working. In Scrum, this happens in the end-of-sprint retrospective. During the retrospective, the team looks back on the just-finished sprint and asks “What can we improve?”
A lean team will turn that question into “What was wasteful and how can we change to be less wasteful next time?” Teams that make these little improvements in their process at the end of every sprint, focusing on one or two items to change each time, will learn more and more about what works and what doesn’t in their process and are on the way to continuous process improvement.
Principle 2 Build Quality In
Quality issues are the bane of every software developer. Defects in code, testing your code more than once, logging defects, fixing defects, and re-testing all result in waste you’d like to eliminate. Agile processes are particularly good at removing this type of waste in order to build quality software.
Nearly all agile teams implement two techniques that improve code quality at the source: pair programming and test-driven development (TDD). Both of these techniques allow developers to write, test, and fix code quickly, before the code is integrated into the product code base, where defects become harder to find and fix.
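A miniature, hypothetical example of the test-first idea: the checks exist before the function does, and pass once it is implemented (the cart_total function and its values are invented for illustration):

```python
# TDD in miniature: the tests are written first. They fail until
# cart_total is implemented, then pass and stay as a safety net.
def test_cart_total():
    assert cart_total([]) == 0.0                          # empty cart
    assert cart_total([10.0, 5.0]) == 15.0                # plain sum
    assert cart_total([10.0, 5.0], discount=0.2) == 12.0  # 20% off

def cart_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount."""
    return round(sum(prices) * (1 - discount), 2)

test_cart_total()  # passes once the implementation is in place
```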
Integrating new features as soon as they’re done gives the testing team a new version of the product to test as quickly as possible and shortens the amount of time between code creation and testing.
Another technique to improve quality in XP teams is constant feedback. In XP, because the customer is part of the team, they can evaluate new iterations of the product constantly, giving the developers instant feedback on what’s working and what’s not.
There is one other relatively painless, but probably heretical, thing you as a developer can do that will add to the quality of your code: don’t log defects in a defect tracking system. But wait! How are you going to document that there’s a defect to be fixed later? The answer is you don’t. You fix it now. As soon as you find it. That builds quality in and eliminates waste at the same time.
Principle 3 Create Knowledge
It seems obvious that, as your team works through requirements, creates a design, and implements the code that will become a product, you are creating knowledge. However, another way to describe this lean principle is to say, “The team must learn new things constantly.” Learning new things is what’s happening as you work through a project.
You learn by working to understand requirements. You learn by beginning with an initial design and realizing that the design will change as the requirements do. You learn that the detailed design evolves and isn’t truly finished until you have code written.
And you learn, by implementing new requirements and fixing defects, that you can make the code simpler by refactoring as often as you can. Thus you’re creating knowledge that’s embodied in the code you produce and ship.
Principle 4 Defer Commitment
This lean principle really dates back to the early 1970s and the advent of top-down structured design. Defer Commitment means putting off decisions (particularly irreversible ones) as long as you can and only making them when you must. In top-down design, you start with a general formulation of the problem solution and push decisions about implementation down as you make the design and the code more detailed.
This gives you more flexibility at the upper levels and pushes the commitment to a particular design or piece of code down until you have no alternative but to write it. This is why you write libraries and APIs, so at any particular level of the code, you can use something at a lower level without needing to know the implementation details. At that lower level, you’ll hopefully know more about what needs to be done, and the code will write itself.
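One common way to defer such commitments is to code against an interface and supply a cheap, reversible stand-in; every name in this sketch (Storage, InMemoryStorage) is invented for illustration:

```python
from abc import ABC, abstractmethod

# Callers depend only on the Storage interface; the concrete choice
# (files? a database?) is deferred until it actually has to be made.
class Storage(ABC):
    @abstractmethod
    def save(self, key, value): ...
    @abstractmethod
    def load(self, key): ...

class InMemoryStorage(Storage):
    """A reversible stand-in used until the real decision must be made."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data.get(key)

store = InMemoryStorage()
store.save("answer", 42)
print(store.load("answer"))  # 42
```

Swapping in a real backend later means writing one new Storage subclass; no caller changes.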
This principle also means that you shouldn’t put decisions off too long, which risks delaying the rest of the team and making your design less flexible. Also, try to make as few irreversible decisions as you can, to give yourself as much flexibility as possible at all levels of your code.
Principle 5 Deliver Fast
Well, this seems obvious too. Particularly in the age of the Internet, mobile applications, and the Internet of Things, it seems that faster must be better. And that’s true. Companies that can bring quality products to market faster will have a competitive edge. “First to market” players will gain a larger market share initially and if they continue to release products quickly can maintain that edge over time.
Additionally, if your team minimizes the amount of time between when the customer or product owner generates the requirements and when you deliver a product that meets those requirements, there is less time for the requirements—and the market—to change.
How do you deliver fast? Well, first you should adhere to other lean principles, especially Eliminate Waste and Build Quality In. Both of these will improve productivity and allow you to deliver product iterations faster.
But there is more. You should keep things simple. This means keep the requirements simple. Don’t add too many features and don’t spend time planning on future features. Don’t over-engineer the solution.
Find a reasonable solution, a reasonable set of data structures, and reasonable algorithms to implement your solution. Remember that the perfect is the enemy of the good—and the fast. Finally, the best way to Deliver Fast is to have an experienced, well-integrated, cooperative, self-organizing, loyal-to-each-other team with the right skill set for your product. Nothing will help more than a good team.
Principle 6 Respect People
Respecting people is all about building strong, productive teams. It’s based on the idea that the people doing the work should make the decisions. Process and product-creation decisions shouldn’t be imposed from above—they should be generated from the trenches. From the manager’s perspective, respecting your team means empowering the team to make their own decisions, including about task time estimates and decomposition, processes, tools, and design.
This empowerment means that the manager must learn to listen to their team and take their ideas and concerns into consideration. It means that the manager must act as a shield for their team so that they can get the job done.
Everyone on the team is enjoined to create an environment where everyone is able to speak their mind and disagreements are resolved with respect for each other. This creates a team with open communication and decision-making transparency.
Principle 7 Optimize the Whole
“A lean organization optimizes the whole value stream, from the time it receives an order to address a customer need until the software is deployed and the need is addressed. If an organization focuses on optimizing something less than the entire value stream, we can just about guarantee that the overall value stream will suffer.”
The main idea behind Optimize the Whole is to keep the entire product picture in sight as you develop. Agile organizations do this by having strong, multi-disciplinary teams that are co-located and that contain all the skills and product creation functions they need to deliver a product that meets the customer’s needs with little reference to another team.
Kanban software development life cycle model
The Kanban method is a practice derived from lean manufacturing (the Toyota Production System) and other change-management systems; it draws most of its original ideas from just-in-time manufacturing processes. Just like lean software development, Kanban isn’t really a process. Rather, it’s an objective and a set of principles and practices to meet that objective.
Kanban uses three ideas to influence a development process: work-in-progress (WIP), flow, and lead time. Work-in-progress is the total number of tasks the team is currently working on, including all the states in which a task may find itself (in-progress, done, testing, review, and so on). Flow is the passage of tasks from one state to another on the way to completion. Lead time is the amount of time it takes a task to move from its initial state to its completed state.
The Kanban board, WIP, and Flow
Work-in-progress and flow are illustrated visually in Kanban via the use of a Kanban board. A Kanban board will look familiar to most agile practitioners because it’s a variant on the Scrum task board. The figure shows a generic task/Kanban board. These boards are often physical whiteboards that occupy one wall of a common space for the team.
On this board, the team will either write in tasks or use (in Scrum) index cards or post-it notes to identify tasks. This makes the tasks easy to move from column to column (state to state) on the board. With this type of board, the user has the option of changing the number and headings of the columns in order to make the board fit their process. When Scrum and Kanban are applied to software development, the board might start out looking more like Figure
If the team is using a Kanban board, they’ll add the maximum number of tasks that are allowed in each of the first four columns. This maximum work-in-progress is used to control the flow of work through the team.
In Figure, the maximum number of Work-in-Progress tasks is ten. This means the team can’t work on more than ten tasks at a time, as denoted in each column. So, for example, say that every state on the Kanban board is maxed-out (there are five tasks in development, two in a review, and three in testing) and a developer finishes a task in the Develop state.
That task cannot move to the Review state until one of the two tasks currently under review is finished. We’ve just exposed a bottleneck in the flow. In Kanban, what the developer would do is jump in to help either review one of the two tasks in that state or test a task in the Test state. No new tasks can be pulled into the Develop state until there is room for the finished task downstream.
Let’s further say that at some later time there are only three tasks in Develop, one in Review, and three in Test, so the team isn’t working at maximum. If a developer is available, that developer will select a task from the ToDo state, pull it into Develop, and begin working. If a developer finishes a task in Develop, that task can then flow into Review, and the developer can then pull a task from the ToDo list into Develop and begin work.
Thus, with Kanban, the objective is to maximize Work-in-Progress within the constraint of the maximum number of tasks allowable at any time. There is no time-boxing as in Scrum; the goal is to move tasks through as quickly as possible—have the shortest lead time—while maximizing productivity and working up to the maximum number of tasks allowable.
Note that when I say maximizing Work-in-Progress, I don’t mean giving the team more work. In fact, the goal in keeping the maximum number of tasks in each state low is to reduce the team’s workload and allow them to focus on a smaller number of tasks (to avoid multi-tasking). The idea is that this will reduce stress and increase quality and productivity.
Kanban also works by using a pull system. Instead of pushing items out to other teams, Kanban tells developers to pull items from their ToDo state as they have the capacity to handle them. You’ll notice that there are some things missing here, notably how things get on the ToDo list, and what Done means. That’s because Kanban isn’t really a project management technique. In order to use Kanban, you need to have some process in place already and you need to manage it.
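The pull system and WIP limits described above might be sketched like this; the column names, limits, and task names are illustrative, not prescriptive:

```python
# Each limited column has a maximum WIP; a pull into a full column is
# blocked, which is exactly the bottleneck signal Kanban wants visible.
LIMITS = {"Develop": 5, "Review": 2, "Test": 3}
board = {"ToDo": ["t1", "t2", "t3"], "Develop": [], "Review": [],
         "Test": [], "Done": []}

def pull(board, src, dst):
    """Pull the oldest task from src into dst, unless dst is at its WIP limit."""
    if dst in LIMITS and len(board[dst]) >= LIMITS[dst]:
        return None  # bottleneck: go help downstream instead of starting new work
    task = board[src].pop(0)
    board[dst].append(task)
    return task

pull(board, "ToDo", "Develop")    # "t1" starts development
board["Review"] = ["r1", "r2"]    # Review is now at its limit of 2
pull(board, "Develop", "Review")  # returns None: blocked until a review finishes
```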
Probably the biggest difference between using Kanban and Scrum is time-boxing. Scrum uses time-boxed sprints and measures the team’s productivity by using velocity (the number of task points finished per sprint) as its metric.
This works well for Scrum and gives the team an idea of how well they’re performing at the end of every sprint. Over time the team’s average velocity is used as a predictor to decide how many tasks to include in each sprint.
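As a quick sketch, the velocity prediction is just an average over past sprints; the sprint history numbers here are invented:

```python
# Task points finished in each completed sprint (illustrative history).
past_velocities = [21, 25, 23, 27]

def predicted_capacity(velocities):
    """The team's average velocity, the usual yardstick for sprint planning."""
    return sum(velocities) / len(velocities)

print(predicted_capacity(past_velocities))  # 24.0
```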
Because Kanban isn’t time-boxed, it uses a different productivity metric: lead time. Lead time is the amount of time it takes for the team to get one task from the initial ToDo (or Waiting, or Queued) state to the Done (or Integrated, or Released) state. Lead time can be measured in hours or days; it answers the question “From this point, how long will it take this task to get to Done?”
When a task enters the ToDo state, that date—called the entry date—is marked on the index card. As the task moves from state to state, the date the move happens is also marked on the card until finally the task is complete and it moves to the Done state. That final date is the done date. The difference between the done date and the entry date is the lead time for that task.
Over time, as the team finishes more and more tasks, you can compute the average lead time for the team and answer the question just posed. Note that the lead time is realistic because it also includes the time when the task is in some queue waiting to be worked on.
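The lead-time bookkeeping just described amounts to a date subtraction per card; the cards and dates in this sketch are made up:

```python
from datetime import date

# Each finished card records its entry date and done date; lead time is
# the difference, queue time included.
cards = [
    {"task": "login form", "entry": date(2018, 3, 1), "done": date(2018, 3, 6)},
    {"task": "search api", "entry": date(2018, 3, 2), "done": date(2018, 3, 9)},
]

def lead_time_days(card):
    return (card["done"] - card["entry"]).days

average_lead_time = sum(lead_time_days(c) for c in cards) / len(cards)
print(average_lead_time)  # 6.0
```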
Kanban is not so different from Scrum, and elements of Kanban are typically used in projects that implement Scrum. In Scrum, at the end of every sprint, you have working software with a particular set of features available. In Kanban, at the end of every task completion, you have working software with a new feature added.
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. —C. A. R. Hoare
One way to look at software problems is with a model that divides them into two different layers:
“Wicked” problems fall in the upper layer. These are problems that typically come from domains outside of computer science (such as biology, business, meteorology, sociology, political science, and so on).
These types of problems tend to be open-ended, ill-defined, and large in the sense that they require much work. For example, pretty much any kind of web commerce application is a wicked problem. Horst W. J. Rittel and Melvin M. Webber, in a 1973 paper on social policy, gave a definition for and a set of characteristics used to recognize a wicked problem that we’ll look at later in this blog.
“Tame” problems fall into the lower layer. These problems tend to cut across other problem domains and tend to be better defined and small. Sorting and searching are great examples of tame problems. Small and well defined don’t mean “easy,” however. Tame problems can be very complicated and difficult to solve.
It’s just that they’re clearly defined and you know when you have a solution. These are the kinds of problems that provide computer scientists with foundations in terms of data structures and algorithms for the wicked problems we solve from other problem domains.
According to Rittel and Webber, a wicked problem is one for which the requirements are completely known only after the problem is solved, or for which the requirements and solution evolve over time.
It turns out this describes most of the “interesting” problems in software development. Recently, Jeff Conklin has revised Rittel and Webber’s description of a wicked problem and provided a more succinct list of the characteristics of wicked problems. To paraphrase:
1. A wicked problem is not understood until after the creation of a solution. Another way of saying this is that the problem is defined and solved at the same time.
2. Wicked problems have no stopping rule. You can create incremental solutions to the problem, but there’s nothing that tells you that you’ve found the correct and final solution.
3. Solutions to wicked problems are not right or wrong. They are better or worse, or good-enough or not-good-enough.
4. Every wicked problem is essentially novel and unique. Because of the “wickedness” of the problem, even if you have a similar problem next week, you basically have to start over again because the requirements will be different enough and the solution will still be elusive.
5. Every solution to a wicked problem is a one-shot operation. See the preceding point.
6. Wicked problems have no given alternative solutions. There is no small, finite set of solutions to choose from.
Wicked problems crop up all over the place. For example, creating a word processing program is a wicked problem. You may think you know what a word processor needs to do—insert text, cut and paste, handle paragraphs, print, and so forth. But this list of features is only one person’s list.
As soon as you “finish” your word processor and release it, you’ll be inundated with new feature requests: spell checking, footnotes, multiple columns, support for different fonts, colors, styles, and the list goes on. The word processing program is essentially never done—at least not until you release the last version and end-of-life the product.
Word processing is actually a pretty obvious wicked problem. Others might include problems where you don’t really know if you can solve the problem at the start. Expert systems require a user interface, an inference engine, a set of rules, and a database of domain information. For a particular domain, it’s not at all certain at the beginning that you can create the rules that the inference engine will use to reach conclusions and recommendations.
So you have to iterate through different rule sets, send out the next version, and see how well it performs. Then you do it again, adding and modifying rules. You don’t really know whether the solution is correct until you’re done. Now that’s a wicked problem.
Conklin, Rittel, and Webber say that traditional cognitive studies indicate that when faced with a large, complicated (wicked) problem, most people will follow a linear problem-solving approach, working top-down from the problem to the solution.
Instead of this linear, waterfall approach, real wicked problem solvers tend to use an approach that swings from requirements analysis to solution modeling and back until the problem solution is good enough. Conklin calls this an opportunity-driven or opportunistic approach because the designers are looking for an opportunity to make progress toward the solution.
In this figure, the jagged line indicates the designer’s work moving from the problem to a solution prototype and back again, slowly evolving both the requirements understanding and the solution iteration and converging on an implementation that’s good enough to release. As an example, let’s take a quick look at a web application.
Say a nonprofit organization keeps a list of activities for youth in your home county. The list is updated regularly and is distributed to libraries around the county. Currently, the list is kept on a spreadsheet and is distributed in hard copy in a three-ring binder.
The nonprofit wants to put all its data online and make it accessible over the web. It also wants to be able to update the data via the same website. Simple, you say. It’s just a web application with an HTML front end, a database, and middleware code to update and query the database as the back end. Not a problem.
Ah, but this is really a wicked problem in disguise. First of all, the customer has no idea what they want the web page(s) to look like. So whatever you give them the first time will not be precisely what they want; the problem won’t be understood completely until you are done.
Secondly, as you develop prototypes, they will want more features—so the problem has no stopping rule. And finally, as time goes on, the nonprofit will want new features. So there is no “right” answer; there is only a “good enough” answer. Very wicked.
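The “simple” back end the text describes can be sketched in a few lines. This is a minimal illustration of the middleware layer only—query and update functions over a database—using Python’s standard-library sqlite3; the table and column names are hypothetical, invented for the example:

```python
import sqlite3

# Hypothetical schema for the nonprofit's activity list.
def init_db(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS activities (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        location TEXT,
        age_group TEXT)""")

def add_activity(conn, name, location, age_group):
    # Parameterized query: the update half of the middleware.
    conn.execute(
        "INSERT INTO activities (name, location, age_group) VALUES (?, ?, ?)",
        (name, location, age_group))

def list_activities(conn):
    # The query half: what the HTML front end would render.
    return conn.execute(
        "SELECT name, location, age_group FROM activities ORDER BY name"
    ).fetchall()

conn = sqlite3.connect(":memory:")
init_db(conn)
add_activity(conn, "Story Hour", "Main Library", "5-8")
print(list_activities(conn))
```

Of course, this sketch is exactly the trap the text warns about: the code is easy, but the wickedness lives in what the customer will ask for next.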
Conklin also provides a list of characteristics of tame problems, ones for which you can easily and reliably find a solution. “A tame problem
1. has a well-defined and stable problem statement;
2. has a definite stopping point, i.e., when the solution is reached;
3. has a solution which can be objectively evaluated as right or wrong;
4. belongs to a class of similar problems which are all solved in the same similar way;
5. has solutions which can be easily tried and abandoned; and
6. comes with a limited set of alternative solutions.”
A terrific example of a tame problem is sorting a list of data values.
The problem is easily and clearly stated—sort this list into ascending order using this function to compare data elements.
Sorting has a definite stopping point: the list is sorted. The result of a sort can be objectively evaluated (the list is either sorted correctly or it isn’t). Sorting belongs to a class of similar problems that are all solved in the same way; sorting integers is similar to sorting strings is similar to sorting database records using a key, and so on.
Sorting has solutions that can easily be tried and abandoned. Finally, sorting has a limited set of alternative solutions; comparison sorting has a set of known algorithms and a theoretical lower bound.
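The tameness is easy to see in code: the problem statement, the stopping point, and the objective evaluation are all a few lines each. A minimal sketch:

```python
# Sorting as a tame problem: well-defined input, a comparison key,
# and an objectively checkable result.
records = [("baker", 3), ("adams", 7), ("chen", 1)]

# The problem statement: sort this list into ascending order by key.
sorted_records = sorted(records, key=lambda r: r[0])

# The objective evaluation: every adjacent pair is in order, or it isn't.
def is_sorted(seq, key=lambda x: x):
    return all(key(a) <= key(b) for a, b in zip(seq, seq[1:]))

print(is_sorted(sorted_records, key=lambda r: r[0]))  # True
```

Contrast this with the web application above: there is no `is_good_enough()` predicate you can write for a wicked problem.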
What does this have to do with design principles, you ask? Well, realizing that most of the larger software problems we’ll encounter have a certain amount of “wickedness” built into them influences how we think about design issues, how we approach the design of a solution to a large, ill-formed problem, and gives us some insight into the design process.
It also lets us abandon the waterfall model with a clear conscience and pushes us to look for unifying heuristics that we can apply to design problems. In this blog, we’ll discuss overall principles for design that I’ll then expand upon in the blogs ahead.
The Design Process
Design is messy. Even if you completely understand the problem requirements (meaning it’s a tame problem), you typically have many alternatives to consider when you’re designing a software solution.
You’ll also usually make lots of mistakes before you come up with a solution that works. This gives the appearance of messiness and disorganization, but really, you’re making progress.
Design is about tradeoffs and priorities. Most software projects are time-limited, so you usually won’t be able to implement all the features that the customer wants. You have to figure out the subset that will give the customer the most high-priority features in the time you have available. So, you have to prioritize the requirements and trade off one subset for another.
Design is heuristic. For the overwhelming majority of projects, there is no set of cut-and-dried rules that say, “First we design component X using technique Y. Then we design component Z using technique W.” Software just doesn’t work that way. Software design is done using a set of ever-changing heuristics (rules of thumb) that each designer acquires over the course of a career.
Over time, good designers learn more heuristics and patterns that allow them to quickly move through the easy bits of a design and get to the heart of the wickedness of the problem. The best thing you can do is to sit at the feet of a master designer and learn the heuristics.
Designs evolve. Good designers recognize that for any problem, tame or wicked, the requirements will change over time. This will then cascade into changes in your design, so your design will evolve over time.
This is particularly true across product releases and new feature additions. The trick here is to create a software architecture that is amenable to change with limited effect on the downstream design and code.
Desirable Design Characteristics (Things Your Design Should Favor)
Regardless of the size of your project or what process you use to do your design, there are a number of desirable characteristics that every software design should have. These are the principles you should adhere to as you consider your design.
Your design doesn’t necessarily need to exhibit all of these characteristics, but having a majority of them will certainly make your software easier to write, understand, and use.
Fitness of purpose: Your design must work, and it must work correctly in the sense that it must satisfy the requirements you’ve been given within the constraints of the platform on which your software will be running. Don’t add new requirements as you go—the customer will do that for you.
Separation of concerns: Closely related to modularity, this principle says you should separate out functional pieces of your design cleanly in order to facilitate ease of maintenance and simplicity. Modularity is good.
Simplicity: Keep your design as simple as possible. This will let others understand what you’re up to. If you find a place that can be simplified, do it! If simplifying your design means adding more modules or classes to your design, that’s okay. Simplicity also applies to interfaces between modules or classes.
Simple interfaces allow others to see the data and control flow in your design. In agile methodologies, this idea of simplicity is kept in front of you all the time. Most agile techniques have a rule that says if you’re working on part of a program and you have an opportunity to simplify it (called refactoring in agile-speak), do it right then.
Keep your design and your code as simple as possible at all times.
Ease of maintenance: A simple, understandable design is amenable to change. The first kind of change you’ll encounter is fixing errors. Errors occur at all phases of the development process: requirements, analysis, design, coding, and testing. The more coherent and easy to understand your design is, the easier it will be to isolate and fix errors.
Loose coupling: When you’re separating your design into modules—or in object-oriented design, into classes—the degree to which the classes depend on each other is called coupling. Tightly coupled modules may share data or procedures. This means that a change in one module is much more likely to lead to a required change in the other module.
This increases the maintenance burden and makes the modules more likely to contain errors. Loosely coupled modules, on the other hand, are connected solely by their interfaces. Any data they both need must be passed between procedures or methods via an interface. Loosely coupled modules hide the details of how they perform operations from other modules, sharing only their interfaces.
This lightens the maintenance burden because a change to how one class is implemented will not likely affect how another class operates as long as the interface is invariant. So, changes are isolated and errors are much less likely to propagate.
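A minimal sketch of the difference, using invented class names: the report class below never touches the ledger’s internal list, only its published interface, so the ledger’s representation can change without the report noticing.

```python
class Ledger:
    def __init__(self):
        self._entries = []          # implementation detail, not shared

    def add(self, amount):
        self._entries.append(amount)

    def total(self):
        # The published interface: callers see a total, not the list.
        return sum(self._entries)

class Report:
    """Loosely coupled: depends only on Ledger's total() interface.

    Ledger could switch to keeping a running sum instead of a list,
    and Report would not need to change.
    """
    def __init__(self, ledger):
        self._ledger = ledger

    def render(self):
        return f"Total: {self._ledger.total()}"

ledger = Ledger()
ledger.add(10)
ledger.add(5)
print(Report(ledger).render())  # Total: 15
```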
High cohesion: The complement of loose coupling is high cohesion. Cohesion within a module is the degree to which the module is self-contained with regard both to the data it holds and the operations that act on the data.
A class that has high cohesion pretty much has all the data it needs to be defined within the class template, and all the operations that are allowed on the data are defined within the class as well. So, any object that’s instantiated from the class template is very independent and just communicates with other objects via its published interface.
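As an illustrative sketch (the class name is invented), a highly cohesive class keeps its data and every operation on that data in one place, so nothing else in the program needs to know how the readings are stored:

```python
class TemperatureLog:
    """Highly cohesive: all the state a temperature log needs, and all
    the operations allowed on that state, live in this one class."""

    def __init__(self):
        self._readings = []          # self-contained state

    def record(self, celsius):
        self._readings.append(celsius)

    def average(self):
        return sum(self._readings) / len(self._readings)

    def maximum(self):
        return max(self._readings)

log = TemperatureLog()
log.record(20.0)
log.record(24.0)
print(log.average(), log.maximum())  # 22.0 24.0
```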
Extensibility: An outgrowth of simplicity and loose coupling is the ability to add new features to the design easily. This is extensibility. One of the features of wicked software problems is that they’re never really finished. So, after every release of a product, the next thing that happens is the customer asks for new features. The cleaner your design is, the easier it will be to add them.
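One common way to get extensibility is a registry: new features are added by adding code, not by modifying existing code. A minimal sketch, with invented names, for an exporter that grows new output formats across releases:

```python
# Registry of export functions, keyed by format name.
EXPORTERS = {}

def register(fmt):
    """Decorator that adds an export function to the registry."""
    def wrap(fn):
        EXPORTERS[fmt] = fn
        return fn
    return wrap

@register("csv")
def to_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

# A later release adds a feature without touching the code above:
@register("tsv")
def to_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

def export(fmt, rows):
    return EXPORTERS[fmt](rows)

print(export("csv", [(1, 2), (3, 4)]))
```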
Portability: Though not high on the list, keeping in mind that your software may need to be ported to another platform (or two or three) is a desirable characteristic. There are a lot of issues involved with porting software, including operating system issues, hardware architecture, and user interface issues. This is particularly true for web applications.
Speaking of heuristics, here’s a short list of good, time-tested heuristics. The list is clearly not exhaustive and it’s pretty idiosyncratic, but it’s a list you can use time and again. Think about these heuristics and try some of them during your next design exercise. We will come back to all of these heuristics in much more detail in later blogs.
Find real-world objects to model: Alan Davis and Richard Fairley call this intellectual distance. It’s how far your design is from a real-world object. The heuristic here is to try to find real-world objects that are close to things you want to model in your program.
Keeping the real-world object in mind as you’re designing your program helps keep your design closer to the problem. Fairley’s advice is to minimize the intellectual distance between the real-world object and your model of it.
Abstraction is key: Whether you’re doing object-oriented design and you’re creating interfaces and abstract classes, or you’re doing a more traditional layered design, you want to use abstraction. Abstraction means being lazy.
You put off what you need to do by pushing it higher in the design hierarchy (more abstraction) or pushing it further down (more details). Abstraction is a key element of managing the complexity of a large problem. By abstracting away the details you can see the kernel of the real problem.
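A tiny sketch of abstraction as layering, with invented function names: the top layer states what happens, and the details of how are pushed down into lower-level functions, so reading the top layer shows the kernel of the problem.

```python
def send_report(data):
    """High level: the whole task in two abstract steps."""
    body = format_report(data)
    deliver(body)

def format_report(data):
    """Middle layer: representation details live here."""
    return "\n".join(f"{key}: {value}" for key, value in data.items())

def deliver(body):
    """Lowest layer: transport details. print() stands in for
    whatever the real delivery mechanism would be."""
    print(body)

send_report({"widgets": 42, "gadgets": 7})
```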
Information hiding is your friend: Information hiding is the concept that you isolate information—both data and behavior—in your program so that you can isolate errors and isolate changes; you also only allow access to the information via a well-defined interface. A fundamental part of object-oriented design is encapsulation, a concept that derives from information hiding.
You hide the details of a class away and only allow communication and modification of data via a public interface. This means that your implementation can change, but as long as the interface is consistent and constant, nothing else in your program need change. If you’re not doing object-oriented design, think about using libraries for hiding behavior and using separate data structures (structs in C and C++) for hiding state.
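A minimal sketch of the idea in Python, where a name-mangled attribute plays the role of hidden state (the class name is invented for illustration):

```python
class Counter:
    def __init__(self):
        self.__count = 0        # name-mangled: not part of the interface

    def increment(self):
        self.__count += 1

    def value(self):
        # The public interface. The stored representation could change
        # (say, to a log of timestamps whose length is the count) and
        # callers of value() would never know.
        return self.__count

c = Counter()
c.increment()
c.increment()
print(c.value())  # 2
```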
Keep your design modular: Breaking your design up into semi-independent pieces has many advantages. It keeps the design manageable in your head; you can just think about one part at a time and leave the others as black boxes. It takes advantage of information hiding and encapsulation. It isolates changes. It helps with extensibility and maintainability. Modularity is just a good thing. Do it.
Identify the parts of your design that are likely to change: If you make the assumption that there will be changes in your requirements, then there will likely be changes in your design as well. If you identify those areas of your design that are likely to change, you can separate them, thus mitigating the impact of any changes you need to make.
What are things likely to change? Well, it depends on your application, doesn’t it? Business rules can change (think tax rules or accounting practices), user interfaces can change, hardware can change, and so on. The point here is to anticipate the change and to divide up your design so that the necessary changes are contained.
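Using the text’s own example of changeable business rules, a minimal sketch (all names invented): the tax rule lives in one replaceable object, so a rate change never touches the checkout logic.

```python
class FlatTax:
    """The volatile business rule, isolated behind a tiny interface."""
    def __init__(self, rate):
        self.rate = rate

    def tax(self, amount):
        return amount * self.rate

def checkout(subtotal, tax_policy):
    # Stable logic: depends only on the tax() interface, not the rule.
    return subtotal + tax_policy.tax(subtotal)

# When the tax rules change, swap in a new policy object;
# checkout() itself is untouched.
print(checkout(100.0, FlatTax(0.07)))
```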
Use loose coupling, interfaces, and abstract classes: Along with modularity, information hiding, and change, using loose coupling will make your design easier to understand and to change as time goes along. Loose coupling says that you should minimize the dependencies of one class (or module) on another. This is so that a change in one module won’t cause changes in other modules.
If the implementation of a module is hidden and only the interface exposed, you can swap out implementations as long as you keep the interface constant. So you implement loose coupling by using well-defined interfaces between modules, and in an object-oriented design, using abstract classes and interfaces to connect classes.
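In Python, abstract classes for this purpose come from the standard-library abc module. A minimal sketch with invented names—callers depend on the abstract Storage interface, so implementations can be swapped freely:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface callers depend on."""
    @abstractmethod
    def save(self, key, value): ...

    @abstractmethod
    def load(self, key): ...

class MemoryStorage(Storage):
    """One concrete implementation; a FileStorage or DatabaseStorage
    could be swapped in later as long as the interface stays constant."""
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data[key]

def remember(store: Storage, name):
    # Coupled only to the interface, not to any implementation.
    store.save("name", name)
    return store.load("name")

print(remember(MemoryStorage(), "Ada"))  # Ada
```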
Use your knapsack full of common design patterns: Robert Glass describes great software designers as having “a large set of standard patterns” that they carry around with them and apply to their designs. This is what design experience is all about—doing design over and over again and learning from the experience.
In Susan Lammers’s book Programmers at Work, Butler Lampson says, “Most of the time, a new program is a refinement, extension, generalization, or improvement of an existing program. It’s really unusual to do something that’s completely new. . . .” That’s what design patterns are: descriptions of things you’ve already done that you can apply to a new problem.
Adhere to the Principle of One Right Place: In his book Programming on Purpose: Essays on Software Design, P. J. Plauger says, “My major concern here is the Principle of One Right Place—there should be One Right Place to look for any non-trivial piece of code, and One Right Place to make a likely maintenance change.” Your design should adhere to the Principle of One Right Place; debugging and maintenance will be much easier.
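A trivial but illustrative sketch of the principle (names invented): a policy value is defined exactly once, so the likely maintenance change happens in one spot instead of being scattered through the code.

```python
# The One Right Place for this policy. Changing the limit is a
# one-line maintenance change; every user of the policy follows.
MAX_LINE_LENGTH = 80

def too_long(line):
    return len(line) > MAX_LINE_LENGTH

def truncate(line):
    return line[:MAX_LINE_LENGTH]

print(too_long("x" * 100), len(truncate("x" * 100)))  # True 80
```

The anti-pattern is writing the literal 80 in both functions: now a change needs two edits, and a missed one is a bug.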
Use diagrams as a design language: I’m a visual learner. For me, a picture really is worth a thousand or so words. As I design and code, I’m constantly drawing diagrams so I can visualize how my program is going to hang together: which classes or modules will be talking to each other, what data depends on what function, where the return values go, and what the sequence of events is.
This type of visualization can settle the design in your head and can point out errors or possible complications in the design. Whiteboards or paper are cheap; enjoy!
Designers and Creativity
Don’t think that design is cut and dried or that formal process rules can be imposed to crank out software designs. It’s not like that at all. Although there are formal restrictions and constraints on your design that are imposed by the problem, the problem domain, and the target platform, the process of reaching the design itself need not be formal.
It’s at bottom a creative activity. Bill Curtis, in a 1987 empirical study of software designers, came up with a process that seems to be what most of the designers followed:
1. Understand the problem.
2. Decompose the problem into goals and objects.
3. Select and compose plans to solve the problem.
4. Implement the plans.
5. Reflect on the design product and process.
Frankly, this is a pretty general list and doesn’t really tell us all we’d need for software design. Curtis, however, then went deeper into step 3 on his list, “select and compose plans,” and found that his designers used the following steps:
1. Build a mental model of a proposed solution.
2. Mentally execute the model to see if it solves the problem—make up input and simulate the model in your head.
3. If what you get isn’t correct, change the model to remove the errors and go back to step 2 to simulate again.
4. When your sample input produces the correct output, select some more input values and go back and do steps 2 and 3 again.
5. When you’ve done this enough times (you’ll know because you’re experienced) then you’ve got a good model and you can stop.
This deeper technique makes the cognitive and the iterative aspects of design clear and obvious. We see that design is fundamentally a function of the mind, and is idiosyncratic and depends on things about the designer that are outside the process itself.
John Nestor, in a report to the Software Engineering Institute, came up with a list of common characteristics of great designers. Great designers
1. have a large set of standard patterns;
2. have experienced failing projects;
3. have mastery of development tools;
4. have an impulse towards simplicity;
5. can anticipate change;
6. can view things from the user’s perspective; and
7. can deal with complexity.