Agile Manifesto (The Complete Guide 2019)

The Agile Manifesto

The Agile Manifesto has its own way of reframing speed as a function of customer value: Working software over comprehensive documentation.


Many critics of Agile approaches have misinterpreted this as an anarchistic decree that all documentation be torn up and discarded forever. But the intent behind this statement of values is actually pretty straightforward: focus on the things that deliver immediate value to your customers. This guide explains the complete Agile Manifesto with examples.


Comprehensive documentation can feel like progress, but until you have something that your customers can actually use, you haven’t made much progress at all.


The fact that the Agile Manifesto specifies “working software” has also contributed to the misconception that Agile is only for software developers and cannot be extended to other parts of an organization. 


Here is the Manifesto for Agile Software Development:

"We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more."

To understand the four values, you first have to read and understand the closing subtext: "That is, while there is value in the items on the right, we value the items on the left more." Let's look at how this works by examining each value in more detail:


Individuals and interactions over processes and tools: In an Agile environment, we still have processes and tools, but we prefer to keep our use of them light because we value communication between individuals.


If we're to foster successful collaboration, we need common understanding between technical and non-technical people. Tools and processes have a tendency to obfuscate that.


A good example is the User Story, an Agile requirements-gathering technique, usually recorded on an index card. It's kept deliberately small so that we can't add too much detail. The aim is to encourage, through conversation, a shared understanding of the task.
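A User Story card typically follows the well-known "As a... I want... so that..." template; the story below is an invented example to show the shape:

```
As a frequent flyer,
I want to rebook a cancelled flight from my phone,
so that I don't have to queue at the service desk.
```

Note how little it specifies: the conversation the card prompts matters more than the words written on it.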


With that in mind, let's look at the remaining Agile values:

Working software over comprehensive documentation: As a software delivery team, our primary focus should be on delivering the software—fit for purpose and satisfying our customer's need.


In the past, we've made the mistake of using documents to communicate to our customer what we're building. Of course, this led to much confusion and potential ambiguity.


Our customer isn't an expert in building software and would, therefore, find it pretty hard to interpret our documentation and imagine what we might be building. The easiest way to communicate with them is via working software that they can interact with and use.


By getting something useful in front of our customer as soon as possible, we might discover if we're thinking what they're thinking. In this way, we can build out software incrementally while validating early and often with our customer that we're building the right thing.


Customer collaboration over contract negotiation: We aim to build something useful for our customer and hopefully get the best value for them we can. Contracts can constrain this, especially when you start to test the assumptions that were made when the contract was drawn up.


More often than not, there are discoveries made along the way, or the realization that something was forgotten or that it won't work the way we were expecting. Having to renegotiate a contract, or worse still, record variances to be carried out at a later stage, both slows down and constrains the team's ability to deliver something of value to the customer.


Responding to change over following a plan: When considering this Agile Value, it is worth drawing a comparison with the military.


The military operates in a very fluid environment; while they will undoubtedly have a plan of attack, this is often based on incomplete information about the enemy's strength and whereabouts. The military very much has to deal with known knowns, known unknowns, and unknown unknowns.


This is what we call a planning-driven environment; they're planning constantly throughout the battle as new information becomes available.


Plan-driven versus Planning-driven: Plan-driven means a fixed plan that everyone follows and adheres to; this is also known as predictive planning. Planning-driven is more responsive in nature: when new information comes to light, we adjust our plan. It's called planning-driven because we expect change, so we're always in a state of planning. This is also known as adaptive planning.


So when going into battle, while they have group objectives, the military operates with a devolved power structure and delegated authority so that each unit can make decisions on the ground as new information is uncovered.


In this way, they can respond to new information affecting the parameters of their mission, while still getting on with their overall objective. If the scope of their mission changes beyond recognition, they can use their chain of command to determine how they should proceed and re-plan if necessary.


In the same way, when we're building software, we don't want to blindly stick to a plan if the scope of our mission starts to change. The ability to respond to new information is what gives us our agility; sometimes we have to deviate from the plan to achieve the overall objective. This enables us to maximize the value delivered to our customer.


The Agile principles

The signatories to the Manifesto all shared a common background in light software development methodologies. The principles they chose reflect this. Again the emphasis is on people-focused outcomes. Each of the following principles supports and elaborates upon the values:


1. Our highest priority is to satisfy the customer through the early and continuous delivery of valuable software: By encouraging incremental delivery as soon and as often as we can, we can start to confirm that we are building the right thing.


Most people don't know what they want until they see it, and in my experience, use it. Taking this approach garners early feedback and significantly reduces any risk to our customer.


2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage: Instead of locking scope and ignoring evolving business needs, adapt to new discoveries and re-prioritize work to deliver the most value possible for your customer.


Imagine a game of soccer where the goal posts keep moving; instead of trying to stop them moving, change the way you play.


3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale: The sooner we deliver, the sooner we get feedback.


Not only from our customer that we're building the right thing, but also from our system that we're building it right. Once we get an end-to-end delivery taking place, we can start to iron out problems in our integration and deployment processes.


4. Business people and developers must work together daily throughout the project: To get a good outcome, the customer needs to invest in the building of the software as much as the development team. One of the worst things you can hear from your customer as a software developer is, "You're the expert, you build it."


It usually means that they intend to have very little involvement in the process of creating their software.

And yes, while software developers are experts at building software, with a neat set of processes and tools for doing just that, we're not experts in our customer's domain, and we're certainly not able to get inside their heads to truly understand what they need. The closer the customer works with the team, the better the result.


5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done: A software development team is a well-educated bunch of problem solvers.


We don't want to constrain them by telling them how to do their jobs; the people closest to the problem will get the best results. Even the military delegates authority to the people on the frontline, because they know that if the objective is clear, those people are the ones who can and will get the job done.


6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation: Face-to-face conversation is a high-bandwidth activity that includes not only words but facial expressions and body language too. It's the fastest way to get information from one human being to another.


It's an interactive process that can be used to quickly resolve any ambiguity through questioning. Couple face-to-face conversation with a whiteboard and you have a powerhouse of understanding between two or more individuals. All other forms of communication pale in comparison.


7. Working software is the primary measure of progress: When you think about a software delivery team and what it is there to do, there really is no better way to measure its progress. This principle gives us further guidance around the Agile value of working software over comprehensive documentation.


The emphasis is on working software because we don't want to give any false indicators of progress.


For example, if we deliver software that isn't fully tested, then we know it isn't complete; it still has to go through several cycles of testing and fixing. It hasn't moved us any closer to completing that piece of work because it's still not done.


"Done" means the software is in the hands of our customer, doing the job it was intended to do. Until that point, we aren't 100% sure we've built the right thing, and until that moment we don't have a clear indication of what we might need to redo.


Everything else the software team produces just supports the delivery of the software, from design documents to user guides.


8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely:


Putting a software delivery team under pressure to deliver happens all the time; it shouldn't, but it does. There are a number of consequences of doing this, some of which we discussed earlier in this blog.


For example, put a team under pressure for long enough, and you'll seriously impact the quality of your product. The team will work long hours, make mistakes, take shortcuts, and so on, to get things done.


The result won't just affect quality, but also the morale of our team, and their productivity. I've seen this happen time and time again; it results in good people leaving along with all the knowledge they've accumulated.


This principle aims to prevent that scenario. That means we have to be smart and use alternative ways of getting things done sooner.


This means seeking value, ruthless prioritization, delivering working software, a focus on quality, and allowing teams to manage their work in progress so they can avoid multitasking.


Studies have shown that multitasking causes context switching time losses of up to 20%. When you think about it, when you're solving complex problems, the deeper you are into the problem, the longer it takes to regain context when you pick it back up.


It's like playing and switching between multiple games of chess. It's not impossible, but it definitely adds time.


I've also seen multitasking defined as messing up multiple things at once.
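The 20% figure can be turned into back-of-the-envelope arithmetic. The model below is purely illustrative: it assumes a flat loss per additional task, which is the commonly quoted rule of thumb, not a precise law.

```python
# Rough model: each task beyond the first costs a fixed fraction of our time
# in context switching. The numbers are illustrative, not measured.
def productive_fraction(num_tasks, loss_per_switch=0.20):
    """Fraction of time left for real work after switching overhead."""
    overhead = loss_per_switch * (num_tasks - 1)
    return max(0.0, 1.0 - overhead)

for n in range(1, 5):
    print(f"{n} task(s): {productive_fraction(n):.0%} productive")
```

Even under this crude model, a team member juggling four pieces of work spends more time switching than a team member focused on one, which is why limiting work in progress pays off.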


9. Continuous attention to technical excellence and good design enhances agility: By using solid technical practices and attention to detail when building software, we improve our ability to make enhancements and changes to our software.


For example, Test-Driven Development (TDD) is a practice that is as much about designing our software as it is about testing it. It may seem counter-intuitive at first, as we're investing time in a practice that seemingly adds to the development time.


In the long term, however, the improved design of our software and the confidence it gives us to make subsequent changes enhance our agility.
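As a sketch of the TDD rhythm (red, green, refactor); the function and test names here are invented purely for illustration:

```python
# Step 1 (red): write the test first, before the implementation exists,
# so the test drives the design of the function's interface.
def test_total_price_applies_discount():
    assert total_price([10.0, 20.0], discount=0.5) == 15.0
    assert total_price([5.0]) == 5.0

# Step 2 (green): write the simplest code that makes the test pass.
def total_price(prices, discount=0.0):
    return sum(prices) * (1 - discount)

# Step 3 (refactor): with the passing test as a safety net, improve the
# design without changing behavior, re-running the test after each change.
test_total_price_applies_discount()
print("tests pass")
```

The test written in step 1 is what gives us the confidence to make later changes without fear of silently breaking behavior.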


Technical debt is a term first coined by Ward Cunningham. It describes the accumulation of poor design that crops up in code when decisions have been made to implement something quickly.


Ward described it as technical debt because, like financial debt, if you don't pay it back in time, it starts to accrue interest. As it accumulates, subsequent changes to the software get harder and harder. What should be a simple change suddenly becomes a major refactor or rewrite to implement.


10. Simplicity—the art of maximizing the amount of work not done—is essential: Building the simplest thing we can to fit the current need prevents speculative work, also known as "future proofing."


If we're not sure whether our customer needs something or not, talk to them. If we're building something we're not sure about, we may be solving a problem that we don't have yet.


Remember the You Ain't Gonna Need It (YAGNI) principle when deciding what to do. If you don't have a hard and fast requirement for it, don't do it.


One of the biggest causes of bugs is complexity in our code. Anything we can do to simplify it will help us reduce bugs and make our code easier for others to read, making it less likely that they'll introduce bugs too.
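To make YAGNI concrete, here's an invented before/after sketch; the function names and the CSV output format are assumptions for the example, not anything prescribed by the principle:

```python
# "Future-proofed" version: options and extension points nobody has asked for.
def export_report_speculative(rows, fmt="csv", encoding="utf-8",
                              compression=None, schema_version=2):
    raise NotImplementedError("plugin registry, format negotiation, ...")

# YAGNI version: the one format the customer actually needs today.
def export_report(rows):
    return "\n".join(",".join(str(cell) for cell in row) for row in rows)

print(export_report([[1, 2], [3, 4]]))
```

If the customer later asks for a second format, we add it then, when we have a real requirement to design against.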


11. The best architectures, requirements, and designs emerge from self-organizing teams: People nearest to solving the problem are going to find the best solutions.


Because of their proximity, they will be able to evolve their solutions so that all aspects of the problem are covered. People at a distance are too removed to make good decisions. Employ smart people, empower them, allow them to self-organize, and you'll be amazed by the results.


12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly: This is one of the most important principles in my humble opinion and is also my favorite.


A team that takes time to inspect and adapt their approach will identify actions that will allow them to make profound changes to the way they work. The regular interval, for example, every two weeks, gives the team a date in their diary to make time to reflect.


This ensures that they create a habit that leads to a continuous improvement mindset. A continuous improvement mindset is what sets a team on the right path to being the best Agile team they can be.


Agile is a mindset

The final thing I'd like you to consider in this blog is that Agile isn't one particular methodology or another. Neither is it a set of technical practices, although these things do give an excellent foundation.


On top of these processes, tools, and practices, if we layer the values and principles of the manifesto, we start to evolve a more people-centric way of working. This, in turn, helps build software that is more suited to our customer's needs.


In anchoring ourselves to human needs while still producing something that is technically excellent, we are far more likely to make something that meets and goes beyond our customer's expectations. The trust and respect this builds will begin a powerful collaboration of technical and non-technical people.


Over time, as we practice the values and principles, we not only start to determine what works well and what doesn't, but we also start to see how we can bend the rules to create a better approach.


This is when we start to become truly Agile: the things we do are still grounded in sound processes and tools, with good practices, but we begin to create whole new ways of working that suit our context and start to shift our organizational culture.


An Example of "Being Agile"

When discussing the Agile Mindset, we often talk about the difference between "doing Agile" and "being Agile."


If we're "doing Agile", we are just at the beginning of our journey. We've probably learned about the Manifesto. Hopefully, we've had some Agile or Scrum training and now our team, who are likely to have a mix of Agile backgrounds, are working out how to apply it.


Right now we're just going through the motions, learning by rote. Over time, with the guidance of our Scrum Master or Agile Coach, we'll start to understand the meaning of the Manifesto and how it applies to our everyday work.


Over time our understanding deepens, and we begin to apply the values and principles without thinking. Our tools and practices allow us to be productive, nimble, and yet, still disciplined.


Rather than seeing ourselves as engineers, we see ourselves as craftsmen and women. We act with pragmatism, we welcome change, and we seek to add business value at every step. Above all else, we're fully tuned to making software that people both need and find truly useful.


If we're not there now, don't worry; we're just not there yet. To give a taste of what it feels like to be on a team that is thinking with an Agile mindset, here is an example scenario.



Imagine we're just about to release a major new feature when our customer comes to us with a last minute request. They've spotted something isn't working quite as they expected and they believe we need to change the existing workflow. Their biggest fear is that it will prevent our users from being able to do a particular part of their job.


Our response

Our team would respond as a group. We'd welcome the change. We'd be grateful that our customer has highlighted this problem to us and that they found it before we released. We would know that incorporating a change won't be a big issue for us; our code, testing and deployment/release strategies are all designed to accommodate this kind of request.


We would work together (our customer is part of the team) to discover more about the missing requirement. We'd use our toolkit to elaborate the feature with our customer, writing out the User Stories and if necessary prototyping the user experience and writing scenarios for each of the Acceptance Criteria.


We'd then work to carry out the changes in our usual disciplined way, likely using TDD to design and unit/integration test our software as well as Behavior-Driven Development (BDD) to automate the acceptance testing.


To begin with, we may carry the work out as a Mob or in pairs. We would definitely come together at the end to ensure we have collective ownership of the problem and the solution.


Once comfortable with the changes made, we'd prepare and release the new software and deploy it with the touch of a button. We might even have a fully automated deployment that deploys as soon as the code is committed to the main branch.


Finally, we'd run a retrospective to perform some root cause analysis using the 5-whys, or a similar technique, to try to discover why we missed the problem in the first place. The retrospective would result in actions that we would take, with the aim of preventing a similar problem occurring again.



In this blog, we looked at two delivery styles, delivery as a software product and delivery as a software project.


We learned that delivery as a software project was hard to get right for multiple reasons. And giving our team only one shot at delivery gave them little or no chance of fine-tuning their approach. In a novel situation, with varying degrees of uncertainty, this could lead to a fair amount of stress.


There is a better chance of succeeding if we reduce the variability. This includes knowledge of the domain, the technology, and of each of our team members' capabilities. So, it is desirable to keep our project teams together as they move from project to project.


We learned that when a long-lived team works on a product, it has the opportunity to deliver incrementally. If we deliver in smaller chunks, we're more likely to meet expectations successfully, and because the team is long-lived, it has multiple opportunities to fine-tune its delivery approach.


Those who build software understand well the complex nature of the work we do and the degree of variability that complexity introduces. Embrace that, and we'll learn to love the new control we can gain from focusing on incremental value delivery in an adaptive system.


Agile Software Delivery Methods and How They Fit the Manifesto

Some people like a bit of background before they get started, if that's you then you're in the right place. In this blog, we're going to take a look at the various strands of the modern Agile movement and see how they've come together.


Alternatively, if you're the kind of person who likes to get stuck in first and then get some context later, once you've tried a few things out, skip this blog and go directly to the next one. In it, we'll discover how to get your team up-and-running using the most popular Agile framework, Scrum. If that's you, see you there.


Here we're going to take a look at several Agile methods, including their backgrounds and how they fit the Agile Manifesto.


Or perhaps, more importantly, how the Manifesto fits them because many Agile methods were developed before the Manifesto was written—the practical experience gained by the original members of the Agile Alliance is what gave the Agile Manifesto its substance.


Some have a number of prescribed artifacts and ceremonies; others are much less prescriptive. In most cases, there is no one-size-fits-all approach and most Agile practitioners will mix and match, for example, Scrum and XP. This blog aims to help you decide which method might work for you.


We'll cover the following topics in this section:

  • A detailed look at the most common methods: Scrum, Kanban, and XP
  • A comparison of specific Agile methods
  • How you can choose the right Agile framework

Kanban for software is included in this blog; although it's technically a Lean approach to software development, Agile and Lean approaches are often combined.

When the original 17 signatories to the Agile Manifesto came together in 2001 to form the Agile Alliance, they each brought with them ideas about how the industry could be changed for the better based on actual experiences.


You see, many of them had already started shifting away from what they deemed heavyweight practices, such as the ones encouraged by Waterfall. Instead, they were putting new ideas into practice and creating SDLC frameworks of their own.


Among the signatories that weekend were the creators of XP, Scrum, Dynamic Systems Development Method (DSDM), Crystal, Adaptive Software Development (ASD), Feature-Driven Development (FDD), and so on.


They initially called them "light" frameworks, to distinguish them from their heavyweight counterparts, but they didn't want the world to consider them to be lightweight. So, they came up with the term Agile, because one thing all of these frameworks had in common was their adaptive nature.


They noted at the time that some of their thinking was influenced by industrial product development and manufacturing. In his foreword to Mary and Tom Poppendieck's Lean Software Development (2003), Jim Highsmith says the following:


"In February 2001, when 'Agile' was adopted as the umbrella word for methodologies such as Extreme Programming, Crystal, Adaptive Software Development, Scrum, and others, the industrial heritage of agile buzzed around the background..."


The industrial heritage that Jim referenced came from, predominantly, three sources:


  • Product development, in particular how product development companies in the 1980s had been reducing the time to market for new products
  • Engineering technical practices, which provided for better and, in some cases, fully automated quality assurance on a production line
  • Lean manufacturing, as developed by Toyota


In the following sections, we're going to look at three of the Agile methods; first up is Scrum.

Understanding Scrum

Scrum is the most popular framework among Agile teams; 58% of respondents to VersionOne's 11th Annual State of Agile Report use pure Scrum. A further 10% are using a Scrum/XP hybrid.



The following timeline shows a brief history of Scrum:

The first mention of Scrum in the context of product development was a paper written in 1986 by two Japanese professors, Ikujiro Nonaka and Hirotaka Takeuchi, titled The New New Product Development Game.


In the paper, the two professors describe efforts by product development companies to try to speed up their product development lifecycles to decrease their time to market. They observed that companies that were successfully doing this were employing some interesting alternative approaches.


These companies were assembling small teams of highly capable people with the right skills, setting the vision for them to build the next-generation product, giving them a budget and timeframe, and then getting out of the team's way to let it do its thing.


Some observed characteristics of these teams included having all the skills necessary to carry out the job they were being asked to do—the essence of a cross-functional team.


They were allowed to determine how they best carried out the work, so were self-organizing and autonomous. They used rapid, iterative development cycles to build and validate ideas.


Nonaka and Takeuchi called it the rugby approach because they observed product teams passing the product back and forth among themselves as it was being developed, much like a rugby team passes the ball when moving upfield.


In a rugby game, the team moves as a unit and even though each team member has a specialty regarding position on the field and gameplay, any member of the rugby team can pick up the ball, carry it forward, and score a try or goal.


The same was true of these product development teams—their contribution to the product development was specialist and highly collaborative.


In the section of their paper titled Moving the Scrum downfield, they list the common characteristics of the teams they observed as follows:


Built-in instability: Some aspect of pressure was introduced, which encouraged the product development teams to think out-of-the-box and use an innovative approach to solving the problem.


Self-organizing project teams: The teams were given the autonomy to decide how they carried out the task of solving the problem handed to them.

Overlapping development phases: Instead of the normal sequential, phased development that you get with processes such as Waterfall, the teams worked iteratively, quickly building and evolving their product with each iteration.


Multiple phases overlapped, such that the following steps might be informed by the discoveries made in the previous one. In this way, the teams were able to gain fast feedback about what would and wouldn't work.


Multilearning: A trial-and-error learning culture is fostered, which allows team members to narrow down options as quickly as possible. They are also encouraged to diversify their skill sets, to create team versatility.


Nonaka and Takeuchi called this multilearning because they said it supported learning along two dimensions: across different layers of the organization (individual, team, unit, and group) and across various functions. This cross-pollination of skills is an aspect of cross-functionality that we encourage today.


Subtle control: The approach to managing these projects was very different. To create a space for the team to innovate, they realized command-and-control supervision wouldn't work.


Instead, management would check in with the team regularly to check progress and give feedback, leaving the team to manage its work how it saw fit.


Organizational transfer of learning: If and when the development life cycle began to move towards mass manufacture, the product development team would often be strategically placed in the wider organization to seed knowledge and assist with the preparation for production.


Estimating Agile user requirements

Relative sizing is designed to help us be more instinctual in our estimates, something that humans are quite good at.

To begin relative sizing, we first need a User Story that we know enough about to size.


One way to set this up is to spread the stories out on the table and have the team find what they think is a good example of a medium-sized story. This will involve some level of discussion amongst the team.


Once you've identified a medium-sized story in the group, put it in the center of the table, and put the rest of the User Stories into a pile.


The medium story sitting in the middle of the table is now your yardstick, and you'll use it as a comparison to the rest. Take the next story from the pile and compare it to the medium story: "Is it smaller, larger, or is it the same size?"


Repeat this process for all the stories that are in the pile; don't worry about granularities of small, medium, or large at this stage. If it's large, it's on the right-hand side of the table, and if it's small it's on the left.


If it's medium, it's in the middle. One way to speed this process up is to hand out stories to each participant and allow them to relative-size the cards themselves.


The advantage of this approach, comparing two or more items against each other, is that we develop a much more instinctual approach to estimation. We're no longer dealing in absolutes because we're looking at things relatively.


The law of averages should mean that we don't need to hold the team to the accuracy of their estimates, as we know things will balance out in the long run. All of this makes estimation a much less painful process.
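The sorting pass described above can be sketched in code. In a real session the comparison is a team conversation, so the `perceived_effort` number below is a stand-in assumption purely to make the bucketing mechanical; the story titles are invented:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    perceived_effort: int  # stand-in for the team's gut feel

def relative_size(pile, yardstick):
    """Compare each story to the medium yardstick: smaller, same, or larger."""
    buckets = {"small": [], "medium": [], "large": []}
    for story in pile:
        if story.perceived_effort < yardstick.perceived_effort:
            buckets["small"].append(story.title)
        elif story.perceived_effort > yardstick.perceived_effort:
            buckets["large"].append(story.title)
        else:
            buckets["medium"].append(story.title)
    return buckets

# The medium story in the center of the table is the yardstick.
medium = Story("Password reset", 5)
pile = [Story("Fix typo on login page", 1),
        Story("Monthly report export", 8),
        Story("Audit log for sign-ins", 5)]
print(relative_size(pile, medium))
```

The point of the exercise is the discussion behind each comparison, not the buckets themselves; the code only captures the mechanical "smaller, same, or larger" decision.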



In this blog, we took a look at why the traditional requirements document doesn't lend itself well to adaptive planning techniques.

User Stories represent a much more human approach to gathering requirements because they focus on the need of the person we're writing software for and less on the technical outcome. They are deliberately designed to generate conversation with the people we are building software for, making the experience much more collaborative.


Defining discrete achievable outcomes for our customer gives us a platform for breaking down requirements into more manageable chunks so that we can deliver them incrementally.


This enables us to prioritize work in a way that wasn't possible with the traditional requirements document. This makes the Product Backlog a much more dynamic set of user requirements. Part of the skill of the Product Owner is how they manage the backlog to get the best result from our business and our team.


Bootstrap Teams with Liftoffs

The aim of this blog is to show our software team how to get the best possible start by using an Agile liftoff.


This approach sets the mission objectives for the team and gives us the opportunity to determine how we're going to work together. The aim is to set our team up for success and get us up-and-running as quickly as possible. We will cover:


  • What are team liftoffs and why do they work?
  • The importance of good vision
  • Working agreements and team chartering
  • Activities for team liftoff

What's a team liftoff?


When forming a new team around a new business problem, there's going to be a fair degree of getting to know the problem, as well as getting to know each other.


A team liftoff is a planned set of activities aimed at getting the team up-and-running as quickly as possible. It is useful for two primary reasons:

1. It sets the tone and gives us a clear purpose

2. It's an opportunity for team building, which will accelerate team formation


Team liftoffs can span from half a day to a whole week, depending on the activities included. Dedicating time now to giving our team the best possible start is a shrewd investment as a liftoff will likely enable us to become high performers sooner.


Working closely and collaboratively are habits that we want our team to form as quickly as possible. Choosing liftoff activities that create a shared understanding and foster a positive team culture is key.


One final thing before we get into the details: it's never too late to run a team liftoff. Even if we've already started on our team mission, running a team liftoff will give a team the opportunity to clarify its understanding and codify its approach.


If we feel our current team lacks direction, understanding, or cohesiveness, it's probably a good time to run an Agile team liftoff.


ACTIVITY – Agile training

Training for the whole team is recommended; this should include coverage of the Agile fundamentals, as well as the basics of setting up and running Scrum. The intention is to provide broad understanding, allow our team to discuss the approach up front, and have everyone start with the same foundation.


Ideally, this will be carried out by our Agile coach or Scrum Master, an experienced Scrum practitioner who has completed either the Certified Scrum Master or Professional Scrum Master course, and who preferably has experience of coaching other teams through their adoption of Scrum.


Recommended training would include the following:

An experiential introduction: The Paper Plane Factory is a great introduction to iterative working. It teaches the basic concepts of Scrum in a 1-hour activity (plus 30 minutes to debrief).


Agile fundamentals: This includes The Agile Manifesto and why Agile is a mindset, not a method.


The Agile landscape: An opportunity to discuss what options, regarding tools and practices, are available to an Agile practitioner. This will include considering what other Agile practitioners are doing in the field.


The Certified Scrum Master course is a 2-day course taught by Certified Scrum Trainers. It is part theory and part experiential, so it will give you a grounding in the Agile fundamentals, as well as hands-on experience of forming your first backlog, creating a Scrum board, and role-playing through an entire short Sprint, including Daily Scrums, a Sprint Review, and a Sprint Retrospective.


It may seem over the top to be training everyone as Scrum Masters, but the investment is relatively small compared to the return you'll receive from a high-performing software team, and the CSM course is one of the best introductions to Scrum there is.


If this is a route you decide to go down, the quality of training is only ever as good as the trainer, so it is worth seeking recommendations for trainers before you commit.


ACTIVITY – Sharing the vision

The following five steps all contribute to communicating the vision and mission of our team.


Step 1 – Meet the sponsors

It's important that as many of the project/product sponsors as possible attend part or all of our team liftoff. One way or another, they've all contributed to the vision that the Product Owner is holding on our behalf.


This is an opportunity for them to introduce themselves, share their hopes and dreams for the product and usher in the next phase of its evolution.


The logical place to actively include them is in the product vision activity (step 3). Getting both their input and buy-in at this stage is crucial; with our sponsors on board, our likelihood of success is much higher.


Step 2 – The company purpose

The overarching company purpose, also known as the company's mission statement, should be the single anchor for any product development team. Everything the company does should be traceable back to that.


It's important that the organization's purpose is restated as part of the liftoff and that our team understands how our mission contributes to the overall company mission.


It is usually in the form of a simple statement, for example, here are a few high-profile company mission statements:

Google: "To organize the world's information and make it universally accessible and useful."

Tesla: "To accelerate the world’s transition to sustainable energy."

Kickstarter: "To help bring creative projects to life."


Step 3 – The product vision

The Product Owner in Scrum is the person responsible for holding the product vision; this is a view of the product overall and what problem, or problems, it solves for your customer. Our Product Owner maintains the bigger picture and shares relevant information so that our team can tie back any decisions they make while on a mission.


There are several ways that the product purpose can be defined; it's usually in the form of a business case. For example, a Lean Start-up would use a Lean Business Canvas.


The product vision differs from the product purpose in that it is a shorter, punchier version, something that gets people excited and engaged. Many activities will help us create a product vision and make a business case a little more dynamic; these include the product box, the press release, or an elevator pitch.


The elevator pitch is the most straightforward and can be crafted by the Product Owner. Use the following as a guide to creating one:


Imagine you're the owner of a start-up company with an idea for an exciting new product that is going to change the world. Like all new start-ups, you just need money and are hoping that you can persuade a seed investor or venture capitalist to fund you.


One morning, just after buying your coffee, you jump into an elevator on the way to your shared office space and who should be in there but Jeff Bezos (Amazon). He's just pushed the eighth-floor button; you realize you've only got eight floors to persuade him to invest; what do you say?


Step 4 – The current mission

It's also important that the Product Owner maintains a clear view of the current business problem our team is being asked to solve, typically in the form of a simple mission statement. For example, the following is a real team mission statement:


Enabling new methods of content display, navigation, and discovery.


The mission statement should give our team enough guidance so that we can quickly know if we are on course or if we've deviated. At the same time, it should be broad enough that we can maintain some freedom regarding how we provide solutions. It definitely should not describe how to do something. Instead, it should describe what we are doing.


Step 5 – Define success

As the final stage of setting the vision, we should work with our Product Owner to define how we will recognize success. This not only gives us a clear idea of what the expected outcome of our work is, but also helps us understand where we should put our emphasis.


It's also a time to consider any assumptions that might have been made and to put these out in the open, as ultimately this is what contributes to unmet expectations the most. For example, does this mission require rapid experimentation to see what works in a real-world environment, so we can then learn and build based on our results?


Or is it a mission where we have already gained a clear idea of what we need via user research, and we need to build out something that is simple but super reliable?


In the first example, it may seem obvious to our team that they won't have time to performance-test the application; they will assume that performance testing will be carried out in a subsequent phase, once the results of the experiments are in. However, a Product Owner wouldn't necessarily know this, or even have thought of it.


They say the most common cause of a relationship breakdown is unmet expectations. This part of the liftoff is an excellent opportunity for the Product Owner to set expectations from the business's perspective, and for our team to contribute from a technical perspective.


A good starting point is the Success Sliders exercise, demonstrated as follows:


Activity: Success Sliders

What we'll need: A whiteboard, whiteboard markers, post-it notes, and a long straight ruler (unless you're good at drawing straight lines)

Setup: A large table that the whole team can fit comfortably around

Remember: Set a time box before this activity starts

Set up the whiteboard to look like the following figure (note: all of the post-its are in the 3 column):


There are seven success criteria listed. The seven post-its represent the sliders for the corresponding success criteria. They can move between 1 and 5, but cannot be taken off the board. Each slider is currently set to a value of 3, and the Success Sliders are in equilibrium; this is a total of 21 (7x3).


The following rules apply:

We are not allowed to exceed a score of 21, so we can't move all Success Sliders into the 5 column as this would make a total of 35.


We could leave all Success Sliders where they are, but this would not reflect reality. There is almost always a bias for at least one success criterion over another, for example, delivering on time over delivering all the defined scope (or vice versa).


We are now free to move the sliders for any of the success criteria; for every slider that moves up, there must be a corresponding downward movement for another slider.
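The three rules above can be sketched as a quick validity check. This is a hypothetical helper, not part of the original activity; it simply encodes the constraints: seven sliders, each between 1 and 5, with the total pinned at the equilibrium of 21.

```python
# Sketch of the Success Sliders rules (hypothetical helper, for illustration).
# Seven criteria, each slider between 1 and 5, total fixed at 21 (7 x 3).
EQUILIBRIUM = 7 * 3  # 21

def sliders_valid(sliders):
    """Return True if the slider settings obey the activity's rules."""
    if len(sliders) != 7:
        return False  # one slider per success criterion
    if any(not 1 <= s <= 5 for s in sliders):
        return False  # sliders cannot be taken off the board
    # Every upward move needs a corresponding downward move,
    # so the total must stay at the equilibrium value.
    return sum(sliders) == EQUILIBRIUM

# All sliders at 3 is the starting equilibrium:
print(sliders_valid([3, 3, 3, 3, 3, 3, 3]))  # True
# Trading scope (down to 1) for value and time (up to 4 each):
print(sliders_valid([4, 1, 4, 3, 3, 3, 3]))  # True
# Everything at 5 would total 35, exceeding 21:
print(sliders_valid([5, 5, 5, 5, 5, 5, 5]))  # False
```

The point of the constraint is that the group must trade off criteria against each other rather than declare everything top priority.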


The intention of the activity is to find out what's important for the successful outcome of this mission. After conversation amongst our group, we should move the sliders to the position that reflects the best outcome for this work.


This conversation may be difficult, but it's intended to help us uncover any assumptions and discuss them openly. The following figure is an example of the completed activity:


Here you can see that the group has decided that delivering value is a higher priority than delivering all of the defined scope or delivering on time. Maybe the conversation went along the lines that they want to get to market sooner with only a core feature set so that they can validate their idea.


As with a lot of these activities, while the outcome is important for our team to base future decisions on, the conversation we have during this activity is just as important as it helps cement our understanding and clarify our purpose.


Defining success metrics

The final step in defining success is to look at our success metrics. These are how we measure whether or not we are moving in the right direction with each iteration. These are typically defined by our Product Owner and shared with the team for refinement. There are several ways of setting success metrics up.


In How Seeking Value in User Requirements Will Help You Deliver Better Software Sooner, we'll discuss the following approaches:

Hypothesis-Driven Development: A way of taking a scientific approach to delivering value.

Objectives and Key Results (OKRs): Many companies use OKRs, including Intel Corp, where they originated.

Data, Insights, Beliefs, and Bets (DIBBs)


Whichever approach is used, we need to make sure our metrics are easily quantifiable and are moving us in a direction that adds value—remember, what we measure is what we get.


Activity – Forming a team charter

The team charter covers several aspects of how our team will carry out its work:

  • It's a process definition and agreement about how we will accomplish our work
  • It's the social contract defining how we will interact with each other, and how we will work together as a team


Remember, the team charter is a living document; it will evolve as our team evolves its practices. It should be posted somewhere in our team's area so that we can reference it and annotate it as we go.


The following steps take us through the necessary activities to form an initial team charter.


Step 1 – Defining done

First, we're going to look at defining done. We'll need to work together as a team on this, so find somewhere quiet where everyone's contribution can be heard. Here's how to set it up:

Activity: Defining done

What you'll need: Post-it notes, Sharpies, a spare wall or whiteboard


Remember: Set a time box before we start

The Definition of Done (DoD) is the agreement we use to define how we will know when our work is complete. It looks like a checklist. On it are the tasks that we need to carry out for us to deliver a new feature, enhancement, or bug-fix as part of the next increment of working software.


As a team, we are responsible for defining done. A simple activity to do this requires post-its, sharpies, and a whiteboard. For this activity, we ask our team to think of the steps they go through from the point where they are given a requirement, to the point where they deliver it in the form of working software.


Work collaboratively, write down each step on a post-it note, and share as you go. Put each step onto the whiteboard or wall in a timeline from the left (start) to the right (finish).


The team should consider the quality aspects of what they are delivering, as well as the steps they will take to avoid mistakes and make sure the delivery pipeline runs smoothly.


Once the timeline is complete, discuss it as a group. If our group is happy and there's no more to add, for now, write out the timeline as a checklist.


It's useful to remind ourselves that done means more than just "coding is done" or "testing is done" or "review is done." To capture this, we talk about "done done": an increment is "done done" when absolutely everything needed to take it to a production-ready state is complete.


Here's an actual example of a team's Definition of Done (DoD):


Step 2 – Working agreement

Next, we look at our social contract; this defines ground-rules for working together as a team.

Activity: Creating a working agreement

What you'll need: Post-it notes, Sharpies, a spare wall or whiteboard

Remember: Set a time box before we start


So let's get started:

1. Set up the whiteboard with the banner WORKING AGREEMENT and the subtext WE WORK BEST TOGETHER WHEN... as per the following figure:


2. Distribute post-its and sharpies to each team member. Explain to the team that they are going to use silent brainstorming to complete the phrase WE WORK BEST TOGETHER WHEN... Each idea for finishing that sentence should be written on a post-it note, one idea per post-it only, and as many post-its/ideas as they like. They can use the example topics for inspiration if they need to.


3. Agree on a time box with the team for silent brainstorming and writing post-it notes, somewhere between 5 to 15 minutes. Then set the timer and start the activity.


4. Once we have finished coming up with ideas, or the time-box is complete (whichever comes first), we take it in turns to go up to the whiteboard and place our post-it notes on it. We should do this one post-it at a time, reading them out loud to the rest of the team as we go.


5. Once each team member has placed their post-its on the board, we should gather around the board as a group. The aim of this stage is to group similar ideas or remove any duplicates.


6. The final step is to review the revised working agreement and decide if we can all abide by it. Are there any changes? Anything we should add?


After several rounds, our team should be in agreement, and we should have something that looks like the following:


Step 3 – Team calendar

The final step is to work as a team to establish our Sprint calendar. Forming a consensus amongst the group about the days/times that we meet will help ensure everyone can attend and we don't miss out on anyone's contribution.


Explain that it will be easier to first determine the Sprint start and end dates, and then set up all the meetings. For example, Sprint Planning happens on the first day of the Sprint; the Sprint Review and Retrospective are on the last day.


The DAILY SCRUM should be every day, at a time agreed on by the team, except when it clashes with the Sprint Planning, Sprint Review, or Sprint Retrospective meetings.


In the example, on the first day of the Sprint, our team has chosen to hold the Daily Scrum at 1.30 p.m., after Sprint Planning. Sometimes teams hold their Daily Scrum directly after Sprint Planning; sometimes they don't feel it's necessary. We should use our discretion.


The same thinking applies to the last day of the Sprint; for example, if we decide to hold both the Sprint Review and Sprint Retrospective events in the afternoon, perhaps we should have a Daily Scrum to coordinate our work in the morning.

Annotate the index cards with post-it notes showing the times as in the preceding example.


Keep team events, such as the Daily Scrum, at the same time each day. This reduces the cognitive load on our team: they don't have to think about scheduling; they just get into the rhythm.


Once the team has reached an agreement, schedule all meetings at given time slots for the foreseeable future. Cadence is an essential part of Scrum. Establishing a rhythm is key to the success of the incremental delivery approach. The regular iteration cycles give the team a heartbeat by which it operates.
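As a sketch, the calendar agreement described above might be captured in code like this. The ten-day Sprint length and the specific times are illustrative assumptions, not values prescribed by the text:

```python
# Hypothetical sketch of a team's Sprint calendar agreement.
# Assumes a two-week Sprint of 10 working days; times are illustrative.
def sprint_calendar(working_days=10, daily_scrum="9:00"):
    """Map each working day of the Sprint to its agreed events."""
    events = {}
    for day in range(1, working_days + 1):
        if day == 1:
            # Sprint Planning on the first day; this team has chosen to
            # hold the Daily Scrum after Planning rather than skip it.
            slots = ["Sprint Planning 9:00", "Daily Scrum 13:30"]
        elif day == working_days:
            # Review and Retrospective on the last day; a morning Daily
            # Scrum still coordinates the day's work.
            slots = ["Daily Scrum " + daily_scrum,
                     "Sprint Review 13:00",
                     "Sprint Retrospective 15:00"]
        else:
            # Same time every day, to keep the cadence.
            slots = ["Daily Scrum " + daily_scrum]
        events[day] = slots
    return events

cal = sprint_calendar()
print(cal[1])   # ['Sprint Planning 9:00', 'Daily Scrum 13:30']
print(cal[10])  # ['Daily Scrum 9:00', 'Sprint Review 13:00', 'Sprint Retrospective 15:00']
```

Once agreed, the same structure repeats every Sprint, which is exactly the cadence the section describes.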



The aim of the team liftoff is to launch our team on its mission as quickly as possible. To do this, we first need to have context; we need to understand our product and its purpose and how it fits in with the company's goal.


Knowing this background helps us better understand the business problem we're trying to solve. It means we will be much more able to make better design and implementation decisions. For example, is this a standalone solution, or does it fit into a wider ecosystem?


The second part of the liftoff is deciding how best to solve this problem together. This gives our team an operating manual/system, which includes our team's technical process (definition of done) and social contract (working agreement). Both of these help us remove any assumptions on how we're going to work with each other.


For a team transitioning to Agile, this should be underpinned with Agile fundamentals training so that we all have a common foundation in the Agile Mindset.


This is something we will be able to use in our day-to-day work environment, using our knowledge to guide the decisions we are taking. We should continually reflect on how our choices fit the values and principles of the Manifesto for Agile Software Development.


In the Define success section, we discussed the importance of recognizing what success should look like. Without this, we're unable to track our progress and determine if we've completed our mission. We also demonstrated an activity called Success Sliders, which helped us frame which parameters of our mission are important.


In the next blog, we are going to delve more deeply and look at other measurements that we can track which will help us understand if we're on course to a successful mission outcome. This will include a more detailed way of defining and measuring success.


Metrics that will Help your Software Team Deliver

Once our team is on its mission, it's crucial that we know we're moving in the right direction. One question that will help our team know this is: What defines our success? Answering this question helps us discover the measurements we can take to keep us on track.


In this blog, we'll look at various real-world examples that will help our team determine the metrics they should and shouldn't be using. We'll consider negative metrics, positive metrics, metrics that give you quantitative feedback, and metrics that provide you with qualitative feedback.


We'll then discuss how to make this information visible and how to measure trends over time so that you can see you're improving and moving along the right trajectory.


In this blog, we will cover the following topics:

  • Qualitative versus quantitative measurements
  • Negative versus positive metrics
  • What you measure is what you get
  • How will you define success?
  • How to run a workshop that will help your team members set themselves up for success


  • Team velocity – the engine is working, but it's a negative metric, so be careful
  • Sprint and release burndowns

  • Code quality
  • Code complexity
  • Team health indicators
  • User happiness index


A brief introduction to measurements for Agile software delivery

Before we take a look at example metrics we can use, there are a couple of distinctions we need to make between the types of measurements and how we should use them.


Understanding measurements

There are many measurements that we can use to give us an idea of how our team is performing. Before we start tracking metrics, it's important to understand that what we measure is what we get.


For instance, velocity is a standard measurement used by Scrum teams. It tells us how many User Story Points we're completing on average in each Sprint.


As we'll explain further in the Negative versus positive metrics section, it's a measurement used mainly by the team to help them understand how they are performing. If used out of this context, it can cause adverse effects that render the metric meaningless.


A lot of measurements work in the same way; that is, they're meaningful to the team but less useful to those outside it. They don't always mean much in isolation either; we often need to compare and contrast them with other metrics. For example, the percentage of code covered by automated tests is useful only if the quality of those tests is also measured.


Qualitative versus quantitative measurements

Simply put, quantitative measurements deal in quantities. They are numeric values such as the total number of members, revenue in dollars, web page views, number of Story Points completed, and so on.


Qualitative metrics relate to the qualities of something. They reflect the sentiments of a person expressed as feelings, opinions, views, or impressions. Examples of qualitative metrics include customer satisfaction and team member happiness.


Sometimes we'll combine both quantitative and qualitative measurements. For example, we judge ease of maintenance for software on:


1. Cyclomatic complexity: A quantitative measurement of the number of independent paths through our code; the more paths, the harder it is to maintain


2. Ease of last change: A qualitative metric based on the Development Team's viewpoint; the harder it feels to make the previous code change, the harder the codebase is to maintain
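As an illustrative sketch, here is how the two signals might be combined. The function names and the threshold of 10 are assumptions for illustration; the complexity calculation uses McCabe's simple rule that independent paths equal decision points plus one:

```python
# Hypothetical sketch combining the quantitative and qualitative measures
# of maintainability described above (names and threshold are illustrative).
def cyclomatic_complexity(decision_points):
    """McCabe's measure: independent paths = decision points + 1."""
    return decision_points + 1

# A function with no branches has one path; each if/while/for adds one.
print(cyclomatic_complexity(0))  # 1
print(cyclomatic_complexity(5))  # 6

def maintainability_flag(decision_points, felt_hard_to_change):
    """Flag code that is both complex (quantitative) AND felt hard to
    change last time (the Development Team's qualitative judgment)."""
    return cyclomatic_complexity(decision_points) > 10 and felt_hard_to_change

print(maintainability_flag(12, True))   # True: complex and felt hard
print(maintainability_flag(3, False))   # False: simple and felt easy
```

Neither signal alone tells the whole story; flagging on both avoids reacting to a high number when the team finds the code perfectly workable.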


More often than not, we can translate qualitative measurements into numbers, for example, rating customer satisfaction with our product on a scale of 1 to 10, one being very unhappy, ten being very happy.


However, if we don't also capture the individual's verbal response as to why they are feeling particularly happy or unhappy, then the metric over time may become meaningless.


For example, we won't be able to address a downward trend in customer happiness unless we know why people are unhappy, and there is likely to be more than one reason.


Also, "value" means many things to an organization, so we have to measure each aspect of it, that is, direct revenue, reduced costs, return on investment, and so on. Sometimes value is easily quantifiable, for example, when a User Story directly produces revenue.


Other times it requires feedback from our customer, telling us whether what the team is building is meeting their needs.


Negative versus positive metrics

A negative metric is a performance indicator that tells us if something is going badly, but it doesn't show us when that same something is necessarily going well.


Take velocity, for example. If our velocity is low or fluctuating between iterations, this could be a sign that something isn't going well. However, if velocity is normal, there is no guarantee that the team is delivering useful software.


We know that they are working, but that is all we know. Measuring the delivery of valuable software requires a combination of other metrics, including feedback from our customer.


Therefore, velocity is only useful if it's being used to determine what the team is capable of during each Sprint. It will also aid in scheduling releases, but it is no guarantee that the product under development is on track to be fit for purpose.


Something we should also be aware of with metrics such as velocity is that the very act of focusing on them has the potential to cause a decrease in the performance we desire.


A common request to teams is to increase their velocity because we want to get more done. In this situation, the team will raise their velocity, but it won't necessarily increase output.


Instead, a team may conclude that they were too optimistic in their estimates and decide to recalibrate their Story Points. So now a story that would have been five Story Points is eight Story Points instead. This bump in Story Points means they will get more Story Points done in the same amount of time, increasing their velocity.


In my experience, this isn't something done deliberately by a team. Instead, it's done because attention was drawn to a particular measurement, causing the problem to be analyzed and corrected. Unfortunately for us, negative metrics aren't the ones we should be poking around with.


It may seem an obvious thing to say, but to get more work done, we need to do more work. If the team is already at capacity, this won't be possible. However, if we ask them to increase their velocity, they can do that without achieving any more output.


To avoid this scenario, instead, think about the outcome you're trying to reach and then think of ways to improve that. For instance, if our interest in increasing productivity is because we want to release value sooner, we can do this without raising capacity or putting pressure on our team. 


Other examples of negative metrics in software product development are:

Lines of code written: Shows us that our developers are writing code, but doesn't testify to the usefulness or quality of that software in any way. Focus on this metric, and you will certainly get an increase in lines of code written if nothing else.


Test code coverage: The percentage of code covered by automated tests. Shows that tests have been written to run the code, but there’s no guarantee of the effectiveness of those tests in terms of preventing bugs.


Examples of metrics that we could focus on:

Value delivered: Is the customer getting what they want/need?

The flow of value: How much and how often is value delivered?

Code quality: Multiple measurements which focus on two critical characteristics of our software:

  • Is it fit for purpose?
  • How easy is it to maintain?


One fit-for-purpose quality that our customer should care about is the number of bugs released into the wild. This includes bugs that result from a misunderstanding of requirements. The further down the value stream we find bugs, the more expensive they will be to fix. Agile methods advocate testing from the get-go.


The happiness of our team(s): Happy teams are working in a way that is sustainable, with no extended hours or weekend work. Standards and quality will be high. Our team will have a good sense of satisfaction.


In short, negative metrics have their place; however, used unwisely they can have unintentional side-effects and degrade the performance you're looking to enhance.


Quantitative metrics

In this section, we'll look at measurements that track quantities.

Team velocity

Calculate the velocity for an Agile team by adding together all of the estimates for the completed User Stories in a Sprint. For example, if the team is using Story Points to estimate and completes five User Stories with estimates of 5, 2, 2, 3, and 1 respectively, then their velocity is 5 + 2 + 2 + 3 + 1 = 13.


Velocity is a metric that is used by the team for forecasting. For instance, during Sprint Planning we use it to gauge how many User Stories we think we can complete in the upcoming Sprint.


We can use velocity in one of two ways. The first is to average the velocity from recent Sprints. If we take the last five Sprints, it would look like this: (15 + 14 + 20 + 12 + 16) / 5 = 15.4, which we'd round to 15. The second uses the velocity from the last Sprint alone, an approach referred to as yesterday's weather.
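Both forecasting approaches can be sketched in a few lines, using the example figures from the text:

```python
# Sketch of the two velocity-forecasting approaches described above,
# using the example numbers from the text.
def velocity(completed_story_points):
    """A Sprint's velocity is the sum of estimates for completed stories."""
    return sum(completed_story_points)

# The example Sprint: five stories estimated at 5, 2, 2, 3, and 1.
print(velocity([5, 2, 2, 3, 1]))  # 13

recent_sprints = [15, 14, 20, 12, 16]

# Approach 1: average velocity over the last five Sprints.
average = sum(recent_sprints) / len(recent_sprints)
print(average)  # 15.4

# Approach 2: "yesterday's weather" -- just use the last Sprint's velocity.
print(recent_sprints[-1])  # 16
```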


From Sprint to Sprint, update the board so everyone can see the current value; here we're using the velocity from the last Sprint:

We track the trend over time using a velocity chart:


The two vertical bars per iteration represent the work that the team predicted it could complete in Sprint Planning (Forecast) and the work the team completed during the Sprint (Completed).


Sometimes work "forecast" is higher than work "done," meaning the Sprint didn't go as well as the team predicted. Sometimes forecast is the same as done, meaning it did go as expected.


Sometimes done is higher than forecast because the team completed all items on the Sprint Backlog and had time to pull in additional User Stories from the Product Backlog into the Sprint.


The team shouldn't pull other items into the Sprint Backlog at the expense of existing User Stories in the Sprint Backlog. Before we pull in additional items, we should assist our team with any User Stories currently in progress.


Being part of a cross-functional team means sometimes we have to perform roles other than our specialty. This is the nature of being a T-shaped team player; we do what we need to get the job done.


Velocity will fluctuate from time to time; causes include team members being on leave or off sick, or a change in the iteration length. For example, if you have five team members and one is away for the next Sprint, in theory, your velocity will drop by 20%. Team leave is one factor to take into consideration during Sprint Planning.


If a team feels the change in velocity is significant and worth discussing, they can use the Sprint Retrospective as an opportunity to understand why and see if the root cause is anything that they can fix.


Also, remember that velocity is just one aspect of our forecasting system. Another method teams often use during Sprint Planning is to break down User Stories into tasks. Some teams allocate hours to tasks and then add up the hours on all tasks at the end of Sprint Planning.


They use the total hours to assess if their forecast for the User Stories in the Sprint Backlog is more or less right. Another aspect of forecasting is gut feeling, especially useful when working on small chunks of work which have an air of familiarity.


From an observer's perspective, velocity tells us the team's engine is working, but it doesn't tell us if what the team is working on is delivering value. The only way to measure value is through the software that we deliver meeting our user's needs.


Sprint Burndown chart – TEAM

The Sprint Burndown chart is a useful graphical representation of whether our team is likely to complete all the User Stories in the Sprint Backlog. The following is an example:


The dotted line represents the average burndown rate necessary if the team is to finish every item. The solid line shows the reality that not all tasks are equal; some take longer than others to complete (the plateaus) and some take less time to complete (the vertical drops).


Sometimes the solid line will go up before it comes down; this shows that once the team started working on a story, they uncovered new information and added tasks.
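The dotted ideal line is simply an even burn rate from the Sprint's total down to zero. Here is a minimal sketch, assuming a 20-point Sprint over 10 days (illustrative numbers, not from the original figure):

```python
# Sketch of the Sprint Burndown's dotted "ideal" line: the average rate
# needed to reach zero remaining points by the last day of the Sprint.
# The 20-point / 10-day figures are illustrative assumptions.
def ideal_burndown(total_points, sprint_days):
    """Remaining points at the end of each day if work burned evenly."""
    rate = total_points / sprint_days
    return [round(total_points - rate * day, 1) for day in range(sprint_days + 1)]

print(ideal_burndown(20, 10))
# [20.0, 18.0, 16.0, 14.0, 12.0, 10.0, 8.0, 6.0, 4.0, 2.0, 0.0]
```

The real (solid) line is then plotted from the team's actual remaining work each day, producing the plateaus and drops described above.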


Release burndown charts

Release burndown charts track how much work is left to do before release. To track this successfully, we need to know which stories are in the next release. It is straightforward if we're using index cards to maintain our backlog; we merely separate them into multiple decks representing each release.


To start tracking how long it is before we can release, we first have to estimate all stories in the release. We then add up the number of Story Points to find out how big the release is.


We'll now look at two different types of release burndown chart. For both types, we show the number of Story Points remaining on the vertical axis and the Sprints on the horizontal axis.


Simple Release Burndown chart

The simple Release Burndown chart tracks the total number of points remaining after each iteration. The dotted red line traces the trajectory of the burndown rate, and where it intersects the horizontal axis (zero points remaining) is the prediction of when the release will complete.


A simple Release Burndown looks like this:

This chart is easy to maintain; it's just a case of adding up the Story Points remaining at the end of each Sprint. However, it isn't always apparent from this chart what has taken place in each Sprint.
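The forecast itself is simple arithmetic: project the average burn rate forward until it hits zero. A minimal sketch, using hypothetical sprint-end totals:

```python
# Sketch: forecasting release completion from a simple Release Burndown.
# The sprint-end Story Point totals below are hypothetical.

def forecast_sprints(points_remaining_per_sprint):
    """Project how many more Sprints are needed, using the average burn rate."""
    start = points_remaining_per_sprint[0]
    latest = points_remaining_per_sprint[-1]
    sprints_elapsed = len(points_remaining_per_sprint) - 1
    burn_per_sprint = (start - latest) / sprints_elapsed
    return latest / burn_per_sprint

# 120 points at release start, burned down over four Sprints:
remaining = [120, 100, 85, 70, 60]
print(forecast_sprints(remaining))  # 60 points left at 15 per Sprint -> 4.0
```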


To illustrate, look at iteration five. The velocity for the team was just as high as it was in iteration four. However, the number of remaining Story Points doesn't seem to have dropped as far because new User Stories were discovered by the Development Team and added to the backlog.


Again, in subsequent Sprints, the velocity remains high, but the rate of burndown remains slow. In this instance, it's because User Stories are being re-estimated, which has increased their size.


These are normal activities for a Scrum team to carry out; they often have to replan as new information comes to light. To make the details more visible, we can enhance the Release Burndown chart, as we'll explain in the following section.


Enhanced Release Burndown chart

In the Enhanced Release Burndown chart, we record data both above and below the horizontal line. Above the horizontal line, we show the Story Points that were forecast at the beginning of the release. Below the horizontal line, we show any new User Stories added, or an increase in the size of existing User Stories.


A Release Burndown chart is a useful tool that requires all items in the release to be estimated. Note that this only applies to the Product Backlog items in the release, not the entire Product Backlog, which should help our team focus their effort during Product Backlog Refinement sessions.


Of course, if we feel an item on the broader backlog needs to be added to the release, we should do so, and estimate it as soon as we can.


Sizing the story may require a small ad hoc Product Backlog Refinement session; after a Daily Scrum is usually a good time. Otherwise, we can wait for the scheduled Product Backlog Refinement session, provided they happen frequently enough.


An Enhanced Release Burndown looks like this:

The two dotted lines plot the trajectory of these two groups of Story Points. The top dotted line represents the velocity of our team, and the number of Story Points decreasing as work gets done.


The dotted line at the bottom represents the trend of work added to or removed from the backlog. The point at which the dotted lines converge gives the forecast for when the release will be complete.
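The convergence point can be estimated with simple arithmetic: work burns down from the baseline at the team's average velocity, while new scope accumulates below the baseline at its own average rate. A hedged sketch, with hypothetical figures:

```python
# Sketch: forecasting completion on an Enhanced Release Burndown.
# The top line falls from the baseline at the team's velocity; the bottom
# line falls as scope is added below the baseline. They meet at completion.
# All figures are hypothetical averages.

def forecast_completion(baseline_points, velocity, scope_added_per_sprint):
    """Sprints until the burndown and scope-change trend lines converge."""
    net_burn = velocity - scope_added_per_sprint
    if net_burn <= 0:
        raise ValueError("scope is growing faster than the team burns it down")
    return baseline_points / net_burn

# 100 points forecast at release start, velocity 20, ~5 points added a Sprint:
print(forecast_completion(100, 20, 5))  # ~6.7, so call it 7 Sprints
```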


One thing to note is that User Stories can be removed from the Product Backlog as well as added, and a User Story's size in Story Points can decrease as well as increase. All changes in scope are applied below the baseline.


New information comes to light all the time in Scrum's fast-paced environment; we'll need to have a robust Product Backlog Refinement process to prevent estimates from going stale.


We can use this approach to forecasting in any situation where we have a bundle of User Stories that represent a milestone when complete. For example, we could also use an Enhanced Burndown chart for tracking when a single feature, or Epic, will complete.


Code quality

Code quality is essential to keep us Agile; poor quality creates technical debt, the sum weight of which will slow us down. Working with low-quality code feels a little like moving through treacle; changes that we thought would be simple turn out to be much harder than we expected.


When considering measurements to help us ensure code quality is high, we should look to measures that help us identify and remove technical debt from our system.


A good analogy is to think of it a bit like weeding a garden. It's a constant chore that we need to keep doing, usually as part of other tasks, rather than a particular job that we do in its own right.


Even if we're just walking through the garden on the way to somewhere else and see a weed, we pull it out. Taking the time to do this, little by little, will stop the weeds getting out of control before any of them become bushes.


So, what are the software qualities we should aim for to remain Agile? And how should we measure them?

The simplest way to measure them is first to set some standards for our team to follow. Record these somewhere visible and easily accessible to the team, ideally in the team's workspace alongside the Scrum Board.


Feedback on how we're progressing with quality standards is essential, and this can originate from many sources, such as peer review.


We should set thresholds, so if one or more of our desired qualities drops too low, the team is warned to take action. There are some handy tools that monitor code quality automatically; examples include Code Climate and SonarQube.


These are some aspects of quality we should be monitoring:

Clear: Clear code often comes down to the team's coding conventions. We should aim for consistency. A few examples include:

  • Keep function names short and meaningful.
  • Make good use of whitespace and indentation.
  • Keep lines of code to fewer than 80 characters.


Aim for a low number of lines of code per function: use functional decomposition, reduce conditional logic to a minimum, and so on.
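A couple of these conventions are easy to automate. The following is a minimal sketch of a check a team could run before review; the thresholds (80 characters, 15 lines per function) are hypothetical team standards, not fixed rules.

```python
# Sketch: automating two conventions -- line length and function size.
# The thresholds below are hypothetical team standards.
import ast

MAX_LINE = 80
MAX_FUNC_LINES = 15

def convention_warnings(source):
    """Return a list of convention violations found in the source text."""
    warnings = []
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            warnings.append(f"line {number}: longer than {MAX_LINE} characters")
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                warnings.append(f"{node.name}: {length} lines, "
                                f"aim for {MAX_FUNC_LINES} or fewer")
    return warnings

print(convention_warnings("def add(a, b):\n    return a + b\n"))  # []
```

In practice a tool such as SonarQube or Code Climate covers this ground far more thoroughly; the point is that conventions only help if checking them is cheap.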


Simple: This is a two-parter:

1. We apply the Agile principle of Simplicity - the art of maximizing the work not done and only write code that we need now.


2. We make refactoring part of our team's DNA. As the saying often attributed to Mark Twain goes, "I didn't have time to write a short letter, so I wrote a long one instead." The point is that to make something succinct and meaningful, we have to put in time and effort; this applies to our software too.


Well-tested: A broad spectrum of tests covering unit, integration, and UI testing. We should aim for good code coverage and well-written, easily executed tests. If we want to maintain confidence over time, our test suite should be automated as much as possible.


Bug-free: Tests are only one part of the equation; as Edsger Dijkstra observed, testing can show the presence of bugs, but never their absence. To aim for a zero defect rate, we have to focus on clear and straightforward code, because one of the biggest causes of bugs is code complexity. We'll take a look at this in more detail in the next section.


Documented: We should provide just enough documentation so that those that follow behind us, including ourselves, can pick up the code and have a sound understanding of our original intent. To be frank, the best way to do this is through tests because they are "living" specifications of how our code works.


Better still, we can write tests using Test-Driven Development (TDD), because this gives us specifications up front before we write code. We'll talk more about TDD in the next blog. Besides tests, we should include code comments, README files, configuration files, and so on.


Extensible: Our software should be easy to build on and extend. Many will use a framework to give it some foundational structure. If we are using a framework to help guide extensibility, then we should adhere to the practices advocated by that framework as much as possible to ensure consistency.


Performant: Performance is key to several qualities, including usability and scalability. We should have baseline measures such as response times, time to the first byte, and so on, which we can use to ensure our software remains useful.


Code complexity

One of the leading causes of bugs in software is the complexity of the system. Simple, well-written code is less buggy and more easily maintainable. Complex code is hard to write bug-free, and subsequently hard to maintain.


One measurement we can use to assess our code's maintainability is called cyclomatic complexity: the number of linearly independent paths through a piece of code.


The more paths going through part of our system, the more likely it is to be complex and have to cater to different parameters or conditions. This makes the code harder to read and comprehend, and therefore more likely to contain or introduce bugs.
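Cyclomatic complexity can be estimated as the number of decision points plus one. The following is a minimal sketch using Python's `ast` module to count branching constructs; real analyzers such as SonarQube do this far more rigorously.

```python
# Sketch: estimating cyclomatic complexity as decision points + 1,
# by counting branching constructs in a Python syntax tree.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    decisions = sum(isinstance(node, BRANCH_NODES)
                    for node in ast.walk(ast.parse(source)))
    return decisions + 1

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def grade(score):\n"
    "    if score > 90:\n"
    "        return 'A'\n"
    "    elif score > 75:\n"
    "        return 'B'\n"
    "    elif score > 50:\n"
    "        return 'C'\n"
    "    return 'F'\n"
)
print(cyclomatic_complexity(simple))   # 1: a single path
print(cyclomatic_complexity(branchy))  # 4: three branch points + 1
```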


Unreadable code is the biggest single source of bugs in software, mainly because it's hard to discern all the different paths through it, and what consequences a particular change will have on them. Thus, testing captures most, but not all, scenarios.


Qualitative metrics

In this section, we'll look at measurements that track sentiments.


Defining what success looks like

When working out what to measure, one of the simplest things our team can do is define what success will look like. In doing this, we will identify measurements that we can use to tell us if our team is on the road to a successful outcome.


Defining our success

In Bootstrap Teams with Liftoffs, we discussed defining success using an activity called Success Sliders.

In the Success Sliders activity, we asked the team to consider seven different sliders which represent different facets of success: Customer Satisfaction, Team Satisfaction, Deliver Value, Deliver on Time, Deliver on Budget, Deliver All Defined Scope, and Meet Quality Requirements.


These particular characteristics focus on delivery in a project setting. In a long-lived product environment with iterative/incremental delivery, these could be redefined, for example:

  • On time becomes early and often
  • On budget becomes cost-effective
  • Defined scope becomes useful software or satisfied users


However, all teams are different, and these particular factors may not apply to our team or its mission. So, in this section, we are going to take a look at a team activity that is used to define our own set of success factors.


  • Activity: What defines our success?
  • What we'll need: A whiteboard, whiteboard markers, post-it notes, and sharpies
  • Setup: A large table that the whole team can fit comfortably around
  • Remember: Set a time box before this activity starts


Follow these steps to define what success will look like:

1. On the whiteboard, write the headline question: how will we define our success?

2. Pass out post-it notes and Sharpies to each team member. Tell them they're going to do some silent brainstorming. Ask them to answer the headline question and write one item per post-it note.


3. Answer any questions, and when everyone is comfortable, set the time box to 15 minutes and start.


4. When 15 minutes is up, assess the room and determine if anyone needs any more time. If everyone is ready, get the team to take it in turns to post their post-its on the whiteboard. Ask each team member to read their post-its out loud as they do so.


5. There will likely be some duplicates and similar themes. Ask the team to come up to the board and do some affinity mapping, which involves grouping related ideas together. Move similar post-it notes next to each other, then circle each group and give it a name that encompasses the fundamental concept.


To create our set of Team Success Indicators, we used the post-its within each group to inspire us to write two statements, one positive and one negative, for each indicator. 


Using our Team Success Indicators

Once we have our Success Indicators set up, it's time for us to see how we're tracking each indicator. At the beginning of a retrospective is one place we can do this.


Set up the whiteboard as follows, with the indicators along the top:


Allow 5 minutes for each team member to vote by placing their X in the happy, sad, or neutral row for each indicator. Neutral indicates they're neither positive nor negative, but somewhere in the middle.

Once everyone has voted, the Scrum Master totals the votes for each indicator.
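The tallying is trivial to automate if the team records votes electronically. A minimal sketch, with hypothetical votes per indicator:

```python
# Sketch: tallying team-health votes per indicator.
# The votes below are hypothetical, one mark per team member.
from collections import Counter

votes = {
    "Speed":        ["happy", "happy", "neutral", "sad"],
    "Healthy Code": ["happy", "neutral", "neutral", "happy"],
}

totals = {indicator: Counter(marks) for indicator, marks in votes.items()}
print(totals["Speed"]["happy"])  # 2
```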


The result will look something like this:

We can use a dashboard to monitor the trends easily. Following is a dashboard for a real-life team, showing their team's health over the past nine months:


As you can see from the last column, Mar 17, things have taken a bit of a downturn in terms of Speed, Healthy Code, Self Organizing, and Clear Goals.


The team was learning a whole bunch of new technologies at once while being put under pressure to deliver by their stakeholders. We conducted a retrospective and used it as an opportunity for the team to define actions they felt would mitigate this.


The larger arrows indicate a gap in which we didn't record our team's health; it's because we were all on holiday.

The speech bubbles at the bottom represent the number of comments our team has made in the OPEN FEEDBACK section.


This particular dashboard is online, and so are our team health surveys, so we link through so people can read the comments. If we were doing this using a physical workspace, we would post the individual comments on our dashboard for everyone to see.


This process is based on an idea from Spotify, which I've modified slightly from their original. If you'd like to create your own version of this, you can use Spotify's online kit to get you started: It's available under the Creative Commons Attribution-ShareAlike license.


User Happiness Index

"How satisfied are our users?" is a question we need to ask ourselves often. It is one measurement we can use to determine if we're delivering useful software.


There are many ways to foster engagement with the people for whom we're building software. Here are a few suggestions:


For direct feedback we could:

Ask for star ratings and comments, similar in concept to the Apple and Google app store rating systems. We could ask for these via our product, or via a link sent in a message or email.


Observe our user while they use our software. We could ask them to describe what they are currently doing so we can hear the process they have to go through.

Survey a group; for example, each time we release a new feature, we could wait for a while to ensure uptake and then poll customers to gauge their impressions.


  • Carry out business and customer surveys in person, if possible. This is an excellent way for us to assess the satisfaction of our client.
  • Capture feedback from our customers using a tool such as UserVoice.

For indirect feedback we can:

  • Look at all calls that have gone to the service desk regarding our software (number and nature)
  • Use analytics to find out the level of engagement our user group has with particular features
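If we gather star ratings, one way to condense them into a trackable number is an NPS-style index: the share of positive ratings minus the share of negative ones. This is a hedged sketch (the scoring bands and ratings are hypothetical), not a standard formula from the text:

```python
# Sketch: a simple User Happiness Index from star ratings --
# percentage of positive ratings (4-5) minus percentage of
# negative ratings (1-2). The ratings below are hypothetical.

def happiness_index(ratings):
    positive = sum(1 for r in ratings if r >= 4)
    negative = sum(1 for r in ratings if r <= 2)
    return round(100 * (positive - negative) / len(ratings))

print(happiness_index([5, 4, 4, 3, 5, 2, 1, 4]))  # (5 - 2) / 8 -> 38
```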



Summary

Building software products is an empirical process, similar to a scientific experiment: we don't always know what is going to work and what isn't. The measurements that we take along the way should be designed to help us determine if we're moving in the right direction.


We looked at two distinct measurement categories, quantitative and qualitative. The quantitative group gives us numerical facts, such as the number of Story Points completed. Qualitative data is more subjective but provides us with feedback regarding human qualities such as satisfaction, both at a team level and at a customer level.


We also looked at the difference between negative and positive metrics. We considered that, while velocity is a useful metric to aid the team in forecasting what it might be able to achieve in a given period, it is not a measurement by which the team should be judged or compared.


The value delivered from a particular velocity varies from team to team. Velocity only tells you the engine is running; the real measure of a team's performance should come from the value they deliver.


In the final section, we considered types of measurement involving probably the most important group of people: those that use our software. This is the ultimate measure of whether we're building the right thing.


In the next section, we're going to look at technical practices that enhance our agility. With a little discipline and the right practices, we can keep our product in good shape, spending less time on bugs and maintenance and more time on the stuff we enjoy doing.


Software Technical Practices are the Foundation of Incremental Software Delivery

Delivering working software in small increments every sprint requires a different way of working. To reduce overhead, teams will often look to technical practices to enhance their ability to deliver.


Choosing the right technical practices will increase our team's agility by giving them the confidence that what they are delivering is well-designed, tested, and meets expectations. By improving the team's confidence, we will speed up our ability to deliver.


In this blog, we'll look at some of those technical practices and how they work with incremental software delivery approaches.

  • Building the thing right versus building the right thing
  • Test-driven development
  • Refactoring
  • Pair programming
  • Emergent design
  • Continuous Integration/Deployment/Delivery and the DevOps culture


Building the thing right versus building the right thing

The practices that we use are often known as "the intangibles" of software delivery because from an outsider's point of view, they aren't visible as part of the features we deliver but they have the potential to help us build a much better product.

  • "Building the thing right" means a focus on crafting our software.
  • "Building the right thing" means focusing on getting good outcomes for the customer.


Unfortunately, sometimes the bias can be towards the latter, the pressure to deliver often being greater than the desire for quality.

Poor quality manifests itself in a few ways: poor performance (slow), doesn't work as expected, takes forever for the team to make requested enhancements, and so on. As a customer, this may only be something that you become aware of after being on the receiving end of poor-quality software.


An incremental approach should help relieve the pressure on the team to deliver. A customer will likely be less nervous about what their money is being spent on if they can see and give feedback on the software. If the customer can use the increments provided so far, it's a win-win.


And although it may seem counter-intuitive, focusing on quality and adopting practices that build it into our software from the beginning will speed up delivery. Why? Because we avoid accumulating something called technical debt.


We first touched on technical debt in The Software Industry and the Agile Manifesto, when discussing the Agile principle continuous attention to technical excellence and good design enhances agility. We defined it as follows:


Technical debt is a term first coined by Ward Cunningham; it describes the accumulation of poor design that crops up in code when decisions have been made to implement something quickly.


Ward described it as technical debt because if you don't pay it back in time, it starts to accumulate. As it grows, subsequent changes to the software get harder and harder. What should be a simple change suddenly becomes a significant refactor/rewrite to implement.


When we first started out writing software, it was fast and easy, just like when we first started mowing a field. We didn't always take the best or straightest path, and in our haste to get the job done, we probably neglected consistency. But when the field was green, we didn't need to think about things like that.


In his blog post, Ron Jeffries visualizes something like the following:

Over time, areas of our code get a little neglected and small bushes start to form. One day, we find that we don't have time to remove the shrubs, so we begin to mow around them. This is similar to the parts of our code that we find hard to maintain.


Instead of tackling the problem head-on and fixing the code, which would slow us down, we go around it, for example by creating similar code that does a slightly different job. The field starts to look like this:


Soon, it gets harder and harder to navigate the area as the bushes get bigger and more numerous. It takes longer and longer to mow the field; at some point, we may even give up trying to mow it at all, only going into that area when we have to. We start to think of this code as "legacy" and consider ways we can replace it. Sometimes, it's just easier to blow it all away and start again:


All of the practices that we describe in this section are aimed at preventing those thickets and bushes springing up in our software. If we maintain a disciplined approach to building software using these techniques, it will ultimately speed up our delivery, increase our agility and our product's medium to long-term viability.



Refactoring

In a nutshell, refactoring is the art of small and continual improvements to the design of our code while preserving its behavior. The intention is to create behavior-preserving transformations to our system which ultimately make it more maintainable.

Each time we change parts of our software, we purposely refactor the parts of the code that are in our path. To ensure we preserve current behavior, we use automated tests which tell us if our code is still working as we refactor.
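A minimal sketch of what "behavior-preserving" means in practice, using a hypothetical order-total example: we extract the discount logic into its own function, and the same automated check passes before and after.

```python
# Sketch: a behavior-preserving refactor guarded by a test.
# The order-total example is hypothetical.

def total_before(items):
    # Original: pricing and discount logic tangled together.
    total = 0
    for price, quantity in items:
        total += price * quantity
    if total > 100:
        total = total * 0.9
    return total

def apply_bulk_discount(total):
    """Extracted during refactoring; behavior is unchanged."""
    return total * 0.9 if total > 100 else total

def total_after(items):
    subtotal = sum(price * quantity for price, quantity in items)
    return apply_bulk_discount(subtotal)

# The automated test that lets us refactor with confidence:
order = [(20, 3), (25, 2)]  # subtotal 110, so the discount applies
assert total_before(order) == total_after(order)
```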


Using Ron's analogy of fields, thickets, and bushes from the previous section, instead of mowing around the bushes, with refactoring, we cut a path through each bush we encounter. It looks a little like the following:


Coding standards are hygiene factors which should be possible to automate using tools such as SonarQube—an open source tool for automating code reviews and the static analysis of code.


One of the principal causes of bugs in our software is complexity, primarily because it makes our code hard to read. If the code is hard to comprehend, it causes developers to make mistakes because they misinterpret how it functions.


Over time, it will even affect the developer who wrote it, and they may struggle to maintain their original intentions.

We should always be thinking about how we can make things more easily readable. Sound naming conventions, fewer lines per method, fewer paths, and loose coupling of code are just a few examples.


Remember, it's relative. Some areas may need more attention than others. Definite red flags requiring attention are units of code with many lines; aim to decompose these into more manageable chunks.


Also look for places that have a high cyclomatic complexity. These are hotspots for attention because the code in these locations has multiple paths running through it, often obscuring some or all of its purpose.


You can refactor without automated tests, but you have to do it carefully. You will need to test manually a lot and use tools that you trust; for example, there are automated refactoring tools for Java and C#.


For instance, most modern IDEs have simple refactoring support, so when changing a method signature, the IDE will help locate and modify it throughout the codebase.

Some tools are purpose-built and more sophisticated than those provided by an IDE. ReSharper by JetBrains is one example, which plugs into Visual Studio.


How does this keep us Agile?

It reduces the thickets and bushes that start to spring up in our software; it prevents them from taking firm root in our code. This reduces the time it takes for us to introduce additional functionality or enhance and maintain existing features, making us much more reactive to our customers' changing needs.


Things to try

Try following The Boy Scout Rule, as described by Uncle Bob in Clean Code. He refers to the Boy Scouts of America as having a simple rule: Leave the campground cleaner than you found it.


We can apply this to our code. If we always leave it in a better state than when we found it, over time it will maintain its usefulness. There will be less opportunity for those thickets and bushes to grow.


Remember, refactoring should be treated as making small, continuous improvements to our software.


Test-Driven Development

Test-Driven Development (TDD), is a software discipline in which we write an automated test case before we write any code. It is a principal practice of Extreme Programming.


The basic pattern is as follows:

The first step is to write an automated test case for the next piece of simple functionality we intend to implement. The test will fail because we haven't written any code to fulfill it yet.


The next step is to write the most straightforward code implementation to fulfill the test and make it pass.

The final step is to refactor the code so that it meets our coding and implementation standards.
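The three steps above can be sketched in code. This is a hypothetical price-formatting example, not one from the text, with the red-green-refactor cycle marked in comments:

```python
# Sketch: one red-green-refactor cycle on a hypothetical example.

# Step 1 (red): write the test first. Run before format_price exists,
# it fails with a NameError -- that's the failing "red" step.
def test_formats_two_decimal_places():
    assert format_price(5) == "$5.00"

# Step 2 (green): the most straightforward code that makes it pass.
def format_price(amount):
    return f"${amount:.2f}"

# Step 3 (refactor): tidy naming and structure while re-running the
# test keeps us green.
test_formats_two_decimal_places()
```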


When we combine refactoring with TDD, they make a powerful pairing. The TDD test suite enables us to take the refactor step with confidence that the behavior hasn't changed while we make the code simpler and more in keeping with the system design.


So, if we're using a Model-View-Controller (MVC) framework, then while refactoring our code, we would be ensuring our newly-added functionality is in keeping with the framework's principles.


For example, our quick cut of code to make the test pass may have left some business-related logic in our controller code. During the refactoring step, we would move this code so that the model handles our business rules.


This maintains the separation of concerns called for by the framework we're using, and also makes our software more understandable to anyone coming through behind us (including ourselves).


How does this keep us Agile?

Let's just state up front that the topic of Test-Driven Development can and will cause polarizing responses among software Development Teams. People either love it or hate it.


I believe much of the controversy is caused because we fail to see TDD for what it is: a specification-driven test harness for designing and writing more straightforward software. TDD packs so much punch in the Agile community because it encourages a mindset of building what is necessary and nothing more.


Simple software is easier to maintain, more robust, easier to scale, and lacks technical debt or feature bloat. If Scrum is a set of training wheels for better software delivery, TDD is the training wheels for better (simpler) software design.


The many benefits of using TDD include the following:

  • It's a specification-focused approach that reduces complexity because we're less likely to write software that we don't need
  • It makes our design simpler and our code clearer
  • Refactoring refines our design and makes our code more maintainable, a step often haphazardly undertaken if done without TDD's specification-driven framework
  • Writing tests as specifications upfront improves the quality of our specifications and makes them more understandable to others
  • The automated test suite serves as documentation for our software: the tests act as the specification, and it's easier for new team members to understand what the code does
  • Having a suite of readily-repeatable tests gives us the confidence to release our software
  • The resulting tests support refactoring, ensuring that we can make the code simpler while maintaining its intended behavior

Some argue that TDD approaches take longer to write code, and this is true in the literal sense because we're designing and writing specifications at the same time.


However, when you consider all the other activities involved in software development, other than just "writing" code, this argument breaks down. Plus, the medium to long-term effects on our code base will result in significant cost savings. Plainly put, in my experience, TDD software has far less technical debt and far fewer bugs.


In fact, you should probably see TDD as a way of improving and validating your software design. It's a much more holistic perspective. It's testing++.


Things to try

Select a TDD champion, the person or people who most support the practice within the team. Have them set up the framework and create the initial approach.


Choose one User Story to implement TDD on. Once the User Story is done, have the champion/champions workshop their findings back to the team and teach them how they can implement TDD as well.


The team should then select the next User Story they'd like to TDD and then pick a team member to pair program the solution with the TDD champion.


Alternatively, if the TDD champion is already a confident TDDer, they could introduce the whole team to the Test-Driven Development approach using workshops, or pair or mob programming.


Whichever approach you take, you should work in that configuration as a team until the User Story is "done," that is, delivered into the hands of your customer.


A full SDLC end-to-end experiment is necessary if we're to understand the wide-ranging benefits this particular practice has. Remember, building software isn't just about writing code; our work isn't complete until it's in the hands of our customer.


Pair programming

Pair programming is a software practice where two software developers will share one computer. The one with the keyboard is the one programming and is concerned with the details of implementation. The one without the keyboard is maintaining the bigger picture and the overall direction of the programming.


There will usually be a healthy level of discussion between the pair; the one on the keyboard will often describe what they are doing and thinking, while the other will be calling out the next steps and pointing out any issues.


It's a little bit like a rally car driver working with a navigator. The driver is responsible for focusing on solving the immediate problem, the handling of the car around the course. The navigator has the map, keeps track of their location, and ensures the driver has enough information about the direction being taken and any obstacles ahead.


It works well because the developer who is navigating is holding the bigger picture, which ultimately helps the developer who is driving to keep focused on what needs to be done to solve the immediate problem. The navigator is also able to spot potential problems in the design, and even simple things like typos, which will often go unnoticed otherwise.


When turning up the dials on good practice to the maximum for Extreme Programming, Kent Beck's view was that if we value peer code review so much, why not do it all the time? Extreme programmers will pair program all code that goes into a production environment. Pair programming is peer review on steroids.


How does this keep us Agile?

Pair programming keeps us Agile because it shortens the feedback loop when reviewing code. This is feedback at its earliest and means that we're able to incorporate changes immediately, often nipping potential issues in the bud.


It's particularly potent when we follow the driver/navigator scenario, sometimes known as strong style pairing, the name attributed to a pair programming approach advocated by Llewellyn Falco. He states that "for an idea to turn into the code, it has to go from my head through somebody else's hands."


This is cost-effective in the long run because it creates better quality software from the outset. Two sets of eyes are better than one and pairs are also more likely to keep each other honest; for example, due to peer pressure, we won't take shortcuts.


It's also an excellent way to skill or knowledge share, for instance, if one team member knows a particular part of the system well, or wants to coach a practice such as TDD.


Things to try

When initially introducing pairing, it's worth recognizing that it requires more concentration, mainly because a pair of programmers are less likely to get interrupted or distracted.


Before starting, discuss pairing etiquette:

Agree on a start and finish time for the session. Ensure you schedule time for regular breaks.

Discuss how to avoid distractions. For example, turn off email and instant messenger notifications, and have mobile phones set to silent or, preferably, off.


Decide how to manage outside interruptions. For example, explain to the person or persons interrupting that you'll be on a break soon, tell them when that break is and that you can talk to them then.


Determine who will drive first and how often you'll exchange the keyboard. Make sure you swap the keyboard regularly. Do not allow one person to drive exclusively.


Accept that pairing is not silent, but like any new skill, we will need to learn how to describe what we're doing while coding. It's often odd at first.


Remember, don't just pair program the "hard bits," pair program for an entire story from start to delivery.

To keep things interesting, try pairing with different members of the team. Also, bear in mind that both members of a pair don't have to be software developers.


For example, try pairing with a Product Owner, in particular, if the current work involves creating a tangible aspect of the system such as part of the user interface or another form of output. In these situations, they will be able to offer immediate feedback on the direction you're taking.


Activity – pair programming ping pong

Pair programming ping pong combines the two practices of TDD and pair programming and turns them into a fun, collaborative game.


What you'll need: Two software developers, one computer

Setup: The usual pair programming setup

It starts in the usual pairing way, with the two developers using one computer, one acting as the driver, the other as the navigator.


We play ping pong in the following way:

  • The first developer writes a new test. It fails because there is no code to fulfill it yet. The keyboard is passed to the second developer.
  • The second developer implements the code needed to fulfill the test.
  • The second developer then writes the next test and sees that it fails. The keyboard is passed back to the first developer.
  • The first developer implements the code needed to fulfill the test and so on.
  • In the usual TDD manner, refactoring happens after the code to fulfill the test is written.
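The exchange above can be sketched in a few lines of Python, using a hypothetical fizzbuzz kata as the task being paired on:

```python
# Developer 1 writes the first test; it fails because fizzbuzz()
# does not exist yet. The keyboard is passed to Developer 2.
def test_multiples_of_three_return_fizz():
    assert fizzbuzz(3) == "Fizz"

# Developer 2 writes just enough code to make that test pass, then
# writes the next test and passes the keyboard back.
def test_multiples_of_five_return_buzz():
    assert fizzbuzz(5) == "Buzz"

# Developer 1 implements the code needed to fulfill the tests, and so on.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Each exchange turns red to green before the keyboard changes hands.
test_multiples_of_three_return_fizz()
test_multiples_of_five_return_buzz()
```

The kata itself is unimportant; the rhythm of test, pass keyboard, implement, test is what the game reinforces.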


Emergent design

At the beginning of this blog, we looked at the birth of Agile and what provoked this movement. In the above section, The Software Industry and the Agile Manifesto, we discussed how, in an attempt to correct our inability to bring projects in on time and budget, we sought to get more precise in our predictions.


It was felt that if other engineering disciplines, such as civil engineering, could be precise with their processes, then software engineering should be able to follow suit.


However, the reality is that we don't build software like we construct bridges or buildings or hardware. With all of those physical things, by the time we get to the actual construction phase, we've already created the design schematics. The construction phase is an endeavor in logistics based on assembling the different components in the right order.


As Neal Ford, software architect at Thoughtworks, describes it, with software, the code itself is our design. Our build phase in software is when we compile/interpret the program code so it can be run.


Our program's concrete outputs are the actual results of the design, in the form of pixels illuminated on the screen or messages sent over the ether. That is the tangible result of "building" the design we've written in our software's code.


So, if we're to get better at producing the right output, we have to get better at "designing" our code. Practices we've discussed so far in this blog will certainly help us evolve our design safely and efficiently.


For example, TDD creates a test harness based on our software's specifications. This allows us to design our software to the specifications prescribed in our requirements (User Stories, their associated acceptance criteria, and possible test scenarios).


TDD's red/green/refactor approach to software development helps us ensure that intended behavior will continue to work as we begin to make changes to our software's underlying structure, for example, improving its scalability.
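As a minimal sketch of that safety net, assuming a hypothetical price calculator already covered by tests written during the red/green steps:

```python
def total_price(items):
    # Before refactoring, this was a hand-rolled accumulator loop. Its
    # behavior is pinned down by the test below, so we can restructure
    # the internals with confidence that nothing observable changes.
    return sum(item["price"] * item["qty"] for item in items)

# The existing test harness verifies behavior is unchanged after the refactor.
def test_total_price():
    items = [{"price": 2.0, "qty": 3}, {"price": 1.5, "qty": 2}]
    assert total_price(items) == 9.0

test_total_price()
```

The function and test names here are illustrative; the point is that a green suite before and after a structural change is what makes refactoring safe.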


How does this keep us Agile?

Software design isn't something we do before we start writing code; it's something we do as we write it. When patterns emerge, we can begin to take advantage of them, creating abstractions for reuse.


This helps us follow the You Ain't Gonna Need It (YAGNI) principle of software design. YAGNI warns against designing software features for some perceived future need rather than the actual needs we have now.


It doesn't mean that we just start programming. Having some understanding of the problem and formulating a strategy regarding how we're going to solve the problem is necessary to get us started on the right path. It does mean that we shouldn't try to solve problems we don't have yet.


Activity – emergent design discussion

Hold a time-boxed discussion with the team about software design.


What you'll need: The team, a table big enough for the team to sit around, a whiteboard if you want to record salient parts of the conversation, and a timer.

Setup: A roundtable discussion with a designated scribe if taking notes.


Compare and contrast software to civil engineering. Take a look at how they design, test, and build versus how we design, test, and build.


Discuss the statement: "Specifications are our design, the code is our design, testing is our design validation, build/compile is our construction phase, and the output we see on-screen is our building."


The DevOps culture

Continuous Integration (CI), Delivery, and Deployment tackle two problems that we traditionally leave until the end of our software life cycle: integration of code and strategies for deployment.


We've learned that doing something in large chunks is very risky, especially in a complex software environment. Waiting until the end of your software development process to work out how to combine different parts of the system and deploy them leaves a critical feedback loop open for too long.


Work done during the integration and deployment phase will often include changes to how our software is built. Leaving it until the last moment to receive this feedback will either be costly or will mean it's ignored.


Modern software teams know that the work isn't done until it's delivered to our customer, which for most means it's deployed and operational in our production environment.


They will often look to automate the tools they use to integrate and deploy; after all, it is something they are going to be doing very regularly, so it makes sense to invest time in it.


Our Development Team will take on more care and responsibility in the management of our production environments because they know this will allow them to deliver smoothly and quickly.


It's at this point that the lines begin to blur between what was traditionally seen as development and operations. As we move further towards cloud computing, our infrastructure will become increasingly automated, hence the term infrastructure as code.


This is the rise of the DevOps culture, as we begin to see the cross-pollination of skillsets with a mix of development and operations happening across our teams.


In this section, we'll explain what CI, Delivery, and Deployment are and we'll look at the benefits they bring.


Continuous Integration

When we perform code integration, we're combining the software that we've written with our team's codebase. It's at this point we often discover a few things:

  • How much has changed since we last committed?
  • What degree of overlap has there been? For example, has shared code been changed for different purposes?
  • How well does our software work together?


During a commit of our software, we will often need to resolve differences in common code, particularly if several pieces of work are being carried out in the same area of the system simultaneously.


Modern source control systems will highlight the areas that need attention and will not allow the commit to go ahead until the issues are resolved. We will often employ a tool that allows for the comparison of the code being merged.


Still, this will only go so far in helping us complete the integration. There is still the potential for something to be missed, and if the wrong piece of code is chosen during a merge, then the behavior of the software will change unexpectedly after the commit is complete.


Even if you are in the early stages of the development of your product, it's likely the code is already being used by your customer, so a disciplined approach to code check-in and source control is needed.


The consensus is that the more often our developers commit their code, the sooner they will discover and resolve any integration issues. Small integrations resolve more easily and give each developer involved an idea of the direction the others are taking in the code.


This is why, in the Agile community, there has been a movement away from techniques that prolong the length of time before code is integrated, such as feature branching.


Instead, we favor trunk-based development—small discrete code commits to the main development code branch, sometimes using feature flags or branch by abstraction to hide code that isn't ready to be consumed yet.


Trunk-based development goes hand in hand with CI, which involves making regular code commits, at least once a day. The code is built on every commit; if the build and automated tests succeed, the commit is accepted. If the build or tests fail, the commit is rejected and the developer(s) must fix any problems before attempting to commit again.
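That commit gate can be expressed as a simple decision rule. The sketch below is illustrative, not tied to any real CI tool; in practice the build and test results would come from your CI server:

```python
def ci_gate(build_passed, tests_passed):
    """Decide whether a commit may go into trunk, given the outcome
    of the automated build and test run triggered by the commit."""
    if not build_passed:
        return (False, "build failed: fix before committing again")
    if not tests_passed:
        return (False, "tests failed: fix before committing again")
    return (True, "commit accepted into trunk")

# Usage: a commit whose build succeeds but whose tests fail is rejected.
accepted, reason = ci_gate(build_passed=True, tests_passed=False)
print(accepted, reason)  # False tests failed: fix before committing again
```

The value of the gate is that the trunk is always in a known-good state, which is what makes committing at least daily safe.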


In CI, we focus on the integration and validation of code at the unit and integration test-level. To achieve this, it's important that we have a reliable suite of automated unit and integration tests.


How does this keep us Agile?

CI has spread beyond the XP community to other Agile practitioners because it reduces the likelihood of integration issues. It also means we receive feedback earlier regarding how our code performs with that of other developers.


CI has significant benefits over source control strategies such as feature branching, which creates a tendency to refine features until they are ready to release. Leaving a feature branch open for an extended period without committing back to the trunk increases the risk of collisions when it is finally merged.


Things to try

To set up CI, there are a few things that we need to put in place, some of them you'll probably be doing already. My suggestion is that we don't attempt to implement all the steps at once unless we have an experienced DevOps specialist in our team. Instead, we'll need to learn as we go. The following stages are suggestions:


Stage 1:

  • We use a source code repository to manage our software, its configuration, and documentation
  • We check-in code at least once a day
  • Both the building and testing of our software are optimized for us to get fast feedback
  • Our code is built at every check-in


Stage 2:

  • All tests are automatically run at every check-in; any failures are flagged
  • The build and test results are visible to all


Stage 3:

Tests are executed in a test environment similar to our production environment. Teams usually set up a CI server to manage this; tools such as Jenkins, Concourse, Codeship, and TeamCity are perfect for this job.


Our software is deployed automatically to the test environment after a successful build.


Continuous Delivery

Continuous Delivery (CD) is an extension of our CI practice and involves setting up push-button deployment processes so that our software can be deployed on request to the production environment. This requires an extra level of discipline on top of CI and means we will need to ensure the following:

  • Our software is always kept in a deployable state.
  • Our software's environment configuration is automated, which means we can deploy to any environment on-demand.
  • It often requires our software's deployment to be outage-less, meaning that new features are delivered seamlessly to our customer.


A deployment pipeline is set up which automatically ensures various steps are taken during each deployment.

Our team takes ownership and responsibility for the deployment and management of their software in the production environment. This means they are no longer reliant on another team and don't have to wait in line.


Setting up a Continuous Delivery process requires close collaboration between our development and operations teams. The cross-pollination of skillsets creates more understanding and ownership of our product in its operational environment within both teams. This is often referred to as a DevOps culture.


This level of automation often involves problem detection during deployment; if any issues are detected, the deployment can be stopped and rolled back immediately. This introduces resilience to our release process.
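A deployment step with automatic rollback might look like the following sketch. The functions are hypothetical stand-ins for real deployment and health-check tooling:

```python
def deploy(new_version, current_version, health_check):
    """Roll out new_version; roll back to current_version if a
    problem is detected during the rollout."""
    live = new_version                 # switch traffic to the new release
    if not health_check(live):        # problem detected during rollout
        live = current_version        # roll back immediately
        return (live, "rolled back")
    return (live, "deployed")

# Usage: an illustrative health check that rejects any version tagged "broken".
ok = lambda version: "broken" not in version
print(deploy("v2-broken", "v1", ok))  # prints ('v1', 'rolled back')
```

Real pipelines add safeguards such as canary releases and gradual traffic shifting, but the rollback-on-failure principle is the same.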


The benefits that CD brings us include:

By deploying to production early and often, we significantly reduce the risk that something might go wrong during that later stage of our delivery cycle. We increase our product's resilience and overall stability because we can quickly and seamlessly roll forward and back in our production environment.


Making changes to production daily irons out any potential problems early and makes deployment to our production environment an insignificant event, and less costly.


Deployment is treated as part of our development cycle; this solves two problems:

Our Development Team has ownership of end-to-end delivery through our deployment pipeline. We no longer have to wait for another team to do work for us.


Our business doesn't have to wait to receive the benefits of our good work.

We close all-important feedback loops by getting our software into our customers' hands as soon as possible.


Things to try

To set up CD on top of the three stages already outlined in the CI section, we'll need an additional stage:


Stage 4:

  • Implement a push-button outage-less deployment of our software to our production environment
  • Confirm success or highlight failure of the release to our human deployment technician
  • Implement automatic rollback functionality should a problem be detected during the rollout phase
  • Make errors visible to our human deployment technician


Continuous Deployment

Once you have CI and CD in place, the final step is to remove the need for the push button and fully automate deployment to our production environment.


For a variety of business reasons, it sometimes makes sense not to switch on new features immediately. We can use strategies such as feature toggles to hide or show features once they've been deployed.


Feature toggles enable us to deploy a feature to our production environment behind a toggle or switch, which can be turned on/off at any time without the need to redeploy code. This has a couple of benefits:


It's rare for a software team to have a testing environment exactly the same as our production environment; it's too expensive. With a feature toggle, however, we are able to selectively turn on the feature and test it.


This will help us ascertain if our software works well with production-only configurations such as load balancing, caching, replication tiers, and so on.


It gives us a more controllable release strategy. With a feature toggle, we can selectively turn on features for certain user groups, for example, in an alpha testing situation where we want our new features to be seen by a select group.
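A minimal feature-toggle sketch illustrates both benefits: code is deployed to production but only visible to selected user groups. The toggle store and group names here are hypothetical:

```python
# An in-memory toggle store; real systems would back this with a
# config service so toggles can change without redeploying code.
TOGGLES = {
    "new_checkout": {"enabled": True, "groups": {"alpha_testers"}},
    "new_search":   {"enabled": False, "groups": set()},
}

def is_enabled(feature, user_groups):
    """A feature is visible if its toggle is on and the user belongs to
    one of the allowed groups (an empty group set means everyone)."""
    toggle = TOGGLES.get(feature)
    if toggle is None or not toggle["enabled"]:
        return False
    return not toggle["groups"] or bool(toggle["groups"] & user_groups)

print(is_enabled("new_checkout", {"alpha_testers"}))  # True
print(is_enabled("new_checkout", {"customers"}))      # False
print(is_enabled("new_search", {"alpha_testers"}))    # False
```

Flipping `enabled` or widening `groups` releases the feature to more users without touching the deployed code.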


How does this keep us Agile?

The benefits on top of CD include:

Every change goes into production directly; each increment goes as soon as it is committed.


The feedback loop is as short as possible.

By using strategies such as feature toggles, versioning, or branch by abstraction, it is possible to deploy to our production environment without necessarily using the new functionality.


This means that our code, even though it isn't finished yet, is already deployed to production, which gives us valuable feedback on the integration and deployment of our new software.


Things to try

On top of the stages required to implement CI and Continuous Delivery, we'll need the following stage:


Stage 5:

  • Implement an automated smoke test in production, triggered by each deployment
  • Implement a notification system that immediately alerts us to the success or failure of a production rollout
  • Automate the outage-less deployment to production, triggered by a successful build/test on our CI server
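The smoke test in this stage can be as simple as probing a handful of critical endpoints after each rollout. The endpoint list and the status-fetching function below are hypothetical placeholders for real HTTP checks against a production environment:

```python
# Critical paths that must respond after every rollout (illustrative).
SMOKE_ENDPOINTS = ["/health", "/login", "/checkout"]

def smoke_test(fetch_status):
    """Probe each critical endpoint; return (passed, failing_paths)."""
    failures = [path for path in SMOKE_ENDPOINTS if fetch_status(path) != 200]
    return (not failures, failures)

# Usage: simulate a rollout where the checkout endpoint is broken.
statuses = {"/health": 200, "/login": 200, "/checkout": 500}
passed, failures = smoke_test(lambda path: statuses[path])
print(passed, failures)  # False ['/checkout']
```

A failing smoke test would feed the notification system and, in a fully automated setup, trigger the rollback described earlier.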



We've looked at a few different practices that specifically target increasing our confidence when using an incremental delivery approach.

Refactoring helps us keep our software in a healthy and easy to maintain state. Using the analogy of the field, it's essential that we keep the weeds down because before we know it, we may be dealing with thickets or bushes. To do this, we regularly garden our code as we enhance or add to existing areas of functionality.


We can think of Test-Driven Development (TDD) as a specification-driven approach because it changes our thought processes regarding how we write software compared to a test-after pattern.


Refactoring and TDD support an emergent approach to designing our software. So, although we still require some architectural design upfront, we require less big-design thinking overall.


Also, the resulting TDD automated test suite helps us verify and validate our software is still working as intended throughout its development life cycle.


Finally, CI, Continuous Delivery, and Continuous Deployment allow us to avoid the significant integration and deployment issues that plague waterfall projects.


All of these practices for building software keep our focus on delivering small increments of working software. They help us avoid the perils of Water-Scrum-Fall.


Tightening Feedback Loops in the Software Development Life Cycle

Now that we've taken you through the foundations of setting up your Agile team, this blog is where we start to look at the "secret sauce" of Agile.


The adage "people don't know what they want until they see it" is just as true in the software industry as any other. The sooner we can deliver something useful to our customer, the earlier they can use it in a real-world environment, and the sooner we will be able to gather feedback on whether we're on the right track.


People often talk about Agile "speeding up" delivery, which it usually does. But this doesn't necessarily mean we deliver the same set of requirements at a faster pace just by working overtime.


Working longer hours is an option, but it is not sustainable over an extended period. Instead, delivery in an Agile context means we become smarter in terms of how to deliver the same set of requirements; this is how we speed up.


How we build and deliver software will make a big difference to whether we successfully give our customers something they need.


In this blog, we'll look at techniques for getting early confirmation that our ideas are solving the problems we've been asked to address.

We'll look at three different methods, which can be implemented individually or combined, to help us deliver in a smarter way.

  • Implementing incremental delivery in Agile:
      • Working with software in small, manageable chunks
      • How to make use of inspecting and adapting in your Scrum ceremonies
  • Introducing some Lean thinking to improve flow:
      • Systems thinking: optimizing the system as a whole, not locally
      • Changing our workflow by managing the work in progress
      • Developing a mindset for continuous process improvement
  • Adopting Lean Startup principles to validate product ideas sooner:
      • Build, Measure, Learn: learning rapidly by doing and failing fast


Implementing incremental delivery in Agile


In the section The Software Industry and the Agile Manifesto, we discussed the Agile values and principles. One principle, in particular, has relevance: "Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale."


Working software is a term we use to describe software that is working as it was intended, that is, it has met all of its acceptance criteria and our Definition of Done (DoD) and is either waiting to be shipped or already has been. The concept of working software is intended to solve several problems we often run into in the software industry:


It moves the focus from documentation and other non-functional deliverables to what we're being paid for: software. The intention of the Agile value working software over comprehensive documentation isn't to say documentation isn't needed; it's about maintaining the right balance. We are, after all, a software product development team, not a documentation team.


To illustrate this, if we deliver technical design documentation to our client, it will probably make little sense to them. And unless they are technical, it won't give them any understanding of how their product is evolving. Working software, however, is tangible, much nearer to its final state, and will provide them with a real sense of progress.


We want to deliver increments of working software so that we build up the product as we go. If we do it smartly, parts of the product can be put to use before the rest becomes fully functional.


It emphasizes working software because it lets our client preview how things could be and gives them an opportunity to provide relevant feedback based on the real product.


To give you an analogy, imagine we are considering a new kitchen for our house or apartment. We might look in brochures or go to a kitchen showroom to see examples of kitchens we like.


We might work with a designer to establish the important key attributes that we're looking for. They might create a computer-generated concept drawing of how our kitchen might look once it's finished. It might even be rendered in 3D, and we might be lucky enough to get to walk around it in a virtual reality environment.


However, until our new kitchen is installed in our space, we won't know if it works how we hoped it would. The only way we'll find out is by using it.

The design steps employed by the kitchen designer and our imagination are done to help us envisage any potential problems, but until we're hands-on with our kitchen, we won't know if it's right for us.


If it isn't, then it might be quite costly to fix a finished kitchen. The same goes for software; correcting it can be quite expensive. The later we leave it, the more expensive it becomes.


However, with the right software delivery approach, we can reduce the risk of this happening. To do this, we have to think beyond just using iterations; we have to put some thought into how we slice up our product for delivery.


For instance, just completing part of the data layer of an application, without any implementation of the user interface elements that a user will interact with, doesn't deliver anything of real use for our business.


Without a technical understanding of how software is built, our customer will find it impossible to imagine how the software might look in its final state and will be unable to give any real feedback.


The sooner we can deliver usable software, the sooner we can get feedback on whether it is useful. This is what we mean by tightening feedback loops. Let's look at how to do this.


Working with software in small, manageable chunks

The easiest way to address the risk of people not knowing what they want until they see it is to deliver usable increments of working software to them as soon as possible. A crucial aspect of this early-and-often approach is that we will get meaningful feedback that we can incorporate back into the ongoing build and delivery.


Using this approach to software delivery means that everyone, including our customer, has to understand the caveats. They won't be seeing or using software that is either necessarily complete or has the final polish applied.


In fact, they will see software as it evolves in both function and form. User experience, which includes user interactions, process flow, and graphic design, will also be iteratively applied as we learn more.


How we break down the requirements into small manageable chunks is the first step to achieving this. It's vital that we first deliver the parts of our product that we most want feedback about. These are often our core business processes, the things that directly or indirectly make us money, and therefore involve the most risk.


To achieve this approach, we should stop thinking of building up functionality in layers:

This method may seem sensible because by starting at the backend and developing the data store with its associated business logic, we are creating a foundation to build upon. However, it's somewhat of a construction industry analogy that doesn't apply to how we make software.


Instead, to deliver incrementally, we have to think of each increment as end-to-end functionality that provides some business value to our customer. We often refer to this as vertical slices of software, because every small slice carves through each application layer, delivering aspects of each. The concept of vertical slicing is shown in the following diagram:


We include the User Interface layer so that we provide our client with some way of interacting with our software so that they can make real use of it. If we think of each vertical slice as a feature, we can carve up our features in a way that will make sense as we build out the product incrementally.


For instance, if we are building an online shop, the first vertical slice could be the display of items we have for sale. As well as showing items for sale, we'll probably need some way of managing their details. Again, we can build something rudimentary to start with and then incrementally deliver enhancements as required.


The next vertical slice could be either the checkout process or the search facility. Any of these features can be completed independently of each other, but it probably makes sense to build them in a particular order. For instance, without items for sale the search facility won't work, nor will the checkout process.
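A first vertical slice for this online-shop example might look like the sketch below: one thin path through the data, business-logic, and UI layers that delivers a usable "list items for sale" feature end to end. The layer boundaries and names are purely illustrative:

```python
# Data layer: a stand-in for a real data store.
ITEMS = [{"name": "Mug", "price": 7.5}, {"name": "T-shirt", "price": 12.0}]

def items_for_sale():
    """Business logic layer: return items available for purchase."""
    return [item for item in ITEMS if item["price"] > 0]

def render_catalogue():
    """UI layer: something the customer can actually see and react to."""
    return "\n".join(f'{i["name"]}: ${i["price"]:.2f}' for i in items_for_sale())

print(render_catalogue())
```

Rudimentary as it is, this slice touches every layer, so the customer can use it and give feedback, and later slices (search, checkout) can build on the same foundation.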


If we build up our feature set in a way that makes sense, we get valuable feedback from our customer that we're building the right thing. And as we deliver further increments of working software, they can use these to determine if the software we're developing will meet their needs.


We also get to validate our technical approach by asserting whether or not we're building the thing right. Once we have an end-to-end delivery taking place, we can start to iron out problems in our integration and deployment processes. We will also get to learn sooner how we will configure and manage our production environment.


When building any product, we take the Goldilocks approach—not developing too much, or too little, but getting it just right. This, in particular, should influence our decisions in terms of architecture and DevOps; we do just enough to get our current feature or feature slice working. Architecture and infrastructure have to be built incrementally too.


We'll talk more about how to prioritize features later in this blog, in the Build, Measure, Learn - Adopting Lean Startup and learning to validate ideas section.


Inspection and adaptation

When Ken Schwaber and Jeff Sutherland created Scrum, they founded it on empirical process control theory, which has three pillars: transparency, inspection, and adaptation.


Transparency comes in many forms in our Scrum team, mainly through the tenet of making work visible. The Scrum Board plainly shows what our team is working on. There is no hidden work in Scrum;


our team members are open and forthcoming about what they are working on when discussing it at the Daily Scrum or the Sprint Review. Even the Product Backlog is available for all to see.


Scrum provides high-bandwidth information regarding our team, our process, and the product on which we're working. At various points, we can inspect the information that we receive and make adjustments to how we work, or what we're working on. If new data comes to light, we adapt and change it up.


In empirical processes, all knowledge is gained through sensory experience, and custom and tradition should be avoided.


In this way, we should encourage our Scrum team to challenge the status quo and prevent any this-is-how-we-do-things-around-here thinking. If we do, we will benefit from profound changes in our approach that will significantly increase our chances of success.


In the Scrum framework, there are multiple opportunities to inspect and adapt:

  • During Sprint Planning, as the team determines how they will implement the Sprint Backlog
  • At the Daily Scrum, when new information is uncovered during the implementation of a User Story
  • When a User Story is completed or a new feature is delivered, and we check in with our Product Owner or business owner to verify it is as expected
  • At the Sprint Review, when our stakeholders are invited to give feedback
  • During the Sprint Retrospective, when we uncover what is and isn't working well and create a plan to take action and change things


These are all checkpoints for us to consider if new information has come to light since the last time we met and decided what to do.


The importance of User Experience (UX)

Along with our Product Owner, our User Experience (UX) specialists are usually the first people to engage directly with our key stakeholders to determine what is wanted.


The job of our UX professionals is to help our team turn what our customer wants into something our customer needs. If the customer needs aren't obvious, a UXer has tools in their toolbox which can be used to elicit further information.


Their toolset includes creating wireframes, semi-interactive prototypes using tools such as InVision, or full mockups using HTML/CSS and JavaScript. Each gives an experience close to the real thing and helps our customers share and understand their vision.


The UX professional, working closely with the Product Owner, is responsible for two aspects:

  • What is required
  • How it will work best

UX covers all aspects of the user interface design. It can be broadly summed up as interaction design and graphic design with a smattering of psychology, but that is just the tip of the iceberg.


Although the two disciplines overlap to a certain degree, interaction design concerns itself with user interactions and the process flow within the product, while graphic design concerns itself with presentation, particularly information hierarchy, typography, and color.


A UX specialist has to have a good ear, patience, and be willing to go through multiple iterations to get feedback.


If we're to create software that is intuitive and straightforward to use, we have to start with the UX because it drives how we build the application.


Different user experiences will result in different application architectures. We create software for people, so we need feedback from those people, as soon as possible, on whether we're building the right thing.


Remember, it's not sufficient to just make software how we think it should work. Instead, we need to turn what the customer wants into something the customer needs.


Shifting left

Shifting left is the concept of incorporating specific practices, which have traditionally been left until late in the process, much earlier in our workflow.


System integration, testing, user experience, and deployment can all profoundly affect the outcome and success of our product; they all validate the product's viability in different ways.


If we start thinking about them sooner in our development lifecycle and we start to build these strategies as we go, we have a much better chance of success and of avoiding any nasty surprises.


The following diagram shows what we mean by shifting left:

By moving these practices towards the beginning of our product development life cycle, we will start to get feedback sooner that these aspects of our product will work.


For example, in a linear development process such as a gated waterfall approach, we often leave a full integration and system test until the end of the project. This creates several problems:


Taking the "big bang" approach to system integration often uncovers many false assumptions that were made, and therefore many changes will be required. It will be costly, with days or even weeks of lost time.


A large-scale test at the end of the development cycle will often find numerous problems that need to be fixed. Sometimes it will discover fundamental issues that require substantial reworking or even going back to the drawing board.


In the same way, I've seen user interaction and graphic design left until the end of the development cycle. These specialists are often brought in late to the project, with the hope that they will make things work better or make things look nice.


Unfortunately, if making things work better or look nice involved significant changes to the user interface, it would also often require a substantial shift in the architecture of the application.


Simply put, by working on the things that could have a profound effect on the outcome of the product sooner, we reduce risk.


The following graph shows the relative cost of fixing deficiencies depending on where they are found in the software development lifecycle:


Defects in software are anything that doesn't work as intended or expected, whether it's a miscommunicated, misunderstood, or poorly implemented requirement, or a scalability or security issue.


We uncover defects by moving increments of working software through our system to "done" as quickly as possible. To do this, we have to incorporate all aspects of software development, often starting with UX, as well as testing and deployment strategies from the outset.


We don't expect to build all of these in full the first time around. Instead, we plan to do just enough so that we can move to the next increment and the next, iteratively improving as we go.


Shifting right

As well as shifting left, you may be wondering if there is such a thing as shifting right. Yes, there is; it's where we start to think in terms of maximizing the value delivered, and delivering that value to the client as soon as possible.


Introducing some Lean thinking to improve the flow

So far, we've looked at breaking work down into small chunks. We've also looked at how we can better inform the work that we carry out by shifting left the activities that we have traditionally neglected until the end of our development cycle.

Now we will apply some Lean thinking to see how we can improve the flow of work.


In Agile Software Delivery Methods and How They Fit the Manifesto, we discussed the key tenets of Kanban/Lean, which are:

1. Make the work visible so that we can inspect and adapt.

2. Break down work into similar size units, to smooth flow and reduce the "waste of unevenness."

3. Limit our work in progress so that we focus on improving our end-to-end flow.


In the following section, we talk specifically about how we enhance flow through our system, but first let's try an activity, as there is nothing quite like a hands-on demonstration of how this works:


ACTIVITY: The coin game.

WHAT YOU'LL NEED: 10 coins, three to eight people, a timing device and a team member to operate it.


SETUP: This game is best played seated around a long table. Arrange the team around the table. Each team member should be easily able to pass coins to the next.


This game is played in three rounds; each round, the coins will start with the first person in the line. The coins have to pass through the hands of every team member. Each coin has to be flipped one at a time before it can be considered "processed" by that team member. The round will end when the last person in the line has flipped all the coins.


ROUND ONE: The batch size is 10. Give all 10 coins to the first player. They flip each coin one at a time until all 10 have been flipped and then pass the pile to the next player. The next player flips all the coins one at a time until all are flipped and then gives the pile of 10 to the next player.


And so on. Start the timer when the first coin of the first player is flipped. Stop the timer when the last coin of the last player is flipped. Record the time it took.


ROUND TWO: The batch size is split. Repeat, except give the coins to the first player in two batches, one of four and one of six. Flip the coins in the batch of four first. Once the four have been flipped, pass them on to the next player.


The next player then flips the stack of four coins one by one. Meanwhile, the first player flips all the coins in the bunch of six. Once they've flipped each coin, they pass the batch of six on to the second player.


Again, the timer starts when the first coin is flipped by the first player and stops when the last coin is flipped by the last player. Record the time it took.


ROUND THREE: The batch size is one. For the final round, pass all the coins to the first player. Once again, they flip each coin one at a time, except as soon as they've flipped one coin they pass it to the second player.


The second player can then flip the coin and pass it on to the third player. Again, the timer starts when the first coin is flipped by the first player and stops when the last player flips the last coin.


Play the game first, and then we'll discuss the possible outcomes in the results section.


The coin game results

So, how did we get on with the coin game activity? What did we observe?

During the first round with a batch of 10, we'll have noticed that everyone else was sitting around doing nothing while one person flipped coins.


During the second round, with the batches of four then six, we'll have observed that both batches were faster than the batch of 10. We'll also have seen that the batch of four moved faster than the batch of six. If the batch of six had been played first, it would have slowed the batch of four down to its pace.


After the third round, we'll have noticed that, as the batch size comes down, the time to complete the work speeds up quite dramatically. That's because the utilization of every person in the line increases as we reduce the batch size.

We've optimized the system for the coin game; now it's time to discuss the theory.
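The effect the coin game demonstrates can be captured in a few lines of code. Below is a minimal simulation; the time model (one time unit per flip) is an assumption for illustration. Each player flips every coin in a batch before passing the batch on, and a player can only start a batch once the previous player has handed it over and they themselves are free.

```python
def total_time(batches, players):
    """Simulate the coin game. `batches` is a list of batch sizes;
    `players` is the number of people in the line. Each flip takes one
    time unit; a batch is passed on only when it is fully flipped.
    Returns the time at which the last player finishes the last batch."""
    # finish[k]: time the player currently being simulated finishes batch k
    finish = []
    t = 0
    for b in batches:                 # first player works through each batch
        t += b
        finish.append(t)
    for _ in range(players - 1):      # each subsequent player downstream
        free_at = 0                   # when this player's hands are next free
        for k, b in enumerate(batches):
            start = max(finish[k], free_at)  # need the batch AND free hands
            free_at = start + b              # b flips, one time unit each
            finish[k] = free_at
    return finish[-1]

# Five players, matching the three rounds above:
print(total_time([10], 5))      # round one, one batch of 10 -> 50
print(total_time([4, 6], 5))    # round two, batches of 4 and 6 -> 34
print(total_time([1] * 10, 5))  # round three, batch size one -> 14
```

The numbers fall out exactly as the game predicts: smaller batches keep every player busy, so the same 50 coin-flips complete in a fraction of the elapsed time.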


Systems thinking – Optimizing the whole

When we're part of a complex system, such as software development, it's easy to believe that doing more in each phase of the SDLC will lead to higher efficiency.


For instance, if we need to set up a test environment to system-integration-test a new feature, and this takes a while to make operational, we'll be inclined to increase the size of the batch of work to make up the time lost setting up the environment.


Once we start to batch work together, and because it's complicated and requires so much setup time, the thinking is that it won't do any harm to add a bit more to the batch, and so it begins to grow.


While this can lead to local efficiency, it becomes problematic when we discover an issue because any reworking will cause delays. And while this may only affect one particular part of the batch, we can't easily unbundle other items, so everything gets delayed until we fix the problem.


As we saw in the coin game, large batches take longer to get through our system. When a problem is discovered inside a batch, this will have a knock-on effect both up and downstream in the process. Reworking will be needed, so people back upstream will need to stop what they are doing and fix the problem.


Meanwhile, people downstream are sitting around twiddling their thumbs, waiting for something to do because the batch has been delayed for reworking.


Gated approaches with handovers cause a big-batch mentality because our local efficiency mindset makes us believe that doing everything in one phase will be more efficient than doing it incrementally. The reality is that doing everything perfectly the first time is just not possible.


In a complex system, when you optimize locally, you tend to de-optimize the system as a whole. Instead, we need to break our big batches of work down into smaller chunks and focus on end-to-end flow through our system, like in the coin game—the smaller batches of coins flow much more evenly.


There is, of course, a balance to be struck; we need to be realistic regarding what is possible. The smallest chunk possible is a discrete piece of functionality that can be delivered and from which we can gain feedback on its applicability. This feature, or slice of a feature, shouldn't be so large that a work item sits in progress for weeks, or months even.


Instead, break items down so that they deliver incremental value. The feedback we get increases in value as it travels down the delivery pipeline.


The more input we gain, whether it's direct from our customer through a working software demonstration, through integration with other parts of our system, or from actual deployment, the better our chances of success.


For example, why hold off deploying to the production environment until you have a critical mass? Instead, use feature flags to deploy, but keep the feature switched off in production until it's ready for release.


We do this to get critical feedback about system integration and the final step of the deployment process in production. Plus, with the right infrastructure tweaks to let us switch on certain features for a selected audience, we can test our new feature in production before we go live.
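A feature flag can be as simple as a configuration lookup wrapped around the new code path. Here's a minimal sketch; the flag name, audience list, and checkout functions are all hypothetical:

```python
# Hypothetical flag configuration: code is deployed to production dark,
# then switched on per audience before general release.
FLAGS = {
    "new_checkout": {"enabled": False, "allow_users": {"qa-team", "beta-tester"}},
}

def is_enabled(flag, user_id):
    """A flag is on if it's globally enabled, or if this user is in the
    selected audience (e.g. testers trying the feature in production)."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False
    return cfg["enabled"] or user_id in cfg["allow_users"]

def checkout(user_id):
    # The deployed code branches on the flag instead of waiting
    # for a big-bang release.
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout("qa-team"))          # selected audience gets the new flow
print(checkout("random-customer"))  # everyone else stays on the legacy flow
```

Flipping `enabled` to `True` is then the release: a configuration change, decoupled from the deployment itself.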


Changing our workflow

When using Scrum, the Sprint Backlog is often seen as the batch size. A pattern we often see, and something that we want to minimize as much as possible, is a Scrum Board that looks like this:


You'll notice that almost every single User Story is in play, which probably means that each member of our team is working on something different. This is often a symptom that our team isn't working together as a team, but as individuals within their specializations: user experience designer, graphic designer, developer, frontend developer, tester.


Each specialist will go through the tasks of a User Story to determine if there is some work they can do. Once they've moved through all stories on the board and consider their tasks "done," they then look at the Product Backlog to see if there is any further work that fits their specialty.


In this scenario, when a software developer considers the coding is complete for one particular User Story, they then move onto the next User Story for the next piece of coding, and so on. This practice causes a few knock-on effects:


Handoffs: Handing work over between the specializations, in this case between the software developer and the reviewer or tester, is wasteful regarding knowledge and time lost during the transfer.


This also includes a transfer of responsibility where, like a game of software development tag, the last person who touches the software becomes responsible.


Interruptions: The team member will need to be pulled off what they're currently doing to fix any problems brought up by either testing or review. There will likely be several iterations of reviewing, testing, and bug fixing.


Waiting in queues: Queues of work start to form. For instance, all the coding is getting done super quickly because all of the developers are so busy coding.


So busy, in fact, none of them are stopping to review each other's code or fix any problems the testers have raised. This leaves each of those User Stories open, with a status of "in progress" on the board, when in fact nobody is working on them.


Multitasking: Despite all best intentions, the lack of synchronization between team members will cause people to be pulled from one task to another to perform handovers or reworking.


When a team works this way, if we lay out each task in sequence it will look a little like a production line, except instead of having just one production line, we have one for each User Story.


The previous person in the line believes they've completed their work and hands it over to the next person to do theirs; they then start the next User Story and open a new production line.


However, making software is not a linear process, and it won't be long before each team member is being pulled from one User Story to another.


Handoffs are especially apparent in the test role. Testers are often the last people in the process before deployment, and therefore the ones under the most pressure: pressure to get testing done, and pressure not to find problems (or, if they do, depending on how significant the problems are, to accept that they could be dealt with in another round and to let the software go out in a compromised form).


Thinking in terms of job specializations tends to put us into boxes. It makes us think that business analysts only gather requirements, software developers only write code, and testers only test it. This isn't team thinking; it's individual thinking.


At worst, it causes an abdication of responsibility, where people only feel responsible for their part of the User Story, rather than the User Story as a whole. This approach is little more than a mini-waterfall. It still shares the same problems associated with cascading work, albeit on a smaller scale.


To accommodate "vertical slices" of useful software, we have to change our workflow. Each User Story becomes the whole team's responsibility. Remember, Scrum intends us to work together to get the ball across the line. The distinction between frontend, backend, UX designer, and tester starts to blur.


To focus on flow, we have to consider the system as a whole, from the point when we start to work on an item to the point where we deliver it. We look to optimize the end-to-end delivery of items in our flow through the close collaboration of our roles.


You might hear end-to-end flow also referred to as cycle time. The cycle time for a User Story starts when work begins on it, and is the number of days to completion, or "done."


One way to improve team cohesiveness and focus is by limiting the amount of Work In Progress (WIP). How do we do this? There are two schools of thought:


Reduce work in progress limits to a minimum; allow the team to get a feel for flow over several items of work, then, if necessary, increase WIP gradually to see whether it increases or decreases flow.


Don't set any WIP limits and watch what naturally happens; if a logjam starts to form, reduce the WIP limit for that particular column.


The aim of limiting WIP is to reduce the amount of multitasking any one team member has to do. As we've already mentioned in Agile Software Delivery Methods and How They Fit the Manifesto, one definition of multitasking is "messing multiple things up at once."


This isn't just an approach we use for Kanban. We can apply this to Scrum as well, by merely applying a WIP limit in our in-progress column.
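Mechanically, a WIP limit is just a pull policy: the board refuses new work into a column that is already full. The sketch below is illustrative only; the column name and limit are assumptions:

```python
class Board:
    """A minimal board that enforces WIP limits per column."""

    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                  # e.g. {"in_progress": 2}
        self.columns = {col: [] for col in wip_limits}

    def pull(self, column, story):
        """Pull a story into a column only if the WIP limit allows it.
        A refusal is the signal to finish existing work first."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False   # at the limit: swarm on what's already started
        self.columns[column].append(story)
        return True

board = Board({"in_progress": 2})
print(board.pull("in_progress", "story A"))  # True
print(board.pull("in_progress", "story B"))  # True
print(board.pull("in_progress", "story C"))  # False: finish A or B first
```

The refused pull is the valuable moment: instead of starting something new, the team pairs up or swarms on the stories already in flight.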


We can start to measure flow by annotating our User Stories with useful information. Apply the start date to mark when work began. Add the finish date when the job completes.


The number of working days between the start and end date is the cycle time of the story. At each Daily Scrum, if the story is in progress but isn't being worked on, mark the story card with a dot. This shows the wait time in days. By reducing delays, we will also increase flow.
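Those annotations translate directly into two numbers per story. A minimal calculation might look like this; the dates, field names, and example story are assumptions for illustration:

```python
from datetime import date

def working_days(start, end):
    """Count working days (Mon-Fri) from start to end, exclusive of the
    start date and inclusive of the end date."""
    days = 0
    d = start
    while d < end:
        d = date.fromordinal(d.toordinal() + 1)
        if d.weekday() < 5:      # Monday=0 .. Friday=4
            days += 1
    return days

def cycle_time(story):
    """Cycle time: working days from when work began to 'done'."""
    return working_days(story["started"], story["finished"])

story = {
    "started": date(2019, 3, 4),   # a Monday
    "finished": date(2019, 3, 8),  # the Friday of the same week
    "wait_dots": 2,                # Daily Scrums where no one was on it
}

print(cycle_time(story))           # cycle time in working days
print(story["wait_dots"])          # wait time: the dots on the card
```

Comparing the dot count with the cycle time shows at a glance how much of a story's life was spent waiting rather than being worked on.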


There are team configurations that will help us naturally limit WIP:


Pairing: Two Development Team members work together on one User Story, taking it from end to end across the board.


Swarming: A temporary burst of the whole team's power, used to get over humps. For instance, to move all the User Stories waiting to be tested, everyone rolls up their sleeves and starts testing. Once we've gotten over the hump, the team then tends to return to their standard configuration.


Mobbing: The whole team's power applied to one User Story at a time. Unlike swarming, the team tends to stay in mob configuration to see a piece of work through from start to end.

Some teams work in mobs permanently, giving them a WIP of one User Story at a time. 


In each of these approaches, people work much more closely together. Software development is about end-to-end delivery. Cross-pollinating and understanding each other's skill sets by literally working together creates better software, and most likely sooner. Working this way also avoids handoffs and context switching as much as possible.


This is a move from optimizing our "resources" to optimizing our workflow, which at the end of the day achieves what we wanted—increasing the flow of value we deliver to our customer, which in our case is in the form of working software.


Kaizen and developing a small, continuous improvement mindset

We first discussed Kaizen in Agile Software Delivery Methods and How They Fit the Manifesto. It is a Japanese word meaning "change for the better." It is commonly referenced as meaning "continuous improvement," and was made famous through its incorporation in the Toyota Production System (TPS).


On the Toyota production lines, sometimes things go wrong. When they do, the station where the problem has occurred gets an allotted time to fix it. If they exceed the time buffer, there will be an impact on the rest of production, and they will need to stop the entire line.


At this point, workers, including the managers, gather at the problem site to determine what seems to be the problem and what needs to be done to fix it.


All too often, when a problem occurs, we see the symptoms of the problem, and we try to fix it by fixing only the symptoms. More often than not, the cause of the problem goes deeper, and if we don't fix more than just the surface-level observations, it will occur again.


So, if the production line starts up again in this situation, it won't be long before the problem recurs and it needs to be shut down once more.


To solve this problem, Toyota introduced a Root Cause Analysis, which uses techniques for getting to the bottom of why something has happened. Two popular methods are Five Whys analysis and Ishikawa (Fishbone diagram).


I'll explain the simpler of these, Five Whys analysis, using a real-world example.

At my current place of work, we encourage our teams to take ownership of their deployments to production. This strategy doesn't come without risk, of course, and the teams have a lot to learn in the DevOps space.


Fortunately, we have coaches who work explicitly in this area, helping us come up-to-speed. Even so, sometimes the teams can be overly cautious.


That is why we introduced the concept of Fail Cake, inspired by a ThoughtWorks team who did something similar; we could see that we needed to encourage the team to take strategic risks if they were going to advance their learning.


Here is the story of Fail Cake and learning fast.


Fail Cake

Several of the teams I worked with identified specific circumstances which would trigger a Kaizen event (Continuous Improvement Meeting). These triggers included one particularly dire situation: releasing a priority-one bug to production.


If this happened, they would fix the problem immediately. Fortunately, they had a deployment strategy that meant they could usually quickly recover if they couldn't isolate the problem straight away.


Once the fix was in, the team would buy a cake. While eating it, they'd perform a root cause analysis to determine what went wrong in their process and how they could prevent it from ever happening again.


The purchase and eating of the cake were deliberate; it was intended to give the team space to reflect and also encourage others involved in the incident to attend and give feedback. As a result of following this process, they very rarely released any priority-one bugs into production.


Here's a real example of one of our teams reflecting on how to improve their process using the Five Whys method.


Root cause analysis with the Five Whys method

"A relentless barrage of 'whys' is the best way to prepare your mind to pierce the clouded veil of thinking caused by the status quo. Use it often." - Shigeo Shingo


Background: We noticed that User Stories were taking longer to complete, and we wanted to understand why. We had the presence of mind to recognize that we'd been unconsciously increasing our batch size, so we conducted a Five Whys root cause analysis to try to understand it more:


Why were we increasing our batch size? Because the time dedicated to the weekly release train meant we might as well bundle more things together.


Why was the weekly release cycle so costly in terms of time? Because there was a business-critical part of our system that was written by someone else and had no automated tests, so we had to test it manually.


Why didn't we write automation tests? We had tried, but certain parts of the code were resistant to having tests retro-fitted.


Why didn't we do something different from a full regression test? We had tried multiple different strategies; plus, a recent increase in the number of teams meant we could spread the regression test out amongst the group and rotate turns. However, this only spread the pain; it didn't mitigate it entirely.


Why didn't we try a replacement strategy? We could for certain parts of the system; in fact, we had come up with several plans to do so. But we couldn't do everything without making some changes to the existing code, and so it would still require regression testing. Plus, the weekly regression sapped our time and energy.


After conducting the above analysis, our team decided to change up our approach in the following ways:

We would write an automation test suite for the parts of the application we could easily and reliably test. This would reduce the regression test effort. We would look at different strategies for manually regression-testing the rest so that we could find the optimal approach.


We would re-architect the system, designing it with the intention of making it automation-testable from the get-go. We would gradually move away from the existing architecture, replacing parts as we went, using Martin Fowler's Strangler Application pattern.


Adopting Lean Startup methods to validate ideas

The Lean Startup method was devised based on the company start-up experiences of its author, Eric Ries. It aims to shorten product development times by using a hypothesis-driven incremental delivery approach, combined with validated learning.


In the following section, we'll provide a brief introduction to Lean Startup thinking and how this might apply generally, not just to organizations that are in their start-up phase.


Build, Measure, Learn

We mentioned just now that Lean Startup is a hypothesis-driven approach; let's pull this apart a little for more understanding. The hypothesis-driven approach involves setting up an experiment to test out if a theory we have has legs or not. It could be as simple as writing a statement with the following template:


We believe that by creating this experience

  • For these people
  • We will get this result
  • And we'll know this is true when we see this happening


A hypothesis has four parts:

The belief: This is something that we think we know based on either evidence from our data, the needs of our customer, or a core competency (something we know how to do well, or are particularly good at)


The target market: Our audience for this particular hypothesis

The outcome: The outcome we believe we will get

The measurement: The metrics that will tell us if our outcome is taking us in the right direction, towards our business objective.


Using the hypothesis template, we then build out enough of a product feature set so that we can test out our theory.
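The four parts map naturally onto a small record. The fields below follow the template; the example hypothesis itself is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    belief: str          # what we think we know (data, customer need, competency)
    target_market: str   # the audience for this particular hypothesis
    outcome: str         # the result we believe we will get
    measurement: str     # the metric that tells us we're heading the right way

h = Hypothesis(
    belief="one-click reorder removes friction for repeat buyers",
    target_market="customers with two or more past orders",
    outcome="repeat purchase rate goes up",
    measurement="reorders per returning visitor, week over week",
)
print(h.belief)
```

Writing the hypothesis down in this form forces the team to name the measurement before building anything, which is the discipline the Lean Startup approach depends on.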


This is usually done incrementally over a number of iterations. This first cut of our product is known as the Minimum Viable Product (MVP) and is the smallest possible feature set that makes sense for us to validate with real-world users.


To do this, we take a feature-driven approach, where we prioritize certain features over others and get them to a viable offering state as soon as possible.


Prioritization is carried out using a release planning strategy that targets our primary, secondary, and tertiary user groups. Our primary user groups are the key users of our software product, the people who will tell us if our business idea is worth pursuing.


So we'll build software which targets their needs first. This allows us to focus our efforts and target the most important aspects of our product.


We don't have to fully develop a feature to make it viable; we can target certain segments of our market first and test it out before we further enhance it. The aim is to create "light" features which, although limited in their functionality, will allow us to validate our ideas as quickly as possible.


Our validated learning comes into play as we start to measure the success of the experiment to determine if we're on the right track. We use actionable metrics; these are metrics that we've set up to help inform us of what business actions we should take when certain measurements are recorded.


To illustrate, if we're testing a feature that drives new membership sign-ups, we need to measure exactly how many new members signed up as a result of using our feature.


If the non-member to new-member conversion rate is above a certain percentage threshold of people who used our feature, then we can most likely say it's a success, and we can carry on enhancing it further.
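As an arithmetic sketch of that decision (the threshold and counts here are invented for illustration):

```python
def conversion_rate(signups, feature_users):
    """Fraction of people who used the feature and then signed up."""
    return signups / feature_users

def decide(signups, feature_users, threshold=0.05):
    """Actionable metric: above the threshold we persevere and keep
    enhancing the feature; below it we reassess and possibly pivot."""
    rate = conversion_rate(signups, feature_users)
    return "persevere" if rate >= threshold else "reassess"

print(decide(signups=80, feature_users=1000))  # 8% conversion, above threshold
print(decide(signups=12, feature_users=1000))  # 1.2% conversion, below threshold
```

The point is that the threshold and the resulting action are agreed before the experiment runs, which is what makes the metric actionable rather than a vanity number.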


Another example is when building a checkout process for our online shop; we are validating two key aspects of our product. Firstly, do our customers want to buy what we're selling? Secondly, do they trust us and our checkout process enough that they will complete a purchase?


We can test this out in a number of ways without building out the full product feature, providing we capture people's actions and their reactions to measure our success.


In the Lean Startup approach, this constant testing of our assumptions is known as the BUILD, MEASURE, LEARN cycle. A Lean Startup is a metrics-driven approach, so before we start building features, we need to think about how we are going to measure their success.


The trick is to use the MVP to learn what does and doesn't work. The measurements we take from the learning phase, and the insights that we generate from them, we then feed into the next increment.


If the measurements indicate we're moving in the right direction and adding value, we continue to build on our current ideas.


If the measurements indicate we're not moving in the right direction, then we assess what to do. If we determine that the experiment is no longer working, we have the option to pivot in a different direction and try out new ideas.


The aim is to build up the increments until we have a Minimum Marketable Product (MMP).


An example of Lean Startup MVP

We can use the Lean Startup mindset whether we're setting out to build a new product or creating a new feature for an existing one. The core concepts remain the same: first, put together the hypothesis; next, create a release strategy that focuses the first release on the minimum core feature set we need to validate our idea.


We then build it and test it out with a user group that will give us constructive feedback. This group is often known as our "early adopter" group because they love using new technology, and if it solves a problem that they care about they won't be too concerned about the little "problems" an early product might have.