Agile Manifesto Principles (2019)

The Agile Manifesto

The Agile Manifesto has its own way of reframing speed as a function of customer value: Working software over comprehensive documentation. This blog explains the Agile Manifesto's values and 12 principles with examples.


Here is the Manifesto for Agile Software Development: 

A good example is the User Story, an Agile requirement gathering technique, usually recorded on an index card. It's kept deliberately small so that we can't add too much detail. The aim is to encourage, through conversation, a shared understanding of the task.


In the same way, we should look at all of the following Agile values:

Working software over comprehensive documentation: As a software delivery team, our primary focus should be on delivering the software—fit for purpose and satisfying our customer's need.


Our customer isn't an expert in building software and would, therefore, find it pretty hard to interpret our documentation and imagine what we might be building. The easiest way to communicate with them is via working software that they can interact with and use.


By getting something useful in front of our customer as soon as possible, we might discover if we're thinking what they're thinking. In this way, we can build out software incrementally while validating early and often with our customer that we're building the right thing.


Customer collaboration over contract negotiation: We aim to build something useful for our customer and hopefully get the best value for them we can. Contracts can constrain this, especially when you start to test the assumptions that were made when the contract was drawn up.


Responding to change over following a plan: When considering this Agile Value, it is worth drawing a comparison with the military.


The military operates in what we call a planning-driven environment: they plan constantly throughout the battle as new information becomes available.


Plan-driven versus Planning-driven: Plan-driven means a fixed plan which everyone follows and adheres to. This is also known as predictive planning. Planning-driven is more responsive in nature; when new information comes to light, we adjust our plan. It's called planning-driven because we expect change, and so we're always in a state of planning. This is also known as adaptive planning.


So when going into battle, while they have group objectives, the military operates with a devolved power structure and delegated authority so that each unit can make decisions on the ground as new information is uncovered.


In this way, they can respond to new information affecting the parameters of their mission, while still getting on with their overall objective. If the scope of their mission changes beyond recognition, they can use their chain of command to determine how they should proceed and re-plan if necessary.


In the same way, when we're building software, we don't want to blindly stick to a plan if the scope of our mission starts to change. The ability to respond to new information is what gives us our agility; sometimes we have to deviate from the plan to achieve the overall objective. This enables us to maximize the value delivered to our customer.


Done means the software is in the hands of our customer and doing the job it was intended to do. Until that point, we aren't 100% sure we've built the right thing, and we don't have a clear indication of what we might need to redo.


Everything else the software team produces just supports the delivery of the software, from design documents to user guides.


Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely:


Putting a software delivery team under pressure to deliver happens all the time; it shouldn't, but it does. There are a number of consequences of doing this, some of which we discussed earlier in this blog.


For example, put a team under pressure for long enough, and you'll seriously impact the quality of your product. The team will work long hours, make mistakes, take shortcuts, and so on to get things done for us.


The result won't just affect quality, but also the morale of our team, and their productivity. I've seen this happen time and time again; it results in good people leaving along with all the knowledge they've accumulated.


This principle aims to prevent that scenario, which means we have to be smart and use alternative ways of getting things done sooner.


This means seeking value, ruthless prioritization, delivering working software, a focus on quality, and allowing teams to manage their work in progress so they can avoid multitasking.


Studies have shown that multitasking causes context switching time losses of up to 20%. When you think about it, when you're solving complex problems, the deeper you are into the problem, the longer it takes to regain context when you pick it back up.


It's like playing and switching between multiple games of chess. It's not impossible, but it definitely adds time.


I've also seen multitasking defined as messing up multiple things at once.


Continuous attention to technical excellence and good design enhances agility: By using solid technical practices and attention to detail when building software, we improve our ability to make enhancements and changes to our software.


For example, Test-Driven Development (TDD) is a practice which is as much about designing our software as it is testing it. It may seem counter-intuitive to use TDD at first, as we're investing time in a practice that seemingly adds to the development time initially.


In the long term, however, the improved design of our software and the confidence it gives us to make subsequent changes enhance our agility.
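As a minimal sketch of the TDD rhythm (red, green, refactor), consider the following Python example. The `parse_price` function and its tests are purely illustrative, not from the text:

```python
# Red: write a failing test first that pins down the behavior we want.
# `parse_price` is a hypothetical example function for illustration.
def test_parse_price():
    assert parse_price("$19.99") == 19.99
    assert parse_price("  $5.00 ") == 5.0

# Green: the simplest implementation that makes the test pass.
def parse_price(text: str) -> float:
    return round(float(text.strip().lstrip("$")), 2)

# Refactor: with the test as a safety net, we can now improve the
# design (naming, structure) and rerun the test after every change.
test_parse_price()
print("all tests pass")
```

The test doubles as a design tool: writing it first forces us to decide what the function's interface should look like before we commit to an implementation.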


Technical debt is a term first coined by Ward Cunningham. It describes the accumulation of poor design that crops up in code when decisions have been made to implement something quickly.


Ward described it as Technical Debt because if you don't pay it back in time, it starts to accumulate. As it accumulates, subsequent changes to the software get harder and harder. What should be a simple change suddenly becomes a major refactor/rewrite to implement.


Simplicity—the art of maximizing the amount of work not done—is essential: Building the simplest thing we can to fit the current need prevents over-engineering, also known as "future proofing."


If we're not sure whether our customer needs something or not, talk to them. If we're building something we're not sure about, we may be solving a problem that we don't have yet.


One of the leading causes of bugs is complexity in our code. Anything we can do to simplify will help us reduce bugs and make our code easier for others to read, making it less likely that they'll introduce bugs too.


The best architectures, requirements, and designs emerge from self-organizing teams: People nearest to solving the problem are going to find the best solutions.


Because of their proximity, they will be able to evolve their solutions so that all aspects of the problem are covered. People at a distance are too removed to make good decisions. Employ smart people, empower them, allow them to self-organize, and you'll be amazed by the results.


At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly: This is one of the most important principles in my humble opinion and is also my favourite.


A team that takes time to inspect and adapt their approach will identify actions that will allow them to make profound changes to the way they work. The regular interval, for example, every two weeks, gives the team a date in their diary to make time to reflect.


This ensures that they create a habit that leads to a continuous improvement mindset. A continuous improvement mindset is what sets a team on the right path to being the best Agile team they can be.


Agile is a mindset

The final thing I'd like you to consider in this blog is that Agile isn't one particular methodology or another. Neither is it a set of technical practices, although these things do give an excellent foundation.


On top of these processes, tools, and practices, if we layer the values and principles of the manifesto, we start to evolve a more people-centric way of working. This, in turn, helps build software that is more suited to our customer's needs.


In anchoring ourselves to human needs while still producing something that is technically excellent, we are far more likely to make something that meets and goes beyond our customer's expectations. The trust and respect this builds will begin a powerful collaboration of technical and non-technical people.


Over time, as we practice the values and principles, we not only start to determine what works well and what doesn't, but we also start to see how we can bend the rules to create a better approach.


This is when we start to become truly Agile. When the things we do are still grounded in sound processes and tools, with good practices, but we begin to create whole new ways of working that suit our context and begin to shift our organizational culture.


An Example of "Being Agile"

When discussing the Agile Mindset, we often talk about the difference between "doing Agile" and "being Agile."


If we're "doing Agile", we are just at the beginning of our journey. We've probably learned about the Manifesto. Hopefully, we've had some Agile or Scrum training and now our team, who are likely to have a mix of Agile backgrounds, are working out how to apply it.


Right now we're just going through the motions, learning by rote. Over time, with the guidance of our Scrum Master or Agile Coach, we'll start to understand the meaning of the Manifesto and how it applies to our everyday work.


Over time our understanding deepens, and we begin to apply the values and principles without thinking. Our tools and practices allow us to be productive, nimble, and yet, still disciplined.


Rather than seeing ourselves as engineers, we see ourselves as craftsmen and women. We act with pragmatism, we welcome change, and we seek to add business value at every step. Above all else, we're fully tuned to making software that people both need and find truly useful.


If we're not there now, don't worry; we're just not there yet. To give a taste of what it feels like to be on a team thinking with an Agile Mindset, the following is an example scenario.



Imagine we're just about to release a major new feature when our customer comes to us with a last-minute request. They've spotted that something isn't working quite as they expected, and they believe we need to change the existing workflow. Their biggest fear is that it will prevent our users from being able to do a particular part of their job.


Our response

Our team would respond as a group. We'd welcome the change. We'd be grateful that our customer has highlighted this problem to us and that they found it before we released. We would know that incorporating a change won't be a big issue for us; our code, testing and deployment/release strategies are all designed to accommodate this kind of request.


We would work together (our customer is part of the team) to discover more about the missing requirement. We'd use our toolkit to elaborate the feature with our customer, writing out the User Stories and if necessary prototyping the user experience and writing scenarios for each of the Acceptance Criteria.


We'd then work to carry out the changes in our usual disciplined way, likely using TDD to design and unit/integration test our software as well as Behavior-Driven Development (BDD) to automate the acceptance testing.


To begin with, we may carry the work out as a Mob or in pairs. We would definitely come together at the end to ensure we have collective ownership of the problem and the solution.


Once comfortable with the changes made, we'd prepare and release the new software and deploy it with the touch of a button. We might even have a fully automated deployment that deploys as soon as the code is committed to the main branch.


Finally, we'd run a retrospective to perform some root cause analysis using the 5-whys, or a similar technique, to try to discover why we missed the problem in the first place. The retrospective would result in actions that we would take, with the aim of preventing a similar problem occurring again.



In this blog, we looked at two delivery styles, delivery as a software product and delivery as a software project.


We learned that delivery as a software project was hard to get right for multiple reasons. And giving our team only one shot at delivery gave them little or no chance of fine-tuning their approach. In a novel situation, with varying degrees of uncertainty, this could lead to a fair amount of stress.


There is a better chance of succeeding if we reduce the variability. This includes knowledge of the domain, the technology, and of each of our team members' capabilities. So, it is desirable to keep our project teams together as they move from project to project.


What we learned was that when a long-lived team works on a product, they have the opportunity to deliver incrementally. If we deliver in smaller chunks, we're more likely to meet expectations successfully. Plus, teams that work on products are long-lived and have multiple opportunities to fine-tune their delivery approach.


Agile Software Delivery Methods and How They Fit the Manifesto

Or perhaps, more importantly, how the Manifesto fits them: many Agile methods were developed before the Manifesto was written, and the practical experience gained by the original members of the Agile Alliance is what gave the Agile Manifesto its substance.


Some have a number of prescribed artefacts and ceremonies; others are much less prescriptive. In most cases, there is no one-size-fits-all approach and most Agile practitioners will mix and match, for example, Scrum and XP. This blog aims to help you decide which method might work for you.


We'll cover the following topics in this section:

  1. A detailed look at the most common methods: Scrum, Kanban, and XP
  2. A comparison of specific Agile methods
  3. How you can choose the right Agile framework

Kanban for software is included in this blog. Although it's technically a Lean approach to software development, Agile and Lean approaches are often combined.

When the original 17 signatories to the Agile Manifesto came together in 2001 to form the Agile Alliance, they each brought with them ideas about how the industry could be changed for the better based on actual experiences.


You see, many of them had already started shifting away from what they deemed heavyweight practices, such as the ones encouraged by Waterfall. Instead, they were putting new ideas into practice and creating SDLC frameworks of their own.


Among the signatories that weekend were the creators of XP, Scrum, Dynamic Systems Development Method (DSDM), Crystal, Adaptive Software Development (ASD), Feature-Driven Development (FDD), and so on.


They initially called them "light" frameworks, to distinguish them from their heavyweight counterparts, but they didn't want the world to consider them to be lightweight. So, they came up with the term Agile, because one thing all of these frameworks had in common was their adaptive nature.


In the following sections we're going to look at three of the Agile methods, first up is Scrum.


Understanding Scrum

Scrum has its roots in the work of Nonaka and Takeuchi, who observed product development companies trying to speed up their product development lifecycles to decrease their time to market.


These companies were assembling small teams of highly capable people with the right skills, setting the vision for them to build the next-generation product, giving them a budget and timeframe, and then getting out of the team's way to let it do its thing.


Some observed characteristics of these teams included having all the skills necessary to carry out the job they were being asked to do—the essence of a cross-functional team.


They were allowed to determine how they best carried out the work, so were self-organizing and autonomous. They used rapid, iterative development cycles to build and validate ideas.


Nonaka and Takeuchi called it the rugby approach because they observed product teams passing the product back and forth among themselves as it was being developed, much like a rugby team passes the ball when moving upfield.


In a rugby game, the team moves as a unit and even though each team member has a speciality regarding a position on the field and gameplay, any member of the rugby team can pick up the ball, carry it forward, and score a try or goal.


The same was true of these product development teams—their contribution to product development was specialist and highly collaborative.


In the section of their paper titled Moving the Scrum downfield, they list the common characteristics of the teams they observed as follows:


Built-in instability: Some aspect of pressure was introduced, which encouraged the product development teams to think out-of-the-box and use an innovative approach to solving the problem.


Self-organizing project teams: The teams were given the autonomy to decide how they carried out the task of solving the problem handed to them.


Overlapping development phases: Instead of the normal sequential-phased development that you get with processes such as Waterfall, the teams worked iteratively, building quickly and evolving their product with each iteration.


Multiple phases overlapped, such that the following steps might be informed by the discoveries made in the previous one. In this way, the teams were able to gain fast feedback about what would and wouldn't work.


Multilearning: A trial-and-error learning culture is fostered, which allows team members to narrow down options as quickly as possible. They are also encouraged to diversify their skill sets, to create team versatility.


Nonaka and Takeuchi called this multi-learning because they said it supported learning along two dimensions: traversing different layers of the organization (individual, team, unit, and group) and across various functions. This cross-pollination of skills is an aspect of cross-functionality we encourage today.


Subtle control: The approach to managing these projects was very different. To create a space for the team to innovate, they realized command-and-control supervision wouldn't work.


Instead, management would check in with the team regularly to check progress and give feedback, leaving the team to manage its work how it saw fit.


Estimating Agile user requirements

Relative sizing is designed to help us be more instinctual in our estimates, something that humans are quite good at. Before starting with relative sizing, therefore, we need a User Story that we know enough about to size.


One way to set this up is to spread the stories out on the table and have the team find what they think is a good example of a medium-sized story. This will involve some discussion amongst the team.


Once you've identified a medium-sized story in the group, put it in the center of the table, and put the rest of the User Stories into a pile.


The medium story sitting in the middle of the table is now your yardstick, and you'll use it as a comparison to the rest. Take the next story from the pile and compare it to the medium story: "Is it smaller, larger, or is it the same size?"


Repeat this process for all the stories that are in the pile; don't worry about granularities of small, medium, or large at this stage. If it's large, it's on the right-hand side of the table, and if it's small it's on the left.


If it's medium, it's in the middle. One way to speed this process up is to hand out stories to each participant and allow them to relative-size the cards themselves.
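To make the comparison logic concrete, here's a small sketch in Python. The numeric effort scores stand in for the team's gut-feel judgment, and the story names are invented for illustration:

```python
# Relative sizing sketch: bucket each story by comparing its perceived
# effort against the agreed "medium" yardstick story.
def bucket(effort: int, medium_effort: int) -> str:
    if effort < medium_effort:
        return "small"   # left-hand side of the table
    if effort > medium_effort:
        return "large"   # right-hand side of the table
    return "medium"      # middle of the table

# Hypothetical stories with the team's rough effort judgments.
stories = {"login page": 2, "search": 3, "reporting": 8, "export": 3}
yardstick = 3  # the agreed medium-sized story

sized = {name: bucket(effort, yardstick) for name, effort in stories.items()}
print(sized)
# {'login page': 'small', 'search': 'medium', 'reporting': 'large', 'export': 'medium'}
```

In practice there are no numbers, of course; the "comparison" happens in conversation, which is exactly what makes the estimate instinctual rather than absolute.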


The advantage of this approach, comparing two or more items against each other, is that we develop a much more instinctual approach to estimation. We're no longer dealing in absolutes because we're looking at things relatively.


The law of averages should mean that we don't need to hold the team to the accuracy of their estimates, as we know things will balance out in the long run. All of this makes estimation a much less painful process.


This enables us to prioritize work in a way that wasn't possible with the traditional requirements document. This makes the Product Backlog a much more dynamic set of user requirements. Part of the skill of the Product Owner is how they manage the backlog to get the best result from our business and our team.


Bootstrap Teams with Liftoffs

When forming a new team around a new business problem, there's going to be a fair degree of getting to know the problem as well as getting to know each other.


A team liftoff is a planned set of activities aimed at getting the team up-and-running as quickly as possible. It is useful for two primary reasons:

  1. It sets the tone and gives us a clear purpose
  2. It's an opportunity for team building, which will accelerate team formation


Team liftoffs can span from half a day to a whole week, depending on the activities included. Dedicating time now to giving our team the best possible start is a shrewd investment as a liftoff will likely enable us to become high performers sooner.


Working closely and collaboratively are habits that we want our team to form as quickly as possible. Choosing liftoff activities that create a shared understanding and foster a positive team culture is key.


One final thing before we get into the details: it's never too late to run a team liftoff. Even if we've already started on our team mission, running a team liftoff will give a team the opportunity to clarify its understanding and codify its approach.


If we feel our current team lacks direction or understanding or cohesiveness, it's probably a good time to run an Agile team liftoff.


Agile training

Training for the whole team is recommended; this should include coverage of the Agile fundamentals, as well as the basics of setting up and running Scrum. The intention is to provide broad understanding, allow our team to discuss the approach up front, and have everyone start with the same foundation.


ACTIVITY – Sharing the vision

The following five steps all contribute to communicating the vision and mission of our team.


Step 1 – Meet the sponsors

It's important that as many of the project/product sponsors as possible attend part or all of our team liftoff. One way or another, they've all contributed to the vision that the Product Owner is holding on our behalf.


This is an opportunity for them to introduce themselves, share their hopes and dreams for the product and usher in the next phase of its evolution.


The logical place to actively include them is in the product vision activity (step 3). Getting both their input and buy-in at this stage is crucial; with our sponsors on board, our likelihood of success is much higher.


Step 2 – The company purpose

The overarching company purpose, also known as the company's mission statement, should be the single anchor for any product development team. Everything the company does should be traceable back to that.


It's important that the organization's purpose is restated as part of the liftoff and that our team understands how our mission contributes to the overall company mission.


It is usually in the form of a simple statement, for example, here are a few high-profile company mission statements:

  1. Google: "To organize the world's information and make it universally accessible and useful."
  2. Tesla: "To accelerate the world’s transition to sustainable energy."


Step 3 – The product vision

The Product Owner in Scrum is the person responsible for holding the product vision; this is a view of the product overall and what problem, or problems, it solves for your customer. Our Product Owner maintains the bigger picture and shares relevant information so that our team can tie back any decisions they make while on a mission.


There are several ways that the product purpose can be defined; it's usually in the form of a business case. For example, a Lean Start-up would use a Lean Business Canvas.


The product vision differs from the product purpose in that it is a shorter, punchier version, something that gets people excited and engaged. Many activities will help us create a product vision and make a business case a little more dynamic; these include the product box, the press release, or an elevator pitch.


The elevator pitch is the most straightforward and can be crafted by the Product Owner. Use the following as a guide to creating one:


Imagine you're the owner of a start-up company with an idea for an exciting new product that is going to change the world. Like all new start-ups, you just need the money and are hoping that you can persuade a seed investor or venture capitalist to fund you.


One morning, just after buying your coffee, you jump into an elevator on the way to your shared office space and who should be in there but Jeff Bezos (Amazon). He's just pushed the eighth-floor button; you realize you've only got eight floors to persuade him to invest; what do you say?


Step 4 – The current mission

It's also important that the Product Owner maintains a clear view of the current business problem our team is being asked to solve, typically in the form of a simple mission statement. For example, the following is a real team mission statement:


Enabling new methods of content display, navigation, and discovery.


The mission statement should give our team enough guidance so that we can quickly know if we are on course or if we've deviated. At the same time, it should be broad enough that we can maintain some freedom regarding how we provide solutions. It definitely should not describe how to do something. Instead, it should describe what we are doing.


Step 5 – Define success

As the final stage of setting the vision, we should work with our Product Owner to define how we will recognize success. This not only gives us a clear idea of what the expected outcome of our work is, but also helps us understand where we should put our emphasis.


It's also a time to consider any assumptions that might have been made and to put these out in the open, as ultimately this is what contributes to unmet expectations the most. For example, does this mission require rapid experimentation to see what works in a real-world environment, so we can then learn and build based on our results?


Or is it a mission where we have already gained a clear idea of what we need via user research, and we need to build out something that is simple but super reliable?


In the first example, it may seem obvious to our team that they won't have time to performance-test the application; we will assume that performance testing will be carried out in a subsequent phase, once the results of our experiments are concluded. However, a Product Owner wouldn't necessarily know this, or even have thought of it.


They say the most common cause of a relationship breakdown is unmet expectations. This part of the liftoff is an excellent opportunity for the Product Owner to set expectations from the business's perspective, and for our team to contribute from a technical perspective.


There are seven success criteria listed. The seven post-its represent the sliders for the corresponding success criteria. They can move between 1 and 5, but cannot be taken off the board. Each slider is currently set to a value of 3, putting the Success Sliders in equilibrium with a total of 21 (7 × 3).


The following rules apply:

We are not allowed to exceed a score of 21, so we can't move all Success Sliders to 5, as this would make a total of 35.


We could leave all Success Sliders where they are, but this would not reflect reality. There is almost always a bias for at least one success criterion over another, for example, delivering on time over delivering all the defined scope (or vice versa).


We are now free to move the sliders for any of the success criteria; for every slider that moves up, there must be a corresponding downward movement for another slider.


The intention of the activity is to find out what's important for the successful outcome of this mission. After a conversation amongst our group, we should move the sliders to the position that reflects the best outcome for this work.
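The rules above amount to a simple invariant, sketched here in Python; the criteria names and values are illustrative, not from a real board:

```python
# Success Sliders: 7 criteria, each slider set between 1 and 5.
# The total must stay at the equilibrium value 7 * 3 = 21, so every
# upward move must be balanced by a downward move on another slider.
EQUILIBRIUM = 7 * 3  # 21

def is_valid(sliders):
    return (
        len(sliders) == 7
        and all(1 <= v <= 5 for v in sliders.values())
        and sum(sliders.values()) == EQUILIBRIUM
    )

# Illustrative outcome: the team trades scope and deadline for value.
sliders = {
    "deliver value": 5,
    "all defined scope": 1,
    "on time": 2,
    "on budget": 3,
    "quality": 4,
    "team health": 3,
    "stakeholder satisfaction": 3,
}
print(is_valid(sliders))  # True: 5+1+2+3+4+3+3 == 21
```

The fixed total is what forces the trade-off conversation: the team cannot declare everything a top priority.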


This conversation may be difficult, but it's intended to help us uncover any assumptions and discuss them openly. The following figure is an example of the completed activity:


Here you can see that the group has decided that delivering value is a higher priority than delivering all of the defined scope or delivering on time. Maybe the conversation went along the lines that they want to get to market sooner with only a core feature set so that they can validate their idea.


As with a lot of these activities, while the outcome is important for our team to base future decisions on, the conversation we have during this activity is just as important as it helps cement our understanding and clarify our purpose.


Defining success metrics

The final step in defining success is to look at our success metrics. These are how we measure whether or not we are moving in the right direction with each iteration. These are typically defined by our Product Owner and shared with the team for refinement. There are several ways of setting success metrics up.


In How Seeking Value in User Requirements Will Help You Deliver Better Software Sooner, we discuss several approaches for setting up success metrics.

Whichever approach is used, we need to make sure our metrics are easily quantifiable and are moving us in a direction that adds value—remember, what we measure is what we get.


Activity – Forming a team charter

The team charter covers several aspects of how our team will carry out its work:

  1. It's a process definition and agreement about how we will accomplish our work
  2. It's the social contract defining how we will interact with each other, and how we will work together as a team


Remember, the team charter is a living document; it will evolve as our team evolves its practices. It should be posted somewhere in our team's area so that we can reference it and annotate it as we go.


The following steps take us through the necessary activities to form an initial team charter.


Step 1 – Defining done

First, we're going to look at defining done. We'll need to work together as a team on this, so find somewhere quiet where everyone's contribution can be heard. Here's how to set it up:

  1. Activity: Defining done
  2. What you'll need: Post-it notes, Sharpies, a spare wall or whiteboard
  3. Remember: Set a time box before we start

The Definition of Done (DoD) is the agreement we use to define how we will know when our work is complete. It looks like a checklist. On it are the tasks that we need to carry out for us to deliver a new feature, enhancement, or bug-fix as part of the next increment of working software.


As a team, we are responsible for defining done. A simple activity to do this requires post-its, sharpies, and a whiteboard. For this activity, we ask our team to think of the steps they go through from the point where they are given a requirement, to the point where they deliver it in the form of working software.


Work collaboratively, write down each step on a post-it note, and share as you go. Put each step onto the whiteboard or wall in a timeline from the left (start) to the right (finish).


The team should consider the quality aspects of what they are delivering, as well as the steps they will take to avoid mistakes and make sure the delivery pipeline runs smoothly.


Once the timeline is complete, discuss it as a group. If our group is happy and there's no more to add, for now, write out the timeline as a checklist.


It's useful to remind ourselves that done means more than just "coding done," "testing done," or "review done." To capture this, we talk about being "done done": the point at which absolutely everything needed to take this increment to a production-ready state is complete.


Here's an actual example of a team's Definition of Done (DoD):


Step 2 – Working agreement

Next, we look at our social contract; this defines ground-rules for working together as a team.

  1. Activity: Creating a working agreement
  2. What you'll need: Post-it notes, Sharpies, a spare wall or whiteboard
  3. Remember: Set a time box before we start


So let's get started:

1. Set up the whiteboard with the banner WORKING AGREEMENT.


2. Distribute post-its and Sharpies to each team member. Explain that the team will be using silent brainstorming: write each idea on its own post-it note, one idea per post-it, and as many post-its/ideas as they like. They can use the example topics for inspiration if they need to.


3. Agree on a time box with the team for silent brainstorming and writing post-it notes, somewhere between 5 and 15 minutes. Then set the timer and start the activity.


4. Once we have finished coming up with ideas, or the time-box is complete (whichever comes first), we take it in turns to go up to the whiteboard and place our post-it notes on it. We should do this one post-it at a time, reading them out loud to the rest of the team as we go.


5. Once each team member has placed their post-its on the board, we should gather around the board as a group. The aim of this stage is to group similar ideas or remove any duplicates.


6. The final step is to review the revised working agreement and decide if we can all abide by it. Are there any changes? Anything we should add?


After several rounds, our team should be in agreement, and we should have something that looks like the following:


Step 3 – Team calendar

The final step is to work as a team to establish our Sprint calendar. Forming a consensus amongst the group about the days/times that we meet will help ensure everyone can attend and we don't miss out on anyone's contribution.


Explain that it will be easier to first determine the Sprint start and end dates, then set up all meetings. For example, Sprint Planning happens on the first day of the Sprint. Sprint Review and Retrospective are on the last day of the Sprint.


The DAILY SCRUM should be every day, at a time agreed on by the team, except when it clashes with the Sprint Planning, Sprint Review, or Sprint Retrospective meetings.


In the example, on the first day of the Sprint, our team has chosen to hold the Daily Scrum at 1.30 p.m., after Sprint Planning. Sometimes teams hold their Daily Scrum directly after Sprint Planning; sometimes they don't feel it's necessary. We should use our discretion.


The same thinking applies to the last day of the Sprint; for example, if we decide to hold both the Sprint Review and Sprint Retrospective events in the afternoon, perhaps we should have a Daily Scrum to coordinate our work in the morning.

Annotate the index cards with post-it notes showing the times as in the preceding example.


Keep team events, such as the Daily Scrum, at the same time each day. This reduces the cognitive load on our team—it means they don't have to think, they'll just get into the rhythm.


Once the team has reached an agreement, schedule all meetings at given time slots for the foreseeable future. Cadence is an essential part of Scrum. Establishing a rhythm is key to the success of the incremental delivery approach. The regular iteration cycles give the team a heartbeat by which it operates.
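
The scheduling logic above can be sketched in a few lines of Python. The dates, times, and Sprint length below are illustrative assumptions (a two-week Sprint of ten working days), not prescribed by Scrum; every team picks its own cadence.

```python
from datetime import date, timedelta

def sprint_calendar(start, length_days=10):
    """Map each working day of the Sprint to its scheduled events."""
    days = []
    current = start
    while len(days) < length_days:
        if current.weekday() < 5:  # Monday to Friday only
            days.append(current)
        current += timedelta(days=1)
    # Daily Scrum every day, at the same time, to establish a rhythm:
    events = {day: ["Daily Scrum 09:30"] for day in days}
    # Ceremonies bookend the Sprint:
    events[days[0]].insert(0, "Sprint Planning 10:00")
    events[days[-1]] += ["Sprint Review 14:00", "Sprint Retrospective 15:30"]
    return events

for day, meetings in sprint_calendar(date(2019, 7, 1)).items():
    print(day, meetings)
```

Once generated, the calendar can simply be transcribed onto index cards in the team's workspace.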



The aim of the team liftoff is to launch our team on its mission as quickly as possible. To do this, we first need to have context; we need to understand our product and its purpose and how it fits in with the company's goal.


Knowing this background helps us better understand the business problem we're trying to solve. It means we will be much more able to make better design and implementation decisions. For example, is this a standalone solution, or does it fit into a wider ecosystem?


The second part of the liftoff is deciding how best to solve this problem together. This gives our team an operating manual/system, which includes our team's technical process (definition of done) and social contract (working agreement). Both of these help us remove any assumptions on how we're going to work with each other.


For a team transitioning to Agile, this should be underpinned with Agile fundamentals training so that we all have a common foundation in the Agile Mindset.


This is something we will be able to use in our day-to-day work environment, using our knowledge to guide the decisions we are taking. We should continually reflect on how our choices fit the values and principles of the Manifesto for Agile Software Development.


In the Define success section, we discussed the importance of recognizing what success should look like. Without this, we're unable to track our progress and determine if we've completed our mission. We also demonstrated an activity called Success Sliders, which helped us frame which parameters of our mission are important.


A brief introduction to measurements for Agile software delivery

Before we take a look at example metrics we can use, there are a couple of distinctions we need to make between the types of measurements and how we should use them.


Understanding measurements

There are many measurements that we can use to give us an idea of how our team is performing. Before we start tracking metrics, it's important to understand that what we measure is what we get.


For instance, velocity is a standard measurement used by Scrum teams. It tells us how many User Story Points we're completing on average in each Sprint.
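
As a minimal sketch, velocity is often tracked as an average of the most recent Sprints. The function and sample figures below are illustrative, not a standard formula:

```python
def velocity(completed_points, window=3):
    """Average Story Points completed over the last `window` Sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# Points completed in each of our last four Sprints (illustrative):
sprints = [21, 18, 24, 20]
print(velocity(sprints))  # average of the three most recent Sprints
```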


As we'll explain further in the Negative versus positive metrics section, it's a measurement used mainly by the team to help them understand how they are performing. Used out of that context, it can cause adverse effects that render the metric meaningless.


A lot of measurements work in the same way; that is, they're meaningful to the team but less useful to those outside it. They don't always mean much in isolation either; we often need to compare and contrast them with other metrics. For example, the percentage of code covered by automated tests is useful only if the quality of those tests is also measured.


Qualitative versus quantitative measurements

Measurements come in two broad flavors. Compare these two ways of gauging how maintainable our codebase is:

1. Cyclomatic complexity: A quantitative measurement of the number of independent paths through our code; the more paths, the harder it is to maintain


2. Ease of last change: A qualitative metric based on the Development Team's viewpoint; the harder it felt to make the most recent code change, the harder the codebase is to maintain
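
To make cyclomatic complexity concrete, here is a toy estimator for Python snippets. It approximates McCabe's metric as 1 plus the number of branching nodes; real tools such as SonarQube (or Python's radon) are far more thorough, so treat this purely as illustration:

```python
import ast

# Node types that introduce a new independent path through the code:
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """1 + the number of decision points found in the snippet."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

snippet = """
def grade(score):
    if score >= 70:
        return "pass"
    elif score >= 50:
        return "resit"
    return "fail"
"""
print(cyclomatic_complexity(snippet))  # two decision points -> 3
```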


More often than not, we can translate qualitative measurements into numbers, for example, rating customer satisfaction with our product on a scale of 1 to 10, one being very unhappy, ten being very happy.


However, if we don't also capture the individual's verbal response as to why they are feeling particularly happy or unhappy, then the metric over time may become meaningless.


For example, we won't be able to address a downward trend in customer happiness unless we know why people are unhappy, and there is likely to be more than one reason.
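
As a small sketch of turning a qualitative survey into numbers while keeping the "why" attached (the field names, scores, and comments here are all illustrative):

```python
from statistics import mean

# Each response pairs the 1-10 score with the respondent's reason:
responses = [
    {"score": 8, "reason": "New search feature saves me time"},
    {"score": 3, "reason": "Login keeps timing out"},
    {"score": 7, "reason": "Mostly fine, but reports are slow"},
]

average = mean(r["score"] for r in responses)
unhappy = [r["reason"] for r in responses if r["score"] <= 5]

print(average)  # the quantitative trend
print(unhappy)  # the qualitative context behind it
```

Tracking only `average` would tell us the trend is down; keeping `unhappy` tells us what to fix.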


Also, "value" means many things to an organization, so we have to measure each aspect of it, that is, direct revenue, reduced costs, return on investment, and so on. Sometimes the value is easily quantifiable, for example, when a User Story directly produces revenue.


Other times it requires feedback from our customer, telling us whether what the team is building is meeting their needs.


Negative versus positive metrics

A negative metric is a performance indicator that tells us if something is going badly, but it doesn't show us when that same something is necessarily going well.


Take velocity, for example. If our velocity is low or fluctuating between iterations, this could be a sign that something isn't going well. However, if velocity is normal, there is no guarantee that the team is delivering useful software.


We know that they are working, but that is all we know. Measuring the delivery of valuable software requires a combination of other metrics, including feedback from our customer.


Therefore, velocity is only useful if it's being used to determine what the team is capable of during each Sprint. It will also aid in scheduling releases, but it is no guarantee that the product under development is on track to be fit for purpose.


Something we should also be aware of with metrics such as velocity is that the very act of focusing on them has the potential to cause a decrease in the performance we desire.


A common request to teams is to increase their velocity because we want to get more done. In this situation, the team will raise their velocity, but it won't necessarily increase output.


Instead, a team may conclude that they were too optimistic in their estimates and decide to recalibrate their Story Points. So now a story that would have been five Story Points is eight Story Points instead. This bump in Story Points means they will get more Story Points done in the same amount of time, increasing their velocity.


In my experience, this isn't something done deliberately by a team. Instead, it's done because attention was drawn to a particular measurement, causing the problem to be analyzed and corrected. Unfortunately for us, negative metrics aren't the ones we should be poking around with.


It may seem an obvious thing to say, but to get more work done, we need to do more work. If the team is already at capacity, this won't be possible. However, if we ask them to increase their velocity, they can do that without achieving any more output.


To avoid this scenario, instead, think about the outcome you're trying to reach and then think of ways to improve that. For instance, if our interest in increasing productivity is because we want to release value sooner, we can do this without raising capacity or putting pressure on our team. 


Other examples of negative metrics in software product development are:

Lines of code written: Shows us that our developers are writing code, but says nothing about the usefulness or quality of that software. Focus on this metric, and you will certainly get an increase in lines of code written, if nothing else.


Test code coverage: The percentage of code covered by automated tests. Shows that tests have been written to run the code, but there’s no guarantee of the effectiveness of those tests in terms of preventing bugs.


Examples of metrics that we could focus on:

  1. Value delivered: Is the customer getting what they want/need?
  2. The flow of value: How much and how often is value delivered?

Code quality: Multiple measurements which focus on two critical characteristics of our software:

  1. Is it fit for purpose?
  2. How easy is it to maintain?


One fit-for-purpose quality that our customer should care about is the number of bugs released into the wild. This includes bugs that result from a misunderstanding of requirements. The further down the value stream we find bugs, the more expensive they will be to fix. Agile methods advocate testing from the get-go.


The happiness of our team(s): Happy teams are working in a way that is sustainable, with no extended hours or weekend work. Standards and quality will be high. Our team will have a good sense of satisfaction.


In short, negative metrics have their place; however, used unwisely they can have unintentional side-effects and degrade the performance you're looking to enhance.



An Enhanced Release Burndown looks like this:

The two dotted lines plot the trajectory of these two groups of Story Points. The top dotted line represents our team's velocity: the number of remaining Story Points decreasing as work gets done.


The dotted line at the bottom represents the trend of work added to or removed from the backlog. The point at which the dotted lines converge gives the forecast for when the release will be complete.


One thing to note is that User Stories can be removed from the Product Backlog as well as added, and a User Story's size in Story Points can decrease as well as increase. All changes in scope are applied below the baseline.
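
The convergence of the two trend lines comes down to simple arithmetic. The sketch below assumes a steady velocity and a steady rate of net scope change, and all figures are illustrative:

```python
import math

def sprints_to_complete(remaining_points, velocity, scope_change_per_sprint):
    """Sprints until the two burndown trend lines converge.

    scope_change_per_sprint is the net Story Points added (positive) or
    removed (negative) from the backlog each Sprint.
    """
    # Effective burn rate: work completed minus work added per Sprint.
    burn_rate = velocity - scope_change_per_sprint
    if burn_rate <= 0:
        raise ValueError("Scope is growing as fast as work is completed")
    return math.ceil(remaining_points / burn_rate)

# 120 points remaining, velocity of 20, ~4 points of new scope per Sprint:
print(sprints_to_complete(120, 20, 4))  # -> 8
```

Note how adding scope stretches the forecast: with no scope change the same release would take only 6 Sprints.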


New information comes to light all the time in Scrum's fast-paced environment; we'll need to have a robust Product Backlog Refinement process to prevent estimates from going stale.


We can use this approach to forecasting in any situation where we have a bundle of User Stories that represent a milestone when complete. For example, we could also use an Enhanced Burndown chart for tracking when a single feature, or Epic, will complete.


So, what are the software qualities we should aim for to remain Agile? And how should we measure them?

The simplest way to measure them is first to set some standards for our team to follow. Record these somewhere visible and easily accessible to the team; the ideal spot is in the team's workspace, alongside the Scrum Board.


Feedback on how we're progressing with quality standards is essential, and this can originate from many sources, such as peer review.


We should set thresholds, so if one or more of our desired qualities drops too low, the team is warned to take action. There are some handy tools that monitor code quality automatically; examples include Code Climate and SonarQube.


These are some aspects of quality we should be monitoring:

Clear: Clear code often comes down to the team's coding conventions. We should aim for consistency. A few examples include:

  1. Keep function names short and meaningful.
  2. Make good use of whitespace and indentation.
  3. Keep lines of code to fewer than 80 characters.


Aim for a low number of lines of code per function—use functional decomposition, reduce conditional logic to a minimum, and so on.
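
As a small illustration of functional decomposition: three short functions, each with one job and minimal branching. The order-processing domain here is hypothetical.

```python
def is_valid(order):
    """An order needs at least one item and must be paid for."""
    return bool(order.get("items")) and order.get("paid", False)

def total(order):
    """Sum of price x quantity across the order's items."""
    return sum(item["price"] * item["qty"] for item in order["items"])

def process(order):
    """The top-level function reads as a summary of the steps."""
    if not is_valid(order):
        return None
    return total(order)

order = {"paid": True, "items": [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]}
print(process(order))  # -> 13
```

Each function stays short and nameable, so the conditional logic never nests more than one level deep.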


Simple: This is a two-parter:

1. We apply the Agile principle of Simplicity - the art of maximizing the amount of work not done - and only write code that we need now.


2. We make refactoring part of our team's DNA. As the saying goes (often attributed to Mark Twain, though it originates with Blaise Pascal), "I didn't have time to write a short letter, so I wrote a long one instead." The point is that to make something succinct and meaningful, we have to put in time and effort; this applies to our software too.


Well-tested: A broad spectrum of tests covering unit, integration, and UI testing. We should aim for good code coverage and well-written, easily executed tests. If we want to maintain confidence over time, our test suite should be automated as much as possible.


Bug-free: Tests are only one part of the equation, as they can only demonstrate the presence of bugs, not their absence. To aim for a zero-defect rate, we have to focus on clear and straightforward code, because one of the biggest causes of bugs is code complexity. We'll take a look at this in more detail in the next section.


Documented: We should provide just enough documentation so that those who follow us, including our future selves, can pick up the code with a sound understanding of our original intent. Frankly, the best way to do this is through tests, because they are "living" specifications of how our code works.


Better still, we can write tests using Test-Driven Development (TDD), because this gives us specifications up front before we write code. We'll talk more about TDD in the next blog. Besides tests, we should include code comments, README files, configuration files, and so on.


Extensible: Our software should be easy to build on and extend. Many will use a framework to give it some foundational structure. If we are using a framework to help guide extensibility, then we should adhere to the practices advocated by that framework as much as possible to ensure consistency.


Performant: Performance is key to several qualities, including usability and scalability. We should have baseline measures such as response times, time to the first byte, and so on, which we can use to ensure our software remains useful.


Qualitative metrics

Defining what success looks like

When working out what to measure, one of the simplest things our team can do is define what success will look like. In doing this, we will identify measurements that we can use to tell us if our team is on the road to a successful outcome.


Defining our success

In Bootstrap Teams with Liftoffs, we discussed defining success using an activity called Success Sliders.

In the Success Sliders activity, we asked the team to consider seven different sliders which represent different facets of success: Customer Satisfaction, Team Satisfaction, Deliver Value, Deliver on Time, Deliver on Budget, Deliver All Defined Scope, and Meet Quality Requirements.


These particular characteristics focus on delivery in a project setting. In a long-lived product environment with iterative/incremental delivery, these could be redefined, for example:

  1. On time becomes early and often
  2. On budget becomes cost-effective
  3. The defined scope becomes useful software or satisfied users


However, all teams are different, and these particular factors may not apply to our team or its mission. So, in this section, we are going to take a look at a team activity that is used to define our own set of success factors.


  1. Activity: What defines our success?
  2. What we'll need: A whiteboard, whiteboard markers, post-it notes, and Sharpies
  3. Setup: A large table that the whole team can fit comfortably around
  4. Remember: Set a time box before this activity starts


Follow these steps to define what success will look like:

1. On the whiteboard, write the headline question: "How will we define our success?"

2. Pass out post-it notes and Sharpies to each team member. Tell them they're going to do some silent brainstorming. Ask them to answer the headline question and write one item per post-it note.


3. Answer any questions, and when everyone is comfortable, set the time box to 15 minutes and start.


4. When 15 minutes is up, assess the room and determine if anyone needs any more time. If everyone is ready, get the team to take it in turns to post their post-its on the whiteboard. Ask each team member to read their post-its out loud as they do so.


5. There will likely be some duplicates and similar themes. Ask the team to come up to the board and do some affinity mapping, which involves grouping related ideas together. Move similar post-it notes next to each other, then circle the group and give it a name that encompasses the fundamental concept.


To create our set of Team Success Indicators, we used the post-its within each group to inspire us to write two statements, one positive and one negative, for each indicator. 


Using our Team Success Indicators

The team was learning a whole bunch of new technologies at once while being put under pressure to deliver by their stakeholders. We conducted a retrospective and used it as an opportunity for the team to define actions they felt would mitigate this.


The larger arrows indicate a gap where we didn't record our team's health; this was because we were all on holiday.


This particular dashboard is online, and so are our team health surveys, so we link through so people can read the comments. If we were doing this using a physical workspace, we would post the individual comments on our dashboard for everyone to see.


This process is based on an idea from Spotify, which I've modified slightly from their original. If you'd like to create your own version of this, you can use Spotify's online kit to get you started:


User Happiness Index

"How satisfied are our users?" is a question we need to ask ourselves often. It is one measurement we can use to determine if we're delivering useful software.


There are many ways to foster engagement with the people for whom we're building software. Here are a few suggestions:


For direct feedback we could:

Ask for star ratings and comments, similar in concept to the Apple and Google app store rating systems. We could ask for these via our product, or via a link sent in a message or email.


Observe our user while they use our software. We could ask them to describe what they are currently doing so we can hear the process they have to go through.

Survey a group; for example, each time we release a new feature we could wait for a while to ensure uptake and then poll customers to gauge their impressions.


Carry out business and customer surveys in person, if possible. This is an excellent way for us to assess the satisfaction of our client.


Capture feedback from our customers using a tool such as UserVoice.


For indirect feedback we can:

  1. Look at all calls that have gone to the service desk regarding our software (number and nature)
  2. Use analytics to find out the level of engagement our user group has with particular features