Agile Software Development


What is Agile Software Development?

How we build and deliver software makes a big difference to whether we successfully give our customers something they need. We'll look at several methods, which can be implemented individually or combined, to help us deliver in a smarter way. In this blog, we explain Agile software development with examples.

  1. Implementing incremental delivery in Agile
  2. Working with software in small, manageable chunks
  3. How to make use of inspecting and adapting in your Scrum ceremonies
  4. Introducing some Lean thinking to improve flow
  5. Systems thinking: Optimizing the system as a whole, not locally
  6. Changing our workflow by managing the work in progress
  7. Developing a mindset for continuous process improvement
  8. Adopting Lean Startup principles to validate product ideas sooner
  9. Build, Measure, Learn: Learning rapidly by doing and failing fast

Implementing incremental delivery in Agile

 

Working software is a term we use to describe software that is working as intended; that is, it has met all of its acceptance criteria and our Definition of Done (DoD), and is either waiting to be shipped or has already been shipped. The concept of working software is intended to solve several problems we often run into in the software industry:

 

It moves the focus from documentation and other non-functional deliverables to what we're being paid for: software. The intention of the Agile value of working software over comprehensive documentation isn't to say documentation isn't needed; it's about maintaining the right balance. We are, after all, a software product development team, not a documentation team.

 

To illustrate this, if we deliver technical design documentation to our client, it will probably make little sense to them. And unless they are technical, it won't give them any understanding of how their product is evolving. Working software, however, is tangible, much nearer to its final state, and will provide them with a real sense of progress.

 

We want to deliver increments of working software so that we build up the product as we go. If we do it smartly, parts of the product can be put to use before the rest becomes fully functional.

 

Agile emphasizes working software because it lets our client preview how things could be and gives them an opportunity to provide relevant feedback based on the real product.

 

However, with the right software delivery approach, we can reduce the risk of building the wrong thing. To do this, we have to think beyond just using iterations; we have to put some thought into how we slice up our product for delivery.

 

For instance, just completing part of the data layer of an application, without any implementation of the user interface elements that a user will interact with, doesn't deliver anything of real use for our business.

 

Without a technical understanding of how software is built, our customer will find it impossible to imagine how the software might look in its final state and will be unable to give any real feedback.

 

The sooner we can deliver usable software, the sooner we can get feedback on whether it is useful. This is what we mean by tightening feedback loops. Let's look at how to do this.

 

Working with software in small, manageable chunks

The easiest way to address the risk of people not knowing what they want until they see it is to deliver usable increments of working software to them as soon as possible.

 

A crucial aspect of this early-and-often approach is that we will get meaningful feedback that we can incorporate back into the ongoing build and delivery.

 

Using this approach to software delivery means that everyone, including our customer, has to understand the caveats. They won't be seeing or using software that is necessarily complete or has had its final polish applied.

 

In fact, they will see software as it evolves in both function and form. User experience, which includes user interactions, process flow, and graphic design, will also be iteratively applied as we learn more.

 

How we break down the requirements into small manageable chunks is the first step to achieving this. It's vital that we first deliver the parts of our product that we most want feedback about. These are often our core business processes, the things that directly or indirectly make us money, and therefore involve the most risk.

 

To achieve this, we should stop thinking of building up functionality in layers.

This method may seem sensible: by starting at the backend and developing the data store with its associated business logic, we create a foundation to build upon. However, it's something of a construction industry analogy, and it doesn't apply well to how we make software.

 

Instead, to deliver incrementally, we have to think of each increment as end-to-end functionality that provides some business value to our customer. We often refer to this as vertical slices of software, because every small slice carves through each application layer, delivering aspects of each. The concept of vertical slicing is shown in the following diagram:

 

We include the User Interface layer so that our client has some way of interacting with our software and can make real use of it. If we think of each vertical slice as a feature, we can carve up our features in a way that will make sense as we build out the product incrementally.

 

For instance, if we are building an online shop, the first vertical slice could be the display of items we have for sale. As well as showing items for sale, we'll probably need some way of managing their details. Again, we can build something rudimentary to start with and then incrementally deliver enhancements as required.

 

The next vertical slice could be either the checkout process or the search facility. Any of these features can be completed independently of each other, but it probably makes sense to build them in a particular order. For instance, without items for sale the search facility won't work, nor will the checkout process.
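
To make the idea of a vertical slice concrete, here's a minimal sketch in Python. The shop, items, and layer boundaries are all invented for illustration; the point is that even a tiny feature touches every layer end to end:

```python
# A toy vertical slice for "display items for sale".
# Each layer gets just enough to make the feature usable.

ITEMS = [  # data layer: hard-coded to start; a real store can come later
    {"name": "Teapot", "price_pence": 1499, "in_stock": True},
    {"name": "Mug", "price_pence": 499, "in_stock": False},
]

def items_for_sale(items):
    """Business logic layer: only stocked items are shown."""
    return [item for item in items if item["in_stock"]]

def render(items):
    """User interface layer: the simplest listing a customer can use."""
    return "\n".join(
        f"{item['name']} - £{item['price_pence'] / 100:.2f}" for item in items
    )

print(render(items_for_sale(ITEMS)))  # Teapot - £14.99
```

Each subsequent increment (item management, search, checkout) would carve a similar slice through the same layers, rather than completing any one layer in full.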

 

If we build up our feature set in a way that makes sense, we get valuable feedback from our customer that we're building the right thing. And as we deliver further increments of working software, they can use these to determine if the software we're developing will meet their needs.

 

We also get to validate our technical approach by asserting whether or not we're building the thing right. Once we have an end-to-end delivery taking place, we can start to iron out problems in our integration and deployment processes. We will also get to learn sooner how we will configure and manage our production environment.

 

When building any product, we take the Goldilocks approach—not developing too much, or too little, but getting it just right. This, in particular, should influence our decisions in terms of architecture and DevOps; we do just enough to get our current feature or feature slice working. Architecture and infrastructure have to be built incrementally too.

 

We'll talk more about how to prioritize features later in this blog, in the Adopting Lean Startup methods to validate ideas section.

 

How to make use of inspecting and adapting in your Scrum ceremonies

In the Scrum framework, there are multiple opportunities to inspect and adapt:

  1. During Sprint Planning, as the team determines how they will implement the Sprint Backlog
  2. At the Daily Scrum, when new information is uncovered during the implementation of a User Story
  3. When a User Story is completed, or a new feature is delivered, and we check in with our Product Owner or business owner to verify it is as expected
  4. At the Sprint Review, when our stakeholders are invited to give feedback
  5. During the Sprint Retrospective, when we can uncover what is and isn't working well, and we create a plan to take action and change things up

 

These are all checkpoints for us to consider if new information has come to light since the last time we met and decided what to do.

 

The importance of User Experience (UX)

Along with our Product Owner, our User Experience (UX) specialists are usually the first people to engage directly with our key stakeholders to determine what is wanted.

 

The job of our UX professionals is to help our team turn what our customer wants into something our customer needs. If the customer's needs aren't obvious, a UXer has tools in their toolbox that can be used to elicit further information.

 

Their toolset includes creating wireframes, semi-interactive prototypes using tools such as InVision, or full mockups using HTML/CSS and JavaScript. Each gives an experience close to the real thing and helps our customers share and understand their vision.

 

The UX professional, working closely with the Product Owner, is responsible for two aspects:

  1. What is required
  2. How it will work best

 

UX covers all aspects of the user interface design. It can be broadly summed up as interaction design and graphic design with a smattering of psychology, but that is just the tip of the iceberg.

 

Although the two disciplines overlap to a certain degree, interaction design concerns itself with user interactions and the process flow within the product, while graphic design concerns itself with the presentation, particularly information hierarchy, typography, and color.

 

A UX specialist has to have a good ear, patience, and be willing to go through multiple iterations to get feedback.

 

If we're to create software that is intuitive and straightforward to use, we have to start with the UX because it drives how we build the application.

 

Different user experiences will result in different application architectures. We create software for people; we need to get feedback from those people about whether we're building the right thing as soon as possible.

 

Remember, it's not sufficient to just make software how we think it should work. Instead, we need to turn what the customer wants into something the customer needs.

 

Shifting left

Shifting left is the concept of incorporating specific practices, which have traditionally been left until late in the process, much earlier in our workflow.

 

System integration, testing, user experience, and deployment can all profoundly affect the outcome and success of our product; they all validate the product's viability in different ways.

 

If we start thinking about them sooner in our development lifecycle and we start to build these strategies as we go, we have a much better chance of success and of avoiding any nasty surprises.

 

By moving these practices towards the beginning of our product development life cycle, we will start to get feedback sooner that these aspects of our product will work.

 

For example, in a linear development process such as a gated waterfall approach, we often leave a full integration and system test until the end of the project. This creates several problems:

 

Taking the "big bang" approach to system integration often uncovers many false assumptions that were made, and therefore many changes will be required. It will be costly, with days or even weeks of lost time.

 

A large-scale test at the end of the development cycle will often find numerous problems that need to be fixed. Sometimes it will discover fundamental issues that require substantial reworking or even going back to the drawing board.

 

In the same way, I've seen user interaction and graphic design left until the end of the development cycle. These specialists are often brought in late to the project, with the hope that they will make things work better or make things look nice.

 

Unfortunately, if making things work better or look nice involves significant changes to the user interface, it will often also require a substantial shift in the architecture of the application.

 

Simply put, by working on the things that could have a profound effect on the outcome of the product sooner, we reduce risk.

 

The following graph shows the relative cost of fixing deficiencies depending on where they are found in the software development lifecycle:

 

Defects in software are anything that doesn't work as intended or expected, whether it's a miscommunicated, misunderstood, or poorly implemented requirement, or a scalability or security issue.

 

We uncover defects by moving increments of working software through our system to "done" as quickly as possible. To do this, we have to incorporate all aspects of software development, often starting with UX, as well as testing and deployment strategies from the outset.

 

We don't expect to build all of these in full the first time around. Instead, we plan to do just enough so that we can move to the next increment and the next, iteratively improving as we go.

 

Shifting right

As well as shifting left, you may be wondering if there is such a thing as shifting right. Yes, there is; it's where we start to think in terms of maximizing the value delivered, and delivering that value to the client as soon as possible.

 

Introducing some Lean thinking to improve the flow

So far, we've looked at breaking work down into small chunks. We've also looked at how we can better inform the work that we carry out by shifting left the activities that we have traditionally neglected until the end of our development cycle.

Now we will apply some Lean thinking to see how we can improve the flow of work.

 

In Agile Software Delivery Methods and How They Fit the Manifesto, we discussed the key tenets of Kanban/Lean, which are:

  1. Make the work visible so that we can inspect and adapt.
  2. Break down work into similar size units, to smooth flow and reduce the "waste of unevenness."
  3. Limit our work in progress so that we focus on improving our end-to-end flow.

 

In the following section, we talk specifically about how we enhance flow through our system, but first let's try an activity, as there is nothing quite like a hands-on demonstration of how this works:

 

SETUP: This game is best played seated around a long table. Arrange the team around the table. Each team member should be easily able to pass coins to the next.

 

This game is played in three rounds; each round, the coins start with the first person in the line. The coins have to pass through the hands of every team member, and each coin has to be flipped one at a time before it can be considered "processed" by that team member. The round ends when the last person in the line has flipped all the coins.

 

Round one: the batch size is 10. Give all 10 coins to the first player. They flip each coin one at a time until all 10 have been flipped, and then pass the pile to the next player. The next player flips all the coins one at a time until all are flipped, and then gives the pile of 10 to the next player.

 

And so on. Start the timer when the first coin of the first player is flipped. Stop the timer when the last coin of the last player is flipped. Record the time it took.

 

Round two: the batch is split. Repeat, except give the coins to the first player in two batches, one of four and one of six. Flip the coins in the batch of four first. Once the four have been flipped, pass them on to the next player.

 

The next player then flips the stack of four coins one by one. Meanwhile, the first player flips all the coins in the bunch of six. Once they've flipped each coin, they pass the batch of six on to the second player.

 

Again, the timer starts when the first coin is flipped by the first player and stops when the last coin is flipped by the last player. Record the time it took.

 

Round three: the batch size is one. For the final round, pass all the coins to the first player. Once again, they flip each coin one at a time, except as soon as they've flipped one coin, they pass it to the second player.

 

The second player can then flip the coin and pass it on to the third player. Again, the timer starts when the first coin is flipped by the first player and stops when the last player flips the last coin.

 

Play the game first, and then we'll discuss the possible outcomes in the results section.
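
If you can't get a group together, a small simulation gives the same intuition. This is a rough sketch, assuming ten players and one time unit per flip (both invented for illustration), of the pipeline the coin game creates:

```python
def total_time(batches, players=10, flip_time=1):
    """Time for the last player to finish the last batch. Each player
    flips every coin in a batch before passing that batch on."""
    released = [0] * len(batches)  # when the previous player released each batch
    for _ in range(players):
        free_at = 0  # when this player finishes their current batch
        for j, size in enumerate(batches):
            start = max(released[j], free_at)  # batch has arrived AND player is free
            free_at = start + size * flip_time
            released[j] = free_at  # handed to the next player
    return released[-1]

for batches in ([10], [4, 6], [1] * 10):
    print(f"batches {batches}: {total_time(batches)} time units")
# batches [10]: 100 time units
# batches [4, 6]: 64 time units
# batches [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]: 19 time units
```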

 

The coin game results

So, how did we get on with the coin game activity? What did we observe?

During the first round with a batch of 10, we'll have noticed that nine people were sitting around doing nothing, while one person flipped coins.

 

During the second round, with the batches of four then six, we'll have observed that both batches were faster than the batch of 10. We'll also have seen that the batch of four moved faster than the batch of six. If the batch of six had been played first, it would have slowed the batch of four down to its pace.

 

After the third round, we'll have noticed that, as the batch size comes down, the time to complete the work drops quite dramatically. That's because the utilization of every person in the line increases as we reduce the batch size.

We've optimized the system for the coin game; now it's time to discuss the theory.

 

Systems thinking – Optimizing the whole

When we're part of a complex system, such as software development, it's easy to believe that doing more in each phase of the SDLC will lead to higher efficiency.

 

For instance, if we need to set up a test environment to system-integration-test a new feature, and this takes a while to make operational, we'll be inclined to increase the size of the batch of work to make up the time lost setting up the environment.

 

Once we start to batch work together, and because it's complicated and requires so much setup time, the thinking is that it won't do any harm to add a bit more to the batch, and so it begins to grow.

 

While this can lead to local efficiency, it becomes problematic when we discover an issue because any reworking will cause delays. And while this may only affect one particular part of the batch, we can't easily unbundle other items, so everything gets delayed until we fix the problem.

 

As we saw in the coin game, large batches take longer to get through our system. When a problem is discovered inside a batch, this will have a knock-on effect both up and downstream in the process. Reworking will be needed, so people back upstream will need to stop what they are doing and fix the problem.

 

Meanwhile, people downstream are sitting around twiddling their thumbs, waiting for something to do because the batch has been delayed for reworking.

 

Gated approaches with handovers cause a big-batch mentality because our local efficiency mindset makes us believe that doing everything in one phase will be more efficient than doing it incrementally. The reality is that doing everything perfectly the first time is just not possible.

 

In a complex system, when you optimize locally, you tend to de-optimize the system as a whole. Instead, we need to break our big batches of work down into smaller chunks and focus on end-to-end flow through our system, like in the coin game—the smaller batches of coins flow much more evenly.

 

There is, of course, a balance to be struck; we need to be realistic regarding what is possible. The smallest chunk possible is a discrete piece of functionality that can be delivered and from which we can gain feedback on its applicability. This feature, or slice of a feature, shouldn't be so large that a work item sits in progress for weeks, or months even.

 

Instead, break items down so that they deliver incremental value. The feedback we get increases in value as it travels down the delivery pipeline.

 

The more input we gain, whether it's directly from our customer through a working software demonstration, through integration with other parts of our system, or from an actual deployment, the better our chances of success.

 

For example, why hold off deploying to the production environment until you have a critical mass? Instead, use feature flags to deploy, but keep the feature switched off in production until it's ready for release.

 

We can do this to get critical feedback about system integration and the final step of the deployment process in production. Plus, with the right infrastructure tweaks to enable us to switch on certain features for a selective audience, we can test our new feature in production before we go live.
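
As a sketch of how this might look, here's a minimal feature-flag check. The flag store, names, and groups are all hypothetical; real systems typically use a dedicated flag service, but the principle is the same:

```python
# Minimal feature-flag sketch: the new code ships to production,
# but only runs for a selected audience until it's switched on for all.
FLAGS = {
    "new_checkout": {"on": False, "allowed_groups": {"beta-testers"}},
}

def is_enabled(flag: str, group: str) -> bool:
    cfg = FLAGS.get(flag, {})
    return cfg.get("on", False) or group in cfg.get("allowed_groups", set())

def checkout(user_group: str) -> str:
    if is_enabled("new_checkout", user_group):
        return "new checkout flow"       # exercised in production by beta testers
    return "existing checkout flow"      # everyone else is unaffected

print(checkout("beta-testers"))  # new checkout flow
print(checkout("customers"))     # existing checkout flow
```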

 

Changing our workflow

When using Scrum, the Sprint Backlog is often seen as the batch size. A pattern we often see, and something that we want to minimize as much as possible, is a Scrum Board that looks like this:

 

You'll notice that almost every single User Story is in play, which probably means that each member of our team is working on something different. This is often a symptom that our team isn't working together, but as individuals, within their specializations—user experience designer, graphic designer, developer, frontend developer, tester.

 

Each specialist will go through the tasks of a User Story to determine if there is some work they can do. Once they've moved through all stories on the board and consider their tasks "done," they then look at the Product Backlog to see if there is any further work that fits their specialty.

 

In this scenario, when a software developer considers the coding is complete for one particular User Story, they then move onto the next User Story for the next piece of coding, and so on. This practice causes a few knock-on effects:

 

Handoffs: Handing work over between the specializations, in this case between the software developer and the reviewer or tester, is wasteful in terms of the knowledge and time lost during the transfer.

 

This also includes a transfer of responsibility where, like a game of software development tag, the last person who touches the software becomes responsible.

 

Interruptions: The team member will need to be pulled off what they're currently doing to fix any problems brought up by either testing or review. There will likely be several iterations of reviewing, testing, and bug fixing.

 

Waiting in queues: Queues of work start to form. For instance, all the coding is getting done super quickly because all of the developers are so busy coding.

 

So busy, in fact, none of them are stopping to review each other's code or fix any problems the testers have raised. This leaves each of those User Stories open, with a status of "in progress" on the board, when in fact nobody is working on them.

 

Multitasking: Despite all best intentions, the lack of synchronization between team members will cause people to be pulled from one task to another to perform handovers or reworking.

 

When a team works this way, if we lay out each task in sequence it will look a little like a production line, except instead of having just one production line, we have one for each User Story.

 

The previous person in the line believes they've completed their work and hands it over to the next person to do theirs; they then start the next User Story and open a new production line.

 

However, making software is not a linear process, and it won't be long before each team member is being pulled from one User Story to another.

 

The impact of handoffs is especially apparent in the test role. Testers are often the last people in the process before deployment, and therefore the ones under the most pressure: pressure to get testing done and pressure not to find problems (or if they do, depending on how significant the problems are, to accept that they could be dealt with in another round and to let the software go out in a compromised form).

 

Thinking in terms of job specializations tends to put us into boxes. It makes us think that business analysts only gather requirements, software developers only write code, and testers only test it. This isn't team thinking; this is individual thinking.

 

At worst, it causes an abdication of responsibility, where people only feel responsible for their part of the User Story, rather than the User Story as a whole. This approach is little more than a mini-waterfall. It still shares the same problems associated with cascading work, albeit on a smaller scale.

 

To accommodate "vertical slices" of useful software, we have to change our workflow. Each User Story becomes the whole team's responsibility. Remember, Scrum intends us to work together to get the ball across the line. The distinction between frontend, backend, UX designer, and tester starts to blur.

 

To focus on flow, we have to consider the system as a whole, from the point when we start to work on an item to the point where we deliver it. We look to optimize the end-to-end delivery of items in our flow through the close collaboration of our roles.

 

You might hear end-to-end flow also referred to as cycle time. The cycle time for a User Story starts when work begins on it, and is the number of days to completion, or "done."

 

One way to improve team cohesiveness and focus is by limiting the amount of Work In Progress (WIP). How do we do this? There are two schools of thought:

 

Reduce work in progress limits to a minimum; allow the team to get a feel for flow over several items of work, then, if necessary, increase WIP gradually to see whether it increases or decreases flow.

 

Don't set any WIP limits and watch what naturally happens; if a logjam starts to form, reduce the WIP limit for that particular column.

 

The aim of limiting WIP is to reduce the amount of multitasking any one team member has to do. As we've already mentioned in the blog Agile Software Delivery Methods and How They Fit the Manifesto, one definition of multitasking is "messing multiple things up at once."

 

This isn't just an approach we use for Kanban. We can apply this to Scrum as well, by merely applying a WIP limit in our in-progress column.
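
As an illustration, a WIP limit is just a rule the team enforces when pulling work; the limit, column names, and stories below are invented:

```python
# A toy WIP-limit check for a board column.
WIP_LIMITS = {"in_progress": 3}

board = {
    "to_do": ["story-21", "story-22"],
    "in_progress": ["story-12", "story-14", "story-17"],
    "done": ["story-08"],
}

def can_pull_into(column: str) -> bool:
    """A new item may enter a column only while it is under its WIP limit."""
    return len(board[column]) < WIP_LIMITS.get(column, float("inf"))

print(can_pull_into("in_progress"))  # False: finish something before starting more
```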

 

We can start to measure flow by annotating our User Stories with useful information. Apply the start date to mark when work began. Add the finish date when the job completes.

 

The number of working days between the start and end date is the cycle time of the story. At each Daily Scrum, if the story is in progress but isn't being worked on, mark the story card with a dot. This shows the wait time in days. By reducing delays, we will also increase flow.
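
Here's a small sketch of how those annotations turn into a cycle-time measurement; the dates and the number of waiting-day dots are made up:

```python
from datetime import date, timedelta

def cycle_time(start: date, end: date) -> int:
    """Working days between start and finish, inclusive (Mon-Fri only)."""
    days, day = 0, start
    while day <= end:
        if day.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        day += timedelta(days=1)
    return days

# A story card annotated at the Daily Scrums.
story = {"start": date(2023, 5, 1), "end": date(2023, 5, 10), "wait_dots": 3}
ct = cycle_time(story["start"], story["end"])
print(f"cycle time: {ct} working days, {story['wait_dots']} of them spent waiting")
# cycle time: 8 working days, 3 of them spent waiting
```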

 

There are team configurations that will help us naturally limit WIP:

 

Pairing: Two Development Team members work together on one User Story, taking it end to end across the board.

 

Swarming: A temporary burst of the whole team's power, used to get over humps. For instance, to move all the User Stories waiting to be tested, everyone rolls up their sleeves and starts testing. Once we've gotten over the hump, the team then tends to return to their standard configuration.

 

Mobbing: The whole team's power applied to one User Story at a time. Unlike swarming, the team tends to stay in mob configuration to see a piece of work through from start to end.

Some teams work in mobs permanently, giving them a WIP of one User Story at a time. 

 

In each of these approaches, people work much more closely together. Software development is about end-to-end delivery. Cross-pollinating and understanding each other's skill sets by literally working together creates better software, and most likely sooner. Working this way also avoids handoffs and context switching as much as possible.

 

This is a move from optimizing our "resources" to optimizing our workflow, which at the end of the day achieves what we wanted—increasing the flow of value we deliver to our customer, which in our case is in the form of working software.

 

Kaizen and developing a small, continuous improvement mindset

We first discussed Kaizen in Agile Software Delivery Methods and How They Fit the Manifesto. It is a Japanese word meaning "change for the better." It is commonly taken to mean "continuous improvement" and was made famous through its incorporation in the Toyota Production System (TPS).

 

On the Toyota production lines, sometimes things go wrong. When they do, the station where the problem has occurred gets an allotted time to fix it. If they exceed that time buffer, the rest of production will be affected, and the entire line will need to stop.

 

At this point, workers, including the managers, gather at the problem site to determine what seems to be the problem and what needs to be done to fix it.

 

All too often, when a problem occurs, we see the symptoms of the problem, and we try to fix it by fixing only the symptoms. More often than not, the cause of the problem goes deeper, and if we don't fix more than just the surface-level observations, it will occur again.

 

So, if the production line starts up again in this situation, it won't be long before the problem recurs and it needs to be shut down once more.

 

To solve this problem, Toyota introduced Root Cause Analysis, a set of techniques for getting to the bottom of why something has happened.

 

I'll explain the simplest of these, Five Whys analysis, using a real-world example.

At my current place of work, we encourage our teams to take ownership of their deployments to production. This strategy doesn't come without risk, of course, and the teams have a lot to learn in the DevOps space.

 

Fortunately, we have coaches who work explicitly in this area, helping us come up-to-speed. Even so, sometimes the teams can be overly cautious.

 

That is why we introduced the concept of Fail Cake, inspired by a ThoughtWorks team who did something similar; we could see that we needed to encourage the team to take strategic risks if they were going to advance their learning.

 

Here is the story of Fail Cake and learning fast.

 

Fail Cake

Several of the teams I worked with identified specific circumstances which would trigger a Kaizen event (Continuous Improvement Meeting). These triggers included one particularly dire situation: releasing a priority-one bug to production.

 

If this happened, they would fix the problem immediately. Fortunately, they had a deployment strategy that meant they could usually quickly recover if they couldn't isolate the problem straight away.

 

Once the problem was fixed, the team would buy a cake and share it with everyone involved. While eating the cake, they'd perform a root cause analysis to determine what went wrong in their process and how they could prevent it ever happening again.

 

The purchase and eating of the cake were deliberate; it was intended to give the team space to reflect and also encourage others involved in the incident to attend and give feedback. As a result of following this process, they very rarely released any priority-one bugs into production.

 

Here's a real example of one of our teams reflecting on how to improve their process using the Five Whys method.

 

Root cause analysis with the Five Whys method

"A relentless barrage of 'whys' is the best way to prepare your mind to pierce the clouded veil of thinking caused by the status quo. Use it often." - Shigeo Shingo

 

Background: The team noticed that User Stories were taking longer to complete; they wanted to understand why. We had the presence of mind to understand that we'd been unconsciously increasing our batch size, so we conducted a Five Whys root cause analysis to try to understand it more:

 

Why were we increasing our batch size? Because the time the weekly release train demanded meant we might as well bundle more things together.

 

Why was the weekly release cycle so costly in terms of time? Because a business-critical part of our system had been written by someone else and had no automated tests, so we had to test it manually.

 

Why didn't we write automation tests? We had tried, but certain parts of the code were resistant to having tests retro-fitted.

 

Why didn't we do something different from a full regression test? We had tried multiple different strategies; plus, a recent increase in the number of teams meant we could spread the regression test out amongst the group and rotate turns. However, this only spread the pain rather than mitigating it.

 

Why didn't we try a replacement strategy? We could for certain parts of the system; in fact, we had come up with several plans to do so. But we couldn't replace everything without making some changes to the existing code, so it would still require regression testing. Plus, the weekly regression sapped our time and energy.

 

After conducting the above analysis, our team decided to change up our approach in the following ways:

We would write an automation test suite for the parts of the application we could easily and reliably test. This would reduce the regression test effort. We would look at different strategies for manually regression-testing the rest so that we could find the optimal approach.

 

We would re-architect the system, designing it with the intention of making it automation-testable from the get-go. We would gradually move away from the existing architecture, replacing parts as we went, using Martin Fowler's Strangler Application pattern.
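
For a flavor of how the Strangler Application pattern works, here's a toy routing facade; the paths and handlers are invented, and a real implementation would usually sit in a proxy or gateway:

```python
# Routes already migrated to the new system; everything else stays legacy.
MIGRATED = {"/search", "/catalogue"}

def legacy_app(path: str) -> str:
    return f"legacy system handled {path}"

def new_app(path: str) -> str:
    return f"new system handled {path}"

def facade(path: str) -> str:
    """Route each request, strangling the legacy app one path at a time."""
    handler = new_app if path in MIGRATED else legacy_app
    return handler(path)

print(facade("/search"))    # new system handled /search
print(facade("/checkout"))  # legacy system handled /checkout
```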

 

Adopting Lean Startup methods to validate ideas

The Lean Startup method was devised based on the company start-up experiences of its author, Eric Ries. It aims to shorten product development times by using a hypothesis-driven, incremental delivery approach combined with validated learning.

 

In the following section, we'll provide a brief introduction to Lean Startup thinking and how this might apply generally, not just to organizations that are in their start-up phase.

 

Build, Measure, Learn

We mentioned just now that Lean Startup is a hypothesis-driven approach; let's pull this apart a little for more understanding. The hypothesis-driven approach involves setting up an experiment to test out if a theory we have has legs or not. It could be as simple as writing a statement with the following template:

 

  1. We believe that by creating this experience
  2. For these people
  3. We will get this result
  4. And we'll know this is true when we see this happening
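
To make this concrete, here is the template filled in for a hypothetical membership feature, expressed as a record we could keep alongside the experiment; the feature, audience, and threshold are all invented:

```python
# A hypothetical filled-in hypothesis; every value here is invented.
hypothesis = {
    "belief": "a one-click sign-up form will lower the barrier to joining",
    "target_market": "visitors who read three or more articles a month",
    "outcome": "more of those visitors become members",
    "measurement": "sign-up conversion for this group rises above 5%",
}
```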

 

A hypothesis has four parts:

The belief: This is something that we think we know based on either evidence from our data, the needs of our customer, or a core competency (something we know how to do well, or are particularly good at)

 

The target market: Our audience for this particular hypothesis

The outcome: The outcome we believe we will get

The measurement: The metrics that will tell us if our outcome is taking us in the right direction, towards our business objective

Using the hypothesis template, we then build out enough of a product feature set so that we can test out our theory.

 

This is usually done incrementally over a number of iterations. This first cut of our product is known as the Minimum Viable Product (MVP) and is the smallest possible feature set that makes sense for us to validate with real-world users.

 

To do this, we take a feature-driven approach, where we prioritize certain features over others and get them to a viable offering state as soon as possible.

 

Prioritization is carried out using a release planning strategy that targets our primary, secondary, and tertiary user groups. Our primary user groups are the key users of our software product, the people who will tell us if our business idea is worth pursuing.

 

So we'll build software which targets their needs first. This allows us to focus our efforts and target the most important aspects of our product.

 

We don't have to fully develop a feature to make it viable; we can target certain segments of our market first and test it out before we further enhance it. The aim is to create "light" features which, although limited in their functionality, will allow us to validate our ideas as quickly as possible.

 

Our validated learning comes into play as we start to measure the success of the experiment to determine if we're on the right track. We use actionable metrics; these are metrics that we've set up to help inform us of what business actions we should take when certain measurements are recorded.

 

To illustrate, if we're testing a feature that drives new membership sign-ups, we need to measure exactly how many new members signed up as a result of using our feature.

 

If the non-member to new-member conversion rate is above a certain percentage threshold of people who used our feature, then we can most likely say it's a success, and we can carry on enhancing it further.
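
A sketch of what that actionable metric might look like in practice follows; the event log and the 5% threshold are invented for illustration:

```python
# Measure sign-up conversion for users who touched the new feature,
# then compare against a threshold agreed before the feature was built.
events = [
    {"user": "a", "used_feature": True,  "signed_up": True},
    {"user": "b", "used_feature": True,  "signed_up": False},
    {"user": "c", "used_feature": True,  "signed_up": True},
    {"user": "d", "used_feature": False, "signed_up": False},
]

exposed = [e for e in events if e["used_feature"]]
conversion = sum(e["signed_up"] for e in exposed) / len(exposed)

THRESHOLD = 0.05  # the agreed actionable threshold
action = "keep enhancing" if conversion >= THRESHOLD else "reassess or pivot"
print(f"conversion: {conversion:.0%} -> {action}")  # conversion: 67% -> keep enhancing
```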

 

Another example is when building a checkout process for our online shop; we are validating two key aspects of our product. Firstly, do our customers want to buy what we're selling? Secondly, do they trust us and our checkout process enough that they will complete a purchase?

 

We can test this out in a number of ways without building out the full product feature, providing we capture people's actions and their reactions to measure our success.

 

In the Lean Startup approach, this constant testing of our assumptions is known as the Build, Measure, Learn cycle. Lean Startup is a metrics-driven approach, so before we start building features, we need to think about how we are going to measure their success.

 

The trick is to use the MVP to learn what does and doesn't work. The measurements we take from the learning phase, and the insights that we generate from them, we then feed into the next increment.

 

If the measurements indicate we're moving in the right direction and adding value, we continue to build on our current ideas.

 

If the measurements indicate we're not moving in the right direction, then we assess what to do. If we determine that the experiment is no longer working, we have the option to pivot in a different direction and try out new ideas.

 

The aim is to build up the increments until we have a Minimum Marketable Product (MMP).

 

An example of a Lean Startup MVP

We can use the Lean Startup mindset whether we're setting out to build a new product or creating a new feature for an existing one. The core concepts remain the same: first, put together the hypothesis; next, create a release strategy that focuses the first release on the minimum core feature set we need to validate our idea.

 

We then build it and test it out with a user group that will give us constructive feedback. This group is often known as our "early adopter" group because they love using new technology, and if it solves a problem that they care about they won't be too concerned about the little "problems" an early product might have.
