Coding Challenges in Software Projects

Imagining human processes is hard, and that makes it very hard to design and code them without seeing them in action, which is essentially what planning a software project in advance asks us to do. Mistakes, misunderstandings, and omissions in the planning process lead to incorrect technical decisions, which in turn make the inevitable changes to correct those mistakes surprisingly expensive. This blog explains the coding challenges that arise in software projects.

 

The most obvious solution to the problem—more and better planning—tends actually to exacerbate the problem. You may be thinking that if more planning makes the problem worse, there’s potentially a radical way of avoiding the problem altogether. If so, then ten points to you for your perceptiveness. 

 

The sad truth about software projects


Let’s start with an example.

My first introduction to the wonderful world of software came in the form of a job as a junior code-monkey at a software agency. One of our regular clients was an insurance firm, for whom we built systems that allowed their call-center teams to provide renewal quotes and other similarly thrilling projects.

 

The firm’s latest flagship initiative was a partnership with a chain of high-end auto dealers where, at the point of purchase of a new car, the dealers would try to sell customers various forms of “premium” insurance that were entirely unrelated to car ownership. The customers would end up with some insurance they hadn’t even known they needed, the insurance firm would get some more business, the dealers would take a cut—everybody would win.

 

Of course, this was contingent on the dealers being able to tell the customers what insurance they were eligible for, and how much it would cost, and for that, they would need some software. The insurance firm had an in-house development team, but their time was blocked up overhauling their internal systems, so my consultancy was brought in.

 

We were given a set of requirements, for which we produced a bid, and once it was signed off a team of four of us got to work on building the quote generation website, codenamed Project Upsell.

 

That was when the problems began


First of all, because our software was going to interface with the insurance firm’s in-house pricing software, we needed to work closely with the insurance firm’s in-house development team, who knew how it worked. However, they resented our presence, because the higher-ups had a tendency to refer to us as the “crack troops” who had been “parachuted in” to “rescue” their “overwhelmed” team, and they tended to make such references in front of the in-house developers.

 

The next problem came when we were trying to set up a database of quotes so that we had a record of what quotes we had generated for which customers. It turned out we only needed a very simple database, and we had quoted the cost of the project on the premise that building it would be easy.

 

However, two weeks into the development process, one of the in-house developers peeked at the source code we were writing and raised an immediate concern to his superior: we weren’t using “EntityCapture”! This was deemed so serious that crisis teleconferences were arranged.

 

EntityCapture, it transpired, was a piece of enterprise software for which the insurance firm had bought a very expensive license a few years previously, for use in their in-house systems. It was designed to handle storing a certain sort of insurance-related customer information in a database, but with a bit of pushing and pulling you could just about use it to store other things. The downside was that it was phenomenally complicated to put in place.

 

We, having never heard of it, didn’t think to try to use it, and when we did hear about it we decided we definitely didn’t want to use it—it was sort of the equivalent of hooking up a tap in your bathroom to the hot water supply via the hydraulic system that powers construction site diggers: technically possible, but it’ll make your plumbing objectively worse, no matter how good the hydraulic system is at being a hydraulic system.

 

And that was when the wheels really fell off


It turned out that while the car dealers had enthusiastically agreed to the project in principle, the insurance firm hadn’t consulted them when it came to drawing up the requirements. It was only in the meeting at the end of the project that the dealership representatives saw what had been built, and realized that it differed fairly radically from what they had been expecting.

 

They had assumed they could offer quotes almost instantaneously; however, Project Upsell required them to guide the customer through seven screens’ worth of awkwardly personal questions about their health, income, and similar topics. Outraged, they complained that it would be impossible for a dealer to work this process into a car purchase without jeopardizing the original sale.

 

The meeting quickly descended into a shouting match between the dealers and the insurers, with us developers hiding behind our laptop screens and trying to think happy thoughts.

 

Eventually, after many gritted-teeth compromises, and months after the original delivery date, a new version of Project Upsell was launched. It wasn’t what anyone wanted, but it was the best that could be agreed upon by all parties and delivered anywhere near the original budget.

 

The launch went off with only a few hitches (the worst being that after launch no one took the payment system out of test mode, which meant that the first hundred or so customers who actually bought insurance weren’t actually charged for it), and everyone celebrated the end of Project Upsell.

 

At the celebration, I turned to my boss and asked, “So are we just going to ignore how badly the project went, then?”

 

He gave me a sardonic smile. “What you have to understand is that this is one of the most successful projects we’ve ever done with these guys.” In the years that have passed since, I’ve often thought back to that response. It was my first exposure to a sad truth about software development that has, over the rest of my career, become more and more apparent to me. That truth is this: software projects go wrong.

 

Software projects go wrong in an entire cornucopia of excitingly varied ways. Even the most innocuous little thing, like knocking up a quick website for a friend’s amateur knitting society, has the potential to degrade into a drawn-out process of recriminations, complications, and the fraying of friendly relations. Somehow, software projects have a much greater propensity than other sorts of projects to get really fouled up in all sorts of inventively ghastly ways.

 

Crunchy numbers


At this point you might be inclined to say to me: John, just because your experience of software development has been a history of disasters, it doesn’t follow that this is a universal phenomenon.

 

Is it not more likely that you’re simply an incompetent programmer and a horrible project manager? The answer is that, yes, I am probably both those things. However, I’m not simply extrapolating from my personal experiences. The dataset I am working with is considerably larger.

 

Example

Let’s start with a big example. The UK’s National Health Service, founded in 1948, is one of the top ten largest employers in the world. It has a budget of over $130bn annually, funded via the government by taxpayers, and is responsible for the provision of health care to all of the 70m-odd inhabitants of the United Kingdom.

 

It’s a massive beast, but it’s largely decentralized, broken up into small pieces to make it more manageable. This is great, except for the fact that individual patients tend to interact with multiple different pieces if their illness is more than remotely serious. And the different pieces all need to be kept abreast of what has been discovered and recommended by other parts of the system.

 

Every year the Standish Group releases something called the CHAOS Report, a survey and analysis of software project success rates. The 2015 report analyzed 50,000 projects around the world. It pegged the project success rate, where success is defined as delivery on time, on budget, and with a satisfactory result, at 29%. And 2015 was not a remarkable year—that figure had remained stable, within +/-2%, for the preceding four years.

 

And that’s a comparatively positive figure. Steve bros released data in 2014 suggesting that up to 80% of new product development projects are failures. And let’s be clear: when I say failure, I don’t mean some trivial schedule slip. According to McKinsey, the average project schedule overrun is 33%, and that 33% is enough to cost a large company millions and send a small company under.

 

Examples of large-scale IT disasters are everywhere, from the Ford Motor Company spending $400m on a new purchasing system only to abandon it after finding it wasn’t fit for purpose, to Healthcare.gov, which was supposed to cost less than $100m and ended up costing up to $2bn.

 

This is, of course, horrifying. Yes, there are mitigating factors. The Standish Group report makes clear that the overall results are made worse by the truly appalling, train-wreck track record of large- and extra-large-scale projects.

 

The smaller the project, the greater its chance of success, and compared to government- and multi-national scale projects, an awful lot of projects are on the small end of the scale. But even so, the best you can hope for, going by the stats, is a slightly-better-than-50% chance of success so long as your project doesn’t get remotely large in scope. There’s no way those sorts of odds will let a project manager sleep soundly at night.

 

Why on earth is this the case? That’s the question we’ll be addressing over the rest of this blog. I’m going to argue that, apart from the normal factors that affect any project (poor communication, weak leadership, etc.), there are three big problems that are peculiar to software.

 

They are at the heart of the sad truth about software development and together make clear how building software is nothing like building a house. Understanding these should be the top priority of anyone who is entrusting their future professional success to a team of software developers.

 

The Imagination Problem


Looking at the studies, a pattern emerges. The McKinsey report identifies “unclear objectives” and “lack of business focus” as the most significant causes of project failure. The Steve bros report claims that 70% of projects fail due to poor requirements.

 

A 2011 study by Geneca blames “fuzzy requirements” and the business being “out of sync with project requirements,” and claims that three-quarters of executives are so pessimistic about the outcome that they anticipate that their projects will fail before they even start. We seem to be entering an entirely Dilbert-esque world where nobody seems to know what it is they actually need before they start building it.

 

That’s what the analysts and consultants who study failure post hoc say. But we can go straight to the horse’s mouth as well. Stack Overflow (an online forum for software developers to share technical problems and solutions) undertakes an annual survey of software developers to assess the state of the discipline around the world.

 

In the 2016 edition, which had 50,000 respondents, one of the questions asked about the major challenges experienced at work. The most popular responses confirm the story told by the studies: a third of the developers who answered complained about unspecific requirements and a similar number also complained about poor documentation. 28% said that changing requirements were a major challenge.

 

The overall picture of how a typical project fails, according to software developers, is something like this: someone asks a developer to do something. They only have a vague notion of what they want, and they communicate it poorly. The developers do their best to interpret what the client wanted based on what they actually asked for, and then start trying to build it, but before they’ve got very far the client changes their mind about what they want anyway. No wonder nothing ever gets finished on time!

 

This is, of course, a very biased interpretation—notice how, according to developers, none of the blame for software project failures falls on the developers. But, coupled with the studies documented above, it becomes apparent that there is something very wrong with the requirements and specifications that are given to developers at the start of a project—they don’t communicate clearly and completely what it is that developers need to build in order for the project to be a success.

 

Why is this? Well, let’s reject out of hand the notion that project managers are bad communicators in general. An ability to communicate clearly is pretty much what makes a project manager a project manager, and a desire to do so is normally what leads people to become project managers in the first place.

 

And let’s bear in mind that specifications are not supposed to be technical documents. They’re supposed to be written in plain English (or whatever the local language is), describing what is required in a non-technical way, allowing technical people to then infer the technical details from them. The fact that project managers tend to be non-technical should again not be the cause of any problems.

 

So we have a situation where people who are good at communicating clearly in plain English are failing to communicate clearly what it is that needs to be built. The only possible conclusion we can draw from this is the first big, tragic, counterintuitive truth about software: those people don’t actually know what needs building. That sounds absurd, of course, so let’s tidy up a little bit and give it a name.

 

We’ll call this the Imagination Problem, and we’ll characterize it as follows: when it comes to describing a proposed piece of software, where the software is non-trivial and does not exist yet, it is almost impossible to imagine how the software will behave with enough detail and precision to communicate clearly and completely a specification for that piece of software. Or more bluntly, as coders tend to put it: “The customer never knows what they want.”

 

Not convinced? Let’s look at an example


Birthday wishes

Suppose we want to make the world’s simplest online birthday card website. Visitors will come to the site, enter the name and email address of the intended recipient along with a personal message, and hit a “create” button, causing the recipient to be emailed a link. When they click the link they will be taken to a page where they see:

Dear [their name],

Happy Birthday!

[personalized message]

This text will be displayed on top of a picture of an elephant holding a balloon. Cute, no?

 

How would we write up our requirements in such a way that we could pass them on to a developer? Well, to be honest, we sort of just did. The above feels like a pretty complete specification that’s clear enough that anyone who wasn’t deliberately trying to misunderstand would know what we were after. Great! So we pass that on to a developer, along with the elephant picture for them to use, and a mock-up showing what colors and fonts to use for the text, and they get to work.

 

Then comes the first problem. We didn’t specify what to do about long messages. When a user enters more than about 50 words, the text spills off the bottom of the elephant picture. It looks terrible! But that’s OK. This was never designed for long messages; the intention was to allow people to write short, personalized notes. So let’s impose a rule that users aren’t allowed to enter more than 50 words.

 

OK, says the developer, that should be easy enough to put in place, and it’ll only take an hour or so to add. Everything’s fine, then.
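And it really is easy: on the server, the whole rule amounts to something like the following sketch. (The details here are my assumptions, not part of the example—I’ve used Express and invented a /create endpoint.)

```typescript
import express from "express";

const app = express();
app.use(express.json());

const WORD_LIMIT = 50;

// Hypothetical endpoint for the e-card example: reject over-long
// messages before a card is created.
app.post("/create", (req, res) => {
  const message: string = req.body?.message ?? "";
  // Split on runs of whitespace; filter() drops the empty strings
  // produced by leading or trailing spaces.
  const wordCount = message.split(/\s+/).filter(Boolean).length;

  if (wordCount > WORD_LIMIT) {
    return res.status(400).send(`Message is ${wordCount} words; the limit is ${WORD_LIMIT}.`);
  }
  // ...store the card and email the recipient, as before...
  return res.status(201).send("Card created.");
});

app.listen(3000);
```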

 

Except, when the new version is delivered and we start playing with it, we discover that you only find out you’ve gone over the word limit when you hit the create button. And it’s really hard to estimate how many words you’ve written, and a pain to count them again and again as you go. So unless you’ve got a really short message, this thing is pretty annoying to use.

 

What would be better would be a little counter that told you how many words you’d written as you went along, and maybe turned red when you had fewer than 5 words remaining.

 

Now, let’s be clear: no one’s pretending there was anything about a word counter in the original requirements. But now that you have a working version of the app in front of you, it’s become clear that the app is not fit for purpose without the counter—it’s just not a fun experience to use, so there’s no point having the app at all unless it has a counter.

 

We explain this to the developer, and they sigh and say they can add a counter, but it’ll take a while. See, based on the original specification they built everything using a “server-side” language, but the word counter requires “client-side” processing, so they’ll need to set up some stuff to allow them to use a client-side language. But they crack on and work a little bit late, and they get it done, and you now have version 3 of the app, with a shiny counter and everything is good.
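For what it’s worth, the counter itself really is only a few lines once that client-side plumbing exists—the pain was all in the plumbing. A sketch (the element IDs are made up for illustration):

```typescript
// Runs in the browser: update a live word count as the user types.
const WORD_LIMIT = 50;
const input = document.querySelector<HTMLTextAreaElement>("#message")!;
const counter = document.querySelector<HTMLElement>("#counter")!;

input.addEventListener("input", () => {
  const words = input.value.split(/\s+/).filter(Boolean).length;
  counter.textContent = `${words} / ${WORD_LIMIT} words`;
  // Turn the counter red when fewer than 5 words remain.
  counter.style.color = WORD_LIMIT - words < 5 ? "red" : "";
});
```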

 

Technical specifications, human processes


So if we accept the premise that it’s surprisingly hard to define requirements for software projects in advance, we have a bit of a problem, because the traditional process of managing projects involves planning everything out in advance. First, you work out what it is you want to do, then you do it, right? But with software, you don’t seem to be able to know what it is you want to do before you do it. It’s as if software development was designed expressly so as to be unmanageable.

 

But why is this? Software engineering is just another type of engineering, and other disciplines don’t have this problem. Which isn’t to say that other sorts of engineering aren’t tremendously hard to manage, but rather that they don’t seem to produce failures on anything like the scale that software does.

 

Equally, the process of building a piece of software really does look analogous to the process of building a house, and while the world of construction is fraught with logistical potential disasters, its track record is much better than that of software. What’s going on?

 

The theory is this: the construction industry, and most brands of engineering, are about creating things, whereas software is normally about creating processes. I don’t mean processes in the way that a combustion engine has a process—it’s not that software involves moving parts.

 

Rather, the vast majority of software, particularly the stuff built in a business context, is about creating a framework to enable a human process. In the e-card example above, the exciting thing isn’t the mechanism for taking text and laying it out on top of a picture of an elephant.


The exciting thing is enabling a process whereby one person writes some things and clicks a button, and a second person gets an email with a bit they can click on to see something that is generated from the things the first person wrote. And the difficult bit isn’t imagining how the mechanism works. The difficult bit is imagining the details of how the human process works.

 

To put it another way, when we are planning software, we’re normally not planning software. Rather we’re planning a new process for employees, clients, or customers, and also some software to enable that process. This is always the case when we’re building a new product, and almost always the case when we’re upgrading something already in existence.

 

 Even when we’re just digitizing an existing system, it’s no good saying, “But the process is exactly the same, we’re just now recording the information in a database instead of on a paper form.” Typing things into a computer is a very different process than writing on a piece of paper when you’re looking at the level of individual actions by human beings, and, as we’ve seen above, it’s at that level that lots of nasty and easy-to-overlook problems lurk.

 

This distinction of subject matter between physical things and human processes points to the reason behind the Imagination Problem: it’s really hard to clearly imagine an entire process. Our brains aren’t very good at visualizing them. And there are far fewer tools we can use to help us.

 

Consider what would happen if we were trying to build a bridge. We would draw up detailed architectural plans that we could pore over. We would build a scale model of the bridge so that we could actually see exactly what it would look like and inspect every fine detail, all before we started building. This would give us ample opportunity to spot problems (it’s too low and boats can’t get through, that color of the stone is disgusting, it runs straight into a cliff face so there’s no way to get off it) at the point where making changes is cheap.

 

But there’s no equivalent of the architectural model when it comes to software. For a model of a process to be useful, it has to actually work: as well as its spatial properties, we need our model to illustrate its temporal properties. And the thing about software is that building a working scale model basically takes as much time as building the full-size thing it represents. So we more or less have to build the whole thing in order to be able to see the problems and gaps in our initial design.

 

This has some problematic ramifications, because software has yet more ways in which it doesn’t behave like other forms of engineering. Suppose we were part way through building a bridge, and we realized there was a minor ambiguity in the plans which needed to be resolved before we could proceed.

 

We’d have to choose a resolution to the ambiguity and plan and execute the relevant additional work. There would be cost and time implications, but we would expect them to be minor, in proportion to the size of the ambiguity.

 

Compare now what happens when we come across a similar ambiguity part way through a software project. A minor ambiguity requires a minor clarification, which might involve adding a relatively small piece of functionality, something that looks pretty easy. But as Randall Munroe, creator of the hugely popular webcomic xkcd, points out: “In computer science, it can be hard to explain the difference between the easy and the virtually impossible.”

 

Software is constrained by the limits of the technologies it is built upon, and, like the proverbial military general who always tries to re-fight the last war, software technologies tend to be optimized towards solving last year’s big problems.

 

A year is a long time in the world of software (more on this later), and this means that there’s a good chance that while the technology in use facilitates 9 out of every 10 features that the spec requires, the tenth (and it’s always one that looks from the outside just like the other 9) will turn out to be completely unsupported and will require going to extreme lengths to make it happen.

 

In the e-card example, I wasn’t kidding about the complexities of dynamically resizing text to fit in a box—if you’re using HTML and CSS, while doing it for images is trivially easy, doing it for text is unexpectedly complex. This sort of thing means that there is a greater chance that the minor clarification of the minor ambiguity will result in a surprisingly large schedule slippage.
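To make the asymmetry concrete: fitting an image into a box is a single CSS declaration (object-fit: contain does it), while fitting text means measuring and retrying in a loop. A crude sketch of the text case, assuming a container with a fixed height:

```typescript
// Shrink an element's font size until its text no longer overflows
// its box. A crude sketch; real code would also need to handle window
// resizing, reflow, minimum readable sizes, and so on.
function fitTextToBox(el: HTMLElement, maxPx = 32, minPx = 8): void {
  for (let size = maxPx; size >= minPx; size--) {
    el.style.fontSize = `${size}px`;
    // scrollHeight > clientHeight means the text spills out of the box.
    if (el.scrollHeight <= el.clientHeight) return;
  }
}

fitTextToBox(document.querySelector<HTMLElement>("#card-message")!);
```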

 

Starting from the wrong place


It’s worth noting, however, that most of the time when a minor addition causes a major headache, the problem isn’t that the thing to be added is difficult to pull off in and of itself. Rather, it’s very common to be told that those difficult things wouldn’t be difficult if only a particular technical decision had been made differently at some earlier point in the process.

 

Why? Because, again, building software is nothing like building a house. When you build a house for a given design, your choice of materials and techniques is fairly constrained—if you want it to look like brickwork, you build it with bricks; if you’re after that glass and steel effect, you use glass and steel to build it. In software, nothing is so certain.

 

Let’s start with a choice of language. Every piece of computer code is written in a particular language, one that is normally very formally defined, and that both humans and computers can understand (to make a massive oversimplification for the sake of convenience, humans write in the language and computers read it). A fundamental choice when deciding to build any piece of software is deciding which language to build it in. How wide is the choice?

 

Well, consider the website http://www.99-bottles-of-beer.net. It comprises a collection of computer programs, each doing the same thing (printing out the lyrics to the eponymous song) in a different language. It features 1,500 distinct languages and doesn’t pretend to be comprehensive.
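For a flavor of what those entries look like, here’s the song in TypeScript; every entry on the site does the equivalent of this in its own language:

```typescript
// The task every entry on 99-bottles-of-beer.net performs:
// print the lyrics of the eponymous song.
const bottles = (n: number) => `${n} bottle${n === 1 ? "" : "s"} of beer`;

for (let n = 99; n > 0; n--) {
  console.log(`${bottles(n)} on the wall, ${bottles(n)}.`);
  const remaining = n > 1 ? bottles(n - 1) : "no more bottles of beer";
  console.log(`Take one down and pass it around, ${remaining} on the wall.\n`);
}
```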

 

Admittedly, the vast majority of these languages would be such terrible choices for any serious project that they can safely be ignored. The main reason is simply their obscurity—a language that lots of people know and use will have lots of helpful language-specific tools that can be used to speed up development, a large online community of people who can help when one gets stuck and, crucially, compatibility with the sorts of systems that one might need one’s new piece of software to interact with (i.e., to oversimplify massively once again, some sorts of computer don’t know how to read some computer languages).


So you might be able to reduce your choice of languages to fewer than 5 contenders, and you’ll probably be guided by the languages that your existing developers are familiar with. Ultimately, though, the deciding factor is whether the language enables you to fulfill the requirements of the project easily (i.e., sort of: does that language have a large and detailed enough vocabulary to allow your coders to write down what they want to do without having to make up a bunch of new words).

 

Next comes the question of frameworks. A framework is broadly a tool that provides a structure and a format to the code one writes. (Cheap analogy: You’ve decided you’re going to write your document in English. Now, are you going to write it in Google Docs, Excel, Keynote, etc.?) It’s normally designed to facilitate a particular sort of program, and typically large pieces of software (where the amount of code written by the developers is going to be more than a few hundred lines) are easier to manage if they use a particular framework.

 

Each framework is specific to a particular language, so depending on your language of choice there may be tens of viable frameworks to choose from. Or you may choose not to use a framework at all (the equivalent, I suppose, of using something simple like Notepad to write your document). The key question is whether the problems that the framework solves are the problems that your project will face.

 

Once you’ve picked a framework, the next questions will be about which, if any, libraries to choose, and what infrastructure and other tools to adopt: a whole plethora of decisions to make. (Libraries are, sort of, lists of additional words with definitions that you can optionally teach your computer so that you can use those words when you write your code.

 

They can speed things up tremendously, because instead of painstakingly having to describe to your computer how to, e.g., display a date in a nice user-friendly format, you can simply use a library that defines a word that the computer understands as “display a date in this nice user-friendly format,” and then all you have to do is write that one word).
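In JavaScript-flavored terms, the difference looks something like this; I’m using the built-in Intl API to stand in for a third-party date library:

```typescript
const date = new Date(2017, 2, 8); // 8 March 2017

// Without a library: hand-rolled formatting, verbose and easy to get
// subtly wrong (month indexing, pluralization, locales...).
const months = ["January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December"];
const manual = `${date.getDate()} ${months[date.getMonth()]} ${date.getFullYear()}`;

// With a "library" (here the built-in Intl API): one "word" does the work.
const friendly = new Intl.DateTimeFormat("en-GB", { dateStyle: "long" }).format(date);

console.log(manual);   // "8 March 2017"
console.log(friendly); // "8 March 2017"
```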

 

Normally you’ll be able to start writing code before all these decisions are made, and the decisions will be increasingly low-risk: if you find out you picked the wrong library (because the words it defines don’t quite mean the things you need them to mean), normally it won’t be too painful to change it later (because hopefully you won’t have to change too much of your code to start using a new library).

 

As in, it will be painful, but not excruciating. But if you find out you picked the wrong framework to achieve what you want to achieve, that’s going to hurt, because you will need to throw away quite a lot of the code you’ve written, because it’s normally fairly framework-dependent. And if you realize you picked the wrong language, get ready for some very, very unpleasant meetings, because it’s time to more or less start the project again from scratch.

 

How do you find out you’ve picked the wrong library, framework, or language? Well, it normally happens when an ambiguous part of the spec is “clarified” with a new requirement that it turns out is not at all well catered for by your existing technology choices.

 

If that sounds scary, consider that I’m only talking about changes that might be necessitated to the tools chosen for a project. We haven’t even touched on the other source of change pain, which is changes that apply directly to the code that has been written. All in all, changes to requirements can really, really hurt, no matter how small those changes look with a non-technical hat on.

 

A counterproductive mitigation


The misery that change can bring is well known in software circles. The jargon for describing the emergence of new requirements part way through a project is “feature creep,” whose connotations are grotesqueness and insidiousness—no one wants to be creepy, after all. So even the terminology we use makes it very clear that software change is loathed and feared.

 

Given how painful change can be in a software project, it’s understandable that one’s natural attitude when faced with this problem is to do everything possible to reduce change. And this is indeed the approach of the typical project manager who has been burned once or more already by a project whose requirements drifted halfway through.

 

What’s unfortunate is that the standard approach to avoiding change is often extremely counterproductive. This is because the standard approach is to assume that change is caused by ambiguity in specs, and that ambiguity is caused by insufficient planning. The solution, it is therefore assumed, is more detailed planning, and an almost obsessive determination to map out every detail of the software to be built. But this doesn’t banish the Imagination Problem.

 

In fact, while seemingly beating it back, it merely feeds it, strengthening it for its inevitable return once development begins. Since the human brain struggles so much to imagine a process without actually seeing it in action, the way the process is imagined to be at the start may be quite different from how it actually needs to be.

 

Which means that all the obsessive planning normally leads to a photorealistic portrait of the wrong thing. Which means that all those additional details so painstakingly planned become red herrings leading to incorrect technology choices being committed to and a false sense of confidence that makes it harder to notice that the initial requirements need to change until later in the process.

 

Broadly, the more you plan, the more likely your plans are to have a mistake, so the more likely you are to have to change your plans, and changing plans was what you went into this to avoid. It sounds absurd, and of course, it is. But it also happens to be how software projects pan out time and time again. Because software development actually is a little bit absurd, and it just comes with the territory.

 

The Estimation Problem


While the Imagination Problem is largely a failure on the part of non-technical people, and one that coders love to cite as the source of all project failures, it is at most only a part of the problem as a whole. There is a second issue which software developers are much less willing to credit with derailing software projects, largely because the blame for this one normally falls squarely on the shoulders of the developers themselves. I am talking about something we will call the Estimation Problem.

 

A few years ago I was tangentially involved in an utter car crash of a project (this one wasn’t my fault, for once). A piece of software had to be delivered to match a hardware launch, and the small team of developers building it, whose track record at self-organizing wasn’t great, were assigned a project manager from the hardware team—a mechanical engineer by training—to make sure they delivered on time.

 

The project manager duly went around the stakeholders establishing what needed to be done, and asked each of the engineers how long each chunk of work would take, then drew up a Gantt chart and declared the project “kicked off.”

 

Time passed, and the developers remained busy and optimistic—they were making great progress, and everything was basically on track. Except that that’s not quite what the project manager’s weekly updates suggested. The tone was generally upbeat, as the PM was echoing the positive sentiments of the developers, who were after all the best placed to say how the project was going.

 

But the shape of the Gantt chart kept changing. All the short lines on the left representing the first tasks to be accomplished kept getting longer, so that their projected completion date was slightly in the future, while all the long lines on the right representing the final tasks kept getting shorter, to fit them all between the ongoing first tasks and the project end date. The project remained officially on track, but none of the milestones were being hit—each week they were just being pushed back and back, closer to the final project completion deadline.

 

Needless to say, this could only go on for so long. A few weeks before the hard deadline, the CEO stepped in and demanded to see a demo of the software. The whole team assembled around a screen, and the project manager opened up the application—and nothing happened. The CEO, bewildered, asked why he was being shown an application that was so buggy none of the functionality appeared. The indignant reply from the developers was that, on the contrary, the software wasn’t buggy at all. The features weren’t appearing simply because they hadn’t been built yet.


It became pretty clear at that point that the project was massively, horrendously, unrescuably behind schedule, and a committee of middle managers (including me) was assembled to find out what on earth had gone wrong. The poor project manager was hauled in front of us and subjected to a grilling. His explanation was pretty simple.

 

Not knowing much about software development himself, he trusted his developers to give him estimates of how long things would take. The first task, they had originally said, would take a week, so he’d allocated a week for it in his project plan. At the end of the first week, he had asked if it was done.

 

The developers had replied no, it wasn’t technically finished, because it had turned out to be more complicated than expected; but the good news was that they now understood the system a lot better, so once they finished this task, the next task would be much easier. So they were still confident they’d hit the final deadline.

 

The project manager, trusting them, reported back their opinions in his weekly report. A very similar conversation was had the following week, and a similar adjustment was made. At the end of the third week, they said that, yes, the first task was technically finished, but that actually it turned out there was some low-level stuff that needed to be added before the functionality actually worked, so could they add a new task to the first milestone and start working on that?

 

But the good news was that they were really getting to grips with the system now, and once they had this initial issue sorted, they’d really be flying and all the later milestones would be a breeze.

 

At this point in the interview, the poor project manager nearly broke down, as he explained that every single task that had been completed (and there weren’t many) had taken at least three times longer than the developers had originally estimated. How could he possibly bring in a project on time, he protested, if the bloody software devs didn’t have a clue what they were doing?

 

A known issue


The horror that this particular manager was experiencing is familiar, to a greater or lesser extent, to most people who’ve had to plan software projects. There is nothing so optimistic and unreliable as a developer’s estimate. And the optimism is often so pervasive that it leads developers to absurd assertions, simply so that they need concede neither (a) that they were wrong in an earlier estimate, nor (b) that the project is therefore now behind schedule.

 

This curious psychological bias is so ubiquitous that some software-focused project management tools actually have features built in to compensate for it. Fog Creek Software’s FogBugz tool has a feature called Evidence-Based Scheduling.

 

This automatically records the average discrepancy between each individual developer’s estimates for tasks and the amount of time those tasks actually took, and uses it to generate an individualized “multiplier” for each developer. For any given project, it then tracks which developer estimated the length of each task, applies that developer’s multiplier to get the “evidence-based” estimate, and predicts the length of the project as a whole based on these evidence-based estimates.
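The core idea can be sketched in a few lines. (This is a simplified illustration of the premise, not Fog Creek’s actual algorithm, which does something more sophisticated with each developer’s full history.)

```typescript
// Simplified sketch of evidence-based scheduling: derive a per-developer
// multiplier from past estimate accuracy, then scale new estimates by it.
interface CompletedTask {
  estimatedHours: number;
  actualHours: number;
}

function multiplier(history: CompletedTask[]): number {
  // Average ratio of actual time to estimated time across past tasks.
  const ratios = history.map(t => t.actualHours / t.estimatedHours);
  return ratios.reduce((a, b) => a + b, 0) / ratios.length;
}

function evidenceBasedEstimate(estimateHours: number, history: CompletedTask[]): number {
  return estimateHours * multiplier(history);
}

// A developer who historically takes roughly 3x their estimates:
const history: CompletedTask[] = [
  { estimatedHours: 8, actualHours: 24 },
  { estimatedHours: 4, actualHours: 13 },
  { estimatedHours: 16, actualHours: 47 },
];
console.log(evidenceBasedEstimate(10, history).toFixed(1)); // "30.6"
```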

 

The fact that the company that makes this tool went to the trouble of building this feature for their project manager customers (which is entirely premised on the assumption that software engineers can’t be trusted to reliably estimate how long it will take them to do their job) tells us an awful lot about how much faith the industry as a whole has in developers’ powers of prediction.

 

One very obvious point to get out of the way immediately is that it’s not at all unreasonable to expect software developers to be fairly good at estimating time. On the one hand, the subject matter is something that only software developers have a hope of putting time values on.

 

As mentioned earlier, there’s potentially a huge disparity between the complexity of the functionality to be built and the complexity of the code that needs to be written, and non-developers can’t be expected to guess at the latter sort of complexity, which is the driving factor in the amount of coding time required. So if anyone can do it, it’s software developers.

 

And on the other hand, coders get plenty of practice in giving estimates. In almost every development team that is doing active development (as opposed to simply “maintaining” a code base—more on this in later blogs), every task gets estimated before it is undertaken. Developers spend a good proportion of their lives making estimates, and in theory, they are better equipped than anyone else to be accurate. Why, then, does everything take so much longer than it’s supposed to?


Well, if you ask a developer they’ll be fairly likely to blame their manager. In the 2016 Stack Overflow survey, 35% of developers listed “unrealistic expectations” as a major challenge. In other words, it’s not that things take longer than expected, it’s that they take longer than wanted, which is a separate thing entirely. Now, in some circumstances, this is a fair criticism, but it is at the same time fairly irrelevant. In cases where project plans are being drawn up without consultation with developers, the projects won’t go according to plan; but this has nothing to do with estimation.

 

However, developers normally are consulted about how long development tasks will take (because most project managers aren’t entirely insane), and project plans are drawn up based on what developers say; and what’s interesting is that in these situations developers often still complain about unrealistic expectations.

 

This might seem a little hypocritical, since the developers are the source of the expectations in the first place. But the common complaint is about what the jargonists refer to as “contingency,” and this gives us our first clue when it comes to understanding the Estimation Problem.

 

Suppose first thing on Monday you come to me and ask how long it’ll take to build you a website, and I say 5 days. If I can get started straight away, you might reasonably suppose that the website will be finished by the end of Friday, and depending on how confident I sounded you might make plans based around the website being live for the weekend. If I sounded very confident you might think it entirely reasonable to fully rely on me hitting my Friday deadline.

 

I as a developer, on the other hand, might be horrified to learn that my estimate of Friday has been turned into a hard deadline. There’s a very clear distinction in my mind between me finishing building the website and the website being finished. There are a whole host of additional time-consuming factors to consider. What about an opportunity for user feedback? What about time for testing and QA? What about time for deployment? And what, crucially, about time for contingency?

 

Let’s look at each of these in turn. “User feedback” means, “that moment when you realize that what you asked me for isn’t what you wanted.” In other words, I’m anticipating that this project will experience the Imagination Problem. “Testing and QA” means, “time spent discovering the mistakes I made when building the site.”


Software developers learn from experience that it’s impossible to build software without making mistakes—typos, logical errors, etc.—and that as much as we’d all like to notice and fix those mistakes as we go along, in real life there are always some that are discovered after we think we’re finished. We won’t linger on this topic, as it’s covered in great detail in a later blog, but for now just note that I didn’t build time for fixing my mistakes into my initial estimate.

 

My third complaint was about “time for deployment.” Broadly that means, “putting all the code I wrote onto a server,” which is a tiny bit time-consuming anyway and can also uncover more mistakes that I made. Again, note that I didn’t build deployment time into my initial estimate.

 

Contingency


Finally, I complained about contingency. Broadly what I meant was, “something unexpectedly taking longer than predicted.” Now, this might surprise you, the client, because I, who ought to know what I’m talking about, said very confidently that it would take a week, but now I’m telling you off for only giving me a week because I might need extra time for things that I can’t really specify. You didn’t build in contingency time because I sounded so confident.

 

But the truth is this: I was very confident, not that building the website would take a week, but rather that building the website would take a week if nothing unexpectedly took longer than predicted. I, as a developer, fully expect something to unexpectedly take longer than predicted. It’s just that I don’t know which thing it will be.

 

(An aside, since I mentioned servers: we’ll get to these in more detail later. For now, a server is “a computer that’s connected to the Internet that other computers connect to when they want to look at a particular website.”)

 

In that one estimate I’ve built in every manifestation I could of a peculiar phenomenon: the situation where a developer’s estimate diverges from how long the developer actually thinks something will take. Not every developer leaves these things out of their estimates, and very few leave out all of them at once.

 

Nevertheless, in my experience, the omission of one or more of the above factors is surprisingly common in any estimation process, and it goes some way to explaining how management expectations can be based on developer estimates and yet still feel unrealistic to the developers who made those estimates.

 

The two main types of flaws in estimates can broadly be categorized as not taking into account the uninteresting, and not taking into account the unknown, and I’ll look at them more deeply in turn.

 

The uninteresting


Software tasks are normally described in non-technical terms, as distinct unitary pieces of work. “Build a new web page that allows users to buy more credit.” “Add a button that sends a report to an administrator.” On the technical side, there are normally only one or two large chunks of work involved in completing a task, and it’s these that catch developers’ imaginations.

 

These are the intellectual challenges that the developer must solve, the opportunities to apply a particular technique or use a particular code library. How will the credit purchase page interact with the payment provider to charge users’ credit cards the appropriate amount? How will the relevant data be collected, aggregated, and formatted to allow it to be sent to the administrator? These are the things the mind focuses on when trying to estimate how long a task will take.

 

The difficulty is that software tasks also normally include a whole host of smaller supporting chunks of work that need to be completed for the task to count as finished. That web page needs to be accessible to all and only users who are logged in. When payment is taken, credit has to be applied to the right user’s account.

 

If payment is unsuccessful, a message needs to be shown to the user explaining what has gone wrong. The report button? It needs to be “styled” so that it looks like the other buttons in the software, etc. Even when these things are spelled out explicitly in the spec, or when they are clearly enough implied that the developer would never omit them, somehow because they are secondary to the real meat of the task, it’s very easy for them to slip out of mind when trying to imagine the amount of work remaining to be done.

 

At one company I worked for, the name for this was “80% syndrome”: the very common tendency to think of a task as 80% done when in fact it was only about halfway there, simply because the second half of the task is mostly made up of the easy-to-ignore little fiddly bits.

 

Extreme Changes


I once got suckered in by an extreme case of 80% syndrome. I was at a company whose sprawling, wide-ranging tech platform had been built over many years by a series of different agencies, who between them created a mismatched patchwork of not-very-well-integrated parts. One of the most not-very-well-integrated bits was a “Single Sign-On,” or SSO.

 

This is broadly a little website that lets you visit it, and log into it with a username and password, and then visit a whole range of other websites that know how to talk to it so you can be automatically logged into them without having to enter your password again. In a large and sprawling system that’s spread across several websites, it’s potentially a helpful glue to stick all the bits together with.
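As a toy illustration of that glue (my own sketch, not how our SSO or any real protocol such as SAML or OpenID Connect actually works): the sign-on site issues a signed token after login, and partner sites verify the signature instead of asking for a password again.

```typescript
import { createHmac } from "crypto";

// Toy SSO: the sign-on site and its partner sites share a secret.
// Real protocols add expiry, nonces, key rotation, and much more.
const SHARED_SECRET = "not-a-real-secret"; // illustrative only

// Issued by the SSO site once the user has logged in with a password.
// (Toy assumption: usernames never contain a dot.)
function issueToken(username: string): string {
  const signature = createHmac("sha256", SHARED_SECRET).update(username).digest("hex");
  return `${username}.${signature}`;
}

// A partner site checks the signature instead of asking for a password.
function verifyToken(token: string): string | null {
  const [username, signature] = token.split(".");
  const expected = createHmac("sha256", SHARED_SECRET).update(username).digest("hex");
  return signature === expected ? username : null;
}

const token = issueToken("alba");
console.log(verifyToken(token));         // "alba" -- accepted
console.log(verifyToken("alba.forged")); // null -- tampered token rejected
```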

 

However, our SSO was lacking most of the features we needed, built in a coding language that none of our developers were familiar with, and set up in a way that made it really surprisingly expensive to run.

 

Because it was missing some key features, only a few parts of our system actually used it—you still had to manually log into each of the other parts when you visited them. Integrating it with those other parts would be impossible in its current state.

 

There was a strong case for rebuilding it entirely, but we were a small team with a lot of deadlines, and there was always something more urgent to do. It was my job to prioritize the team’s workload, and SSO integration remained low on the list.

 

This wasn’t enough to put off Alba, a developer who had recently joined and was dismayed at how disjointed our system was. Being very intelligent, and having experience with relevant authentication mechanisms, she worked out a simple and elegant way of building a cheaper-to-run, more easily extendable SSO using the language the rest of the team were most familiar with.

 

As there was still no time available for her to build it during office hours, she decided to work on it on her own time. Christmas was coming up, so she used the week that the office was closed to get some code written. (I hope she took a break on the day itself).

 

When the team reconvened after the holiday break, Alba proudly announced that she had rewritten the SSO in three days. I was amazed.

 

“What? The whole thing? Is it ready to roll out?”


To which Alba replied (and this is crucial), “Basically, yep.”

 

This, of course, changed things. There hadn’t previously been a case for diverting resources to the SSO rewrite over other more urgent work. But since the work was basically finished, giving Alba a couple of days to polish it and roll it out would be a big win, basically for free, and it’d set the tech team off to a great start for the year. So I immediately put Alba onto finishing the SSO.

 

I must confess, many of the developers were a little suspicious. The original SSO had been outsourced to an offshore agency who had taken a couple of months to complete it with a team of developers working on it, and that was to build something that was missing many of the features we needed.

 

It seemed unlikely that Alba could genuinely have written a functionally equivalent replacement of that in three days, much less that she could have incorporated all the extra new stuff that she was claiming. One of the senior developers pointed as much out to me and refused to be brushed off by my repeated insistence that Alba had worked a Christmas miracle and we shouldn’t question it.

 

So after some prodding, I took a little look at the code Alba had written. What I found was a beautiful, elegant authentication mechanism, flawlessly architected and undoubtedly the sort of mechanism we needed. And that mechanism was, indeed, basically complete. It was a testament to Alba’s indisputable technical expertise that she managed to put the whole thing together over 3 days.

 

But.


The mechanism Alba had created was written in a vacuum, with no consideration for how it could be swapped in for the old SSO, given that the parts of the system that already interacted with the old SSO expected it to work in a particular way that was entirely different from how the new one worked.

 

To replace the old SSO we would either have to adapt the new one to be backward-compatible, or update all the things that interacted with it. And this was going to be a big chunk of work.

 

I had clearly been far too optimistic in my interpretation of Alba’s own optimism. Never mind. I had a more thorough chat with Alba about the various things we’d need to do to be able to swap in her new SSO, and she remained optimistic about them. “The work’s basically done, it’s just a case of wiring it up.”

 

Alba got on with the wiring up in January. Things took a little longer than expected, but by the end of the month, she said it was “very nearly” finished. I ended up leaving that company in February (no, I wasn’t fired for my team failing to deliver the SSO, but you could argue that I should have been fired for failing to manage expectations appropriately), and when I left, the wiring up was not quite there, but “very, very nearly” finished.

 

In May I had lunch with another developer from the company to hear how things were getting on. By that point, I was told in an exasperated tone, the new SSO was “very, very, very nearly finished.” Not bad for something that was basically ready to roll out at the start of January.

 

This was an extreme case, and it was extreme by dint of the fact that for the task in question, the interesting bit—the elegant mechanism—comprised at best 5% of the total task, and the uninteresting wiring up of the new mechanism to the old bits of the system comprised the other 95%.

 

The developer’s head was focused exclusively on the 5%, and that meant that all estimates were made on the assumption that 5% was actually the 95%. To my eternal chagrin, I didn’t notice until far too late that the interestingness factor was skewing the estimates.

 

Hopefully you’ll be less foolish than me, but even so, consider this: if developers can be so misguidedly optimistic as this when they’re already stuck into a task, think how much more wrong they can go while they’re estimating it at the very start of the process. Are you sure you know how to compensate for this sort of bias?

 

The unknown


The other common cause of over-optimism comes from the way in which developers imagine the problems that they need to solve. Typically when estimating a task a developer will think about how they intend to solve the problems inherent in the task, then imagine what their solution will look like and think about how long each part of the solution will take to write.

 

The problem with this is that in actual fact, working out the best way to solve the problem is a fair chunk of the process of solving it, and the solution that comes to mind after a moment’s reflection is likely to differ from the solution ultimately chosen.

 

The reason for this is that for any problem in software development, there are normally huge numbers of possible solutions. We have already discussed how the choice of tools and building materials (if we persist in trying to make the construction analogy work), in the form of languages, frameworks, and libraries, is vast. But even when the tools and materials are chosen, the actual construction process is nothing like, for example, building a brick wall. It’s more like writing an essay.

 

Ask two developers to complete the same task and their code might be mutually unrecognizable, in the same way that two students given the same assignment might turn in two completely dissimilar pieces of work, even though both fulfill the requirements of the assignment.

 

This is not simply a matter of personal style. Variations in approach can also be evaluated less subjectively, with respect to how well they cope with “edge cases,” how easy they will be to add to or adjust in the future, and how easy they are for others to understand.

 

It may also turn out that a particular approach, although seeming to score well on all of the above points, must actually be rejected because it renders one or more features of the original requirements actually impossible to fulfill.

 


These flawed solutions are of particular relevance in the estimation process, because if it’s one of these that the developer has in mind when estimating (not having seen in advance the flaw), then not only will they waste time working on that solution, but when the flaw is discovered and a new approach is needed, it may well be that the additional time taken to adopt the new approach is completely different to the time estimated, because the developer must go about it in a completely different way.

 

The more sophisticated developer will try to take this level of the unknown into account when making estimates, avoiding assumptions about what solution will be the correct one. But when they do so, they leave themselves with very little to fall back on to help them establish the time required. They can only provide a gut feel for how long a non-specific solution might be expected to take.

 

What’s surprising (or at least, it surprises me) about all this is that there are always so many different, equally valid ways of building software, given that most software does basically the same thing.

 

Broadly, software presents one user a chance to put some information into it (be it a form on the company’s holiday request system, the “new email” window on your mail client, or the delivery address screen in your shopping website’s purchase pages), and then it checks that information, normally stores it somewhere, shows the user something relevant, and then at some point later shows a different user what the first user put in, or possibly some aggregate of what multiple users have put in.
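Sketched as code, that universal shape is strikingly small. (A deliberately generic sketch; the holiday-request domain is just the first of the examples above.)

```typescript
// The shape most business software shares: one user puts information in,
// the software checks and stores it, and another user later gets it out.
interface HolidayRequest {
  employee: string;
  days: number;
}

const store: HolidayRequest[] = []; // stand-in for a real database

function submit(request: HolidayRequest): void {
  if (request.days <= 0) throw new Error("Days must be positive."); // check it
  store.push(request);                                              // store it
}

// Show a different user an aggregate of what multiple users put in.
function totalDaysRequested(): number {
  return store.reduce((sum, r) => sum + r.days, 0);
}

submit({ employee: "anna", days: 3 });
submit({ employee: "ben", days: 5 });
console.log(totalDaysRequested()); // 8
```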

 

Someone puts information in, someone gets information out. Given that, it does seem a little bit absurd that there hasn’t been some level of standardization, both of the tools available and of the techniques used.

 

It seems to me that there should be a good analogy with walls. Lots of people want to put up walls in a variety of different places, for a variety of reasons, but walls all do basically the same things: they keep people out, they keep warmth in, they provide privacy, and they support things built above ground level, like roofs.

 

In the days of mud and sticks, I'm sure there were a million ways to build a crude wall. But then bricks came along, standard in shape and size, and with them one basic way of building a wall (albeit with several variations). Standard bricks and standard techniques will see you right in most situations where a wall needs building, and if you're a professional bricklayer you'll have a pretty reliable idea of how long it'll take to build any given wall. This is recognized as a pretty good thing.

 

Software feels like it's stuck in the sticks-and-mud phase, when we would patently all be better off if we could have some bricks to work with. Why aren't there software bricks?

 

One view (read: excuse) I’ve often heard aired is the “software is a young discipline” theory. It’s unfair, goes the argument, to compare building software to other forms of engineering because we haven’t been doing it very long. It takes time for consistent processes to emerge, for practices to standardize. At the moment we’re in a phase of semi-blind experimentation, and that’s just how it goes.

 

Which would be quite convincing, if it weren’t also such utter rubbish. For one thing, software isn’t a very young discipline at all. Ada Lovelace wrote the first published computer program in 1843, more than 100 years before the first artificial satellite, and you don’t hear NASA whingeing about being too young a discipline to be expected to have stable best practices and common standards for building rockets.

 

For another, if we were really groping towards a blessed age of stability, one would have thought there would have been some progress towards it by now. Whereas in fact languages and libraries are proliferating, new paradigms in programming appear with alarming regularity, and the rate of change of technology is, by all accounts, increasing. This does not feel like the transition towards a new, “mature” state.

 


Rather, it seems to me (beware, I feel another theory coming on) that software development is in a state of change because software is tied to the cutting edge of technologies that are continually redefining what we can expect from them, and therefore changing what we want from them.

 

Our basic expectations of a wall have remained the same for several thousand years—if I wanted to build a new wall today, and found that I had somehow overlooked the presence of a seventeenth century wall exactly where I wanted my new wall to go, there’s a decent chance that old wall would serve my new need.

 

But now suppose I work for a company that needs a system for processing employee expenses, and I’ve just discovered that the company has some old expense processing software. It does broadly the same thing—one user puts in information about expenses, and another looks at that information and approves it, leading to the accounts department being notified about reimbursement. Could I just reuse it?

 

The answer is probably yes, so long as the old software isn’t so old as to be obsolete. But how old can it be before it becomes obsolete? Well, assuming we’re in 2017 now, the old software can’t be from the 1970s, because I don’t want to have to input information on punch cards. It can’t be from the ’80s because I can’t have it running on a central mainframe—I don’t have one of those. It can’t be from the ’90s because I need it to be accessible via the Internet.

 

It can't be from the 2000s because I need it to be mobile-friendly. If it's from the 2010s then it might just about serve, but setting up a way of running it is going to be painful, because the hardware and supporting software it relies on are probably obsolete, and many security flaws have since been uncovered in them.

 

Even if the system is only 5 years old, integration with other systems will be a pain, maintenance will be harder because it will require vanishingly rare tools, and it'll probably look pretty dated.

 

The rate of change of software is absolutely breathtaking and will continue to be so for as long as humanity continues to use the computer (using the term “computer” broadly) to redefine and reinvent its world, which I would suggest may well be forever. 

 

New possibilities will lead to new requirements, resulting in new languages, tools, and techniques, all of which means that even though software developers continue to solve broadly the same problems, every new attempt at a solution involves some element of the unknown, because there is not, and never will be, a single, stable, universally understood and easily estimable way of solving a particular problem.

 

Every time it’s solved it’ll be solved in a slightly different context, and that context is the killer when it comes to accurate estimation.

 

Refusing to play the game


What to do, then? The most endearingly petulant solution to the Estimation Problem is a movement that has sprung up over recent years around the hashtag #noestimates.

 

The premises of this movement are that (a) accurate estimation is impossible, (b) estimates are often a means used by managers to impose unrealistic deadlines on developers, and (c) time put into coming up with estimates is time that could be spent on development instead.

 

Therefore, say the #noestimates crowd, the mistake is asking for estimates in the first place. Businesses should be weaned off their childish and unhelpful dependency on made-up numbers that have no real meaning or value.

 

Perhaps it will not surprise you to learn that this movement is far more popular among developers than managers. It has gained a surprising amount of traction, though, so it's worth having a little look at its premises, considering whether and in what circumstances it makes sense, and, if it doesn't, working out how to respond to its proponents.

 


Regarding the idea that accurate estimation is impossible, one argument offered is that software development is actually like scientific research. Just as it would be absurd to ask a scientist how long it will take to prove the existence of dark matter, the argument goes, so too is it absurd to ask developers to estimate how long their work will take, because software development is all about exploring the unknown (apparently, when dealing with known stuff, you don't need software developers, because what you need already exists). To which the response is surely, "Come now, don't take yourself so seriously."

 

Yes, there are lots of unknowns in software development. No, it's nothing like the level of the unknown in pure scientific research. Anyone who has ever had to pay a builder more than they originally quoted because something unexpected happened partway through the build (cf. circa 2006, when I tried to get help renovating my kitchen) knows that for any job that is estimated there is always some level of the unknown.

 

Depending on how much unknown stuff there is, estimation can be easier or harder. Scientific research is at one extreme. Just because software development is on the spectrum, it doesn’t mean it’s at the extreme too. Estimating software tasks is really, really hard, but to dismiss it as impossible is, frankly, a bit churlish.

 

Turning to the idea that requiring estimates is somehow part of a management conspiracy to put pressure on developers, I can only repeat the point I made above: managers' deadlines are built on developers' estimates. With my coder hat on, if we developers feel that our deadlines are unrealistic, it's because we have failed to provide appropriate estimates; we have failed to build an appropriate level of unknown-ness into the numbers we have provided, and we only have ourselves to blame.

 

Finally, I will concede that the idea that time spent estimating is time wasted does actually make sense, so long as you have absolutely no understanding of nor interest in how a business works. Developers who describe an estimate-free business tend to suggest that product development should be a process of incrementally improving something, making it better and better as quickly as possible, and at each stage taking decisions based on what the product is rather than guesses about what the product might be at various points in the future.

 

Which is very sweet, but I would like to present to you three short scenarios, all drawn from personal experience, which the #noestimates gang completely fails to take into account.

 

Scenario one:


The start-up runway. Your company is going to run out of money in October, which means you need to start pitching to investors in June to have a hope of surviving. It’s currently April. You could either devote your energies to rebuilding the UI to make it more attractive, or you could try to make that whole new dashboard you’ve been talking about.

 

The latter would be a coup, but it will only be valuable if, at the demo, it meets a certain minimum specification; otherwise, the investors won't be interested. You have to decide whether to build the dashboard or just to rebuild the UI. The key question: if you build the dashboard, will you have it to the minimum useful spec by June?

 

Scenario two:


The quote. You want a new mini-site built to publicize your company's big new initiative, but your budget is limited and there is a maximum amount you can spend. You've asked a development agency you trust (who bill by the day) how much it'll cost to get it done, so you know whether to greenlight it or to can the whole idea and spend the money elsewhere.

 

Scenario three:

The launch. This year there's a big international product release by your company. It'll involve training hundreds of staff worldwide, a coordinated global marketing initiative, and a giant transcontinental exercise in logistics. The product can't be shipped until its accompanying software is polished and feature-rich. The CEO has asked the tech department when the software will be ready so that the rest of the company can start scheduling their deliverables.

 

It turns out that in the real world, estimates are really, really important because product development doesn’t normally happen in a vacuum. I absolutely accept that in the rare scenarios where it is possible to avoid making any estimates at all, there are real advantages to not making them and just getting on with writing code instead. It sounds pretty idyllic. But I have yet to work on a project or product where that would actually be feasible.

 

Estimates are graphs, not points


Assuming, then, that you find, like me, that simply not using any estimates at all isn’t possible, you’re going to need to have a way of working with them, despite their being pretty consistently unreliable. One thing you may find helpful is to stop thinking of an estimate as a duration in time and start thinking of it as a probability distribution curve.

 

That is to say, due to the level of unknown-ness involved, when a developer says "5 days," even when they've taken the uninteresting stuff into account, it's best to understand that as meaning:

  • It’s reasonably likely the task will take 5 days to complete.
  • It’s also somewhat likely that the task will take 7 days to complete.
  • It’s not going to be that surprising if the task takes 10 days to complete.
  • You can be pretty confident that the task will not take 25 days to complete.

 

This also goes the other way:

  • It’s perfectly possible that the task will take only 4 days to complete.
  • There’s an outside chance it might only take 3 days to complete.
  • With the best will in the world, there’s no way it’ll only take 1 day to complete.

 

If you were to plot a graph of likeliness vs. task duration, the high point of the graph would be at the 5 days mark. But that doesn’t mean it’s safe to assume that the task will take 5 days for the purpose of planning. Quite the opposite: it’s painfully clear from the history of software development that 5 days is a very unsafe assumption to make, thanks to all that stuff to the right of the 5-day point on the graph.

 

There’s a very decent chance the task will take longer than the number the developer gave, and therefore that an assumption of 5 days would cause problems.

 

Now you might think that actually, with enough tasks, things ought to sort of balance out. If half the tasks take longer than estimated, but half take less time, then in the long run surely they’ll balance out and the project as a whole will be roughly on track, right? Sadly, though, real life doesn’t behave like that.

 

First of all, entrusting one's professional success to the law of large numbers is arguably rash. Second of all, the tendency we discussed earlier for developers to forget the uninteresting stuff introduces a skewing factor: tasks are more likely to be wrong on the long side than the short.

 

And thirdly, remember that the graph will be asymmetrical: a task estimated to take 5 days could absolutely take 10 more days than expected (i.e., 15 days total) to complete, whereas it couldn't possibly take 10 fewer (that would be a negative duration). The high point of the graph might be at 5 days, but most of its area will be to the right of that. The developer is telling you the mode, but you're concerned about the mean, which is a larger figure.
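
To make that asymmetry concrete, here's a minimal simulation in Python. The lognormal shape and its parameters are my own illustrative assumptions, not data from real projects; all I've done is pin the most likely outcome at the developer's 5-day estimate:

    import math
    import random

    random.seed(42)

    SIGMA = 0.6                      # spread: how much unknown-ness there is (assumed)
    MODE = 5.0                       # the developer's stated estimate, in days
    MU = math.log(MODE) + SIGMA**2   # for a lognormal, mode = exp(mu - sigma^2)

    # Draw 100,000 hypothetical task durations from the skewed distribution.
    samples = [random.lognormvariate(MU, SIGMA) for _ in range(100_000)]

    mean = sum(samples) / len(samples)
    overruns = sum(1 for d in samples if d > MODE) / len(samples)

    print(f"most likely duration (mode): {MODE:.1f} days")
    print(f"average duration (mean):     {mean:.1f} days")
    print(f"chance of exceeding 5 days:  {overruns:.0%}")

With these made-up parameters, the mean comes out at roughly 8.5 days and around three-quarters of the simulated tasks overshoot the 5-day estimate: the mode and the mean really are very different numbers.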

 

The trick is therefore to make estimates that take into account a big enough chunk of the probability distribution to make you feel comfortable. As a general heuristic, when developers give me estimates that seem to adequately account for the uninteresting, I tend to account for the unknown by doubling those estimates in order to come up with a completion date. That has served me fairly well so far.
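
Expressed as code, the heuristic is almost embarrassingly simple. A sketch, where the factor of 2 is nothing more than my rule of thumb:

    def planning_estimate(developer_estimate_days: float, factor: float = 2.0) -> float:
        """Convert a raw developer estimate into a figure safe enough to schedule against."""
        return developer_estimate_days * factor

    print(planning_estimate(5))              # -> 10.0 days to put in the plan
    print(planning_estimate(5, factor=3.0))  # -> 15.0 if slipping would be really serious

The tripling option anticipates the advice later in this section for projects where falling behind schedule would be catastrophic.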

 

The downside of doing this sort of doubling is that you either have to conceal your schedules from your developers, or you have to essentially say to them, “I think everything will take twice as long as you say it will.” Sometimes developers respond well to this—they appreciate that you are giving them that contingency buffer they’ve always wanted.

 

But sometimes they take offense at your cynical attitude towards their estimates. Or worse, they can take the perceived “extra time” available as an opportunity to do a whole bunch of extra things that no one really needed but they wanted to do anyway, which together serve to push the whole project back behind schedule again.

 

Empiricism


To get developers on board it can be helpful to take a more empirical approach, and one such route is story points. Story points are a staple of the Agile process; broadly, they are a way of letting developers provide estimates in a form that lets one adjust for their tendency towards over-optimism, transparently and without hurting anyone's feelings.

 

The way it works varies from company to company, but the broad gist is this: when you have a chunk of work that needs doing, break it down into tasks, and get the developers to estimate them, but instead of assigning a number of days, ask them to assign a number of points. The first time around, equate a number of points to a duration, so that, for example, 1 point means a couple of hours, 2 points means half a day, 3 points means a day and 5 points means 2 days.

 

When the work is done, divide the total number of story points by the amount of developer-days taken (i.e., the sum of the number of days that each developer worked on the project) to get your “velocity,” which is a measure of the number of points that the team can complete per day.

 

So, for example, suppose I have a project that involves 3 tasks. One is small and should take only half a day, so it’s estimated as being worth 2 points. One is about a day’s worth of work so is given 3 points, and one is even bigger, so is given 5 points. It takes a week to get it finished, during which one of the two developers puts in 2 days on the project and 3 days on other things, and one developer is on vacation for 2 days, so puts in 3 days of work.

 

This means that the 10 story points were completed in 5 developer-days, meaning that the velocity of the team is 2 story points per developer day.

 

Now, here comes the clever bit. The next time there’s another chunk of work to be done, you ask the developers to assign story points based on how each task compares in size to the tasks as they were estimated the last time around. If a task feels like it’s of a similar size to that small task from the last time around, give it 2 points.

 

If it feels more like the slightly larger task, give it 3, and so on. Based on the total number of points assigned, and the velocity you established earlier, you can estimate the number of developer-days it’ll take to complete the chunk of work. When the chunk of work is done, you can revise your velocity based on the actual number of developer-days it took, and use that revised velocity the next time around.
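
Here's a minimal sketch of that arithmetic in Python, using the illustrative numbers from the example above (the function names are my own, not part of any Agile standard):

    def velocity(points_completed: int, developer_days: float) -> float:
        """Story points the team completes per developer-day."""
        return points_completed / developer_days

    def forecast_developer_days(new_points: int, v: float) -> float:
        """Developer-days a new chunk of work should take at velocity v."""
        return new_points / v

    # Last time around: 2 + 3 + 5 = 10 points took 2 + 3 = 5 developer-days.
    v = velocity(points_completed=10, developer_days=5)

    # A new chunk of work, sized by comparison against the old tasks.
    new_chunk_points = 2 + 3 + 3 + 5
    print(f"velocity: {v} points per developer-day")        # -> 2.0
    print(f"forecast: {forecast_developer_days(new_chunk_points, v)} developer-days")  # -> 6.5

After the new chunk is finished, you'd recompute the velocity from the actual developer-days taken and carry the revised figure forward to the next round of estimation.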

 

The good thing about this approach is that it gets more accurate the longer you do it, because developer estimates, when converted to story points, do broadly correlate with the length of time the tasks will take, so long as you can work out how much to scale the estimates by; that scaling is exactly what your velocity provides, and the velocity gets more and more accurate as time goes by.

 

Equally, by stopping your developers from estimating amounts of time, you can compensate for their built-in optimism without having to contradict them. If your team’s velocity turns out to be 1 point per developer-day, and a developer says that a task is worth 1 point, you can assert without hurting anyone’s feelings that the task will take a day to complete, because the evidence justifying that assertion is plain to see.

 

Whereas if you asked the developer how long the task would take, they might well say “a couple of hours”—after all, that’s what “1 point” originally meant—at which point even though past evidence suggests it’d take a day, if you actually said so you’d be contradicting the developer, and tensions might arise.

 

Essentially, story points capture what developers are good at estimating, which is the relative size of a given task, while leaving out the thing they’re bad at providing, which is the absolute duration, instead deriving that from past performance.

 

The downside of the story points system, of course, is that it relies on accumulating a bunch of data for any sort of accuracy to kick in. Which is great if your process is iterative and long-running, but it’s less than helpful when a new team is assembled at the start of a big new project, and you are asked to commit to some timescales before you’ve had the luxury of doing some work to calibrate the estimates of the developers.

 

At that point, the best advice I can offer is to do my doubling trick. (If the implications of slipping behind schedule are really serious, see if you can get away with tripling the estimates you get. Not kidding.)

 

Regardless of how you interpret developer estimates, what I hope I've made clear is that such estimates require interpretation, and should seldom be taken at face value. There's a ramification to this, and it's not a very nice one: you're going to need to make sure that people who make decisions about timelines, but who haven't been taught how to interpret developer estimates, don't have too much contact with developers without an interpreter present.

 

If you’re a project manager and you have some developers working for you, be wary of the CEO dropping by to check how things are going when you don’t happen to be around. You may know that when your database specialist says “there’s about a week’s worth of work left,” that means things are looking good for launch in a month’s time, but the big boss probably doesn’t.

 

Hopefully your developers know not to make promises about timelines in your absence, and hopefully your boss knows that you’re the only one to trust when it comes to reporting on status (and no one wants to put up barriers to communication in the workplace); but take it from me, this one can bite you, hard. You have been warned.

 

The Arithmetic Problem


Don’t worry; I’m almost out of nasty surprises about software project management. But there’s one more big one that we’re going to have to cover to get a complete picture of all the nastiness that lies in store for the hapless technical manager.

 

I'm calling this one the Arithmetic Problem, but its essence is described most famously by Frederick Brooks in a formulation known as Brooks's Law, which he articulated in one of the truly great books about managing software development, his 1975 classic, The Mythical Man-Month.

 

In the last section, we used the term “developer days,” which is just a different unit for measuring the same basic thing as a “man-month” (and a more popular one these days, since it preserves the pleasing alliteration whilst avoiding the slightly uncomfortable and often-inaccurate gender-specificity).

 

It’s a measure of how long something will take that varies in absolute time depending on the number of developers available. A team of 3 developers working for 5 days on a problem spend 15 developer days on it, and so on.

 

Where programming tasks are completely unrelated, they can be performed in parallel by two people twice as quickly as one person doing them one by one. But the moment the tasks are in any way related, the gain of using more people starts to decrease. The conceptual complexity of interacting software components necessitates a clear understanding of the system, and when two people are working on a system, each needs to understand what the other is working on.

 

Communication is tricky and slow (because what one is communicating is closely linked to human processes, and as we have discussed, we’re not very good at imagining and describing processes), and so as more people get added to a project, more time needs to be allocated to communicating ideas and, sadly, to clearing up miscommunications.
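
A back-of-the-envelope sketch in Python shows the shape of the problem. The pairwise channel count n(n-1)/2 is Brooks's observation; the cost per channel is an invented figure, purely for illustration:

    def effective_output(team_size: int, overhead_per_channel: float = 0.02) -> float:
        """Developer-days of useful work produced per calendar day, after communication costs."""
        channels = team_size * (team_size - 1) / 2   # every pair must stay in sync
        return max(team_size - channels * overhead_per_channel, 0.0)

    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:>2} developers -> {effective_output(n):6.2f} developer-days of work per day")

With these toy numbers, a 32-person team produces about 22 developer-days of work per day rather than 32, and each additional hire adds less than the one before.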

 

So when planning a project, given that your estimates for tasks are normally based on a developer imagining doing them one by one, it can be hard to use that information to predict how long it will take a team of developers to complete them.

 

Brooks’s Law

But there’s an additional sting in the tail. Brooks’s Law, established through Brooks’s painful firsthand experience and corroborated by the similar experiences of hundreds of other project managers, is this: “adding manpower to a late software project makes it later.”

 

The primary reason for this is that software projects involve not just the developers building all the component pieces of the software, but also the developers building up clear and coherent mental models of how the software works.

 

As the thing gets more complex they need these mental models to help them navigate the code base and not accidentally break one thing by fixing another. Developers brought on to a project part way through don’t have these mental models and therefore take a long time to get up to speed.

 

It takes quite a lot of help from other developers (in the form of direct conversation, code review, and fixing what the new developers break) to get the new developers to the point of full productivity, all of which help requires a lot of time from the original developers. Things slow down, at least in the short term, when teams grow in the middle of a project, and the slow-down can be dramatic.

 

If arithmetic with developer days is hard before a project starts, it becomes almost completely meaningless once the project is up and running.

 

In summary

In this blog we’ve seen that, unlike when building a house, when it comes to software it’s almost impossible to know what you want. And even if you did know, it would be impossible to know how long each part would take to do. And even if you did know the theoretical length of each task, it would be impossible to work out the amount of time it would take an actual team of a specified size to do it.

Which goes some way to explaining the sordid catalog of failure that is the history of software projects over the last fifty years.