20+ Coding Challenges in Software Projects (2019)

Imagining human processes is hard, and this makes it very hard to design and code them without seeing them in action, which is essentially what planning a software project in advance asks us to do.

 

Mistakes, challenges, and omissions in the planning process lead to incorrect technical decisions, which in turn make the inevitable changes to correct those mistakes surprisingly expensive. This tutorial explains 20+ coding challenges that arise in software projects in 2019, along with several tips for overcoming those challenges without losing time and budget.

 

The most obvious solution to the problem—more and better planning—tends actually to exacerbate the problem. You may be thinking that if more planning makes the problem worse, there’s potentially a radical way of avoiding the problem altogether. If so, then ten points to you for your perceptiveness. 


Coding Challenge 1:

The Challenge of Developer Empowerment

The biggest potential challenge for commercial software organizations, however, is the developer. As the availability of source code and services has democratized access to technology and irrevocably altered traditional procurement processes, developers have increasingly become technology kingmakers.

 

Where technology acquisition was once the province of the CIO, today it’s the practitioner leading that process, because, by the time a CIO typically hears about a project today, a majority of the technology and architectural decisions have already been made.

 

To be sure, this is not the first time we have seen a major shift within organizational technology buyers. As technology grew more common within enterprises, for example, responsibility for purchasing and procurement gradually moved from traditional operational elements to departments that specialized in technology usage and deployment—what we today know as IT.

 

In more recent years, meanwhile, IT’s control of technology usage and adoption has been waning, frequently in favor of more empowered marketing departments. But from the standpoint of a commercial software vendor, there is one key difference between developers and other organizational bodies that have controlled technology acquisition in years past: developers typically have no budget.

 

Which means that the single most important audience from a technology adoption standpoint frequently has limited or no ability to pay for the products a commercial software vendor is offering.

 

Most commercial software vendors, though slow to wake up to this new reality, are beginning to move aggressively to court developer populations and update their engagement and outreach capabilities.

 

Instead of relying strictly on an enterprise-focused sales force, armies of technical evangelists and developer engagement professionals are being unleashed on unwitting developer populations in an attempt to ensure a given software vendor’s relevance for the population most likely to be making technical decisions.

 

The problem facing software vendors, however, is that while recognizing the problem is indeed the first step, the solution is less than obvious.

 

Commercial software vendors are particularly disadvantaged because they are compelled to compete in a two-front war with both free software and software made available as a service. From a competitive standpoint, this means downward pressure on price with potentially higher costs driven by a need to compete with different models.

 

But as discussed, even commercial open source vendors are challenged by a market that increasingly values convenience and availability over performance and features. Developers have long prized availability and speed, and while open source software is preferred, open source software that is managed by a third party possesses crucial advantages in terms of convenience.

 

Coding Challenge 2:

The Challenge of Competing with Free

One of the most obvious factors acting as a brake on software pricing has been the wider availability of open source software.

As the Encyclopedia Britannica discovered with Wikipedia, it is difficult to compete with free, even in cases where the premium product is technically superior or differentiated in a meaningful way.

Likewise, proprietary software vendors have been forced to adapt to a library of open source alternatives growing by the hour.

 

Organizations seeking to commercialize open source software realized this, of course, and deliberately incorporated it as part of their market approach. While MySQL may not have succeeded in shrinking the market to three billion, it is interesting to note that growing usage of MySQL was concurrent with a declining ability of Oracle to sell new licenses.

 

Which may explain both why Sun valued MySQL at one-third of a $3 billion market and why Oracle later acquired Sun and MySQL. The downward price pressure imposed by open source alternatives has become sufficiently visible, in fact, as to begin raising alarm bells among financial analysts.


Even hardware companies are at risk. Cisco has built itself into a large systems supplier primarily on the back of its networking business; switching and routing are typically responsible for close to two-thirds of the company’s revenue.

 

For years, even as areas such as compute and, to a lesser extent, storage began to succumb to the onslaught of software-powered alternatives such as the public cloud, networking remained relatively immune.

 

Nor is any respite in sight. What began as a trickle of open source software with the early success of the Linux, Apache, MySQL, and PHP (LAMP) stack has lately turned into a flood. The number of software categories without a viable open source choice is growing smaller by the day. As Donald Knuth recently put it in an interview:

 

The success of open source code is perhaps the only thing in the computer field that hasn’t surprised me during the past several decades. But it still hasn’t reached its full potential;

 

I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.— DONALD KNUTH

 

Open source now comes from every direction: from modern technology organizations committing to open sourcing the majority of their nonstrategic codebases, to companies using open source as an asymmetrical means of market competition, to individual developers making their solutions to various problems available to the wider world.

 

The rapid growth of open source has forced commercial software vendors to adapt both their products and the models with which they sell them. In many cases, open source software has also impacted pricing.

 

So if open source is cannibalizing the commercial software markets, it’s all smooth sailing for those who commercialize open source, right? Well, not exactly. In order to monetize an otherwise free and open source software product, vendors have been forced to develop creative new business models to get buyers to pay for what they can otherwise get for free. The most common of these are described below.

 

Support/service

The most common model of commercial open source is support and service. Instead of paying for the product, buyers pay vendors to support a product they can otherwise obtain at no cost. The advantage of this model is that most large organizations require commercial support for production applications, so sales are less of a challenge.

 

The disadvantage of this services-only approach is that the deal size is commensurately lower than with traditional commercial software that includes both a license component and support and service.

 

Dual licensing

Another popular model historically has been dual licensing. Best exemplified by MySQL, this model requires that a relatively restrictive open source license, typically the GPL, be applied to a given codebase.

 

Unlike so-called permissive licenses such as Apache, BSD, or MIT, the GPL and other similar “copyleft” licenses require that any changes, modifications, or additions to a given codebase be made available under exactly the same terms.

 

For commercial organizations that wish to embed open source in a closed source product, this could mean being forced to open source code that they would prefer to keep proprietary.

 

Rather than comply, then, buyers can purchase an alternative license (hence the “dual”) for the product, which allows them to keep their code private without being required to release anything in return. While this model spawned a host of imitators, however, usage of it has been declining for some time.

 

For the model to work, the vendor must maintain or acquire copyright permissions for the entirety of the codebase, and as MySQL itself has proven, this can prove problematic over time as would-be contributors make fixes and updates available but decline to share copyright with the vendor.

 

Open core/hybrid source

Probably the most common model today, open core describes a project that is partially, even mostly, open source, but with some portion of the project or some features remaining proprietary. Typically there is a basic level of functionality, referred to as the core, which remains open, and proprietary features or capabilities are built on top of and around it.

 

The highest profile example of this model today is Hadoop. Cloudera, the first organization to commercialize the data processing platform, contributes along with other organizations, commercial and otherwise, to the base Hadoop project, which is open source. A proprietary product that includes management functionality is then sold to customers on top of the base open project. This model is viable but can be difficult to sustain.

 

One of the challenges for those adhering to the open core model is that the functionality of the underlying open source project is evolving at all times, which means that the proprietary extensions or features must outpace the development of the open source project to remain attractive to customers.

 

The reality then for purveyors of commercial open source offerings is that while the models have been demonstrated to work, in some cases generating substantial market value, none work particularly well. Each model has limitations that act to inhibit the types of growth we have seen from the software market in years past.

 

It’s commonly accepted, in fact, that the market will never see another Red Hat, which is to say another billion-dollar revenue entity that primarily or exclusively sells open source software. And even those responsible for open source businesses admit that open source is problematic from a business model standpoint.

 

If open source is challenging proprietary software companies, and also those wishing to commercialize the open source software, the real moral of the story might be that it’s pretty hard to build a successful, standalone software company, period.

 

Coding Challenge 3:

Technical specifications, human processes


So if we accept the premise that it’s surprisingly hard to define requirements for software projects in advance, we have a bit of a problem, because the traditional process of managing projects involves planning everything out in advance.

 

First, you work out what it is you want to do, then you do it, right? But with software, you don’t seem to be able to know what it is you want to do before you do it. It’s as if software development was designed expressly so as to be unmanageable.

 

But why is this? Software engineering is just another type of engineering, and other disciplines don’t have this problem. Which isn’t to say that other sorts of engineering aren’t tremendously hard to manage, but rather that they don’t seem to produce failures on anything like the scale that software does.

 

Equally, the process of building a piece of software really does look analogous to the process of building a house, and while the world of construction is fraught with potential logistical disasters, its track record is much better than that of software. What’s going on?

 

The theory is this: the construction industry, and most branches of engineering, are about creating things, whereas software is normally about creating processes. I don’t mean processes in the way that a combustion engine has a process—it’s not that software involves moving parts.

 

Rather, the vast majority of software, particularly the stuff built in a business context, is about creating a framework to enable a human process. In the e-card example above, the exciting thing isn’t the mechanism for taking text and laying it out on top of a picture of an elephant.


The exciting thing is enabling a process whereby one person writes some things and clicks a button, and a second person gets an email with a bit they can click on to see something that is generated from the things the first person wrote.

 

And the difficult bit isn’t imagining how the mechanism works. The difficult bit is imagining the details of how the human process works.

 

To put it another way, when we are planning software, we’re normally not planning software. Rather we’re planning a new process for employees, clients, or customers, and also some software to enable that process. This is always the case when we’re building a new product, and almost always the case when we’re upgrading something already in existence.

 

Even when we’re just digitizing an existing system, it’s no good saying, “But the process is exactly the same; we’re just now recording the information in a database instead of on a paper form.”

 

Typing things into a computer is a very different process than writing on a piece of paper when you’re looking at the level of individual actions by human beings, and, as we’ve seen above, it’s at that level that lots of nasty and easy-to-overlook problems lurk.

 

This distinction of subject matter between physical things and human processes points to the reason behind the Imagination Problem: it’s really hard to clearly imagine an entire process. Our brains aren’t very good at visualizing them. And there are far fewer tools we can use to help us.

 

Consider what would happen if we were trying to build a bridge. We would draw up detailed architectural plans that we could pore over. We would build a scale model of the bridge so that we could actually see exactly what it would look like and inspect every fine detail, all before we started building.

This would give us ample opportunity to spot problems at the point where making changes is cheap.

 

But there’s no equivalent of the architectural model when it comes to software. For a model of a process to be useful, it has to actually work, because we need our model to illustrate not only its spatial properties but its temporal properties as well.

 

And the thing about software is that building a working scale model basically takes as much time as building the full-size thing it represents. So we more or less have to build the whole thing in order to be able to see the problems and gaps in our initial design.

 

This has some problematic ramifications, because software has yet more ways in which it doesn’t behave like other forms of engineering. Suppose we were part way through building a bridge, and we realized there was a minor ambiguity in the plans which needed to be resolved before we could proceed.

 

We’d have to choose a resolution to the ambiguity and plan and execute the relevant additional work. There would be cost and time implications, but we would expect them to be minor, in proportion to the size of the ambiguity.

 

Compare now what happens when we come across a similar ambiguity part way through a software project. A minor ambiguity requires a minor clarification, which might involve adding a relatively small piece of functionality, something that looks pretty easy.

 “In computer science, it can be hard to explain the difference between the easy and the virtually impossible.”

 

Software is constrained by the limits of the technologies it is built upon, and, like the proverbial military general who always tries to re-fight the last war, software technologies tend to be optimized towards solving last year’s big problems.

 

A year is a long time in the world of software (more on this later), and this means that there’s a good chance that while the technology in use facilitates 9 out of every 10 features that the spec requires, one in every 10 (and it’s always one that looks from the outside just like the other 9) will turn out to be completely unsupported and will require going to extreme lengths to make it happen.

 

In the e-card example, I wasn’t kidding about the complexities of dynamically resizing text to fit in a box—if you’re using HTML and CSS, while doing it for images is trivially easy, doing it for text is unexpectedly complex.
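To give a concrete feel for that asymmetry, here is a minimal TypeScript sketch (assuming a browser DOM and hypothetical element IDs, so purely illustrative): an image can be constrained to its box with a couple of style rules, whereas text has to be measured and shrunk in a loop.

```ts
// Minimal sketch: fitting content into a fixed-size box in the browser.
// Assumes a DOM environment and hypothetical element IDs.

// Images: a couple of declarative rules are usually enough.
const img = document.querySelector<HTMLImageElement>("#card-image");
if (img) {
  img.style.maxWidth = "100%";
  img.style.maxHeight = "100%";
}

// Text: there is no equivalent one-liner, so we measure and shrink.
function fitTextToBox(el: HTMLElement, minFontPx = 8): void {
  let fontSize = parseFloat(getComputedStyle(el).fontSize);
  // Repeatedly reduce the font size until the text no longer overflows its box.
  while (
    (el.scrollHeight > el.clientHeight || el.scrollWidth > el.clientWidth) &&
    fontSize > minFontPx
  ) {
    fontSize -= 1;
    el.style.fontSize = `${fontSize}px`;
  }
}

const text = document.querySelector<HTMLElement>("#card-text");
if (text) fitTextToBox(text);
```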

 

This sort of thing means that there is a greater chance that the minor clarification of the minor ambiguity will result in a surprisingly large schedule slippage.

 


 

Coding Challenge 4:

Starting from the wrong place


It’s worth noting, however, that most of the time when a minor addition causes a major headache, the problem isn’t that the thing to be added is difficult to pull off in and of itself.

 

Rather, it’s very common to be told that those difficult things wouldn’t be difficult if only a particular technical decision had been made differently at some earlier point in the process.

 

Why? Because, again, building software is nothing like building a house. When you build a house for a given design, your choice of materials and techniques is fairly constrained—if you want it to look like brickwork, you build it with bricks; if you’re after that glass and steel effect, you use glass and steel to build it. In software, nothing is so certain.

 

Let’s start with a choice of language. Every piece of computer code is written in a particular language, one that is normally very formally defined, and that both humans and computers can understand (to make a massive oversimplification for the sake of convenience, humans write in the language and computers read it).

 

A fundamental choice when deciding to build any piece of software is deciding which language to build it in. How wide is the choice?

 

Well, consider the website www.99-bottles-of-beer.net. It comprises a collection of computer programs, each doing the same thing (printing out the lyrics to the eponymous song) in a different language. It features 1,500 distinct languages and doesn’t pretend to be comprehensive.

 

Admittedly, the vast majority of these languages would be such terrible choices for any serious project that they can safely be ignored. The main reason is simply their obscurity—a language that lots of people know and use will have lots of helpful language-specific tools that can be used to speed up development.

 

It will also have a large online community of people who can help when one gets stuck and, crucially, compatibility with the sorts of systems that one might need one’s new piece of software to interact with (i.e., to oversimplify massively once again, some sorts of computers don’t know how to read some computer languages).

 


So you might be able to reduce your choice of languages to fewer than five contenders, and you’ll probably be guided by the languages that your existing developers are familiar with.

 

Ultimately, though, the deciding factor is whether the language enables you easily to fulfill the requirements of the project (i.e., sort of: does that language have a large and detailed enough vocabulary to allow your coders to write down what they want to do without having to make up a bunch of new words).

 

Next comes the question of frameworks. A framework is broadly a tool that provides a structure and a format to the code one writes. (Cheap analogy: You’ve decided you’re going to write your document in English. Now, are you going to write it in Google Docs, Excel, Keynote, etc.?)

 

It’s normally designed to facilitate a particular sort of program, and typically large pieces of software (where the amount of code written by the developers is going to be more than a few hundred lines) are easier to manage if they use a particular framework.

 

Each framework is specific to a particular language, so depending on your language of choice there may be tens of viable frameworks to choose from. Or you may choose not to use a framework at all (the equivalent, I suppose, of using something simple like Notepad to write your document). The key question is whether the problems that the framework solves are the problems that your project will face.
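As a rough, purely illustrative sketch of what a framework buys you (and what it ties you to), compare a single route written against Node's built-in http module with the same route written for the Express framework; the behavior is identical, but the shape of the code is dictated by the tool.

```ts
// Without a framework: Node's built-in http module, with routing done by hand.
import { createServer } from "http";

createServer((req, res) => {
  if (req.method === "GET" && req.url === "/hello") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello, world");
  } else {
    res.writeHead(404);
    res.end("Not found");
  }
}).listen(3000);
```

```ts
// With a framework (Express): routing and response handling come for free,
// but the code is now shaped by, and dependent on, the framework's conventions.
import express from "express";

const app = express();
app.get("/hello", (_req, res) => {
  res.send("Hello, world");
});
app.listen(3000);
```

If a new requirement later turns out not to fit the framework's assumptions, it's the second, framework-shaped version of the code that has to be thrown away and rewritten.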

 

Once you’ve picked a framework, the next questions will be about which if any libraries to choose, what infrastructure and other tools to adopt: a whole plethora of decisions to make.

 

Libraries are, sort of, lists of additional words with definitions that you can optionally teach your computer so that you can use those words when you write your code.

 

They can speed things up tremendously, because instead of painstakingly having to describe to your computer how to, e.g., display a date in a nice user-friendly format, you can simply use a library that defines a word that the computer understands as “display a date in this nice user-friendly format,” and then all you have to do is write that one word.
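For instance, here is a trivial TypeScript sketch contrasting the two approaches, using the built-in Intl API to stand in for a date-formatting library:

```ts
// By hand: describe every step of "display a date in a friendly format".
function formatDateByHand(d: Date): string {
  const months = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
  ];
  return `${d.getDate()} ${months[d.getMonth()]} ${d.getFullYear()}`;
}

// With a "library": use the one word it defines and move on.
function formatDateWithLibrary(d: Date): string {
  return new Intl.DateTimeFormat("en-GB", {
    day: "numeric",
    month: "long",
    year: "numeric",
  }).format(d);
}

console.log(formatDateByHand(new Date()));      // e.g. "25 May 2019"
console.log(formatDateWithLibrary(new Date())); // e.g. "25 May 2019"
```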

 

Normally you’ll be able to start writing code before all these decisions are made, and the decisions will be increasingly low-risk: if you find out you picked the wrong library (because the words it defines don’t quite mean the things you need them to mean), normally it won’t be too painful to change it later.

 

As in, it will be painful, but not excruciating. But if you find out you picked the wrong framework to achieve what you want to achieve, that’s going to hurt, because you will need to throw away quite a lot of the code you’ve written, because it’s normally fairly framework-dependent.

 

And if you realize you picked the wrong language, get ready for some very, very unpleasant meetings, because it’s time to more or less start the project again from scratch.

 

How do you find out you’ve picked the wrong library, framework, or language? Well, it normally happens when an ambiguous part of the spec is “clarified” with a new requirement that it turns out is not at all well catered for by your existing technology choices.

 

If that sounds scary, consider that I’m only talking about changes that might be necessitated to the tools chosen for a project. We haven’t even touched on the other source of change pain, which is changes that apply directly to the code that has been written.

All in all, changes to requirements can really, really hurt, no matter how small those changes look with a non-technical hat on.

 

Coding Challenge 5:

The Challenge of Competing with Available

In addition to the challenge of competing with software that is freely available, commercial software vendors are increasingly pitted against software that is freely consumed as a service.

 

Certainly, this was the view of then-Microsoft CTO Ray Ozzie, who, as far back as 2005, had told the software-focused company that “the most important step is for each of us to internalize the transformative and disruptive potential of services.” Unfortunately for Ozzie, it wasn’t until his departure in 2010 that Microsoft appeared to put its full weight behind that message with the launch of Azure.

 

In the early years of the last decade, vendors such as Salesforce were building traditional business applications that were not designed to be shipped to and installed at a customer’s premises, but rather hosted remotely and accessed via a browser. While the browser technology of the time had its limitations, the model’s convenience more than offset any functional issues with the platform.

 

Over time, the browser-based delivery model became not only accepted but the default. Even software that is installed and hosted on-premise today is more often than not consumed via a browser; native applications are increasingly rare outside of mobile.

 

More recently, other categories of software such as databases have been made available as services. Amazon and Google, for example, both host versions of the open source MySQL database that users may leverage via an API.

 

Both companies also offer more specialized database services in Redshift and BigQuery, respectively. Beyond database-style services, Amazon and Google, along with a wide market of other public infrastructure providers, offer a range of compute, storage, and other software-based services.

 

What this means for commercial software providers then is that would-be customers are increasingly able to choose between two options: software they are responsible for selecting, purchasing, installing, customizing, integrating, deploying, and maintaining, or software they simply consume as a service.

 

While not yet appropriate in every customer scenario, the number of problematic use cases for SaaS is growing smaller by the day. Which in turn means that the competitive pressure on standalone, on-premise software is increased because it is inherently more difficult to consume than SaaS alternatives.

 

Coding Challenge 6:

The Imagination Problem


Looking at the studies, a pattern emerges. The McKinsey report identifies “unclear objectives” and “lack of business focus” as the most significant causes of project failure. The Steve bros report claims that 70% of projects fail due to poor requirements.

 

A 2011 study by Geneca blames “fuzzy requirements” and the business being “out of sync with project requirements,” and claims that three-quarters of executives are so pessimistic about the outcome that they anticipate that their projects will fail before they even start.

 

That’s what the analysts and consultants who study failure post hoc say. But we can go straight to the horse’s mouth as well. Stack Overflow (an online forum for software developers to share technical problems and solutions) undertakes an annual survey of software developers to assess the state of the discipline around the world.

 

In the 2016 edition, which had 50,000 respondents, one of the questions asked about the major challenges experienced at work.

The most popular responses confirm the story told by the studies:

  • A third of the developers who answered complained about unspecific requirements and a similar number also complained about poor documentation.
  • 28% said that changing requirements were a major challenge.

 

The overall picture of how a typical project fails, according to software developers, is something like this: someone asks a developer to do something. They only have a vague notion of what they want, and they communicate it poorly.

 

The developers do their best to interpret what the client wanted based on what they actually asked for, and then start trying to build it, but before they’ve got very far the client changes their mind about what they want anyway. No wonder nothing ever gets finished on time!

 

This is, of course, a very biased interpretation—notice how, according to developers, none of the blame for software project failures falls on the developers.

 

But, coupled with the studies documented above, it becomes apparent that there is something very wrong with the requirements and specifications that are given to developers at the start of a project—they don’t communicate clearly and completely what it is that developers need to build in order for the project to be a success.

 

Why is this? Well, let’s reject out of hand the notion that project managers are bad communicators in general. An ability to communicate clearly is pretty much what makes a project manager a project manager, and a desire to do so is normally what leads people to become project managers in the first place.

 

And let’s bear in mind that specifications are not supposed to be technical documents. They’re supposed to be written in plain English (or whatever the local language is), describing what is required in a non-technical way, allowing technical people to then infer the technical details from them.

The fact that project managers tend to be non-technical should again not be the cause of any problems.

 

So we have a situation where people who are good at communicating clearly in plain English are failing to communicate clearly what it is that needs to be built.

 

The only possible conclusion we can draw from this is the first big, tragic, counterintuitive truth about software: those people don’t actually know what needs building. That sounds absurd, of course, so let’s tidy up a little bit and give it a name.

 

We’ll call this the Imagination Problem, and we’ll characterize it as follows: when it comes to describing a proposed piece of software, where the software is non-trivial and does not exist yet, it is almost impossible to imagine how the software will behave with enough detail and precision to communicate clearly and completely a specification for that piece of software.

 

Or more bluntly, as coders tend to put it: “The customer never knows what they want.”

 

Coding Challenge 7:

A counterproductive mitigation


The misery that change can bring is well known in software circles. The jargon for describing the emergence of new requirements part way through a project is “feature creep,” whose connotations are grotesqueness and insidiousness—no one wants to be creepy, after all.

 

So even the terminology we use makes it very clear that software change is loathed and feared.

 

Given how painful change can be in a software project, it’s understandable that one’s natural attitude when faced with this problem is to do everything possible to reduce change.

 

And this is indeed the approach of the typical project manager who has been burned once or more already by a project whose requirements drifted halfway through.

 

What’s unfortunate is that the standard approach to avoiding change is often extremely counterproductive. This is because the standard approach is to assume that change is caused by ambiguity in specs, and that ambiguity is caused by insufficient planning.

 

The solution, it is therefore assumed, is more detailed planning, and an almost obsessive determination to map out every detail of the software to be built. But this doesn’t banish the Imagination Problem.

 

In fact, while seemingly beating it back, it merely feeds it, strengthening it for its inevitable return once development begins. Because the human brain struggles so much to imagine a process without actually seeing it in action, the way the process is imagined to be at the start may be quite different from how it actually needs to be.

 

Which means that all the obsessive planning normally leads to a photorealistic portrait of the wrong thing. Which means that all those additional details so painstakingly planned become red herrings leading to incorrect technology choices being committed to and a false sense of confidence that makes it harder to notice that the initial requirements need to change until later in the process.

 

Broadly, the more you plan, the more likely your plans are to have a mistake, so the more likely you are to have to change your plans, and changing plans was what you went into this to avoid.

 

It sounds absurd, and of course, it is. But it also happens to be how software projects pan out time and time again. Because software development actually is a little bit absurd, and it just comes with the territory.

 

Coding Challenge 8:

The Estimation Problem


While the Imagination Problem is largely a failure on the part of non-technical people, and one that coders love to cite as the source of all project failures, it is at most only a part of the problem as a whole.

 

There is a second issue which software developers are much less willing to credit with derailing software projects, largely because the blame for this one normally falls squarely on the shoulders of the developers themselves. I am talking about something we will call the Estimation Problem.

 

A few years ago I was tangentially involved in an utter car crash of a project (this one wasn’t my fault, for once).

A piece of software had to be delivered to match a hardware launch, and the small team of developers building it, whose track record at self-organizing wasn’t great, were assigned a project manager from the hardware team—a mechanical engineer by training—to make sure they delivered on time.

 

The project manager duly went around the stakeholders establishing what needed to be done, and asked each of the engineers how long each chunk of work would take, then drew up a Gantt chart and declared the project “kicked off.”

 

Time passed, and the developers remained busy and optimistic—they were making great progress, and everything was basically on track. Except that that’s not quite what the project manager’s weekly updates suggested.

The tone was generally upbeat, as the PM was echoing the positive sentiments of the developers, who were after all the best placed to say how the project was going.

 

But the shape of the Gantt chart kept changing. All the short lines on the left representing the first tasks to be accomplished kept getting longer, so that their projected completion date was slightly in the future.

 

While all the long lines on the right representing the final tasks kept getting shorter, to fit them all between the ongoing first tasks and the project end date.

The project remained officially on track, but none of the milestones were being hit—each week they were just being pushed back and back, closer to the final project completion deadline.

 

Needless to say, this could only go on for so long. A few weeks before the hard deadline, the CEO stepped in and demanded to see a demo of the software. The whole team assembled around a screen, and the project manager opened up the application—and nothing happened.

 

The CEO, bewildered, asked why he was being shown an application that was so buggy none of the functionality appeared. The indignant reply from the developers was that, on the contrary, the software wasn’t buggy at all. The features weren’t appearing simply because they hadn’t been built yet.


It became pretty clear at that point that the project was massively, horrendously, unrescuably behind schedule, and a committee of middle managers (including me) was assembled to find out what on earth had gone wrong. The poor project manager was hauled in front of us and subjected to a grilling. His explanation was pretty simple.

 

Not knowing much about software development himself, he trusted his developers to give him estimates of how long things would take. The first task, they had originally said, would take a week, so he’d allocated a week for it in his project plan. At the end of the first week, he had asked if it was done.

 

The developers had replied no, it wasn’t technically finished, because it had turned out to be more complicated than expected; but the good news was that they now understood the system a lot better, so once they finished this task, the next task would be much easier. So they were still confident they’d hit the final deadline.

 

The project manager, trusting them, reported back their opinions in his weekly report. A very similar conversation was had the following week, and a similar adjustment was made.

 

At this point in the interview, the poor project manager nearly broke down, as he explained that every single task that had been completed (and there weren’t many) had taken at least three times longer than the developers had originally estimated.

 

How could he possibly bring in a project on time, he protested, if the bloody software devs didn’t have a clue what they were doing?

 

Coding Challenge 9:

A known issue


The horror that this particular manager was experiencing is familiar, to a greater or lesser extent, to most people who’ve had to plan software projects. There is nothing so optimistic and unreliable as a developer’s estimate.

 

And the optimism is often so pervasive that it leads developers to absurd assertions simply to allow them to concede neither that (a) they were wrong in an earlier estimate nor (b) that therefore the project is now behind schedule.

 

This curious psychological bias is so ubiquitous that some software-focused project management tools actually have features built in to compensate for it. Fog Creek Software’s FogBugz tool has a feature called Evidence-Based Scheduling.

 

This automatically records the average discrepancy between each individual developer’s estimates for tasks and the amount of time those tasks actually took and uses it to generate individualized “multipliers” for each developer.

 

For any given project, it then tracks which developer estimated the length of the task, and applies their individual multiplier to get the “evidence-based” estimate, and predicts the length of the project as a whole based on these evidence-based estimates.
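The underlying idea can be sketched in a few lines of TypeScript. To be clear, this is a deliberate simplification of the concept rather than FogBugz's actual algorithm, which works with the whole history of estimate-versus-actual ratios rather than a single average:

```ts
// Simplified sketch of the "evidence-based" idea: scale each developer's new
// estimates by how far off their past estimates have been, on average.
// (The real feature is more sophisticated than a single multiplier.)

interface CompletedTask {
  estimatedHours: number;
  actualHours: number;
}

// Average ratio of actual time to estimated time for one developer.
function multiplier(history: CompletedTask[]): number {
  if (history.length === 0) return 1; // no evidence yet, trust the estimate
  const ratios = history.map((t) => t.actualHours / t.estimatedHours);
  return ratios.reduce((sum, r) => sum + r, 0) / ratios.length;
}

// Apply the developer's personal multiplier to a fresh estimate.
function evidenceBasedEstimate(
  rawEstimateHours: number,
  history: CompletedTask[]
): number {
  return rawEstimateHours * multiplier(history);
}

// Example: a developer who historically takes ~2.5x their estimates.
const history: CompletedTask[] = [
  { estimatedHours: 4, actualHours: 10 },
  { estimatedHours: 8, actualHours: 20 },
  { estimatedHours: 2, actualHours: 5 },
];
console.log(evidenceBasedEstimate(16, history)); // 40 hours, not 16
```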

 

The fact that the company that makes this tool went to the trouble of building this feature for their project manager customers (which is entirely premised on the assumption that software engineers can’t be trusted to reliably estimate how long it will take them to do their job) tells us an awful lot about how much faith the industry as a whole has in developers’ powers of prediction.

 

One very obvious point to get out of the way immediately is that it’s not at all unreasonable to expect software developers to be fairly good at estimating time. On the one hand, the subject matter is something that only software developers have a hope of putting time values on.

 

As mentioned earlier, there’s potentially a huge disparity between the complexity of the functionality to be built and the complexity of the code that needs to be written, and non-developers can’t be expected to guess at the latter sort of complexity, which is the driving factor in the amount of coding time required. So if anyone can do it, it’s software developers.

 

And on the other hand, coders get plenty of practice in giving estimates. In almost every development team that is doing active development (as opposed to simply “maintaining” a code base—more on this in later blogs), every task gets estimated before it is undertaken.

 

Developers spend a good proportion of their lives making estimates, and in theory, they are better equipped than anyone else to be accurate. Why, then, does everything take so much longer than it’s supposed to?


Well, if you ask a developer they’ll be fairly likely to blame their manager. In the 2016 Stack Overflow survey, 35% of developers listed “unrealistic expectations” as a major challenge. In other words, it’s not that things take longer than expected, it’s that they take longer than wanted, which is a separate thing entirely.

 

Now, in some circumstances, this is a fair criticism, but it is at the same time fairly irrelevant. In cases where project plans are being drawn up without consultation with developers, the projects won’t go according to plan; but this has nothing to do with estimation.

 

However, developers normally are consulted about how long development tasks will take (because most project managers aren’t entirely insane), and project plans are drawn up based on what developers say; and what’s interesting is that in these situations developers often still complain about unrealistic expectations.

 

This might seem a little hypocritical since the developers are the source of the expectations in the first place. But the common complaint is about what the jargonists refer to as “contingency”, and this gives us our first clue when it comes to understanding the estimation problem.

 

Suppose first thing on Monday you come to me and ask me how long it’ll take to build you a website, and I say 5 days.

If I can get started straight away, you might reasonably suppose that the website will be finished by the end of Friday, and depending on how confident I sounded you might make plans based around the website being live for the weekend.

 

If I sounded very confident you might think it entirely reasonable to fully rely on me hitting my Friday deadline.

 

I as a developer, on the other hand, might be horrified to learn that my estimate of Friday has been turned into a hard deadline. There’s a very clear distinction in my mind between me finishing building the website and the website being finished.

 

There are a whole host of additional time-consuming factors to consider. What about an opportunity for user feedback? What about time for testing and QA? What about time for deployment? And what, crucially, about time for contingency?

 

Let’s look at each of these in turn. “User feedback” means, “that moment when you realize that what you asked me for isn’t what you wanted.” In other words, I’m anticipating that this project will experience the Imagination Problem. “Testing and QA” means, “time spent discovering the mistakes I made when building the site.”


Software developers learn from experience that it’s impossible to build software without making mistakes—typos, logical errors, etc.—and that as much as we’d all like to notice and fix those mistakes as we go along, in real life, there are always some that are discovered after we think we’re finished.

 

My third complaint was about “time for deployment.” Broadly that means, “putting all the code I wrote onto a server,” which is a tiny bit time-consuming anyway and can also uncover more mistakes that I made. Again, note that I didn’t build deployment time into my initial estimate.

 

Coding Challenge 10:

Contingency


Finally, I complained about contingency. Broadly what I meant was, “something unexpectedly taking longer than predicted.” Now, this might surprise you, the client, because I, who ought to know what I’m talking about, said very confidently that it would take a week.

 

But now I’m telling you off for only giving me a week because I might need extra time for things that I can’t really specify. You didn’t build in contingency time because I sounded so confident.

 

But the truth is this: I was very confident, not that building the website would take a week, but rather that building the website would take a week if nothing unexpectedly took longer than predicted.

 

I, as a developer, fully expect something to unexpectedly take longer than predicted. It’s just that I don’t know which thing will take longer than predicted.

 For now, a server is “a computer that’s connected to the Internet that other computers connect to when they want to look at a particular website.”

 

Into this example I’ve built every possible manifestation of a peculiar phenomenon: the situation where a developer’s estimate diverges from how long the developer actually thinks something might take.

Not every developer leaves any of these things out of their estimate, and very few developers leave them all out.

 

Nevertheless, in my experience, one or more of the above factors is surprisingly common in any estimation process, and it goes some way to explaining how management expectations can be based on developer estimates and yet still feel unrealistic to the developers who made the estimates.
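To make that divergence concrete, here is a back-of-the-envelope TypeScript sketch; the percentages are invented purely for illustration, not a recommended formula:

```ts
// Illustrative only: the overheads below are made-up figures.
// The point is that "time to write the code" and "time until the thing is
// actually finished" are two different numbers.

function deliveryEstimate(codingDays: number): number {
  const feedbackRework = codingDays * 0.3; // "what you asked for isn't what you wanted"
  const testingAndQa = codingDays * 0.3;   // finding the mistakes I made
  const deployment = 0.5;                  // getting the code onto a server
  const subtotal = codingDays + feedbackRework + testingAndQa + deployment;
  const contingency = subtotal * 0.25;     // something, somewhere, will run long
  return subtotal + contingency;
}

console.log(deliveryEstimate(5)); // ~10.6 days, not 5
```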

 

The two main types of flaws in estimates can broadly be categorized as not taking into account the uninteresting, and not taking into account the unknown, and I’ll look at them more deeply in turn.

 

Coding Challenge 11:

The uninteresting


Software tasks are normally described in non-technical terms as distinct, unitary pieces of work. “Build a new web page that allows users to buy more credit.” “Add a button that sends a report to an administrator.”

 

And on the technical side, there are normally only one or two large chunks of work involved in completing a task, and it’s these that catch developers’ imaginations.

 

These are the intellectual challenges that the developer must solve, the opportunities to apply a particular technique or use a particular code library. How will the credit purchase page interact with the payment provider to charge users’ credit cards the appropriate amount?

 

How will the relevant data be collected, aggregated, and formatted to allow it to be sent to the administrator? These are the things the mind focuses on when trying to estimate how long a task will take.

 

The difficulty is that software tasks also normally include a whole host of smaller supporting chunks of work that need to be completed for the task to count as finished. That web page needs to be accessible to all and only users who are logged in. When payment is taken, credit has to be applied to the right user’s account.

 

If payment is unsuccessful, a message needs to be shown to the user explaining what has gone wrong. The report button? It needs to be “styled” so that it looks like the other buttons in the software, etc.

 

Even when these things are spelled out explicitly in the spec, or when they are clearly enough implied that the developer would never omit them, somehow because they are secondary to the real meat of the task, it’s very easy for them to slip out of mind when trying to imagine the amount of work remaining to be done.

 

At one company I worked for, the name for this was “80% syndrome,” which was that very common tendency to think of a task as 80% done when in fact it was only about halfway there, simply because the second half of the task is mostly made up of the easy-to-ignore little fiddly bits.

 

Coding Challenge 12:

The Challenge of Competing with Your Customer

Two decades ago, businesses needing a way to persist data were most likely to turn to one of the major relational database suppliers: IBM, Microsoft, or Oracle. But as users’ demands for innovation grew, vendors were unable to keep pace. 

 

 

Whether driven by an inability of vendors to meet their needs, the high cost of the proprietary software, or both, the end result was that users with the means to create their own databases did so. From Google’s Dremel, Pregel, and Spanner to Facebook’s Cassandra, Hive, and Presto, an entirely new ecosystem of software was created by businesses that do not sell software.

 

Nor was this roll-your-own trend limited to databases; Git, the distributed version control system behind GitHub, was written by Linus Torvalds to replace a commercial alternative. Rails, the popular Ruby-based software framework available on PaaS platforms like Heroku, was originally extracted from an existing SaaS product from 37Signals called Basecamp.

 

Even companies like GE are helping to fund noncommercial software, having contributed $105 million to Pivotal, the home of projects like Cloud Foundry. The first version of the OpenStack project, meanwhile, was created using code from both NASA and Rackspace.

 

Not every business has the resources or capability to build software that would compare favorably to commercial alternatives, of course. But fortunately for less technically capable entities, a great deal of the software written to solve problems within an organization is subsequently released as open source software.

 

With the exceptions of Dremel, Pregel, and Spanner—about which there are papers documenting the technology and approach—every example mentioned was released as open source software. When users are able to help other users solve technical problems with software, the attendant commercial opportunity for vendors becomes more problematic.

 

It means either competing with free, which as discussed is enormously difficult, or attempting to monetize free by commercializing open source software, which is possible but growing more difficult.

 

All of which helps explain why, as we’ll see later, many commercial organizations are aggressively diversifying their revenue streams by expanding from software into services.

 

Coding Challenge 13:

The unknown


The other common cause of over-optimism comes from the way in which developers imagine the problems that they need to solve. Typically when estimating a task a developer will think about how they intend to solve the problems inherent in the task, then imagine what their solution will look like and think about how long each part of the solution will take to write.

 

The problem with this is that in actual fact, working out the best way to solve the problem is a fair chunk of the process of solving it, and the solution that comes to mind after a moment’s reflection is likely to differ from the solution ultimately chosen.

 

The reason for this is that for any problem in software development, there are normally huge numbers of possible solutions. We have already discussed how the choice of tools and building materials (if we persist in trying to make the construction analogy work), in the form of languages, frameworks, and libraries, is vast.

 

But even when the tools and materials are chosen, the actual construction process is nothing like, for example, building a brick wall. It’s more like writing an essay.

 

Ask two developers to complete the same task and their code might be mutually unrecognizable, in the same way that two students given the same assignment might turn in two completely dissimilar pieces of work, even though both fulfill the requirements of the assignment.

 

The reason for this is not simply a case of personal style. Variations in approach can also be evaluated less subjectively with respect to how well they cope with “edge cases”, how easy they will be to add to or adjust in future, and how easy they are for others to understand.

 

It may also turn out that a particular approach, although seeming to score well on all of the above points, must actually be rejected because it renders one or more features of the original requirements actually impossible to fulfill.


These flawed solutions are of particular relevance in the estimation process, because if it’s one of these that the developer has in mind when estimating (not having spotted the flaw in advance), then not only will they waste time working on that flawed solution.

 

Worse, when the flaw is discovered and a new approach is needed, the additional time taken to adopt the new approach may bear little resemblance to the time estimated, because the developer must go about it in a completely different way.

 

The more sophisticated developer will try to take this level of the unknown into account when making estimates, avoiding assumptions about what solution will be the correct one.

 

But when they do so they leave themselves with very little to fall back on to help them establish the time taken. They can only provide a gut feel about how much a non-specific solution might be expected to take.

 

What’s surprising (or at least, it surprises me) about all this is that there are always so many different, equally valid options of building software, given that most software does basically the same thing.

 

Broadly, software presents one user a chance to put some information into it (be it a form on the company’s holiday request system, the “new email” window on your mail client, or the delivery address screen in your shopping website’s purchase pages), and then it checks that information,

 

normally stores it somewhere, shows the user something relevant, and then at some point later shows a different user what the first user put in, or possibly some aggregate of what multiple users have put in.

 

Someone puts information in, someone gets information out. Given that, it does seem a little bit absurd that there hasn’t been some level of standardization, both of the tools available and of the techniques used.
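That common skeleton really is as thin as the following deliberately generic TypeScript sketch suggests (the names and in-memory store are placeholders, not a real design): one user puts information in, it gets checked and stored, and another user later gets it back out.

```ts
// A deliberately generic sketch of what most business software does:
// accept input, validate it, store it, and show it to someone else later.

interface Submission {
  author: string;
  content: string;
  createdAt: Date;
}

const store: Submission[] = []; // placeholder for a real database

function validate(content: string): string | null {
  if (content.trim().length === 0) return "Content must not be empty";
  if (content.length > 500) return "Content is too long";
  return null; // no error
}

// User A puts information in.
function submit(author: string, content: string): string | null {
  const error = validate(content);
  if (error) return error;
  store.push({ author, content, createdAt: new Date() });
  return null;
}

// User B (or an aggregate view) gets information out later.
function listSubmissions(): Submission[] {
  return [...store].sort(
    (a, b) => b.createdAt.getTime() - a.createdAt.getTime()
  );
}

submit("alice", "Holiday request: 3 days in July");
console.log(listSubmissions());
```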

 

It seems to me that there should be a good analogy with walls. Lots of people want to put up walls in a variety of different places, for a variety of reasons, but walls all do basically the same things: they keep people out, they keep warmth in, they provide privacy, and they support things built above ground level, like roofs.

 

Software feels like it’s stuck in the sticks-and-mud phase, when we would patently all be better off if we could have some bricks to work with. Why aren’t there software bricks?

 

One view (read: excuse) I’ve often heard aired is the “software is a young discipline” theory. It’s unfair, goes the argument, to compare building software to other forms of engineering because we haven’t been doing it very long.

 

It takes time for consistent processes to emerge, for practices to standardize. At the moment we’re in a phase of semi-blind experimentation, and that’s just how it goes.

 

Which would be quite convincing, if it weren’t also such utter rubbish. For one thing, software isn’t a very young discipline at all. 

 

New paradigms in programming appear with alarming regularity, and the rate of change of technology is, by all accounts, increasing. This does not feel like the transition towards a new, “mature” state.


Rather, it seems to me (beware, I feel another theory coming on) that software development is in a state of change because software is tied to the cutting edge of technologies that are continually redefining what we can expect from them, and therefore changing what we want from them.

 

Our basic expectations of a wall have remained the same for several thousand years—if I wanted to build a new wall today, and found that I had somehow overlooked the presence of a seventeenth century wall exactly where I wanted my new wall to go, there’s a decent chance that old wall would serve my new need.

 

But now suppose I work for a company that needs a system for processing employee expenses, and I’ve just discovered that the company has some old expense processing software.

 

It does broadly the same thing—one user puts in information about expenses, and another looks at that information and approves it, leading to the accounts department being notified about reimbursement. Could I just reuse it?

 

The answer is probably yes, so long as the old software isn’t so old as to be obsolete. But how old can it be before it becomes obsolete? Well, assuming we’re in 2017 now, the old software can’t be from the 1970s, because I don’t want to have to input information on punch cards.

 

Even if the system is only 5 years old then integration with other systems will be a pain, maintenance will be harder because it will require vanishingly rare tools, and it’ll probably look pretty dated.

 

The rate of change of software is absolutely breathtaking and will continue to be so for as long as humanity continues to use the computer (using the term “computer” broadly) to redefine and reinvent its world, which I would suggest may well be forever. 

 

New possibilities will lead to new requirements, resulting in new languages, tools, and techniques, all of which means that even though software developers continue to solve broadly the same problems,

 

every new attempt at a solution involves some element of the unknown, because there is not, and never will be, a single, stable, universally understood and easily estimable way of solving a particular problem.

 

Every time it’s solved it’ll be solved in a slightly different context, and that context is the killer when it comes to accurate estimation.

 

Coding Challenge 14:

Refusing to play the game


What to do, then? The most endearingly petulant solution to the Estimation Problem is a movement that has sprung up over recent years around the hashtag #noestimates.

 

The premises of this movement are that (a) accurate estimation is impossible, (b) estimates are often a means used by managers to impose unrealistic deadlines on developers, and (c) time put into coming up with estimates is time that could be spent on development instead.

 

Therefore, say the #noestimates crowd, the mistake is asking for estimates in the first place. Businesses should be weaned off this childish and unhelpful dependency on these made-up-numbers that have no real meaning or value.

 

Perhaps it will not surprise you to learn that this movement is far more popular among developers than managers.

The theory has gained a surprising amount of traction, so it’s worth taking a little look at its premises: whether, and in what circumstances, it makes sense, and if it doesn’t, how best to respond to its proponents.


Regarding the idea that accurate estimation is impossible, one argument offered is that software development is actually like scientific research. In the same way that it would be absurd to ask a scientist how long it will take to prove the existence of dark matter, the argument goes, so too is it absurd to ask developers to estimate how long their work will take. To which the response is surely, “Come now, don’t take yourself so seriously.”

 

Yes, there are lots of unknowns in software development. No, it’s nothing like the level of the unknown of pure scientific research. Anyone who has ever had to pay a builder more than they originally quoted because something unexpected happened part way through the build knows that for any job that is estimated there is always some level of the unknown.

 

Depending on how much unknown stuff there is, estimation can be easier or harder. Scientific research is at one extreme. Just because software development is on the spectrum, it doesn’t mean it’s at the extreme too. Estimating software tasks is really, really hard, but to dismiss it as impossible is, frankly, a bit churlish.

 

Turning to the idea that somehow requiring estimates is part of a management conspiracy to put pressure on developers, I can only repeat the point I made above, that managers’ deadlines are built on developers’ estimates.

 

With my coder hat on, if we developers feel that our deadlines are unrealistic, it’s because we have failed to provide appropriate estimates—we have failed to accommodate an appropriate level of unknownness into the numbers we have provided, and we only have ourselves to blame.

 

Finally, I will concede that the idea that time spent estimating is time wasted does actually make sense, so long as you have absolutely no understanding of nor interest in how a business works.

 

Developers who describe an estimate-free business tend to suggest that product development should be a process of incrementally improving something, making it better and better as quickly as possible, and at each stage taking decisions based on what the product is rather than guesses about what the product might be at various points in the future.

 

Which is very sweet, but I would like to present to you a couple of short scenarios, both drawn from personal experience, which the #noestimates gang completely fails to take into account.

 

Scenario one:


The start-up runway. Your company is going to run out of money in October, which means you need to start pitching to investors in June to have a hope of surviving.

 

It’s currently April. You could either devote your energies to rebuilding the UI to make it more attractive, or you could try to make that whole new dashboard you’ve been talking about.

 

The latter would be a coup, but it will only be valuable if at the demo it meets a certain minimum specification; otherwise, the investors won’t be interested. You have to decide whether to build the dashboard or just to rebuild the UI. The key question: if you build the dashboard, will you have it to the minimum useful spec by June?

 

Scenario two:


The quote. You want a new mini-site built to publicize your company’s big new initiative, but your budget is limited and there is a maximum amount you can spend on it.

 

You’ve asked a development agency you trust (who bill by the day) how much it’ll cost to get it done, so you know whether to greenlight it or whether to can the whole idea and spend the money elsewhere.

 

Coding Challenge 15:

Estimates are graphs, not points


Assuming, then, that you find, like me, that simply not using any estimates at all isn’t possible, you’re going to need to have a way of working with them, despite their being pretty consistently unreliable.

One thing you may find helpful is to stop thinking of an estimate as a duration in time and start thinking of it as a probability distribution curve.

 

That is to say, because of the level of unknownness, when a developer says “5 days,” even when they’ve taken the uninteresting stuff into account, it’s best to understand that as meaning that:

  • It’s reasonably likely the task will take 5 days to complete.
  • It’s also somewhat likely that the task will take 7 days to complete.
  • It’s not going to be that surprising if the task takes 10 days to complete.
  • You can be pretty confident that the task will not take 25 days to complete.

 

This also goes the other way:

  • It’s perfectly possible that the task will take only 4 days to complete.
  • There’s an outside chance it might only take 3 days to complete.
  • With the best will in the world, there’s no way it’ll only take 1 day to complete.

 

If you were to plot a graph of likeliness vs. task duration, the high point of the graph would be at the 5 days mark. But that doesn’t mean it’s safe to assume that the task will take 5 days for the purpose of planning.

 

Quite the opposite: it’s painfully clear from the history of software development that 5 days is a very unsafe assumption to make, thanks to all that stuff to the right of the 5-day point on the graph.

 

There’s a very decent chance the task will take longer than the number the developer gave, and therefore that an assumption of 5 days would cause problems.

 

Now you might think that actually, with enough tasks, things ought to sort of balance out. If half the tasks take longer than estimated, but half take less time, then in the long run surely they’ll balance out and the project as a whole will be roughly on track, right? Sadly, though, real life doesn’t behave like that.

 

First of all, entrusting one’s professional success to the law of large numbers is arguably rash. Second of all, in real life, the issue we talked about earlier, where developers forget the uninteresting stuff, introduces a skew: tasks are more likely to be wrong on the long side than the short.

 

And thirdly, remember that the graph will be asymmetrical: a task estimated to take 5 days could absolutely take 10 more days than expected (i.e., 15 days total) to complete, whereas it couldn’t possibly take 10 fewer days (that would mean finishing 5 days before it started).

 

The high point of the graph might be at 5 days, but most of its area will be to the right of that. The developer is telling you the mode, but you’re concerned about the mean, which is a larger figure.

 

The trick is therefore to make estimates that take into account a big enough chunk of the probability distribution to make you feel comfortable.

 

As a general heuristic, when developers give me estimates that seem to adequately account for the uninteresting, I tend to account for the unknown by doubling those estimates in order to come up with a completion date. That has served me fairly well so far.
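If it helps to see this concretely, here’s a tiny simulation (Python; the lognormal shape and the sigma value are my own illustrative assumptions, since nothing about a real estimate tells you the exact curve) that treats a “5 day” estimate as the mode of a right-skewed distribution and reports the mean, plus the chances of finishing within the raw estimate and within the doubled one:

import math
import random
import statistics

random.seed(42)

SIGMA = 0.6                        # assumed spread of the "unknownness" (pure illustration)
MODE = 5.0                         # the developer's estimate: the single most likely duration
MU = math.log(MODE) + SIGMA ** 2   # places the mode of a lognormal at 5 days

durations = [random.lognormvariate(MU, SIGMA) for _ in range(100_000)]

print(f"mode (the number the developer gave): {MODE:.0f} days")
print(f"mean (what the schedule actually absorbs): {statistics.mean(durations):.1f} days")
print(f"chance of finishing within 5 days:  {sum(d <= 5 for d in durations) / len(durations):.0%}")
print(f"chance of finishing within 10 days: {sum(d <= 10 for d in durations) / len(durations):.0%}")

On those made-up numbers the mean lands well above 5 days, and the doubled figure covers most, though not all, of the distribution, which is roughly why the doubling heuristic works as well as it does.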

 

The downside of doing this sort of doubling is that you either have to conceal your schedules from your developers, or you have to essentially say to them, “I think everything will take twice as long as you say it will.” Sometimes developers respond well to this—they appreciate that you are giving them that contingency buffer they’ve always wanted.

 

But sometimes they take offense at your cynical attitude towards their estimates. Or worse, they can take the perceived “extra time” available as an opportunity to do a whole bunch of extra things that no one really needed but they wanted to do anyway, which together serve to push the whole project back behind schedule again.

 

Coding Challenge 17:

Empiricism


To get developers on board, it can be helpful to take a more empirical approach, and one such route is story points. Story points are a staple of the Agile process; broadly, they are a way of letting developers provide estimates such that their tendency towards over-optimism can be adjusted for transparently and without hurting anyone’s feelings.

 

The way it works varies from company to company, but the broad gist is this: when you have a chunk of work that needs doing, break it down into tasks, and get the developers to estimate them, but instead of assigning a number of days, ask them to assign a number of points.

 

The first time around, equate a number of points to duration, so that, for example, 1 point means a couple of hours, 2 points means half a day, 3 points means a day and 5 points means 2 days.

 

When the work is done, divide the total number of story points by the amount of developer-days taken (i.e., the sum of the number of days that each developer worked on the project) to get your “velocity,” which is a measure of the number of points that the team can complete per day.

 

So, for example, suppose I have a project that involves 3 tasks. One is small and should take only half a day, so it’s estimated as being worth 2 points. One is about a day’s worth of work so is given 3 points, and one is even bigger, so is given 5 points.

 

It takes a week to get it finished, during which one of the two developers puts in 2 days on the project and 3 days on other things, and one developer is on vacation for 2 days, so puts in 3 days of work.

 

This means that the 10 story points were completed in 5 developer-days, meaning that the velocity of the team is 2 story points per developer day.
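For anyone who likes to see the arithmetic written down, here’s that example as a quick snippet (Python; the task names are just labels I’ve made up):

# The worked example above: 2 + 3 + 5 = 10 story points, completed in
# 2 + 3 = 5 developer-days.
tasks = {"small task": 2, "medium task": 3, "big task": 5}   # story points assigned
developer_days = 2 + 3        # one developer put in 2 days, the other 3

total_points = sum(tasks.values())
velocity = total_points / developer_days
print(f"{total_points} points in {developer_days} developer-days "
      f"-> velocity of {velocity} points per developer-day")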

 

Now, here comes the clever bit. The next time there’s another chunk of work to be done, you ask the developers to assign story points based on how each task compares in size to the tasks as they were estimated the last time around. If a task feels like it’s of a similar size to that small task from the last time around, give it 2 points.

 

If it feels more like the slightly larger task, give it 3, and so on. Based on the total number of points assigned, and the velocity you established earlier, you can estimate the number of developer-days it’ll take to complete the chunk of work.

 

When the chunk of work is done, you can revise your velocity based on the actual number of developer-days it took, and use that revised velocity the next time around.
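The forecasting and revision steps, continuing the same sketch (the point values and the actual duration for the new chunk of work are hypothetical):

# Forecast a new chunk of work from the velocity established last time,
# then revise the velocity once the work is actually done.
velocity = 2.0                     # points per developer-day, from the previous chunk

new_chunk_points = 2 + 3 + 3 + 5   # hypothetical point estimates for the new tasks
forecast_days = new_chunk_points / velocity
print(f"forecast: {forecast_days:.1f} developer-days")

actual_developer_days = 8          # hypothetical: what the chunk really took
revised_velocity = new_chunk_points / actual_developer_days
print(f"revised velocity: {revised_velocity:.2f} points per developer-day")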

 

The good thing about this approach is that it gets more accurate the longer you do it, because developer estimates, when converted to story points, do broadly correlate with the length of time the tasks will take, so long as you can find out how much to scale the estimates up by—which is what your velocity does, and the velocity gets more and more accurate as time goes by.

 

Equally, by stopping your developers from estimating amounts of time, you can compensate for their built-in optimism without having to contradict them.

If your team’s velocity turns out to be 1 point per developer-day, and a developer says that a task is worth 1 point, you can assert without hurting anyone’s feelings that the task will take a day to complete, because the evidence justifying that assertion is plain to see.

 

Whereas if you asked the developer how long the task would take, they might well say “a couple of hours”—after all, that’s what “1 point” originally meant—at which point even though past evidence suggests it’d take a day, if you actually said so you’d be contradicting the developer, and tensions might arise.

 

Essentially, story points capture what developers are good at estimating, which is the relative size of a given task, while leaving out the thing they’re bad at providing, which is the absolute duration, instead deriving that from past performance.

 

The downside of the story points system, of course, is that it relies on accumulating a bunch of data for any sort of accuracy to kick in. Which is great if your process is iterative and long-running.

 

But it’s less than helpful when a new team is assembled at the start of a big new project, and you are asked to commit to some timescales before you’ve had the luxury of doing some work to calibrate the estimates of the developers.

 

At that point, the best advice I can offer is to do my doubling trick. (If the implications of slipping behind schedule are really serious, see if you can get away with tripling the estimates you get. Not kidding.)

 

Regardless of how you interpret developer estimates, what I hope I’ve made clear is that such estimates require interpretation, and should seldom be taken at face value.

 

There’s a ramification to this, and it’s not a very nice one: you’re going to need to make sure that people who make decisions about timelines, but who haven’t been taught how to interpret developer estimates, don’t have too much contact with developers without an interpreter present.

 

If you’re a project manager and you have some developers working for you, be wary of the CEO dropping by to check how things are going when you don’t happen to be around.

 

You may know that when your database specialist says “there’s about a week’s worth of work left,” that means things are looking good for launch in a month’s time, but the big boss probably doesn’t.

 

Hopefully your developers know not to make promises about timelines in your absence, and hopefully your boss knows that you’re the only one to trust when it comes to reporting on status (and no one wants to put up barriers to communication in the workplace); but take it from me, this one can bite you, hard. You have been warned.

 

Coding Challenge 18:

The Arithmetic Problem


Don’t worry; I’m almost out of nasty surprises about software project management. But there’s one more big one that we’re going to have to cover to get a complete picture of all the nastiness that lies in store for the hapless technical manager.

 

I’m calling this one the Arithmetic Problem, but its essence is described most famously by Frederick Brooks in a formulation known as Brooks’s Law, which he articulated in one of the truly great books about managing software development, his 1975 classic, The Mythical Man-Month.

 

In the last section, we used the term “developer days,” which is just a different unit for measuring the same basic thing as a “man-month” (and a more popular one these days, since it preserves the pleasing alliteration whilst avoiding the slightly uncomfortable and often-inaccurate gender-specificity).

 

It’s a measure of how long something will take that varies in absolute time depending on the number of developers available. A team of 3 developers working for 5 days on a problem spend 15 developer days on it, and so on.

 

Where programming tasks are completely unrelated, they can be performed in parallel by two people twice as quickly as one person doing them one by one. But the moment the tasks are in any way related, the gain of using more people starts to decrease.

 

The conceptual complexity of interacting software components necessitates a clear understanding of the system, and when two people are working on a system, each needs to understand what the other is working on.

 

Communication is tricky and slow (because what one is communicating is closely linked to human processes, and as we have discussed, we’re not very good at imagining and describing processes), and so as more people get added to a project, more time needs to be allocated to communicating ideas and, sadly, to clearing up miscommunications.
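Brooks captures the underlying arithmetic with his intercommunication formula: the number of pairwise channels in a team of n people is n(n - 1)/2, so channels grow far faster than headcount. The toy calculation below makes the point; the 2% overhead per channel is a figure I’ve invented purely for illustration:

# Toy model: raw capacity grows linearly with team size, but the number of
# pairwise communication channels grows as n * (n - 1) / 2. The overhead
# figure is invented for illustration, not measured.
OVERHEAD_PER_CHANNEL = 0.02   # fraction of each person's day spent per colleague

for n in (1, 2, 5, 10, 20):
    channels = n * (n - 1) // 2
    effective_capacity = n * max(0.0, 1 - OVERHEAD_PER_CHANNEL * (n - 1))
    print(f"{n:2d} developers: {channels:3d} channels, "
          f"~{effective_capacity:4.1f} developer-days of real work per day")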

 

So when planning a project, where your estimates for tasks are normally based on a developer imagining doing them one by one, it can be hard to use that information to predict how long it will take a team of developers to complete them.

 


 Brooks’s Law, established through Brooks’s painful firsthand experience and corroborated by the similar experiences of hundreds of other project managers, is this: “adding manpower to a late software project makes it later.”

 

The primary reason for this is that software projects involve not just the developers building all the component pieces of a piece of software, but also the developers building up clear and coherent mental models of how the software works.

 

As the thing gets more complex they need these mental models to help them navigate the code base and not accidentally break one thing by fixing another. Developers brought on to a project part way through don’t have these mental models and therefore take a long time to get up to speed.

 

It takes quite a lot of help from other developers (in the form of direct conversation, code review, and fixing what the new developers break) to get the new developers to the point of full productivity, all of which help requires a lot of time from the original developers. Things slow down, at least in the short term, when teams grow in the middle of a project, and the slow-down can be dramatic.

 

If arithmetic with developer days is hard before a project starts, it becomes almost completely meaningless once the project is up and running.

 

Coding Challenge 19:

Crunchy numbers


At this point you might be inclined to say to me: John, just because your experience of software development has been a history of disasters, it doesn’t follow that this is a universal phenomenon.

 

Is it not more likely that you’re simply an incompetent programmer and a horrible project manager? The answer is that, yes, I am probably both those things. However, I’m not simply extrapolating from my personal experiences. The dataset I am working with is considerably larger.

 

Every year the Standish Group releases something called the CHAOS Report, a survey and analysis of software project success rates. The 2015 report analyzed 50,000 projects around the world.

 

It has the project success rate, where success is defined as delivery on time, on budget, and with a satisfactory result, pegged at 29%. 2015 was not a remarkable year—that figure had remained stable, within +/-2%, for the preceding four years.

 

And that’s a comparatively positive figure. Steve bros released data in 2014 suggesting up to 80% of new product development projects are failures. And let’s be clear: when I say failure, I don’t mean some trivial schedule slip.

 

According to McKinsey, the average project schedule overrun is 33%. That 33% is enough to cost a large company millions and send a small company under.

 

The smaller the project, the greater its chance of success, and compared to government- and multi-national scale projects, an awful lot of projects are on the small end of the scale.

 

But even so, the best you can hope for, going by the stats, is a slightly-better-than-50% chance of success so long as your project doesn’t get remotely large in scope. There’s no way those sorts of odds will let a project manager sleep soundly at night.

 

I’m going to argue that, apart from the normal factors that affect any project (poor communication, weak leadership, etc.), there are three big problems that are peculiar to software.

 

They are at the heart of the sad truth about software development and together make clear how building software is nothing like building a house. Understanding these should be the top priority of anyone who is entrusting their future professional success to a team of software developers.

 

 
Coding Challenge 20:

Extreme Changes


I once got suckered in by an extreme case of 80% syndrome. I was at a company whose sprawling, wide-ranging tech platform had been built over many years by a series of different agencies, who between them created a mismatched patchwork of not-very-well-integrated parts. One of the most not-very-well-integrated bits was a “Single Sign-On,” or SSO.

 

This is, broadly, a little website that you can visit and log into with a username and password; a whole range of other websites know how to talk to it, so you can be automatically logged into them without having to enter your password again.

 

In a large and sprawling system that’s spread across several websites, it’s potentially a helpful glue to stick all the bits together with.
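For anyone who hasn’t met one before, the general idea looks something like the sketch below. This is emphatically not the system in this story, and it’s a toy: real single sign-on uses standards such as SAML or OpenID Connect, with proper session handling and HTTPS. But a signed token passed between sites is the essence of it:

# Toy illustration of the single sign-on idea: the sign-on service issues a
# signed token after a successful login, and any other site that shares the
# secret can verify the token instead of asking for the password again.
# A sketch of the concept, not production-ready code.
import hashlib
import hmac
import time

SHARED_SECRET = b"not-a-real-secret"   # in reality, distributed securely to each site

def issue_token(username: str) -> str:
    """Called by the SSO site once the user has logged in with their password."""
    issued_at = str(int(time.time()))
    payload = f"{username}|{issued_at}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_token(token: str, max_age_seconds: int = 3600) -> str | None:
    """Called by any other site: returns the username if the token checks out."""
    username, issued_at, signature = token.rsplit("|", 2)
    payload = f"{username}|{issued_at}"
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None                                   # token was tampered with
    if time.time() - int(issued_at) > max_age_seconds:
        return None                                   # token too old, log in again
    return username

token = issue_token("alba@example.com")
print(verify_token(token))   # any site sharing the secret gets the username back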

 

However, our SSO was lacking most of the features we needed, built in a coding language that none of our developers were familiar with, and set up in a way that made it really surprisingly expensive to run.

 

Because it was missing some key features, only a few parts of our system actually used it—you still had to manually log into each of the other parts when you visited them. Integrating it with those other parts would be impossible in its current state.

 

There was a strong case for rebuilding it entirely, but we were a small team with a lot of deadlines, and there was always something more urgent to do. It was my job to prioritize the team’s workload, and SSO integration remained low on the list.

 

This wasn’t enough to put off Alba, a developer who had recently joined and was dismayed at how disjointed our system was. Being very intelligent, and having experience with relevant authentication mechanisms, she worked out a simple and elegant way of building a cheaper-to-run, more easily extendable SSO using the language the rest of the team were most familiar with.

 

As there was still no time available for her to build it during office hours, she decided to work on it on her own time. Christmas was coming up, so she used the week that the office was closed to get some code written. (I hope she took a break on the day itself).

 

When the team reconvened after the holiday break, Alba proudly announced that she had rewritten the SSO in three days. I was amazed.

 

“What? The whole thing? Is it ready to roll out?”


To which Alba replied (and this is crucial), “Basically, yep.”

 

This, of course, changed things. There hadn’t previously been a case for diverting resources to the SSO rewrite over other more urgent work.

But since the work was basically finished, giving Alba a couple of days to polish it and roll it out would be a big win, basically for free, and it’d set the tech team off to a great start for the year. So I immediately put Alba onto finishing the SSO.

 

I must confess, many of the developers were a little suspicious. The original SSO had been outsourced to an offshore agency who had taken a couple of months to complete it with a team of developers working on it, and that was to build something that was missing many of the features we needed.

 

It seemed unlikely that Alba could genuinely have written a functionally equivalent replacement of that in three days, much less that she could have incorporated all the extra new stuff that she was claiming.

 

One of the senior developers pointed as much out to me and refused to be brushed off by my repeated insistence that Alba had worked a Christmas miracle and we shouldn’t question it.

 

So after some prodding, I took a little look at the code Alba had written. What I found was a beautiful, elegant authentication mechanism, flawlessly architected and undoubtedly the sort of mechanism we needed.

And that mechanism was, indeed, basically complete. It was a testament to Alba’s indisputable technical expertise that she managed to put the whole thing together over 3 days.

 

But.


The mechanism Alba had created was written in a vacuum, with no consideration for how it could be swapped in for the old SSO, given that the parts of the system that already interacted with the old SSO expected it to work in a particular way that was entirely different from how the new one worked.

 

To replace the old SSO we would either have to adapt the new one to be backward-compatible or update all the things that interacted with it. And this was going to be a big chunk of work.

 

I had clearly been far too optimistic in my interpretation of Alba’s own optimism. Never mind. I had a more thorough chat with Alba about the various things we’d need to do to be able to swap in her new SSO, and she remained optimistic about them. “The work’s basically done, it’s just a case of wiring it up.”

 

Alba got on with the wiring up in January. Things took a little longer than expected, but by the end of the month, she said it was “very nearly” finished.

 

I ended up leaving that company in February (no, I wasn’t fired for my team failing to deliver the SSO, but you could argue that I should have been fired for failing to manage expectations appropriately), and when I left, the wiring up was not quite there, but “very, very nearly” finished.

 

In May I had lunch with another developer from the company to hear how things were getting on. By that point, I was told in an exasperated tone, the new SSO was “very, very, very nearly finished.” Not bad for something that was basically ready to roll out at the start of January.

 

This was an extreme case, and it was extreme by dint of the fact that for the task in question, the interesting bit—the elegant mechanism—comprised at best 5% of the total task, and the uninteresting wiring up of the new mechanism to the old bits of the system comprised the other 95%.

 

The developer’s head was focused exclusively on the 5%, and that meant that all estimates were made as though that 5% were the whole job. To my eternal chagrin, I didn’t notice until far too late that the interestingness factor was skewing the estimates.

 

Hopefully you’ll be less foolish than me, but even so, consider this: if developers can be so misguidedly optimistic as this when they’re already stuck into a task, think how much more wrong they can go while they’re estimating it at the very start of the process. Are you sure you know how to compensate for this sort of bias?

 
