# How Do You Measure Capacity?

Analyzing how product teams predict what they can accomplish in a given iteration, and discussing how to make these predictions more accurate.

Product companies of all sizes benefit from long-term planning. For small companies, product plans provide an anchor and a useful aid in conversations with clients about feature requests or product enhancements. For larger enterprises with many teams, planning becomes a crucial tool for orchestrating a complex workload across myriad teams. In both cases, planning exercises bring a product roadmap back to reality by keeping lofty product goals grounded in commitments that are actually achievable in a fixed iteration.

Underpinning these product planning conversations is the concept of *capacity*, and how it’s measured. Our ability as product leaders to plan iterations accurately depends on how accurately we can measure our team’s capacity. But how do we measure capacity? In practice, this is challenging to do well, and failing to do so can result in missed deadlines and, ultimately, missed opportunities.

In this article, we will go over some common methods of measuring team capacity in practice, and discuss how these measures can be made more accurate.

## What is capacity?

To answer the question “how do you measure capacity,” it’s helpful to first clarify the question “what is capacity?”. Simply put, a team’s *capacity* is the amount of work it can complete in a fixed amount of time. Intuitively, this definition makes sense but opens up a few questions of its own. In particular, one would naturally ask how we can quantify this metric.

Product teams typically estimate the engineering effort required to implement a given feature in *person-weeks*, where a feature requiring 1 person-week of effort occupies 1 engineer for a week. In an agile context, this is a useful unit for measuring effort because it plays nicely with typical planning and execution processes, such as weekly sprints. This concept of effort extends naturally to capacity: for the rest of this article, we will define a team’s capacity as the number of person-weeks of effort it can complete in a fixed iteration.

## A naive model

Given this definition of capacity, an obvious, but somewhat naive, method of measuring it arises. That is, we can say that for a given iteration, a team’s capacity is the number of people on the team multiplied by the number of weeks in the iteration:

`capacity = num_engineers * iteration_weeks`

Now, if this definition is making you feel uncomfortable, then your feelings are serving you well. There are a number of problems with measuring capacity in this way. First off, it is completely unrealistic. Quite explicitly, this metric assumes that each engineer on your team is achieving 100% utilization in every week of an iteration. This just isn’t feasible. Vacations, sick days, and organizational overhead such as meetings will all reduce an engineer’s capacity from this theoretical maximum. In fact, this capacity measurement is *only* useful as an upper bound for other capacity measures and has no practical use as a measure for planning exercises. Okay, then what should we use?

## A simple refinement

A refinement of this naive model that is commonly used for product planning is the introduction of a *utilization factor*. A utilization factor is a number between 0 and 1 that represents the average effective utilization of all engineers on the team for a given iteration. With this factor, our capacity formula becomes:

`capacity = utilization * num_engineers * iteration_weeks`

The addition of the utilization factor creates a much more flexible model wherein the problem of accurately measuring capacity is reduced to the problem of accurately measuring the team’s average effective utilization.
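To make this concrete, here is a minimal sketch of the two formulas side by side. The team size, iteration length, and utilization figure are hypothetical numbers chosen for illustration:

```python
def naive_capacity(num_engineers: int, iteration_weeks: int) -> int:
    """Theoretical upper bound: assumes 100% utilization for every engineer."""
    return num_engineers * iteration_weeks


def adjusted_capacity(utilization: float, num_engineers: int, iteration_weeks: int) -> float:
    """Capacity in person-weeks, discounted by an average utilization factor."""
    return utilization * num_engineers * iteration_weeks


# A hypothetical team of 5 engineers planning a 12-week iteration:
print(naive_capacity(5, 12))          # 60 person-weeks (theoretical maximum)
print(adjusted_capacity(0.7, 5, 12))  # about 42 person-weeks at 70% utilization
```

Note how everything hinges on the utilization figure: the gap between 60 and roughly 42 person-weeks is exactly the slack for meetings, vacations, and other overhead that the naive model ignores.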

## How can we make this better?

Adding a utilization factor is an improvement over the naive model, but it still leaves a lot to be desired. In practice, utilization factors are often just estimates, a product of intuition and guesswork. This is surprising given the industry-wide focus on data-driven development when building products. What if we took a data-driven approach to our capacity measures? It stands to reason that we could produce much more accurate measures, and if you ask me, the data to do this is already available.

In an agile context, it is common to break iterations into a number of sprints (typically one or two weeks long) and assign *story points* (an abstract measure of effort) to the individual work items in each sprint. In the long run, this becomes a gold mine of historical execution data. Using this data over the course of a few iterations, you can derive an average capacity per sprint in story points instead of person-weeks. If you then use story points to estimate effort in planning cycles, you will have a more accurate view of what is achievable in a given iteration.
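As a sketch of this approach, the calculation is straightforward once the historical data is in hand. The sprint velocities below are made-up numbers for illustration:

```python
# Story points completed in each of the last six sprints (hypothetical data):
past_sprint_velocities = [23, 18, 25, 21, 19, 24]

# Average story points the team actually completes per sprint:
avg_velocity = sum(past_sprint_velocities) / len(past_sprint_velocities)

# Projected capacity for an upcoming iteration made up of 6 sprints:
sprints_in_iteration = 6
projected_capacity = avg_velocity * sprints_in_iteration

print(round(avg_velocity, 1))    # average sprint velocity in story points
print(round(projected_capacity)) # projected iteration capacity in story points
```

Because the average is computed from delivered work rather than an assumed utilization factor, overhead like meetings and time off is already baked into the number.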

Indeed, this only begins to scratch the surface of what is possible. With advances in AI technology, a predictive model based on historical execution data could make these estimates even better. A model trained in this way would implicitly account for changes in effective utilization throughout the year that are hard to account for explicitly, e.g., holidays, summer vacations, or even fluctuations in team motivation throughout the seasons.
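As a toy illustration of this idea (a real predictive model would be far more sophisticated, and every number below is hypothetical), even a simple least-squares trend over past sprint velocities captures something a flat average misses: the team's recent trajectory.

```python
def fit_linear_trend(velocities):
    """Ordinary least-squares fit of sprint velocity against sprint index."""
    n = len(velocities)
    xs = list(range(n))
    x_mean = sum(xs) / n
    y_mean = sum(velocities) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, velocities)) / sum(
        (x - x_mean) ** 2 for x in xs
    )
    intercept = y_mean - slope * x_mean
    return slope, intercept


# Six past sprints with a mild upward trend in completed story points:
velocities = [20, 21, 23, 22, 24, 25]
slope, intercept = fit_linear_trend(velocities)

# Predicted velocity for the next sprint (index 6):
next_velocity = intercept + slope * len(velocities)
print(round(next_velocity, 1))  # slightly above the flat average of 22.5
```

A learned model generalizes this sketch: instead of a single linear trend, it could pick up seasonal dips and recoveries from the same historical record.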

At PlanEngine, it is our mission to help you create better plans in less time, so you can focus on executing on your vision and delighting your customers. When PlanEngine becomes available, we will offer out-of-the-box tools to help you build more accurate capacity plans and avoid making commitments you can’t keep.

## Conclusion

In this article, we took a look at some different strategies for measuring team capacity used in practice. We also explored how to use historical execution data to better estimate future capacity and discussed how new technology could be used to improve this even further. Thanks for reading. I hope you learned something useful.