Virtualization Technology News and Information
Tangled Up in New - What We Don't Get About Demand vs. Capacity in Software Delivery
By Lee Reid, Sr. Value Stream Architect, Tasktop

The Bob Dylan song "Tangled Up in Blue," a masterpiece of imagery, poetry, chords, and elegant use of time, tells a complex story of intertwined love. It places us in the narrator's shoes so completely that we not only feel the highs and lows but can't help finding parallels to our own entanglements.

To me, it's a story of an alternating magnetic-like force that draws the characters together and then apart over and over in an irresistible but flawed relationship.

So what's this song have to do with software delivery? Well, this article is titled "Tangled up in New" as a play on Dylan's masterpiece because, in enterprise software delivery, we tend to have our own irresistible, flawed, and codependent relationship of assigning and accepting new work into our value streams without understanding the impact.

We can't help ourselves! We love new ideas, and, whether we're on the giving end or receiving end, it's like Dylan's lines at the end of the song:

"We always did feel the same

We just saw it from a different point of view

Tangled up in blue"

That is, are we Tangled up in New when it comes to understanding effective ways to balance the ever-exciting new ideas with the precious capacity of our product value streams?

Excessive WIP

One of the most insidious culprits routinely undermining the effectiveness of software delivery is an excessive amount of work in progress (WIP). Let's look at a typical enterprise scenario. The software delivery value stream, the set of activities that must take place from customer request to customer delivery, is composed of a network of teams doing the work. These teams typically work at a fixed capacity: there is no magical elastic ability to absorb more work than they can process. And it's really hard to say, "No, we can't do that!"

However, as demand increases, the teams don't really say "Yes" either. Rather, they say, "We'll put it on our backlog." From the perspective of a team with fixed capacity, this is a way to set the demand aside and think: not now, but our future selves will have more time to get to this. It's a mixture of people-pleasing and wishful thinking.

Unfortunately, our current selves were the future selves last quarter, and teams are unlikely to have more capacity now than they did then. And, it gets worse. When the demand side is more demanding and has no visibility into the capacity side, it becomes even harder for teams to say "No" and the work gets committed. Consequently, teams become tangled up in taking on new work while they are unable to complete existing commitments.

Fixed Capacity

Fixed capacity is a reality for most software delivery teams. Unfortunately, it's really hard to accept. Let's try thinking of it from another point of view. Suppose that you run a medium-sized craft brewery that at full capacity produces specialty beer at a rate of 20 kegs per day. And, the lead time to produce a keg of beer from the moment you start mixing ingredients until the beer is sealed in a keg is 8 days. Now, if you were to take a walk from the shipping dock where today's 20 kegs are heading out and continue all the way along your operations until you reached the point of initial mixing, you would find materials in process that add up to 8x20 = 160 kegs worth of work in process.

Any processing activity you do beyond that amount, such as pre-packaging or stacking up materials in advance, adds no value. Not only does it clutter valuable floor space in your brewery, it also amounts to work that was started but may never be realized as value. In other words, until you make an improvement that adds capacity, you cannot take on any orders that call for more than 20 kegs per day. You have to say "No," or your brewery will collapse as you waste ingredients and materials trying to start additional batches that you can't finish.
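The brewery's steady-state work in process can be checked with a quick sketch (Python is used purely for illustration; the throughput and lead-time numbers come from the scenario above):

```python
# Little's Law in the brewery: WIP = throughput x lead time.
kegs_per_day = 20    # daily throughput at full capacity
lead_time_days = 8   # days from initial mixing to sealed keg

# Material in process at any moment, in keg-equivalents
work_in_process = kegs_per_day * lead_time_days
print(work_in_process)  # -> 160
```

Anything started beyond those 160 keg-equivalents is inventory the line cannot finish at its current rate.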

The brewery analogy is easy to understand because we can visualize a linear production line that one could walk along and see in action. But in large-scale, complex IT software delivery, it's not so easy to see because the nature of our work is much less tangible. It's knowledge-based work that originates in plans, is transformed by ideas and conversations, tracked in software delivery tools, tested, and then deployed across multiple infrastructure components. So, even with our best efforts, we can't see when our teams are running at capacity or, worse, have accepted work beyond it.

Learning to See Capacity

Measuring and visualizing the flow of value-adding and value-protecting work (features, defects, debt, risk) across this network helps us manage capacity better. We can use this visibility of the "supply" system (the software delivery value stream) to apply queuing theory and estimate the ideal amount of WIP a value stream can carry.

Queuing theory (Little's Law) essentially states that, assuming a steady state, the average number of items in a system equals the average arrival rate of items multiplied by the average time an item spends in the system.

Once you have visibility into value work items, use this simple formula:

Ideal WIP = average number of work items completed per day x average number of days a work item takes from start to finish

This enables us to see, sometimes for the first time, the capacity of a set of teams performing the end-to-end activities in a value stream: the amount of work a team has the capacity to take on at any given time.
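As a minimal sketch of the formula (the throughput and cycle-time figures below are hypothetical, chosen only to illustrate the calculation):

```python
def ideal_wip(items_completed_per_day: float, avg_days_start_to_finish: float) -> float:
    """Little's Law: the average number of work items a value stream
    can hold in steady state without overcommitting."""
    return items_completed_per_day * avg_days_start_to_finish

# Suppose a value stream completes 5 work items per day and each item
# takes an average of 12 days from start to finish:
print(ideal_wip(5, 12))  # -> 60
```

If the tracked WIP across the teams exceeds that number, the value stream has accepted more work than it can flow, and lead times will stretch accordingly.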

There is tremendous value in planning demand to align with capacity. We all know the intrinsic value of reducing the cost of context switching among our team members. That's key. But perhaps the biggest payoff we notice is the boost in morale when the team members feel the burden of excess baggage lifted and they feel the joy of seeing their work generate value for customers in a very timely manner.

Baselining flow across the value stream also gives management and team members knowledge of their true capacity and true throughput, along with opportunities to improve continuously through simple experiments that remove bottlenecks and waste.

What we don't get about software delivery is that there's very little elasticity in capacity, and we add fuel to that fire when planners call for demand in excess of it. Until we make capacity visible, we are likely making a bad situation worse. Until our teams on the receiving end have the data to show they are operating at capacity, they will continue to be codependent, saying "yes" when they should say, "No, we're at capacity." Until we address this blind codependency, we remain Tangled up in New.



Lee Reid 

Lee Reid is a Senior Value Stream Architect at Tasktop, helping customers to map, analyze, and optimize flow across their value streams to guide their continuous improvement journeys. Lee has over 25 years of experience in a variety of software development and lean transformation roles at two startup firms, two large firms, and one higher education institution. His experience ranges from software development in both waterfall and agile teams to leading IT and continuous improvement efforts. Lee holds career certifications including TOGAF Certified Enterprise Architect, The Open Group Distinguished IT Specialist, and Lean Facilitator, and is a co-inventor of four U.S. patents.

Published Friday, October 08, 2021 7:36 AM by David Marshall