Both Agile and Critical Chain are proven, effective techniques for improving the speed and reliability of projects—dramatically so, in some cases. Each uses a very different language and somewhat different methods, however, so it’s easy to view them as conflicting rather than complementary. After all, Agile’s athletic terms like sprints, scrum teams, and of course agility itself all sound like attractive characteristics to have on our projects; throw in a few more serious-sounding terms like backlogs, burndown charts, story points, and sprint retrospectives, and the impression of “professional athleticism” might quickly establish itself in our minds. Contrast this with Critical Chain’s less enticing lexicon of project buffering, project staggering, and the buffer protection index—not to mention the technical-sounding, soporific title of “Critical Chain” itself—and we are more likely to walk away with the sense of someone telling us “eat your broccoli.” It all sounds like good stuff we should be doing, but not particularly fun or delicious.
There is more in common between Agile and Critical Chain than might first meet the eye, however, and the two can be highly complementary if integrated sensibly, as laid out in my ACCLAIM approach. To expose what’s common and complementary, let’s start by dispelling the five most common myths in the Agile vs. Critical Chain debate:
Myth #1: Agile is for when scope is fuzzy and fast-changing, while Critical Chain requires detailed planning on a well-defined scope.
The reality is that all projects carry a degree of uncertainty, and as a result, all projects require some type of buffering approach in order to manage that uncertainty while delivering on a commitment. Because software-development projects often carry a large degree of scope uncertainty in the early phases, it’s logical that Agile adopts a scope-buffering approach. But regardless of the method used, any project with lots of early-phase scope uncertainty will benefit from a strong emphasis on scope-refinement activities before any solid commitments are made. For example, if my project scope is initially defined as “driving to the beach,” I need to do a good bit of scope refinement before locking in on a commitment. Are we driving to the quiet little beach on the bay an hour away with no traffic, to a big popular beach on the coast 3 hours away with lots of traffic, or to a cool surfing destination on the opposite coast 3,000 miles away? Once we have the answers to those questions mostly sorted out, we can begin to estimate how much work we have before us, what types of resources we need to execute the work, and how long it might take for our assigned staff to get it done—whether we organize this work into defined tasks or time-boxed sprints. Also, if we end up with a very specific scope—let’s say we’re driving to the quiet little beach an hour away with no traffic—then Agile’s scope-buffering approach makes little sense, because if I arrive ahead of schedule, my project is done, and I have no “nice-to-have” requirements to add on beyond that…after all, I’m not going to keep driving into the bay. Similarly, if I run into unexpected surprises along the way, and only get 90% of the way there before the beach closes, it doesn’t help me to just add the remaining 10% “to the next sprint.”
In contrast, Critical Chain’s preference for schedule buffering makes perfect sense for my 1-hour drive—I just need to leave a bit early in case I run into unexpected issues along the way—and the task of refining scope to make sure that’s where I want to go in the first place is the same, regardless of which execution approach I end up using.
So, like any project-management approach, both Agile and Critical Chain must address the issue of scope refinement, and both require buffering against inherent project uncertainty. The only difference is that Agile calls for scope buffering regardless of the situation, while Critical Chain just asks that, whatever type of buffering might be appropriate, buffers are shown as time-based.
But wait, doesn’t Critical Chain require significant planning, identification of all task dependencies, and complete resource-loading of all tasks? Not as much as you might think. Take the example of our 1-hour drive: it might be perfectly sufficient to have just a few basic tasks identified and sequenced, like “Drive to highway, get gas, drive on highway, drive on local roads to beach, find parking at beach,” with resource-loading (1 driver + 1 car) and duration estimates for each task. Not much different from sequencing a few sprints and assigning a scrum team in an Agile framework, is it? In fact, a good Agile practitioner will tell you that Agile is not an excuse to avoid sound planning, even if you expect lots of re-planning along the way. Similarly, a good Critical Chain practitioner will tell you that nearly all projects need to refine scope and re-plan to some degree as the project progresses—just make sure these refinements and planning updates are maintained in a logically sound project plan. At the end of the day, sound planning is sound planning, regardless of the execution approach.
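To make the “not as much planning as you might think” point concrete, here is a minimal sketch of the 1-hour drive as a Critical Chain-style plan. The task names come from the example above; the minute-level durations and the 50%-of-chain buffer-sizing rule are illustrative assumptions (the 50% rule is a common CCPM heuristic, not the only valid way to size a project buffer):

```python
# Illustrative Critical Chain-style plan for the 1-hour drive.
# Durations are aggressive ("focused-work") estimates in minutes;
# the single resource (1 driver + 1 car) works every task in sequence.
tasks = [
    ("Drive to highway",            10),
    ("Get gas",                      5),
    ("Drive on highway",            30),
    ("Drive on local roads to beach", 10),
    ("Find parking at beach",        5),
]

# With one resource chain, the critical chain is simply the task sequence.
chain_length = sum(duration for _, duration in tasks)

# Common CCPM heuristic: size the project buffer at ~50% of the chain.
project_buffer = chain_length // 2

# The commitment is chain + buffer; individual tasks carry no commitments.
committed_duration = chain_length + project_buffer

print(f"Chain: {chain_length} min | Buffer: {project_buffer} min | "
      f"Commitment: {committed_duration} min")
```

In other words, five sequenced tasks, one resource assignment, and one buffer-sizing rule is a complete (if tiny) Critical Chain plan—the schedule buffer is just “leave 30 minutes early.”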
Myth #2: Agile’s team-based, time-boxed approach conflicts with Critical Chain’s elimination of multi-tasking and of task-level commitments.
An Agile sprint is designed, in part, to encourage focused execution; so is Critical Chain’s emphasis on single-tasking. Similarly, Agile’s scrum-team approach is designed, in part, to aggregate the risk associated with an individual team member’s task execution; so is Critical Chain’s elimination of task-level commitments. In other words, both achieve speed and reliability improvements by focusing execution and aggregating risk. Interestingly, the scrum teams that understand the value of single-tasking place more emphasis on that than on the question of what the optimal sprint duration should be. Similarly, the scrum teams that understand the value of eliminating sprint-level commitments achieve greater project speed and reliability than those that don’t. (See my previous blog posts on single-tasking and sprint-level commitments for more on this topic.)
Myth #3: I have to choose either Critical Chain or Agile as my project-level execution approach for all projects in my portfolio.
At the project level, I can choose either. I can also apply Agile methods to help refine scope early in a project, and apply Critical Chain methods once scope is refined. I can even move a critical resource like a technical architect back and forth from an Agile project to a Critical Chain project with no real issue, as long as the architect is familiar enough with both methods. A good analogy here is attending a football game vs. attending a golf tournament—we adapt quickly and effortlessly to the norm of cheering boisterously and on impulse at a football game, and then to the norm of cheering politely and only after the golfer has made her drive or putt at a golf tournament, once we understand that these two different behaviors are expected in the two different contexts.
Myth #4: Portfolios of Agile projects are best managed with “scaled Agile” methods, while portfolios of Critical Chain projects are best managed with “multi-project” Critical Chain methods.
This thinking has a certain appeal, as it helps us feel like we’ve neatly categorized like items together (the red blocks go in the red container, while the green blocks go in the green container!). But as a project portfolio manager, such groupings contribute little to helping me achieve my two fundamental objectives:
1) Maximize the throughput of project completions (get more done)
2) Maximize portfolio reliability (deliver projects as promised)
If I could achieve these two objectives by having Agile projects in one group, and non-Agile projects in another group, then there would be no problem. But this outcome is highly unlikely, because one group or the other will nearly always find itself in a healthier position than its counterpart, and such imbalances are exactly what portfolios are supposed to balance out. Why let projects in one group fail while projects in the other group are all ahead of schedule, especially if resources in both groups have similar sets of skills?
This brings me to a third fundamental objective:
3) Maximize practicality (use the best tool for the job)
While this is more of an “enabling” (vs. outcome-oriented) objective, it is no less crucial for most IT portfolios. Zealously applying a “one-size-fits-all” approach, delivering each and every project in the portfolio with a single methodology, will almost always carry more negatives than positives. Some people are more productive in a smaller scrum team than they are in a larger project team, while other people appreciate the additional safety that a large Critical Chain project can offer by aggregating risk across a larger project team. Some projects might benefit greatly by having a scope buffer to help manage uncertainty, while other projects may not benefit at all from a scope buffer, and instead require a schedule or budget buffer. So while it’s a natural impulse to say, “Let’s just mandate XYZ Approach on all projects!”, the reality is that most IT project portfolios will perform much better with a more flexible, practical, “best tool for the job” mix of methods.
Myth #5: It doesn’t matter what portfolio-management approach we use, as long as it helps us prioritize some projects over others.
It’s actually a sad state of affairs that this myth persists so strongly, not because prioritization isn’t important to a project portfolio, but because it’s not as important as improving the throughput and reliability of projects. Some project portfolio management frameworks even boast of their “ruthlessness” in killing off lower-priority projects, especially those that fall behind. Presumably, the projects targeted for slaughter had been chartered and funded for good reason, whether they got struck by Murphy’s Law or not, so killing them only cements their failure. Do we really think that the best we can do is to accept failure so often and cancel so many worthwhile projects? What our IT project portfolios really need is not a way to euthanize unhealthy projects, but a way to bring them to successful completion without putting the healthy projects at risk.
So if our goal is to bring all projects to completion, and all projects are buffered to hedge against failure, then all I really need to know is how much buffer I have left on each project at any given moment, and on the whole portfolio taken together, and then compare this buffer consumption against how much progress I’m making. Even in the highest-performing project portfolios, some projects will burn buffer faster than they should, but this is OK because other projects will have conserved enough buffer to make up for it, enabling us to bring the entire portfolio to completion. And in the lowest-performing project portfolios, we can still minimize failures by allocating buffers in a way that maximizes project completions.
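That buffer-versus-progress comparison can be sketched as a simple “burn ratio,” the idea behind a CCPM fever chart. The project names and status figures below are hypothetical, chosen to show one project conserving buffer, one burning it too fast, and one on pace:

```python
# Hypothetical portfolio snapshot: for each project, the fraction of its
# critical chain completed and the fraction of its project buffer consumed.
projects = {
    "Project A": {"chain_complete": 0.80, "buffer_consumed": 0.40},  # conserving
    "Project B": {"chain_complete": 0.30, "buffer_consumed": 0.60},  # burning fast
    "Project C": {"chain_complete": 0.50, "buffer_consumed": 0.50},  # on pace
}

def burn_ratio(status):
    # Buffer consumed per unit of progress:
    # < 1.0 means the project is conserving buffer, > 1.0 means it is
    # burning buffer faster than it is making progress.
    return status["buffer_consumed"] / status["chain_complete"]

for name, status in projects.items():
    print(f"{name}: burn ratio {burn_ratio(status):.2f}")

# Portfolio view: total buffer consumed vs. total progress made. A ratio
# below 1.0 means the conservers are covering for the fast burners.
total_progress = sum(s["chain_complete"] for s in projects.values())
total_burn = sum(s["buffer_consumed"] for s in projects.values())
print(f"Portfolio burn ratio: {total_burn / total_progress:.2f}")
```

In this sketch, Project B is burning buffer at twice its rate of progress, yet the portfolio as a whole is still healthy, because Project A has conserved enough buffer to cover for it—exactly the balancing act a portfolio is supposed to perform.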
It turns out that, while Critical Chain is one of many sound project-level planning and execution approaches, it is the only proven portfolio-level approach that balances buffers across projects, encourages the protection and conservation of buffers at every opportunity, and maximizes the throughput of project completions (see my previous post for a more in-depth discussion on throughput and reliability for project portfolios). At some point, Agile approaches scaled to the portfolio level might develop similar buffer-protection methods, but given how heavily Agile relies on scope buffers, some projects will still run into situations where a schedule or budget buffer would be much more useful and impactful.
The bottom line is this: At the project level, use a practical, flexible, “best tool for the job” mix of approaches to focus execution and aggregate risk, while exposing project buffers as time-based. Then on the portfolio level, use Critical Chain to stagger projects and balance buffer consumption, maximizing the throughput and reliability of projects.