It’s against my sunny nature to start things off with a negative title, but someone had to say it—Agile fails the test as a universally applicable method.
Whether you’re already up in arms or simply intrigued to hear more, you might be interested to learn that at least one signatory of the Agile Manifesto would seem to agree with me: Jeff Sutherland’s May 2016 article in the Harvard Business Review points out scenarios where Agile can be impactful, and scenarios where it likely won’t be. Such context-specific nuances are interesting to those of us who live for that sort of thing, but they don’t do much for those seeking universally applicable techniques.
Building on that, I thought I’d take a swing at pointing out examples that highlight some of the key differences between Agile and universally applicable methods—which in my definition, are systematic, coherent frameworks that can be applied in any project context:
1) The notion of continuous value delivery via small batch sizes and single-task focus (for humans) is systematic and coherent no matter the context, while the notion of time-boxing work into arbitrary x-week windows is not.
Takeaway: abandon sprints and their disruptive task interruptions in favor of flow-maximizing Lean and Theory of Constraints principles.
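The flow argument above can be made concrete with Little’s Law (average cycle time = average WIP / average throughput), a staple of Lean thinking. The numbers below are purely illustrative, not from any real team:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# Illustrative numbers only -- a sketch, not measured data.

def cycle_time(wip, throughput_per_week):
    """Average weeks a task spends in the system."""
    return wip / throughput_per_week

# A 5-person team finishing 5 tasks per week:
multitasking = cycle_time(wip=15, throughput_per_week=5)  # 3 tasks in flight per person
single_task = cycle_time(wip=5, throughput_per_week=5)    # 1 task in flight per person

print(multitasking)  # 3.0 weeks per task, on average
print(single_task)   # 1.0 week per task, on average
```

Same throughput either way, but single-task focus delivers each individual piece of value three times sooner, which is the essence of the flow-maximizing argument.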
2) The notion of using schedule, scope, and/or budget buffers to protect a project’s expected value from the variability inherent in project execution is systematic and coherent no matter the context, while the notion of defaulting to “feature backlogs” (scope buffers) no matter what is not.
Takeaway: keep an open mind on what type of buffer, or mix of buffers, will offer the best protection of project ROI.
3) The notion that the workers closest to the work will generally know the work best, and that as a team they will usually come up with better solutions than any single team member (including the boss), is systematic and coherent no matter the context. The notion that a scrum team should always have X people, with a scrum master but never a PM; that the X-person team is inviolate, with no member allowed to depart to help any other project no matter what; and that all X members should be relied upon to do all manner of scoping, prioritizing, scheduling, planning, designing, architecting, developing, testing, and deploying as a team (rather than specialize), is not.
Takeaway A: do what all intuitive PMs have always done, and take whatever flexibility in the triple constraint you can get, from wherever you can get it—then manage it like it’s the only thing separating success from failure…which it often is.
Takeaway B: elevate the importance of the most constrained resource types from the project level to the portfolio level, while being careful not to disrupt their single-task focus.
Takeaway C: collaboration is great, but so is specialization, so test which one yields the greatest team-wide task flow, and under which circumstances, before zealously racing to one extreme or another.
4) The notion that there is always one path or chain of tasks through a project that is the longest, and that accelerating a project schedule can only be done by accelerating this longest path or chain, is systematic and coherent no matter the context—even if we don’t yet know what that path or chain might actually be. If, on some projects, we simply have no clue what these tasks (or their complexity) may be, then we need to make sure we understand how tolerant the project’s value proposition is to significant changes in schedule, scope, and investment levels (budget), before undertaking the project at all.
Takeaway: even a rough guess at the work at hand, its structure, its task-level sequential dependencies, its resource loading and leveling, and its task-level value contributions is better than an arbitrary budget-boxed or time-boxed project estimate.
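The "longest chain" claim in point 4 is, at bottom, a longest-path computation over the task dependency graph. Here is a minimal sketch using hypothetical task names and durations (none of them from the article):

```python
from functools import lru_cache

# Hypothetical task graph: task -> (duration in days, prerequisites).
tasks = {
    "design":  (5, []),
    "backend": (8, ["design"]),
    "ui":      (6, ["design"]),
    "test":    (4, ["backend", "ui"]),
    "deploy":  (1, ["test"]),
}

@lru_cache(maxsize=None)
def finish(task):
    """Earliest finish day of a task: its duration plus the latest prerequisite finish."""
    duration, prereqs = tasks[task]
    return duration + max((finish(p) for p in prereqs), default=0)

# The project cannot finish sooner than its longest (critical) chain allows:
project_length = max(finish(t) for t in tasks)
print(project_length)  # 18: design -> backend -> test -> deploy
```

Accelerating "ui" here changes nothing, because it is off the critical chain; only shortening design, backend, test, or deploy moves the finish date, which is exactly the point.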
5) The notion that every project portfolio has a throughput-limiting resource constraint that follows scientific flow-density principles is systematic and coherent no matter the context.
Takeaway: all systems have a single biggest impediment to getting greater throughput of the desired goal, and in project portfolio systems, that impediment is nearly always a constrained resource type.
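Finding that constrained resource type is a comparison of demand against capacity. A minimal sketch, with entirely made-up resource names and hours, of how a portfolio bottleneck might be identified:

```python
# Hypothetical weekly demand vs. capacity (hours) across a portfolio.
# Per Theory of Constraints, portfolio throughput is capped by the
# resource with the highest load ratio.

capacity = {"backend devs": 200, "designers": 80, "qa": 120}
demand = {"backend devs": 180, "designers": 100, "qa": 90}

load = {r: demand[r] / capacity[r] for r in capacity}
constraint = max(load, key=load.get)

print(constraint)        # designers
print(load[constraint])  # 1.25 -> demand exceeds capacity by 25%
```

In this toy example, adding backend developers would do nothing for throughput; only relieving (or better exploiting) the designers would, which is why Takeaway B argues for managing such resources at the portfolio level.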
6) The notion that planning is a waste of time because things will probably change anyway, and that we can simply execute a mish-mash of tasks in best-guess priority order and end up with a value-optimized result, is not systematic, coherent, or context-independent. Rather, it’s an indicator of a poorly developed value proposition, even if that proposition is so compelling that its investors feel confident taking the very high risk its underdeveloped state implies. Furthermore, the likelihood that things will change lends even stronger rationale for ascertaining the current state, so that we know what we’re considering changing from as well as what we’re considering changing to.
Takeaway: planning is always a good idea. Or as Dwight Eisenhower famously put it, “Plans are worthless, but planning is everything.”
In short, I think Agile and Scrum have done us a great service in reminding us of the importance of small batch sizes, continuous value delivery, focused execution, teamwork, transparency, unity of purpose, and protection from failure. But they didn’t invent any of these things, and many Agilists still don’t understand what some of these terms really mean, adopting the lingo and rote guidance without deep understanding of what might actually drive dramatic performance improvements. In fact, many “Agilistas” are so blindly obedient to the techniques that they don’t seem to understand why Agile works when it works, or why it doesn’t work when it doesn’t. That may be the truest indicator of the lack of the systematic, coherent, and context-independent approach that should underpin major advances in management science.