Are Your Transformational Change Projects Successful?
Nod your head if you have ever heard, seen, or (heaven forbid) quoted this statistic: “70% of transformational change efforts fail.”
You nodded, right? Let’s face it: the 70% failure statistic is dramatic. It builds the case for hiring experienced transformational change practitioners. It cautions implementers to learn about change management practices and integrate them into their tactical tasks.
Unfortunately, it’s a made-up number. Back in the 1990s, Michael Hammer speculated about the success rate of re-engineering projects, and since then authors and speakers have cited 70% as the failure rate for all types of change programs. Several transformational change practitioners have dug into the change archives and vigorously refuted it (see here and here). Yet it persists.
Even if no one had refuted the number, I stopped believing it years ago. As a measurement practitioner, I have found that:
- Few organizations are disciplined or adept at identifying measures of success at the outset of their projects;
- The data to measure success is often difficult to collect;
- The evidence of success can rarely be attributed solely to the change effort;
- Leaders move the finish line or unexpected circumstances cause it to move;
- The initial sponsor leaves and her replacement does not revisit the measures.
Given how rarely success is defined and measured with any rigor, how can anyone state with such certainty that 70% of change projects fail?
Make Measurement Simple, But Not Simplistic
Here is a suggestion: instead of quoting a fabricated and meaningless statistic, why not apply some discipline to your change projects and evaluate your own success rate? Conceptually, it’s simple. You identify what you expect to happen, you measure what happened, and you determine why you achieved or did not achieve your goals.
“Ah,” you say, “the world is complicated! Organizations, leaders, and goals change. The competitive landscape shifts. The economy doesn’t behave as predicted. And the larger the project, the greater the risk and uncertainty we face, which makes measurement that much more challenging.”
It’s true that measuring large, complex, multi-year projects can be daunting. But what are your options? Throw up your hands and avoid measurement altogether? Set goals but never evaluate your success? Or adopt a simple but flexible framework that helps you learn as you go, adjust your direction, and achieve your goals? I suggest the last option.
Your measurement process doesn’t need to be complicated. First, break your project into phases such as:
1. Diagnosis and solution design
2. Planning and resourcing
3. Execution
4. Realizing the future state
At each stage, ask a series of questions to assess your readiness to move to the next stage. The questions are basic and essential for any major project, transformational or not. You can and should customize these questions for your organization based on what you expect to achieve. Regardless of the questions chosen, the primary aim at this point is to identify and agree on what questions to answer (see the graphic below).
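To make the structure concrete, here is a minimal sketch in Python of how the phase-gate questions could be organized. The phase names come from the list above, but the sample questions and the `ready_to_advance` helper are illustrative assumptions, not a prescribed set.

```python
# Illustrative sketch: stage-gate readiness questions for a change project.
# Phase names follow the four phases above; the questions are examples only.
PHASE_QUESTIONS = {
    "Diagnosis and solution design": [
        "Do we agree on the problem we are solving?",
        "Have we defined what success looks like?",
    ],
    "Planning and resourcing": [
        "Do we have the sponsors, budget, and people we need?",
        "Is the roadmap realistic and agreed to?",
    ],
    "Execution": [
        "Are milestones being met and risks being managed?",
        "Are stakeholders informed and engaged?",
    ],
    "Realizing the future state": [
        "Are the intended outcomes showing up in the measures?",
        "Who owns the measures once the project ends?",
    ],
}

def ready_to_advance(phase: str, answers: dict[str, bool]) -> bool:
    """A phase is 'ready' only when every readiness question is answered yes."""
    return all(answers.get(question, False) for question in PHASE_QUESTIONS[phase])
```

The value is not the code itself but the discipline it forces: you cannot assess readiness for the next stage until the questions for the current one are written down and agreed.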
After you identify the questions and get agreement from the program sponsors and key stakeholders, identify the measurement indicators. This step should be easy if you have identified the right questions. Below are possible indicators for each of the four stages and questions outlined above.
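Continuing the sketch above, the question-to-indicator mapping can be recorded the same way. The indicators below are placeholders standing in for whatever your sponsors agree would actually answer each stage’s questions.

```python
# Illustrative sketch: each stage pairs its readiness questions with the
# indicators that would answer them. The indicators shown are placeholders.
STAGE_INDICATORS = {
    "Diagnosis and solution design": [
        "Documented problem statement signed off by sponsors",
        "Baseline measures for the processes being changed",
    ],
    "Planning and resourcing": [
        "Approved budget and staffing plan vs. actuals",
        "Milestone plan with named owners",
    ],
    "Execution": [
        "Percentage of milestones delivered on schedule",
        "Stakeholder engagement survey scores",
    ],
    "Realizing the future state": [
        "Outcome measures vs. targets (financial, customer, operational)",
        "Adoption of new processes by the receiving teams",
    ],
}
```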
How Does This Work in the Wild?
You may be thinking: “This sounds like a lot of work.” Yes, measuring the success of transformational change takes time, but it also helps the organization stay focused on the goal. Without clear goals and a measurement process, how will you know if the investment in time, money, and resources has produced value for clients, shareholders, and employees?
Let me share a recent client experience with a Silicon Valley technology firm to illustrate how this process works. For the past five years, the organization had been quite successful. However, one segment of the business faced major competitive threats but could not effectively compete due to an aging services portfolio. They were often late to market on new services and had lost touch with shifting customer expectations. They were ill-equipped to address these gaps due to inefficient work processes, an outdated organizational structure, and technology that did not support continuous innovation.
Their opportunity was to transform the organization by rebuilding its services portfolio and go-to-market processes. The sponsors set an 18-month time frame from project launch to full implementation. Throughout the project, the sponsors reiterated the quote famously attributed to Darwin: “It is not the strongest of the species that survive, nor the most intelligent, but the ones most responsive to change.”
The project was complex and required involvement from three different teams reporting to three different senior leaders. Project discipline was key to success. In addition, the program sponsors insisted on measuring success not only as the project moved from stage to stage but, most critically, for the outcome of the entire effort. They defined outcome indicators at four levels (financial, customer/partner, operational, and learning/culture) and constructed a measurement model aligned to the project roadmap (see below).
As the team drilled into each measure, they discovered that much of the data was either unavailable or too difficult to gather. The sponsors agreed to simplify their approach. They replaced some of the hard indicators with ‘soft’ indicators from surveys and used proxy measures where data was readily available. They also recognized that evidence of the outcomes could take several years to appear. To ensure the transformation was moving in the right direction, the team added progress measures to the mix. Everyone agreed that imperfect data was better than no data at all. Their final measurement framework is shown below.
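As a rough illustration of what such a blended framework might look like, the sketch below models outcome and progress measures at the four levels the sponsors chose. Every measure name and target shown is a hypothetical example, and the simple `on_track` check assumes higher values are better.

```python
from dataclasses import dataclass

# Illustrative sketch of a blended measurement framework: outcome measures at
# four levels, with 'soft' survey-based and proxy measures mixed in, plus
# progress measures for the interim. All names and targets are hypothetical.

@dataclass
class Measure:
    name: str
    level: str                   # financial | customer/partner | operational | learning/culture
    kind: str                    # "hard", "soft" (survey), "proxy", or "progress"
    target: float
    actual: float | None = None  # None until data is collected

    def on_track(self) -> bool | None:
        """True/False once data exists; None while data is not yet available."""
        if self.actual is None:
            return None
        return self.actual >= self.target

framework = [
    Measure("Revenue from new services (% of segment revenue)", "financial", "hard", 15.0),
    Measure("Customer satisfaction with new services (survey score)", "customer/partner", "soft", 4.0),
    Measure("New services launched within target cycle time (%)", "operational", "proxy", 80.0),
    Measure("Employees trained on new processes (%)", "learning/culture", "progress", 90.0),
]
```

Recording the kind of each measure (hard, soft, proxy, progress) keeps the “imperfect data is better than no data” trade-off visible whenever results are reported.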
When the project completed the “Make it Work” stage, the senior leaders concluded they should move the project into operations where segment leaders would adopt the measures as part of their goals. (Find more information on this case study here.)
Twelve months later, I followed up with several members of the project team. They felt the project had accomplished many, but not all, of its objectives. They acknowledged they still had work to do, including further process improvements and reducing cycle time for new services. The segment leaders acknowledged the need for continued focus on the future state and ongoing communication about the vision and its importance to the business.
Was the project successful? In the minds of the sponsor and the team, yes. Did it achieve 100% of its goals? No. Was the investment worth it? The organization believes it was.
Final Thoughts
Transformational projects often take years to complete, and what senior leaders envision for the future state frequently shifts as the effort progresses. Some portions of the project work, some don’t, and ideally the team learns from both.
Is it realistic to evaluate a program based on a single overall metric of success or failure? I don’t think so. Rather, identify a suite of balanced measures and evaluate your project against all of them. At that point, you don’t need to worry about the 70% number.