KENNESAW, Ga. | Feb 8, 2021
We are all painfully aware that there is plenty of uncertainty in the data we analyze and in the results that we generate through data science processes. Most of the time, we focus on removing as much uncertainty as possible in an attempt to provide the single “best” answer to the question we’ve been asked. However, when uncertainty is unusually high, we can serve ourselves and our sponsors better by embracing the uncertainty instead of trying to fully contain it. In this blog, I’ll provide a recent example of this approach and explain what you can take from it.
One of the companies I work with is one of the premier healthcare provider organizations in the world. As a result, you can be sure that COVID has had a massive impact on the organization since the day the crisis began. The good news is that the organization already had in place a large analytics and data science organization, as well as a culture where analytics are widely utilized to support decision making. So, it wasn’t a surprise when early in the crisis the senior management team requested some urgent analysis to forecast where things would be heading.
Of course, as is typical, what the executives asked for was the single “best” answer to their questions. In this case, that meant precise predictions of what to expect in the coming weeks and months across key parts of the organization’s operations. The areas of interest ranged from hospital bed capacity, to medical supply availability, to a wide spectrum of relevant business metrics. The analytics team got to work.
If you recall the early days of COVID, the projections in the news were all over the map. There were many competing models from different researchers projecting how the pandemic would play out. Not only were the models different, but the assumptions feeding the models were different. As a result, various projections of how fast COVID would spread were literally orders of magnitude apart from each other. Over time, the projections converged as more data became available. But initially there was no way to know who would be right.
As the team started working through the analysis the executives requested, they realized how many assumptions they were making and how sensitive their predictions were to those assumptions. In other words, they saw the same pattern of uncertainty from the news emerging in their own work. The challenge was deciding which of the disparate projections they should take to the executives and declare “best”. Once they locked that “best” projection in and made it public, they would be on the hook for how accurate it was. The team wasn’t confident they could get it right given the uncertainty and didn’t want to risk their reputation by being viewed as having wrong answers. As a result, they decided to take a different approach.
The team realized that not only did they have little confidence in which projection would be best, but that damage to their credibility was almost unavoidable if they placed a single, all-in bet since that bet would very likely end up being wrong. As a result, they decided to deliver a range of projections along with the explanation of how those projections came to differ and under what assumptions the projections would each play out. Instead of providing the “best” answer, they provided multiple answers ordered from low to high that helped to put the projections in context of the vast uncertainty that underpinned them all.
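To make the idea concrete, here is a minimal, purely illustrative sketch of delivering a range of projections rather than a single “best” answer. All numbers, growth rates, and names below are hypothetical assumptions for illustration — not the actual model or figures the team used:

```python
# Hypothetical sketch: project a metric (say, hospital beds needed) under
# several assumed daily growth rates, then report the full low-to-high range
# instead of a single forecast. All inputs here are illustrative.

def project(current_demand, daily_growth_rate, days):
    """Simple exponential projection under one assumed growth rate."""
    return current_demand * (1 + daily_growth_rate) ** days

# Each scenario pairs a label with an assumed growth rate (assumptions, not data).
scenarios = {
    "low (2%/day)": 0.02,
    "mid (5%/day)": 0.05,
    "high (10%/day)": 0.10,
}

current_demand = 100   # illustrative starting bed count
horizon_days = 30      # illustrative planning horizon

# Compute one projection per scenario, ordered low to high.
projections = {
    name: round(project(current_demand, rate, horizon_days))
    for name, rate in scenarios.items()
}

for name, beds in projections.items():
    print(f"{name}: ~{beds} beds in {horizon_days} days")
```

Even this toy version shows why a single number would be misleading: plausible assumptions yield projections roughly an order of magnitude apart, which is exactly the spread the team needed executives to see and plan around.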
As you can imagine, the executives were initially put off by this approach as they felt it was a weaselly way to avoid making a call. However, the team stood their ground and stressed the uncertainty and risk in choosing a single forecast. The team focused their executives on the range of outcomes and encouraged targeting the middle of the range with initial plans, while also having backup plans in place to react quickly if reality started drifting toward one of the extremes. Instead of going all in and succeeding or failing with a single plan, the organization was able to develop a base plan alongside contingency plans up front.
As time passed, the executives saw that what played out typically did closely follow one of the projections, though it was never clear at the outset which one it would be. While they didn’t know up front which projection would be right, they did know up front what the reasonable range of expectations was. It was certainly unsettling initially, and required additional work in the short term, to develop not just the base plan but also multiple contingency plans. However, planning ahead saved a lot of time in the long run and allowed faster response when reality inevitably drifted from the base plan. By admitting their lack of confidence in identifying the right answer up front, the team was able to effectively plan and adjust as reality unfolded.
It can feel great to have the “best” answer and to develop a single plan to respond to an issue. However, that can backfire if a plan is executed with too much confidence. When there is substantial uncertainty, as with COVID, success is more likely when plans account for that uncertainty. In the example discussed here, the analytics team was eventually praised for helping the executives understand and plan for the uncertainty in the projections they were providing. The executives are no longer resistant to what they initially thought was a weaselly approach and now want to see more of it. They came to appreciate being able to better understand the uncertainty. Your organization should consider using the same approach.
By Bill Franks | February 9, 2021
Originally published by the International Institute for Analytics