TL;DR Organisations have no brains. And even humans, with brains, can barely imagine unexpected risks. Making sure we are adequately prepared for potential risks requires a thorough risk assessment exercise. Every project or program requires such an effort. However, we often neglect it.
Not so obvious
When I mention the very simple idea that failing to plan very often results in almost planned failure, most people nod or react with an “of course, that’s obvious.” Well, it isn’t. As an internal auditor, I’ve seen my share of situations in which an organisation failed to plan for eventualities in the context of a project and was left holding a very big bucket of problems.
Lack of organisational imagination
Now, how do we get into that can of conundrum? It often comes down to a lack of organisational imagination. Let me clarify that a bit.
People have this internal risk management engine, which is often quite good at protecting them. This is evolution at its best: had we not evolved such a risk management engine, we would never have been here. Something would have eaten us. Or we would have drowned. Or burned. You get the picture.
It gets a bit worse. Even if we do have an understanding of risk, we will likely overestimate risks in close proximity, either geographically or in time. I refer to this as risk proximity overcompensation. Usually, the overcompensation is triggered by a recent, close event. A case in point:
After a terrorist attack, lots of people want the government to show them its readiness to protect them. They want to see police, and even soldiers, on the streets, even though the event has already happened. This is risk proximity overcompensation.
The problem is that while we as individuals have such a built-in reaction when a certain risk concerns us, organisations do not have such an automatic reaction. Organisations do not have brains. However, because organisations consist of people, we often assume that an organisation will react the way a person would. That is an erroneous assumption.
Solutions to organisational risk identification deficiencies
A structured approach, such as a program or project management approach, will allow an organisation to go through the paces we go through almost intuitively in order to create some sort of risk map. That is the risk management aspect of program or project management, and it is an essential part of such approaches. However, not enough organisations apply program or project risk management principles to the most relevant of their activities.
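To make that concrete, here is a minimal sketch, in Python, of what such a risk map can look like once risks are actually written down and scored. The risk names and scores below are hypothetical, purely for illustration; a real program risk register would also track owners, responses, and review dates.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (critical)

    @property
    def exposure(self) -> int:
        # The classic likelihood-times-impact score used on simple risk maps.
        return self.likelihood * self.impact

# Hypothetical risks for an ERP roll-out; names and scores are illustrative,
# not taken from the case described later in this article.
register = [
    Risk("Key site missed in roll-out scope", likelihood=2, impact=5),
    Risk("Data migration errors", likelihood=4, impact=3),
    Risk("Staff not trained on the new system", likelihood=3, impact=2),
]

# Rank by exposure so the project board reviews the worst risks first.
for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:>2}  {risk.name}")
```

Even a toy register like this forces a team to name its risks explicitly, which is exactly the exercise the intuitive, individual risk engine never performs on behalf of an organisation.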
Now, even when there is a program or project risk management approach, we humans tend to make things worse for the organisation. When ego is involved, when a person appropriates a program or a project in such a way that they manage it exclusively, their inherent biases become the program or project's biases.
Risks in automation projects
Now, that is exactly what we often encounter in automation projects. Let’s quickly establish why organisations start large-scale automation projects. What is the current economic reality? The cost of capital is almost zero, if it isn’t below zero. Getting access to capital, especially when you are an established business in good standing, is usually not that hard. Labor, on the other hand, is quite expensive. Organisations therefore choose to borrow money in order to invest in automation that replaces people. We see it all around us.
For example, yesterday, the Dutch ING bank announced a significant lay-off of personnel in its Belgian branches, not because results are bad, but because it wants to safeguard its future and invest in automation. To paraphrase one executive: “You repair the roof when the sun is shining.”
But will all automation automatically lead to better results? Or could it lead to significant problems? I believe that inadequately planned automation projects, which fail to properly inventory and manage key risks, are very likely to fail. Let me share a concrete experience.
More than 15 years ago, I was auditing an organisation that had gone through a major ICT transition and had implemented an ERP system in its operations across the entire world. The ERP replaced quite a few administrative staff and was lauded as the best investment decision made by the then-CEO.
The opposite of risk proximity overcompensation
There is a hint of what happened next in that last sentence. But let me work up to it slowly. When I mentioned overcompensation earlier, I left out the inverse reaction: distant or unknown risk undercompensation. We have difficulty imagining things that have never happened. We appear to lack the imagination for unknown problems.
We are great at worrying, excessively and often unnecessarily, about risks that we understand. We often have no clue as to what is yet to come. This is a big issue in risk management. We over-emphasise the relevance of risks we know to be close to us, as I mentioned a bit earlier. However, we often completely fail to accurately predict the likelihood and impact of events that have never happened to us. Often, these events don’t even show up in a risk model. And if they do, we tend to underestimate them significantly.
Let me continue the story I started to tell about the organisation implementing that ERP system. The ERP was implemented and the entire organisation ran on the new system … until one of its key clients began to complain that it was running out of stock of one of its key components, produced by the organisation, which had not been supplied for a long time. The organisation had anticipated potential shortfalls in production during the transition by buffering stock with its clients. When it queried its ERP, nothing appeared to be wrong: production orders were going out to the factory tasked with producing this critical component. So they called the factory, which informed them that it had been wondering why no production orders were coming in. But those were supposed to come through the ERP, the organisation insisted. What ERP, the factory replied. The one we installed recently, the organisation responded. You did not install anything here, the factory replied.
They had completely failed to include this factory in the ERP roll-out plans. Although the factory had been configured in the system, the roll-out never gave it a real, physical backend on the factory floor, or, for that matter, anywhere in the factory.
This was an unknown, uncharted risk that had never shown up in their risk models. They had never even considered it a potential issue.
While this is an extreme case – I’m an auditor, I have a lot of extreme cases in my portfolio – it illustrates that key concept. Failing to plan, which they did, resulted in an almost planned failure.
In conclusion
Combining new initiatives, such as automation, with a lack of a clear and thorough understanding of both the underlying processes and their potential failure modes will lead to significant problems, as it did here.
But why? Because no one had the imagination to think this issue through to the end. It was someone else’s problem. And in the end, it was no one’s problem and therefore became everyone’s problem.
Of course, it cost the CEO his job, as well as the jobs of a significant number of his management team. But this issue was entirely avoidable. Program and project risk management are worth investing time and effort in. We often fail to do that. And that is a problem, one everyone should be aware of.