In organisations today, the use of solutions based on Office-suite software is often lamented. “We can’t have all these Excel sheets floating around. We need to set up a proper, robust system for this!” is an exclamation often heard from people who are ostensibly more ambitious on their companies’ behalf.
This piece is a defense of free-floating Excel sheets, manual processes, and ad hoc solutions. At the same time, it’s an argument against the impulse, often present in large organisations, to implement big systems to replace ad hoc solutions – whether reporting and business intelligence systems like Tableau or CRM systems like Salesforce. Even though these systems may be the only viable solution in some cases, there are at least three reasons to pause before calling your IT vendor.
You trade a well-defined, tame risk for uncertain, unmanageable risk
An often-heard argument for replacing ad hoc solutions with big systems is that ad hoc solutions are prone to human error. For example, the risk of human error in Excel-based solutions is used as an argument for replacing these with more structured and robust analytics tools. However, there is a simple and flexible alternative for mitigating human error in ad hoc solutions – more humans! Take the example of Harvard economists Carmen Reinhart’s and Kenneth Rogoff’s infamous Excel errors. The conclusion of two widely influential articles by these academics – which was used to argue for austerity policies to improve growth – was found to be fraught with errors, largely because they had omitted several rows from an Excel formula when averaging growth figures.

As human error is often as painfully banal as this, its mitigation can be equally simple; we could simply have people review the Excel sheets of academics like Reinhart and Rogoff to check for errors. Often, this human solution is far cheaper than – and as effective as – providing every professor with digital crutches in the form of some more refined data processing system. In addition, humans have the nice property of being flexible in what they can check for, not needing every type of error carefully spelled out for them in advance. Contrast this with error-proof systems, which only protect against exactly the types of errors they were designed to control for – dangerously underestimating the ingenuity of human daftness in coming up with new ways to fail. Finally, using humans instills the skill and culture of error-checking in the organisation, whereas relying on systems will often make people feel less responsible for data quality.
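To see just how banal an error of this kind is, here is a minimal Python sketch of an average computed over a truncated range – the same shape of mistake as selecting too few rows in a spreadsheet AVERAGE formula. The growth figures below are invented for illustration; they are not Reinhart and Rogoff’s actual data.

```python
# Invented growth figures for seven hypothetical countries (percent).
growth_by_country = {
    "A": 2.2, "B": 1.9, "C": -0.1, "D": 2.5,
    "E": 3.1, "F": 2.8, "G": 1.4,
}

values = list(growth_by_country.values())

# Correct average: all seven countries.
full_average = sum(values) / len(values)

# Truncated average: the formula range silently stops after
# five rows, dropping the last two countries from the mean.
truncated_average = sum(values[:5]) / 5

print(f"full average:      {full_average:.2f}")
print(f"truncated average: {truncated_average:.2f}")
```

Nothing in the code (or in a spreadsheet) flags the truncation; only a second pair of eyes comparing the formula range against the data would catch it – which is exactly the kind of check a human reviewer performs cheaply.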
In summary, human error – like many of the vices of ad hoc solutions – is a tame problem. This is not to say that such problems can’t have large consequences, but that there are simple and flexible ways to address them, such as simply throwing people at them.
Compare this to the risk of implementing a large system solution. These risks are often wild and cannot easily be managed. For example, the building of the Danish Tax Authority’s system for assessing the value of properties (Ejendomsvurderingssystemet) was started in 2015 with an initial budget of 96 million DKK. In an audit from 2021, the State Auditor assessed that the project had so far cost 3.6 billion DKK – roughly 37 times the initial budget – and the system is still not fully implemented! Various attempts have of course been made to mitigate the problems along the way, but clearly, none have ultimately succeeded in containing the cost of development. This is symptomatic; the risks of implementing large systems are not only large in effect but also unmanageable – there are no straightforward ways of mitigating them. Contrary to the error-proneness of ad hoc solutions, throwing humans at the problem is not an effective way of addressing these risks.
Implementing a system can in general be described as the reverse of what Nassim Taleb calls optionality. Optionality describes situations where you buy yourself a chance at unlimited upside at a limited downside cost. For example, the upside of going to a party is that you might meet your future partner; the downside is limited to potentially being bored at the party. Implementing large systems follows this logic in reverse: it opens you up to enormous risk while, if it works, saving you from some relatively tame problems.
You assume the best use of resources is to solve easy problems faster
Systematic solutions, when they work, help you solve problems that you were already able to solve before, but faster. But the resources an organisation uses to implement new systems are often those it would otherwise use to crack genuinely new problems or challenges – project managers, developers, senior leadership attention, and so on. In other words, for this to make sense, the organisation must be at a place where the best thing it can do is what it already does, but better or faster, rather than crack new problems. Whether this is the case depends on the organisation and its context, but it’s notable that many have described the true challenges organisations face today as so-called “wicked problems.” Following the canonical description of Rittel and Webber (1973), the characteristics of wicked problems include that they defy definitive formulation, lack a well-described set of permissible operations, and that their solutions lack a clearly defined test criterion. Examples of wicked problems are found in social policy, climate policy, healthcare, and business. These characteristics make it clear that they are exactly the types of problems that we shouldn’t expect to be solved by the implementation of some system. For example, many companies may ponder how to simultaneously make their supply chain cost-competitive, robust to macro changes, and aligned with various stakeholder and regulatory demands, such as climate change or the rights of workers. We do not and should not expect these problems to be solved by the implementation of a new ERP system. However, if resources are bound up in implementing a new ERP system, we won’t have a chance to actually crack such wicked problems. As comforting as it may be for organisations to spend their resources implementing a system to speed up solving problems to which they already know the solution, it inhibits them from spending resources on truly significant problems.
You solve yesterday’s problem with tomorrow’s resources
When large, complicated systems are relevant, they are not available, and when they become available, they are no longer relevant. Developing and implementing a system is often estimated to take a substantial amount of time and then ends up taking even longer. Once implemented, the system often fulfils yesterday’s rather than today’s requirements, while consuming today’s and tomorrow’s resources for further development and maintenance.
The most common approach to this problem is an agile development approach: developing, testing, and deploying prototypes in small increments to allow requirements to change over time and the system to adapt. However, small manual solutions often already are such prototypes. They are typically built by, or in close collaboration with, the actual users and have the flexibility to be changed over time. A truly agile approach, therefore, is to make incremental, tactical improvements to these ad hoc solutions rather than replace them with large systems.
In general, the urge for a large system is driven by a wish to avoid manual processes and a fear of human error. This dream of error-proof automated systems makes us forget the significant risks that these systems entail. Rather than being a source of concern, we should celebrate when we are able to handle complex processes with simple, ad hoc tools driven by human ingenuity. In the end, the best solution may simply be a conscientious person with an Excel sheet.
Joachim Skanderby Johansen is a regular writer at Unreasonable Doubt. He writes on the ethics and practicalities of responsibility and uncertainty. He occasionally defends dead liberal ideas. Joachim works in the financial sector. He has a master’s degree from the London School of Economics and Copenhagen Business School.
If you liked this article, you might also like our post on the dangers of artificial mediocrity.