#14 What’s the worst that could happen? Two ways to think in uncertainty

Hello!

Welcome to The Makers and a very warm welcome to new subscribers joining us this week.

You can read all previous issues here (scroll to bottom of the page) and subscribe to the accompanying YouTube channel Ours For The Making here.

This month, in the run-up to COP26, I’m publishing a mini-series on ways you can try to make sense of the climate crisis through a long-term lens. It includes some deeper-than-usual takes on how we frame our discussion of climate change.

Previous issues of this mini-series:

Today’s issue:

  1. What’s the worst that could happen? Two ways to think and act in genuine uncertainty
  2. This week’s introduction: New video series: The ‘future-gifting’ mindset
  3. Future jam today: The Global Tiller newsletter.

1. What’s the worst that could happen? 👀

How can we (individually and as societies) think and act in genuine uncertainty?

In part three of this let’s-think-about-climate-through-a-long-term-lens-series, I’m going to share two rules that can be applied to how we act on the climate crisis when we simply do not have all the information we might like to have.

Let’s start with a thought experiment. (A slightly grisly one.)

Imagine you are thinking about some part of your life. It could be your career, your finances, your children’s education, your hobbies, your health. 

Now picture the worst worst-case scenario that relates to that area of life. Think of something harmful and irreversible. What’s the very worst that could plausibly happen? 

(I told you it was a bit grisly.)

For example, the area of life I chose was ‘playing music’. (I play piano and guitar.) The very worst scenario I can imagine would be that I’m injured in such a way that I can no longer play or even sense that I’m playing.

However, importantly, I have no way of knowing when that worst-case scenario might happen or if it will happen at all. 

The scenario just sits there in the ‘possibility space.’ 

The question is, “what action – if any – should I take to eliminate the possibility of this worst-case scenario happening in the future?”

What costs am I willing to bear to stop it from happening?

Insuring against the worst

I suspect that you’ve done a similar calculation at some point in your own life. You’ve perhaps chosen to limit or eliminate your exposure to certain dangers by taking out some form of insurance: car or home insurance; mobile/cell phone insurance; or life insurance. 

Where you face catastrophic, irreversible loss (which I dearly hope isn’t too often), you might lack all the information necessary to be clear about whether or when those terrible things might happen. But, you still might wish to take significant action to protect yourself and those around you.

Some things are, after all, worth paying an insurance premium to protect or protect against.

Of course, insurance premiums are calculated based on some grasp of the various risks and the probability of those risks materialising. But what if you can’t calculate the likelihood of something happening – i.e. you’re in a situation of genuine uncertainty?

Risk is not uncertainty

Risk is measurable. You can estimate its likelihood. You can hang probabilities on it: a 10% likelihood, a 4-in-1,000 chance in the next 12 months, et cetera. 

Uncertainty, on the other hand, is a different beast. According to Frank Knight in his 1921 work Risk, Uncertainty and Profit, uncertainty resists any type of calculation. 

Or, as John Maynard Keynes put it in 1936:

The sense in which I am using the term [uncertainty] is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

(Keynes, 1936)

So now, when we think about worst-case scenarios, we could distinguish between quantifiable risks and unknowable uncertainties. 

It’s this second quality – unknowable uncertainties – that we’re concerned with when we think about the climate crisis.

Two rules for acting in uncertainty

We know a lot about how the climate has changed. Since the pre-industrial era, the world has warmed by about 1.2°C. We know that parts of Earth are becoming unliveable. You’ve probably witnessed first-hand more extreme weather events over the past few years and have likely seen unsettling footage of heat domes, fires and floods from around the world.

Our climate is changing. And its rate of change is driven by our behaviour.

But, uncertainties remain. For example, we don’t know how sensitive the climate system is to very small changes in temperature. We also don’t know precisely when or whether further increases in temperature will breach possible ‘tipping points’, triggering dangerous, irreversible changes.

Here are two principles to help you think about how we might move forward in these uncertain times. 

They are:

  1. The Maximin Principle
  2. The Precautionary Principle

While the origins of these ideas come from how we approach big issues of risk and uncertainty at a national and international level, there’s nothing stopping you from applying them to questions in your own life.

The Maximin Principle: what is it?

The Maximin Principle is interested in the very worst scenarios that might happen. It says: you should take action to stop catastrophic loss from happening. 

So let’s select a single looming issue that we face, perhaps as a country – a future pandemic or climate change. 

Next, we take that issue and imagine a range of plausible scenarios, lining them up from the least worst- to the worst worst-case scenarios. The Maximin Principle would direct us to eliminate the worst-worst-case scenario and its outcomes. Every time. 
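The maximin procedure above can be sketched in a few lines of code. Below is a minimal Python sketch – the actions, scenarios and payoff numbers are invented purely for illustration, not real estimates. Each action maps to its possible outcomes, and maximin picks the action whose worst outcome is least bad:

```python
# Illustrative payoffs only (higher = better, negative = loss):
# each action maps to its outcome under different future scenarios.
payoffs = {
    "do nothing":        {"mild future": 0,   "bad future": -50, "catastrophe": -1000},
    "moderate action":   {"mild future": -5,  "bad future": -20, "catastrophe": -200},
    "aggressive action": {"mild future": -15, "bad future": -15, "catastrophe": -15},
}

def maximin_choice(payoffs):
    """Pick the action whose worst-case outcome is least bad."""
    return max(payoffs, key=lambda action: min(payoffs[action].values()))

print(maximin_choice(payoffs))  # "aggressive action": its worst case is only -15
```

Note how the rule ignores probabilities entirely – which is exactly why it suits situations of genuine uncertainty, and also why it can overreact to vanishingly unlikely catastrophes.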

Problems

In his 2021 book Averting Catastrophe, Cass Sunstein, a US jurist and Harvard Professor, notes several issues that should stop us invoking the maximin rule in all circumstances. 

  1. the ‘solution’ to the worst-case scenario might be eye-wateringly expensive
  2. the steps you take to stop something unacceptably bad happening might create worst-case scenarios of their own
  3. the steps you take to eliminate anticipated dangers might also wipe out the prospect of very large potential gains or ‘miracles’

So, where and when might the Maximin Principle be applicable?

In some situations, such as with the climate crisis, we can imagine the bad things that can happen – the droughts, the heatwaves, the flooding, the destruction and displacement. 

But, we have no way of calculating satisfactorily how likely these events are to occur at a particular point in time. As Keynes said of future circumstances that exhibit genuine uncertainty, “we simply do not know.” 

Now, it might be that given enough time we could gather more information and make a calculation.

However, the option of “wait and learn” isn’t feasible because we might sail through a tipping point, triggering irreversible climate change from which there might be no return. 

Fumbling in the dark

We’re being asked to act in the absence of important information. We can’t be sure we’re doing the right thing. 

It’s dark; we’re fumbling. 

Sunstein notes that when the costs of avoiding a particular bad outcome are not extremely high, the Maximin Principle works well. He cites examples such as:

– A pandemic which has an unknown probability of occurring, but the steps to prevent it are not especially costly.

Tick. Maximin Principle invoked.

– A financial crisis with an unknown probability of happening. Capital and liquidity requirements, while onerous, are not unmanageable and can dampen impacts.

Tick.

And if we turn our attention to the climate crisis? 

Here we lack some important information about the probability of the worst worst-case scenarios coming to fruition in a particular timeframe. (Note that the IPCC has set out worst-, intermediate and best-case scenarios in its most recent report, with projections for when certain temperatures might be reached. The difficulty arises from how, how much and how fast the climate in general and regional climate systems in particular respond to these temperature changes.)

On one side, we are faced with climatic data that presents a deeply concerning and worsening picture. But, we face considerable uncertainty about known unknowns and unknown unknowns. 

On the other side, we face the prospect of unlimited planetary collapse. 

So, here uncertainty faces off against catastrophic impacts. Which wins? 

Sunstein notes: 

“there are no simple rules here. Judgements, not calculations, are required, and ideally, they will come from a well-functioning democratic process. But even when judgements are required, they can be bounded by applicable principles, which can prevent a lot of trouble.”

(Sunstein, 2021)

So, where does this leave us?

The Maximin Principle states that you should always eliminate the worst-case scenario. But, if you took this rule a bit too seriously, you’d be paralysed by the possibility of dangerous things no matter how unlikely. 

If the worst-case scenario of crossing a road was being hit by a moving vehicle that fails to stop, the Maximin Principle would demand you eliminate that possibility. You either never cross a road or (somehow) have all moving vehicles removed from all roads.

This clearly is nonsense and highly debilitating. 

Maximin minimalism? 

So, perhaps the Maximin Principle has some limited application in certain instances. It should be applied sparingly. 

Minimally. 

For example, where something really bad could happen but it is unknown (and perhaps unknowable) when it might happen, and the best estimate of the cost of eliminating that danger is less than the impact of the bad thing itself.

In other words: the impacts of the unknown, really dangerous thing are bigger and scarier than the cost of avoiding it.
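That rule of thumb can be written as a one-line check. A hedged Python sketch – the function name and the numbers are mine, purely illustrative: invoke the Maximin Principle only when eliminating the danger costs less than the best estimate of its impact.

```python
def worth_insuring(impact_of_danger: float, cost_of_elimination: float) -> bool:
    """Minimal maximin test (illustrative): eliminate the worst case
    only when doing so costs less than the harm it would prevent."""
    return cost_of_elimination < impact_of_danger

# A pandemic-style case: huge impact, modest prevention cost.
print(worth_insuring(impact_of_danger=1_000, cost_of_elimination=10))  # True
# A trivial danger with an exorbitant fix fails the test.
print(worth_insuring(impact_of_danger=5, cost_of_elimination=500))     # False
```

The hard part, of course, is not the comparison but producing the two estimates under genuine uncertainty – which is why Sunstein insists that judgements, not calculations, do the real work.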

And what about that second rule, the Precautionary Principle?

The Precautionary Principle: what is it?

The Precautionary Principle (a kind of “better safe than sorry”) outlaws any action that might cause harm to people and the environment. It places the entire burden of proving the safety of any action on the person proposing it. It’s a basic tenet of environmental policy.

So in situations where, as Keynes would put it, “we simply do not know”, the Precautionary Principle springs into action, halting activities that might cause harm. Critically, the absence of complete information about why something might be dangerous is no excuse for failing to act against possible harm.

For example, the 1992 Rio Declaration states: 

“Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

(Rio Declaration, 1992)

You could think of the Precautionary Principle as a kind of emergency handbrake.

Again, a strong version of the Precautionary Principle is, according to Daniel Kahneman, “obviously untenable”. You’d never dare do anything.

Applying the Precautionary Principle

A good example of where the Precautionary Principle might come into play is for deciding whether to sanction technological solutions to help solve the climate crisis.

Take geo-engineering. Should we allow some form of widespread technological fix to, say, deflect the sun’s rays and serve as a cap on global temperatures? 

The potential upside is global temperature stability – but at what price? If we don’t understand how this technology would interact with regional weather systems – the North Atlantic Drift, monsoon rains and so on – our decision could imperil millions.

In this instance would it be reasonable to invoke the Precautionary Principle?

Self-declared techno-optimist Lord Martin Rees, UK Astronomer Royal, believes we should “remain upbeat about science and technology – we shouldn’t put the brakes on progress.” However, we can’t shrug off “unfamiliar and potentially catastrophic hazards”. So, our efforts in technology should be guided by a culture of “responsible innovation.” (See On the Future, 2018)

He’s unsurprisingly cool on the idea of applying the Precautionary Principle in any doctrinaire way. He views a strong application as having a “manifest downside”. Sunstein views its wrong application as “costly.”

But costly to whom and for whom? These questions lie at the heart of reaching wise judgements on what action to permit, take, halt or postpone. (And tie into my previous posts on weighing and counting the future.)

Conclusion

In life – as individuals and as societies – we face difficult questions but sometimes lack important information. 

We might also need to accept that there might not always be clear answers.

Choosing to “wait and see” if we can learn more can come at a huge cost (for example, passing climate ‘tipping points’). It can also create even greater and costlier problems. 

Choosing to act in the dark and “just do it” can be foolhardy and unwise. 

The “better safe than sorry” option (The Precautionary Principle) can also be unhelpful if applied too readily. For example, if stopping a technology removes one harm, but not allowing it creates harm elsewhere in equal measure, or denies us possible ‘miracle’ upsides.

Applying the Maximin Principle, Sunstein suggests, offers an alternative to be used only in particular circumstances: where the risk isn’t just the worst on the menu, but the loss it represents to people or the environment is so great that society would widely consider it completely unacceptable. 

In these circumstances society allows decision-makers to take out a lot of insurance. And that’s probably wise.

(The caveat being that the mitigating action doesn’t manifest dangers that are worse still!)

So, returning to our thought experiment, where in your life do you apply precaution? And which worst worst-case scenarios would you be willing to eliminate, whatever the cost?

2. This week’s introduction 📽️ 

New video series: The ‘future-gifting’ mindset

I’ve been busy shooting another new video exploring a simple mindset that might help you reboot your relationship with the future – something I call ‘future-gifting’.

I wrote an article for the website Taking Time which introduces the concept. You can read that here.

Perhaps you’ll never look at your future the same way again?

You can watch the new video here.

3. Future jam today 😋

For this month, in the run-up to the UN Climate Conference in Glasgow, The Makers is teaming up with The Global Tiller newsletter. This week, it provides a guide to COP26. You can read it here.

Thanks 🙏

Thanks for reading, watching, subscribing and being a Maker. I really appreciate it.

If you’ve enjoyed this edition of The Makers, you’d be doing me a kind and generous favour by sharing it with someone who might enjoy it also. 

And if you have questions or comments, do hit reply. 

Until next time…

Best wishes,

James


Join the ‘Makers!

My monthly newsletter helps you to sharpen your analytical thinking and doing. You can sign up below to join a growing community of curious, analytical readers (like you).