Barriers to bold calls: why disagreeing with consensus is fraught with difficulty

TLDR: It’s easier to go with the group consensus than it is to disagree. This is especially true when making predictions. However, over-concurrence within a group (also known as “groupthink”) can lead to poor outcomes. This long read explains where groupthink comes from and how to address it within your organisation by supporting truth-telling.

Image: DALL·E-generated illustration representing the loneliness of big decisions

Recently Fitch, one of the three big credit rating agencies, downgraded the US government’s credit rating from AAA (the best possible rating) to AA+ (the second best rating). They made this decision based on governance risks, highlighted by the recent debt ceiling brinkmanship and the increasing levels of partisan politics. 

And… people weren’t happy about the decision. 

Former US Treasury Secretary Larry Summers called Fitch's decision “bizarre and inept”, and current US Treasury Secretary Janet Yellen called it “arbitrary”. They weren't the only ones: criticism also came from Mohamed El-Erian, Barack Obama, Jamie Dimon and several others.

I have no idea whether Fitch is right to downgrade the US, but the vocal backlash made me think about how hard it is to be an independent voice and to make bold predictions about the world which others may vocally disagree with. By moving the US to their second-highest rating, Fitch are essentially saying that the US almost certainly won't default, but that the probability of a default is slightly higher than they previously thought. The nature of this call (in probability terms) means Fitch will probably never be proven right, because a default still remains unlikely - yet they take all the heat for the decision anyway. Some people may reasonably ask: why even bother?

Much of our society relies on people being able to push against consensus and tell the truth about what they see happening in the world. The world of economics and finance relies on regulators, rating agencies and market analysts - all of whom are paid to be truth tellers, but most of whom align with consensus opinions most of the time. More generally we need scientists, politicians and our colleagues to be able to tell us hard truths about the world in order to learn, improve, avoid risks and capitalise on new opportunities.

Truth-telling often comes in the form of predictions about the future, and these calls can be difficult to objectively quantify. Making bold calls is scary. If you're right, you'll only find out much later down the line. If you're wrong, you'll never hear the end of it. In fact, even when you're right you'll get criticism until it's obvious that you were right - and even then, nobody likes it when someone says “I told you so”.

Part of the challenge is that once a consensus forms within a group, it's hard to break. A series of papers in the late 1980s and 1990s explored what drives “groupthink” - i.e., the over-concurrence of opinion within a group. One quantitative assessment of the predictors of groupthink examined faulty group decisions about whether countries should go to war (a high-stakes setting where decision making tends to be well documented) and found that 5 main factors were associated with the prediction errors that led to over-concurrence:

  1. Inappropriate estimation of abilities

  2. Closed-mindedness

  3. Lack of tradition of impartial leadership

  4. Lack of tradition of methodological procedures

  5. Pressures towards uniformity

This post covers these 5 reasons why making bold predictions and deviating from the group is hard, and why it can be even harder for underrepresented groups, such as women and minorities, to be truth-tellers within organisations.

1 - Inappropriate estimation of abilities: threading the needle between too much confidence and too much self-doubt

In that research, the strongest factor driving prediction errors was whether the group had an accurate assessment of its own abilities.

Assessing your own abilities can be difficult. The now-famous Dunning-Kruger effect describes how people with lower competence in a skill struggle to assess that skill accurately, and so tend to overestimate how good they are when self-assessing. For an external observer, this false confidence can be hard to distinguish from confidence that is well founded, which sometimes allows lower-competence individuals to rise through the ranks of organisations.

Overconfidence in your abilities can result in complacency in how work is done, which leads to less deliberation and more errors. A healthy amount of self-doubt, by contrast, can drive a higher standard of results (e.g., this 2010 sports psychology study): small amounts of self-doubt push effort levels up and force individuals to scrutinise the quality of their work more closely.

However, this is made more complicated by group settings. If the people around you are agreeing amongst themselves, you will feel pressure to agree with them. Surrounding yourself with people you consider to be high performers is normally a good way of increasing your learning and likelihood of success. However, doing so may also trigger “imposter syndrome” as you compare your performance to your perceptions of your peers' performance. Difficulty internalising positive feedback about your work, biased comparisons with other people (e.g., the tendency to compare their best performance with your everyday performance), and attribution errors about the quality of their work can all amplify feelings of self-doubt. Extreme self-doubt and imposter syndrome can lead to performance anxiety and an unwillingness to take intellectual risks. Self-doubt can also manifest as poor self-efficacy (your belief in your own abilities), which can lead people to give up as soon as they hit roadblocks, driving lower performance and, in turn, reinforcing the belief of low self-efficacy. We don't hear bold predictions from people who underestimate their own abilities, because they tend not to make them.

The best performance appears to come from a finely balanced confidence in your overall ability, paired with a healthy scepticism of any given judgement call - scepticism which drives higher effort and diligence.

2 - Closed-mindedness: our aversion to unlikely events and flip-flopping

There are two forms of closed-mindedness which impact prediction: first, how open you are to considering outcomes that others think are unlikely; and second, how well you respond to new information.

Unlikely outcomes happen all the time. This is partly because lots of things are happening all the time, and events with a 5% chance of happening occur roughly 5% of the time. Unlikely outcomes also occur because they're often less unlikely than the group thinks they are (see the rest of this post). But we under-predict unlikely outcomes because making those predictions feels more intellectually vulnerable.
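
To put a rough number on that (my own illustrative figures, not from any study cited here): if you track twenty independent calls that each have a 5% chance of coming true, the chance that at least one of them happens is about 64%.

```python
# Rough illustration: "unlikely" events add up quickly across many independent calls.
# The numbers (20 events, 5% each) are purely illustrative.
import random

p, n = 0.05, 20

# Exact probability that at least one of the n events occurs
exact = 1 - (1 - p) ** n
print(f"P(at least one occurs) = {exact:.2f}")  # ~0.64

# Quick Monte Carlo sanity check
trials = 100_000
hits = sum(any(random.random() < p for _ in range(n)) for _ in range(trials))
print(f"Simulated estimate      = {hits / trials:.2f}")
```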

One reason for under-prediction of unlikely outcomes is that we tend to anchor our beliefs on what everyone expects to happen, and only then look for information to disprove that view. But we often don't look very hard, because once we've started from that position, confirmation bias helps us affirm it. These biases skew our processes away from being open-minded about unlikely outcomes.

A second reason for these under-predictions is that we make quality assessments based on dispositional factors such as a person's character, status or likeability - a phenomenon known as the fundamental attribution error. We tend to rate the predictions of people we like more highly than those of people we don't. You can even see this in how we describe people who predict unlikely outcomes. Dominic Cummings used to advise the government on a range of topics, and one of his personal “missions” was to introduce more scientific thinking and superforecasting into government. He wanted to do this by hiring more “weirdos and misfits”. This is a classic example of the fundamental attribution error: nothing about being weird or a social misfit makes you produce more effective work or better predictions.

Under-prediction of unlikely events is the flip side of these attribution errors - we associate predictions of likely outcomes with normality, while predicting an unlikely outcome is associated with being abnormal. Unsurprisingly, people don't like to think of themselves as abnormal, and this drives a reluctance to predict unlikely outcomes. It biases us against even considering predictions that might be seen as far-fetched - a kind of closed-mindedness which inevitably worsens our prediction ability.

In addition to the under-prediction of unlikely outcomes, a second form of closed-mindedness is failing to react to new information. Many people make a prediction early on and then keep digging into that position, even if the facts around them change. This entrenchment happens on both sides of debates - and without an effective mechanism for challenging ideas rather than the people who hold them, it leads to increasing polarisation of views. In prediction markets this creates a big divide between permanent doomsayers (in financial markets they are called “perma-bears”) and those who refuse to consider negative outcomes (the so-called “perma-bulls”).

This entrenchment is also clear in politics. Politicians who change their minds worry that they'll be accused of flip-flopping or u-turning: the change in position threatens their identity and implies their previous prediction was an error which only incompetent people make. But the reality is that errors and mistakes are part of making predictions and bold decisions - they're inevitable when you're working in an uncertain environment. And people are more forgiving of flip-flopping than most politicians would expect. When polled about how they see u-turns, the most common response is that “u-turns are normally a good sign, showing that they are willing to listen and change their minds”. YouGov has asked that question 8 times since 2019, and the results have always been the same. Being open-minded and changing your view is almost always less socially costly than people assume - especially in the face of new information.

Both forms of closed-mindedness tend to make bold, accurate predictions harder than they otherwise would be.

3 - Lack of tradition of impartial leadership: leaders that hear what they want to hear

Beyond assessments of your own abilities and assessments of the facts on the ground, the decisions we make are also a function of the environments in which we make them. The last three factors all relate to elements of the environment which push us towards agreeing with each other. 

First, if the leadership of an organisation is not impartial, the people around the leaders will tend to agree with them. Studies have consistently found that the nature of leadership structures affects the behaviour of people in those organisations. Many of the original studies wouldn't be allowed to take place today for ethical reasons, but famously both the Milgram electric shock experiment and the Stanford prison experiment showed that instructions from authority figures went largely unchallenged.

The same behavioural patterns apply to truth-telling in organisations. A study in the Taiwanese military found that more hierarchical, authoritarian figures were told selective truths, with bad news omitted by their direct reports. This can be driven by low psychological safety - employees worry that partial leaders will treat difficult truths as personal criticism, which in turn will lead to repercussions (or, in the extreme, persecution of the employee through personal vendettas). This fear of telling the truth may be learned through employees' lived experiences in these organisations, or it can be driven by purely hypothetical worries. In the latter case, fears about leadership won't be dispelled unless and until employees see others tell difficult truths and face no repercussions.

More recent work has looked at the role of CEOs and leaders as “Chief truth officers”. The idea is that creating an environment where people can be honest and bold is a proactive process. The three recommendations from the paper are for leaders to:

  • Make the role of truth-telling clear & important by emphasising the difference between opinion & fact and creating a decision process which values evidence

  • Role model truth-telling by being bold and direct with external stakeholders

  • Create the right environment to speak up by protecting those who speak up from unfair retribution and rewarding them for their directness

4 - Lack of tradition of methodological procedures: garbage-in / garbage-out processes make bad predictions

Having a clear method for how you reach a decision can be a major protective factor against groupthink and poor predictions. This is true for a few reasons: first, a clear prediction process can be back-tested for reliability; second, a clear process can prevent the biases through which we convince ourselves a false reality is true; and third, openness about the process lets people criticise the method rather than attack the person making the prediction.

If you need to make a bold prediction or disagree with the consensus view, you want to be comfortable that the process you're using to reach an alternative view is accurate and reliable. Defining a clear methodological procedure for coming to a decision, or at least some prediction guardrails, allows you to explore how that process would have performed in past episodes or would perform in hypothetical future scenarios (processes referred to in finance as back-testing and stress-testing). The challenge in many organisations is that designing good versions of these processes is labour-intensive and tends not to be rewarded (unless something goes badly wrong).

People don't focus on rigour until cutting corners results in an adverse outcome. This is true across a wide range of industries: oil & gas companies did not roll out rigorous safety procedures until they had big oil spills, pilots didn't run procedural checklists before take-off until accident rates became unacceptably high, and regulators didn't put banks through intensive stress tests until the financial crisis. The same lack of methodological rigour is rocking the world of academic psychology and behavioural science at the moment: several key results have failed to replicate in the last 10 years, and in other cases big names in the field, such as Dan Ariely (whose book you probably own), appear to have fabricated experimental data. People have a tendency to cut corners when they can, and the easiest way to do that is to look across at a peer in another prestigious firm who is making predictions and tweak your results so they look similar to theirs. Protecting against this requires organisational commitment, consistent incentives to do things properly and a culture where process is reported on transparently and honestly.
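
As a minimal sketch of what back-testing a prediction process might look like (the rule, threshold and historical records below are invented purely for illustration), the idea is simply to write the rule down explicitly and score it against past episodes before relying on it:

```python
# Minimal back-testing sketch: score an explicit prediction rule against history.
# The rule, threshold and historical records are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Episode:
    signal: float   # whatever input the rule uses (hypothetical)
    outcome: bool   # did the predicted event actually happen?


def rule(signal: float, threshold: float = 0.7) -> bool:
    """Predict the event whenever the signal exceeds a fixed threshold."""
    return signal > threshold


def backtest(history: list[Episode]) -> float:
    """Fraction of past episodes where the rule's call matched the real outcome."""
    correct = sum(rule(e.signal) == e.outcome for e in history)
    return correct / len(history)


history = [Episode(0.9, True), Episode(0.2, False), Episode(0.8, False), Episode(0.4, False)]
print(f"Hit rate on past episodes: {backtest(history):.0%}")  # 75% in this toy example
```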

Commitment to a clear methodology is also important because it prevents us from fooling ourselves. In a previous post I wrote about why it's so hard to quit things that feel like they aren't going your way. The two biggest biases are: one, the tendency to attribute failure to external factors and success to your own brilliance; and two, the tendency to extend timelines later on to give yourself more time to succeed. Both also apply to making predictions and bold calls. If things don't go as you predicted, you can always find one-off factors which may have influenced the results. And predictions have a habit of becoming open-ended - something that was supposed to happen within 6 months stretches to 12 months, then 18 months. These behaviours feed into the other challenges we've discussed: extending timelines means people hang onto bad predictions longer than they should and don't update forecasts based on new information, while attribution biases mean people falsely inflate their success rates (or their perceived competence), leading to overestimation problems. A stable methodology protects against these biases. In particular, fixing the timeline and success criteria early in the decision-making process - and setting them in stone - can prevent these biases, or at least make them explicit.
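
One lightweight way to make the “set in stone” step concrete is sketched below; the fields and dates are hypothetical, and the only point is that the deadline and success criteria are recorded up front in a form that cannot be quietly edited later.

```python
# Sketch of a pre-registered prediction: the deadline and success criteria are
# frozen at the moment the call is made. All names and values are illustrative.
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)  # frozen: attributes cannot be reassigned after creation
class Prediction:
    claim: str
    success_criteria: str
    deadline: date
    made_on: date = field(default_factory=date.today)

    def is_due(self) -> bool:
        return date.today() >= self.deadline


call = Prediction(
    claim="Project X ships to all customers",
    success_criteria="Feature enabled for 100% of accounts",
    deadline=date(2024, 6, 30),
)

# Moving the goalposts later raises an error instead of silently succeeding:
# call.deadline = date(2024, 12, 31)  # -> dataclasses.FrozenInstanceError
```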

Finally, having an open and transparent methodology shifts the heat away from people and towards process. Disagreement has a tendency to spiral into personal criticism, especially online: people are more likely to categorise online discussion as criticism than in-person discussion, and genuine online criticism is more likely to be shallow and personal. By contrast, asking someone to engage with and disagree with the methodology you used is a slower and more deliberate process, which is more likely to result in thoughtful dialogue. Transparency about methodology also builds trust in the outcome of your prediction or decision, especially when that outcome is controversial. This is a familiar principle for people who study law, where the principle of open justice (it is as important for justice to be seen to be done as for justice to actually be done) is frequently cited. Similar results are found in psychology, where reactions to decisions are shaped by perceptions of procedural fairness. Being open about how you reached a particular answer can therefore help take the heat out of a controversial outcome.

5 - Pressure towards uniformity: there’s safety in numbers

When Fitch announced their downgrade, there was one group of people who were delighted (or at least relieved). In 2011, after the previous university-essay-crisis-style debt ceiling negotiation ended, Standard & Poor's (S&P, another of the big three rating agencies) downgraded the US credit rating. This caused a big stir, with the stock market tumbling more than 5% in a single day. Much like this month's downgrade, people were unhappy: criticism came from the Obama administration and economists like Paul Krugman, and filmmaker Michael Moore even called for the government to “show some guts” and arrest the head of S&P.

So when Fitch made the decision, over 10 years later, to align their rating with S&P's, David Beers, who used to run the S&P ratings division, came out to bat for his earlier call:

“It’s fair to say that the rating agencies, based on their own criteria, have been pretty timid in their actions…If anything, Fitch’s action is simply confirming what S&P decided back in 2011, and here we are in 2023.”

You can almost hear the tension that he's held since 2011 leave his body. The language Beers uses (“simply confirming”) speaks to validation, after a period in which he presumably felt quite alone. Part of the challenge of making big predictions is that it tends to be isolating. Loneliness and isolation can seriously affect wellbeing and happiness (see my last post on Loneliness) and are often associated with identity struggles. In the case of Beers, a 2011 interview revealed that he saw it as his “responsibility” to be honest about the state of the US government, which may explain how and why he tolerated the attacks that came with the big decision.

Isolation isn't the only reason we tend towards uniformity. Social pressures and incentives both play a big role in our tendency to conform. In the famous Asch line experiments of the 1950s, participants were shown a line and asked which of three other lines was closest to it in length. Participants did this exercise in groups of 8; however, unbeknownst to them, the 7 other “participants” were confederates (psychology lingo for actors hired by the experimenters). Over a series of trials, the confederates began by giving the obviously correct answer, but then switched to unanimously giving an obviously incorrect one. The one true participant answered last, after hearing all 7 agree with each other on an answer that appeared obviously wrong. The main result is that participants frequently went along with the group's incorrect answer. It's worth watching footage of these experiments: you can see the moment participants pause and think before simply agreeing with everyone else. The result is typically explained by the desire to feel part of the in-group and to avoid the social isolation of being different from the norm. This is pretty fascinating given how unimportant and low-stakes the setting is: participants have no reason to believe they'll ever see the others again, and there is no financial or reputational cost to deviating from the group. Yet we still see this social pressure driving their behaviour.

In the real world, the stakes are higher. When your job involves making big decisions or predictions, being alone in a bold call is scary. If you're right, you'll only find out much later down the line. If you're wrong, you'll never hear the end of it. Sticking with the consensus, by contrast, is easy: even if you're wrong, everyone else was wrong too - nobody saw it coming. This asymmetric incentive structure further pressures people to conform. And unlike some of the other challenges identified in this long read, the desire for uniformity is harder to counteract.

Being disagreeable is seen differently depending on who you are 

Finally, it's worth spending some time on why making bold calls can feel riskier for some people than others. Being direct and truthful about the world often involves being disagreeable. In his book “Give and Take”, Adam Grant discusses how the most effective and honest advice tends to come from disagreeable givers: people who are comfortable deviating from the norm, even when it's disruptive or upsetting, but who want to give without necessarily expecting anything back in return. This archetype of the disagreeable giver is frequently seen among the trusted advisors of people in leadership roles.

However, being disagreeable is perceived very differently depending on who you are. Women are expected to be agreeable, and their competence is often judged as contingent on that agreeableness. This means that for many women, the archetype of the disagreeable giver can be close to impossible to attain. The same research found that the more time that passes after an incident, the more agreeable women are expected to be in their opinions about it. This has implications for how companies learn from mistakes in retrospect: if you wait too long to run a retrospective on a project, there will be more pressure on female employees to be positive about how things went. The same result was not found for men, for whom delays in giving disagreeable opinions were more readily tolerated.

Gender isn't the only factor which influences an employee's ability to tell the truth. Tolerance of disagreement with a consensus is also related to how much status an individual has and how stable that status is. If a bold prediction risks a loss of status, individuals will feel threatened and will avoid making it; if, on the other hand, their status is not under threat, higher status makes it easier to make big calls. This mirrors the findings on stress and status stability, which show that participants who feel their status is under threat have higher stress levels and lower performance than those whose status is stable. People from minority groups may feel their status is more under threat because they can see fewer people who look like them who have gained and retained that status. Since the perceived (and sometimes real) cost of disagreement is higher, “disagreeable truth-telling” is more of a burden on those from minority backgrounds.

Final thoughts: some actions that can encourage truth-telling 

Accurate forecasting and prediction can be particularly thankless tasks, especially when your predictions deviate from the consensus view. We have discussed how cognitive biases and group dynamics encourage over-concurrence. Understanding the causes of groupthink can help us identify tactics to encourage truth-telling:

  • Increasing psychological safety in teams can lower the perceived stakes of disagreement. This can be done through deliberate exercises, by leaders being emotionally vulnerable with their teams, or by making regular time for shared moments of reflection and learning.

  • Creating explicit space & permission for people to review decisions and disagree with them enables people of all backgrounds to deviate from a group consensus without feeling status threat. One option is to run formal “red team reviews”, where a second team (unrelated to the first) makes an independent assessment of a decision once the majority of the work has been done but before the decision is final.

  • Building an objective measurement process can avoid the methodological biases which encourage groupthink. One good way to keep yourself honest is to set success or kill criteria for a decision or project which cannot be moved or changed. Part of what I do with Uncover is separating the identification of problems from the measurement of whether solutions are working. This separation helps us avoid over-committing to solutions which aren't getting results.

However, even if you do all of these things, groupthink is still challenging to avoid. Part of the problem is that the world is uncertain. Crucially, it is uncertain in two ways - aleatoric uncertainty and epistemic uncertainty. The former describes randomness that we can readily model and understand (e.g., a fair coin toss has a 50% chance of coming up heads, but the outcome is uncertain until we actually toss the coin). People are pretty comfortable with aleatoric uncertainty: you can characterise it better by collecting more data or building a better model of the world.

Epistemic uncertainty is trickier: the risk may be unknowable, or at least very difficult to quantify. It is also harder to mitigate - more modelling or research is unlikely to help very much, because it is a measure of how well we understand the system, rather than how likely something is to happen within that system. When making predictions about the world, we tend to assume the problem only has aleatoric uncertainty, when in reality there is often quite a lot of epistemic uncertainty. We want to believe that the answer is knowable, so when others come up with more “rigorous” ways to understand the world, we want them to be correct. This makes the problem of groupthink worse and raises the perceived cost of failure - “you didn't do enough modelling or analysis to predict the probabilities correctly”. Addressing epistemic uncertainty requires you to think differently about the problem, which is a more vulnerable and challenging exercise.
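
A toy way to see the difference (with invented numbers): within a fixed model, re-running a simulation only wobbles by sampling noise - that is the aleatoric part. But if the key parameter of the model is itself uncertain, plausible alternative models give wildly different answers, and no amount of re-running the original model reveals that - which is closer to the epistemic part.

```python
# Toy contrast between aleatoric and epistemic uncertainty (illustrative numbers).
import random

def chance_of_at_least_one(p: float, n_events: int = 20, trials: int = 50_000) -> float:
    """Monte Carlo estimate of P(at least one of n_events occurs), each with probability p."""
    hits = sum(any(random.random() < p for _ in range(n_events)) for _ in range(trials))
    return hits / trials

# Aleatoric: with the model fixed (p = 0.05), repeated runs differ only by sampling noise.
print("assumed p = 0.05:", round(chance_of_at_least_one(0.05), 2))   # ~0.64

# Epistemic: if we don't actually know p, plausible models disagree far more than that noise,
# and re-running the same model more times never exposes the disagreement.
for p in (0.01, 0.05, 0.15):
    print(f"if true p = {p}:", round(chance_of_at_least_one(p), 2))  # ~0.18, ~0.64, ~0.96
```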

Ultimately, as a leader you have to be able to tolerate both types of uncertainty. And if, after the fact, it becomes clear that there was a high degree of epistemic uncertainty, you have to be willing to learn and rethink how you approach those decisions and predictions in the future. The fact that you will face uncertainty is a certainty. Being willing to make bolder calls can help you prepare for different outcomes, but it cannot fully remove the need to respond to randomness in the world.
