Cognitive bias: The hidden psychology of decision-making

Podcast episode
Garreth Hanley:
This is INTHEBLACK, a leadership, strategy, and business podcast, brought to you by CPA Australia.
Tahn Sharpe:
Welcome to the INTHEBLACK podcast. I'm Tahn Sharpe, editor of INTHEBLACK at CPA Australia. In this episode, we're diving into the topic of cognitive bias, which is often overlooked but deeply influential across so many professional sectors. We all have inherent mental shortcuts that help us make decisions quickly, but they can quietly undermine objectivity, especially in professions built on precision, trust, and ethical responsibility.
Joining us is Dr. Victor Goh Weng Yew, who has a PhD in applied psychology and is an associate professor in the Department of Psychology, as well as the acting dean, at HELP University in Malaysia. Dr. Goh is a respected voice on how biases like confirmation bias, anchoring, and overconfidence can affect judgement, even in data-driven fields like accounting and finance.
We'll explore how these biases show up in everyday work, what professionals need to know about their ethical implications, and how the rise of AI and automation adds a new layer of complexity to decision-making. Welcome, Dr. Goh.
Victor Goh:
It's a pleasure to be here, Tahn.
Tahn Sharpe:
Great. So, let's set the table for today's discussion. To begin with, could you explain what cognitive biases are and why they're such a powerful force in how we make decisions in the workplace?
Victor Goh:
Right. That's a fantastic question. And in fact, you've already touched on some of them when you mentioned anchoring bias and confirmation bias. But basically, cognitive biases are a large family of flaws in thinking that most humans are susceptible to. Now, they're not necessarily a bad thing. In fact, cognitive biases exist because heuristics exist. And heuristics exist because they allow our human brains to process information very efficiently, without taking a lot of time or mental resources, and still come up with relatively valid, relatively accurate answers.
And it is these heuristics, actually, that enable us to get through life, whether in personal relationships or professional careers. They allow us to process information very efficiently and to look at the world in a much simpler way. The problem is when a heuristic leads us to flawed decision-making, and that's when it becomes known as a cognitive bias.
Tahn Sharpe:
Okay. So, let's delve into how those heuristics manifest themselves. In your experience, what are some of the most common cognitive biases that show up in high-pressure or detail-oriented professions like accounting and finance?
Victor Goh:
From my experience consulting with multiple organisations, especially those in accounting and finance, the most common would be the anchoring bias, confirmation bias, loss aversion as a heuristic, and the overconfidence bias. So, these are four of the more common ones. If I were to add a fifth, it would be the availability heuristic.
We see this happen a lot, especially in high-pressure fields, where people feel they cannot make a mistake. When you cannot make a mistake, that's when confirmation bias, anchoring bias, and the availability heuristic come into play much more, compared to other professions or fields that are more forgiving of mistakes and where the pressure is a little lower.
Tahn Sharpe:
Makes sense. You've mentioned confirmation bias a few times, which is often discussed in an accounting context. It's where people seek out information that, I guess, supports what they already believe. Indeed, it's a bias that algorithms feed on and perpetuate in some ways. But could you explain how confirmation bias might influence someone's judgement and what the consequences could be at work?
Victor Goh:
So, just to recap, confirmation bias is the tendency to seek out information that confirms our pre-existing beliefs. And this kind of bias can have severe impacts at work, because the individual under that bias not only seeks out information that confirms their belief, but also tends to disregard information, no matter how clear or how well-sourced it is, if it disproves their worldview or goes against what they believe to be true.
And in a field such as accounting and finance, that can be quite a big issue, simply because if you're only looking for what you believe to be true, then you are no longer working based on evidence. You're working primarily from your perception and your emotions. And in accounting and finance, if you don't operate based on evidence, the consequences could include financial losses. In fact, in some cases, it could also lead to financial fraud, simply because you are disregarding information that doesn't fit your particular perspective or worldview.
Tahn Sharpe:
Another one that you mentioned there is anchoring bias where people rely too heavily on the first piece of information they receive. How does this bias typically affect decision-making? And what are some of the ways to counteract it?Victor Goh:
So, as an applied psychologist, anchoring bias is one of the most interesting phenomena we study in the workplace. Anchoring bias is basically the human tendency to pay too much attention to the first number we hear; we tend to negotiate around that number. That's why it's called the anchoring bias, because the first number we hear anchors us to that particular value. Even when we negotiate, or when we look for new data, the new negotiation revolves around that first number.
So, for example, how this works in real life is, if you're buying a secondhand car and the salesman tells you the car is being sold for 20,000 Australian dollars, your counteroffer will revolve around that first number. Instead of counteroffering, say, 5,000 Australian dollars, you try to make your counteroffer as close as possible to the first number you've heard.
How does this affect decision-making? Similar to other cognitive biases, you're no longer operating based on facts; you're operating based on a heuristic. Why does this happen? Anchoring gives us a point of reference in a world where value is often unclear. For example, how much is a car worth? How much is a coffee worth? We don't really know how to place a value on these things. That's why the anchoring bias works so well: if a coffee cost me $3 today, it shouldn't cost me $9 tomorrow. It's anchored at $3 already.
The problem is when you are not aware of the anchors you are using. If somebody comes in and drops an anchor, a value that is way above or way below the actual value of something, you tend to operate around that first number, even if it was inaccurate in the first place. And that's dangerous, especially in the world of AI, because the algorithms that AI uses are based on the data we've fed it. If our data itself is susceptible to anchoring biases, we are giving wrong information to the AI, and that can perpetuate anchoring biases right down the chain.
Tahn Sharpe:
You can also imagine, I mean, when you put it in that context, how central anchoring bias could be in sales tactics.Victor Goh:
Exactly. In fact, when we do a deep dive and an analysis on the performance of the most successful salespeople, what makes them so successful is that they understand how the anchoring bias works. If you meet any successful salesperson, they will never let you drop the anchor first. They will always try to be the first to throw out a number, because they know it anchors whoever they're talking to around that value. The customer then doesn't deviate too much from it, so it safeguards their own value proposition as well.
Tahn Sharpe:
It's a good takeaway for my next negotiation. Now, finance professionals, I guess, often pride themselves on being objective and data-driven. But how can overconfidence bias sneak in, especially among experienced professionals?
Victor Goh:
I think overconfidence bias is a plague across all professional fields, not just finance and accounting. But in this particular case, it's especially dangerous, because the overconfidence bias feeds off the other cognitive biases we've mentioned. For example, if you're not aware that you have a tendency towards confirmation bias, or that an anchoring tactic has been used against you, then your overconfidence exacerbates the problem.
You believe in your ability, your knowledge, the accuracy of your own predictions and valuations, but you're not aware that they have already been influenced by a confirmation bias or an anchoring bias. So, in the grand scheme of things, the overconfidence bias is actually a lot more dangerous, because it exacerbates the harmful effects of all the previous biases we've spoken about.
Tahn Sharpe:
We’ll be back with more in just a moment — but first, a quick word from CPA Australia.
Are you navigating the ethical use of AI in finance? AI and Ethics for Malaysia is a new micro-credential designed specifically for accounting and finance professionals in Malaysia, featuring expert speakers, case studies and examples. This course provides a comprehensive exploration of the ethical considerations essential for the responsible use of AI technologies in the workplace.
Available now through CPA Australia, the link is in the show notes — enrol today and lead with confidence. Now, back to our conversation with Dr Goh. So, let's talk about some of the interpersonal work environment factors. From a psychological perspective, what role does organisational culture play in either reinforcing or reducing the impact of bias in teams, especially when people are under pressure to meet deadlines or targets?
Victor Goh:
So, that's a great question, and organisational culture plays a huge role. In fact, organisational culture sets the psychological dynamics of the workplace. I can share several examples from my own experience. If you work within an organisational culture that's really tense, one that favours fast decision-making and rushed deadlines, then the tendency for confirmation bias to kick in becomes a lot stronger compared to a culture that's more relaxed about timelines. If it's tense and you're always rushing, you tend to only want information that confirms your point of view, because you don't have the time to process counter-information that goes against it. So, that's one example.
Another example of how organisational culture can affect bias is from when I worked with a Japanese firm. Japanese culture is very interesting because it's very collectivistic, and people tend not to disagree with each other. That leads to an overconfidence bias, because if everybody is agreeing, everybody's on the same page, people tend to feel their decisions are definitely correct. How the Japanese came up with a solution to this is actually quite brilliant. They understood that they have an organisational culture that's very collectivistic.
And therefore, in every single meeting, what they would do is they would purposely appoint a devil's advocate. And that person's job is simply to point out mistakes, to point out weaknesses, to point out potential issues with the group's decision-making. And that avoids the overconfidence bias, that avoids the confirmation bias, simply because that's one person whose only job is to actually disagree, is to actually point out weaknesses. So, these are aspects of organisational cultures that can then affect the impact of bias in your team itself.
Tahn Sharpe:
It makes sense, doesn't it, to have someone there to play devil's advocate who can sort of guard against that collectivism and make everyone question some of their assumptions.
Victor Goh:
Yes. And we see this happen a lot in most organisations, not just collectivistic cultures. We see this happen in the US and in Europe as well, especially when you have a really charismatic leader. If you have a very charismatic boss or CEO, it's even more important to understand that the more charismatic your leaders are, the higher the tendency towards confirmation bias, and towards the other cognitive biases that may exist within the team itself. And how do we combat that? By making sure the structures and processes are in place.
Tahn Sharpe:
Yeah. I imagine there are other ways to guard against that as well. And I guess diversity is one of them, to help guard against groupthink.
Victor Goh:
Right, you're correct. Diversity of opinions, diversity of backgrounds. People who come from different backgrounds normally have different ways of thinking, so they are naturally prone to point out issues that may not be as obvious or as salient to others from the same background. Diversity is actually a great strength that most organisations should leverage, instead of seeing it as just something they need to do.
Tahn Sharpe:
Now, I just want to ask about AI. The rise of automation and AI in finance has been pretty clear in the last year or two, everything from algorithmic trading to automated auditing. How might cognitive biases still influence outcomes even when machines are involved? From an ethical standpoint, what are the risks when cognitive biases go unchecked in professions that deal with public trust, like accounting and finance? And how can individuals and organisations create accountability around this?
Victor Goh:
So, there are two ways, I would say, that cognitive biases can still influence outcomes, even with the involvement of machines or AI. The first is in what kind of information we are feeding the AI in the first place. We have to always understand that AI doesn't exist in a vacuum. How powerful your AI model is depends heavily on the kind of data you are feeding it.
And as I mentioned briefly earlier, if you are feeding in flawed data, data that's already been influenced by the anchoring bias, the availability heuristic, the confirmation bias, or loss aversion, then you have to understand that because the data set itself is flawed, the AI can't detect that, and it will just give you solutions, recommendations, or numbers based on that flawed data. So the quality of the data you're feeding the AI is one major issue.
And the second issue is that the AI models we have right now, at least the ones I'm aware of on the market, suffer from a black box problem. What is the black box problem? It's that we don't understand how the AI algorithms came up with their solutions, their answers, their suggestions.
And that's even more dangerous, because if you don't know how the AI came to those conclusions, the anchoring bias becomes even more salient. The AI is just going to suggest a number or a particular value, but you have no idea how it arrived at it. Yet even though that value has no visible logical or scientific backing, you are going to be anchored by it.
So, if the AI for any reason hallucinates and gives you a completely unrealistic number, you are still going to be affected by the anchoring bias based on a completely illogical number. The difference between asking a human and asking an AI for solutions is that with the human, you can at least question their logic and see how they came up with that number. With an AI, because of the black box phenomenon, you can't, and that makes the anchoring bias a lot more dangerous.
Tahn Sharpe:
Right. That makes sense, then, of the movement towards what I believe is called glass box AI, correct me if I'm wrong, so that you can see where those biases came from and guard against systemising them.
Victor Goh:
Yes. But even with glass box AI, that still doesn't solve the first problem I mentioned, which is the kind of data sets you're training it on. If the data set itself is flawed, even with glass box AI you're still going to get wrong answers, answers that have been heavily influenced by the cognitive biases of the people who fed the data into the model.
So, when you ask about the ethical standpoint, what the risks are, I believe I've mentioned those already. But on your second question, Tahn, about how individuals and organisations can create accountability around this, I would say the first and most important thing companies can do is understand that these kinds of biases exist. Because if you don't know they exist, it's very hard to guard against them.
And secondly, one of the biggest things we have done, at least in our part of the world, is to reframe how organisations see work output, how they embrace failure, how they see mistakes. I think that's really important, because if we can reframe how we approach mistakes and how we see the value of our work, then we are less shoehorned, less pressured, into never making mistakes and into always coming up with the accurate answer.
And sometimes it is that pressure. It is that pressure to find accurate answers that leads people to rely on AI, simply because they believe AI is more accurate than they can be. It also leads people to have these kind of confirmation biases or these kind of availability heuristics, simply because they're afraid of making the wrong decisions. They're afraid to come up with the wrong answers.
And we see this happen in organisations quite a bit. Organisations, especially in the US, that are open to failure tend to produce more accurate answers, whereas organisations that place an over-emphasis on being accurate tend to produce more inaccurate answers, because of the culture those expectations create.
Tahn Sharpe:
And to finish, Dr. Goh, maybe we can touch on the individual level. What are some practical steps that individuals can take to build awareness of cognitive bias and reduce its impact on decision-making? Are there any tools or habits that you'd recommend?
Victor Goh:
One habit I recommend for all professionals is to bear in mind that if everything looks great to you, that's when you should be most cautious. If every data set seems to confirm your beliefs, that's when you need to be even more careful. The idea is that the world is rarely ideal, and if something really does look ideal, you have to start asking, "Am I missing something? Is my perspective too one-sided?" That's one thing individuals can do to safeguard themselves against cognitive biases.
Another fantastic tool is to always have a devil's advocate. No matter how aware you are, there are still blind spots. If you have the practice of approaching a colleague or a trusted companion, giving them your data and your conclusions, and asking them to critique both, that is one of the most effective ways of uncovering cognitive biases you may not be aware you have.
Tahn Sharpe:
Great advice. That's it for today's episode of INTHEBLACK. A big thank you to Dr. Goh for helping us unpack the psychology behind cognitive bias and for offering practical insights into how finance professionals can stay objective, ethical and aware in an increasingly automated world.
Victor Goh:
Oh, thank you very much for having me, Tahn. It's a pleasure.
Garreth Hanley:
For our listeners eager to learn more, please check out the show notes for links and more information. And don't forget to subscribe to INTHEBLACK and share this episode with your colleagues and friends in the business community. Until next time, thanks for listening.
To find out more about our other podcasts, check out the show notes for this episode. And we hope you can join us again next time for another episode of INTHEBLACK.
About the episode
Cognitive bias. It quietly shapes many of the decisions made in accounting and finance – often without you realising it.
This unintended bias can become a risk if left unchecked – especially in professions built on precision, trust and ethical responsibility.
Drawing on real-world examples, this episode explores the diverse types of cognitive bias, and how they can impact accounting and finance professionals in our day-to-day work.
Key learnings include:
- Why cognitive bias exists and when it becomes a professional risk
- How common biases affect financial judgement
- The ethical consequences of bias in accounting and finance
- How organisational culture can reduce or reinforce biased thinking
- Practical steps individuals can take to challenge assumptions and improve decisions
- How the rise of AI and automation adds a new layer of complexity to decision-making
The discussion also shows how biases such as confirmation bias, anchoring and overconfidence quietly influence decisions, often without conscious awareness.
Host: Tahn Sharpe, Editor, CPA Australia
Guest: Dr Victor Goh Weng Yew, who holds a PhD in applied psychology and is an Associate Professor in the Department of Psychology at HELP University in Malaysia, where he is also the acting dean.
For more, head to HELP University online and to its page on LinkedIn.
Loving this podcast?
Listen to more INTHEBLACK episodes and other CPA Australia podcasts on YouTube.
CPA Australia publishes four podcasts, providing commentary and thought leadership across business, finance, and accounting:
Search for them in your podcast platform.
You can email the podcast team at [email protected]
Subscribe to INTHEBLACK
Follow INTHEBLACK on your favourite player and listen to the latest podcast episodes