Responsible AI explained: What it means and why it matters

Podcast episode
Garreth Hanley:
This is INTHEBLACK. A leadership strategy and business podcast, brought to you by CPA Australia.
Jacqueline Blondell:
Welcome to CPA Australia's INTHEBLACK podcast. I'm Jacqueline Blondell. Today we're diving into AI governance and what responsible AI means. To enlighten us is responsible AI pioneer Dr. Rumman Chowdhury, who was shaping the field of AI ethics and governance before it became a global imperative. She is the CEO and co-founder of Humane Intelligence, a nonprofit advancing community-driven AI auditing and evaluation. As a US science envoy for artificial intelligence, she engages in global AI governance efforts with a focus on emerging markets. Previously, she led AI ethics teams at Twitter and Accenture, pioneering enterprise-level AI risk mitigation tools. Rumman is speaking at this year's CPA Australia Congress.
Welcome to INTHEBLACK, Rumman.
Rumman Chowdhury:
Thank you so much for having me.
Jacqueline Blondell:
Well, first I want to kick off with the fact that the AI genie is well and truly out of the bottle. Is it too late to create a responsible AI environment?
Rumman Chowdhury:
Oh, absolutely not. AI is actually still in the very, very early days of deployment. And contrary to what the media and some of the AI hype may be presenting, AI is really not in as common use as it could be. And also, the genie can be put back in the bottle if we would like.
Jacqueline Blondell:
So what are the tenets of responsible AI? What needs to be done to create that environment?
Rumman Chowdhury:
That's a great question. So first and foremost, one thing I do like to differentiate is responsible AI from AI for good. I think the two often get conflated. So AI for good is a charitable use case of AI. Responsible AI is building AI from the ground up to have good data hygiene and ethical data practices, ensuring appropriateness of use and enabling the right stakeholders to be in the room, including impacted parties, developing ongoing methods of test and evaluation, including model observability and transparency practices, and then finally ensuring accountability and clear transparency and auditability of what you've done.
Jacqueline Blondell:
So could we look at the flip side? What happens when these things don't happen?
Rumman Chowdhury:
Well, I mean, we've seen some important cases in the headlines over the past few years where these AI models were deployed in a way that is harmful or discriminatory. As an example, in the United States, there was an algorithm that allocated kidney transplants to people in need, and it reflected the unfortunate inherent biases against patients of colour and downgraded individuals with the same presenting conditions who happened to be Black. We've seen it manifesting in things like job applications, where women are not favoured because of algorithms picking up ingrained biases, and so on and so forth. We've had facial surveillance algorithms that misclassified individuals, in particular people of colour. Again, we're just seeing the biases that exist in society manifesting in technology.
Jacqueline Blondell:
Does a lot of work and effort have to go into making sure these biases aren't cropping up in a system?
Rumman Chowdhury:
Absolutely. There is a deep field of responsible AI. There are, fortunately at this point, practitioners who have been doing it for some years, and there's a lot of expertise and experts to tap into.
Jacqueline Blondell:
Can you talk about your experience at Twitter, and the difference in how its cropping algorithm treated men and women? I think that's a really good example of...
Rumman Chowdhury:
Yeah, this is a great... I love this example, actually, because it is an end-to-end example of how a company faced with a challenge of unethical or irresponsible use of AI, again unintended, addressed it and actually did something meaningful about it. Just in terms of the timeline, this happened around, I believe, the fall of 2020 into 2021, if I remember correctly. And some of this actually predates my time at Twitter. The other half of it comes after I joined, and it was one of the reasons I joined. So in the fall of, I believe, 2020, users on Twitter noticed that the automatic cropping algorithm would tend to favour light-skinned faces over dark-skinned faces, and sort of the citizen data science that is Twitter kicked in. And again, this is Jack Dorsey's Twitter, not Elon Musk's X. People would take photos and images, et cetera, see how they were cropped, and they tested it based on both race and gender.
And they seemed to find that the algorithm tended to crop women differently from men, something called the male gaze, where women were cropped at their chest and men were cropped at their face. And it would, again, as I mentioned, favour light-skinned people over darker-skinned people. And the Twitter leadership was actually quite engaged online about it. They promised to take action, and then, enter my team.
So I had just joined Twitter, and one of the first tasks my team had was to actually conduct an audit of this algorithm. Now, this was based on an open-source and publicly available algorithm that used human eye-tracking data to, basically, make a heat map of the popular sections of a photo. So you imagine thousands of people; they were all shown a range of photos, and cameras were tracking what they instinctively looked at, creating a heat map that could then be projected onto really any image.
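For readers who want a concrete feel for the mechanics, here is a minimal sketch of saliency-based auto-cropping. It is an illustration only, not Twitter's actual model: OpenCV's spectral-residual saliency (from the opencv-contrib-python package) stands in for a heat map learned from human eye-tracking data.

```python
# Illustrative sketch of saliency-based auto-cropping. NOT Twitter's
# actual algorithm, which was trained on human eye-tracking data;
# spectral-residual saliency stands in for that learned heat map.
# Requires: pip install opencv-contrib-python numpy
import cv2
import numpy as np

def saliency_crop(image_path: str, crop_w: int, crop_h: int) -> np.ndarray:
    """Crop a fixed-size window centred on the most salient point."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)

    # Compute a per-pixel saliency heat map (values in [0, 1]).
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, heat_map = saliency.computeSaliency(image)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Take the hottest pixel as the crop centre.
    y, x = np.unravel_index(np.argmax(heat_map), heat_map.shape)

    # Clamp the window so it stays inside the image bounds.
    h, w = image.shape[:2]
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    return image[top:top + crop_h, left:left + crop_w]
```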
And so we audited the model, and we did find that there was a preference for younger, lighter-skinned women in terms of which faces... We did not necessarily find evidence of the male gaze; we didn't necessarily find the model tended to crop women a particular way versus men, but it was an interesting insight. The company's decision, and this is critically important, was to put the genie back in the bottle, to say, "We are not going to use this algorithm anymore. We don't really need it. Clearly it's causing harm, because it is unfairly cropping individuals out of photos or cropping in an inappropriate manner."
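As a toy version of the kind of pairwise test described here, the sketch below stacks two face images into one frame, asks which half the saliency peak lands in, and tallies the "wins" per demographic group. The image pairs and group labels are hypothetical inputs, and this is not the actual audit methodology.

```python
# Toy pairwise bias audit, in the spirit of the tests described above.
# Not the actual audit methodology; image pairs and group labels are
# hypothetical inputs supplied by the auditor.
# Requires: pip install opencv-contrib-python numpy
import cv2
import numpy as np

def peak_is_in_top_half(stacked: np.ndarray) -> bool:
    """True if the saliency peak falls in the top half of the image."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, heat = saliency.computeSaliency(stacked)
    if not ok:
        raise RuntimeError("saliency computation failed")
    y, _ = np.unravel_index(np.argmax(heat), heat.shape)
    return y < stacked.shape[0] // 2

def selection_rate(pairs: list[tuple[np.ndarray, np.ndarray]]) -> float:
    """pairs: (group_a_face, group_b_face) images of equal width.
    Returns how often group A's face wins the crop; parity is ~0.5."""
    wins_a = 0
    for img_a, img_b in pairs:
        # Run both orderings to cancel out any top/bottom position bias.
        if peak_is_in_top_half(np.vstack([img_a, img_b])):
            wins_a += 1
        if not peak_is_in_top_half(np.vstack([img_b, img_a])):
            wins_a += 1
    return wins_a / (2 * len(pairs))
```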
And we pulled the algorithm. What was very interesting is that, overwhelmingly, people liked the change, and a particular niche community, specifically photographers, was incredibly happy. If you are a professional or an amateur photographer, you know that you spend a lot of time framing your shots, and the professional photography community had been very upset that our algorithm would crop their very intentionally framed photos. So yeah, it's a great example of how a company took what could have been a PR nightmare and not only turned it into a positive use case, but also made it an example of how AI is not inevitable.
Jacqueline Blondell:
That's brilliant to know. Let's turn to sovereign AI. Can you explain what it actually is and some of the initiatives that are happening in countries like Singapore, Malaysia and India, and how they're ensuring effective AI governance?
Rumman Chowdhury:
Yes. So sovereign AI is sort of a new global trend, geopolitically, where countries are trying to build their own homegrown AI models. What does that mean? It means owning everything from the data centres to the data to the models, and having at-home talent building, testing and deploying them. Singapore is actually leading the pack: they have a model called SEA-LION that has been in development for some years, and other countries, like Korea and Japan as well as a few others, are following suit.
Jacqueline Blondell:
How does sovereign AI work with the big AI companies that are already out there?
Rumman Chowdhury:
Yeah. Well, my nonprofit Humane Intelligence does test and evaluation of generative AI models. One of the things we did last year was a comprehensive evaluation of multiple models, working with the Singapore government and nine other countries to test how these models performed. And when I say these models, one of them was SEA-LION, and then also sort of the major AI model companies. And when you ask how they perform, it's a really great question, the answer to which is: it depends on what you mean by "perform", right? So, of course, the models that have been around, that have been in development for almost a decade at this point by big Silicon Valley companies with hundreds of billions of dollars of deep pockets, perform quite well across a wide range of generalised metrics.
But if we're going to talk about specificity as it relates to cultural relevance, potential ingrained biases, again related to culture or language or even linguistic performance, the native language models can be built to perform better.
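To give a sense of what such a comparison involves at its simplest, here is a minimal sketch of a cross-model, cross-language evaluation loop. The model callables and exact-match scoring are hypothetical placeholders, not Humane Intelligence's actual methodology; real evaluations use far richer, often human-scored, rubrics.

```python
# Minimal sketch of a cross-model, cross-language evaluation loop.
# The model callables and exact-match scoring are placeholders, not
# Humane Intelligence's actual methodology.
from typing import Callable

Prompt = dict  # {"text": str, "language": str, "expected": str}

def evaluate(models: dict[str, Callable[[str], str]],
             prompts: list[Prompt]) -> dict[str, dict[str, float]]:
    """Score each model's exact-match accuracy, broken down by language."""
    scores: dict[str, dict[str, float]] = {}
    for name, ask in models.items():
        by_lang: dict[str, list[bool]] = {}
        for p in prompts:
            hit = ask(p["text"]).strip() == p["expected"]
            by_lang.setdefault(p["language"], []).append(hit)
        scores[name] = {
            lang: sum(hits) / len(hits) for lang, hits in by_lang.items()
        }
    return scores
```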
Jacqueline Blondell:
Well, that's great to know. So do you think more countries are going to move towards sovereign AI, and how will that affect the big AI companies?
Rumman Chowdhury:
So the short answer is yes, there is a big push. I think most recently it was Korea making the headlines for making a pretty significant investment in building out their homegrown AI models. India is also following suit. A lot of countries are framing these as global competitions to encourage startups to start building large language models in their native languages. What does that mean for the bigger AI model companies? I think maybe it has spurred a bit of an incentive to focus on languages other than English. Maybe it's an incentive to focus on different types of cultural nuances or geographic nuances or linguistic nuances. And I do think that there is an appreciation that AI is truly global.
Jacqueline Blondell:
Let's drop down from the geopolitical and global to the personal. Do you think you can make AI friends?
Rumman Chowdhury:
Like AI companions, like actual friends?
Jacqueline Blondell:
Yes.
Rumman Chowdhury:
This is a pretty fraught question. So I was a fellow at Harvard's Berkman Klein Centre, and I held a course on intelligence as a social, scientific, political and economic construct. And one of the things we talked about is this rise of AI companions, AI friends, and this actually predates the current concern about AI psychosis. The short answer is, in my opinion, no. I think, what is a friend, and what does it mean? It's rather subjective. I think it is... fundamentally, for human beings, this is quite dangerous: the idea that we want to make friends with things that don't actually understand or appreciate or know what friendship is. AI is not alive. It is not a friend. It does not feel anything. It simply mimics; it is programmed to say the right kinds of words that look like the words a human being would say, but an AI does not care for you.
Now, that being said, one can think of many examples, like therapeutic purposes, where individuals who don't have access to care, or to the right kind of, let's say, therapy, might actually benefit, or maybe those who are afraid to go to therapy might actually benefit. I do draw the line at saying friends. Right? Because if we are creating a society, in many cases because of technology, in which people are unable to actually make friends with other human beings, we have a deeper, more fundamental problem. And the answer to that is not simply sticking a band-aid on it and saying, "Make AI friends."
Jacqueline Blondell:
Maybe it's one of those "put the genie back in the bottle" situations if that evolves too far.
Rumman Chowdhury:
Yeah, absolutely, absolutely.
Jacqueline Blondell:
Now, to paraphrase Star Trek, because I believe you are a Trekkie. Is that right?
Rumman Chowdhury:
I am, I am.
Jacqueline Blondell:
Do you believe AI is the final frontier? I actually asked a few AI agents, and they said no. What do you think?
Rumman Chowdhury:
I think the human capacity for expanding our knowledge is infinite. I think there was a time when you could have said the internet was a final frontier. You could have said television was a final frontier. You could have said radio was a final frontier, because all of these things, when they were launched, were unfathomable technologies. I mean, the telephone, right? Now, these are things that we take absolutely for granted. So no, I don't think AI is the final frontier. I think, again, the human capacity for expanding our knowledge and being clever and creative with the tools that we have is something we human beings ourselves cannot fathom.
Jacqueline Blondell:
A lot of the discourse around AI in the civilian population has been quite dystopian. "My job is going to go. I'll be replaced." What's the good news about the future of AI?
Rumman Chowdhury:
Well, the good news is that people are increasingly smart about how they interact and engage with technology. I think there are actually a lot of quite brilliant people working on creating ethical AI, and the average consumer is not just blindly accepting what companies are telling them. And I think that's always a very, very positive thing when that happens. I think there's also a healthy amount of scrutiny and oversight from a wider range of people, including governments, though not always; civil society is quite strong in this case. And I don't think it's an inevitability that we will arrive at this AI dystopia. I do agree with you that there tends to be this dystopian narrative that we seem to be drawn to, and maybe it's the human taste for the macabre. We love to be scared, but I hope that in doing so, we don't end up building the world that we don't want to see. I think it is very important that we create a positive vision for the future we want to aspire to, one that includes AI. And while these dystopian stories might be fun bedtime stories, they're not going to be the reality that we actually build.
Jacqueline Blondell:
That is very, very good to hear. Thanks so much, Rumman, for your fascinating insights and for joining us today.
Rumman Chowdhury:
Thank you for having me. I really appreciate it.
Jacqueline Blondell:
For our listeners eager to learn more, please check out the show notes for links to Rumman's website and the CPA Australia Congress 2025 website. And don't forget to subscribe to INTHEBLACK and share this episode with your friends and colleagues in the business community. Until next time, thanks for listening.
Garreth Hanley:
To find out more about our other podcasts, check out the show notes for this episode. We hope you can join us again next time for another episode of INTHEBLACK.
About the episode
AI is advancing fast – but is it responsible?
In this must-listen episode, global AI expert Dr Rumman Chowdhury shares what you need to know about building AI systems that are fair, transparent and future-ready.
Key insights and questions include:
- Is it too late to create a responsible AI environment?
- What defines responsible AI?
- Managing AI bias and ethical risk
- The rise of sovereign AI
- Workforce disruption: preparing for AI-driven change
- Is AI the final frontier for humans?
Listen now.
Host: Jacqueline Blondell, Content Editor, CPA Australia
Guest: Dr Rumman Chowdhury. She is CEO and co-founder of Humane Intelligence, a nonprofit advancing community-driven AI auditing and evaluation. As the US Science Envoy for AI, Rumman engages in global AI governance efforts with a focus on emerging markets. Previously, she led AI ethics teams at Twitter and Accenture, pioneering enterprise-level AI risk mitigation tools.
Rumman is speaking at this year's CPA Australia Congress.
You can learn more about Rumman at her website.
Would you like to listen to more INTHEBLACK episodes? Head to CPA Australia’s YouTube channel.
And you can find a CPA at our custom portal on the CPA Australia website.
CPA Australia publishes four podcasts, providing commentary and thought leadership across business, finance, and accounting.
Search for them in your podcast platform.
You can email the podcast team at [email protected]
Subscribe to INTHEBLACK
Follow INTHEBLACK on your favourite player and listen to the latest podcast episodes