Public sector AI governance: risks and opportunities

Podcast episode
Garreth Hanley:
This is With Interest, a business, finance, and accounting news podcast brought to you by CPA Australia.
Ram Subramanian:
Welcome to With Interest. I'm Ram Subramanian, external reporting lead in the policy and advocacy team at CPA Australia. In today's episode, we are talking with Michael Davern FCPA, professor of accounting and business information systems at the University of Melbourne, and Michael's here to talk about the role of AI in the public sector.
CPA Australia recently released a report on AI governance in the public sector, which you can find in the show notes. Today, Michael is going to talk about the importance of good governance for AI when it is used by public sector organisations. Welcome to With Interest, Michael.
Michael Davern:
A pleasure to be here, Ram.
Ram Subramanian:
Okay. To start off, given the proliferation of AI across all walks of life, I'll start with the presumption that the use of AI in the public sector is a done deal, and the recent announcements from government seem to suggest that's the way we are going with the use of AI in the public sector.
Public sector organisations are going to adopt this emerging technology in different ways, and as with any new and emerging technology, there are going to be risks associated with it. Given this, what are the biggest risks public sector institutions face when adopting AI, especially when governance and awareness of this new technology are still maturing?
Michael Davern:
Yeah, look, when you think about the risks, there are many, and it's about being careful not to overregulate from the outset. This is going to happen whether you have a policy saying take it slowly or not. It's happening. Before we talk about regulation, what we mean by AI is something we probably should define to start with, because it's used as a very ubiquitous term.
A lot of people, when you say AI, will think of ChatGPT and generative AI, but there are also other AI tools around doing predictive analytics and machine learning: classifying data, classifying transactions, classifying case situations, for example, in tax and audit contexts. So it's about being clear what the different technologies are that you're using within that ambit of AI, and they vary greatly in the risks that they expose you to.
I think the greatest risk is failing to keep the human in the loop in what's going on, because these machine learning type algorithms are not transparent. By design, you can't see how they come up with what they do. Even the computer scientists can't explain how they come up with what they do. It's data-driven, and so it makes it very hard to be sufficiently transparent to justify a decision.
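A minimal sketch of that opacity, using scikit-learn on synthetic data. The case-record framing and model choice here are illustrative assumptions, not anything specified in the episode:

```python
# Illustrating the "black box" point: a model can classify cases
# accurately, yet no human-readable rule explains any individual
# prediction. Synthetic data; all names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Pretend these are historical case records (e.g. tax/audit outcomes).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))

# The model gives a probability for a new case, but there is no
# statute-like rationale to point to -- just hundreds of fitted trees.
print("P(flag) for first test case:", model.predict_proba(X_test[:1])[0, 1])
```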
And if you look at a lot of the decisions we're making in a public sector context, they have human impacts. And so if you are sometimes making a trade-off between saving lives in one case and saving lives in another, you want that to be a human-informed decision, not something driven by a black-box model, if we're using some sort of machine learning algorithm there.
The other part of the risk, I think, is what's happening with the data and the tools. If you look at something like the generative AI tools, I get very worried that people in organisations are throwing confidential data into ChatGPT and similar sorts of tools. So it's very good that the government is developing its own platforms, so it can keep the data appropriately protected.
At the University of Melbourne, for example, we have our own in-house ChatGPT-style tool called Spark, with similar sorts of engines sitting behind it, but all the data stays within the university, so it changes the IP issues. There's no data leakage outside the organisation, because when you interact with these generative AI tools, they're learning from the interactions they have with you.
So they are acquiring data from what you're putting in as well. And that obviously creates a risk of violating privacy and confidentiality around the data that's going in. So I think those are the dual concerns that I have there.
Ram Subramanian:
With that last one, you're talking about closed-loop AI tools, which are not open to the world at large. Arguably, a good AI tool needs to learn: the more data you put into it, the more it'll learn. So isn't that a limitation of a closed-loop AI tool?
Michael Davern:
It is. And so in my own work, I'm using both ChatGPT and our in-house tool. My question always is: is this confidential? Is the IP that I'm developing here something the university wants to retain ownership of? If yes, then I use the Spark tool, the in-house tool; otherwise I'll use something like ChatGPT, because it tends to produce better results, since it's got more data that it's learning from, right?
So it is a trade-off, but it also means that what you can do with those in-house tools is train them on data that is more relevant and specific to the context you're looking at. And that can be where the value add comes, rather than this big generic thing that's sort of scooping up everything on the internet and consuming that as its data.
Ram Subramanian:
I think that makes sense, because if you think about some of the sensitive areas of government, like the tax office, the ATO, or defence, you would want that to be confidential; you don't want to put all that information out into the wide world. All right, if I just move on to the next one then. In July the Australian government published GovAI, a whole-of-government service to design AI capability across the Australian Public Service.
This is of course just at the Commonwealth level, and appears to be an AI toolkit that includes some aspects of governance as well. Now, similar to the GovAI tool that has just been released, should Australia develop a centralised AI governance framework for the public sector, or is a decentralised, entity-specific approach better?
Michael Davern:
I think you have a mix of approaches. There needs to be some centralised, big-picture policy, and there will be some context-specific things. You mentioned defence; that's going to have different security requirements to the ATO, healthcare and other agencies. So there will be some localisation that needs to happen; you can't ignore that and just rely on the central framework.
But I think the central policy creates consistency and is really helpful in building what I think is really important here, and that's a risk-aware culture around the use of AI. In all the work I've done over the years in risk management more broadly, culture is the big driver of success in operational risk matters like this. If you look at financial services, for example, where risk has worked very well and been in focus for a number of years, APRA for a long time was saying: we want banks that have a good risk-aware culture.
Now, defining a good risk-aware culture has always been a challenge. My definition of a good risk-aware culture is: ask yourself, what are your staff going to do if there isn't a policy, or they don't know what the policy is? If, in that situation, you sit back and go, yeah, I'm comfortable, I can sleep at night, then you've got a good risk-aware culture. And you can't beat that culture: the way in which you introduce these tools, the way in which you engage your workforce with them, gets people to realise, gee, I need to just double-check before I do this.
So if you over-regulate and become over-focused on compliance, it becomes a tick-box exercise, and people don't change their behaviours; they tick the boxes and then go off and do whatever they were going to do anyhow. Right?
With a lot of compliance training, I see people work out how quickly they can get through the module and then go and continue to behave the way the organisation has always behaved. We don't want that happening with AI. So it's about building that AI risk culture, and about how you keep the human in the loop in what's going on.
So getting people to recognise that you should always be questioning the tools, always questioning the output, doing a common-sense check. I have a working heuristic that you shouldn't get AI to do anything that you can't do yourself, because if it's doing a task that you can't do, you have no way of evaluating the output to see whether it makes sense or not.
Ram Subramanian:
Don't just blindly trust the AI tool.
Michael Davern:
Don't blindly trust the AI. And look, it's a fantastic tool. These tools have come a long way in terms of what they're able to achieve and how efficient they can make us in doing things, but they also produce a lot of rubbish.
Ram Subramanian:
I see two steep learning curves here. One is for the AI tool: as more data is input into it, it starts to learn how to provide better results. The other learning curve is for humans: how best to use the AI tool without falling into the risks associated with that use.
Michael Davern:
Exactly. And it's creating the opportunity where you start with the mindset of critiquing the output, thinking critically about what that output is and about the consequences of the decisions involved. Then, particularly in terms of a compliance framework, it's thinking about the transparency you might require, and you can think about that transparency at multiple levels.
The first level is transparency that we used AI in making this decision or crafting this report. That's an acknowledgement-of-source kind of transparency. What we really want, though, in a lot of cases, is a lot more depth to the transparency.
We want to know: well, we've relied on this data in making this decision with this AI tool, so we know what potentially drove and informed the judgement. And you're starting to see that even with the generative AI tools, where they're trying to identify the sources they're gleaning things from.
The ideal situation, which is sometimes technologically very hard to get to, is: for the decision I made about your case with the AI, I want to know the specific data about your case. What did I use there?
Now how I weighted and balanced that information is often a black box given the nature of how machine learning algorithms work, but at least I want to know what data was used to inform that judgement so I can A, make sure the data is accurate and B, be conscious of how it's being framed because that gives me a sense of how the task is being framed for the AI.
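Permutation importance is one common technique for that kind of partial transparency. It isn't mentioned in the episode, but a small scikit-learn sketch on synthetic data shows the idea of surfacing which inputs a black-box model leaned on:

```python
# Permutation importance: shuffle one input at a time and measure how
# much the model's accuracy drops. Inputs that matter degrade the score;
# the exact weighting stays opaque, but we learn *what data* was used.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")  # which inputs mattered
```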
And if anyone's played around with the generative AI tools, you learn very quickly that you can ask the same question in 10 different ways and get very different responses. And so it's about being very careful about how you frame and set up the task.
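A hedged illustration of that framing effect, assuming the OpenAI Python client and an API key in the environment; the model name and question phrasings are placeholders, not anything from the episode:

```python
# Same underlying question, three framings -- the answers can differ
# substantially. Assumes the `openai` package and OPENAI_API_KEY are set.
from openai import OpenAI

client = OpenAI()

framings = [
    "Is this applicant eligible for the benefit?",
    "List reasons this applicant might be ineligible for the benefit.",
    "Argue the strongest case that this applicant qualifies for the benefit.",
]

for question in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    # Print the first part of each answer to compare how framing shifts it.
    print(question, "->", response.choices[0].message.content[:120])
```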
Ram Subramanian:
Yeah. Given that the context of our conversation is the public sector, the transparency aspect is quite important, because as you said earlier, we are dealing with a sector that affects everyday Australians on a daily basis. So it's important that that transparency is there. Do you see any formal ways in which transparency can be achieved within government to make sure it does happen?
Michael Davern:
I think it's being clear about what data is being used to train the system in the first place. That's absolutely the first critical stage: what data is being used, and how that data is being filtered and prepared going into the system, becomes really important.
Ram Subramanian:
Rubbish in, rubbish out.
Michael Davern:
Yeah. But the other side, which we've seen in other contexts, is how you frame the outputs that come out. I had a PhD student a couple of years ago who looked, not in a public sector context, but at AI tools making judgements about credit risk. We'd have an edge case, a borderline thing, and you would see how people responded; depending on how you described how the modelling had happened behind the scenes, you could change people's assessment of whether this was a reasonable judgement or not.
You look at the cases either side of a particular case you're making, and so it really is about being interactive and engaged with that. I think it's also about making sure that people, and this is the big challenge, are getting exposed to the task itself. This goes back to the idea that you shouldn't get the AI to do something that you can't do yourself.
And that's what I think is going to be the biggest challenge for us. How do you develop a workforce that is efficient in what it's doing when you don't want people spending lots of time doing the grunt work? Yet doing the grunt work is how they learn how it works, so they can then evaluate what the AI is generating at the other end. Otherwise they learn what the AI does, not about the real world they're trying to make decisions in, and that's scary.
Ram Subramanian:
Yeah. Okay. So this is a rapidly evolving area, and we don't want to hold back the development or evolution of AI, because it has clear benefits to society. So I guess the question is, we've talked about transparency, we've talked about governance: how can AI governance frameworks be designed to remain agile, to stay with the times and be responsive enough to keep pace with technology developments?
Michael Davern:
Well, as an accountant, I would always say I'm a principles-based accountant, not a rules-based accountant, and I think the governance needs to be at the same sort of level. Let's keep the governance at the principles level and think about how that builds the risk culture, rather than trying to get down to niggly rules, because that creates a compliance culture, a tick-box culture. It also tends to be rigid: when new technology comes out with some new way of doing things, you've got to change your policy dramatically because it's not principles-based.
So it's thinking about what those broad principles are. I mean, if you think about the Gen-AI situation we have now, everyone thinks, well, AI has just exploded, it's magic, it's new. But the fundamental technology underlying Gen-AI is machine learning, which has been around for a very, very long time. In the 1990s I was using machine learning to teach MBAs how to predict the S&P 500.
Now, there are some things sitting on top of that, but at the end of the day, it's a machine learning thing. So the underlying principles of the challenges we face, where the data comes from, the black-box nature, those sorts of things, aren't vastly different from what they were 30 years ago. Right? It's just that now we've got so many more application contexts, because of how powerful the tools have become, because our processing power and our access to data have got so much better. So that principles-based approach is going to be key, because if you go into lockdown mode, people are just going to work around it anyhow.
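For illustration, the kind of 1990s-style exercise Michael describes can be sketched in a few lines of scikit-learn. The data here is a synthetic random walk, so accuracy should hover around chance; none of this reflects his actual course material, only the decades-old supervised-learning workflow:

```python
# A plain supervised model predicting next-day index direction from
# lagged returns -- the workflow, not the result, is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=2000)  # stand-in for daily index returns

# Features: the previous 5 days' returns; label: did the next day go up?
window = 5
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = (returns[window:] > 0).astype(int)

split = int(0.8 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])
print("out-of-sample accuracy:", model.score(X[split:], y[split:]))
```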
Ram Subramanian:
Michael, so far we've talked about Gen-AI, which is one type of AI, largely to do with machine learning and data-driven approaches. What about other forms of AI that you touched on earlier, like agentic AI?
Michael Davern:
So agentic AI, that's probably where it starts to get really scary, because you are losing the human in the loop. You're making the AI an agent of action itself, and that's where I get really worried. It's that old adage: if a computer does something wrong, it does it wrong very fast and at scale. You and I make mistakes slowly. Computers don't make mistakes slowly.
They do it very fast and catastrophically as a result. So I do worry about how they operate. There are obviously efficiencies in doing that, but the control then becomes a lot more challenging, and we're seeing some glimmers of that.
Think about the ways some companies are setting up chatbots to interact with customers, and what those AI chatbots are then doing in those interactions. You can imagine, in a health and human services context, the government doing the same. What advice is it giving somebody through a chatbot sitting on a web page? That could be incorrect advice, dangerous advice. That's starting to give it agency, letting it run free. That's what worries me.
Ram Subramanian:
Maybe we'll deal with Gen-AI first before we move on to other types of AI.
Michael Davern:
Yeah, yeah.
Ram Subramanian:
Okay. All right. Thank you, Michael, for sharing your thoughts on the role of AI in the public sector and its governance. I suppose some of this could equally apply to other walks of life as well.
For our listeners eager to learn more, please check out the show notes for links to the report I mentioned earlier and additional resources from the Australian government and CPA Australia. And don't forget to subscribe to With Interest and share this episode with your colleagues and friends in the business community. Until next time, thanks for listening, and thank you, Michael.
Michael Davern:
Thanks, Ram.
Garreth Hanley:
You've been listening to With Interest, a CPA Australia podcast. If you've enjoyed this episode, help others discover With Interest by leaving us a review and sharing this episode with colleagues, clients, or anyone else interested in the latest finance, business and accounting news.
To find out more about our other podcasts and CPA Australia, check the show notes for this episode. We hope you can join us again for another episode of With Interest.
About the episode
As AI becomes embedded across public sector operations, how do you balance innovation with accountability? In this episode, hear expert views on the governance of AI in the public sector.
Key themes include:
- Why defining AI clearly matters for risk management
- The need to keep humans in the loop for high-impact decisions
- Risks of using public generative AI tools with confidential data
- The importance of closed-loop and contextual AI systems in government
- How to balance centralised frameworks with agency-specific needs
- Building a risk-aware culture vs. enforcing tick-box compliance
- The challenge of maintaining transparency in machine learning
- Why AI governance should be principles-based, not rules-based
- The growing concerns around agentic AI and loss of human control
Whether you're a policy maker, accountant or public service leader, this discussion offers practical insights into designing agile, ethical and effective AI governance in the public sector.
Host: Ram Subramanian, External Reporting Lead, Policy, Standards and External Affairs, CPA Australia
Guest: Michael Davern FCPA, Professor of Accounting and Business Information Systems, University of Melbourne. For over 30 years, both in Australia and internationally, he has led industry-engaged research projects in data analytics, business intelligence, financial reporting, risk management, data governance and ethics, among others.
CPA Australia has released a report titled AI governance in the public sector. It features key insights from CPA Australia’s recent webinar with public sector experts.
For more, a media release on AI governance in the public sector summarises our views.
And learn more about this episode’s expert guest Michael Davern on the University of Melbourne website.
You can find a CPA at our custom portal on the CPA Australia website.
You can also listen to other With Interest episodes on CPA Australia’s YouTube channel.
CPA Australia publishes four podcasts, providing commentary and thought leadership across business, finance, and accounting. Search for them on your podcast platform.
You can email the podcast team at [email protected]
Subscribe to With Interest
Follow With Interest on your favourite player and listen to the latest podcast episodes