How far can you go with AI before you hit an ethical dilemma?
This article was current at the time of publication.
Artificial intelligence (AI) is a game changer for public practices. You may already be using it via cloud accounting platforms. Large language models (LLMs) used in text-generating programs like ChatGPT may be taking care of mundane tasks such as drafting emails, creating formulas in Excel or summarising financial reports in plain English for your clients.
The global market for AI in accounting is estimated at US$6.68 billion (A$10.38 billion) and is expected to quadruple by 2029, so public practitioners are wise to explore its potential. But how far can you go with tools like LLMs before you face an ethical dilemma?
In June last year, the Accounting Professional & Ethical Standards Board (APESB) issued an Amending Standard to APES 110 Code of Ethics for Professional Accountants (the Code) for technology-related revisions. The amendments came into effect from 1 January this year and align with the International Ethics Standards Board for Accountants’ (IESBA) revisions to the International Code by incorporating technology-related considerations into the fundamental principles of Professional Competence and Due Care, and Confidentiality.
Belinda Zohrab-McConnell, Regulation and Standards Lead, Policy and Advocacy at CPA Australia, says the revisions also require members to apply their professional judgement in relation to the inputs and outputs of AI.
“You are required to query and apply your professional judgement and be aware that the information coming out of AI may not be accurate, it may be biased and it may not be appropriate,” she says.
“Undue reliance on AI threatens your adherence to the fundamental principles of APES 110 [the Code].”
Be open and transparent about AI use
Privacy laws in Australia and New Zealand govern the way personal information is collected, used, stored and disclosed. Privacy obligations also apply to personal information you put into an AI system, as well as the output data generated by AI.
“Do not upload client data into a public-facing AI tool like ChatGPT,” says Brendan O’Connell, Honorary Professor, College of Business and Law at RMIT University and member of CPA Australia’s Centre of Excellence – Ethics and Professional Standards.
O’Connell is the author of CPA Australia’s AI Ethics micro-credential courses, which include modules such as Ethical AI in Accounting, and Ethical Frameworks and Guidelines in AI.
“Fundamental principles in APES 110 Code also include confidentiality,” he says. “Uploading client data into in-house AI tools is acceptable, as long as they possess adequate privacy and security attributes. However, uploading client data into public AI tools such as ChatGPT is not OK, as confidentiality is not guaranteed.”
O’Connell says accountants should also be transparent about their use of AI at work.
“There is nothing wrong with using AI in your practice,” he says. “In fact, you should get up to speed with it, because professional competence and due care is a fundamental principle of the Code.
“It’s important to invest in AI competency for your staff, so they can use it competently and understand the ethical implications. But you should be upfront about it with your clients.”
O’Connell suggests including information about your use of AI in client engagement letters.
“For example, say how you are using AI and how you’re not using it,” he says. “Clients may expect that you’ll use AI at some point, but they also expect that you are aware of its limitations and that you apply checks and balances.”
Humans still know best
AI is a powerful tool in day-to-day business operations, but it is not perfect and it can present data privacy risks. Left to their own devices, these tools can also hallucinate, delivering false or misleading information that can cause problems for clients and damage your reputation.
“There can be wild variations in output,” says Simon Small, Executive AI Advisor, AI Governance at NewZealand.AI, which provides independent advice to businesses on leveraging AI. “These tools can be inaccurate, and they can hallucinate, so they require human oversight.”
William Young FCPA is an adviser on Praxio AI, an AI platform designed for Australian tax and accounting professionals. He says AI product development should include considerations like accountability, transparency and human intervention.
“Users need to be accountable for the information they use from tools like ChatGPT,” he says.
“As an example, one of my advisers recently told me about a situation where his client had a letter drafted by a lawyer for an ATO [Australian Taxation Office] submission. The client asked my adviser to check the letter, and he noticed that the case law referenced in the document did not exist.
“The lawyer had used AI to produce the letter and didn’t validate that the sources of information were correct.
“If you use AI to produce a piece of advice and you give it to your client but it’s wrong, you can’t just say to your client, ‘I’m sorry, I got that from ChatGPT’. In our product, we say that it is the accountant who has to check the information, because it is the accountant who is responsible for sending it to their client.
“AI is supposed to be an efficiency tool to help you. It is not here to replace your professionalism. It is not here to replace your accountability.”
Young adds that AI tools are only as good as the data they are trained on.
“Sources and citations are also very important,” he says. “What sources is the AI drawing on? Is it drawing on the words written in someone’s blog, or is it drawing on verifiable sources like the ATO? You need to be able to have an audit trail to say, ‘I got that information from AI, but it only referenced government sources’.”
Avoiding automation bias
The practical use of AI requires guardrails to ensure it aligns with core professional values.
Small suggests speaking to your network about how they are using AI in their practices and how they are managing ethical issues.
“Find out who else is using AI and innovating in your space and how they are dealing with client information,” he says. “A lot of the ethical issues come down to accounting best practices, but you need to apply human oversight. Take it slowly and carefully and the investment will pay off.”
O’Connell recommends developing a policy that outlines information such as how AI will and will not be used in your practice, as well as your approach to oversight, transparency and data privacy and protection.
“Treat AI like a virtual assistant that is prone to making mistakes,” he says.
“AI is such a useful tool for accountants, but you can’t hide behind automation bias. The onus is on you to check and verify the output.”