Which security threats will AI pose to your practice?

The article is relevant to members in Australia and New Zealand and was current at the time of publication.
Artificial intelligence (AI) presents huge opportunities for public practitioners, but it can also open the door to new security threats. At a time when cybercrime is on the rise, AI can be used to send sophisticated phishing emails, produce deepfakes or voice clones and compromise data privacy and client record security.
AI’s tendency to hallucinate can also pose problems, so how can you make the most of AI while reducing the security risks?
Questions you should ask
Chris Howes, Managing Director of cybersecurity and technology professional services business Aurguard, notes: “AI makes things a little trickier; you could be sharing your data with an unknown entity. Do you know where it’s hosted and who’s running it? Who else has access to it?
“AI is essentially learning as it’s being fed information, and this is what makes it grow. Do you know whether the AI is learning from the proprietary or confidential information that you provide it, and if it’s then using that information to inform other consumers?”
AI fakes look like the real deal
Cybercrime has become more sophisticated thanks to AI. A decade ago, phishing emails often contained telltale signs like spelling errors and bad grammar, but AI is now helping to generate false documents, such as invoices, that look like the real deal.
AI-generated identities also present risks. In 2024, a finance worker in the Hong Kong office of multinational engineering firm Arup was tricked into paying US$25 million (A$38 million) to fraudsters who used deepfake technology to pose as the company’s CFO in a video conference call.
“AI is making it harder for people to recognise when something is malicious,” Howes says. AI hallucinations, which are the incorrect or misleading results that AI models can generate, also pose security risks, he adds.
“For example, if you’re relying on AI to provide you with a security configuration that you can roll out to all your computer systems, but you don’t check it and it turns out to be incorrect, it could have a security impact down the line,” he says.
“Whatever data it’s spitting out needs to be validated, rather than blindly trusted.”
Safety checks on AI
Human oversight is just one way to reduce security risks posed by AI. Darren Ellis CPA, Director at WA-based public practice Eagle Shared Services, says practices should examine their cybersecurity practices before choosing new technology.
Eagle Shared Services recently received ISO 27001 certification, which demonstrates an organisation’s commitment to establishing and maintaining an effective information security management system.
Ellis is also assessing the potential to use Microsoft Purview, a platform that provides data governance, risk and compliance solutions to help organisations manage, protect and govern their data across clouds, apps and devices.
“Regardless of the technology tools you use, you need sound principles for your information security management system, and using the CIA Triad principles at the core is a good place to start,” he says.
These principles refer to how organisations manage information security. They comprise: Confidentiality (such as through authorised individual access), Integrity (such as accuracy and reliability) and Availability (such as ensuring information is ready when needed).
“CIA Triad risks can be potentially identified and managed using AI tools,” Ellis notes. “Checklists are a great starting point for a conversation with your tech vendor to ask how they deal with and mitigate these issues.”
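The CIA Triad conversation Ellis describes can be organised as a simple checklist. The sketch below is purely illustrative: the three principles come from the article, but the individual questions are hypothetical examples of what a practice might raise with a tech vendor, not an official checklist.

```python
# Illustrative sketch: a CIA Triad checklist for a vendor conversation.
# The three principles follow the article; the questions under each are
# hypothetical prompts, not a standard or vendor-approved list.
CIA_TRIAD_CHECKLIST = {
    "Confidentiality": [
        "Who can access the data we submit to the tool?",
        "Is access restricted to authorised individuals?",
    ],
    "Integrity": [
        "How is the accuracy and reliability of outputs maintained?",
        "Are outputs validated by a human before they are relied on?",
    ],
    "Availability": [
        "Is information ready when our staff need it?",
        "What happens to our access if the service goes down?",
    ],
}

def print_checklist(checklist: dict[str, list[str]]) -> None:
    """Print each principle with its discussion questions."""
    for principle, questions in checklist.items():
        print(principle)
        for question in questions:
            print(f"  - {question}")

print_checklist(CIA_TRIAD_CHECKLIST)
```

Structuring the questions this way makes it easy to record a vendor's answers against each principle and spot gaps before committing to a tool.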
Your data and ChatGPT
Tyler Wise FCPA, Partner at accounting and business advisory firm Findex, says the personal data of employees or clients should never be uploaded into public-facing AI tools like ChatGPT, and suggests that accountants prioritise AI security education in their practice.
“Talk about risks like data poisoning, where cybercriminals can try to introduce bias into an AI model’s training data so that its outputs are false,” he says. “Remind them that it’s essential to always keep humans in the loop and check what AI is spitting out.
“Make sure people understand that they can use these tools for non-sensitive admin or non-value adding tasks, but make sure they understand the risks,” Wise adds.
“You don’t expect them to become cybersecurity experts, but it’s important to be able to identify when something seems out of place or out of character. If someone asks you via email or even video to transfer funds in a way that seems unusual, double-check it.”
Howes suggests viewing AI as an efficient, highly skilled assistant that “occasionally gets things very wrong”.
“Before you use AI in your practice, understand what it’s doing with the data that you provide and where it is hosted,” he says. “And make sure there’s always human intervention.”
How to use AI safely
1. Talk to your vendor
AI systems are often hosted in the cloud and may send data between different regions, so ask your vendor where the system is hosted and whether your organisation’s inputs will be used to retrain its model.
2. Restrict data inputs
Never put confidential or personally identifiable data into a public-facing AI platform.
3. Educate your staff
Ensure that your employees understand how they can and cannot use AI at work and educate them about cyber scams involving AI.
4. Prioritise cybersecurity in your practice
Develop core security principles, such as how you ensure people are who they say they are.
5. Keep humans in the loop
Don’t take AI outputs at face value – always ensure human oversight.
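Step 2 above can be partly enforced with a pre-submission check. The sketch below is a minimal illustration, not a production-grade PII scrubber: the regex patterns are simplistic assumptions, and the safest policy remains not to submit client data to a public-facing tool at all.

```python
import re

# Illustrative sketch of step 2: redact obvious personally identifiable
# details before any text reaches a public-facing AI tool. These patterns
# are simplistic assumptions, not a complete PII scrubber.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jo on jo@example.com or +61 400 123 456."))
```

A check like this could sit in a small internal tool or clipboard utility, but it supplements staff education rather than replacing it – redaction rules will always miss things a trained person would catch.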