How artificial intelligence is set to impact the auditing role
Although its potential applications seem limitless, the technology is still in its infancy. So, while we can see the benefits that generative artificial intelligence, or so-called large language models, can bring to the auditing role, we can also identify a number of limitations.
Data evaluation limitations
To understand these limitations, one must first understand that AI systems are trained exclusively on historical data sets. Accordingly, they are strong in applications that summarise and evaluate statistical patterns from the past. These evaluations are then used to draw inferences about the present or to project past knowledge into the future. If, on the other hand, no usable data is available to answer a current task, or if no meaningful conclusions can be drawn from the available sources, AI quickly reaches its limits.
For example, let's say a client is involved in litigation for which there is no precedent. AI cannot assess whether the provisions created by the client are appropriate, as there is no historical data on which to rely. Auditors, on the other hand, can observe the situation directly and form their own assessment. If necessary, they can question the client in advance and discuss specific issues, such as which factors were considered when calculating the provision and whether reliance on those factors is appropriate. Overall, can the approach be judged plausible? Decisions of this kind require auditors' professional judgement, an understanding of human nature and, of course, their expertise in auditing.
Dealing with AI hallucinations
A further limitation is the reliability of AI's results. Because a large language model has no grounding in real-world concepts, it has a tendency to "hallucinate": it generates plausible-sounding words rather than verified facts, which can lead to confident but incorrect conclusions. Since many AI tools provide no references or any other way to evaluate the veracity of their output, their answers must be critically scrutinised for appropriateness and correctness. From an auditing perspective, certification can only be issued if the possibility that the results relied upon are incorrect can be ruled out. As there is a big question mark over AI's reliability in auditing, its solutions and results still need to be evaluated by humans.
Audit responsibility, ethical awareness and trust
All auditors are guided by a sense of ethical responsibility towards their clients and the wider public. In addition, auditors know that violations of ethical standards not only have legal consequences, but also damage the reputation of the accounting firm and the profession as a whole. As AI is not capable of real-time ethical awareness, there is a risk that extensive use of AI will lead to unfair decisions and biases that can only be determined and stopped by human intervention.
In addition, people connect with other people. Indeed, clients are much more likely to trust a person than a machine. Establishing trust between client and auditor through face-to-face conversations is invaluable as it gives confidence in the overall process and achievable outcomes.
AI as a complementary tool to the auditor's role
AI is a fantastic technology that will open up more and more areas of work and life. Nevertheless, because AI struggles to assess complex or novel situations in real time, its use is currently limited to the repetitive parts of the audit role. The unreliability of its results, its lack of ethical awareness and its inability to build trusting relationships are significant limitations. The good news is that using AI as a complementary tool will enable auditors to deepen their relationships with clients significantly in the future. As AI relieves auditors of essential but labour-intensive routine work, it frees up more time for them to deliver customised, client-specific services.
Auditors equipped with a good AI tool will be more efficient and successful than either an auditor or AI alone. The future is not man versus machine; it's man plus machine.