Put the user at the core, asserts a new report from ACCA, as it reveals 51% of practitioners are unaware of explainable AI

Explainable AI (XAI) is concerned not just with the output an algorithm produces, but with how that output or conclusion is reached and how it is communicated to the user. XAI approaches shine a light on the algorithm’s inner workings to show the factors that influenced its output. The idea is for this information to be available in a human-readable way, rather than being hidden within code.
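As a minimal sketch of what "showing the factors that influenced an output" can look like in practice, consider per-feature attribution for a simple linear model. The model, feature names and weights below are entirely hypothetical, chosen for illustration only; real systems use richer techniques (e.g. SHAP or LIME) built on the same idea.

```python
# Hypothetical per-feature attribution for a linear scoring model.
# Each feature's contribution (weight * value) is reported in a
# human-readable form, rather than staying hidden inside the model.

def explain_prediction(weights, bias, features):
    """Return the model's raw score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Illustrative loan-approval model: positive contributions push toward approval.
weights = {"income_k": 0.04, "debt_ratio": -2.5, "years_employed": 0.3}
bias = -1.0
applicant = {"income_k": 60, "debt_ratio": 0.4, "years_employed": 5}

score, contributions = explain_prediction(weights, bias, applicant)

# Print the factors behind the decision, largest effect first.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:+.2f}")
```

An explanation like this lets a practitioner interrogate the result (here, income and employment history outweigh the debt ratio) instead of having to take the score on trust.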

The latest report from ACCA (the Association of Chartered Certified Accountants), Explainable AI, addresses explainability from the perspective of practitioners, i.e. accountancy and finance professionals. ACCA’s head of business insights, Narayanan Vaidyanathan, said: ‘It is in the public interest to improve understanding of XAI, which helps to balance the protection of the consumer with innovation in the marketplace.’

The complexity, speed and volume of AI decision-making often obscure what is going on in the background (the ‘black box’), making the model difficult to interrogate. Explainability, or the lack of it, affects the ability of professional accountants to understand AI outputs and exercise scepticism: in a recent ACCA survey, 54% agreed with this view, more than double the proportion who disagreed.

Vaidyanathan continued: ‘It’s an area that’s relevant to being able to trust technology and to be confident that it’s used ethically, and XAI can help in this scenario. It’s helpful to think of it as a design principle as much as a set of tools. In effect, this is AI decoded: designed to augment the human ability to understand and interrogate the results returned by the model.’

Key messages for practitioners:

  • Maintain awareness of evolving trends in AI: 51% of respondents were unaware of XAI. This impairs the ability to engage. The report sets out some of the key developments in this emerging area to help raise awareness.
  • Beware of oversimplified narratives: In accountancy, AI isn’t fully autonomous, but nor is it a complete fantasy. The middle path of augmenting, rather than replacing, the human works best when the human understands what the AI is doing, and that requires explainability.
  • Embed explainability into enterprise adoption: Consider the level of explainability needed, and how it can help with model performance, ethical use and legal compliance.

Policymakers, whether in government or at regulators, frequently hear the developer/supplier perspective from the AI industry. This report complements that with a view from the user/demand side, so that policy can incorporate consumer needs.

The report’s key messages for policymakers are:

  • Explainability empowers consumers and regulators: Improved explainability reduces the deep asymmetry between experts who understand AI and the wider public. For regulators, a better understanding of the factors influencing algorithms, which are increasingly deployed across the marketplace, can help reduce systemic risk.
  • Emphasise explainability as a design principle: An environment that balances innovation and regulation can be achieved by supporting industry to continue, indeed redouble, its efforts to include explainability as a core feature in product development.

Narayanan Vaidyanathan added: ‘XAI can be polarising, with some having unrealistic expectations for it to be like magic and answer all questions, while others are deeply suspicious of what the algorithm is doing in the background. XAI seeks to bridge this gap, by improving understanding to manage unrealistic expectations, and to give a level of comfort and clarity to the doubters.’
