Responsible Gen AI: Protecting Businesses in the Age of Automation

How Chief Audit Executives are gearing up for improved efficiency, coverage, and quality in the age of data-driven business

EXL recently shared key insights into the principles of AI and best practices for ensuring transparency and accountability in AI-driven business processes at an industry conference last spring. What follows is a recap of the presentation, along with the results of EXL’s survey of more than 400 audit professionals attending the conference.

Auditing is a tedious job regardless of application, be it risk assessment, financial review, compliance reporting, or transaction analysis. The process is intense and typically labor intensive, requiring high levels of human skill and experience, along with the right technology, to deliver dependable, clearly supported findings.

No wonder top audit executives are increasingly looking to digitize their processes for better efficiency, coverage, and quality. The planning, data management, testing, verification, and follow-up involved in reaching a reliable conclusion can all be facilitated by a more consistent, data-driven approach: one that leverages the latest advancements in artificial intelligence (AI) and machine learning (ML), strips out mundane, repeatable tasks, and efficiently mines the vast stores of data surrounding any audit procedure to swiftly generate a trustworthy report.

According to EXL’s survey, 74% of audit professionals said that generative AI (GenAI) will transform their organization within the next three years. Yet 80% admitted that their organizations were not prepared to address governance and risk issues related to GenAI adoption. The findings suggest a groundswell of support for adopting AI-based solutions in auditing, with the knowledge gap representing an opportunity for informed leaders to gain an early advantage.

Could there be any better time to examine the benefits of AI and ML in the audit function?

Follow along as we review the regulatory frameworks and guidelines relevant to ethical AI governance, and examine the potential risks and challenges of implementing it in a healthy and unbiased way. We will also explore some real-world examples of AI auditing systems, along with ways to incorporate AI to enhance overall audit effectiveness.

AI, ML, and the growing use of generative AI in audit functions

In today’s data-driven economy, auditors are broadly realizing that the old manual, often subjective approach to auditing is no longer practical. More and more practitioners are examining new ways to tackle the work by introducing digital age technologies such as AI, ML, and natural language processing (NLP).

GenAI is a hybrid AI/ML/NLP technology capable of creating high-quality, original content, such as text, images, code, speech, and other artifacts, based on computer models trained to scour vast data pools of structured and unstructured information. These models consume massive computing power and incorporate complex algorithms “trained” to generate specific outcomes within rulesets modeled for different tasks.

In auditing, these tasks may include fraud detection, project scoping, contract validation, and other complex assessments that require detailed analysis of large data sets. With GenAI, auditors can expand sample testing coverage up to 100% of transactions without tying up resources better focused on higher-value activities. They can produce richer insights faster, uncovering more findings of higher quality in the process.

Finance pros and auditors can use it to develop exciting new experiences, such as:

  • Performing highly automated forecasts, displaying a richly synthesized view of financials.
  • Assessing risk against a synthesized collection of large data sets for laser-focused analysis.
  • Enriching user experiences through more relevant, natural conversations cast across various channels.

With such potential to do more with less and achieve greater results, it’s no wonder interest in GenAI is rapidly expanding across the industry. Indeed, 30% of respondents to EXL’s survey said they are currently evaluating potential use cases, with 28% reporting that pilot programs are already in place.

Understand the challenges up front

For all the value GenAI potentially brings to the audit function, making the transition to full implementation is not without its risks. Responsible AI requires close governance to protect against regulatory, operational, and reputational risk. Issues such as privacy, bias, accountability, and cybersecurity must be addressed to instill digital trust among stakeholders. Without it, much can go wrong. Consider three recent cases where improper use of AI led unwary users into trouble:

  • AI researchers from Mindgard and Lancaster University uncovered harmful vulnerabilities in large language models (LLMs) underlying ChatGPT-3.5-Turbo. Portions of the suspect LLMs were copied for $50 apiece, raising concerns over “potential targeted attacks, misinformation dissemination, and breaches of confidential information” due to “model leeching.”
  • Air Canada’s chatbot promised a full-fare discount to a bereavement passenger after the flight had already been booked, causing a dispute between the passenger and airline. The airline argued that the chatbot was “a separate legal entity that is responsible for its own actions.” The British Columbia Civil Resolution Tribunal ruled in favor of the passenger.
  • Researchers at HiddenLayer discovered susceptibilities in Google’s Gemini LLM, causing it to “divulge system prompts, generate harmful content, and carry out indirect injection attacks.” The AI security provider asserted that “the issues impact consumers using Gemini Advanced with Google Workspace.”

These are just a few of the examples where GenAI, left to its own devices, can lead users astray. GenAI risks generally occur in three areas where risk management and controls are necessary:

  • Development risk: Based on false or nonexistent information, faulty prompt engineering, or intellectual property infringement.
  • Data risk: Stemming from over-simplified models, poor model tuning, or noise injection.
  • Deployment risk: Ascribed to regulatory or cybersecurity breaches, or poorly designed or implemented change management programs.

In response, several industry frameworks have emerged to assess the risks and trustworthiness of GenAI, including OWASP LLM Top 10, NIST AI RMF, and ISO 42001. These voluntary frameworks provide basic guardrails designed to highlight potential vulnerabilities and help GenAI developers and users avoid trouble. Briefly:

  • OWASP LLM Top 10 “aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing LLMs.”
  • NIST AI RMF is an AI risk management framework “intended for voluntary use to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”
  • ISO 42001 is an “international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system within organizations.”

By leveraging these frameworks as part of your GenAI program, you can improve the traceability, transparency, and reliability of your capability, while also demonstrating accountability and realizing cost savings.

EXL survey highlights the biggest risks associated with GenAI

  • 35% privacy concerns
  • 25% intellectual property infringement
  • 20% bias and fairness issues

Based on a recent EXL survey of more than 400 audit professionals

Proceed with strong governance

The principles of responsible AI governance tightly align with the same principles that govern the auditing profession itself. Integrity is the watchword here, along with a keen awareness of potential risks up and down the application development lifecycle.

Insist on full transparency in the process; it is essential to an impartial, ethical, and accountable GenAI program. Indeed, the best AI solutions rely on human judgment and accountability to keep the program fair, consistent, objective, and equitable.

Clearly define who is responsible for system monitoring, maintenance, and upkeep. This will encourage a watchful attitude, respectful of privacy and human rights, in compliance with industry and legal standards. Such human-centric accountability, combined with strict security and up-to-date data protections, will ensure a resilient, vibrant, and consistent solution capable of providing trustworthy intelligence, without bias.

Basic steps in the development lifecycle

Like all technology projects, the development of a GenAI capability starts with a list of questions. Why are you launching the project? What do you want to accomplish? What value do you hope to capture?

From there, what resources do you have to accomplish your goal? Where is the data located? Do you have the people to complete the job effectively? The technology? A culture prepared to embrace it? The wherewithal to maintain it? Monitor it? Improve it? Sustain it?

Once you have the answers in hand, address any shortcomings and move forward with a united purpose. The following six steps will help keep you on track.

  • Data collection and processing: GenAI thrives on quality data. Begin by finding relevant and reliable sources of data pertinent to your core mission. Prepare for success by cleaning, labeling, augmenting, and splitting these data to ensure their quality, consistency, and diversity.
  • Model selection: Based on your understanding of the business purpose and knowledge of where the data resides within your enterprise, assess and select a GenAI model suitable for the task, data set, and evaluation metric. You may need to assess several models before you decide on the right one.
  • Model training: With your model chosen, begin teaching the system to perform the specific tasks your use case requires. This involves providing the system with data, tracking the feedback, and establishing rewards and penalties for compliant and non-compliant outcomes.
  • Model evaluation: Now it’s time to measure the performance and quality of your GenAI model. This involves comparing the model’s output with some reference data or criteria (i.e., metrics, benchmarks, etc.) defined by your project plan.
  • Model fine-tuning: Based on your feedback, begin adjusting the parameters of your pre-trained model to improve its performance. Focus on the defined task stated in your initial objective and continue refinement to that end.
  • Deployment and maintenance: The ultimate goal is to release your trained GenAI model for use in its specified application. Once deployed, continue to observe the application to ensure its reliability and performance over time. This will require continual testing, monitoring, and updating for enhancing results.
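The model evaluation step above can be made concrete with a small sketch. The example below is a minimal, illustrative implementation, not EXL’s methodology: it compares a model’s flagged items against reference labels and reports precision and recall, two common evaluation metrics. All function and variable names here are hypothetical.

```python
def evaluate_flags(predicted, reference):
    """Compare predicted flags with reference labels.

    Returns (precision, recall), two standard metrics for judging
    how well a model's flagged items match a reviewed reference set.
    """
    # Count true positives, false positives, and false negatives.
    tp = sum(1 for p, r in zip(predicted, reference) if p and r)
    fp = sum(1 for p, r in zip(predicted, reference) if p and not r)
    fn = sum(1 for p, r in zip(predicted, reference) if not p and r)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

if __name__ == "__main__":
    # Hypothetical model output vs. auditor-confirmed reference labels.
    predicted = [True, True, False, False, True]
    reference = [True, False, False, False, True]
    p, r = evaluate_flags(predicted, reference)
    print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=1.00
```

In practice the reference data and benchmark thresholds come from the project plan defined in the earlier steps; the shape of the comparison, model output versus agreed criteria, stays the same.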

Typical GenAI use cases

Certainly, you can and should audit the use of AI in your enterprise. Doing so is critical to good AI governance and is the best way to ensure that you remain compliant, fair, ethical, and trustworthy in employing the technology across your organization.

But you can also leverage AI in your auditing function. Here are some typical use cases where the application of GenAI in the audit function can add value:

  • Risk assessment: Analytics over historical data and industry benchmarks
  • Compliance monitoring and trigger reporting: Rule application and violation detection
  • Foreign Corrupt Practices Act (FCPA) assessments: Reporting based on risk scores
  • Pattern-based audits: Providing interactive reports that drill down to details
  • Analytical reviews and financial ratios: Peers, competitors, and actions
  • 100% transaction-level auditing: Anomaly detection (e.g., invoices)
  • Wiki auditing: Risk assessment, legal, and GAAP knowledge bases
  • Change management: Validate accuracy and integrity of large datasets
  • Report creation: Utilizing databases to create standard reports
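To illustrate the 100% transaction-level auditing use case, the sketch below screens an entire invoice population, rather than a sample, for amount outliers using a simple z-score test. This is a minimal, assumption-laden example with hypothetical field names ("id", "amount"); a production system would use richer features and a trained anomaly-detection model.

```python
import statistics

def flag_anomalies(invoices, z_threshold=2.0):
    """Screen 100% of invoices, flagging any whose amount lies more than
    z_threshold population standard deviations from the mean."""
    amounts = [inv["amount"] for inv in invoices]
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all amounts identical: nothing stands out
        return []
    return [inv for inv in invoices
            if abs(inv["amount"] - mean) / stdev > z_threshold]

if __name__ == "__main__":
    # Hypothetical invoice records; one amount is clearly out of pattern.
    invoices = [{"id": i, "amount": a}
                for i, a in enumerate([100, 102, 98, 101, 99, 5000])]
    print([inv["id"] for inv in flag_anomalies(invoices)])  # -> [5]
```

The same full-population approach extends to other screens listed above, such as duplicate-invoice detection or rule-based compliance checks, with the scoring function swapped out per use case.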

EXL survey highlights perceived benefits of adopting GenAI

  • 56% improved efficiency and productivity
  • 32% reduced cost
  • 31% expected innovation and growth

Based on a recent EXL survey of more than 400 audit professionals

Explore the benefits of responsible GenAI

By applying GenAI technology in your audit process, you will produce higher quality insights faster, while also maturing your understanding and utilization of GenAI in other areas of your enterprise. This, in turn, will strengthen your risk knowledge, stakeholder relationships, and decision-making, building a competitive advantage for your organization on a scalable foundation of ever-improving efficiency and performance.

To learn more about the application of GenAI in your audit function, or to start a conversation, please contact EXL today.


Written by:

Arvind Mehta 
Global Partner - Cybersecurity and Digital Trust 

Rohit Kumar Gupta 
Vice President & Client Partner