Artificial intelligence (AI) and, more recently, generative AI (GenAI) are transforming how organizations operate. These technologies are already embedded in systems used for financial reporting, customer interaction, and decision-making, often without a clear understanding of the risks they introduce.
As companies integrate AI into their operations, the need for robust governance, accountability, and oversight has become increasingly critical. For Internal Audit and risk leaders, AI isn’t a hypothetical future challenge. It’s a current-state issue that requires immediate attention.
Whether embedded in vendor platforms, built in-house, or quietly influencing decisions through automation, AI systems can pose risks to data integrity, control effectiveness, and regulatory compliance. If left ungoverned, these risks can undermine the very foundations of financial reporting, operational consistency, and investor trust.
The Internal Audit function is uniquely positioned to help organizations respond with a lens that connects AI innovation to enterprise risk, control structure, and long-term accountability.
Understanding AI’s Impact on the Risk Landscape
AI is not just another piece of technology — it is a capability that changes how decisions are made. In doing so, it introduces new forms of risk that traditional control frameworks were not designed to address. These include:
- Data Quality and Integrity Risks: AI models are only as good as the data they are trained on. Inaccurate, incomplete, or biased data can lead to flawed outputs and poor business decisions
- Transparency and Explainability Risks: Many AI systems, especially those powered by deep learning, function as “black boxes,” making it difficult to trace how they arrived at specific conclusions
- Regulatory and Ethical Risks: Organizations are increasingly under pressure to ensure ethical use of AI and comply with data privacy and discrimination laws that apply to algorithmic decision-making
- Cyber and Operational Risks: AI systems can introduce new vulnerabilities, including susceptibility to manipulation or unauthorized access to sensitive data
These risks are operational, reputational, and financial. When AI systems malfunction or produce unintended outcomes, they can expose organizations to liability, loss of stakeholder trust, and regulatory scrutiny.
The Evolving Role of Internal Audit
As AI adoption accelerates, Internal Audit’s responsibilities must expand in step. This begins with a foundational understanding of how AI is being deployed across the organization. Whether it’s a chatbot handling customer service inquiries or a model supporting financial forecasting, auditors must map where AI is being used and how its outputs influence business activities.
From there, Internal Audit should:
- Evaluate AI Governance Structures: Is there a defined ownership model for AI tools? Are responsibilities clearly assigned across business, IT, and compliance functions?
- Assess Development and Deployment Practices: Have models been adequately tested before deployment? Are there controls to monitor performance over time?
- Review Data Management Protocols: Is data used for training and inference accurate, relevant, and properly governed?
- Ensure Stakeholder Alignment: Do business leaders understand the limitations of AI and the risks involved in relying on algorithmic outputs?
Importantly, Internal Audit should not evaluate AI risks in isolation. These risks must be integrated into the broader Enterprise Risk Management (ERM) framework. That means coordinating with the Risk Management function and participating in AI governance committees to ensure alignment on risk appetite, escalation protocols, and oversight responsibilities. Embedding AI into ERM helps avoid silos, supports consistent treatment of emerging risks, and positions Internal Audit as a strategic partner in cross-functional risk discussions.
Audit plans must be updated to include AI as a recurring risk category. This includes both standalone audits of AI use cases and integrated testing of AI-related controls within business process reviews.
Establishing a Governance Framework
While many organizations are still in the early stages of AI adoption, establishing a governance framework now will help mitigate risks and support responsible innovation. Key elements of an effective AI governance framework include:
- Model Inventory and Risk Classification: Maintain a centralized inventory of AI models, categorized by risk level and business criticality
- Policy and Procedure Development: Establish organizational policies outlining acceptable AI use, including required approvals, documentation standards, and ethical considerations
- Validation and Testing Controls: Implement control procedures for model validation, performance monitoring, and recalibration
- Human Oversight Mechanisms: Ensure appropriate human review of AI-driven outputs, particularly in high-impact or sensitive areas
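To make the first element concrete, a centralized model inventory with risk tiers can be as simple as a structured record plus a classification rule. The sketch below is illustrative only: the fields, model names, and tiering logic are hypothetical, not a prescribed taxonomy, and a real inventory would reflect the organization’s own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in a centralized AI model inventory (fields are illustrative)."""
    name: str
    owner: str               # accountable business function
    business_use: str
    decision_impact: str     # "low", "medium", or "high"
    uses_personal_data: bool
    human_in_the_loop: bool  # is there human review of outputs?

def risk_tier(m: ModelRecord) -> str:
    """Hypothetical tiering rule: impact and data sensitivity drive the tier."""
    if m.decision_impact == "high" or (m.uses_personal_data and not m.human_in_the_loop):
        return "high"
    if m.decision_impact == "medium" or m.uses_personal_data:
        return "medium"
    return "low"

# Illustrative inventory entries
inventory = [
    ModelRecord("forecast-v2", "Finance", "Revenue forecasting", "high", False, True),
    ModelRecord("service-bot", "Customer Service", "FAQ triage", "low", True, True),
]

for m in inventory:
    print(f"{m.name}: tier={risk_tier(m)}, owner={m.owner}")
```

Even a lightweight structure like this gives Internal Audit a population to sample from and a defensible basis for scoping high-risk models first.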
As many AI models continuously learn or retrain in real time, traditional point-in-time audits may be insufficient to capture ongoing changes. Internal Audit should explore continuous auditing and dynamic monitoring mechanisms, such as automated exception tracking, control dashboards, or real-time analytics, to keep pace with evolving model behavior and data inputs. These tools enhance visibility into model drift, performance degradation, and unauthorized changes, ensuring governance remains effective between audit cycles.
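One common drift-monitoring technique that such dashboards rely on is the Population Stability Index (PSI), which compares the distribution of current model inputs against a baseline. The sketch below is a simplified illustration with invented data; the 0.25 threshold is a widely used rule of thumb, not a universal standard.

```python
import math
from collections import Counter

def psi(baseline, current, bins=5):
    """Population Stability Index: how far current inputs have drifted
    from the baseline distribution (simplified equal-width binning)."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def dist(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        # Floor empty bins at a tiny value to keep the log well-defined
        return [max(counts.get(b, 0) / len(xs), 1e-6) for b in range(bins)]
    p, q = dist(baseline), dist(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1 * i for i in range(100)]        # feature values at validation time
stable   = [0.1 * i for i in range(100)]        # similar production inputs
shifted  = [0.1 * i + 6.0 for i in range(100)]  # drifted production inputs

# Rule of thumb: PSI above ~0.25 is commonly flagged for review
print("stable PSI:", round(psi(baseline, stable), 3))
print("shifted PSI:", round(psi(baseline, shifted), 3))
```

An automated exception tracker could run a check like this on a schedule and raise an alert whenever the index crosses the agreed threshold, giving assurance between audit cycles.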
Internal Audit should assess whether these elements are present and functioning effectively. Gaps in governance should be reported along with practical recommendations for remediation.
Navigating Regulatory Expectations
The regulatory environment around AI is still maturing, but the direction is clear: transparency, accountability, and consumer protection are top priorities. In the U.S., federal and state agencies have begun issuing guidance on responsible AI use. Globally, jurisdictions such as the European Union are pushing forward with more formal regulations, including the EU AI Act.
Internal Audit can help organizations prepare by benchmarking current practices against proposed standards, identifying areas where documentation and oversight need to be strengthened, and working with compliance teams to build AI readiness into regulatory assessments.
By embedding itself in the conversation early, Internal Audit can ensure the organization isn’t caught off-guard as regulatory frameworks evolve.
Managing Third-Party AI Risk and Vendor Governance
Many organizations are not only building AI but also buying it. Tools like Microsoft Copilot, Salesforce Einstein, and a host of AI-enabled SaaS platforms are being rapidly deployed across business functions. While these technologies promise efficiency and scale, they also introduce a layer of third-party risk that Internal Audit must address.
AI provided by vendors presents several distinct challenges:
- Limited visibility into model logic and training data
- Opaque change management and versioning
- Ambiguity around data ownership, retention, and transfer
- Overreliance on external SLAs without enforceable audit rights
Internal Audit should play a central role in evaluating how these tools are selected, governed, and monitored. This includes:
- Reviewing vendor due diligence practices, especially around model explainability, bias testing, and data usage terms
- Assessing whether contractual SLAs address performance, security, and update protocols for AI functionality
- Ensuring the organization retains sufficient governance rights to monitor and challenge AI outputs that affect decision-making
- Validating whether data sharing agreements meet enterprise standards for privacy, access control, and compliance
As the boundary between internal systems and external AI continues to blur, Internal Audit’s vendor risk lens must expand accordingly, ensuring that third-party tools align with the organization’s control expectations, ethical standards, and regulatory obligations.
Practical Steps for Internal Audit Functions
To begin integrating AI oversight into the Internal Audit function, consider the following practical steps:
- Educate the Team: Offer training sessions on AI concepts, risks, and technologies to enhance auditor readiness
- Create AI-Specific Audit Procedures: Develop checklists and frameworks tailored to auditing AI applications, including questions on data sources, algorithm transparency, and bias mitigation
- Partner with IT and Data Science Teams: Collaborate with technical experts to understand how models are built, validated, and maintained
- Incorporate AI into Risk Assessments: Factor AI-related risks into the annual risk assessment and audit plan development processes
- Start with Targeted Audits: Begin with pilot audits of AI use cases that are material or high-risk, using those experiences to refine methodology and scope for future reviews
- Champion Scenario-Based Testing and Simulation: Internal Audit can add value by promoting the use of what-if scenarios and stress testing in evaluating AI systems. In collaboration with data science and risk teams, simulate unusual conditions — such as atypical data inputs, sudden market shifts, or extreme operating environments — to assess how models behave under pressure. These simulations help identify vulnerabilities, challenge assumptions, and validate that the model’s decisions remain consistent with organizational intent across a range of real-world situations.
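The last step can be surprisingly lightweight to prototype. The sketch below uses a stand-in model, invented scenarios, and arbitrary guardrails purely to illustrate the what-if pattern; a real exercise would use the organization’s actual models and risk tolerances.

```python
def forecast(demand_history):
    """Stand-in for a production model: a simple trailing average forecast."""
    return sum(demand_history[-4:]) / 4

def stress_test(model, base_history, scenarios, guardrail=(0, 1_000)):
    """Run the model under what-if scenarios and flag out-of-bounds outputs."""
    findings = []
    for name, transform in scenarios.items():
        out = model(transform(list(base_history)))
        ok = guardrail[0] <= out <= guardrail[1]
        findings.append((name, round(out, 2), "ok" if ok else "REVIEW"))
    return findings

history = [100, 110, 105, 120, 115, 125]
scenarios = {
    "baseline":      lambda h: h,
    "demand_spike":  lambda h: h[:-1] + [h[-1] * 50],  # sudden market shift
    "missing_data":  lambda h: h[:-2],                 # atypical inputs
    "negative_feed": lambda h: [-x for x in h],        # corrupted data feed
}

for row in stress_test(forecast, history, scenarios):
    print(row)
```

The value for Internal Audit is less in the code than in the conversation it forces: which scenarios matter, what "out of bounds" means, and who reviews the flagged cases.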
Illustrative Use Case: How AI is Already Affecting Internal Controls
GenAI and machine learning are already shaping internal control environments:
- GRC platforms (e.g., AuditBoard, Workiva) are piloting GenAI features that auto-suggest risk themes, propose control language, and streamline audit planning
- ERP and analytics tools are embedding AI to flag unusual journal entries, detect outliers, or automatically match large volumes of transactions
- Third-party risk tools use GenAI to analyze and summarize vendor contracts, helping identify clauses that may trigger control reviews or additional due diligence
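The journal-entry screening these tools perform can be approximated, in much-simplified form, by a statistical outlier check. The sketch below uses a plain z-score screen with invented sample data and an illustrative threshold; production tools combine many more features, but the core idea is the same.

```python
import statistics

def flag_unusual_entries(entries, threshold=3.0):
    """Flag journal entries whose amount deviates strongly from the population
    (simple z-score screen; illustrative only, not a production control)."""
    amounts = [e["amount"] for e in entries]
    mean = statistics.mean(amounts)
    spread = statistics.pstdev(amounts) or 1.0  # avoid division by zero
    return [e for e in entries if abs(e["amount"] - mean) / spread > threshold]

# Nineteen routine entries plus one anomaly (all amounts invented)
entries = [{"id": f"JE-{i:03}", "amount": 1_000 + 50 * i} for i in range(19)]
entries.append({"id": "JE-099", "amount": 500_000})

for e in flag_unusual_entries(entries):
    print("flagged:", e["id"], e["amount"])
```

Auditors do not need to build such screens themselves, but understanding how they work makes it far easier to challenge vendor claims about what "anomaly detection" actually covers.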
These innovations are changing how controls are designed, executed, and reviewed, often without Internal Audit in the loop. Understanding these shifts is a critical first step toward building governance that keeps pace with innovation.
Conclusion
AI is no longer emerging; it’s operational. It’s powering decisions, automating workflows, and increasingly bypassing traditional controls. The question isn’t whether organizations are using AI; it’s whether they’re governing it with the same discipline they apply to financial reporting, cybersecurity, or third-party risk.
Internal Audit has a critical opportunity and responsibility to step into this moment. That doesn’t mean owning AI governance outright but rather ensuring that frameworks are in place to evaluate risk, assign accountability, and monitor outcomes continuously.
From aligning with ERM to assessing third-party tools, from simulating edge-case scenarios to embedding oversight into rapidly evolving models, Internal Audit can serve as a strategic partner in keeping AI innovation responsible, explainable, and resilient.
For organizations investing in AI, governance isn’t a burden; it’s a differentiator. And Internal Audit has a leading role to play in making that a reality.