Artificial Intelligence: Opportunities and challenges in relation to compliance

What opportunities does AI create for firms?  

AI creates a wide range of opportunities for firms, helping them become more effective, efficient, and proactive. But what does this mean in practice? Used well, AI can help with tasks such as:

  • identifying unusual patterns in transactions, behaviour, or account activity 

  • mapping relationships and detecting hidden links between accounts, entities, and transactions to uncover complex frauds  

  • comparing current policies, such as those linked to Compliance, Financial Crime, or ESG, with external regulatory requirements and updates  

  • building profiles of customer behaviour with machine learning models and flagging deviations in real time (see the sketch after this list)

  • undertaking real-time ESG monitoring by scanning and flagging factors such as water usage, levels of diversity and inclusion within the firm, and executive pay

  • reducing the high volume of false positives produced by traditional rule-based systems, by continuously learning from case outcomes to improve alert accuracy and cut ‘noise’

  • assisting investigators by automatically summarising case histories, flagging key evidence, and suggesting next steps
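
As a rough illustration of the behaviour-profiling point above, here is a minimal sketch using scikit-learn's IsolationForest to flag transactions that deviate from a customer's usual pattern. The features, data, and threshold are hypothetical assumptions for illustration only; production monitoring systems would use far richer features and carefully validated models.

```python
# Minimal sketch: flagging transactions that deviate from a customer's usual behaviour.
# Feature names, data, and the contamination rate are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical history of one customer's transactions
history = pd.DataFrame({
    "amount": [25.0, 40.0, 32.5, 28.0, 45.0, 30.0, 38.0, 27.5],
    "hour_of_day": [9, 12, 14, 10, 13, 11, 15, 9],
})

# Fit a simple anomaly detector on the customer's normal behaviour
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(history)

# Score new activity; predict() returns -1 for points the model treats as anomalous
new_activity = pd.DataFrame({"amount": [35.0, 4500.0], "hour_of_day": [12, 3]})
new_activity["flagged"] = model.predict(new_activity) == -1
print(new_activity)
```

In practice, a flagged transaction would typically feed an alert queue for human review rather than trigger an automatic decision, which is where the human oversight discussed later comes in.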

Through our work with banks and institutions of all shapes and sizes, we’ve seen interest in the application of AI increase – from peer benchmarking exercises to more general advice. It’s worth noting, though, that while interest is growing, the hype around AI still far outweighs its actual application. In general, firms that aren’t using it want to know how they could benefit, and firms that are using it (the minority) want to know whether they are using it as well as they could.

We’re also seeing this discussed extensively by the regulator. Recent examples include the AI Sprint the FCA ran in January 2025 and the AI Live Testing initiative the FCA will soon be launching, which will allow firms to collaborate with the FCA whilst AI tools are being tested, building confidence in the tools whilst receiving regulatory support. This testing space forms part of the FCA’s AI Lab, which provides insights, discussion, and case studies.

Whilst there are lots of opportunities with this technology, there are also a fair few challenges.  

What are the key challenges firms are facing and how can these be overcome?  

Striking the balance between AI and human overlay  

A key challenge we are seeing firms face when integrating AI into their operations is striking the right balance between automation and human oversight. AI models that make decisions autonomously can deliver real efficiency, but they can also make it harder for people to explain the rationale behind a given decision, and in regulated environments explainability is crucial. In addition, while AI can process data and make decisions far faster than humans, it may lack contextual understanding and the ability to apply judgement on matters such as fairness. For firms, the challenge is ensuring that AI enhances, rather than replaces, human decision-making and judgement.

The availability of good quality data  

Data is the cornerstone of any AI system, and its quality and completeness are critical to the performance and reliability of AI models. Often, the advantages or risks associated with AI can be traced back not to the algorithms themselves, but to the underlying data. As organisations shape their data strategies to support AI adoption, there is a growing push for the creation and adoption of AI-specific data standards. 

Even if a firm is not currently deploying AI models, investing in improving data quality and completeness is a smart move. Accurate, well-structured data delivers value regardless of how it's ultimately used, laying the groundwork for future AI initiatives and other data-driven efforts. 
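
A minimal sketch of what such investment can look like in practice is below: a few basic completeness and validity checks on a hypothetical transactions table. The column names and rules are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of basic data-quality checks; column names and rules are assumed for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": ["C001", "C002", None, "C004"],
    "amount": [120.0, -15.0, 88.5, 240.0],
    "currency": ["GBP", "GBP", "USD", ""],
})

# Each check reports the share of rows failing a simple rule
checks = {
    "missing_customer_id": transactions["customer_id"].isna().mean(),       # completeness
    "negative_amounts": (transactions["amount"] < 0).mean(),                # validity
    "blank_currency": (transactions["currency"].str.strip() == "").mean(),  # validity
}

for name, failure_rate in checks.items():
    status = "OK" if failure_rate == 0 else f"{failure_rate:.0%} of rows fail"
    print(f"{name}: {status}")
```

Running and recording checks like these regularly is useful in its own right, and the same evidence supports answers to questions such as number 3 in the self-assessment list below.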

Having a governance framework which manages AI effectively  

As we’ve stated earlier, one of AI’s defining features is its ability to make autonomous decisions, something which carries significant implications for governance, accountability, and responsibility. Firms should build on existing governance frameworks to manage AI. Importantly though, senior leaders don’t need to be technical experts or have deep knowledge of AI. What’s required is sound governance: asking the right questions, applying critical thinking, using plain language, and ensuring AI is used responsibly and effectively.

One way to do this is to assign ultimate responsibility to a senior manager, with business units remaining accountable for execution, compliance, and outcomes. It’s worth noting, however, that certain aspects of the governance framework will likely remain consistent – for example, AI tools that are outsourced would still need to go through the existing governance processes for outsourcing controls and oversight.

Hallucinations 

AI hallucinations occur when AI systems (particularly generative models such as large language models) produce outputs that are inconsistent, factually incorrect, or entirely fabricated. This can happen when the AI perceives patterns that don’t actually exist. Hallucinations carry significant risk: if AI-driven decisions are built on hallucinated content, the result can be misinformation, confusion, and ultimately the wrong outcomes. This is where the human overlay mentioned earlier becomes so important in mitigating the risk.

Data leakage 

Staff uploading internal or sensitive data to third-party software, such as ChatGPT, is a real risk. This could be someone summarising a Suspicious Activity Report or a customer file, or an employee uploading their own personal details while asking for financial tips – the list goes on. All of this results in confidential information being ‘out there’, with the potential to be exploited.

Some self-assessment questions  

Take the opportunity to critically assess, both individually and collectively within your firm, how well you understand, implement, validate, and align with the use of AI and its associated controls. 

  1. How are you using AI now?  

  2. How do you plan on using AI in the future, and is there sufficient preparation going into this?  

  3. Is your data of good quality? If yes, how can this be evidenced? If no, what is being done about this?  

  4. Is your governance framework sufficient to oversee AI effectively?  

  5. Who is ultimately responsible for AI systems and controls?  

  6. What AI related management information is escalated and discussed at a senior management level? Does this show trends / patterns over time?  

  7. Do staff across the organisation understand the benefits and risks of AI? 

  8. Are you providing sufficient training and awareness on AI, appropriately tailored to staff roles and responsibilities?

  9. Are technical and non-technical teams able to collaborate effectively on AI initiatives? 

  10. Is the data feeding into the AI systems accurate, complete, and relevant? 

  11. Are you complying with current and emerging regulations related to AI? 

  12. Do you have processes in place to detect and respond to AI-related incidents or errors? 

  13. Are you transparent with customers and stakeholders about the use of AI in your products or services? 

  14. How often are you validating and testing your AI models? 

  15. Have you or will you engage with the AI Live Testing space the FCA is facilitating? If not, why not? 

  16. Are your AI systems aligned with your firm’s values and ethical standards? 

  17. Have you considered potential social or reputational risks of using AI? 

  18. Do you know how you compare to your peers on AI use?  

Get in touch 

If you’d like to have a chat about how your firm could use AI, how it’s currently being used or what your peers are up to, please do get in touch at contact@avyse.co.uk  
