AI: Actionable Intelligence… the missing link in financial crime risk management

In a previous blog, I reflected on the FCA’s recent assessment of Business-Wide Risk Assessments (BWRAs) and the recurring challenge firms face: explaining exposure in a way that reflects the reality of their business. This week builds on that theme by examining a dimension the FCA hinted at but did not explicitly name: how firms use intelligence in their risk assessments, and more specifically the distinction between the industry’s focus on artificial intelligence and its underuse of actionable intelligence.

Artificial intelligence captures attention. Actionable intelligence drives understanding.

And if the FCA’s recent observations tell us anything, it is that many BWRAs fall short not because firms lack data, but because they fail to make their internal intelligence actionable. I agree with this entirely. Most BWRAs simply tell you what you already know. They restate inherent risks, summarise business lines, and repeat established typologies, but they do not provide insight. They do not generate intelligence. An effective BWRA should do exactly that.

Firms generate intelligence constantly, they just don’t use it

Every firm produces a wealth of operational evidence every day: SAR themes, QA outcomes, breaches, backlogs, rejected applications, customer behaviour patterns, and more. These signals tell you which risk events are materialising, where controls are under pressure, which assumptions no longer hold, and how criminals are interacting with the business.

Yet the FCA found that this “internal intelligence” rarely informs the BWRA in any meaningful way. Instead, assessments remain anchored in external typologies and business descriptions, with only brief references to operational reality (“SAR numbers stable”, “QA findings broadly satisfactory”). This results in a BWRA that is static, generic, and disconnected from the evidence the business generates.

Just as importantly, internal intelligence is not only about failures. Firms generate positive intelligence too: strong true positives, high first-time-right rates, smooth journeys for legitimate customers, timely escalations, and evidence that recent improvements are working. These signals can show where risk is being effectively mitigated, or where the underlying exposure is inherently low because the customer base, product set, or operating model presents fewer opportunities for abuse. Yet despite their value, these positive indicators are largely absent from most BWRAs. They rarely influence exposure ratings or shape the narrative, even though they provide essential evidence about where the firm is resilient as well as where it is vulnerable.

The point is simple: the industry debates when artificial intelligence will transform financial crime, while ignoring the actionable intelligence that could improve effectiveness today.

Why actionable intelligence should shift exposure, and usually doesn’t

The purpose of the BWRA is to assess exposure to risk events. But for many firms, exposure ratings barely change from year to year, not because the threat landscape is stable (it never is), but because exposure is based on generic risk factors rather than evidence.

Firms often see:

  • recurring QA failures,

  • emerging SAR themes,

  • delays in sanction screening,

  • monitoring backlogs,

  • or changes in customer behaviour,

yet the BWRA narrative remains unchanged.

Equally, improvements in control performance rarely reduce exposure. This results in assessments that describe risk but do not learn from it. Artificial intelligence might one day enable dynamic modelling, but actionable intelligence allows change now, if firms are willing to embed it.

A learning system, not a learning machine

When firms hear the term “learning system”, many instinctively think of machine learning or advanced analytics. But the learning system the FCA is implicitly calling for is operational, not technological.

A learning system in financial crime is simply:

A risk framework that absorbs its own evidence, interprets it, and updates exposure and controls accordingly.

It is the difference between a BWRA that documents risks and one that generates understanding. And it does not require AI. What it requires is the willingness to treat operational intelligence as evidence, not background noise.

The missing link: mapping intelligence to risk events

The main reason internal intelligence is not used effectively is that most firms lack a structured way to interpret it. QA failures, SAR themes, or operational issues are often treated as isolated observations rather than evidence of specific risk events.

For example:

  • A surge in mule-related SARs

  • A cluster of onboarding failures around source of wealth

  • Slower escalations during peak periods

  • A backlog in transaction monitoring

  • Staff taking shortcuts in pressured processes

These are not generic “issues”; they are evidence of precise risk events involving specific actors, actions, and exploited processes.

Without a risk-event architecture (actor, act, process abused, outcome, victim/beneficiary), intelligence floats unanchored. Artificial intelligence may one day help organise and cluster this data, but it cannot assign meaning until the organisation defines the meaning.

Actionable intelligence requires structure to become insight.
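
As a purely illustrative sketch, that structure could be as simple as a record per risk event and a record per intelligence signal that points at it. The field names, example event, and Python representation below are assumptions chosen for illustration, not a prescribed model:

```python
from dataclasses import dataclass


@dataclass
class RiskEvent:
    """One precisely defined risk event: who does what, through which
    process, to what outcome, and at whose expense or benefit."""
    actor: str                   # e.g. a recruited money mule
    act: str                     # e.g. receiving and forwarding fraud proceeds
    process_abused: str          # e.g. faster-payments account funding
    outcome: str                 # e.g. laundered proceeds leave the firm
    victim_or_beneficiary: str   # e.g. fraud victims / an organised crime group


@dataclass
class IntelligenceSignal:
    """A piece of internal intelligence (a SAR theme, QA finding, backlog,
    behaviour change) anchored to the risk event it is evidence of."""
    source: str        # e.g. "SARs", "QA", "transaction monitoring"
    description: str   # what was observed
    polarity: str      # "negative" (stress) or "positive" (resilience)
    risk_event: RiskEvent


# Illustrative only: a surge in mule-related SARs mapped to a specific event.
mule_event = RiskEvent(
    actor="money mule recruited via social media",
    act="receives and forwards fraud proceeds",
    process_abused="faster-payments account funding",
    outcome="laundered proceeds exit the firm",
    victim_or_beneficiary="fraud victims / organised crime group",
)

signal = IntelligenceSignal(
    source="SARs",
    description="quarter-on-quarter surge in mule-related SAR themes",
    polarity="negative",
    risk_event=mule_event,
)
```

Once intelligence is anchored in this way, the same observation stops being a generic "issue" and becomes evidence for or against a specific exposure rating.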

The real barrier isn’t technology, it’s discomfort

There is a reason firms often avoid making internal intelligence actionable: it can be uncomfortable. External typologies feel safe; they speak in broad terms and rarely expose internal weaknesses. Internal intelligence highlights control gaps, operational pressure points, behavioural issues, and cultural shortcuts. It is far easier to repeat sector-wide risks than to explain why a core control has failed repeatedly.

But this discomfort is precisely why internal intelligence is valuable. It tells you not only what risks you face, but how those risks interact with the realities of your systems, staff, and processes. It is the foundation of genuine understanding.

A practical feedback loop for actionable intelligence

If the industry is serious about intelligence-led risk assessments, it needs a simple, disciplined feedback loop that turns operational evidence into updated exposure and improved controls. A quarterly cycle might involve:

Step 1: Evidence capture (positive & negative)

Gather intelligence showing both stress and resilience:

  • emerging SAR themes,

  • repeated QA failures or breaches,

  • alert backlogs or delays,

  • evidence of mis-calibrated thresholds,

  • strong true positives,

  • high first-time-right rates,

  • smooth onboarding for legitimate customers,

  • timely staff escalations,

  • and evidence that recent improvements are working.

Step 2: Map intelligence to risk events

Which risk event does this evidence relate to? What act is occurring? Through which process? By whom? To what outcome?

Step 3: Interpret the implications

Does the evidence indicate control design issues, inconsistent execution, capacity pressures, training needs, or strong performance?

Step 4: Update exposure

Exposure should increase when risk events occur or controls show weakness, and decrease when controls demonstrably mitigate risk.

Step 5: Adapt controls and processes

Interventions should follow from evidence, not assumptions.

Step 6: Record what was learned

A short “learning log” ensures quarterly insight informs annual assessments.
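
Pulled together, the quarterly cycle can be expressed as a very small loop: evidence arrives already mapped to a risk event, exposure moves in the direction the evidence points, and each pass leaves a learning-log entry behind. The exposure scale, field names, and example evidence below are assumptions for illustration only; Step 5 (adapting controls) is a follow-on action outside the sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ExposureRecord:
    """Current exposure rating for one risk event, plus a learning log."""
    risk_event: str
    exposure: int = 3                     # illustrative 1 (low) to 5 (high) scale
    learning_log: List[str] = field(default_factory=list)


def run_quarterly_cycle(record: ExposureRecord, signals: List[Dict]) -> ExposureRecord:
    """One pass of the feedback loop for a single risk event.

    Each signal is assumed to arrive already mapped to a risk event (Step 2)
    and interpreted as evidence of stress or resilience (Step 3)."""
    for s in signals:
        if s["risk_event"] != record.risk_event:
            continue
        # Step 4: exposure rises on evidence of stress, falls on evidence of resilience.
        if s["polarity"] == "negative":
            record.exposure = min(5, record.exposure + 1)
        elif s["polarity"] == "positive":
            record.exposure = max(1, record.exposure - 1)
        # Step 6: record what was learned so the annual BWRA inherits it.
        record.learning_log.append(f"{s['source']}: {s['description']}")
    return record


# Illustrative usage: one negative and one positive signal for a single risk event.
record = ExposureRecord(risk_event="mule accounts used to launder fraud proceeds")
record = run_quarterly_cycle(record, [
    {"risk_event": "mule accounts used to launder fraud proceeds",
     "source": "SARs", "polarity": "negative",
     "description": "surge in mule-related SAR themes"},
    {"risk_event": "mule accounts used to launder fraud proceeds",
     "source": "QA", "polarity": "positive",
     "description": "sustained high first-time-right rates on escalations"},
])
print(record.exposure, record.learning_log)
```

The mechanics matter far less than the discipline: evidence in, exposure and controls updated, learning recorded.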

This is not a learning machine. It is a learning organisation.

The FCA’s message, and the opportunity

The FCA did not use the phrase “actionable intelligence”, but its expectations align squarely with it. The regulator criticised BWRAs that remain static, rely on generic statements, and fail to incorporate evidence. When the FCA warns that your risk assessment does not reflect the reality of your business, it is saying: you have intelligence, but you’re not making it actionable.

An intelligence-led BWRA is not a restatement of what everyone already knows. It is a lens through which new insight is generated and learning is embedded.

The firms that embrace this approach will not only improve their risk assessments, they will improve their understanding, their controls, and ultimately their effectiveness.

Artificial intelligence may shape the future. But actionable intelligence (operational, evidence-led, unfiltered intelligence) is what the industry needs right now. And it is already in the building.
