What Mickey Mouse can teach us about transaction monitoring


Transaction monitoring is one of the most powerful controls in the financial crime framework, and sometimes one of the hardest to keep in check. I was reminded of that recently through a slightly unexpected reference.

Strictly speaking, the story comes from Goethe’s The Sorcerer’s Apprentice, although if I am honest it was Disney’s Fantasia that made it memorable. In the scene, Mickey Mouse discovers a spell that animates a broom to carry water for him, an ingenious idea, at least initially, as the broom does the work while he relaxes.

The problem, of course, is that he does not fully understand how the spell works. The broom keeps going. The buckets keep filling. Soon the room is flooded. Attempts to stop the process only make matters worse as more enchanted brooms appear, each continuing the same task with relentless efficiency.

It is a story about the unintended consequences of unleashing a powerful mechanism without fully understanding how it should be controlled. At times, transaction monitoring systems can feel uncomfortably similar.

Over the past two decades financial institutions have invested heavily in monitoring technology designed to detect suspicious behaviour across millions of transactions. The idea is simple: if unusual patterns of activity can be identified, intervention becomes possible before illicit funds move further through the financial system.

Yet as monitoring frameworks evolve, many institutions confront a familiar challenge. Scenario libraries expand, typologies accumulate and alert volumes increase. The system continues doing exactly what it was designed to do (generating signals), but the volume of those signals becomes harder to interpret. The result is not unlike Mickey’s workshop in Fantasia: a process that once seemed elegant gradually becomes difficult to control.

The question is not whether transaction monitoring is valuable. It clearly is. The question is whether we have gradually asked it to do more than it was ever designed to do.

The promise of the risk-based approach

Returning to familiar territory, what initially looks like a transaction monitoring problem is also a business-wide risk assessment (BWRA) problem. More specifically, it is a relationship problem.

In principle the relationship between the two should be straightforward. As I have noted previously, the BWRA identifies where financial crime risk arises across the organisation. Transaction monitoring, on the other hand, is one of the mechanisms designed to detect those risks in practice.

Yet in many institutions the connection between the two remains surprisingly weak. The BWRA evolves through governance cycles and exposure assessments. Transaction monitoring frameworks, by contrast, evolve through scenario tuning exercises driven primarily by alert volumes, remediation programmes or vendor updates.

Both frameworks exist and both are documented, but the link between them is often implicit (at best) rather than explicit. Which raises a simple question: if the BWRA identifies the firm’s financial crime risks, how exactly is that intelligence shaping the system designed to detect them?

Why the two frameworks drift apart

Several structural factors contribute to this disconnect. In many institutions the BWRA is owned by financial crime risk or policy teams, while transaction monitoring sits with operations or technology functions. The two groups therefore operate with different priorities and time horizons.

Transaction monitoring systems are also frequently built around vendor rule libraries. These often reflect what is easy to monitor rather than what actually needs to be monitored within the institution’s own business model.

Over time monitoring frameworks evolve through tuning exercises designed primarily to manage alert volumes and investigation capacity. The BWRA, meanwhile, evolves through governance processes intended to demonstrate that risks have been considered and documented. The two processes therefore move forward largely in parallel: one describing where risk exists, the other attempting to detect it in practice, often without a shared understanding of how those risks actually manifest within the institution’s business model.

The missing intelligence pipeline

There is a deeper structural issue beneath this disconnect. Both the BWRA and transaction monitoring frameworks should be informed by external financial crime intelligence: the supervisory publications, enforcement cases and typology reports that describe how financial crime occurs in practice.

But intelligence rarely works well when it jumps straight into monitoring rules. It needs to inform the organisation’s control architecture first.

Financial crime intelligence should inform three things: where risk exists, how it manifests and how it is detected (or prevented).

In practice this means intelligence can influence three distinct layers.

  • First, exposure characteristics. External reporting often highlights the customers, jurisdictions or products associated with particular risks.

  • Second, risk events. Typologies and case studies describe how criminals exploit financial institutions in practice.

  • Third, detection scenarios. Those insights can then be translated into monitoring rules designed to detect behavioural patterns within transactional data.

This creates a logical flow:

External intelligence → Exposure understanding → Risk events → Detection scenarios
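The flow above is organisational rather than technological, but a minimal sketch may help make it concrete. The layer names and example items below are illustrative assumptions for this article, not part of any real framework or vendor product; the point is simply that each piece of intelligence is routed to the layer it informs, rather than defaulting straight to a monitoring rule:

```python
from dataclasses import dataclass

# Hypothetical layer names mirroring the three-step flow above;
# a real implementation would use the institution's own taxonomy.
LAYERS = ("exposure", "risk_event", "detection_scenario")

@dataclass
class IntelligenceItem:
    """A single piece of external intelligence (e.g. a typology report)."""
    source: str
    summary: str
    layer: str  # which layer of the control architecture it informs

def route_intelligence(items):
    """Group intelligence items by the layer they should inform,
    instead of converting every item into a monitoring scenario."""
    routed = {layer: [] for layer in LAYERS}
    for item in items:
        if item.layer not in routed:
            raise ValueError(f"unknown layer: {item.layer}")
        routed[item.layer].append(item)
    return routed

# Illustrative items only; sources and summaries are invented examples.
items = [
    IntelligenceItem("typology report", "trade-based laundering patterns", "risk_event"),
    IntelligenceItem("supervisory publication", "high-risk jurisdiction exposure", "exposure"),
    IntelligenceItem("enforcement case", "structuring below reporting thresholds", "detection_scenario"),
]
routed = route_intelligence(items)
```

Under this kind of triage, only the subset of intelligence that lands in the detection layer becomes a candidate for a new monitoring rule; the rest updates the BWRA's view of exposure and risk events instead.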

When this structure exists, monitoring frameworks clearly reflect the risks identified in the BWRA.

When it does not, something else happens. External intelligence feeds directly into monitoring scenarios while the BWRA evolves independently as a governance artefact. Over time the two frameworks drift further apart.

When typologies become monitoring architecture

Not unlike the apprentice’s spell, which continues multiplying long after the original task has been forgotten, transaction monitoring gradually becomes the place where every emerging typology is expected to be addressed.

Yet not every typology naturally translates into a monitoring rule. Some insights are primarily about exposure, while others describe risk events. Only a subset translate into detectable behavioural patterns within transactional data.

Unfortunately the typical response is to convert each new typology into another monitoring scenario.

Many practitioners will recognise the pattern. A new typology is published (perhaps involving trade-based laundering or complex cross-border flows) and the immediate question becomes whether a new monitoring rule should be introduced. In practice the institution’s business model may have little exposure to those activities, but the scenario often appears in the monitoring framework anyway.

Over time monitoring frameworks accumulate rules that are marginally relevant to the institution’s business model or difficult to detect through transactional data. The operational consequences are familiar: expanding scenario libraries, growing investigative backlogs and increasing difficulty distinguishing genuine risk signals from routine alerts.

The technology paradox

Against this backdrop, many institutions are exploring the use of artificial intelligence and machine learning to improve transaction monitoring. The enthusiasm is understandable. Advanced analytics has the potential to identify complex patterns, reduce false positives and improve investigative efficiency.

But there is a structural issue that receives less attention. If the monitoring framework is populated with scenarios reflecting a broad collection of global typologies rather than the organisation’s own risk profile, the data it produces will inevitably contain significant noise.

Models trained on that data will inherit those characteristics.

In other words, increasingly sophisticated technology risks being applied to optimise a detection architecture that may not be properly aligned with the organisation’s risk intelligence in the first place.

To return briefly to the earlier metaphor, this can sometimes feel like attempting to solve the flooding workshop in The Sorcerer’s Apprentice by casting a larger spell. The mechanism becomes more powerful, but the underlying problem, understanding how the system should be controlled, remains unresolved.

Advanced analytics may still play an important role, but its effectiveness will depend heavily on the quality and relevance of the signals it receives.

A question worth asking

The purpose of the risk-based approach has always been straightforward: financial crime controls should be proportionate to the risks faced by the organisation. That requires firms to understand where those risks arise and direct controls towards the areas where they matter most.

Yet in many institutions transaction monitoring frameworks evolve largely independently from the risk assessment that is supposed to inform that prioritisation. External intelligence continues to generate new typologies. Monitoring frameworks continue to expand. Meanwhile the BWRA continues to be refreshed through governance cycles.

All three activities have value. But without a clearer structure linking them together they risk becoming parallel exercises rather than parts of a coherent risk architecture.

Over time the monitoring framework expands, new typologies appear and additional scenarios are introduced, each attempting to address the latest threat. The system becomes increasingly powerful, but also increasingly difficult to interpret and control.

Which raises a more uncomfortable possibility.

Has transaction monitoring become the Sorcerer’s Apprentice of the risk-based approach: a powerful mechanism we keep asking to do more, while the real challenge lies in the design of the system itself?

