Will quantum computing solve financial crime?
This week’s announcement that Lloyds has been experimenting with quantum computing to identify money mule networks is, on one level, exactly what you would expect.
Financial crime is becoming more complex, more networked, and more embedded within legitimate activity. The idea that more advanced technology (capable of identifying patterns across vast transactional datasets) might help uncover what traditional approaches cannot feels both logical and necessary.
And it is. But it also sharpens a question that sits behind much of the current discussion on financial crime risk.
Not whether better technology helps, but whether our underlying frameworks are capable of using it effectively.
Are we treating better technology as the answer to financial crime risk, or as a way of avoiding a harder conversation about how our frameworks actually function?
It’s not the first time the industry has been here. Twenty years ago the same promise arrived with early transaction monitoring systems. Neural networks. AI-driven detection. Transformation of the alert management process. What many firms got instead was a spike in volumes that their operations couldn’t absorb, and a realisation that more signal didn’t automatically mean better decisions. The technology worked. The framework around it often didn’t.
Financial crime risk has never been just a data problem. Quantum computing doesn’t change that.
Three disciplines, and they don’t naturally fit together
Financial crime frameworks are shaped by three distinct disciplines. The problem is that they pull against each other in ways firms rarely acknowledge directly.
1. Rules and legal obligation. These are the foundation: legislation, regulation, sanctions regimes. Clear, non-negotiable, and the first thing anyone reaches for when a decision needs to be defended. When things go wrong, this is the lens that dominates. It is also blunt, slow to respond to emerging threat patterns, and structurally incapable of proportionality. Rules tell you what must not happen. They don’t tell you where to focus.
2. The risk-based approach. This is often where the most persistent operational failures surface. The gap between what firms believe their framework does and what it actually does is widest here, and it tends to stay hidden until it isn’t. The organising principle is sound: prioritise effort according to exposure, calibrate controls to risk. In practice, it drifts badly. Scores are assigned, matrices completed, ratings produced. The assessment runs to eighty pages and takes six months. And then doesn’t change a single control decision. Or can’t answer a supervisor’s question about how appetite connects to SAR volumes. Or, and this is the failure mode that receives the least attention, gets refreshed in response to a new threat typology without anyone formally asking what that means for monitoring parameters, control coverage, or risk appetite articulation.
That last point matters more than it appears. When intelligence capability improves faster than the risk framework supporting it, it doesn’t just create a gap, it creates a documentation problem the business-wide risk assessment (BWRA) was never designed to absorb. The firm is now acting on signals it cannot formally evidence. Better detection, paradoxically, can undermine defensibility.
3. The (emerging) intelligence-led discipline. This covers transaction monitoring, analytics, typologies, public-private partnerships, and increasingly advanced detection technologies, and it is where discovery happens: where patterns emerge that rules and risk frameworks cannot surface alone. But signals are probabilistic. Data sharing is constrained legally and reputationally. And when a supervisor asks how a typology update changed your monitoring parameters, “we discussed it at the Financial Crime Committee” is not an answer that survives scrutiny.
The integration failure nobody reports
The uncomfortable reality is that most firms are already operating across all three disciplines simultaneously. The question is not which one to choose, it is whether the interaction between them is understood or simply assumed to work.
In practice, each domain reports green while the connections between them are assumed rather than demonstrated. A model change drives a spike in alerts no one fully anticipated. An assessment is refreshed but doesn’t materially change how controls operate. Intelligence is discussed but doesn’t translate into formal outputs or MI. And under pressure (audit, regulatory review, a skilled person review) decisions fall back on rules because they are the easiest to evidence and defend.
Under SMCR, that matters beyond the governance question. The Senior Manager accountable for financial crime cannot point to green-reporting domains when the connections between them were assumed rather than demonstrated. Integration failure is not just a framework problem. It is a personal liability question.
This is not a new observation, but it becomes harder to ignore as detection capability improves.
The conclusion that follows from this is uncomfortable: most BWRAs are more accurately described as compliance artefacts than risk management tools. They demonstrate effort. They satisfy a process requirement. The firms that handle regulatory challenge best have recognised this dynamic, and, in practice, separated how risk is managed from how it is formally articulated, with the BWRA functioning more as a governance record than an operating framework.
That is not a comfortable position for the risk-based approach, which was supposed to be the connective tissue between rules and intelligence. The evidence that it isn’t (that it sits alongside both rather than integrating them) is visible in almost every regulatory review that goes beyond surface-level documentation testing.
A final thought
Quantum computing will improve detection. It won’t close the loop back to risk assessment, appetite, or accountability, and that is where most frameworks are already struggling.
If your framework brings together rules, risk, and intelligence, as most now should, how do they actually interact?
Or, a more uncomfortable question still: do you actually know which of the three you are optimising for, or is that being decided implicitly?