From configuration to comprehension: What Coinbase’s fine reveals about control design


The €21 million fine against Coinbase was, on its surface, about transaction-monitoring failures.

But in reality it points to deeper questions about control design, and to what can happen when firms struggle to connect the intelligence they receive to the controls they deploy.

In a previous blog, I explored how firms digest external intelligence (typologies, advisories, law-enforcement updates) and how much of that insight can be lost in translation between understanding and implementation.

The Coinbase case illustrates that translation gap in practice.

From Intelligence to Control

When the Central Bank of Ireland published its settlement notice, the headlines focused on configuration errors.

Five of Coinbase’s 21 monitoring rules failed silently for over a year, leaving around €176 billion (roughly 31% of total transaction volume) unmonitored.

Those failures were technical in nature, but they point to a broader issue: how effectively firms translate intelligence about criminal behaviour into the design and calibration of their transaction-monitoring controls.

What failed may not just have been code but comprehension: the link between what we know and how we detect it.

The Limits of Scenario Thinking

Like many institutions, Coinbase relied on transaction monitoring built around a set of pre-defined scenarios representing high-risk typologies, such as interactions with high-risk blockchain addresses or rapid transaction velocity.

That’s common practice, but it also shows the limits of “scenario thinking.”

Firms often measure coverage, the number of rules or typologies monitored, rather than understanding whether those controls are truly mapped to the risks they intend to manage.

When several rules stopped working, Coinbase appears to have had no clear way to identify which specific risk types or behaviours were no longer being monitored.

That’s not unique to crypto; it’s an industry-wide pattern that emerges whenever control logic isn’t explicitly tied back to defined risk events.
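
To make the pattern concrete, here is a minimal sketch in Python of what an explicit tie-back looks like. The rule and risk-event names are hypothetical, invented for illustration and not drawn from Coinbase’s configuration; the point is that when a rule fails, the uncovered risks can be named immediately rather than reconstructed after the fact.

```python
# Minimal sketch: each monitoring rule is mapped to the risk events it is
# meant to detect, so a rule failure translates into a named coverage gap.
# Rule and risk-event names are hypothetical, for illustration only.

RULE_TO_RISK_EVENTS = {
    "high_risk_address_interaction": {"mixer_obfuscation", "sanctioned_wallet_exposure"},
    "rapid_transaction_velocity": {"layering_via_rapid_movement"},
    "structuring_below_threshold": {"placement_by_structuring"},
}

def coverage_gaps(failed_rules: set[str]) -> set[str]:
    """Return the risk events left with no healthy rule covering them."""
    healthy = set(RULE_TO_RISK_EVENTS) - failed_rules
    still_covered = set().union(*(RULE_TO_RISK_EVENTS[r] for r in healthy))
    all_events = set().union(*RULE_TO_RISK_EVENTS.values())
    return all_events - still_covered

# If velocity monitoring fails silently, the gap has a name:
print(coverage_gaps({"rapid_transaction_velocity"}))
# {'layering_via_rapid_movement'}
```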

Designing Controls Through the Risk Event

Our enhanced BWRA methodology starts from the risk event (defined through the actor, act, process, outcome, and victim or beneficiary) and then asks:

Which control types (preventive, detective, corrective, or directive) meaningfully interrupt this event?

Applied to transaction monitoring, this means designing and testing rules through the lens of specific events, not abstract typologies.

For example:

  • If the event is “criminal uses a mixing service to obscure source of funds,” calibration should focus on behavioural interaction with known obfuscation tools, not just address lists.

  • If the event is “employee colludes with a criminal to onboard and maintain fraudulent accounts,” effective calibration goes beyond simple onboarding thresholds. It would involve cross-referencing internal activity data with unusual approval patterns, timing anomalies, or overrides of automated checks: the kinds of signals that suggest internal complicity rather than procedural error.

Framing control logic through these events ensures every rule has a traceable purpose, and when something stops working, the firm knows exactly which risks have fallen out of view.
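
As a rough sketch of how that framing might be represented, using the five defining elements above (the schema itself is my illustration, not a prescribed BWRA format):

```python
from dataclasses import dataclass, field
from enum import Enum

class ControlType(Enum):
    PREVENTIVE = "preventive"
    DETECTIVE = "detective"
    CORRECTIVE = "corrective"
    DIRECTIVE = "directive"

@dataclass(frozen=True)
class RiskEvent:
    # The five defining elements of a risk event, as described above.
    actor: str
    act: str
    process: str
    outcome: str
    victim_or_beneficiary: str

@dataclass
class Control:
    name: str
    control_type: ControlType
    interrupts: list[RiskEvent] = field(default_factory=list)

mixing_event = RiskEvent(
    actor="criminal",
    act="uses a mixing service",
    process="on-chain transfer through obfuscation tooling",
    outcome="source of funds obscured",
    victim_or_beneficiary="criminal (beneficiary)",
)

mixer_rule = Control(
    name="behavioural_mixer_interaction",
    control_type=ControlType.DETECTIVE,
    interrupts=[mixing_event],
)
```

Because each control carries the events it is meant to interrupt, “which risks fell out of view when this rule broke?” becomes a lookup rather than an investigation.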

Outsourcing and the Illusion of Assurance

Coinbase Europe outsourced its monitoring to its U.S. affiliate but remained legally responsible for compliance in Ireland.

The Central Bank’s findings underline a familiar challenge: how far reliance on group-operated systems can obscure local accountability.

A mature BWRA framework isolates outsourcing and oversight as a discrete control domain.

It asks whether the institution can demonstrate genuine assurance over controls operated elsewhere, understanding not just that they exist, but how they function and how issues are escalated.

Without that assurance chain, monitoring easily becomes a black box: data goes in, alerts come out, but the connection between risk event and detection rule is opaque.
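
One way to keep that box open, sketched here with illustrative field names rather than anything drawn from the settlement notice, is to treat assurance over outsourced controls as data the local entity must hold and refresh:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutsourcedControlAssurance:
    control_name: str
    operated_by: str                      # e.g. a group affiliate abroad
    last_effectiveness_test: date | None  # evidence of how the control functions
    escalation_path_documented: bool      # how issues reach the local entity

def assurance_gaps(records: list[OutsourcedControlAssurance],
                   max_age_days: int = 365) -> list[str]:
    """Flag controls the local entity cannot currently evidence."""
    gaps = []
    for r in records:
        stale = (r.last_effectiveness_test is None
                 or (date.today() - r.last_effectiveness_test).days > max_age_days)
        if stale:
            gaps.append(f"{r.control_name}: no recent effectiveness evidence")
        if not r.escalation_path_documented:
            gaps.append(f"{r.control_name}: escalation path undocumented")
    return gaps
```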

Timeliness as a Measure of Effectiveness

The settlement notice records that Coinbase’s rescreening exercise spanned almost three years.

By the time suspicious transaction reports were filed, much of their potential intelligence value had been lost.

That delay reinforces an important design principle: control effectiveness is temporal, not static.

A control that identifies risk years later isn’t truly effective; it’s diagnostic, not preventive.

Control design should therefore consider not just coverage and logic, but timeliness: how quickly a control transforms data into insight and insight into action.
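
One way to make timeliness measurable, sketched here with invented records and an illustrative threshold, is to track the latency between the monitored activity and the resulting report for each control:

```python
from datetime import datetime
from statistics import median

# Hypothetical alert records: when the activity occurred vs. when it was reported.
alerts = [
    {"occurred": datetime(2021, 3, 1), "reported": datetime(2021, 3, 4)},
    {"occurred": datetime(2021, 5, 9), "reported": datetime(2024, 1, 15)},
]

def median_latency_days(alerts) -> float:
    """Median days between the monitored activity and the resulting report."""
    return median((a["reported"] - a["occurred"]).days for a in alerts)

MAX_USEFUL_DAYS = 30  # illustrative threshold, not a regulatory figure

latency = median_latency_days(alerts)
if latency > MAX_USEFUL_DAYS:
    print(f"Diagnostic, not preventive: median latency {latency:.0f} days")
```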

Why Comprehension Matters More Than Configuration

Coinbase’s case underscores what can happen when compliance infrastructure advances faster than risk understanding.

Rules, scenarios, and typologies only work when anchored to clearly defined risk events.

Without that anchor, firms can’t tell whether their systems are working as intended, or which risks slip through when they’re not.

The difference between configuration and comprehension is the difference between monitoring activity and managing risk.

Control design is where that difference becomes visible.

Closing Thought

The lesson from Coinbase is not that technology failed, but that the relationship between intelligence and control may not have been sufficiently explicit.

If risk understanding begins with intelligence, it ends with control design, and every step in between must be deliberate, traceable, and testable.

That’s what transforms monitoring from configuration to comprehension.
