Building it might have been the easy part


An alternative to the BWRA

Over the past year, I’ve written a number of times about the limitations of the business-wide risk assessment (BWRA).

Not because firms don’t have them; most do. Not because people aren’t trying; they are. But because, despite that effort, many BWRAs still struggle to do the thing they are supposed to do: explain how financial crime risk actually presents itself within a business, and what is being done about it.

The themes have been consistent. The gap isn’t just one of execution. It’s structural: a version of likelihood × impact that produces ratings without producing understanding, risk events that are often detached from how financial crime actually occurs, and residual risk treated as a calculation rather than a judgement that needs to be defensible.

For most of my career, my role was to diagnose those problems and suggest alternatives. Over the last year, the question has shifted: not whether the BWRA could be done differently in theory, but whether something different could actually be built.

Building it has been hard. But increasingly, it feels like that might have been the easier part.

The specific failure

One of the points I’ve come back to repeatedly is how likelihood is derived.

Most frameworks treat it as a question of subjective probability. What it should be anchored in is exposure: who your customers are, where they operate, how your products behave, and which parts of the business are most exposed to exploitation. That distinction sounds straightforward. In practice, it changes almost everything about how the assessment behaves.

The consequence shows up in a specific way. When two business units with genuinely different customer bases produce near-identical exposure assessments, the model isn’t broken; it just isn’t forcing that difference to surface. Consistency has been achieved through convergence: different parts of the business arriving at the same answer for entirely different reasons. The assessment looks coherent. It is not telling anyone anything useful.

Residual risk compounds the problem. The most common failure is not deliberate. It is structural: optimism about control effectiveness quietly compressing the residual rating below what the inherent position actually warrants. The narrative supports a lower rating. The rating gets used. Nobody has technically done anything wrong.

What building it actually required

In earlier blogs, I’ve set out what a more structured approach to exposure, risk events, and residual risk might look like in principle.

The challenge over the last year has been translating that into something that actually works in practice, something that forces those distinctions to hold under pressure, rather than being explained away in narrative.

In the platform we have built, exposure is structured across the risk factor categories that do the most analytical work: customer, because it determines who has access to your products and under what conditions; delivery channel, because it shapes whether human judgement or automated processing is the primary control point; and geography, because it drives the external threat environment that your customer base is embedded in.
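
To make that structure concrete, here is a minimal sketch of how exposure could be recorded across those three categories. It is illustrative only: the class, field names, and ratings are simplified assumptions for this post, not the platform’s actual data model.

```python
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class ExposureAssessment:
    """Exposure recorded per risk factor category, with the reasoning behind each rating."""
    customer: Rating           # who has access to the products, and under what conditions
    delivery_channel: Rating   # whether human judgement or automated processing is the control point
    geography: Rating          # the external threat environment the customer base is embedded in
    rationale: dict[str, str]  # the articulation of where and why, per category


# Illustrative example: a retail unit with remote onboarding and higher-risk corridors
retail_unit = ExposureAssessment(
    customer=Rating.MEDIUM,
    delivery_channel=Rating.HIGH,
    geography=Rating.HIGH,
    rationale={
        "customer": "Mass-market retail base; broad access with light onboarding conditions",
        "delivery_channel": "Fully remote onboarding; automated processing is the primary control point",
        "geography": "Material customer concentration in higher-risk jurisdictions",
    },
)
```

The point of the rationale field is the same as the point of the methodology: a rating without the articulation of where and why is not accepted as an answer.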

Each risk event selects a primary driver from those categories. The structure is designed to force a specific articulation (not just that a firm is exposed, but where and why) and to surface where that articulation does not hold up.

Each risk event follows a causal schema: Actor, Act, Mechanism, Outcome. A money mule event, for example, is not expressed as “money mule risk”; it is expressed as a recruited account holder receiving and dispersing criminal proceeds through a retail current account, exploiting high-volume, low-value transaction patterns. That level of specificity changes the control question. It makes the mapping from event to control direct rather than forced.
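
Purely as an illustration, that money mule event could be captured in a structure like the one below. This is a simplified sketch rather than the platform’s actual schema; the field names and the choice of customer as the primary driver are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class RiskEvent:
    """A risk event expressed as a causal chain rather than a label."""
    name: str
    primary_driver: str  # the risk factor category that drives it: customer, delivery_channel, or geography
    actor: str           # who carries out the activity
    act: str             # what they actually do
    mechanism: str       # the product or process feature being exploited
    outcome: str         # the financial crime outcome that results


# The money mule example from the text, written through the schema
money_mule = RiskEvent(
    name="Money mule activity in retail current accounts",
    primary_driver="customer",
    actor="Recruited account holder",
    act="Receives and disperses criminal proceeds",
    mechanism="High-volume, low-value transaction patterns through a retail current account",
    outcome="Criminal proceeds layered through the retail book",
)
```

Once an event is written this way, the control question becomes specific: what actually interrupts that mechanism, for that actor, in that product.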

Residual risk is anchored with a hard floor. Where inherent risk is assessed as high and controls as weak or absent, the residual rating cannot fall below a defined threshold regardless of the narrative. Where the methodology bends, where a firm’s profile genuinely does not fit, that is documented as a structured exception requiring explicit sign-off rather than quiet accommodation.
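
The floor itself can be sketched as a simple rule. Again, this is illustrative rather than the platform’s actual logic; it reuses the Rating enum from the exposure sketch above, and the specific threshold shown is an assumption.

```python
def residual_rating(inherent: Rating, control_strength: str, proposed: Rating) -> Rating:
    """Apply the hard floor: where inherent risk is high and controls are weak or
    absent, the residual rating cannot fall below the floor, whatever the narrative says."""
    floor = Rating.MEDIUM  # illustrative threshold; the real threshold is defined by the methodology
    if inherent is Rating.HIGH and control_strength in ("weak", "absent"):
        return max(proposed, floor, key=lambda r: r.value)
    return proposed


# A narrative arguing for LOW cannot pull the rating below the floor
print(residual_rating(Rating.HIGH, "weak", Rating.LOW))  # Rating.MEDIUM
```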

The constraint you do not see coming

One of the trade-offs I’ve touched on before, but only really understood through implementation, is flexibility.

In theory, giving firms the ability to tailor an assessment framework to their own risk profile sounds like a strength. In practice, it is often what allows the underlying issues to persist, just in slightly different forms across different firms and different assessment cycles.

So we took the opposite approach (at least initially). Less optionality. More constraint. A defined methodology that forces certain questions to be answered in a particular way. That creates a different problem: what happens when a firm’s risk profile does not fit neatly within it.

That is a real challenge, and we have not resolved it completely. The structured exception described earlier is how the methodology handles it: where it bends, the bend is documented and signed off explicitly rather than quietly accommodated. Whether that is sufficient is one of the things May will test.

Contact with reality

Up to this point, most of what I’ve written has been about how the framework should work: structurally, conceptually, and methodologically.

Where this now gets tested is somewhere much less forgiving: whether it holds when it is actually used.

The design has been stress-tested against its own logic. The harder test is a different one: whether a firm’s MLRO, under time pressure and with a Board deadline approaching, will accept a residual risk rating the methodology will not let them lower. That is the specific point of friction. Everything else is preparation for it.

In May, we will be taking the platform into a public setting for the first time. Not to demonstrate that it works in theory. To find out whether it holds when the answers actually matter.

Because building something different is one thing. Ensuring that it holds is something else entirely.

Next

Product governance frameworks: Building the foundation for regulatory confidence