Assessing the BWRA: Why being “technically compliant” is no longer enough

 

In earlier pieces in this series, I have been deliberately critical of the status quo.

I have argued that many Business-Wide Risk Assessments are not fit for purpose, not because firms are inattentive or careless, but because too many BWRAs have become static, generic, and disconnected from how financial crime risk is actually managed in practice. In one piece, I put it starkly: if a BWRA doesn’t shape decisions, it isn’t really a BWRA at all. In another, I questioned whether enforcement action was reinforcing the right lessons, or simply encouraging better documentation of fundamentally unchanged approaches.

Those arguments still stand.

But over recent months, through many conversations with MLROs, Heads of Financial Crime, colleagues and senior practitioners across different sectors, I’ve begun to think that something else is happening alongside those debates. In many cases, we may have been talking at cross purposes, not about whether BWRAs are “good” or “bad”, but about what they are actually for.

That distinction matters more than it might first appear.

Fit for purpose, but for which purpose?

A common response to my earlier criticism goes something like this:

“Our BWRA is technically compliant. It’s been through audit. It’s been challenged by supervisors. In what sense is it ‘not fit for purpose’?”

This is a reasonable challenge, and an important one. For a long time, the implicit purpose of a BWRA was relatively narrow and largely procedural. It was expected to demonstrate that a firm understood the financial crime risks it faced, that prescribed risk factors had been considered, and that there was appropriate governance around approval, ownership, and periodic review.

Against that purpose, many BWRAs are fit for purpose. They evidence awareness. They demonstrate process. They can be defended under scrutiny.

The difficulty is that this is no longer the only, or even the primary, expectation being placed on BWRAs (if it ever should have been).

Alongside the traditional compliance objective, a second expectation has been steadily emerging: that the BWRA should actively inform judgement, prioritisation, and action. That it should help firms decide where to invest, where to accept risk, where to simplify, and where to intervene. That it should be something senior management can use, not just approve. It should be the bedrock of the risk-based approach.

It is here that compliance and value begin to diverge.

When compliance becomes a proxy for usefulness

This divergence explains a persistent unease I hear across the industry. Many experienced MLROs can point to BWRAs that meet regulatory expectations, are internally consistent, and have evolved incrementally over time, yet still find themselves struggling when asked some very basic questions.

Why are these risks genuinely more important than those ones? What decisions changed because of this assessment? Where have we consciously chosen to do less, not more, as a result?

These are not trick questions. Nor are they unfair. They go to the heart of what a risk assessment is meant to do. And yet they are often surprisingly difficult to answer with confidence.

In earlier writing on enforcement, I suggested that we sometimes mistake defensibility for effectiveness. The same phenomenon is at work here, viewed from a different angle. The issue is not that BWRAs are “wrong”. It is that technical compliance has gradually become a stand-in for value.

Over time, we have become very good at producing BWRAs that can survive challenge. We have been less disciplined about asking whether they are actually earning their place in decision-making.

Reframing the critique: value as the missing test

Seen this way, the criticism of the status quo becomes more precise, and harder to dismiss. When we say that many BWRAs are “not fit for purpose”, what we often mean is not that they are poorly executed or non-compliant, but that they struggle to demonstrate value in ways that now matter.

At its core, a BWRA should give the organisation confidence in its understanding of risk. Not just that risks have been listed or categorised, but that they can be clearly articulated and defended when challenged, whether by supervisors, auditors, or the Board. Confidence here is not about rhetorical polish; it is about whether the assessment reflects how the business actually operates, where it is exposed, and why certain risks are judged to be more significant than others.

Beyond explanation, an effective BWRA should shape decisions. This is where many assessments quietly fall short. If the output of the BWRA does not influence prioritisation, investment, simplification, or trade-offs, then its role is largely ceremonial. Decision-usefulness does not require precision or false certainty, but it does require that the assessment meaningfully informs judgement, rather than sitting alongside it.

There is also a more practical test: whether effort and controls are being deployed where they matter most. A BWRA that concludes everything is important offers little help to the organisation. Value emerges when the assessment supports proportionate responses, when it helps firms distinguish between what is genuinely critical, what is adequate, and what may be legacy or habit. This is often where the conversation becomes uncomfortable, because it forces explicit choices about where not to invest.

Finally, a BWRA should support learning. Financial crime risk does not stand still, and neither do business models, delivery channels, or threat actors. An assessment that can only be refreshed through periodic, resource-intensive exercises is poorly suited to this reality. Value increasingly lies in whether the organisation can absorb new information, challenge prior assumptions, and adapt its understanding of risk over time, without having to start from scratch.

Taken together, these are not exotic or aspirational ambitions. They are simply the logical consequences of asking a BWRA to do more than satisfy a formal requirement.

Why our usual measures struggle to capture this

Part of the difficulty is that we continue to judge BWRAs by what is easiest to evidence rather than by what is most revealing. We look for the presence of scores and heatmaps, the completeness of prescribed sections, and the regularity of review cycles. These signals are not meaningless, but they are, at best, proxies.

  • They tell us whether the BWRA exists.

  • They tell us whether it is orderly.

  • They tell us whether it can be defended procedurally.

They tell us far less about whether the organisation genuinely understands its risk, whether decisions have changed as a result, whether resources are being directed with intent, or whether the assessment can evolve as circumstances change.

This is why discussions about BWRAs so often talk past one another. One side points to compliance and structure; the other points to frustration and missed opportunity. Both perspectives are valid; they are simply answering different questions.

From critique to construct: testing for value in practice

Framed properly, this is not a retreat from earlier criticism. There is no contradiction between saying “most firms have compliant BWRAs” and “many BWRAs are not delivering sufficient value”. Those statements coexist quite comfortably once we separate form from outcome.

Earlier pieces in this series focused on what BWRAs are failing to do. This piece focuses on how we might recognise when they succeed. That shift matters, because it moves the conversation away from abstract critique and towards something more practical: what would it actually look like for a BWRA to work well?

Once value becomes the test, different questions come into focus. Can the organisation clearly explain and defend its understanding of risk under challenge, without defaulting to process or documentation? Do the conclusions of the BWRA genuinely shape decisions about priorities, investment, or trade-offs? Is effort being deployed where it matters most, rather than being spread evenly in the name of consistency? And does the assessment support learning, allowing assumptions to be challenged and updated as threats, intelligence, and business models evolve?

These are not theoretical questions. They are the questions Boards, supervisors, and senior management are increasingly asking, even if not always explicitly.

Why many organisations struggle to answer those questions

In practice, many firms find these questions difficult to address because they sit awkwardly between two familiar forms of assurance.

On one side sits technical compliance: has the BWRA considered the right risk factors, followed the right process, and been approved through the right governance? On the other sits operational effectiveness: are controls working, are issues being remediated, are outcomes improving?

The value of the BWRA itself often falls between these stools. It is neither a pure compliance artefact nor an operational control test, and as a result it is rarely assessed on its own terms. This helps explain why organisations can feel confident in the existence of their BWRA while remaining uncertain about what it actually enables them to do.

A Pathfinder-style lens on BWRA value

One response we have increasingly seen, and deliberately adopted in our own work, is to assess BWRAs explicitly through a value and effectiveness lens, rather than as another exercise in technical gap-analysis. Internally, we refer to this as a Pathfinder approach.

The aim is not to re-score inherent risk or to second-guess regulatory interpretation. Instead, it asks a simpler but more revealing question: to what extent does the BWRA support confident judgement, prioritised action, and adaptation over time?

Viewed this way, the BWRA becomes something that can be tested, not against a checklist, but against the outcomes it is meant to support. This framing is closely aligned with the direction of travel in effectiveness-focused regulatory thinking, including the emphasis on understanding, judgement, and proportionality reflected in Wolfsberg-style principles.

Crucially, it also creates space for constructive conversation. By separating questions of value from questions of technical compliance, organisations can identify where a BWRA is genuinely strong, where it is merely adequate, and where it may be acting as a constraint rather than an enabler.

A closing reflection

If supervisors are asking harder questions of BWRAs, and they are, it may only be a matter of time before Boards do the same, not out of scepticism but out of reliance. At that point, “compliant” will no longer be a sufficient answer.

Being clearer about the value a BWRA is meant to deliver, and how we would recognise that value in practice, does not blunt earlier criticism of the status quo. It sharpens it. Because once value becomes the test, it is no longer enough for a BWRA to be technically sound.

It must also be decision-useful, proportionate, and alive to change.

And that, ultimately, is the purpose many of us were arguing for all along.

Next

Financial crime risk and operational risk: Same threats, different questions