When “love” becomes leverage: Understanding romance fraud

The FCA’s latest review on romance fraud makes for tough reading. Real people are tricked into sending money to someone they believe cares about them. It’s painful, human, and costly. 

But here’s the key point for risk teams and leaders: romance fraud is not a brand-new type of fraud. It is a familiar problem (authorised push-payment (APP) scams and sometimes fake “investment” pitches) delivered through a social-engineering vector (in this case, a fake romantic relationship). If we only chase the headline label (“romance fraud”), we miss the mechanics we can actually fix. 

The better lens: Break the event into parts 

In our enhanced Business Wide Risk Assessment methodology, every financial crime (including fraud) is described the same way. That consistency makes the problem clearer, and the solutions more targeted. 

Our schema (in plain English): 

  • Threat actor: Who is doing it? 

  • Threat act: What are they doing to make it work? 

  • Process abused: Which step in our own process are they exploiting? 

  • Outcome: What happens as a result? 

  • Victim / Beneficiary: Who loses, and who gains? 

Applied to romance-initiated scams: 

  • Threat actor: A social engineer or organised group posing as a partner. 

  • Threat act: Building trust, then inducing a voluntary payment or “investment.” This is the social-engineering vector: a behavioural way in. 

  • Process abused: The customer-initiated payment journey (new payee setup, confirmations) and, often, the receiving bank’s weak mule controls. Sometimes, funds are pushed onto crypto rails to make recovery harder. 

  • Outcome: Money leaves the customer’s account and is quickly moved through mule networks. 

  • Victim / Beneficiary: The customer loses funds; the fraudster and their mule network benefit. The firm carries operational, regulatory, and reputational risk. 

Seen this way, “romance” is the storyline, not the category. The fix lives in the abused process. 

Why this framing matters 

Most commentary stops at “more training and awareness.” We go further by designing controls around the process steps that are actually misused. That’s the advantage of modelling financial crime and fraud as actor → act → process → outcome → victim/beneficiary (where applicable): 

  • You can tag the vector (“social engineering: romance”) without inventing a new category. 

  • You can attach controls exactly where the attack lands (payment initiation, new payee, beneficiary account, on-ramp to crypto). 

  • You can measure harm reduction in a way that’s meaningful to customers, boards, and regulators. 

What to do differently (practical, not theoretical) 

1. Add “vector tags” with simple triggers 

Keep the event as an APP/investment scam, but tag the social-engineering vector so your systems can spot it. Triggers might include: 

  • Several payments to a brand-new payee, especially where values ramp up over time. 

  • Customers who can’t explain the purpose clearly or change their story when asked. 

  • Out-of-pattern behaviour (unusual times, devices, or locations). 

These are plain, observable signals that can prompt a pause or a conversation. 
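As an illustration, those triggers can be expressed as simple, explainable rules. This is a hypothetical sketch only: the `Payment` fields, the three-payment minimum, and the 1.5x ramp threshold are assumptions, not a production rule set.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Payment:
    payee_id: str   # beneficiary identifier
    amount: float   # payment value
    device_id: str  # device used to initiate the payment

def romance_vector_tags(payments: List[Payment],
                        known_payees: Set[str],
                        usual_devices: Set[str],
                        ramp_factor: float = 1.5) -> List[str]:
    """Tag an APP payment sequence with social-engineering signals.

    Thresholds (3+ payments, 1.5x ramp) are illustrative assumptions.
    """
    tags = []

    # Several payments to a brand-new payee, with values ramping up over time
    to_new = [p for p in payments if p.payee_id not in known_payees]
    amounts = [p.amount for p in to_new]
    if (len(amounts) >= 3 and amounts == sorted(amounts)
            and amounts[-1] >= ramp_factor * amounts[0]):
        tags.append("social_engineering:romance:ramping_new_payee")

    # Out-of-pattern behaviour: an unfamiliar device
    if any(p.device_id not in usual_devices for p in payments):
        tags.append("social_engineering:romance:unusual_device")

    return tags
```

Note that the event itself stays classified as an APP scam; the function only attaches vector tags that downstream systems can use to pause or escalate.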

2. Use the receiving bank’s vantage point 

Fraudsters need an account to receive funds. Strengthen beneficiary-side analytics: 

  • First-use risk on new beneficiary accounts. 

  • Mule clustering (multiple unrelated senders funnelled to one account). 

  • Fan-in patterns that don’t match the stated business activity. 

  • Fast internal routes to freeze and repatriate where lawful and appropriate. 

This shares the load; it’s not just on the sending bank to spot the problem. 
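The fan-in signal can be approximated with a simple count of distinct, unrelated senders funnelling into one receiving account. A minimal sketch, where the transfer tuple shape and the five-sender threshold are assumptions:

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def fan_in(transfers: Iterable[Tuple[str, str, float]]) -> Dict[str, Tuple[int, float]]:
    """Map each beneficiary to (distinct sender count, total received).

    transfers: (sender_id, beneficiary_id, amount) tuples.
    """
    senders = defaultdict(set)
    totals = defaultdict(float)
    for sender, beneficiary, amount in transfers:
        senders[beneficiary].add(sender)
        totals[beneficiary] += amount
    return {b: (len(s), totals[b]) for b, s in senders.items()}

def mule_candidates(transfers, min_senders: int = 5) -> List[str]:
    """Flag accounts receiving from unusually many unrelated senders."""
    return [b for b, (count, _) in fan_in(transfers).items()
            if count >= min_senders]
```

In practice the threshold would be calibrated against the account’s stated business activity, so a genuine retailer with many payers is not swept up with mule accounts.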

3. Build “break-the-spell” moments into the journey 

When triggers fire, add friction that helps customers think twice, not just generic warnings: 

  • A short pause before the payment completes, with a clearer explainer (“Common romance scam signs: does this look familiar?”). 

  • An alternative-channel check-in (brief call or secure-message confirmation). 

  • If a customer declines to talk, apply graduated limits until a second factor is satisfied. 
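The graduated-limits idea can be sketched as a simple policy function: each unresolved trigger tightens the per-payment cap until the customer completes an alternative-channel check. The halving rule here is an illustrative assumption, not a recommended calibration.

```python
def payment_cap(base_limit: float,
                unresolved_triggers: int,
                second_factor_ok: bool) -> float:
    """Return the per-payment cap under graduated limits.

    If the customer completes an alternative-channel check (brief call or
    secure-message confirmation), the normal limit applies; otherwise each
    unresolved risk trigger halves the cap (halving is an assumption).
    """
    if second_factor_ok or unresolved_triggers == 0:
        return base_limit
    return base_limit / (2 ** unresolved_triggers)
```

The point of the shape is that friction scales with risk and disappears as soon as the customer engages, rather than blocking everyone with a blanket limit.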

4. Measure what actually matters to customers 

Move beyond raw fraud losses. Track: 

  • Value prevented before send (because that’s the harm that never reached the customer). 

  • Time-to-intervention from the first risky signal. 

  • Re-victimisation within 90 days (are people falling twice?). 

  • Recovery from the receiving bank (if funds can be pulled back quickly). 

These metrics speak directly to customer outcomes and accountability. 
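A minimal sketch of those metrics computed over a set of case records. The `Case` fields (`blocked_value`, `first_signal`, and so on) are hypothetical names chosen for illustration:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import List, Optional

@dataclass
class Case:
    blocked_value: float                # value stopped before send
    first_signal: datetime              # first risky signal observed
    intervened_at: Optional[datetime]   # when the firm intervened, if at all
    revictimised_within_90d: bool       # did the customer fall twice?

def harm_metrics(cases: List[Case]) -> dict:
    """Customer-outcome metrics, rather than raw fraud losses."""
    tti_hours = [(c.intervened_at - c.first_signal).total_seconds() / 3600
                 for c in cases if c.intervened_at is not None]
    return {
        "value_prevented": sum(c.blocked_value for c in cases),
        "median_time_to_intervention_h": median(tti_hours) if tti_hours else None,
        "revictimisation_rate": sum(c.revictimised_within_90d for c in cases) / len(cases),
    }
```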

Why our BWRA handles this better 

Traditional BWRAs catalogue types of fraud or financial crime; they are slow to recognise how those types adapt. Our method is different: 

  • It models the mechanism, not just the label, so “romance” becomes a social-engineering vector against a known payment or investment scam. 

  • It pinpoints the process steps you can redesign (initiation, confirmation, beneficiary triage, on-ramp to crypto). 

  • It builds in measurement that shows real-world harm is going down. 

In practice, that means you’re not rewriting your risk framework every time criminals change the story. You’re strengthening the same control points they keep trying to exploit. 

Closing thought 

Romance fraud hits the headlines because it preys on trust. The right response isn’t a new category; it’s a clear view of the mechanism and controls that disrupt it. Model the event as actor → act → process → outcome → victim/beneficiary, tag the social-engineering vector, and design the journey so the right frictions and recovery routes kick in at the right time. 

That’s how we move from describing the problem to reducing the harm. 
