Artificial intelligence will always be a “work in progress” in fighting money laundering and won’t entirely replace humans and their intuition, a top UK regulator said.
“Any bank hoping for a black box in the corner that will sniff out the launderers will be disappointed,” according to Rob Gruppetta, head of the financial crime unit at the UK Financial Conduct Authority (FCA).
“But the technology has the capability to better achieve what we all want: keeping finance clean,” Gruppetta said last week during a talk in London.
British banks spend £5 billion ($6.7 billion) every year combating financial crime, he said. A big chunk of that is spent on detecting suspicious activity and reporting it to the authorities.
With those resources, shouldn’t artificial intelligence eventually reach a level of effectiveness that’s better than humans?
According to Gruppetta, there are “non-technological challenges” that hold back the use of machine learning to fight money laundering.
The biggest problem is the quality of information. Learning systems depend on feedback. But banks complain they hear little from police after filing a suspicious activity report, Gruppetta said.
Without better feedback, banks can’t train machines to spot the worst cases of actual or suspected money laundering.
To fix that, banks and law enforcement agencies are cooperating more, Gruppetta said. And that cooperation “may provide new fodder for the machines to chew on.”
Another obstacle is that each bank stands alone and only sees its own part of a transaction. Having just “one piece of a jigsaw” impairs a bank’s ability to train its machines to detect criminal funds, Gruppetta said.
Banks aren’t at fault. Laws require them to protect customers and not share their data. The UK’s new Criminal Finances Act creates a legal framework for information sharing, Gruppetta said. How much that will help banks create smarter machines remains to be seen.
What’s the future for AI versus money laundering?
Machine learning should sit alongside human decision-making, Gruppetta said. “We see it as complementing, not replacing, human judgment.”
Banks should continually refine their feedback processes, he said. Predictions that produced false positives (or false negatives), for example, should be used to refine the models.
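To make the idea concrete, here is a minimal, hypothetical sketch of such a feedback loop. Nothing here reflects any real bank's system; the function name, the feedback labels, and the score threshold are all illustrative assumptions. The point is simply that confirmed false positives and false negatives can be fed back to adjust how sensitive the model's alerting is:

```python
# Hypothetical sketch of a feedback loop for an AML alert threshold.
# All names and values are illustrative, not from any real system.

def retune_threshold(threshold, feedback, step=0.01):
    """Nudge an alert score threshold using investigator outcomes.

    feedback: list of (score, outcome) pairs, where outcome is
    "false_positive" (the alert turned out benign) or
    "false_negative" (a missed case later surfaced, e.g. via police).
    """
    for score, outcome in feedback:
        if outcome == "false_positive" and score >= threshold:
            threshold += step  # too many benign alerts: be stricter
        elif outcome == "false_negative" and score < threshold:
            threshold -= step  # missed real activity: be more sensitive
    # keep the threshold a valid probability
    return min(max(threshold, 0.0), 1.0)
```

In practice a bank would retrain the underlying model rather than only shift a threshold, but even this toy version shows why the feedback Gruppetta describes matters: without labeled outcomes from law enforcement, there is nothing to drive the loop.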
Ultimately, software deals in probabilities, not absolutes, Gruppetta said. So final decisions at banks about passing information to law enforcement will require human judgment.
But smart machines can “direct the humans to the cases of most interest,” he said.
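That division of labor can be sketched in a few lines. This is a hypothetical illustration, not any vendor's or bank's implementation: the model emits probabilities, and the software's only job is to rank cases so human investigators see the highest-scoring ones first:

```python
# Hypothetical triage sketch: the model scores, humans decide.
# Case labels and scores are made up for illustration.

def triage(cases, scores, top_n=3):
    """Rank cases by model score, highest first, and return the
    top_n that should go to human investigators."""
    ranked = sorted(zip(scores, cases), reverse=True)
    return [case for score, case in ranked[:top_n]]
```

For example, `triage(["a", "b", "c", "d"], [0.2, 0.9, 0.5, 0.7], top_n=2)` surfaces the two highest-scoring cases for review. The machine never files a report on its own; it orders the queue.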
Rob Gruppetta’s full speech to the FinTech Innovation in AML and Digital ID regional event in London on December 6, 2017 is here.
Richard L. Cassin is the publisher and editor of the FCPA Blog.