Artificial intelligence and machine learning technologies are expected to bring massive operational efficiencies to compliance and ethics program management tasks. But what exactly is behind these buzzwords?
A more holistic view leads me to the conclusion that what machine technology can do is less interesting than how it should be taught to do it. Inevitably, it all comes back to questions of ethics. Human ethics.
First, let’s be clear on terminology. In a compliance context, Artificial Intelligence (AI) is computer software that can quickly analyze large amounts of data and use correlations and patterns within that data to draw pre-programmed conclusions (such as raising a red flag). In other words, AI involves machines that can make decisions normally made by humans.
And just like humans, AI needs to learn to make those decisions. With the help of machine learning (ML), the technology can learn on its own. Instead of relying on fresh inputs of programming code with specific instructions for a particular task, ML “trains” an algorithm so that it learns how to accomplish the task itself.
The learning can be supervised or unsupervised. In supervised learning, the data, the goal, and the expected output are all provided to the AI, which works out how to get from the data to the expected results; in unsupervised learning, the machine is given only the data and the goal. In simplified terms, the more data the AI receives, the better it can adjust itself and improve its algorithm.
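To make the distinction concrete, here is a minimal sketch in Python, assuming the scikit-learn library and synthetic data standing in for real transaction records; the features and sizes are illustrative, not a production model:

```python
# A minimal sketch contrasting supervised and unsupervised learning
# on synthetic "transaction" data. Features and labels are illustrative
# assumptions, not a real compliance dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Two illustrative features per transaction: amount and hour of day.
X = np.column_stack([
    rng.lognormal(mean=6, sigma=1, size=1000),  # transaction amount
    rng.integers(0, 24, size=1000),             # hour of day
])

# Supervised: data + goal + expected output (historical analyst labels).
y = rng.integers(0, 2, size=1000)  # 1 = previously confirmed suspicious
clf = LogisticRegression().fit(X, y)
print("supervised risk score:", clf.predict_proba(X[:1])[0, 1])

# Unsupervised: data + goal only; the model infers what "unusual" means.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
print("unsupervised anomaly flag:", iso.predict(X[:1])[0])  # -1 = anomaly
```

In the supervised case the model learns from analysts’ past verdicts; in the unsupervised case it must discover the structure of “normal” transactions by itself.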
AI implications for compliance

Today AI is being used mostly in the financial services sector because of the large number of daily transactions that need to be screened for certain parameters. However, as AI becomes more advanced, it could find applications in virtually any industry.
Where traditional rule-based monitoring looks for very specific red-flag transactions, AI can filter out false positives by identifying benign explanations for them. Such alerts are normally ruled out in the course of an analyst’s investigation; now AI can handle this task on its own, identifying patterns and correlations too complex to be picked up by traditional anomaly-detection approaches. For example, PayPal has already cut its AML/fraud false-alarm rate in half by using a homegrown AI monitoring system.
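To illustrate the contrast, here is a simplified Python sketch; the $10,000 threshold, the context features, and the training data are all illustrative assumptions (this is not a description of PayPal’s actual system):

```python
# Simplified contrast between a hard-coded rule and a learned filter.
# The threshold, features, and training data are illustrative only.
from sklearn.ensemble import RandomForestClassifier

def rule_based_flag(amount: float) -> bool:
    """Traditional monitoring: flag every transaction above a fixed threshold."""
    return amount > 10_000

# Historical alerts reviewed by analysts: context features plus the verdict.
# Features: [amount, customer_tenure_years, deviation_from_usual_spend]
reviewed_alerts = [
    [12_000, 8.0, 0.1],   # long-tenured customer, typical spend -> benign
    [15_000, 0.2, 5.0],   # new account, wildly atypical -> suspicious
    [11_000, 6.5, 0.3],
    [14_000, 0.1, 4.2],
]
labels = [0, 1, 0, 1]  # analyst verdicts: 0 = false positive, 1 = genuine

model = RandomForestClassifier(random_state=0).fit(reviewed_alerts, labels)

# A new alert the rule would raise; the model uses context to triage it.
alert = [13_000, 7.0, 0.2]
print("rule flags it:", rule_based_flag(alert[0]))         # True
print("model risk:", model.predict_proba([alert])[0][1])   # likely low -> benign
```

The rule fires on the amount alone; the learned filter weighs the same alert against context that an analyst would otherwise have to gather by hand.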
While the opportunities are considerable, from a compliance perspective the technology is most relevant to monitoring transactions for potential criminal activity in the following fields:
Sanctions screening and AML transaction monitoring: screening against OFAC and other international sanctions and watch lists of politically exposed persons and businesses, as well as other bad actors associated with organized crime (a simple name-matching sketch follows this list).
Due diligence and KYC controls: assessing and vetting third parties at the onboarding stage, or monitoring for unethical behavior on an ongoing basis. This includes detecting links between accounts, customers, and related parties; unpacking beneficial ownership, legal entity structures, and directorships; and identifying relevant negative news, financial health scores, and legal filings to fully understand the risk posed by the third party.
Fraud identification: constantly monitoring behavior and cutting down false positives by adding context and other data, such as usage patterns, IP addresses, geolocation tags, phone numbers, etc.
Anti-bribery and corruption (ABC) controls: analyzing multiple sources of information, including accounting ledgers, expense reports and their corresponding receipts and invoices, emails, phone calls, and text messages, to detect payments or expense payouts that are questionable in terms of their timing, nature, regularity, and/or amounts.
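To give a flavor of the screening task in the first item above, here is a minimal fuzzy name-matching sketch using only Python’s standard library; the watch-list names and the similarity threshold are made up, and real screening engines add transliteration, alias resolution, and identifier checks:

```python
# Minimal fuzzy name-screening sketch using only the standard library.
# Watch-list names and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

WATCH_LIST = ["Ivan Petrov", "Acme Trading FZE", "Jean-Luc Moreau"]

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watch-list entries whose similarity to `name` exceeds the threshold."""
    hits = []
    for entry in WATCH_LIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A slightly misspelled name still matches; exact string comparison would miss it.
print(screen("Ivan Petrof"))  # [('Ivan Petrov', 0.91)]
```

The point of tolerating near-matches is exactly the false-positive trade-off discussed above: a looser threshold catches evasive misspellings but generates more alerts for analysts (or an AI layer) to triage.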
___
Even with the enhancements AI solutions can bring to compliance and ethics programs, it would be naïve to assume that AI technology on its own can change an organization’s culture into a more ethical one. Rather, AI’s emergence provides a new perspective on existing ethical challenges.
The 50-year-old trolley dilemma is discussed today in the context of self-driving cars. The original thought experiment asked whether it would be ethical to hit a switch that diverts a runaway trolley onto a side track, where it would kill one workman instead of the five on the main track.
Today this philosophical problem requires a practical solution: car manufacturers need to “teach” AI how to behave whenever some form of collision is inevitable. Who should be given priority? The passengers, the pedestrians, or neither?
When we speak of AI in the context of compliance, we generally mean a much simpler set of technologies than automated cars. AI is not expected to replace human decision-making in compliance, at least not yet. Based on its data analysis, the machine only advises a human operator on the course of action to take.
However, the trolley dilemma is relevant to understanding that the ways machines will assist humans in both ordinary and more sophisticated tasks will depend on our own ethicality. In companies where ethics is not just an “e-word,” there will be far less incentive to manipulate technology, to build technologies that stick only to the bare compliance minimum, or, where legally permitted, to evade regulation. The behavior of AI will be a reflection of our own values and behaviors, and that is why the underlying ethical culture is key.
In its report, “Top 9 ethical issues in artificial intelligence,” the World Economic Forum named unemployment as the primary concern people have about AI. There are plenty of scary headlines about robots replacing most human jobs — making today’s jobs automated or obsolete. But it’s far too early, I think, to talk about AI causing compliance unemployment. There is still much work to be done by humans on the ethical side of the job.
____
Vera Cherepanova, FCCA, CIA, MSc, has more than 10 years’ experience as a compliance officer. She’s currently a self-employed ethics and compliance consultant based in Milan, Italy. She speaks English, French, Italian, and Russian. She can be contacted here.
2 Comments
This is a fascinating topic. I currently work in the financial services world and we are working on trying to automate as many manual processes as possible. As you've said, though, pretty much all of our 'decisions' are still made by a human being. The automation aspect simply helps us filter out the 'noise' and get us the list of decisions we have to make.
What are some of the biggest challenges you see for AI technology in the compliance world going forward?
Hello Randall,
Thank you for your comment. This is a great topic indeed. Besides the challenges connected with the ethical dilemmas I covered in the article, there are at least two other perspectives to consider.
From a technical standpoint, model selection, validation, training, and re-training are all quite challenging, in terms of a) making sure that the data provided is sufficient, and b) ensuring that the model produces optimal results. From a regulatory perspective, the primary challenge is building adequate competencies to regulate and audit the AI systems that companies develop and put into operation.
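To illustrate the validation point with a toy sketch (assuming scikit-learn, with random placeholder data), cross-validation is one standard way to check whether a model’s results hold beyond its training data:

```python
# Toy illustration of the validation challenge: cross-validation gives an
# honest estimate of whether the model generalizes beyond its training data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                          # placeholder features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # placeholder labels

scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print("fold accuracies:", scores.round(2))  # unstable scores suggest insufficient data
```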