Is artificial intelligence the biggest blessing or the biggest challenge of our age? Considering the relentless proliferation of legal requirements in nearly every jurisdiction of the world, the question is more relevant than ever for compliance professionals.
Today’s compliance departments must contend with an incredibly complex web of laws and regulations, detect red flags among thousands of third parties, and screen all debtors and creditors against sanctions lists. Banks and financial institutions face additional anti-money-laundering (AML) and know-your-customer (KYC) requirements, including screening customers before opening accounts or processing substantial transactions. The scale of the underlying legal landscape is striking: in the European Union alone, 521 legislative acts and 1,446 non-legislative acts (including delegated and implementing acts) were adopted in 2019.
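At its core, sanctions screening is a name-matching problem: counterparty names rarely match a list entry exactly, so screening tools look for near matches that a compliance officer can then review. A minimal illustrative sketch using only Python's standard library (the list entries and the similarity threshold below are invented for illustration, not taken from any real sanctions list):

```python
from difflib import SequenceMatcher

# Toy sanctions list -- invented names, for illustration only.
SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading LLC", "Global Arms Co"]

def screen(counterparty: str, threshold: float = 0.85) -> list[str]:
    """Return list entries whose similarity to the counterparty name
    meets the threshold, flagging them for human review."""
    hits = []
    for name in SANCTIONS_LIST:
        ratio = SequenceMatcher(None, counterparty.lower(), name.lower()).ratio()
        if ratio >= threshold:
            hits.append(name)
    return hits

print(screen("Ivan Petrow"))  # a near-match (one-letter variation) is flagged
print(screen("Jane Doe"))     # an unrelated name passes cleanly
```

Production systems use far more sophisticated matching (phonetic algorithms, transliteration handling, entity resolution), but the principle is the same: tolerate spelling variation while keeping a human in the loop for every hit.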
Tracking legal changes, maintaining master data on thousands of third parties, and detecting and preventing unauthorized financial transactions have become almost impossible without automated processes and AI applications. After the GDPR became applicable in May 2018, certain AI applications were able to find the data privacy clauses in contracts that needed to be amended, reducing the manual workload significantly. AI can also play an important role in compliance risk assessment and predictive analytics, where tens of thousands of data inputs must be analyzed and translated into action items for compliance managers and valuable management information for business leaders. Companies are also increasingly applying natural language processing (NLP) software to identify relevant compliance-related court cases and newly adopted laws.
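The clause-finding idea can be illustrated with a deliberately simple sketch: split a contract into clauses and flag those containing data-privacy language. The trigger phrases and the sample contract text below are invented for illustration; a real system would use a trained NLP model rather than a hand-written keyword list.

```python
import re

# Illustrative trigger phrases only -- a production tool would rely on
# a trained NLP model, not a fixed keyword list.
PRIVACY_PATTERNS = [
    r"personal data",
    r"data protection",
    r"data subject",
]

def flag_privacy_clauses(contract_text: str) -> list[str]:
    """Split a contract into clauses (blank-line separated) and return
    those mentioning data-privacy language for legal review."""
    clauses = [c.strip() for c in contract_text.split("\n\n") if c.strip()]
    pattern = re.compile("|".join(PRIVACY_PATTERNS), re.IGNORECASE)
    return [c for c in clauses if pattern.search(c)]

contract = """The Supplier shall deliver the goods within 30 days.

The Supplier shall process Personal Data only on documented
instructions from the Controller.

Payment is due within 14 days of invoicing."""

for clause in flag_privacy_clauses(contract):
    print(clause)  # only the data-processing clause is returned
```

Even this toy version shows the value proposition: instead of a lawyer reading every clause of every contract, the tool narrows review to the handful of clauses that plausibly need a GDPR-driven amendment.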
Nevertheless, like every new technology, AI poses new risks of its own. More powerful AI tools do not automatically produce better outcomes. For example, in 2018 Amazon reportedly had to abandon an AI recruiting engine because the tool had “learned” to discriminate against women: trained on the résumés submitted to the company over the previous ten years, most of which came from men, it inferred that male candidates were preferable. Cases like this fueled the concept of algorithmic accountability and helped trigger the introduction in the U.S. Congress of the Algorithmic Accountability Act of 2019. The bill, still pending in Congress, would require large companies to conduct impact assessments of their high-risk automated decision systems.
To protect the rights of individuals in the age of AI, the OECD has attempted to establish global guidelines. The OECD Principles on Artificial Intelligence – backed by the European Commission, the Council of Europe, and the G20 Leaders’ Summit in Japan – were adopted in 2019 by the 36 OECD member countries along with Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania.
For the first time, these principles establish that AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society. There should be transparency and responsible disclosure around AI systems so that people can understand and challenge their outcomes. AI systems must function in a robust, secure, and safe way throughout their lifetimes, and potential risks should be continually assessed and managed. Finally, the organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with these principles.
Initiatives like the pending U.S. algorithmic accountability bill and the already adopted OECD principles are a clear sign that lawmakers have begun to step into the new field of artificial intelligence and give it legal shape.
What kind of future awaits compliance officers in this new AI world? Should we be worried that self-learning machines threaten our careers? Not at all. The new paradigm will need human judgment more than ever. To make an impact in this future, we have to complement our traditional skill sets with knowledge of computer science, data analytics, risk assessment, workflow management, and software testing, among other fields, so that we can harness the benefits of AI while mitigating its potential risks and machine errors.
We need the skills to identify the areas of compliance where a self-learning algorithm can supplement humans – and the areas where it cannot. For example, an internal investigator will always be needed to uncover potential fraud or harassment within a company, because the trust necessary for a successful investigatory interview can only be built between human beings. The biggest priority for compliance professionals in this new era, therefore, is to learn to cooperate and co-exist with AI in accordance with our human values.
By learning to benefit from the advantages of AI, we will be remembered as those who helped shape a digitally advanced future of compliance.