

Dan Adamson on AI: Due diligence needs more human judgment, not less

Just 15 years ago, due diligence researchers might spend hours retrieving documents from courthouses and sifting through microfiche at libraries. As recently as 2010, they could go through six to 12 months of training just to learn how to create a “level 1” report. 

Today, technology can handle those tasks in minutes or seconds, and researchers might wonder about their future in an industry that seems ready to replace humans with machines.

But machines can’t replace humans, and the industry shouldn’t try. The better approach is to combine humans with machines, augmenting human intelligence, not replacing it.

The chess world learned this lesson after humans and machines traded the title of world’s greatest chess player in the 1980s and 1990s, in contests billed as the ultimate “man versus machine” competition. It then discovered that a decent human player assisted by a computer outperforms either a grandmaster or a supercomputer playing alone. Today, Garry Kasparov, the world chess champion who famously lost to a computer in 1997, writes of learning to love intelligent machines, seeing them not as rivals but as a boon that extends human capabilities.

In current state-of-the-art due diligence, the computer begins research the same way a human would: by making a positive identification. With just a few basic biographical details, software matches an entity with an official profile in a government or national identifier database that includes the subject’s name, current and past addresses, work history, and social security number.

The machine can then use secondary sources to whittle down the possibilities, cross-referencing open-source intelligence sites and premium sources (as subscriptions allow). It can execute searches across multiple permutations of a name, retrieving, for example, results for Glaxo Smith Kline, GlaxoSmithKline, and GSK.
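The permutation step described above can be sketched in a few lines. This is a hypothetical illustration only — the variant rules, the similarity threshold, and the use of `difflib` are assumptions for the sketch, not Exiger’s actual matching logic:

```python
# Hypothetical sketch of name-permutation matching. The variant rules
# and the 0.85 similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

def name_variants(name: str) -> set[str]:
    """Generate simple permutations of a multi-word name."""
    words = name.split()
    variants = {name, "".join(words)}  # "Glaxo Smith Kline", "GlaxoSmithKline"
    if len(words) > 1:
        variants.add("".join(w[0] for w in words))  # acronym: "GSK"
    return {v.lower() for v in variants}

def is_match(candidate: str, subject: str, threshold: float = 0.85) -> bool:
    """Treat a record as a hit if it resembles any variant of the subject."""
    cand = candidate.lower()
    return any(
        cand == v or SequenceMatcher(None, cand, v).ratio() >= threshold
        for v in name_variants(subject)
    )

print(is_match("GlaxoSmithKline", "Glaxo Smith Kline"))  # True
print(is_match("Pfizer", "Glaxo Smith Kline"))           # False
```

A production system would layer on transliteration, legal-suffix stripping, and address or identifier corroboration before calling anything a positive identification.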

With artificial intelligence, machines can go beyond just aggregating data to help compliance teams extract the right information out of millions of available pages, interpret it correctly and process it quickly. Today’s technology can understand different human languages, follow leads and create concise, auditable reports within minutes to highlight potential issues while eliminating the vast majority of false positives.

Banks in particular need these capabilities. With more than $1 trillion of illicit financing moving through the financial system each year, an average of 200 regulatory changes to track globally each day, and fines for noncompliance totaling $321 billion since 2008, the case for adding AI to due diligence is urgent.

But technology can’t do everything. No machine today can gain true insight into the complexities of what any human being is thinking or has thought. IBM’s Watson doesn’t learn while it’s in a chess game and start reacting and changing its algorithms, despite what its marketers tell you. And self-driving cars don’t learn to think for themselves and start rolling through stop signs as a way to get better gas mileage. Machines can’t think creatively and won’t any time soon, so today’s solutions still need humans to make the ultimate call.

Teamed up, humans and machines can quickly sift through a complex web of interrelated sources to find the right balance between comprehensive, forensic analysis and practical, cost-effective results. The cognitive computing platform does the time-consuming job of sifting through masses of available information, chasing down leads, marking off obvious false positives and flagging any possible risk issues so that the human team can make better decisions.
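The division of labor described above — the machine discards obvious false positives and surfaces ranked risk flags, while the human makes the call — can be sketched minimally. The record fields, scores, and cutoff here are invented for illustration, not any real screening platform:

```python
# Minimal sketch of machine triage feeding human review.
# Sources, summaries, risk scores, and the cutoff are assumptions.
from dataclasses import dataclass

@dataclass
class Hit:
    source: str
    summary: str
    risk_score: float  # 0.0 (obvious false positive) to 1.0 (clear risk)

def machine_triage(hits: list[Hit], cutoff: float = 0.3) -> list[Hit]:
    """Discard obvious false positives; rank the rest for human review."""
    return sorted(
        (h for h in hits if h.risk_score >= cutoff),
        key=lambda h: h.risk_score,
        reverse=True,
    )

hits = [
    Hit("news", "Same-name individual, different birth year", 0.1),
    Hit("sanctions list", "Close name and address match", 0.9),
    Hit("court records", "Subject named in a civil suit", 0.6),
]
for h in machine_triage(hits):  # only these reach the human analyst
    print(f"{h.risk_score:.1f}  {h.source}: {h.summary}")
```

The point of the design is that the cutoff filters volume, not judgment: everything above it still lands in front of a person.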

The future of due diligence asks more of humans, not less. Researchers will need even better training as their work grows increasingly sophisticated. They’ll spend more time thinking and less time gathering and reading information that software can easily read and discard. They’ll focus almost entirely on identifying and understanding the events that pose genuine risk.

The due diligence industry needs all the human intelligence it can find — just not applied to mundane tasks. When it comes to fighting the boundless criminal creativity of what may be the world’s most profitable industry, human thought has never mattered more.

____

Dan Adamson is the President, DDIQ & Global Head of Cognitive Computing at Exiger, the global regulatory, financial crime, risk and compliance company. Exiger recently acquired OutsideIQ, a company founded and led by Adamson that develops innovative cognitive computing solutions for automated due diligence, helping organizations grapple with compliance mandates and business risk.


1 Comment

  1. Hello Dan,

    thanks for the interesting article! For my book “Access Granted – Tomorrow’s Business Ethics” I elaborated a job profile for the new position of an Artificial Intelligence Compliance Officer (AICO), a human employee whose target groups are not only the AI software but also its programmers. It covers AI inside the company’s products and solutions, but also internal AI taking the role of an artificial employee. Such a robot or intelligent software is less comparable to the humanized “C-3PO” or “R2-D2” and better understood as the “T-1000” from the Terminator films: a machine that could be created for any purpose, to terminate or to protect humans. Similar to a predator, neither good nor bad, but simply fulfilling its purpose in nature.

    The basis for this new job function is the academic and industry working group “Fairness, Accountability and Transparency in Machine Learning” (FATML), which seeks to understand and ensure that machine learning and its decision making stay grounded in values and ethics. Following these principles, the AICO has to foster and ensure responsibility, explainability, accuracy, auditability and fairness.

    Similar to today’s Compliance Officer, who is responsible for human employees only, the AICO has to establish him- or herself not only as a technical and legal expert but also as a trusted colleague and advisor. This concept explains why the Ethics & Compliance department can be enriched by AI, but not replaced. A brave new world awaits us!

    Thanks and best regards,

    Patrick

