A step forward in reforming a global banking system that harbors criminals
The FinCEN Files didn’t pull back the curtain on crime within our banking system; it pulled back the curtain on the system enabling crime to thrive. This week, Congress approved legislation requiring U.S. companies to register the identities of their owners — a huge step in anti-money laundering/countering the financing of terrorism (AML/CFT) reform.
But our AML/CFT system needs much more work. Specifically, government regulators must share feedback with banks and regulatory technology (“RegTech”) companies, not just take data. This shift to two-way communication will create a community of professionals supporting the United States’ national security and financial integrity goals.
Criminals are unafraid to use the most trusted global banks to launder, move and store ill-gotten proceeds because our AML/CFT system is built on a flawed model. It assumes the effectiveness of bank compliance can be measured by the quantity of Suspicious Activity Reports (SARs) filed with the Financial Crimes Enforcement Network (FinCEN), not by whether those reports actually lead to any meaningful result.
SARs provide important data to government agencies. But it is time to measure effectiveness by how well that data deters criminal behavior, not just by the quantity of data shared. If no more than 5 percent of SARs filed actually represent illicit activity, a fair estimate, then the people and accounts reported in the other 95 percent are investigated without good reason and without public benefit such as improved safety and security. True effectiveness and progress can only be measured with an approach that considers both inputs (SARs and other data shared with the government) and outputs (disrupted terrorist financing networks, arrests of criminals and the like).
The dramatic evolution in technology over the past decade, specifically today’s easy access to Artificial Intelligence (AI) and Machine Learning (ML), can help accomplish this. With AI/ML, an institution can screen faster and more efficiently, with technology ranking and prioritizing entities based upon likely threat. By improving effectiveness and efficiency, fewer innocent people will have their privacy disturbed, and the rest of us will wake up in a world with less financial crime.
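To make the idea concrete, here is a minimal, illustrative sketch of threat-based prioritization: score each entity on a few risk signals and review the highest-scoring ones first. The feature names, weights and accounts below are invented for illustration; a production system would learn its weights from data rather than hard-code them.

```python
# Hypothetical example: rank entities by estimated threat so that
# investigators spend their time on the likeliest cases first.
# All feature names, weights and account IDs are invented.

def risk_score(features, weights):
    """Simple linear score: higher means higher estimated threat."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

WEIGHTS = {  # stand-ins for weights a trained model would produce
    "rapid_movement": 0.9,
    "high_risk_jurisdiction": 0.7,
    "structuring_pattern": 1.2,
}

entities = {
    "acct_001": {"rapid_movement": 1.0, "structuring_pattern": 1.0},
    "acct_002": {"high_risk_jurisdiction": 1.0},
    "acct_003": {},
}

# Sort accounts from highest to lowest estimated threat.
ranked = sorted(entities, key=lambda e: risk_score(entities[e], WEIGHTS),
                reverse=True)
```

In this toy run, the account showing both rapid movement and a structuring pattern outranks the others, so it would reach a human analyst first instead of sitting in an undifferentiated queue of alerts.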
This approach requires providing financial institutions with the necessary training data. ML and AI systems are inductive by nature, meaning they require examples in order to learn. But banks don’t need to share customer information, such as names, addresses or other Personally Identifiable Information (PII), to provide each other with that training data. Law enforcement, likewise, can collaborate with financial institutions without sharing sensitive case information. Instead, bankers, regulators and law enforcement agents can share algorithms built on the data they have. Technology allows models to be federated and transferred while preserving privacy (known as “federated learning” and “transfer learning”), which means institutions receive the benefits of collaboration without the drawbacks.
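The federated-learning idea above can be sketched in a few lines: each institution improves a model on its own private data and shares only the resulting model weights, which a coordinator averages (a FedAvg-style scheme). The banks, gradients and learning rate below are invented for illustration, not drawn from any real system.

```python
# Minimal federated-averaging sketch. Only model weights cross
# institutional boundaries; customer records never leave each bank.
# All numbers here are made up for demonstration.

def local_update(weights, gradient, lr=0.1):
    """One gradient step; in practice, computed on a bank's own data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(weight_sets):
    """Coordinator averages the weights contributed by each institution."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

global_model = [0.0, 0.0]  # shared starting model

# Each bank derives a gradient from its private transactions
# (the values are placeholders, not real data).
bank_gradients = [[0.2, -0.4], [0.6, 0.0], [0.1, -0.2]]

local_models = [local_update(global_model, g) for g in bank_gradients]
global_model = federated_average(local_models)
```

The coordinator ends up with an improved shared model, yet no participant ever saw another bank’s customers, which is the privacy property the collaboration depends on.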
I have built two companies on these models, and the time is now. In the ideal reporting system, screening and continuous vetting are the priority; algorithms are shared collaboratively between institutions; and law enforcement, regulators and banks share an electronic, machine-learning-driven feedback loop. Instead of relying on a high quantity of low-quality data shared with the government, the system of the future prioritizes the right data to generate the best results, and effectiveness is measured by how well that data deters criminal behavior.
Gary M. Shiffman, Ph.D. is the author of “The Economics of Violence: How Behavioral Science Can Transform our View of Crime, Insurgency, and Terrorism.” He teaches economic science and national security at Georgetown University and is the creator of Dozer and GOST.