Out with the old, in with the new: Banks must update security systems

Financial risk has become so broad that it’s nearly impossible to pinpoint one right way to reduce it. It’s dynamic and constantly evolving. 

One thing is certain: the legacy systems that financial institutions have relied upon for the past 40 years to protect against things like fraud and money laundering are no longer sufficient. These systems depend on historical “rules,” implemented by banks to create an alert when signs of a possible threat are detected.

This approach was fine for its day, when cybercrime was young and fairly predictable. But today’s banks face endless attacks from sophisticated hackers who use incredibly advanced fraud and cyber tools to sneak past their defenses and steal customer data and money. 


Rules-based systems have no way to recognize new types of threats, let alone sift through mountains of customer data generated daily to pinpoint suspicious activity. 


So instead, they see a threat in everything, generating an overwhelming number of false positives that create huge bottlenecks and force banks to hire virtual armies of analysts to investigate them all. Even worse, this ‘false positive fatigue’ can actually cause institutions to miss the real threats.

However, there is a solution: mathematical algorithms that organize dynamic models of associations between observations. An algorithm is essentially a set of instructions or steps that sift through a number of possibilities or associations, testing each one against specific inference criteria before arriving at a conclusion.

The concept may sound vague or intimidating, but we use these kinds of algorithms every day, whether we are aware of it or not. In fact, the processes of human memory formation are algorithms! They are necessary for us to achieve efficient awareness and decision-making.

For example, imagine walking in the woods and listening to the birds sing. After a while, you will begin to recognize patterns in their songs, and even be able to associate them with specific locations, weather or seasons.

If you continue going on these walks and listening to the birds, you will eventually be able to determine which type of bird is singing which song. This is exactly how these algorithms work; they recognize complicated patterns, sort them out and organize them into a coherent network of associations.

Another example: if you are driving and your car makes a strange sound that you’ve never heard, it’s easy for you to deduce that something may be wrong. This is because you are unconsciously organizing the sounds and behavior of your car, using your memories as a model.

Once you have enough experience with a particular environment, you can detect when there are anomalies. People do this all the time, but they do it slowly, which is why the speed of machines comes in handy. 

Data changes its shape and texture all the time, and algorithms are designed to flow with that. This is why they are such a perfect tool for detecting anomalies in massive amounts of data. As the complexity of financial business increases through new channels and types of transactions, the systems need to be re-tuned constantly.

Rules-based systems, and the thresholds, parameters and settings that come along with them, cannot adapt to the changes. By the time thousands of parameters are re-tuned and systems are adjusted, the rules created are already stale, rendering the whole process useless.

As such, it is critical that banks enable real-time online systems that can (a) define relevant consistent patterns of associations (in other words, the ‘status quo’), and (b) “connect the dots” to reveal inconsistencies in the data whenever they appear.
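Those two steps can be illustrated with a toy sketch — learn a baseline from historical data, then flag observations that break the established pattern. The numbers, function names and three-standard-deviation threshold below are illustrative assumptions, not any bank’s actual system:

```python
import statistics

def learn_baseline(amounts):
    """Step (a): summarize the 'status quo' of past transaction amounts."""
    return statistics.mean(amounts), statistics.stdev(amounts)

def flag_anomalies(amounts, mean, stdev, threshold=3.0):
    """Step (b): flag transactions that deviate sharply from the baseline."""
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

# Historical transactions establish what 'normal' looks like...
history = [120, 95, 130, 110, 105, 98, 125, 115]
mean, stdev = learn_baseline(history)

# ...so a wildly out-of-pattern new transaction stands out.
new_batch = [118, 102, 9500, 111]
print(flag_anomalies(new_batch, mean, stdev))  # -> [9500]
```

Real systems model far richer patterns of association than a single average, but the principle is the same: the anomaly is defined relative to the learned status quo, not by a hand-written rule.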

Recently, a leading European bank used this kind of anomaly-based detection in an attempt to detect cases of SME lending fraud. It employed algorithms to analyze data from three domains: Chamber of Commerce, Customer Information, and Customer Account Balance & Transactions.

The bank gathered data over a 12-month period and ultimately detected not only five cases of fraud, but also a money laundering threat and nine cases of credit risk. Furthermore, only one data scientist was required for the project, because he wasn’t buried under hundreds of false positive alerts.

In order to truly reduce operational risk, financial institutions must be able to identify and eliminate threats in a timely and effective manner. Rules-based systems can only react to what they know; they cannot detect the constant influx of new threats appearing and are unable to react to data they haven’t previously encountered. Without algorithms in place for anomaly detection, companies will always be one step behind and risk will only increase.

The new generation of algorithm-based machine learning technologies dynamically tunes to data as it is processed. This is a new paradigm in the financial world, one that enables banks to rapidly detect illegal activity, nearly eliminate false positives, and significantly reduce operational risk.


Ronald Coifman is a faculty member at Yale University, and he is one of the world’s leading mathematicians, with over 35 years of research and teaching experience. He is the co-founder of ThetaRay, a cyber security and big data analytics company with headquarters in Hod HaSharon, Israel and offices in New York and Singapore.

The views expressed by contributors are their own and not the views of The Hill.