Insights

Regulators and Banking Industry – Adapting to AI


August 31st, 2020


By Walter Ullon, Sarva Srinivasan

1   Introduction

In Johann Wolfgang von Goethe’s 19th-century retelling of the classic German legend, Dr. Faust is a bored, ambitious protagonist deeply dissatisfied with his pedestrian lot in life. Faust strikes a deal with the demon Mephistopheles who, serving as a proxy for the devil, grants Faust access to his magical powers for a number of years. All the while, Mephistopheles reminds the lustful scholar of the price that must eventually be paid in exchange for boundless knowledge and worldly pleasures: the forfeit of his soul for all eternity [1].

Centuries later, in a stroke of poetic fancy, technology luminary Elon Musk called on us to reconsider the reality of the “Faustian bargain” and its relevance to our times, issuing a stern warning that “with artificial intelligence, we are summoning the demon” [2].

Granted, the demon alluded to by Musk is not some elemental, supernatural being. His reference is to the incredible power of Machine Learning and AI, whose promise carries some of the mystery and danger of the occult: oracle-like “black boxes” that presage events with surprising accuracy and inscrutability.

When technologies become democratized, a universe of possibilities inevitably opens. In recent times we have seen just about every industry sector rely on intelligent algorithms to detect patterns, predict scores, classify events, cut costs, automate processes, and drive up efficiency.

The sober truth, though, is that if we do not prepare for these disruptive technologies and channel them in a controlled manner, we may very well find ourselves on the wrong side of disruption. MIT physicist Max Tegmark put it bluntly: “we invented fire, repeatedly messed up, and then invented the fire extinguisher, fire exit, fire alarm, and fire department” [3].

In the paragraphs below, we explore how the financial industry can collaborate with regulators to adopt the measures and governance practices being proposed. We survey some of the cutting-edge methods employed by the likes of Google to demystify the inner workings of ML models and extract useful explanations from encoded learning. We illustrate use cases and legislative developments seeking to protect the privacy, livelihood, and personhood of citizens in the United States and the European Union. Lastly, we touch upon some of the challenges that lie ahead and hope to show that success and compliance are both attainable by adhering to a straightforward set of guidelines.

Dispelling fears begins with understanding our choices and the nature of the solutions. The capability of the tools now available to us should set us at ease, in the knowledge that we can be ready to tackle the problems that will determine who leads from the front as we enter the new decade.

2   The Problem is... also the Solution? The Role of AI in Regulation

While the causes that led to the 2008 financial crisis are varied, complex, and have been analyzed ad nauseam, expert opinion agrees that it was precipitated by a lack of focus on models, cross-validation, and independent assessment, along with the mispricing of complex Level 2 and Level 3 assets due to poorly understood algorithms. At the peak of their pre-crash influence, these models oversaw movements that accounted for as much as 40% of all trades on the London Stock Exchange, and as much as 80% on some American equity markets [4].

These excesses and uncontrolled risks have since resulted in a myriad of regulations, including the Volcker Rule and Dodd-Frank, which the financial industry has had to adopt.

Now, more than a decade later, in an ironic turn of events, the same SEC that took the brunt of the blame in the crisis that ensued, and whose raison d’être is to stop this sort of thing from happening again, is employing similar algorithms to help detect foul play:

“At the Commission we are currently applying machine learning methods to detect potential market misconduct. Many of the methods are open source and easy to implement for those trained in data science...This freedom has fueled the rapid innovation at the SEC, and I suspect also among your organizations” [6].

Data quality, or the lack thereof, largely drives inaccuracies in the interpretation of outcomes. In a world that produces data at a rate of 2.5 quintillion bytes per day [7], throwing bodies at the problem of data quality is no longer feasible: the problem has become intractable in volume and velocity, and impenetrable in complexity. Clearly, the only way forward is to leverage intelligent algorithms and talent development, coupled with adherence to regulations and guidelines, while keeping ethics and human values in perspective.

This is not to say that the exercise becomes trivial merely by invoking the power of AI, but the benefits seem to outweigh the risks.

3   The Role of Regulation in an AI Driven World

On February 11, 2019, the Office of Science and Technology Policy announced that the White House would be signing an Executive Order, effectively launching the “American AI Initiative” [9].

Among the points outlined in the press release, which included “Investing in AI Research and Development”, “Unleashing AI Resources”, and “Building the AI Workforce”, the directive stressed the importance of formulating adequate mechanisms for the safe adoption and oversight of AI technology.

Under the “Setting AI Governance Standards” section, the initiative concludes:

“As part of the American AI Initiative, Federal agencies will foster public trust in AI systems by establishing guidance for AI development and use across different types of technology and industrial sectors. This guidance will help Federal regulatory agencies develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies. This initiative also calls for the National Institute of Standards and Technology (NIST) to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems” [9].

Taking an earlier, broader stab at the issue, the Board of Governors of the Federal Reserve System released SR 11-7 in April 2011 in an effort to define general guidelines governing model risk management [12].

Taken as a whole, the aforementioned directives echo efforts by European and international agencies to control the widespread adoption of AI in a manner accordant with human rights and democratic values.

On a wider international level, but still concordant with the spirit of the EU’s GDPR, the Organization for Economic Cooperation and Development (OECD) passed a set of “human-centered” principles that were eventually adopted by the G20 in June 2019 [10, 13].

Among these principles, two stand out as especially relevant to the subject under discussion here:

  1. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  2. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

Governments have come to realize that adopting AI is critical and unavoidable if they are to remain competitive and improve the efficiency of their operations. Consequently, there is much collaboration among regulators, governments, and industry participants as they strive to lay down a common playing field for all.

The legislative overtures mentioned previously take a very human-centric approach to the problem of regulating AI, placing a premium on features that make it possible to interpret models and challenge their decisions should they become unaligned with general guidelines, i.e., the “right to an explanation”.

This, of course, entails developing and implementing features that can accurately and transparently distill the model’s encoded “learning” for ingestion and analysis by model risk and compliance officers.

4   The Role of Explainability in Model Governance

When we talk about “model explainability”, we are referring to the family of methods and algorithms that can extract the reasons behind a prediction from a model’s “learning”. Highly accurate models may arrive at their predictions by finely tuning entropy-reducing splits, latching onto important correlations and mappings, and so on. Useful as these mechanisms are, they obscure much of the model’s inner workings and offer little insight on their own.

For so-called “black-box models” such as neural networks, these “explainer” algorithms give us the ability to elucidate many of the hidden relationships encoded in the complex mathematical layers that comprise a fully-trained model.

For instance, libraries such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) have been growing in popularity as data science teams prioritize model explainability as a measure for validating models and ensuring bias-free predictions.

While a detailed technical description of these libraries is beyond the scope of this paper, the main ideas behind them have sound mathematical foundations.

SHAP relies on the principles of game theory: for a given set of inputs, it iterates through all possible combinations of feature values, finding the average marginal contribution of each feature value over all possible coalitions [14].
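
To make this concrete, the following minimal sketch shows how SHAP might be applied to a tree-based model in Python. The dataset, model choice, and feature names are synthetic and purely illustrative, not a prescription for any particular pipeline:

    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative data: 1,000 synthetic records with 5 numeric features.
    X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # The mean absolute SHAP value per feature yields a global importance ranking.
    importance = np.abs(shap_values).mean(axis=0)
    for i in np.argsort(importance)[::-1]:
        print(f"feature_{i}: {importance[i]:.4f}")

Each row of shap_values decomposes a single prediction into per-feature contributions, which is exactly the kind of record a model risk officer can attach to a decision under review.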

LIME, on the other hand, relies on local linear approximations, i.e., building sparse linear models in the vicinity of a prediction to explain the model’s behavior in that subspace [15].
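
Reusing the synthetic model and data from the sketch above, a comparable LIME explanation of a single prediction might look as follows; the number of features reported is an illustrative choice:

    from lime.lime_tabular import LimeTabularExplainer

    # Reuses X and model from the SHAP sketch above.
    lime_explainer = LimeTabularExplainer(
        X,
        feature_names=[f"feature_{i}" for i in range(X.shape[1])],
        mode="regression",
    )

    # Fit a sparse linear model in the neighborhood of one prediction.
    explanation = lime_explainer.explain_instance(X[0], model.predict, num_features=5)
    print(explanation.as_list())  # [(feature condition, local weight), ...]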

Any team seeking to understand its model’s behavior, whether as an exercise in pure data science (quality, consistency, sanity) or from the point of view of model risk officers looking to document compliance with internal and external policies, would be well advised to incorporate these methods into its model pipelines.

For instance, data giants such as Google and European institutions like the Bank of England have already internalized these techniques [16, 17].

In a nutshell, explainable models accomplish five very important tasks:

  1. they build confidence: a model whose workings are well understood and whose outputs have been validated in accordance with original assumptions fosters trust in the end-to-end process of building and deploying AI solutions
  2. they alert us to the presence of bias: model explanations might point to misuses of data that violate the rights of persons in discriminatory ways never intended by the creators (see the sketch after this list)
  3. they improve future models: by understanding how models learn to make their predictions, data science teams can begin improving them by engineering features that augment helpful signals, or by removing noisy ones
  4. they help assess risk: models deployed in novel environments can be susceptible to adversarial attacks, whereby bad actors disrupt systems by identifying a small number of data points that can prompt erratic behavior in the model. Assessing these risks requires an advanced understanding of the model’s robustness and vulnerabilities.
  5. they meet regulatory standards: transparency is a crucial factor in enforcing legal rights surrounding a system and proving that a product meets regulatory standards. In some cases, a lack of explainability can be a nonstarter, as it violates guidelines regarding the right to explanations or other such measures, potentially resulting in liability [18].
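
As an illustration of the second task, a bias alert could be operationalized as a simple screen over SHAP attributions. The protected-attribute names and the review threshold below are hypothetical, chosen only to show the shape of such a check:

    import numpy as np

    PROTECTED = {"age", "gender", "zip_code"}  # hypothetical protected attributes
    THRESHOLD = 0.05                           # hypothetical review threshold

    def bias_alerts(shap_values, feature_names):
        """Flag protected features whose share of total attribution merits review."""
        mean_attr = np.abs(shap_values).mean(axis=0)
        total = mean_attr.sum()
        alerts = []
        for name, attr in zip(feature_names, mean_attr):
            share = attr / total if total else 0.0
            if name in PROTECTED and share > THRESHOLD:
                alerts.append((name, round(share, 4)))
        return alerts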

As we can see, model explainability is a necessity that has of late become synonymous with good internal practices as well as cutting-edge innovation.

Furthermore, for reasons outlined in the previous section, adherence to regulatory standards would be altogether difficult, if not impossible, without relying on these powerful methods and supporting frameworks.

But there are, of course, challenges when it comes to building explainable models. Chief among them is the fact that different use cases might require different forms of explainability. Proper thought and careful design ought to go into these considerations in order to guarantee viable solutions to the problem of explainable AI.

5   The Path to Algo-Adoption

Surely, if Machine Learning and its myriad algorithms are hopelessly complex and irrevocably obscure, then one could make the case that the same holds true for more benign technologies that are staples at financial institutions everywhere. Take, for instance, Excel. Except perhaps for a senior Microsoft software engineer, no user is intimately aware of the inner workings of the software. Despite this, it is employed daily, worldwide, and without question.

Clearly, a healthy period of mistrust is constructive at this stage, as was surely the case with Excel at some point: new results were double- and triple-checked against known results, gradually building confidence in the new technology. It is in this manner that we suggest the implementation of AI algorithms be approached.

First, it is necessary to identify low-risk use cases with sufficient, good-quality data, where implementation teams can perform back-tests against historical records, thereby ascertaining the model’s accuracy with little to no risk of damaging ongoing operations.

Second, once the model has been properly trained and calibrated, the team should perform parallel testing on live data, preferably in a detached environment where sensitive assets would not be affected in the event of a failure or bad prediction. We recommend that this take place over a period of several weeks or months, with periodic re-training of the model, to ensure that it is robust enough to handle novel scenarios.
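
As a hedged sketch of these first two steps, the functions below score a candidate model against historical outcomes and then measure its disagreement with the incumbent process on live data; the accuracy metric and the 2% tolerance are illustrative assumptions, not prescriptions:

    import numpy as np
    from sklearn.metrics import accuracy_score

    def backtest(model, X_hist, y_hist):
        """Step one: score the model against known historical outcomes.

        Assumes a classification-style use case; swap the metric for regression.
        """
        return accuracy_score(y_hist, model.predict(X_hist))

    def parallel_run(model, X_live, incumbent_decisions, tolerance=0.02):
        """Step two: compare model output to the incumbent process on live data.

        Returns the disagreement rate and whether it falls within tolerance.
        """
        preds = model.predict(X_live)
        disagreement = float(np.mean(preds != incumbent_decisions))
        return disagreement, disagreement <= tolerance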

Third, the design for contingencies ought to be undertaken as early as possible, with the purpose of addressing scenarios in which the model does not perform well. These cases should be cataloged and the corrective steps documented well enough that the response is swift and decisive.

But perhaps most important of all is to foster good “data stewardship” programs within the organization. This means ensuring that data is clean, free of errors, and originates from a trusted source. After all, Machine Learning begins and ends with data, and good-quality training records are essential for reducing noise during model training.
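
In practice, stewardship programs often codify such expectations as automated checks that run before any record reaches a training pipeline. The sketch below, with hypothetical column names and source identifiers, illustrates the kind of screening we have in mind:

    import pandas as pd

    TRUSTED_SOURCES = {"core_ledger", "custodian_feed"}  # hypothetical sources

    def screen_training_data(df: pd.DataFrame) -> pd.DataFrame:
        """Drop records that fail basic stewardship checks before training."""
        clean = df.dropna(subset=["trade_id", "amount", "source"])  # no gaps
        clean = clean.drop_duplicates(subset="trade_id")            # no duplicates
        clean = clean[clean["amount"] > 0]                          # sane values
        return clean[clean["source"].isin(TRUSTED_SOURCES)]         # trusted origin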

On this point, the SEC has this to say:

“SEC staff, particularly staff in the Division of Economic and Risk Analysis (aka ‘DERA’), have long recognized how essential it is to have usable and high-quality data...When applied to the emerging fields of SupTech and RegTech, there is tremendous potential for enhanced regulatory compliance” [6].

Where these programs are given sufficient resources to competently monitor incoming data, organizations will see the biggest reduction in the risks associated with bad predictions, in addition to lower model upkeep. In turn, this will enable them to research and take on more complex use cases where the return on investment could be much higher.

As we can see, it is essential for organizations to get ahead of the game by gaining early exposure to these technologies, getting comfortable with the language of AI, and incrementally building experience.

Unquestionably, understanding and sterilizing the risks associated with AI automation is a major undertaking. The financial services industry lends itself to a number of use cases: quality of customer, market, and securities data; reconciliation; fraud monitoring; credit risk; market risk; and many more. The steps outlined above present a solid foundation upon which to build valuable AI capital by adopting advanced tools and frameworks that can be leveraged and improved upon. The time is ripe to get the effort underway.

6   Parting Words

In this paper, we have studied the unique challenges brought forth by AI; examined how regulators are proactively working with the industry to define guidelines that minimize abuse; established the need for tools to validate and explain models; and surveyed some of the most popular methods for achieving these goals, outlining their benefits.

In addition, by laying out a simple approach to the adoption of AI technology, we hope to have allayed some of the fears that accompany efforts of this magnitude and to have shown how much of the risk can be sterilized while still benefiting from the insights garnered along the way.

Explainable models are critical, but they will have to co-exist in an ecosystem with frameworks that monitor, measure, and prevent bias; that incorporate appropriate data stewardship; and, most importantly, that embody human-centered ethics.

Aligning the vision and goals of an organization with the transformative force of AI will ensure beneficial outcomes and success as we continue this journey.

About EZOPS

EZOPS Inc. is a U.S.-based fintech firm providing full front-to-back data control software that drives efficiency and dramatically reduces operational costs and resource requirements for financial services institutions.  With the power of its novel machine learning tools capable of automating virtually any manual process, EZOPS has harnessed artificial intelligence to vastly enhance data flow and reduce operational bottlenecks, enabling clients to enjoy major cost and time savings while achieving straight-through processing (STP) automation goals. Clients such as global and regional banks, futures commission merchants (FCMs), asset managers, fund administrators, insurance firms and corporate treasury operations use the EZOPS modular suite of software tools to control and manage the full range of post-execution business processes associated with their listed and over-the-counter derivatives activity.

References

[1] Goethe, Johann Wolfgang von. Faust. Part II: A Dramatic Poem. Edinburgh: W. Blackwood, 1886.

[2] McFarland, Matt. “Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon.’” The Washington Post, WP Company, 24 Oct. 2014, www.washingtonpost.com/news/innovations/wp/2014/10/24/elon-musk-with-artificial-intelligence-we-are-summoning-the-demon/

[3] Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Penguin Books, 2018.

[4] Dodson, Sean. “Was Software Responsible for the Financial Crisis?” The Guardian, Guardian News and Media, 15 Oct. 2008, www.theguardian.com/technology/2008/oct/16/computing-software-financial-crisis

[5] Delclós, Carlos. “The Authority of the Inscrutable: An Interview with Cathy O’Neil.” CCCB LAB, 6 Feb. 2019, lab.cccb.org/en/the-authority-of-the-inscrutable-an-interview-with-cathy-oneil/

[6] “Speech.” U.S. Securities and Exchange Commission, 3 May 2018, www.sec.gov/news/speech/speech-bauguess-050318

[7] “How Much Data Is Produced Every Day?” Northeastern University Graduate Programs, 26 Nov. 2019, www.northeastern.edu/graduate/blog/how-much-data-produced-every-day/

[8] “Speech.” U.S. Securities and Exchange Commission, 21 June 2017, www.sec.gov/news/speech/bauguess-big-data-ai

[9] “Accelerating America’s Leadership in Artificial Intelligence.” The White House, The United States Government, 11 Feb. 2019, www.whitehouse.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/

[10] “OECD Principles on Artificial Intelligence.” Organisation for Economic Co-operation and Development, www.oecd.org/going-digital/ai/principles/

[11] “Art. 1 GDPR – Subject-Matter and Objectives.” General Data Protection Regulation (GDPR), gdpr-info.eu/art-1-gdpr/

[12] “Supervisory Letter SR 11-7 on Guidance on Model Risk Management.” Board of Governors of the Federal Reserve System, 4 Apr. 2011, www.federalreserve.gov/supervisionreg/srletters/sr1107.html

[13] “G20 Ministerial Statement on Trade and Digital Economy.” 2019, www.mofa.go.jp/files/000486596.pdf

[14] “slundberg/shap.” GitHub, 28 Feb. 2020, github.com/slundberg/shap

[15] “marcotcr/lime.” GitHub, 21 Dec. 2019, github.com/marcotcr/lime

[16] “AI Explainability Whitepaper.” Google Cloud, storage.googleapis.com/cloud-ai-whitepapers

[17] “Machine Learning Explainability in Finance: an Application to Default Risk Analysis.” Bank of England Staff Working Paper No. 816, 2019, www.bankofengland.co.uk/-/media/boe/files/working-paper/2019/machine-learning-explainability-in-finance-an-application-to-default-risk-analysis.pdf

[18] “Explainable AI: The Basics.” The Royal Society, royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf