“It is true, we shall be monsters, cut off from all the world; but on that account, we shall be more attached to one another.”
— Mary Shelley, author of “Frankenstein, or The Modern Prometheus”
Reflecting on the current state of Artificial Intelligence (AI), one might say that in writing the novel “Frankenstein,” Mary Shelley unknowingly touched upon the fears and contradictions facing humans in the 21st century. In our desire to advance, we are pushing the boundaries of innovation by creating intelligent thinking machines that could (in a worst-case scenario) become self-serving and slip out of their creators’ control.
The jury is still out on whether the “we created a monster” narrative is simply the stuff of science fiction or whether AI poses a significant threat to humanity. We can say for sure that AI is having an impact and solving a wide range of problems. When used to solve highly specific problems, AI is referred to as “Narrow” AI and offers promising advancements for many industries. Its counterpart, “Full” or “General” AI, has bigger goals and aims to replicate human cognitive thinking – this capability stokes fear that AI will lead to self-destruction.
In the financial industry in particular, there are only a few case studies for Full AI. In a survey of AI risks by Emerj, which questioned more than 30 academic researchers, several said they were worried that AI designed to increase profitability for businesses or individuals could lead to economic catastrophe. Today, an increasing number of reputable banks and asset managers are leveraging predictive machines to pick stocks and improve trading outcomes.
The financial industry has far more Narrow AI case studies. The most common and notable use cases include AI’s ability to augment the customer experience with chatbots, predict fraud in a loan or portfolio, reduce the risk of money laundering, or determine borrower creditworthiness. These uses have made back offices more efficient, reduced overhead costs and improved customer retention. Some financial institutions are even exploring the use of AI in financial modeling, model benchmarking, auditing and model validation. These uses could improve a financial institution’s ability to make strategic, risk-based business and balance-sheet decisions.
Could such tools champion financial services operations management, save institutions from reputational damage and attract more customers? Perhaps. However, to reap the full benefits of AI, financial institutions need to prepare for risks and speed bumps such as regulatory scrutiny, data privacy and AI misuse.
For financial institutions beginning to invest in AI, here are three foundational moves that will help pave the way for AI’s use:
1. Strengthen Base Automation Capabilities
Financial services is an industry known for a wide variety of forms, reports and documents with different formatting, field names and data structures. One form might record a dollar figure to the decimal; another might round to the nearest hundred. Optical Character Recognition (OCR) technology can help an institution read traditionally non-machine-readable forms and extract data for use. Leveraging this technology, an institution can begin to aggregate, normalize and centralize key data elements (KDEs), and some of the more advanced OCR technologies can even begin to interpret the data. With a centralized, accessible database, institutions will be prepared to think through their biggest hurdles and identify how the performance and decisions of one department, such as credit, impact the performance and behavior of another, such as banking and deposits.
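The normalization step above can be sketched in a few lines. This is a minimal, illustrative example only: the form names, field formats and conversion rules are hypothetical stand-ins for whatever an institution’s OCR pipeline actually extracts, not a real product’s output.

```python
import re

def normalize_amount(raw: str) -> float:
    """Convert a dollar string extracted from a form ('$1,234.56',
    '1200', '1.2K') into a plain float number of dollars.
    The 'K' shorthand is a hypothetical rounded-thousands format."""
    s = raw.strip().upper().replace("$", "").replace(",", "")
    if s.endswith("K"):
        return float(s[:-1]) * 1_000
    return float(s)

# Records as they might arrive from three differently formatted forms
extracted = [
    {"form": "loan_app", "amount": "$1,234.56"},
    {"form": "deposit",  "amount": "1200"},
    {"form": "summary",  "amount": "1.2K"},
]

# Centralize into one normalized key-data-element (KDE) table
kde_table = [
    {"form": r["form"], "amount_usd": normalize_amount(r["amount"])}
    for r in extracted
]
```

Once every department’s figures land in one normalized table like `kde_table`, cross-department questions (how credit decisions affect deposits, for example) become straightforward queries rather than manual reconciliation exercises.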
2. Start with Specific Questions
To reinforce a point made in a previous article, “The Most Important Aspect of Data Has Nothing to Do with Data,” financial institutions should start with a strategy. In this case, solve for a specific challenge. The scale of AI can inspire an institution to begin collecting enormous amounts of data, but the more practical approach is to determine the problem you are trying to solve and the data required to achieve that objective. The application of AI will be far more successful if you start with a specific question; once the institution has a proof of concept in which AI answers that question, more questions and challenges can be considered.
3. Validate AI and Machine Learning Models
Regulators are closely watching how financial institutions use or intend to use AI and machine learning (AI/ML) models, and a simple explanation may not be satisfactory. Atul Nepal, Quantitative Analyst at MountainView Financial Solutions, a Situs company, said, “Explainability of model outcome plays a crucial role in financial services. If an AI-driven credit model denies a loan application, institutions need to show that AI did not deny the application based on factors such as race or gender. Regulators need to understand the factors considered in the loan decision. A black box won’t satisfy a regulator.”
Atul further stated that if an institution is using AI for any type of model, it is critical that the institution assess the data, document the model thoroughly, evaluate governance, benchmark the model, and vet the model through an independent validator – an individual not involved in any part of the model development process. Models using AI will likely be more complex and sophisticated, making them more difficult to explain to a regulator.
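The explainability requirement described above can be illustrated with a deliberately transparent scoring sketch. Everything here is hypothetical – the factor names, weights and approval threshold are invented for illustration and do not represent any real scorecard – but it shows the two properties a validator looks for: protected attributes are structurally excluded from the model, and every score decomposes into per-factor contributions a regulator can review.

```python
# Illustrative, transparent credit-scoring sketch (all values hypothetical).
COEFFICIENTS = {
    "debt_to_income": -2.0,       # higher DTI lowers the score
    "credit_history_yrs": 0.5,    # longer history raises it
    "on_time_payment_rate": 3.0,  # strong payment record raises it
}
INTERCEPT = -1.0
PROTECTED = {"race", "gender"}    # must never enter the model

def score(applicant: dict) -> float:
    """Linear score over permissible factors only."""
    assert PROTECTED.isdisjoint(applicant), "protected attribute in input"
    return INTERCEPT + sum(w * applicant[f] for f, w in COEFFICIENTS.items())

def explain(applicant: dict) -> dict:
    """Per-factor contribution to the score – the 'reasons' behind a
    decision that an independent validator or regulator can inspect."""
    return {f: w * applicant[f] for f, w in COEFFICIENTS.items()}

applicant = {
    "debt_to_income": 0.6,
    "credit_history_yrs": 4,
    "on_time_payment_rate": 0.9,
}
s = score(applicant)          # positive means approve in this sketch
reasons = explain(applicant)
```

A complex AI/ML model will not decompose this cleanly, which is exactly why the independent validation, benchmarking and documentation steps Atul describes become more demanding as model sophistication grows.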
It is easy to think through all that might go wrong if machines outsmart humans, but in financial services, AI might just catalyze a game-changing transformation. While the AI opportunity poses many challenges and risks, it is inspiring solutions at every turn. With careful planning, management and application of AI in financial services, we can keep the monsters at bay and use AI to become a champion for customers, employees and financial services leaders.
Does your institution utilize AI for financial modeling? Email firstname.lastname@example.org if you need data assessments, model benchmarking or model validation services.
Faggella, Daniel. “Risks of AI – What Researchers Think Is Worth Worrying About.” Emerj.com, December 8, 2018. Retrieved January 4, 2019. https://emerj.com/ai-market-research/artificial-intelligence-risk/