
Financial institutions proceeding with caution in race to use artificial intelligence for modeling

A few years ago, “big data” emerged as a buzzword across many industries, including financial services. The push was to capture all-encompassing information. While that talk has wound down, we’re now hearing an uptick in discussions about artificial intelligence (AI) and machine learning, which in practice often means using that big data to improve decision making.

Financial institutions are poised to be at the forefront of using AI and machine learning because they’ve had to maintain and sustain data for so long. Chat bots have been the most prevalent application of AI to date at large institutions, but AI is also being leveraged to prevent fraud, analyze legal contracts and even develop challenger models during validation.

Large and midsize institutions have also applied AI to other areas of their modeling as they’ve sought to ramp up the sophistication of their analyses, but they’ve done so with caution for a number of reasons, according to Jeff Prelle, Managing Director and Head of Risk Modeling at MountainView Financial Solutions, a Situs company.

Prelle states that AI is starting to be used more frequently for reviewing loan applications and deciding whether to extend credit, a trend that originated with marketplace lending platforms. This raises a number of fair lending concerns, and he points out that if you’re feeding data into models without the correct theoretical constructs behind them, you could easily violate fair-lending requirements.

There are three types of machine learning: supervised, unsupervised and semi-supervised. Each has different applications and a different level of associated risk. Factor in the regulations surrounding lending at financial institutions, and you realize that a model built on unsupervised learning is not always going to work well without introducing theoretical constraints into the process, especially in credit modeling, according to Prelle.
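To make the idea of theoretical constraints concrete, the sketch below shows one hypothetical way a supervised credit model could be bounded by business theory, using scikit-learn’s monotonic constraints so that, for example, a higher debt-to-income ratio can never make an applicant look less risky. The feature names, synthetic data and constraint choices are illustrative assumptions only, not a description of Prelle’s or any institution’s actual model.

```python
# Hypothetical sketch: a supervised credit model with business-theory constraints.
# Feature names, synthetic data and constraint choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant features (stand-ins for real bureau/application data).
debt_to_income = rng.uniform(0.0, 0.6, n)         # higher should never reduce risk
credit_history_years = rng.uniform(0.0, 30.0, n)  # longer should never increase risk
utilization = rng.uniform(0.0, 1.0, n)            # higher should never reduce risk

X = np.column_stack([debt_to_income, credit_history_years, utilization])

# Synthetic default outcomes generated to be consistent with the assumed theory.
logit = 3.0 * debt_to_income - 0.05 * credit_history_years + 1.5 * utilization - 1.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# monotonic_cst encodes the business theory: +1 means predicted risk may only
# rise with the feature, -1 means it may only fall, 0 means unconstrained.
model = HistGradientBoostingClassifier(monotonic_cst=[1, -1, 1], random_state=0)
model.fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
```

The point is not the specific library: the constraint encodes a theory of how each input should relate to risk before any data is fit, which is what keeps a flexible learner from drifting outside defensible behavior.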

In elaborating on this point, he emphasizes that the more data you collect, the more precise your result should be, but if you train the model incorrectly, you can still get an undesirable result.

“Business theory should bound what you’re going to feed the AI, and we have seen some people implementing AI outside of theory, utilizing data snooping,” said Prelle. “Some say the AI will be the end of statistical theory, but that’s really not always the case, because it’s math at the end of the day.”

The obvious corollary to this precautionary note, according to Prelle, is that it’s easy to improperly implement these models and the data constructs behind them. “If the data is bad, your model is going to be bad, whether it’s machine learning or not,” he explained. “You can build a theoretically sound model, but if the data is bad, you’re definitely going to get a bad result.”

Another challenge with implementing AI for financial models is that the Federal Reserve’s model risk management guidance, SR 11-7, does not explicitly address AI as part of the process, though many of the same principles apply. Prelle said some individuals using AI don’t know how to interpret or test it yet. “There’s a real danger in not having a good understanding of how to test it, how to use it, how to validate it, and how to make sure you’re not making bad decisions because of the points I just mentioned.”
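What that testing looks like in practice is left largely to the institution. As one hypothetical illustration (the function name and tolerance below are assumptions, not regulatory requirements), an outcomes-analysis check might compare a model’s discrimination on its development holdout against a more recent out-of-time window and flag material degradation for review:

```python
# Hypothetical sketch of a simple outcomes-analysis check a validator might run:
# compare discrimination (AUC) on the development holdout vs. a later out-of-time
# window. The 0.05 tolerance is an illustrative assumption, not a standard.
from sklearn.metrics import roc_auc_score

def discrimination_drift(model, X_holdout, y_holdout, X_recent, y_recent,
                         max_auc_drop=0.05):
    """Return both AUCs and whether the drop exceeds the assumed tolerance."""
    auc_dev = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
    auc_oot = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    return {
        "auc_development": auc_dev,
        "auc_out_of_time": auc_oot,
        "flag_for_review": (auc_dev - auc_oot) > max_auc_drop,
    }

# Example usage with the earlier sketch's model and a newer labeled sample:
# report = discrimination_drift(model, X_test, y_test, X_new, y_new)
```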

In summarizing the challenges, Prelle said we are starting to view modeling through a very new type of lens: “You take away some visibility when you use AI, but it is not impossible to implement it well and verify the results, so you must temper it with theory in all phases of the model life cycle.”