
Four considerations for CECL model validation

Jeff Prelle, Managing Director and Head of Risk Modeling, presented on CECL model validation at the GFMI Model Risk Conference on January 29, 2019.

Welcome to the CECL time warp. When the FASB announced its CECL requirement in 2016, it seemed like there was an endless amount of time available to meet CECL 2021 implementation deadlines. Now, in 2019, the road to implementation seems like a speedy downhill drive in a car with squeaky brakes – except there are, in fact, ways to check the brakes along the way.

That was the message that Jeff Prelle, Managing Director and Head of Risk Modeling at MountainView Financial Solutions, a Situs company, communicated to the audience at the GFMI Model Risk Conference on Tuesday in San Francisco. His presentation provided financial institutions with four considerations for CECL model validation.

Sympathetic to his model risk audience, Prelle stated, “I don’t know about you, but time is funny. When we have a couple of years to implement something, we seem to procrastinate. During 2016, we thought we had time for CECL development and implementation, but before we knew it, the calendar turned to 2019. If the next two years move just as fast, we need to find ways to conduct due diligence on the CECL process to make sure we have done it right. Model validation provides us with a way to do this.”

Although it went by fast, said Prelle, the last several years have given the industry time to explore CECL modeling, learn lessons from International Financial Reporting Standard 9 (IFRS 9), circulate modeling concepts among peers, and share tools and techniques. But, he warned, CECL comparability across peer institutions and portfolios will remain a challenge; there is still a lot of testing and tweaking ahead.

Prelle reminded the audience that while independent validation of models is required under model risk management guidelines, validation has value beyond checking the box. Reflecting on IFRS 9 implementation overseas, Prelle noted that some organizations neglected to run parallel tests and waited until the go-live date to see the effects of IFRS 9. This ultimately resulted in significant increases in loan loss allowance estimates (the “cliff effect,” a quick rise in losses caused by the two-stage estimate) and in capital uncertainty. Frequent validation might have afforded these institutions the opportunity to recalibrate and improve their models.

Prelle recommended validating CECL models at three key moments:

· During the model development process;

· After model completion and use;

· After the model is modified, improved, or recalibrated.

While the frequency of validation is critical, institutions should keep in mind that the subjectivity of the standard leaves specific aspects of the requirement open to interpretation. To prepare for validation, the following areas demand special attention:

Is Your Estimate Reasonable and Supportable?

Whether the estimated loss is reasonable and supportable depends not only on the history of losses, but also on whether economic scenarios are used and how far into the future those scenarios can reasonably be estimated. Therefore, institutions should supplement their historical observations with other factors to determine whether that estimate is likely to hold in the future. For example, prior to 2007, financial institutions considered Home Price Indices (HPI) to be a major driver of mortgage default. Post-recession, however, the unemployment rate was considered the key driver of default.
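To make the idea concrete, here is a minimal sketch of one common approach, offered as an illustration rather than anything prescribed by the standard or by Prelle: use scenario-driven loss rates inside the reasonable and supportable window, then revert to a long-run historical average beyond it. The horizon, the rates, and the four-quarter linear reversion ramp are all illustrative assumptions.

```python
def blended_loss_rates(scenario_rates, long_run_avg, rs_horizon, life,
                       reversion_quarters=4):
    """Quarterly loss-rate path: scenario-driven inside the reasonable
    and supportable (R&S) window, reverting to the long-run average after.

    scenario_rates:     forecast rates covering at least the R&S window
    long_run_avg:       historical average loss rate used after reversion
    rs_horizon:         quarters deemed reasonable and supportable
    life:               remaining life of the portfolio, in quarters
    reversion_quarters: assumed length of the linear reversion ramp
    """
    rates = list(scenario_rates[:rs_horizon])
    last_scenario_rate = rates[-1]
    for q in range(life - rs_horizon):
        # Weight shifts linearly from the last scenario rate to the
        # long-run average over the reversion ramp, then stays there.
        w = min((q + 1) / reversion_quarters, 1.0)
        rates.append((1 - w) * last_scenario_rate + w * long_run_avg)
    return rates

# Example: an 8-quarter R&S window on a 20-quarter remaining life.
path = blended_loss_rates(
    scenario_rates=[0.010, 0.012, 0.014, 0.015, 0.015, 0.014, 0.013, 0.012],
    long_run_avg=0.009, rs_horizon=8, life=20)
```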

Moreover, the reasonable and supportable period for CECL modeling is being assessed differently at various financial institutions. Some institutions are using a blanket time frame across all products, while other institutions are tailoring the reasonable and supportable period by product type.

Prelle reminded institutions that to determine how far back in time to go, they need to look at the model or system of models used to produce their estimates. In a system of models, an institution’s reasonable and supportable time frame will be dictated by the factor with the shortest reasonable and supportable period.
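In code terms, the constraint Prelle described reduces to a minimum across components. A tiny hypothetical sketch, with component names and horizons invented purely for illustration:

```python
# Hypothetical R&S horizons (in quarters) for each component model
# feeding a CECL estimate; names and values are illustrative only.
component_horizons = {
    "unemployment_forecast": 8,   # macro scenario judged supportable for 2 years
    "home_price_index": 12,
    "prepayment_model": 6,
}

# The system-level reasonable and supportable window is bounded by the
# component with the shortest supportable forecast.
rs_horizon = min(component_horizons.values())
binding = min(component_horizons, key=component_horizons.get)
print(f"System R&S horizon: {rs_horizon} quarters (bound by {binding})")
```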

Are You Effectively Tracking Model Change and Attribution?

Models consist of various components, and over time model performance can degrade. Degradation in model projections can be attributed to changes in the data, changes in the underlying model drivers, or shifts in strategic factors that alter the policies affecting portfolio characteristics, among other causes. Effective tracking of model results should be able to pinpoint which component(s) caused the change. Institutions should pay special attention to model tracking and attribution during validation because auditors will closely scrutinize the attribution requirements. While calibration changes are easy to track, model redevelopment requires additional consideration because the entire specification changes, in addition to any errors found in the model.
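One common way to make this attribution concrete, and an assumption here rather than Prelle’s stated method, is a step-through decomposition: re-run the model swapping in one updated component at a time and record the increment each swap contributes. A minimal sketch with an invented `run_model` interface:

```python
def attribute_change(run_model, old, new):
    """Step-through attribution of an allowance change.

    run_model: callable taking (data, drivers, calibration) -> allowance
    old, new:  dicts holding each component as of the prior and current run
    Returns the increment attributable to each component, swapped in order.
    Note: the decomposition is order-dependent; fixing the swap order (or
    averaging over orders) should be a documented policy choice.
    """
    keys = ["data", "drivers", "calibration"]
    current = dict(old)
    baseline = run_model(**current)
    attribution = {}
    for key in keys:
        current[key] = new[key]
        result = run_model(**current)
        attribution[key] = result - baseline
        baseline = result
    return attribution

# Toy usage: allowance = volume * loss rate * calibration factor.
toy = lambda data, drivers, calibration: data * drivers * calibration
delta = attribute_change(
    toy,
    old={"data": 1_000.0, "drivers": 0.020, "calibration": 1.00},
    new={"data": 1_100.0, "drivers": 0.025, "calibration": 0.95},
)
# The increments sum exactly to the total change, toy(**new) - toy(**old).
```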

Does Your Data Represent the Portfolio on Book? 

Data is an area of increased scrutiny for our clients, as auditors are looking to ensure that the data used in the model reflects the portfolio on book. If an institution is lax in how it tracks the data coming into its models, red flags could be raised during an audit.

A model validation team will need thorough documentation of the data used and the ability to replicate the data set results. Moreover, any insufficiency in the data, or in data capture and governance plans, will likely come up during model validation.
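One standard check a validator might apply here (the article does not prescribe a specific test) is the population stability index (PSI), which quantifies how far the current book has drifted from the data the model was developed on. A minimal sketch using NumPy:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between model-development data and the current book.

    expected: sample of a variable (e.g., FICO) from development data
    actual:   the same variable from the portfolio on book today
    Rule of thumb: PSI < 0.10 stable, 0.10-0.25 monitor, > 0.25 investigate.
    """
    # Bin edges come from the development sample's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small floor avoids log-of-zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```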

Have You Tracked Your Q-Factors?

Q-factors (qualitative factors) are often used as model overrides and will need to be tracked. Institutions should ensure that there is a model override monitoring process by business line, which should include testing model performance with Q-factors applied to determine any errors against actuals. It is important to keep in mind that adjustments are not necessarily a one-time action: when models are adjusted, they need to be revisited later to ensure that the adjustments are still relevant as institutions continue to rely on their CECL models. Additionally, Q-factors will have to be tracked for attribution analysis of modeled losses.
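As a hypothetical illustration of what an override-monitoring record could capture, the sketch below logs each Q-factor adjustment by business line and back-tests both the raw and adjusted estimates against realized losses once actuals arrive. The field names and error measure are assumptions, not a prescribed design.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Tuple

@dataclass
class QFactorOverride:
    """One logged Q-factor adjustment; all field names are illustrative."""
    business_line: str
    as_of: date
    model_estimate: float       # allowance before the override
    q_factor_adjustment: float  # amount added or subtracted by judgment
    rationale: str
    actual_loss: Optional[float] = None  # filled in once actuals are known

    @property
    def adjusted_estimate(self) -> float:
        return self.model_estimate + self.q_factor_adjustment

    def backtest_errors(self) -> Optional[Tuple[float, float]]:
        """(model-only error, adjusted error) versus actuals, if known."""
        if self.actual_loss is None:
            return None
        return (self.model_estimate - self.actual_loss,
                self.adjusted_estimate - self.actual_loss)

# Example entry: if the adjusted error is not consistently smaller than
# the model-only error, the override should be revisited.
log = [
    QFactorOverride(
        business_line="residential_mortgage", as_of=date(2019, 3, 31),
        model_estimate=4.20e6, q_factor_adjustment=0.35e6,
        rationale="local employment concentration not captured by model",
        actual_loss=4.45e6),
]
for entry in log:
    print(entry.business_line, entry.backtest_errors())
```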

During the GFMI conference, Prelle reminded the audience that in the big picture, institutions need to get their estimates right because the model outcomes will have a significant impact on their capital management decisions.

“If an institution is behind or hasn’t started working on CECL, revving the engine a little might be required,” said Prelle. “But use model validation as a way to remain thoughtful and strategic throughout the process.” 

Since validation will be a critical piece of CECL implementation, it is important to recognize that CECL models are more complex than many other models and will likely be assessed with greater scrutiny than lower-risk models. By validating regularly, institutions can better prepare for the types of questions that will arise in an audit, and will have a better chance of catching errors that could affect the accuracy of the estimate.

To learn more about our CECL Model Validation services, please reach out to Jeff Prelle, jprelle@mviewfs.com