
Model Risk Management: Model Testing

The US Federal Reserve (Fed) and the US Office of the Comptroller of the Currency (OCC)'s Supervisory Guidance on Model Risk Management (SR 11-7) is accepted – in Europe as well as in the US – as the global standard for the application of model risk management (MRM). Specifically, this guidance – issued to market participants in the form of a letter – provides recommendations to commercial and investment banks (CIBs) of all sizes on the best practices needed to develop and apply a robust MRM process.
SR 11-7 provides this guidance by introducing stages to MRM, which allows CIBs to garner a common appreciation of the concept of the MRM lifecycle. On the basis of its experience with CIBs of all sizes, GreySpark Partners has developed a comprehensive view of this lifecycle, its requirements and the best means of its practical application.

The Elements of Model Testing & Regulatory Expectations

Model testing is a crucial part of the model development process because it verifies that the model and its components perform as intended.

As stated in the SR 11-7 guidance, besides checking the model's accuracy, model testing is also intended to: "assess the impact of assumptions and identify situations where the model performs poorly or becomes unreliable."

For that reason, it is important to align testing scenarios with current market conditions and business expectations. Testing activities should include the purpose, design and execution of test plans; summary results with commentary and evaluation; and detailed analysis of informative samples (see Figure 1).


Figure 1: The Testing Elements of SR 11-7

Source: GreySpark analysis


A Challenging Testing Environment

In GreySpark's experience, the majority of CIBs take a checkbox approach, working from a generic list of testing requirements that is not tailored to the model being tested. This is reflected in the use of a single, static test plan for all models, regardless of their shape and size and regardless of pass / fail test outcomes.

This checkbox approach creates the following issues for banks:

  • Because models of all risk levels go through the same level of testing, the validation process for low-risk models is slower than it needs to be;
  • A lack of differentiated testing for simple or low-risk models creates inefficiencies and frustration between the lines of defence;
  • Incorrect tests may be applied to high-risk models, meaning that potential risks are not identified or properly mitigated; and
  • Incorrect tests may be applied to models that feed algorithms and, as a consequence, risks are not appropriately mitigated.


Best Practices: Taking a Flexible Approach to Testing

As part of the CIB industry's best practices, GreySpark considers that testing requirements should be tailored to the model being tested on the basis of a predetermined list of criteria. The selection of some or all of these criteria would determine the testing requirements for each model.

Such a list could include, but is not limited to:

  • The model’s risk rating / level;
  • The asset class or business unit where the model will operate;
  • Model activity; and
  • The algorithm controls framework, in the case of models that feed algorithms.

These criteria could also be combined to determine whether testing should be performed at an algorithm level, at a model level, or even at both levels, as well as which types of testing – for example, stress testing or back-testing – should be performed.
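The decision logic described above can be sketched in code. The following is a minimal, hypothetical illustration of risk-based test-plan selection; the criteria names, risk tiers and test types are illustrative assumptions on our part, not a taxonomy prescribed by SR 11-7.

```python
def select_tests(risk_rating: str, feeds_algorithm: bool) -> list[str]:
    """Map a model's criteria to a tailored set of testing requirements.

    risk_rating: illustrative tiers "low" / "medium" / "high".
    feeds_algorithm: True if the model's output feeds a trading algorithm.
    """
    # Every model is checked against its intended purpose
    tests = ["accuracy_testing"]
    if risk_rating in ("medium", "high"):
        tests += ["back_testing", "sensitivity_analysis"]
    if risk_rating == "high":
        # High-risk models get the most rigorous plan, including
        # independent review of the testing results
        tests += ["stress_testing", "independent_review"]
    if feeds_algorithm:
        # Models that feed algorithms are also tested at the algorithm level
        tests.append("algorithm_level_testing")
    return tests

# A high-risk model feeding an algorithm attracts the fullest plan;
# a low-risk standalone model passes through quickly
print(select_tests("high", feeds_algorithm=True))
print(select_tests("low", feeds_algorithm=False))
```

The point of the sketch is that the test plan is derived from the model's characteristics rather than copied from a single static checklist, which addresses both the slow validation of low-risk models and the under-testing of high-risk ones.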

For example, an investment bank with risk-based testing requirements might require more rigorous testing, and an independent review of the testing results, for a high-risk model than for a model classified as low-risk.

Finally, testing activities should be appropriately documented so that everyone involved in the model lifecycle has access to the testing results, which can be used during the validation and monitoring stages.

Documentation should include a summary of all test results and a conclusion on the outcome, whether the results are acceptable or not. By properly documenting the testing requirements applied to each model, together with evidence of the testing results, CIBs can ensure that models are fit for purpose and that any risks to the models' performance are appropriately mitigated.
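As a sketch of what such documentation might capture, the record structure below pairs each test result with commentary and derives a conclusion regardless of whether the results were acceptable. The field names and schema are our own illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    """One documented test outcome for a model (illustrative schema)."""
    test_name: str
    passed: bool
    commentary: str = ""

@dataclass
class TestReport:
    """Summary of all test results for a model, reusable in the
    validation and monitoring stages."""
    model_id: str
    records: list[TestRecord] = field(default_factory=list)

    def conclusion(self) -> str:
        # A conclusion is recorded whether or not the results are acceptable
        failed = [r.test_name for r in self.records if not r.passed]
        if failed:
            return "UNACCEPTABLE: failed " + ", ".join(failed)
        return "ACCEPTABLE: all tests passed"

report = TestReport("example-model")
report.records.append(TestRecord("back_testing", True))
report.records.append(TestRecord("stress_testing", False, "breached risk limit"))
print(report.conclusion())
```

Keeping the applied requirements and their evidence in one structured record is what lets the second and third lines of defence reuse testing results instead of repeating the work.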

Over a series of five articles, GreySpark will explore these topics in an effort to assess the challenges associated with their application and to draw out the best practices that can be utilised to manage their implementation.

The third article in the series of five will examine model validation.

For more information, please contact the authors.
