Monday, April 6, 2015

Conducting a Model Validation for BSA – A Two-Part Series – Part Two
As we discussed in the first part of this series, examiners have placed increasing emphasis on the use of BSA/AML monitoring software.  In particular, regulators expect that when banks obtain monitoring software, they will also perform regular data and model validations.  We noted in Part One that data validations should be performed by comparing core system data with the data in the monitoring software, and that they should be performed with some regularity to prevent long periods during which errors may go undetected.
Official guidance from the OCC and the Federal Reserve is the basis for regulators' expectations in this area.  The guidance, first published in 2011, directly addresses the use of models as part of a risk and compliance structure at banks. [1]  The guidance notes that the use of models in banking is both a desirable and useful practice; however, models cannot be viewed as a complete solution for compliance purposes.  Models have to be part of an overall compliance program.  Moreover, models must be validated as to both data accuracy and applicability.  It is not enough to simply test whether the data in your BSA/AML software has been properly mapped.  You must also determine that the software is doing what the bank needs it to do to monitor suspicious activity.
Model Validation
The OCC guidance points out that the use of models in any banking environment must fit within a risk framework.  This framework has essentially four elements:
  • Business and regulatory alignment – the model must fit the bank's risk profile and regulatory requirements
  • Project management – a proper and appropriate implementation is an ongoing project, as dynamic as the bank's operations
  • Enabling technology – the use of the technology should facilitate the bank's ability to meet its regulatory requirements
  • Supporting documentation – as a best practice, documentation of the rationale for using the model should be maintained
For BSA/AML monitoring software, the risk framework means that regulators expect a bank to know how its software works, as well as its "blind spots" – transactions that may not be completely covered by the way the software operates.  The expectation is that your bank will use monitoring software as a tool that is constantly being sharpened and improved.  The model validation process is the means of ensuring that the software is improving.
It's 11 p.m. – do you know what your software is doing?
The first consideration in completing a model validation is to determine which type of software you are using and how it works – in other words, the conceptual framework of the software.  The most popular types of BSA monitoring software are:
  • Risk-based – These systems incorporate factors such as NAICS codes, zip codes, volume, and frequency to predict higher-risk customers
  • Rule-based – These systems compare transactions to scenarios that mimic suspicious activity
  • Behavior-based – These systems establish a "baseline" for a customer and track activity that exceeds it
  • Intelligent systems – This software is based on decision trees that follow data patterns that have indicated suspicious activity
  • Combination – These systems incorporate two or more of the above.
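To make the rule-based concept concrete, here is a minimal sketch of how a single scenario might work.  The scenario, threshold amounts, and function names below are hypothetical illustrations, not the logic of any actual vendor product: the rule flags customers who repeatedly make cash deposits just under the $10,000 currency-reporting threshold (a classic structuring pattern).

```python
from collections import defaultdict

# Hypothetical rule-based scenario: flag customers with two or more cash
# deposits just under the $10,000 reporting threshold.  The band below is
# an assumed tuning parameter, not a regulatory figure.
NEAR_THRESHOLD_LOW = 9_000
NEAR_THRESHOLD_HIGH = 10_000

def structuring_alerts(transactions):
    """transactions: list of (customer_id, amount) cash deposits.
    Returns the set of customers whose activity matches the scenario."""
    near_threshold = defaultdict(int)
    for customer, amount in transactions:
        if NEAR_THRESHOLD_LOW <= amount < NEAR_THRESHOLD_HIGH:
            near_threshold[customer] += 1
    # An alert fires only when a customer repeats the pattern.
    return {c for c, count in near_threshold.items() if count >= 2}

txns = [("A", 9500), ("A", 9800), ("B", 2000), ("C", 9900)]
print(structuring_alerts(txns))  # {'A'}
```

A real system would run many such scenarios over dated transactions within rolling windows, but even this toy version shows why the thresholds themselves are something a validation must examine.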
It is important to document that the bank is aware of which type of software it has.  Moreover, it is important to document how the software's concept aligns with the risk profile of the bank.  The model has to be one that can discern the transactions your BSA risk assessment has identified as higher risk.  Ultimately, the expectation is that you will be able to document what your software is monitoring and how it is keeping track of your risky transactions.  You should be able to document just how alerts are created and what they mean.  Are you getting a report when a customer sends a wire for the first time?  How about the first time a wire goes to a foreign country?  Does the software have the ability to track and aggregate ALL transactions associated with one customer and her affiliates?
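That last question – aggregating all activity across a customer and her affiliates – can be illustrated with a short sketch.  The linkage table and names here are invented for illustration; in practice the relationship data would come from the bank's CIF or core system.

```python
from collections import defaultdict

# Hypothetical linkage table mapping a primary customer to every account
# treated as part of the same relationship (e.g., a customer and her LLC).
affiliates = {"cust1": {"cust1", "cust1-llc"}}

def aggregate_activity(transactions, links):
    """transactions: list of (account_id, amount).
    Rolls totals up to the primary customer so that activity split across
    affiliated accounts is still seen as one relationship."""
    account_owner = {acct: owner
                     for owner, accts in links.items()
                     for acct in accts}
    totals = defaultdict(float)
    for acct, amount in transactions:
        totals[account_owner.get(acct, acct)] += amount
    return dict(totals)

txns = [("cust1", 4000.0), ("cust1-llc", 7000.0), ("cust2", 500.0)]
print(aggregate_activity(txns, affiliates))
# {'cust1': 11000.0, 'cust2': 500.0}
```

If your software cannot perform this kind of roll-up, two $7,000 transactions through related accounts may never surface as a single $14,000 pattern – exactly the sort of question a model validation should answer.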
Along the same lines, it is critical that a gap analysis be performed to determine where the system has "blind spots."  Are there transactions, such as a transfer of funds between two customers, that are not being picked up by the software?  What if someone cashes a check and gives the funds to another customer?  Will your software be able to catch that, if this is the sort of transaction that occurs at your bank and is considered risky?  Ultimately, the conceptual framework of the software you are using must match the products, customers, and transactions of your bank.
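At its simplest, a gap analysis is a set comparison: the transaction types your core system actually processes versus the types your monitoring scenarios cover.  The type names below are hypothetical examples; the real lists would come from your product inventory and your vendor's scenario documentation.

```python
# Hypothetical inventories for a gap analysis.  In practice, the first set
# comes from the bank's core system and product list, the second from the
# monitoring software's scenario documentation.
core_transaction_types = {"wire", "cash_deposit", "check_cashing",
                          "internal_transfer", "ach"}
covered_by_scenarios = {"wire", "cash_deposit", "ach"}

# Anything in the core but not in the scenarios is a candidate blind spot.
blind_spots = core_transaction_types - covered_by_scenarios
print(sorted(blind_spots))  # ['check_cashing', 'internal_transfer']
```

Each item in the resulting gap should then be assessed against the bank's risk profile: some gaps may be acceptable and documented, while others require new scenarios or compensating manual monitoring.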
Practice Makes Better (Never Perfect)
Many banks that have contacted us have been shocked to find that they were criticized for keeping the default settings of their software.  We are being conservative, the logic goes, by keeping the settings as wide open as possible.  However, this is most certainly not a best practice.  The use of models is supposed to enhance the ability of the BSA staff to detect suspicious activity and act on that information as necessary.  Default settings do not reflect the risk profile of an individual bank and often lead to a large number of alerts.  When alerts are generated for transactions that are clearly ordinary or not at all suspicious, the output from the software becomes ineffective, as the BSA staff spends hours on superfluous data.  The model has to be trained to know the risk profile of the bank and to look at data that is outside of that profile.  Therefore, as part of a model validation, it is necessary to review the rules or settings in place and to adjust them to optimize the software.  This process should be based upon both the experience of the BSA staff and comparative testing.  An effective means of optimization and tuning is to back-test several accounts.  Using core data, a staff member can ensure that all transactions that should be considered suspect are generating alerts in the software.
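The back-testing step above can be sketched as a simple reconciliation: staff review core data for a sample of accounts, mark which transactions they judge suspect, and compare that list against what the software actually alerted on.  The function and identifiers below are illustrative only.

```python
def back_test(known_suspect_ids, alerted_ids):
    """Compare transaction IDs that staff judged suspect (from a manual
    review of core data) against the IDs the software alerted on.
    Returns (missed, noise): misses suggest blind spots or settings that
    are too loose to catch real risk; noise suggests settings generating
    superfluous alerts that are candidates for tuning."""
    missed = known_suspect_ids - alerted_ids
    noise = alerted_ids - known_suspect_ids
    return missed, noise

missed, noise = back_test({"t1", "t2", "t3"}, {"t2", "t3", "t9"})
print(missed, noise)  # {'t1'} {'t9'}
```

A tuning cycle then adjusts thresholds or scenarios, re-runs the comparison, and documents the before-and-after results – which is exactly the evidence examiners look for that the model is being "sharpened" rather than left at its defaults.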
Who is Minding the Store?
The guidance from the OCC and the Federal Reserve also makes clear that a critical component of model risk management is output analysis.  The best data in the world will be ineffective in the hands of a staff member who does not know how to interpret it.  Moreover, the staff who receive and analyze the data must also have the authority to act on their conclusions.  The output of monitoring software should be reported to senior management on a regular basis, along with information about the actions taken in response to the data.  It is not enough to show that the software is creating alerts that result in suspicious activity reports (SARs).  The expectation is that at some point, when a customer has been the subject of multiple SARs, the data will be reported to senior management and a decision made whether or not to continue the relationship with the customer.
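That escalation expectation can be supported by a very simple piece of output analysis: counting SAR filings per customer and surfacing repeat filers for the management report.  The escalation threshold below is an assumed internal policy trigger, not a regulatory number.

```python
from collections import Counter

# Assumed internal policy trigger for escalating a relationship to senior
# management -- set by the bank, not by regulation.
ESCALATION_THRESHOLD = 3

def repeat_sar_customers(sar_filings, threshold=ESCALATION_THRESHOLD):
    """sar_filings: list of customer IDs, one entry per SAR filed.
    Returns customers at or above the threshold, with their counts,
    for inclusion in the regular report to senior management."""
    counts = Counter(sar_filings)
    return {c: n for c, n in counts.items() if n >= threshold}

filings = ["cust1", "cust2", "cust1", "cust1", "cust3"]
print(repeat_sar_customers(filings))  # {'cust1': 3}
```

The point is not the code but the governance it feeds: the report gives senior management the facts needed to decide whether the relationship should continue.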
Governance (It ALWAYS Comes Down to This!)
Ultimately, model validation comes down to the overall governance practiced at a bank.  Models are only as effective as the structure in which they are used.  The guidance notes that there has to be a governance structure surrounding the use of monitoring software.  This structure should include:
a. Senior management and Board involvement – through regular reporting to the Board or a committee of the Board
b. Policies and procedures – which should be updated on an annual basis
c. Roles and responsibilities – the staff responsible for reading and interpreting the data should be designated by the Board
d. Enterprise risk management and reporting – the software must be dynamic and should change along with the risk profile of the bank
e. Independent audit and testing – the overall model's effectiveness should be reviewed and tested regularly by an independent party.


[1] See OCC 2011-12; Federal Reserve SR 11-7.
 
