In Part One, we established why the CBN’s new Baseline Standards for Automated AML Solutions rank among the world’s best. Here, we examine the risks those Standards create and the hard governance work that genuine compliance requires.
A regulatory framework is only as valuable as the quality of its implementation.
The CBN has been explicit on this point from the opening pages of its new Baseline Standards – they are designed to ensure “demonstrable effectiveness and not merely feature-based compliance or vendor-driven implementation”.
That phrase is both an aspiration and a warning. It tells institutions precisely what the CBN will be looking for when it examines compliance and what will not satisfy it.
What follows is an analysis of the ten most significant risks embedded in the new framework, explained in terms that non-technical readers can follow, with the supporting detail and specific Standards references that Compliance Officers and Risk Managers need to act on.
The more sophisticated an AI model, the harder it typically is to explain what it is doing. A rule-based system is fully transparent; every decision traces to a defined condition. An advanced ensemble model builds complex internal relationships between hundreds of variables whose interactions cannot be reduced to a simple causal account.
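To make the contrast concrete, the sketch below sets a traceable rule beside an ensemble score. Everything in it is hypothetical: the thresholds, feature names and data are invented for illustration, and the model merely stands in for whatever an advanced vendor system might supply.

```python
# Illustrative contrast between a traceable rule and an opaque ensemble.
# All thresholds, features and data here are synthetic examples, not taken
# from the CBN Standards or any real AML product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Rule-based screening: every alert cites a named, documented condition.
def rule_based_alert(amount_ngn: float, txns_last_24h: int) -> tuple[bool, str]:
    if amount_ngn > 5_000_000:          # hypothetical threshold
        return True, "Rule R1: single transaction above NGN 5m"
    if txns_last_24h > 20:              # hypothetical threshold
        return True, "Rule R2: more than 20 transactions in 24 hours"
    return False, "no rule triggered"

# Ensemble screening: the score emerges from hundreds of fitted trees
# interacting across many variables; there is no single condition to cite.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 40))                    # 40 synthetic features
y = (X[:, 0] + X[:, 3] * X[:, 7] > 1).astype(int)   # synthetic label
ensemble = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

print(rule_based_alert(6_000_000, 3))        # (True, 'Rule R1: ...')
print(ensemble.predict_proba(X[:1])[0, 1])   # a probability, with no rule behind it
```

The rule's reasoning can be read aloud to an examiner. The ensemble's probability has no equivalent sentence behind it.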
The Standards address this directly. Section 5.5a.v requires that where AI and ML are used, the institution must provide “clear, auditable information on the key factors that contributed to the alerts to support human review”. Section 5.4a.iv makes human oversight and explainability conditions of AI deployment. The NDPA 2023 reinforces this with specific obligations around automated individual decisions: an institution that cannot explain to a regulator or a court why a transaction was flagged or an account action taken carries legal exposure that is difficult to bound.
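What might “clear, auditable information on the key factors” look like in practice? One hedged illustration is the per-alert audit record sketched below; the structure and every field name are this article's invention, not anything prescribed by the Standards or the NDPA.

```python
# A hypothetical per-alert audit record illustrating the kind of information
# Section 5.5a.v appears to contemplate. The structure and field names are
# this article's invention, not a prescription of the Standards.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertAuditRecord:
    alert_id: str
    model_version: str                       # which model produced the alert
    score: float                             # model output that fired the alert
    key_factors: list[tuple[str, float]]     # (factor, estimated contribution)
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    reviewer: str | None = None              # filled in at human review
    reviewer_decision: str | None = None     # e.g. "escalate", "close"

record = AlertAuditRecord(
    alert_id="ALT-000123",
    model_version="tm-ensemble-3.2.1",
    score=0.93,
    key_factors=[("txn_amount", 0.41), ("counterparty_risk", 0.22)],
)
```

The awkward field is key_factors: populating it credibly is where post-hoc explanation tools come in.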
The post-hoc explainability tools vendors typically provide (mathematical techniques known as SHAP values and LIME, which attempt to describe why a model made a particular decision) are useful approximations but have known limitations. They describe what the model probably considered; they do not provide a definitive causal account and should not be presented as such in regulatory submissions.
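For readers who want to see the shape of such an approximation, here is a minimal SHAP sketch, assuming the open-source shap package is installed and using a synthetic model like the one above. SHAP's return types vary across versions, so treat it as indicative rather than canonical.

```python
# A minimal post-hoc explanation sketch using SHAP (pip install shap).
# The model and data are synthetic; the output is an estimate of feature
# influence, not a definitive causal account of the decision.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                      # 12 synthetic features
y = (X[:, 0] + X[:, 3] * X[:, 7] > 1).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Model-agnostic explainer over the alert probability, with a background
# sample of 100 rows to estimate contributions against.
explainer = shap.Explainer(lambda x: model.predict_proba(x)[:, 1], X[:100])
explanation = explainer(X[:1])                      # explain one flagged case

# Rank features by the magnitude of their estimated contribution.
ranked = sorted(enumerate(explanation.values[0]),
                key=lambda kv: abs(kv[1]), reverse=True)
for idx, contribution in ranked[:5]:
    print(f"feature_{idx}: estimated contribution {contribution:+.3f}")
```

Output like this supports human review; it does not replace it, and it should be filed as an estimate of influence, not a causal finding.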
What institutions must do – Before deployment, apply a practical explainability test. Sit a senior Compliance Officer in front of a live AI-generated alert alongside the model’s explanation and ask two questions: can they document the reasoning in terms a CBN examiner would accept? Can they explain it to a customer who disputes the decision? If the answer is consistently no, the model is not operationally ready for that category of decision.
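Institutions that want to operationalise that test can wrap a thin harness around it. The sketch below, again wholly hypothetical, renders an alert’s top factors into the plain-language summary the Compliance Officer is asked to defend:

```python
# A hypothetical harness for the explainability test described above:
# turn an alert's top factors into reviewer-facing prose, then record the
# Compliance Officer's verdict. Names and wording are illustrative only.
def render_for_review(alert_id: str,
                      key_factors: list[tuple[str, float]]) -> str:
    lines = [f"Alert {alert_id} was raised primarily because:"]
    for name, contribution in key_factors:
        direction = "increased" if contribution > 0 else "decreased"
        lines.append(f"  - {name} {direction} the risk score "
                     f"(estimated contribution {contribution:+.2f})")
    return "\n".join(lines)

summary = render_for_review(
    "ALT-000123", [("txn_amount", 0.41), ("counterparty_risk", 0.22)])
print(summary)

# The test itself stays human: if the officer cannot defend this summary to
# an examiner or a disputing customer, record a "no" and treat the model as
# not ready for that category of decision.
officer_accepts = False   # outcome of the sit-down review, not of the code
```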