
AI compliance and compliant AI: a methodology for Luxembourg’s financial professionals

Théo Antunes


The AI Act (AIA) further clarifies the regulatory framework for AI development, deployment and use. While it does enact a series of obligations, borne mostly by providers and deployers, it contains only half of the equation required to ensure a fully compliant AI system under Luxembourg financial law. To reach that level of compliance, financial professionals' compliance departments must adopt an adequate methodology to meet Luxembourg's regulatory standards. This article highlights the challenges and pathways for establishing such a methodology, leading to the question of how AI systems should be developed, deployed and used to comply with the relevant Luxembourg and European provisions.

When one refers to AI compliance, the aim is to assess whether AI systems are governed by the appropriate set of rules, ensuring that their development, deployment and use are in line with regulatory expectations at both the national and European level. One would thus refer to the newly enacted AIA and to the General Data Protection Regulation (GDPR), which governs the data protection aspects of AI systems.

However, isolating these rules from any consideration of how the AI system is deployed creates a risk of inconsistency with the norms already applicable to the sector in which it operates. While the AIA establishes a series of obligations for all AI systems, it does so in a general way, regardless of the specificities of particular sectors. As such, the AIA is only half of the equation for ensuring fully compliant AI systems; the other half lies in adapting the norms of the sector to the technical characteristics of the AI system.

The lifecycle of AI systems can be divided into three phases. Firstly, development, where the AI system is given an objective, assembled, programmed and trained by the provider. Secondly, deployment, meaning how the AI system is integrated into the financial professional's activity. Thirdly, use by the deployer in pursuit of that objective.

Because AI systems are malleable enough to fulfil many different tasks for financial professionals, their development and deployment must keep those professionals compliant with Luxembourg law. This requires financial professionals to consider the rules already binding the purpose the AI system aims to fulfil. For instance, if an AI system is deployed to detect and report suspicious transactions, AML rules must be considered so the financial professional remains compliant with those norms.

To ensure compliant AI for financial professionals, two approaches are needed. First, AI compliance must be treated as the common ground binding all AI systems, as enshrined in both the AIA and the GDPR (1). Second, a contextual approach must be adopted to ensure that the deployment of the AI system is carried out in a manner compliant with its final use (2).

AI compliance: a common ground for all AI systems

AI compliance comprises all obligations to which providers and deployers are subject under the AIA and, to a certain extent, the GDPR. Indeed, while the European Union (EU) has chosen to adopt a dedicated regulation for AI systems (B), one should also consider the implications of the GDPR in this perspective (A).

 

A. GDPR and AI systems: drawing the line

Financial professionals might decide to take an "in-house" approach to developing their AI systems, meaning that they develop their own model rather than purchasing a licence for an existing one. The AIA does not appear to distinguish between the requirements imposed on third-party providers and those imposed on in-house providers: both need to demonstrate that their AI system complies with the regulation. However, the development phase is unique in that it falls under both the AIA and the GDPR. Thus, the first step toward ensuring that development complies with Luxembourg and EU law is to clearly distinguish between GDPR and AIA compliance.

Before the AIA, and in the absence of a specific framework answering the challenges of AI systems, the GDPR applied to them insofar as they were automated personal data processing tools. For example, in the Clearview AI cases, the GDPR was found to be violated primarily due to the lack of consent from individuals whose data was repurposed. Issues such as consent, purpose limitation and the proportionality of data processing have been central in challenging the legality of these practices. Violations ranged from data collection to data transfer outside Europe and its processing by U.S.-based AI software. The CJEU SCHUFA case also highlighted that credit risk profiling algorithms fall under Article 22 of the GDPR, requiring credit institutions to ensure that such software does not constitute the sole basis for loan denial decisions.

However, with the entry into force of the AIA, one should reconsider the role of the GDPR and where its provisions remain relevant in the AI lifecycle. Since AI systems inherently involve automated data processing, there could be confusion about how to reconcile compliance under the GDPR and the AIA. An analysis of the AIA reveals two key aspects. Firstly, the AIA should be read in conjunction with the GDPR, as it explicitly states that it does not undermine existing EU data protection law. Secondly, the AIA governs the development and use of AI systems, including the rights of individuals affected by AI outputs. It is, however, less explicit regarding the data protection guarantees that apply to AI systems: it only sets norms for the training of AI systems, requiring that training data be of high quality, meaning that it meets adequacy and reliability standards. Given the AIA's specific focus on training, its silence on data collection suggests that GDPR provisions apply to the gathering of training data, extending principles such as fairness, purpose limitation, proportionality and consent to the collection of data for training AI systems. The AIA primarily addresses the processing of data during AI system training and use, focusing on output accuracy requirements and the bias mitigation techniques undertaken by the provider. Thus, while the AIA governs the data training process and system use, the GDPR is more relevant to the collection of personal data for AI training.

A potential challenge arises from the interplay between Article 22 of the GDPR, which prohibits decisions based solely on automated data processing that have significant effects on individuals, and the AIA. While the AIA does not explicitly ban sole reliance on AI outputs, even for high-risk systems, it implicitly discourages users from depending solely on them. Article 14 of the AIA suggests that deployers should avoid automatic or disproportionate reliance on AI outputs, though it does not impose a blanket prohibition as the GDPR does. Both texts aim to ensure that AI systems and automated data processing are not the sole basis for decisions likely to significantly impact individuals, particularly in high-risk scenarios. Compliance with this measure, already enshrined in the GDPR, is thus extended into the AIA.

In practice, the GDPR is most relevant during the data collection phase of AI model training. Ensuring compliance with its principles can be challenging given the vast amounts of data used to train AI systems, especially where that data was previously collected, with consent, for another purpose. Reusing such data in a different context can breach the GDPR principles of consent and purpose limitation, as demonstrated by European Data Protection Authority (DPA) decisions in the Clearview AI cases. Enforcement of these rules is further complicated by the lack of transparency during AI development. The AIA addresses this issue by establishing principles and rules for providers to mitigate such risks.

 

B. AI compliance under the AIA: establishing a common ground for AI systems

The AIA aims to establish a harmonised framework for developing, deploying and using AI systems within the EU, apart from those used for national defence. It applies to AI systems used by public bodies such as law enforcement, judicial authorities and public administration, as well as by private entities such as financial professionals, credit institutions, insurance companies, investment firms, asset managers, custodians and crypto-asset service providers.

The AIA classifies AI systems based on the level of risk they pose to fundamental rights, which is crucial for determining whether an AI system is prohibited and, if not, the specific requirements that govern its development, deployment and use. Many AI systems employed by financial professionals in client-related activities or to uphold their compliance obligations would likely be classified as high-risk due to their potential to significantly impact fundamental rights, including data protection and consumer rights. This classification would typically cover AI systems used for trading recommendations, detecting suspicious activities and transactions, investment recommendations, client profiling for loans, know-your-customer (KYC) checks, investment profiles, fraud detection, and transaction screening and monitoring. By contrast, AI systems developed for chatbots would generally not fall into this high-risk category.

As high-risk systems, these AI tools must comply with specific common requirements, which can be broadly categorised into obligations related to their development and obligations related to their use. Although the AIA primarily focuses on system classification rather than on the roles of the actors involved, this division aligns with the distinction between providers and deployers. A financial professional can act as both under the AIA. If a financial professional develops its own AI system or commissions its development, it is considered a provider. Conversely, if a financial professional merely uses an AI system after purchasing a licence from a provider, it is classified as a deployer and subject only to deployer obligations. When the system is developed "in-house", the financial professional is both provider and deployer, requiring a clear separation of its obligations, especially in view of the reporting procedure to the CSSF, as this article elaborates further below.

Providers face several obligations to ensure that the AI systems they develop comply with the AIA. These include implementing risk management procedures and data governance, creating technical documentation, recording events during development, maintaining adequate documentation, generating automated logs, and conducting a fundamental rights risk assessment. Providers must also ensure that their AI systems meet standards of accuracy and transparency during their use by deployers. Development obligations must result in an AI system that complies with the rules of the AIA and whose risks of misuse, error and bias have been sufficiently mitigated.

Deployers, on the other hand, have obligations related to the AI system’s output and its role in decision-making. These obligations primarily involve maintaining human oversight of AI outputs and ensuring that the outputs can be explained to the affected individuals, especially regarding the final decision where AI is used as a decision-support tool. Notably, the AIA does not guarantee individuals the right to a full explanation of the AI’s decision-making process but rather focuses on clarifying the AI's role in the final human decision.

The AIA establishes these obligations, but they remain somewhat abstract without considering the specific purpose for which the AI system is deployed. The AI Act constitutes the common foundation with which all AI systems must comply, but it is not sufficient. Representing only half of the equation, AI compliance must be reinforced with the specific norms of the sector in which the system is deployed. For financial professionals, ensuring compliance thus requires ensuring that the AI system itself is compliant with that sector.

 

Compliant AI: a specific approach for AI systems

Compliant AI refers to the ability of an AI system to meet the regulatory requirements set forth by the AIA and the GDPR as well as the specific regulatory demands of the sector in which it is deployed. Failing to consider these factors during the deployment phase could render the norms governing AI development and use insufficient. Some of these rules, particularly those concerning the quality of data, which must be both adequate and reliable, are closely tied to the specific context of deployment.

For example, in the context of Anti-Money Laundering (AML) deployments, AI systems must not only adhere to technical standards but also align with the underlying principles and norms governing AML regulations. The AML risk assessment for financial professionals is based on both general and specific criteria, depending on the nature of the services provided. AI systems designed for AML purposes therefore require a tailored approach that mirrors the specificity of AML requirements. Similarly, a suspicious transaction or activity report must be supported by evidence justifying the suspicion. An AI system that is not specifically programmed to produce such evidence may be unfit for AML detection purposes and cause the financial professional to fall short of its compliance obligations.

A dual approach is necessary to achieve compliant AI under Luxembourg and European law. Firstly, one must conduct a thorough cross-analysis of the regulatory expectations related to the specific deployment of the AI system by identifying the regulatory requirements of the particular sector (A). Secondly, an effective methodology is needed for implementing these regulatory requirements within the AI system (B).

 

A. Identification of the regulatory requirements for AI systems

Identifying regulatory requirements is no stranger to compliance departments: it is a fundamental task and the initial step in ensuring that the services provided by financial professionals align with Luxembourg norms. This process typically involves analysing the nature of the financial professional's activities and determining their status under Luxembourg and European law. The goal is to ensure that compliance departments clearly understand which rules apply, how to maintain compliance, and how to mitigate the risks associated with these activities. While there may be similarities between different financial services, such as asset management and credit institutions, each faces unique risks that require tailored compliance approaches.

AI systems do not introduce entirely new services for financial professionals but rather enhance existing ones. For example, AI can be used for credit profiling to assess a client's risk and ability to repay loans. Additionally, AI systems can improve financial professionals' ability to comply with regulatory requirements. For instance, AI can enhance the detection of suspicious transactions or activities, thereby strengthening compliance with AML regulations. Similar AI applications can be used to detect financial crimes, such as market manipulation, insider trading, and fraud. Since AI systems are integrated into these existing obligations, they must also adhere to the legal requirements of these sectors.

For instance, the level of explanation required by the AIA will vary depending on the purpose of the AI system. Under the law of 12 November 2004, for example, compliance officers or money laundering reporting officers (MLRO) must provide the Luxembourg FIU with a reasonable basis for classifying a transaction as suspicious. AI systems used for detecting suspicious transactions must meet this standard of explanation to ensure the quality and accuracy of the output, particularly if it is included in a Suspicious Transaction Report (STR) or Suspicious Activity Report (SAR) before transmission. For these AI systems, the methodologies used for Know Your Transaction (KYT), Know Your Customer (KYC) and Know Your Business (KYB) should be enhanced by AI without altering their core principles. This pre-development analysis is crucial, especially for AI systems used in AML contexts, which need to be tailored to the financial professional's risk assessment. This assessment considers factors such as the nature of the professional's activities, the characteristics of its clientele, its geographical locations, and the types and volumes of transactions performed.
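To make this explanation standard concrete, the sketch below shows one way a transaction-scoring system could surface the per-feature contributions behind an alert, giving the MLRO a documented basis for an STR. It is a minimal illustration only: the feature names, weights and threshold are invented assumptions, not a real scoring model or a regulatory format.

```python
# Minimal sketch: deriving a human-readable "reasonable basis" from a linear
# transaction-scoring model, so an MLRO can justify an STR/SAR to the FIU.
# All feature names, weights and the threshold are illustrative assumptions.

WEIGHTS = {
    "amount_vs_profile_ratio": 1.8,    # transaction size vs. client's usual volume
    "high_risk_jurisdiction": 2.5,     # counterparty in a high-risk country
    "structuring_pattern": 2.1,        # repeated just-below-threshold transfers
    "dormant_account_reactivated": 1.2,
}
THRESHOLD = 3.0  # assumed alert calibration

def score_with_reasons(features: dict[str, float]) -> tuple[float, list[str]]:
    """Return the risk score and the per-feature contributions that drive it."""
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    score = sum(contributions.values())
    # The strongest drivers come first: they form the evidential narrative
    # the compliance officer reviews before filing.
    reasons = [
        f"{name}: contributed {contrib:+.2f} to the score"
        for name, contrib in sorted(
            contributions.items(), key=lambda kv: kv[1], reverse=True
        )
        if contrib > 0
    ]
    return score, reasons

if __name__ == "__main__":
    tx = {
        "amount_vs_profile_ratio": 1.4,
        "high_risk_jurisdiction": 1.0,
        "structuring_pattern": 0.0,
        "dormant_account_reactivated": 1.0,
    }
    score, reasons = score_with_reasons(tx)
    if score >= THRESHOLD:
        print(f"ALERT (score={score:.2f}) - draft basis for STR:")
        for r in reasons:
            print(" -", r)
```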

The identification of regulatory requirements calls for a cross-reading of the requirements stemming both from the AIA and from the norms applying to the sector of deployment. This analysis must be performed before development starts: such a proactive approach ensures that AI systems are tailored so that financial professionals do not fall out of compliance with their obligations.

 

B. Compliance methodology for compliant AI systems

Identifying the regulatory requirements is the foundation upon which the compliant AI methodology is built. An adequate response, adapted to the specific objective of the system, is necessary for its full compliance.

How can these regulatory requirements be implemented within the system to ensure compliance? The answer lies in the relationship between the AI system's provider and its deployer. Whether the AI system is developed in-house is irrelevant to this analysis. The provider must ensure that the AI system is tailored to the deployer, which requires the deployer to inform the provider of the regulatory standards applying to its activity. What matters is that this exchange takes place within a clear methodology ensuring that the financial professional deploys a compliant AI.

Firstly, the identified regulations must be translated into technical requirements. This echoes the prior identification of the regulatory norms applying to the specific deployment of the AI system. If AI is employed for credit risk assessment, it must avoid relying on biased data so that clients are not later subjected to potential discrimination, and it must rely on data that is both adequate and reliable at the level of the financial institution. It must also take into account the provisions applying to prudential requirements, capital requirements, and the relevant CSSF circulars. Extra attention must be paid to the context in which the AI system is deployed for such credit risk purposes: in the context of ESG and credit risk assessment, for instance, the AI system must produce an output compliant with CSSF circular 21/769, which governs this specific area. The development of the AI system must secure such compliance through the selection of relevant, reliable data and a suitable model. For AML purposes, development must follow the same path by addressing the requirements of the law of 12 November 2004: the system must both maintain the financial institution's capacity to screen suspicious transactions and activities and be capable of explaining its flagging to the MLRO or compliance officer in charge. These technical requirements must be upheld throughout development to limit the risk of errors after deployment. The final objective of this phase is to ensure the compliance of the AI system with the sector in which it is deployed; without it, the AI system could push the financial professional outside the applicable regulatory norms. This challenge is accentuated where a specific procedure, such as KYC, is fully automated with an AI system, opening a risk of liability for the financial professional under CSSF supervision.
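The sketch below illustrates how a non-discrimination requirement for a credit-scoring system could be turned into a concrete pre-deployment gate: comparing approval rates across groups of a protected attribute on a validation set. It is a simplified, assumption-laden example; the four-fifths threshold is borrowed from disparate-impact practice for illustration and is not an AIA, GDPR or CSSF rule.

```python
# Minimal sketch of a pre-deployment fairness check for a credit-scoring
# system: compares approval rates across groups of a protected attribute.
# The 80% ("four-fifths") threshold is an illustrative assumption.

from collections import defaultdict

def approval_rates(records: list[dict]) -> dict[str, float]:
    """Approval rate per group of the protected attribute."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        g = r["group"]
        totals[g] += 1
        approved[g] += r["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ok(records: list[dict], threshold: float = 0.8) -> bool:
    """Flag the model for review if the lowest group's approval rate falls
    below `threshold` times the highest group's rate."""
    rates = approval_rates(records)
    return min(rates.values()) >= threshold * max(rates.values())

# Illustrative model outputs on a held-out validation set.
validation = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
if not disparate_impact_ok(validation):
    print("Potential discriminatory impact: escalate to compliance review.")
```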

Secondly, one must focus on the use phase to ensure that compliant AI systems remain so. The methodology for complying with regulatory requirements lies in the ongoing monitoring of the system during its lifecycle. Such monitoring serves several objectives that the compliance department must address. The AI system must be integrated smoothly into the services it supports, without disturbing existing processes and organisation. To that end, compliance departments need to implement solutions ensuring that compliance is not undermined by AI systems. A major step in this direction is appropriate training of the system's final users. Such training should lead users to understand the limits of the AI system, how to interpret its outputs, and how to integrate those outputs adequately into their final decisions under both the AIA and the rules of the deployment sector. It must be performed to prevent the risk of misuse by employees when delivering services to the clientele and should teach employees not to trust AI outputs blindly. The compliance department should also draft and implement ethical guidelines adapted to AI systems to complement such training and limit the risk of non-compliant AI use. These guidelines should focus in particular on the complementarity between humans and AI systems, limit over-reliance on outputs so that automation bias does not arise, and provide clients affected by an output with reasoning on how the AI system contributed to the final decision. Their fundamental purpose is to give users an ethical approach to AI systems that is compliant with the services they provide or with their compliance obligations under Luxembourg law.
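As an illustration of how over-reliance can be constrained in practice, the sketch below shows a decision-support gate in which the AI recommendation is advisory only and no client-affecting decision is recorded without a named human decision-maker and a written rationale. The data structures and field names are hypothetical, offered in the spirit of Article 14 AIA and Article 22 GDPR rather than as a prescribed implementation.

```python
# Minimal sketch of a human-oversight gate: the AI output is advisory only,
# and no client-affecting action is recorded without a documented human
# decision. All class and field names are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AiRecommendation:
    client_id: str
    recommendation: str   # e.g. "deny_loan"
    confidence: float
    explanation: str      # the AI's role, to be disclosed to the client

@dataclass
class HumanDecision:
    decided_by: str
    final_decision: str
    rationale: str        # mandatory: blind approval is rejected
    timestamp: str

def apply_with_oversight(rec: AiRecommendation, decided_by: str,
                         final_decision: str, rationale: str) -> HumanDecision:
    """Record the final decision only with a human rationale attached."""
    if not rationale.strip():
        # Guard against automation bias: an empty rationale is refused.
        raise ValueError("A documented human rationale is required.")
    return HumanDecision(
        decided_by=decided_by,
        final_decision=final_decision,
        rationale=rationale,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Usage: the analyst, not the system, records the final decision.
decision = apply_with_oversight(
    AiRecommendation("client-42", "deny_loan", 0.92, "high debt-to-income ratio"),
    decided_by="analyst.jdoe",
    final_decision="request_additional_documents",
    rationale="AI flag reviewed; income documentation incomplete, not a refusal.",
)
```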

Thirdly, once integration has followed such a smooth and ethical approach, the compliance methodology must also encompass monitoring of the AI system's performance and relevance in the field in which it is deployed. AI systems must bolster compliance, not undermine it, and this requires monitoring. Two types of solutions can be implemented. On the one hand, real-time performance monitoring through dashboards and tools is crucial, allowing the tracking of metrics such as accuracy, precision, recall and response times. On the other hand, anomaly detection must be implemented to spot deviations from the outputs expected at development time, allowing the identification of data issues, data drift or degradation of the AI model. Implementing a feedback loop can also help the provider rapidly implement modifications so the AI system continues to function adequately. Such performance monitoring must feed into performance reviews of the AI system: the compliance department must establish regular reviews of the system's performance against benchmarks and expected results. Depending on the review results, compliance departments should undertake an update or recalibration of the AI system once the cause of underperformance has been identified. Whether through an update of the data, an update of the model, or a new calibration, the review must pinpoint the relevant levers for enhancing system performance.
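The sketch below shows what the computation behind such a dashboard might look like: standard classification metrics derived from a confusion matrix, plus a population stability index (PSI) as one common, simple drift indicator on the score distribution. The alert thresholds (e.g. PSI above 0.2) are conventional rules of thumb used here as assumptions, not regulatory values.

```python
# Minimal sketch of the computations behind an AI performance dashboard:
# confusion-matrix metrics plus a population stability index (PSI) for
# score drift. Thresholds are illustrative rules of thumb.
import math

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Accuracy, precision and recall from confusion-matrix counts."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total if total else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index between the score distribution at
    validation time ('expected') and in production ('actual')."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def proportions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative review: monthly metrics plus a drift check on model scores.
metrics = classification_metrics(tp=80, fp=12, fn=9, tn=899)
drift = psi(expected=[0.1, 0.2, 0.3, 0.4, 0.5], actual=[0.3, 0.4, 0.5, 0.6, 0.7])
if metrics["recall"] < 0.85 or drift > 0.2:   # assumed alert thresholds
    print("Escalate to compliance review:", metrics, f"PSI={drift:.2f}")
```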

Fourthly, the AIA aims to ensure that the use of AI systems is auditable by national authorities. In the context of use by financial professionals, this role could fall to the CSSF, whose task is to supervise and monitor the financial market and ensure compliance by financial professionals. To this end, a certain level of reporting and documentation is expected so that the system, and its overall compliance in the eyes of the CSSF, can be audited. Such reporting should include performance metrics as well as the compliance status of the AI system, in order to assess both whether AI compliance has been reached and whether the AI is compliant with the norms of the sector in which it is deployed. In anticipation of the entry into force of the AIA, AI systems should already be auditable through development documentation and technical audits.
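As a final illustration, the sketch below shows the kind of structured, machine-readable record that could support such a supervisory audit. The schema and field names are invented assumptions: neither the AIA nor the CSSF prescribes this format, and a real reporting package would follow whatever the supervisor specifies.

```python
# Minimal sketch of a structured audit record for an AI system, of the kind
# that could support a supervisory audit (e.g. by the CSSF). The schema is
# an illustrative assumption, not a prescribed regulatory format.

import json
from datetime import datetime, timezone

def build_audit_record(system_id: str, version: str, purpose: str,
                       metrics: dict, compliance_checks: dict) -> str:
    """Serialise one audit snapshot of the AI system as JSON."""
    record = {
        "system_id": system_id,
        "model_version": version,
        "intended_purpose": purpose,             # mirrors AIA technical documentation
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "performance_metrics": metrics,          # e.g. precision/recall per review
        "compliance_checks": compliance_checks,  # pass/fail per requirement
    }
    return json.dumps(record, indent=2)

print(build_audit_record(
    system_id="tx-screening-01",
    version="2.3.1",
    purpose="AML transaction screening (assumed high-risk under the AIA)",
    metrics={"precision": 0.91, "recall": 0.87},
    compliance_checks={"human_oversight_enabled": True, "logs_retained": True},
))
```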

Compliant AI requires methodically ensuring that AI systems fulfil their objectives in line with the financial professional's overall compliance obligations. It must cover the pre-development, development, deployment, use and performance review phases, while remaining transparent enough to allow the CSSF to audit such systems.

 

Conclusion

A single approach cannot solve the equation of integrating compliant AI systems into financial professionals' activities. Rather, compliance departments must adopt a methodological approach that ensures dual compliance: with the AIA on the one hand, and with the norms applying to the sector where the AI system is deployed on the other. Such a methodology should be reproduced for every type of AI system considered by financial professionals.

As Luxembourg financial professionals increasingly turn to AI systems to bolster their activities and compliance solutions, an effective and comprehensive methodology such as the one developed in this article would be a first step toward ensuring full compliance. 

 

Join Théo Antunes on November 14 in Luxembourg for our annual “AML/CTF Table Ronde des Régulateurs” conference.

Théo will participate in the “AI & AML, Compliance, Risks and Opportunities” discussion panel.

 

Sources:

CSSF and Banque centrale du Luxembourg. (2023). Thematic review on the use of Artificial Intelligence in the Luxembourg financial sector, May 2023.

European Parliament and Council. (2018). Directive (EU) 2018/843 amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, and amending Directives 2009/138/EC and 2013/36/EU, OJ L 156 [2018].

European Parliament and Council. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119 [2016].

FATF. (2021). Opportunities and challenges of new technologies for AML/CFT.

European Parliament and Council. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), adopted 6 March 2024, recital 27.

OECD. (2021). Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers.

 
