This is the main message of the discussion paper “General principles for the use of Artificial Intelligence in the financial sector” published today by De Nederlandsche Bank (DNB). We will use this discussion paper, and the comments received, to engage in a dialogue with the Dutch financial sector over the coming months.
Financial firms increasingly use AI
Financial firms increasingly use AI to enhance their business processes, and improve their product and service offerings. Examples of current AI applications are identity verification in customer onboarding, transaction data analysis, fraud detection in claims management, pricing in bond trading, automated analysis of legal documents, customer relation management, and risk management.
AI enables financial firms to enhance their business processes and provide new added value. At the same time, incidents with AI, especially when this technology is not applied responsibly, could harm a financial firm or its customers and can have serious reputational effects for the financial system as a whole. Furthermore, given the interconnectedness of the financial system, incidents could also impact financial stability. For this reason, it is important that financial firms apply AI in a responsible manner, as part of their sound and controlled business operations.
SAFEST principles for responsible AI
We have formulated a number of general principles regarding the use of AI in the financial sector. The principles are divided across six key aspects of responsible use of AI, namely soundness, accountability, fairness, ethics, skills and transparency (or “SAFEST”).
AI applications in the financial sector should first and foremost be sound, meaning that they should be reliable and accurate, behave predictably, and operate within the boundaries of applicable rules and regulations. Firms should also be accountable for their use of AI, as AI applications may not always function as intended and can result in damage to the firm itself, its customers and/or other relevant stakeholders. Furthermore, it is vital for society’s trust in the financial sector that AI applications do not inadvertently disadvantage certain groups of customers. As AI applications take on tasks that previously required human intelligence, ethics becomes increasingly important, and financial firms should ensure that their customers, as well as other stakeholders, can trust that they will not be mistreated or harmed because of the firm’s deployment of AI. When it comes to skills, from the work floor to the board room, people should have a sufficient understanding of the strengths and limitations of the AI-enabled systems they work with. Transparency, finally, means that financial firms should be able to explain how and why they use AI in their business processes, and (where reasonably appropriate) how these applications function.
As AI applications increasingly inform a financial firm’s decisions, and as their potential consequences for the firm and its customers grow, the responsibility and accountability standards governing their deployment will become stricter. As part of our supervision of financial institutions, we will critically consider the potential impact of firms’ AI applications and conduct further research into key aspects of the use of AI, such as transparency.
We are calling for a debate on AI
In this discussion paper, we offer our preliminary views on the responsible use of AI in the financial sector. We believe that the issues and ideas outlined in this paper will benefit from broader debate, and therefore we welcome comments on this discussion paper, which can be submitted at firstname.lastname@example.org. We will report on the outcome of this process over the course of 2020.