
Author(s):

Stephen Cecchetti | Brandeis University
Robin L. Lumsdaine | American University
Tuomas Peltonen | European Systemic Risk Board
Antonio Sánchez Serrano | European Systemic Risk Board

Keywords:

Artificial intelligence, systemic risk, financial stability, policy

JEL Codes:

G01, G18, G21, O33

This policy note is based on Report No 16 of the ESRB Advisory Scientific Committee (December 2025). The views expressed are those of the authors and not necessarily those of the institutions with which the authors are affiliated.

Abstract
Artificial Intelligence (AI) potentially offers substantial benefits to society and to the financial system, including accelerated scientific progress, improved economic growth through higher productivity, better decision making and risk management, optimised asset allocation and enhanced healthcare. But these benefits may come with significant risks. In this policy note we examine the interplay between AI and the main sources of systemic financial risk (liquidity mismatches, common exposures, interconnectedness, lack of substitutability and leverage). Five AI features – concentration and entry barriers, model uniformity, monitoring challenges, overreliance and excessive trust, and speed – could amplify risks in the financial system. With this in mind, we propose a mix of competition and consumer protection policies, complemented by adjustments to prudential regulation and supervision that could address these vulnerabilities.

Introduction

Artificial intelligence, encompassing both advanced machine learning models and more recently developed large language models (LLMs), can solve large-scale problems quickly, changing how we allocate time and resources. General uses of AI include knowledge-intensive tasks such as (i) aiding decision making, (ii) simulating large networks, (iii) summarising large bodies of information, (iv) solving complex optimisation problems, and (v) drafting text. These translate into productivity gains with increases in automation, faster and more efficient completion of various tasks, and the ability to tackle new tasks (some of which have not yet been imagined).

Sizeable corporate investments are making advanced AI capabilities, such as the LLMs ChatGPT, Claude and Gemini, widely accessible. While OpenAI does not publish exact numbers, recent reports suggest ChatGPT has over 900 million weekly active users. Chart 1 shows the sharp increase in the release of large-scale AI systems since 2020. Financial institutions are also adopting AI in their core operations. According to recent data from the European Banking Authority (2024), banks apply AI most often to activities including customer support, anti-money laundering, fraud detection, and profiling and clustering of clients or transactions. Similarly, the use of AI in the insurance sector is increasing (European Insurance and Occupational Pensions Authority, 2024) and, according to the European Securities and Markets Authority (2025), asset managers use AI, including LLMs, primarily to support human-driven investment decisions.

Chart 1. Number of large-scale AI systems released per year

Estimates of the medium- and long-term impact of AI on the economy vary. In a detailed study of the US economy, Acemoglu (2024) estimates the impact on total factor productivity (TFP) to be in the range of 0.05% to 0.06% per year over the next decade. This constitutes a modest improvement. On the other hand, Aghion and Bunel (2024) arrive at an estimated median TFP growth rate of 0.7% per year over the next ten years. Turning to labour markets, Gmyrek et al. (2023), for example, analyse 436 occupations and identify four groups: those least likely to be impacted by AI (mainly composed of manual and unskilled workers), those where AI will augment and complement tasks (occupations such as photographers, primary school teachers or pharmacists), those where the effect is difficult to predict (financial advisors, financial analysts and journalists, among others) and those most likely to be replaced by AI (including accounting clerks, word processing operators and bank tellers). Using detailed data, Gmyrek and coauthors conclude that 24% of clerical tasks are highly exposed to AI, with an additional 58% having medium exposure; for other occupations, roughly one-quarter of tasks have medium exposure.
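To see what these annual rates imply cumulatively, a simple compounding exercise contrasts the two sets of estimates; only the annual rates come from the cited studies, the compounding is our own arithmetic:

```python
# Compound the estimated annual TFP growth contributions over ten years.
# Rates are taken from the cited studies; the compounding is our arithmetic.
estimates = {
    "Acemoglu (2024), lower bound": 0.0005,    # 0.05% per year
    "Acemoglu (2024), upper bound": 0.0006,    # 0.06% per year
    "Aghion and Bunel (2024), median": 0.007,  # 0.7% per year
}

for label, rate in estimates.items():
    cumulative = (1 + rate) ** 10 - 1
    print(f"{label}: {cumulative:.2%} cumulative TFP gain after 10 years")
```

Over a decade, the two views thus imply cumulative TFP gains of roughly 0.5-0.6% versus about 7%, an order-of-magnitude difference.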

Focusing on the financial sector, the introduction of AI is just the latest innovation. Taking a very long-term perspective, we can think of many technological developments – both conceptual (software) and tangible (hardware) – that had a profound impact on finance. These include (i) double-entry bookkeeping in accounting in the 13th century, which allowed a precise record of assets and liabilities and profits and losses for the first time; (ii) the printing press in the 15th century, which enabled rapid and wide dissemination of information in written form; (iii) the creation of the East India Company in the 17th century, making possible trading in small claims on the revenue of a firm; and (iv) the telegraph, which increased the speed of communication across continents and expanded banks’ activities beyond their countries of origin. Similarly, computers substantially increased the capacity to process information and the speed of communication. Arner et al. (2015) see the launch of the ATM in 1967 as the starting point of a new era in finance. The following twenty years saw an explosion of innovations including credit and debit cards, various types of derivative instruments (including securitisation pools), the introduction of money market funds, the launch of electronic trading platforms such as Nasdaq, and many more. More recently, high-frequency trading and online banking and trading platforms are just a few examples of how computers have changed financial services.

This policy note, based on our recent report for the Advisory Scientific Committee of the European Systemic Risk Board (Cecchetti et al., 2025), discusses how the properties of AI can interact with the various sources of systemic risk and considers the implications for financial regulatory policy. The rapid increase in the use of AI, coupled with the intensity with which the financial sector tends to apply technological innovations, motivates us to assess the impact of AI on systemic risk, despite the prevailing uncertainty around its impact on the economy in the medium and long term. In this note (and the longer report) we contribute to the growing literature that examines the implications of AI’s rapid development and widespread adoption for financial stability. We highlight, among others, the contributions of the Financial Stability Board (2024); Aldasoro et al. (2024); Daníelsson and Uthemann (2024a, 2024b and 2025); Videgaray et al. (2024); and Foucault et al. (2025).

AI and sources of systemic risk

Our report emphasises that AI’s ability to process immense quantities of unstructured data and interact naturally with users means that it both complements and substitutes for human tasks. However, the development of these tools comes with risks. These include difficulty in detecting AI errors, decisions based on biased results because of the nature of training data, overreliance resulting from excessive trust, and challenges in overseeing systems that may be difficult to monitor.

As with all uses of technology, the issue is not AI itself, but how both firms and individuals choose to apply it. In the financial sector, uses of AI by investors and intermediaries can generate externalities and spillovers.

With this in mind, we examine how AI might amplify or alter existing systemic risks in finance, as well as how it might create new ones. We consider five categories of systemic financial risks (liquidity mismatches, common exposures, interconnectedness, lack of substitutability and leverage) and how AI features can exacerbate them. The five categories of systemic financial risks can be succinctly described as follows:1

  • Liquidity mismatches and information sensitivity arise from the fact that many financial intermediaries issue liquid liabilities and use the proceeds to purchase illiquid assets. When a shock causes liquid assets thought to be risk-free and information-insensitive to suddenly become risky and information-sensitive, this mismatch creates a system prone to runs on banks and markets.
  • Common (direct and indirect) exposures arise when many institutions or individuals face the same specific risk factor. While the precipitating event can be small, being exposed to the same shocks can bring the system down. This is equivalent to what happens in a biological system that lacks diversity, whereby a change in chemistry or climate can precipitate the collapse of the system.
  • Interconnectedness arises from the fact that financial intermediaries have a complex network of exposures. That level of complexity can create challenges, not just for identifying and mitigating risk but for resolving institutions as well.
  • Lack of substitutability arises when a critical service has very few suppliers so a failure can pose a systemic risk. Examples of this are retail and wholesale payments, derivative markets and the banking sector, where some banks have become “too-big-to-fail” in view of their size and the range of activities they perform.
  • The presence of leverage, understood as the prevalence of debt over own funds on the liabilities side of institutions’ balance sheets, exacerbates the systemic impact of the interactions between financial and economic activity, intensifying procyclicality. These interactions are mutually reinforcing and can create adverse feedback loops (a stylised numerical sketch follows this list).
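To make the leverage mechanism in the last bullet concrete, here is a stylised balance-sheet calculation; all figures are hypothetical and chosen only for illustration:

```python
# Stylised example of how leverage turns a small asset-price shock into a
# large equity loss. All figures are hypothetical.
assets = 100.0
equity = 5.0              # own funds
debt = assets - equity    # the remaining 95 is debt
shock = 0.03              # assets fall by 3%

loss = assets * shock
print(f"Leverage (assets/equity): {assets / equity:.0f}x")
print(f"A {shock:.0%} fall in asset values erases {loss / equity:.0%} of equity.")
```

At 20x leverage, a 3% fall in asset values erases 60% of equity; attempts to restore the leverage ratio then force asset sales, which is the procyclical feedback loop described above.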

 

Table 1 presents the results of this exercise. In the following, we discuss each of the features (the rows).

Table 1. How current and potential features of AI can amplify or create systemic risk

Monitoring challenges: Because of the complexity of AI models, developments in the financial system, and their implications, are in general difficult to observe. A sudden change in information will increase fragility caused by both the liquidity mismatch and the leverage on institutions’ balance sheets. In addition to direct exposures, cross-institution exposures can be indirect and yet common (clients of clients of clients, etc.), implying that interconnectedness is more complex and probably not fully observed. Monitoring challenges also lead to incomplete assessments of whether an institution or activity lacks substitutes in case it collapses, with this issue coming to the fore only when there is a crisis.

Concentration and entry barriers: If there are only a few providers for some AI services, they will likely become highly interconnected nodes of the system. Furthermore, lack of diversity among AI service providers may lead institutions to opt for similar risk profiles, based on liquidity mismatches between their assets and liabilities.2 Since AI service providers may have very different objectives and attitudes toward risk than those that would support financial resilience, their products may implicitly reflect these different objectives.3 In addition, given the prohibitive cost of developing new AI infrastructure (e.g. introducing a new LLM), there are both significant barriers to entry for potential new AI providers and significant switching costs for firms that might want or need to change providers, should one experience financial or other distress (e.g. a software glitch or ransomware attack).

Model uniformity: If financial institutions are using the same range of AI agents based on a limited number of foundational models and tools, the lack of diversity means that everyone has similar, correlated exposures. Reliance on a small set of pre-trained models poses risks similar to those of relying on a limited set of existing credit-scoring models. Moreover, broad use of AI may lead to increased uniformity in the reactions to external shocks and events. Similar responses increase the impact of leverage, making the system more procyclical and increasing its overall fragility. Recognition that existing models are malfunctioning may lead to abrupt reactions by institutions, exacerbating the impact of liquidity mismatches and creating information sensitivity.

Overreliance and excessive trust: AI may become widely used as a result of excessive trust placed in it by users, encouraging common exposures (as users blindly follow the output of a model), increasing interconnectedness (as the same AI is used for a wide range of tasks) and hindering oversight (as the tasks entrusted to AI go beyond what it was designed for). Interoperable AI systems across financial institutions and infrastructures can create cascading effects in the event of a systemic shock. Additionally, when errors are identified, trust in AI may rapidly diminish and information sensitivity can surge, exacerbating liquidity mismatches and leading to runs. The fact that humans are quick to abandon algorithms when they err could exacerbate the lack of substitutability of AI models. Furthermore, with overreliance comes indifference to other possible sources of information, increasing issues related to concentration and lack of substitutability.

Speed: Technological advances that increase the speed of the provision of financial services – including trading, clearing and settlement – can exacerbate issues created by common exposures and interconnectedness. We can imagine AI suggesting similar trading strategies and positions to a wide range of investors. By increasing the speed of reaction to shocks, AI can amplify procyclicality in the system. Increases in speed also make it harder to stop processes when things go wrong. This, combined with lack of substitutes, can generate systemic risk.

Opacity and concealment: Should AI models be widely employed, large parts of the financial system could rely on decision inputs that are hard to understand and explain, reducing transparency. This could lead users to exploit the capacity of AI to conceal their activity. Concealment can increase common exposures, interconnectedness and the impact of liquidity mismatches. The mechanism would be analogous to what happened with AIG in 2008: it became a very interconnected actor in the financial system with common exposures, but, due to low transparency, most of these exposures were not visible even to a contractual counterparty. When it started to face difficulties, AIG’s counterparties rushed to close their positions, changing the information sensitivity of their exposures.

Malicious uses: Whether a given technology is good or bad depends on who is using it and how. Bad actors can exploit AI in the same way that good actors can greatly benefit from it. Compared with other technologies, however, AI gives bad actors greater capacity to manipulate users, by exploiting behavioural biases and the trust people place in AI output. When this is widespread, it creates common exposures and interconnectedness. For example, malicious actors may use AI to persuade investors to take a certain position and then bet against it, similar to the actions by Citigroup in the electronic bond network MTS in August 2004.4 Furthermore, when the nature and extent of exploitation becomes apparent, the information sensitivity attached to the financial instruments concerned may change, potentially generating large losses due to liquidity mismatches.

Hallucinations and misinformation: All AI can be subject to hallucinations, generating outputs that are incorrect, nonsensical or fabricated, despite looking plausible. Even if unintended by providers, hallucinations are a potential source of misinformation to which all users of AI face exposure. In the financial system this implies that hallucinations may lead a broad range of economic agents to take similar positions, leading to an increase in the information sensitivity of the underlying instruments that could then trigger fire sales and runs.

History-constrained: Typically, AI models need time and new data to learn about infrequent or not-yet-seen events, such as the adverse tail of the distribution of possible outcomes. In other words, predictions of tail events by AI can be poor or simply non-existent. While advances in AI (e.g. generative AI) may increase our ability to generate predictions outside of historical experience, such predictions will be untested and noisy. The resulting inaccuracy in the probability density of possible outcomes could result in an underestimation of downside risks. As Foucault et al. (2025) discuss, the homogeneity of tail risk estimates coming from AIs means that when information about the tail arrives (not necessarily when tail events occur), the possibility of runs can surge.
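To illustrate, consider a deliberately stylised exercise: a thin-tailed (Gaussian) model fitted to a history drawn from a heavy-tailed distribution will understate extreme quantiles. The Student-t(3) choice and all figures below are our assumptions, purely for illustration:

```python
# A model calibrated only to observed history understates tail risk when the
# true loss distribution is heavy-tailed. Student-t(3) as the "true"
# distribution is an illustrative assumption.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
history = rng.standard_t(df=3, size=5_000)   # observed "history" of returns

# Gaussian model fitted to that history
mu, sigma = history.mean(), history.std()
model_q999 = norm.ppf(0.999, loc=mu, scale=sigma)
true_q999 = t.ppf(0.999, df=3)

print(f"Model's 99.9% quantile: {model_q999:.2f}")
print(f"True 99.9% quantile:    {true_q999:.2f}")
```

In this toy setting the fitted model puts the 99.9% quantile at roughly half its true value, the kind of underestimation of downside risk described above; and because many institutions would fit similar models to similar histories, the error is shared.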

Untested legal status: Reliance on a particular AI provider may prove misplaced should legal decisions render it dysfunctional. Examples of adverse legal decisions for an AI provider may include decisions regarding the use of data protected by copyright or private data to train models, or the potential unethical behaviour of self-learning AI algorithms.

Complexity makes them inscrutable: The difficulty in understanding how an AI model performs means that a surprise in its behaviour (or in the environment) or the discovery of a flaw in the code could trigger a run on positions taken based on output generated by it.

Finally, the lack of control over AI in the financial system can result in high interconnectedness and common exposures, which are not visible to humans. Similarly, losing control of AI may also limit the capacity to develop substitutes. Furthermore, complete reliance on AI could leave the financial system dependent on the preferences of providers, which need not coincide with those of society.

Policy response

In response to these systemic risks and associated externalities (arising from a combination of fixed costs and network effects, information asymmetries and bounded rationality), we believe it is important to engage in a review of competition and consumer protection policies, complemented by adjustments to regulation and supervision. Below we describe what these adjustments might be, based on an extrapolation of developments at the time of writing. In every case, it is important that authorities engage in the analysis required to obtain a clearer picture of the impact and channels of influence of AI, as well as the extent of its use in the financial sector. Of course, as AI develops further, it may lead to changes that are so substantial that authorities will need to craft an entirely new regulatory approach.

Regulatory adjustments to address systemic risks from AI

The increase in speed, scope and scale of AI may amplify existing systemic risks, increasing both the potential frequency and severity of financial stress and financial crises. A recalibration of existing policy tools is likely to be sufficient to ensure financial system resilience. In these recalibrations, regulators should analyse the impact of AI holistically, considering also how their own actions may influence any perceived benefits from the uptake of AI. They should also (i) identify the most effective tools to address systemic risks from AI, (ii) retain flexibility so requirements can remain as simple as possible, and (iii) assess whether there is any possibility of refining international standards on capital or liquidity requirements.

In our view, the most pressing task is a recalibration of capital and liquidity requirements to account for the speed, scope and scale factors that AI introduces. For example, capital requirements for operational risk should consider the impact of AI on that risk, including the potential for more sophisticated and frequent cyber-attacks. Analogously, in view of the potential for faster deposit runs in banking, liquidity requirements may require recalibration. Furthermore, while the current prudential framework focuses on traditional credit, market, liquidity and operational risks, addressing risks arising from, for example, concentration in AI providers, model uniformity or the potential increase in cyber-attacks may require fundamental changes that go beyond it.
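As a deliberately simplified illustration of such a recalibration, consider a toy liquidity-coverage calculation in which faster, AI-amplified deposit runs justify assuming a higher outflow rate over the stress horizon; all figures are hypothetical:

```python
# Toy liquidity-coverage calculation: a faster assumed deposit run (a higher
# outflow rate) shrinks the coverage ratio for the same liquid-asset buffer.
# All figures are hypothetical.
liquid_assets = 30.0
deposits = 100.0

for outflow_rate in (0.10, 0.25):  # stressed outflows over the horizon
    stressed_outflows = deposits * outflow_rate
    coverage = liquid_assets / stressed_outflows
    print(f"Assumed outflow {outflow_rate:.0%}: coverage ratio {coverage:.0%}")
```

Holding the buffer fixed, raising the assumed outflow rate from 10% to 25% cuts the coverage ratio from 300% to 120%; keeping coverage constant would instead require a correspondingly larger buffer.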

Looking at financial markets, we identify three areas where AI-stimulated increases in the speed, scope and scale with which the financial system may operate justify official sector examination: circuit breakers, insider trading, and disclosure.

  • Circuit breakers are already in operation in many markets, particularly those where high-frequency trading is common (Guillaumie et al., 2020). However, as AI comes into wider use, authorities may need to broaden the scope and increase the frequency of circuit breakers (possibly using AI tools to do so); a minimal sketch of the underlying mechanism follows this list.
  • Insider trading and market abuse regulations aim to prevent certain individuals from taking advantage of their privileged access to non-public information. The use of AI in financial markets, however, may affect the definition of insider trading, as well as the legal responsibilities of individuals, companies and AI providers (Daníelsson and Uthemann, 2024b). Ultimately, existing investor protection and market integrity regulations may require amendment.5
  • It is important that users of financial services are aware of how their financial institutions are using AI to make decisions and provide recommendations. This could result in the addition of explanatory and visible labels to, for example, UCITS (Undertakings for Collective Investment in Transferable Securities) when they use strategies determined by AI. Similarly, insurance corporations using AI to price their products or calculate their technical provisions should disclose this information to their customers.
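As referenced in the first bullet above, the core logic of a price-based circuit breaker is simple; the sketch below assumes a single symmetric threshold, whereas real venue rules are tiered and instrument-specific (Guillaumie et al., 2020):

```python
# Minimal circuit-breaker check: halt trading when the price moves more than
# a threshold away from a reference price. The 7% threshold is hypothetical.
def circuit_breaker_triggered(reference_price: float,
                              last_price: float,
                              threshold: float = 0.07) -> bool:
    move = abs(last_price - reference_price) / reference_price
    return move >= threshold

# An 8% fall from a reference price of 100 triggers a halt.
print(circuit_breaker_triggered(100.0, 92.0))  # True
```

Broadening the scope of circuit breakers, as suggested above, would amount to applying checks of this kind to more instruments, and increasing their frequency to evaluating them at shorter intervals.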

 

Central banks may need to consider whether their lending facilities are able to respond to sudden liquidity needs arising from a broader range of institutions functioning at a substantially faster pace (Daníelsson and Uthemann, 2024b). Authorities may even consider using AI to help them manage these facilities at time intervals that are too short for humans to react.

To avoid any broader adverse impact on society, authorities may consider ways that they can ensure AI providers have a stake in the outcome and that institutions using AI are sophisticated enough to understand the risks they are taking.6 “Skin-in-the-game” requirements for AI providers and “level of sophistication” requirements for institutions using AI could be a way to avoid excessive risk-taking in the use of AI.7 “Skin-in-the-game” requirements are not new in the financial system: net worth and/or collateral requirements are imposed on mortgage borrowers and investors expect hedge fund managers to commit the bulk of their personal wealth to their own funds. We see the possibility that requirements like these could help to avoid systemic risks arising from information sensitivity or common exposures without imposing unnecessary societal burden.8 As a precondition, however, there needs to be very clear legal responsibility in cases where AIs play a role in harmful outcomes.

Supervisory modifications to address systemic risks from AI

We agree with Daníelsson and Uthemann (2024a) that regulation alone will not be enough to contain the systemic risks arising from AI. That said, it is essential that authorities have adequate resources (IT and staff) to keep pace with developments in supervised entities and markets. Ideally, supervisors should be able to develop their own AI infrastructures. If they rely on commercially available tools, there is the risk that authorities become overly dependent on industry self-reporting and assurances — something that happened with complex derivatives prior to the global financial crisis of 2007-09.

Should supervisors fail to keep pace with developments in supervised institutions, they may not be able to properly monitor risk-taking activities. To put it bluntly, a supervisor using current conventional tools to try to supervise deep learning or reinforcement-learning systems will be blind to emerging risks. The result would be an increase in the frequency and severity of financial crises.

In addition, supervisory authorities need to strengthen their analytical capabilities to monitor interconnectedness and leverage across all the participants in the financial system, and to deepen the understanding of asset price formation and propagation channels. Authorities should also consider the impact of scenarios where different technologies such as distributed ledgers, smart contracts, AI and quantum computing interact. Transparency and a new approach to data sharing may also be important milestones in this task (Foucault et al., 2025).

Within the EU, cross-border cooperation and pooling of resources are critical for effective AI-Act-mandated market surveillance and the supervision of financial institutions that are using AI. Keeping up with the private sector will be very costly and resource-intensive. But without top-tier talent, authorities may struggle to understand the sophisticated AI models used by financial institutions, let alone detect systemic vulnerabilities hidden in them. One way to mitigate the costs for individual authorities is to pool resources, taking advantage of what are almost surely major economies of scale in developing, maintaining and implementing effective monitoring. We should also note that the difficulty of ring-fencing the provision of AI services to a given jurisdiction argues in favour of a centralised or, at least, pooled approach to the surveillance and supervision of AI activities.

Finally, supervisory authorities need to be aware of risks that their own use of AI can generate and the importance of governance. AI can be a powerful tool for supervisory authorities, multiplying their capacity to run stress test exercises or enhancing scenario analyses (Daníelsson and Uthemann, 2024a; Foucault et al., 2025). Given that authorities will likely be using a very limited number of AI tools and the potential for excessive trust in their outcomes, ensuring sound governance around AI use within supervisory authorities is important to avoid risks like those arising from widespread use in the private sector.

Concluding remarks

AI both presents opportunities and creates risks. It can expand human abilities to perform a wide range of tasks, potentially increasing productivity. In the financial system, we are likely to see a host of improvements. Retail investors could improve their saving and retirement decisions. Institutional investors could improve their asset allocation and risk management. Financial institutions could improve credit assessments, customer relations, compliance and regulatory reporting. But these benefits come with risks.

Numerous features of AI have the potential to create systemic risks. These include concentration of providers, monitoring challenges, the potential for increased model uniformity, opacity, speed, and the fact that AIs can promulgate misinformation. These, in turn, can amplify existing systemic risks, including liquidity mismatches that create runs, common exposures that lead to widespread losses, interconnections creating spillovers, and leverage leading to increases in procyclicality. A key component of the impact of AI on systemic risk is trust (i.e. how much trust humans will place in AI and for which type of tasks). In our assessment, these are systemic risks that may emerge at the current state of development of AI technologies.

In view of the potential systemic risks, it is essential to implement policies to ensure safe use of AI. Should authorities fail to keep up with the use of AI in finance, they would no longer be able to monitor emerging sources of systemic risk. The result would be more frequent bouts of financial stress that may require costly public sector intervention. Ideally, we might think of something like Asimov’s three laws of robotics applied to the financial system or of a global agreement on avoiding malicious uses of AI, similar to the Treaty on the Non-Proliferation of Nuclear Weapons.9

In the current geopolitical environment, the stakes are particularly high. We envisage a policy response combining regulation (with competition and consumer protection policies complemented by adjustments to existing financial regulation) and supervision, including a new focus on operational resilience, increased resources to look at the risks posed by AI, and increased cross-border cooperation and data sharing.

Finally, we should emphasise that the global nature of AI makes it important that governments cooperate in developing international standards to avoid actions in one jurisdiction creating fragilities in others.

References

Acemoglu, D. (2024), “The simple macroeconomics of AI”, NBER Working Paper Series, No 32487, National Bureau of Economic Research.

Aghion, P. and Bunel, S. (2024), “AI and growth: where do we stand?”, Policy Note.

Aldasoro, I., Gambacorta, L., Korinek, A., Shreeti, V. and Stein, M. (2024), “Intelligent financial system: how AI is transforming finance”, BIS Working Paper Series, No 1194, Bank for International Settlements.

Arner, D., Barberis, J. and Buckley, R. (2015), “The evolution of fintech: a new post-crisis paradigm?”, University of Hong Kong Faculty of Law Research Series, No 2015/047, University of Hong Kong.

Benoit, S., Colliard, J.-E., Hurlin, C. and Pérignon, C. (2017), “Where the risks lie: a survey on systemic risk”, Review of Finance, Vol. 21, Issue 1, pp. 109-152.

Cecchetti, S., Lumsdaine, R.L., Peltonen, T. and Sánchez Serrano, A. (2025), “Artificial intelligence and systemic risk”, Reports of the ESRB Advisory Scientific Committee No. 16, December.

Cecchetti, S. and Schoenholtz, K. (2024), “On AI and financial stability”, Money and Banking Blog, 15 November.

Daníelsson, J. and Uthemann, A. (2024a), “On the use of artificial intelligence in financial regulations and the impact on financial stability”, working paper.

Daníelsson, J. and Uthemann, A. (2024b), “Artificial intelligence and financial crises”, working paper.

Daníelsson, J. and Uthemann, A. (2025), “How central banks can meet the financial stability challenges arising from artificial intelligence”, SUERF Policy Brief, No 1163, SUERF – The European Monetary and Finance Forum.

European Banking Authority (2024), “Risk Assessment Report”, November.

European Insurance and Occupational Pensions Authority (2024), “Report on the digitalisation of the European insurance sector”, April.

European Securities and Markets Authority (2025), “Artificial intelligence in EU investment funds: adoption, strategies and portfolio exposures”, ESMA report on Trends, Risks and Vulnerabilities, February.

European Systemic Risk Board (2013), “Recommendation of the European Systemic Risk Board on intermediate objectives and instruments of macro-prudential policy (ESRB/2013/1)”, April.

Financial Services Authority (2005), “Final notice to Citigroup Global Markets Limited”, June.

Financial Stability Board (2024), “The financial stability implications of artificial intelligence”, November.

Foucault, T., Gambacorta, L., Jiang, W. and Vives, X. (2025), “Artificial Intelligence in Finance”, The Future of Banking, No 7, Centre for Economic Policy Research.

Gmyrek, P., Berg, J. and Bescond, D. (2023), “Generative AI and jobs: A global analysis of potential effects on job quantity and quality”, Working Paper Series, No 96, International Labour Organization.

Guillaumie, C., Loiacono, G., Winkler, C. and Kern, S. (2020), “Market impacts of circuit breakers – Evidence from EU trading venues”, ESMA Working Paper Series, No 1, European Securities and Markets Authority.

Smaga, P. (2014), “The concept of systemic risk”, Systemic Risk Center Special Paper, No 5.

Videgaray, L., Aghion, P., Caputo, B., Forrest, T., Korinek, A., Langenbucher, K., Miyamoto, H. and Wooldridge, M. (2024), “Artificial Intelligence and economic and financial policymaking”, A High-Level panel of experts’ report to the G7, December.

Footnotes

1. For wider discussion on the sources of systemic risk, see, among others, European Systemic Risk Board (2013), Smaga (2014) and Benoit et al. (2017).

2. The concentration of AI providers may also slow adoption of AI, as financial institutions may be concerned about both loss of control and the potential for providers to exert pricing power over customers.

3. The fact that these firms are likely to be outside the financial regulatory perimeter as well as operate in jurisdictions other than that of the home supervisor for the institution using them makes it challenging for authorities to monitor and influence the firms.

4. For further details, see Financial Services Authority (2005).

5. For example, there are cases where LLMs have used insider information to execute trades and hidden this behaviour when interacting with humans.

6. In the EU, the Digital Operational Resilience Act (DORA) applies to critical IT service providers, but only in relation to their cloud computing activities and with the objective of ensuring resilience. It does not contemplate systemic risks created by these service providers.

7. “Level of sophistication” requirements are like the current “fit-and-proper” framework for bank managers used by microprudential supervisors.

8. Holding providers responsible requires a regime with more legal clarity than we have today. In such a world it may be tempting for certain financial institutions (insurers) to sell protection against AI risks. It is important that authorities monitor such risk transfers to ensure that they do not become large and concentrated, creating systemic risk outside the traditional financial system.

9. Cecchetti and Schoenholtz (2024) adapt Asimov’s three laws of robotics as “1. A financial AI must never harm the financial system or allow it to be harmed through inaction; 2. A financial AI must obey human orders, except when it would conflict with the First Law; and 3. A financial AI must protect its own existence, except when it would conflict with the First and Second Laws”.

About the authors

Stephen Cecchetti

Stephen G. Cecchetti is Rosen Family Chair in International Finance at Brandeis University, Research Associate at the NBER, Research Fellow at the CEPR, and Vice Chair of the Advisory Scientific Committee of the European Systemic Risk Board. From 2008 to 2013, Cecchetti served as Economic Adviser and Head of the Monetary and Economic Department at the Bank for International Settlements in Basel, Switzerland. From 1997 to 1999 he was Director of Research at the Federal Reserve Bank of New York. In addition, he has been on the faculty of The Ohio State University and the New York University Leonard N. Stern School of Business. He holds a PhD in Economics from the University of California at Berkeley and an Honorary Doctorate in Economics from the University of Basel.

Robin L. Lumsdaine

Robin L. Lumsdaine is the Crown Prince of Bahrain Professor of International Finance at American University’s Kogod School of Business and Professor of Applied Econometrics at Erasmus School of Economics, Erasmus University Rotterdam, via a cooperative agreement between the two institutions, and a Research Associate at the NBER. In addition, she serves on the Advisory Scientific Committee of the European Systemic Risk Board and the Council of the Society for Financial Econometrics (SoFiE) and is a senior fellow at the Center for Financial Stability (CFS). She was previously an Associate Director of Banking Supervision and Regulation at the Board of Governors of the Federal Reserve System. Her recent research considers the effectiveness and interpretation of policy communication using both natural language processing and machine learning tools. She holds a PhD in Economics from Harvard University.

Tuomas Peltonen

Tuomas Peltonen has been Deputy Head of the European Systemic Risk Board Secretariat since 2015. Prior to that, he worked from 2004 in various positions at the European Central Bank, in the Directorate General Macroprudential Policy and Financial Stability, the Directorate General Market Operations and the Directorate General International and European Relations. Tuomas started his central banking career at the Bank of Finland in 1998. He received his PhD (Econ) at the European University Institute (EUI) in Florence in 2005. His research interests include financial crises, macroprudential policy and systemic risk analysis.

Antonio Sánchez Serrano

Antonio Sánchez Serrano has been Senior Lead Economist at the European Systemic Risk Board since 2015, working on systemic risk identification and assessment. Since August 2017, he has been the Secretary of the Advisory Scientific Committee of the European Systemic Risk Board. From 2001 to 2015 he worked in various positions at Banco de España (Directorate General Economics, and Directorate General Financial Regulation) and the European Central Bank (Directorate General Statistics). His research interests include systemic risk analysis, interconnectedness and banking. He holds a PhD in Accounting, Economics and Finance from the University of Portsmouth.
