The entry of large technology companies (big techs) into finance promises efficiency gains and greater financial inclusion. At the same time, it introduces new risks associated with market power and data privacy. The nature of the new trade-off between efficiency and privacy will depend on societal preferences and will vary across jurisdictions. This increases the need to coordinate policies at both the domestic and international levels.
The business model of large technology companies (big techs) such as Alibaba, Amazon, Facebook, Google and Tencent rests on enabling direct interactions among a large number of users on digital platforms. An essential by-product is their large stock of user data, which they use to offer a wide range of services and exploit natural network effects, generating further user activity. Increased user activity completes the circle, as it generates yet more data. Building on the self-reinforcing nature of the data-network-activities (DNA) feedback loop, some big techs have ventured into financial services, including payments, money management, lending and insurance. Big techs’ foray into finance raises both opportunities and risks (Boissay et al (2021)).
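The self-reinforcing quality of the loop can be seen in a deliberately minimal simulation, a sketch of the mechanism only (all parameter values and functional forms are our illustrative assumptions, not estimates of any platform’s dynamics):

```python
# Deliberately minimal simulation of the data-network-activities (DNA)
# feedback loop. All parameter values and functional forms are
# illustrative assumptions, not estimates of any platform's dynamics.

def simulate_dna_loop(periods=10, users=1.0, data=0.0,
                      network_pull=0.05, data_gain=0.10, adoption=0.02):
    """More users -> more activity (network effects) -> more data ->
    better services -> more users. Returns the user trajectory."""
    trajectory = [users]
    for _ in range(periods):
        activity = users * (1 + network_pull * users)  # network externalities
        data += activity                               # activity generates data
        quality = 1 + data_gain * data                 # data improves services
        users *= 1 + adoption * quality                # better services attract users
        trajectory.append(users)
    return trajectory

# User growth accelerates: each pass through the loop feeds the next.
print(simulate_dna_loop())
```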
Big techs’ business models are best described as online “multi-sided platforms” (MSPs): they create value by enabling and catalysing direct interactions between two or more groups of users (eg buyers and sellers). The three main types of online platform are social networks, e-commerce platforms and search engines.
The interaction of users on these platforms creates network externalities: the more users interact, the more attractive the platform becomes. Other industries (eg telecommunications, credit card payment networks) also feature network externalities. But big techs’ online-focused business models allow them to reach dominant market positions at unprecedented speed. Petralia et al (2019), for example, report that social networks such as Facebook or Tencent’s WeChat took less than five years to reach 50 million users (see also Graph 1, left-hand panel). In terms of user numbers, these firms are much larger, and have grown much faster, than any financial firm (Nguyen Trieu (2017)). Further, the systematic accumulation of user data and new ways of analysing it (eg artificial intelligence techniques such as machine learning) allow them to exploit these network externalities highly effectively.
Financial services both benefit from and fuel the DNA feedback loop. Offering financial services can complement and reinforce big techs’ commercial activities. The typical example is payment services, which facilitate secure transactions on e-commerce platforms or make it possible to send money to other users on social media platforms. Payment transactions also generate data detailing the network of links between fund senders and recipients. These data can be used both to enhance existing services (eg targeted advertising) and to offer new financial services, such as credit scoring.
Big techs may have a competitive advantage over banks and serve firms and households that would otherwise remain unbanked (Graph 1, right-hand panel).2 This is because they can tap different but relevant information from their digital platforms.3 For example, two large big techs in emerging markets, Ant Financial and Mercado Libre, claim that their credit quality assessment and lending decisions typically involve more than 1,000 data series per loan applicant. There is evidence that the advent of fintech and big tech lenders and their use of alternative data have been a boon for borrowers who are unserved or underserved by banks. In China, for example, the major platforms have provided access to credit for hundreds of millions of new personal and business borrowers. Globally, big tech credit exceeded $500 billion in 2020 (Cornelli et al (2020)).
The sheer amount of data collected by big techs, and its intelligent use, has the potential to reduce financial frictions, in particular in borrower screening, monitoring and collateral requirements. Frost et al (2019) suggest that, when applied to small vendors, big techs’ credit scoring outperforms models based on credit bureau ratings and traditional borrower characteristics. MYbank (part of the Ant Financial group) uses network analysis of transactions to evaluate whether an entrepreneur separates personal funds from business funds, one of the basic principles of good business conduct.
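To make the idea concrete, the sketch below computes one such alternative-data signal from a toy transaction list. It illustrates the general technique only; it is not MYbank’s actual model, and the account names and scoring rule are assumptions:

```python
# Hypothetical sketch of one alternative-data signal: does a loan
# applicant keep business and personal funds separate? An illustration
# of the general technique only, not MYbank's actual methodology; the
# account labels and the scoring rule are assumptions.
from collections import defaultdict

def fund_separation_score(transactions, business_accounts):
    """transactions: iterable of (from_account, to_account, amount).
    Returns the share of outflows from business accounts that stay
    within business accounts (closer to 1 = cleaner separation)."""
    flows = defaultdict(float)
    for src, dst, amount in transactions:
        src_type = "biz" if src in business_accounts else "personal"
        dst_type = "biz" if dst in business_accounts else "personal"
        flows[(src_type, dst_type)] += amount
    outflows = flows[("biz", "biz")] + flows[("biz", "personal")]
    return flows[("biz", "biz")] / outflows if outflows else None

txns = [("shop", "supplier", 900.0), ("shop", "owner_personal", 100.0)]
print(fund_separation_score(txns, {"shop", "supplier"}))  # 0.9
```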
Big techs can also ease collateral and documentation requirements that many smaller borrowers may not be able to fulfil, as anecdotal evidence from Argentina and China suggests.
The very features that bring benefits also have the potential to generate new risks and costs associated with market power. Platforms can exploit their market power and network externalities to increase user switching costs, exclude potential competitors, and consolidate their position by raising barriers to entry.
Platforms now often serve as essential selling infrastructures for financial services providers, while at the same time big techs compete with those same providers. When a network operator owns a smartphone-based payment system, for example, it can undermine competitors’ access to its digital platform by charging them (eg banks or rival big techs) high fees to connect with its payment system. Once a captive ecosystem is established, potential competitors may face steep costs and high risks in setting up rival platforms.
Another major risk is the monopolistic use of data. One special aspect of data as an input to production is non-rivalry: data can be used many times over, and by any number of firms simultaneously, without being depleted. Thanks to non-rivalry, data generate increasing returns to both scale and scope (Farboodi et al (2019)).
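A stylised production function, our own illustration of the non-rivalry argument rather than a formula taken from Farboodi et al (2019), makes the point:

```latex
% Stylised illustration of the non-rivalry argument (our assumption,
% not a formula from Farboodi et al (2019)). Let d be a data stock
% usable by every firm at once, and \ell_i a rival input such as labour:
\[
  y_i = d^{\theta}\,\ell_i, \qquad \theta > 0.
\]
% Returns are constant in the rival input alone, but scaling both
% inputs by \lambda > 1 gives
\[
  (\lambda d)^{\theta}(\lambda \ell_i) = \lambda^{1+\theta} y_i > \lambda y_i,
\]
% ie increasing returns at the social level: the same data stock serves
% every firm simultaneously without being depleted.
```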
Digital monopolies’ use of individual data can also entail discrimination among customers. One example is price discrimination: data may help not only to assess a potential borrower’s creditworthiness, but also to identify and charge the highest rate that borrower would be willing to pay for a loan (ie their individual “reservation price”). Another example is unintended unethical discrimination. Over time, the algorithms used to process data may develop biases (eg based on race or religion), leading to greater inequality (O’Neil (2016)). One study of the US mortgage market, for instance, finds that Black and Hispanic borrowers are less likely than White or Asian borrowers to benefit from the lower interest rates generated by machine learning-based credit scoring models (Fuster et al (2019)).
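A toy calculation illustrates the reservation-price logic above (all names and numbers are hypothetical assumptions):

```python
# Toy illustration of first-degree price discrimination in lending.
# All names, rates and the inferred "reservation rates" are
# hypothetical assumptions for exposition only.

uniform_rate = 0.08  # single loan rate posted to all applicants

# Highest rate each borrower would accept, assumed here to be
# inferred from the platform's user data.
reservation_rates = {"ana": 0.07, "ben": 0.10, "chen": 0.12}

# Uniform pricing: borrowers with reservation rates below the posted
# rate drop out; the others borrow and keep some surplus.
uniform_revenue = sum(uniform_rate for r in reservation_rates.values()
                      if r >= uniform_rate)

# Perfect discrimination: each borrower is quoted exactly their
# reservation rate, so the lender captures the entire surplus.
discriminating_revenue = sum(reservation_rates.values())

print(round(uniform_revenue, 2), round(discriminating_revenue, 2))  # 0.16 0.29
```

Even this toy case shows the two-sided effect: discrimination serves ana, who would be priced out at the uniform rate, but it leaves every borrower with no surplus.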
When information is gathered without the informed consent of the consumer, it often infringes on personal privacy. Popular health websites have been found to share people’s sensitive data (eg medical symptoms, diagnoses, drug names) with dozens of companies around the world, including big tech firms such as Google, Amazon, and Facebook (Financial Times (2019)). These risks are still greater when firms underinvest in data security, leading to data breaches (Carrière-Swallow and Haksar (2019)).
The benefits and costs of the use of personal data in finance raise important policy questions. These go beyond the traditional ones of financial stability and competition, extending also to a new trade-off between data efficiency and privacy.4
One challenge relates to assigning the control and ownership of personal data. Control and ownership of data are rarely clearly defined. In many countries, the default outcome is that financial institutions or big techs acquire customer data at very low cost and keep de facto control.
Another challenge relates to the value of privacy, and whether privacy should be traded off against other goals in the first place. Some argue that data privacy has the attributes of a fundamental right, which cannot be traded off against economic benefits.5 Evidence suggests that cultural preferences towards data privacy differ across jurisdictions, and even between different social segments (Chen et al (2021)).
There are at least three, potentially complementary, approaches to address the efficiency-privacy trade-off raised by the widespread use of personal data: i) restricting the processing of user data; ii) giving consumers greater control over their personal data; and iii) providing a set of public infrastructures on which a layer of services can be built.
In several jurisdictions, data protection laws have clarified the rules on data collection and use so as to protect personally identifiable information (eg in Brazil, California, the European Union (EU), Japan and Singapore). The challenge for these laws is how to balance privacy concerns against the benefits of using data. Some jurisdictions have taken measures with a wider ambit that may restrict data flows across borders.6 While such measures support law enforcement as well as monitoring and supervision, the resulting frictions in the use of data could lead to cost inefficiencies, limiting data’s potential benefits.7
Open banking initiatives (eg in Australia, the EU and Mexico) are examples of concrete policy actions to give consumers greater control of their data. Open banking rules selectively restrict the range of data that can be transmitted (eg financial transaction data), as well as the type of institutions among which such data can be shared (eg accredited deposit-taking institutions).
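In practical terms, such rules amount to an allow-list over data categories, recipient institutions and customer consent. The sketch below illustrates the idea; the category names, institution identifiers and consent representation are hypothetical assumptions, not any jurisdiction’s actual specification:

```python
# Hypothetical sketch of an open banking sharing rule as an allow-list
# check. Category names, institution identifiers and the consent
# representation are illustrative assumptions, not any jurisdiction's
# actual schema.

PERMITTED_CATEGORIES = {"transactions", "account_balances"}
ACCREDITED_RECIPIENTS = {"bank_a", "adi_b"}  # eg accredited deposit-takers

def may_share(category: str, recipient: str, consents: set) -> bool:
    """Share only if the data category is permitted, the recipient is
    accredited, and the customer consented to this exact pairing."""
    return (category in PERMITTED_CATEGORIES
            and recipient in ACCREDITED_RECIPIENTS
            and (category, recipient) in consents)

consents = {("transactions", "bank_a")}
print(may_share("transactions", "bank_a", consents))      # True
print(may_share("browsing_history", "bank_a", consents))  # False
```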
Public infrastructures on which a layer of services can be built include foundations for digital services such as digital identity (Aadhaar in India or MyInfo in Singapore) and the development of data management protocols. Once these infrastructures are in place, payments, digital government services and a host of other solutions become possible.
Defining and regulating the use of data are issues that need to be coordinated at both the domestic and international levels.
At the domestic level, central banks and financial regulators may need to upgrade their understanding of personal data issues, and to coordinate with competition and data protection authorities. There is also scope to address the policy challenges of big techs by developing specific entity-based rules (Carstens et al (2021), Crisanto et al (2021)). Elements of such an approach are already taking root in several key jurisdictions, notably China, the EU and the United States.
At the international level, regulations on the use of personal data diverge widely. In the EU, the General Data Protection Regulation (GDPR) assigns data rights to individuals. In the United States, a patchwork of sector-specific legislation means that, in practice, companies have relatively free access to data. Meanwhile, only a few countries have a national data or artificial intelligence strategy.8 As the digital economy expands across borders, there is a need for international cooperation on rules and standards.
Aaronson, S A, “What are we talking about when we talk about digital protectionism?”, World Trade Review, vol 18, no 4, October 2019.
Bank for International Settlements (BIS), “Big tech in finance: opportunities and risks”, Annual Economic Report 2019, Chapter III, June 2019.
Boissay, F, T Ehlers, L Gambacorta and H S Shin, “Big techs in finance: on the new nexus between data privacy and competition”, BIS Working Papers, no 970, October 2021.
Carrière-Swallow, Y and V Haksar, “The economics and implications of data: an integrated perspective”, IMF Departmental Papers, vol 19, no 16, 2019.
Carstens, A, S Claessens, F Restoy and H S Shin, “Regulating big techs in finance”, BIS Bulletin, No 45, 2021.
Chen, S, S Doerr, J Frost, L Gambacorta and H S Shin, “The fintech gender gap”, BIS Working Papers, no 931, March 2021.
Cornelli, G, J Frost, L Gambacorta, R Rau, R Wardrop and T Ziegler, “Fintech and big tech credit: a new database”, BIS Working Papers, no 887, September 2020.
Crisanto, J C, J Ehrentraud, A Lawson and F Restoy, “Big tech regulation: what is going on?”, FSI Insights on policy implementation, no 36, September 2021.
Croxson, K, J Frost, L Gambacorta and T Valletti, “Platform-based business models and financial inclusion”, BIS Working Papers, forthcoming.
Cyberspace Administration of China, “Security Assessment of Cross-border Transfer of Personal Information”, June 2019.
Farboodi, M, R Mihet, T Philippon and L Veldkamp, “Big data and firm dynamics”, NBER Working Papers, no 25515, January 2019.
Financial Times, “How top health websites are sharing sensitive data with advertisers”, 13 November 2019, www.ft.com/content/0fbf4d8e-022b-11ea-be59-e49b2a136b8d.
Frost, J, L Gambacorta, Y Huang, H S Shin and P Zbinden, “BigTech and the changing structure of financial intermediation”, BIS Working Papers, no 779, April 2019.
Fuster, A, P Goldsmith-Pinkham, T Ramadorai and A Walther, “The effect of machine learning on credit markets”, VoxEU, 11 January 2019.
Hau, H, Y Huang, H Shan and Z Sheng, “Fintech credit, financial inclusion and entrepreneurial growth”, mimeo, 2018.
Huang, Y, C Lin, Z Sheng and L Wei, “Fintech credit and service quality”, mimeo, 2018.
Mitchell, D and N Mishra, “Regulating cross-border data flows in a data-driven world: how WTO law can contribute”, Journal of International Economic Law, vol 22, no 3, September 2019.
Nguyen Trieu, H, “Why finance is becoming hyperscalable”, Disruptive Finance, 24 July 2017.
O’Neil, C, Weapons of math destruction: how big data increases inequality and threatens democracy, Broadway Books, 2016.
Petralia, K, T Philippon, T Rice and N Veron, “Banking disrupted? Financial intermediation in an era of transformational technology”, Geneva Reports on the World Economy, no 22, 2019.
All authors are from the Bank for International Settlements (BIS). Torsten Ehlers was on secondment to the International Monetary Fund (IMF) at the time of publication. The views in this paper are those of the authors only and do not necessarily reflect those of the BIS or the IMF.
More generally, big techs’ market penetration rate tends to be higher in areas where banks are absent or their branch networks sparser.
See Hau et al (2018) and Huang et al (2018) for China, and Croxson et al (2021) for a global overview.
See BIS (2019) and Petralia et al (2019).
For a discussion of data rights in Europe, and of the grounding of such rights in eg the EU Charter of Fundamental Rights, see BEUC (2019).
One example is China (see Cyberspace Administration of China (2019)).
According to Aaronson (2019), 58% of the countries in the world have now adopted or are adopting data protection laws. Many such laws contain provisions affecting cross-border data flows. It is still too early to assess whether such laws are effective in addressing risks (see also Mitchell and Mishra (2019)).