This policy brief is based on the CEPS Foresight report, Building a European AI Ecosystem of Excellence. The views expressed are those of the authors and not necessarily those of the institutions the authors are affiliated with.
Abstract
CEPS conducted a foresight exercise to explore how AI could reshape Europe’s innovation systems and labour markets by 2045. Two futures integrating macro and micro drivers emerged from the participatory scenario building: one of globally coordinated, market-driven, exponential growth and automation; the other of fragmented, government-led, slower and human-centred innovation. The constraints of these futures formed the basis for backcasting the policy actions needed for European excellence in AI. This policy brief argues that such excellence will depend on the ability to govern AI legitimately, distribute its gains fairly, and evaluate its social outcomes. Along both trajectories, effective procurement, worker participation, and evidence-based adaptation will be the foundations for an innovative and inclusive AI ecosystem.
Recent shocks and disruptions – be they technological, geopolitical or epidemiological – have highlighted the need for policymakers to anticipate and prepare for change before it strikes. With this in mind, the European Commission published its “Preparedness Union Strategy” earlier in 2025.
Seeking knowledge about the future would be nothing more than witchcraft without the use of analytical tools to guide the process, especially to chart a path back to the present. Complexity must be approached with openness to leap into the unknown and with clarity of judgement, to turn it into knowledge for informed decision-making. One common analytical tool which adopts this approach – and is a key string in the Preparedness Strategy’s bow – is strategic foresight.
An exploratory rather than predictive method, strategic foresight begins with a qualitative analysis of the present to identify key drivers of change before combining them to build multiple plausible future scenarios. It then moves to backcasting – a normative approach to preparedness that describes the policy actions that would lead to desired outcomes under the constraints of the alternative futures.
Following this methodology, and as part of the CEPS project on a European Ecosystem of Excellence in AI, CEPS brought together 21 experts and stakeholders to explore the possible impact of AI on Europe’s innovation systems and labour markets by 2045 – and how to make the EU resilient to these changes.
Figure 1. The process behind the foresight workshop conducted by CEPS

The process contrasted futures along two interlinked analytical levels, which were identified through preparatory desk research. At the macro level, participants inspected AI progress under globally coordinated or fragmented governance, and under market-driven or government-driven research and innovation. At the micro level, they considered how quickly AI capabilities might progress on economically relevant tasks – following an exponential or logarithmic path – and whether these technologies would automate or augment human work.
The process continued by coupling each micro scenario with one macro scenario, resulting in a set of internally coherent futures. From this set, CEPS organisers chose two for further reflection: one depicted globally coordinated, market-driven, exponential growth and automation; the other envisaged globally fragmented, government-led, slower and human-centred innovation.
Amid the constraints of these futures, participants investigated the necessary steps to create an EU ecosystem of excellence in AI by backcasting – working back from 2045 to now to identify policy actions. This policy brief presents the key insights from this culminating step of the workshop. It offers policymakers foresight on the conditions that could either strengthen or undermine the EU’s capacity to balance innovation with democratic and social resilience.
Figure 2. The macro grid: coordinated or fragmented global governance (vertical axis)
vs. market-driven or government-driven R&D (horizontal axis)

Figure 3. The micro grid: exponential or logarithmic AI improvement (vertical axis)
vs. automated or augmented tasks (horizontal axis)

The first integrated scenario was coordinated global governance and market-driven R&D at the macro level, and exponential AI development and task automation at the micro level. The innovation ecosystem in this world is profit-driven, delivering scalable breakthroughs, while global coordination through multilateral forums establishes safety and interoperability standards as guardrails on artificial general intelligence (AGI). Large-scale task automation restructures workflows for efficiency and displaces workers. Policymakers attempt to reduce labour-market volatility through urgent reforms but struggle to keep pace with the speed of change.
The participants investigated the underlying dynamics in this alternative world, adding the following nuances to the scenario:
After the world-building, participants identified goals and policy tools for a European ecosystem of excellence under the scenario’s constraints. The conversation coalesced around a few key fields of action.
A pro-innovation regulatory environment. Governments should create conditions that allow markets to move quickly, e.g. by speeding up the regulatory approval of AI-enabled drugs. On top of this, regulations across the EU must be harmonised to create a “28th regime” of uniform laws. To reduce dependencies on a few dominant companies, governments should require open-source AI development, and multilateral forums should ensure interoperability and open technical standards. Even in a market-driven world, there is still a place for government-funded research: DARPA-style structures can fund defence-grade innovation, and a new European Research Council could be set up to cover non-industrial research, focusing on discovery and fundamental science and involving citizens.
Redistribution of wealth to tackle inequality. Universal basic income (UBI) could address inequality, possibly funded through taxes on robots or automation-related levies. It should avoid shifting inequality into disparities of tech access (i.e. the ‘digital divide’ between those with access to technology and those without), for example by ensuring a baseline number of “personal AI” possessions. Participatory democracy could complement redistribution to reduce social tensions.
Fostering active citizenship. By 2045, there will be a fundamental reclassification of professions in the labour market. Instead of emphasising “jobs”, the future might focus on civic engagement. UBI could be tied to civic participation, with mechanisms in place to avoid informal scoring, social pressure or paternalism.
Education reforms. Ideas could become the currency of a world economy that is fully automated by AI. Human values would need to be continually reaffirmed and their expression incentivised. Citizens could receive funds to reskill in areas experiencing high market demand.
The discussion on policy options surfaced an overarching concern about legitimacy and political feasibility. The issue of legitimacy cropped up around who should administer redistribution and who should set R&D priorities. Assumptions about feasibility were revealed when participants sorted the most fundamental policy proposals, like large-scale redistribution, into the “difficult and long-term” category. Robust policy tools for this scenario are available, but it is still unclear whether governments will have the capacity to execute them well – and in time.
The second integrated scenario imagined a world of fragmented global governance, government-driven R&D, logarithmic development and the augmentation of work. In this world, AI capabilities advance steadily but logarithmically – slowing rather than ‘taking off’ towards AGI. Governments, not private corporations, drive AI R&D. Global coordination is fragmented by competing national priorities and geopolitical tensions. Investment emphasises reskilling and work redesign, augmenting human capacity and service delivery rather than automating it as in the first scenario.
Bound by these conditions, participants fleshed out this 2045 world along four key dimensions.
Five key areas for action came out of the backcasting exercise, as participants thought through how to ensure a better European ecosystem of AI given the scenario described above.
Building public compute and data infrastructure. Centrally governed key infrastructure, especially compute, is vital for supporting state-led research and smaller AI companies. Public-private collaboration would be key, enabled by co-designed public policies that create conditions for companies to innovate and scale, especially skills development and capital access.
Fostering a diverse AI landscape. A core objective would be to support a rich landscape of smaller, specialised AI companies working in synergy with the public sector. Policy levers could include tax incentives, public infrastructure (notably energy) and strategic funding for burgeoning start-ups. Legislation could go beyond risk mitigation, encouraging governments to steer markets and support companies. Conditionalities such as mandatory interoperability or transparency would ensure high standards and public value.
Ensuring public trust for AI adoption in society. Deployment depends on public trust. Trust could be strengthened by wider communication on high-profile public-value uses. Integrating AI into curriculums, professional training, Erasmus+ and other applied programmes could increase uptake and trust together.
Improving public services. Participants argued for augmenting AI’s use in public services like health, safety and education. Learning by doing, while continuously evaluating AI outcomes, could help ensure inclusive deployment aligned with public-value goals; effective public services, in turn, could help build public trust in AI’s wider adoption.
Addressing demographic shifts and labour-market risks. AI’s development would couple with an ageing European population, requiring significant policy intervention around labour participation and social security funding. Early retirement schemes, voluntary working-life extensions and social security reforms might be necessary, alongside targeted reskilling for emerging occupations. Taxation could be overhauled to better capture and redistribute AI-related windfall or productivity gains.
First, the stakes could not be higher. The more disruptive scenario depicts a world entirely different from the one we know now, with profound social upheaval, fraying democratic norms, deep inequalities and perhaps the end of labour markets. Even the less disruptive scenario would cause seismic change, especially in how organisations function and distribute tasks.
During the backcasting exercise, both groups experienced moments of analytical friction that revealed how even core concepts would become unstable under scrutiny. In the high-tech, high-displacement scenario, participants had difficulty with the ‘economic incoherence’ of a world in which AGI drives production costs towards zero while profit and ownership structures persist. Redistribution proved equally problematic, raising doubts over legitimacy and whether democratic systems could ever match the pace of exponential technological change.
In the slower, mission-oriented world, friction centred on the blurred boundary between automation and augmentation. Even augmentative AI was expected to displace some jobs, with what counts as augmentation varying widely across sectors. Together, these tensions highlight that policy foresight must grapple not only with what to plan for but also with what its categories truly mean in a world likely to profoundly differ from today’s. The pressure points themselves are where future governance challenges will surface.
Across both scenarios, the key lesson is that it is wrong to see the future as a simple function of technological change. Instead, the key factors are choices made at the institutional and organisational levels. At the micro level, social and labour outcomes depend less on the technology itself than on governmental and organisational choices around job design and bargaining power. At the macro level, an ecosystem of excellence in AI turns less on specific policy instruments and more on institutional legitimacy, reform and the effective application of AI that delivers material improvements for citizens. Indeed, the shape of technological disruption itself is a result of sociopolitical decisions, such as who gets to direct – and fund – AI development. In short, the future of a European AI ecosystem is less a prediction problem than a design problem.
The full project report is available here.