
The technologies we’re not talking about
Europe’s capacity for innovation depends on unglamorous components that are quietly becoming decisive

Over the past year, corporates worldwide have entered a full-speed race to deploy artificial intelligence. The intensity is understandable. AI promises a step-change in productivity and decision-making, and early movers see the possibility of reshaping their industries while others watch from the sidelines.
Yet the present enthusiasm is only partially justified. Despite the rhetoric of imminent transformation, many fundamental challenges – technical, regulatory, organizational, and geopolitical – remain unresolved. Models are costly to train, difficult to govern, and heavily reliant on foreign hardware and cloud infrastructure. Lorenzo Diaferia recently wrote about this rush, capturing the urgency with which companies are experimenting, hiring, rethinking processes, and trying to harness AI before competitors do.
Nonetheless, it is reasonable to assume that much of this will be ironed out over time. History shows that when incentives are strong enough, technical obstacles are eventually overcome, regulations adapt, and organizations evolve. This expectation has created an unintended consequence: because people assume the hard problems within AI itself will eventually be solved, the corporate conversation has narrowed almost exclusively to applications – how to deploy AI, how to integrate it, how to scale it. And in doing so, we risk ignoring the technologies that will actually determine whether Europe can deploy AI at all.
What’s below the AI surface
The less glamorous components beneath the AI surface – semiconductors, power supply, compute capacity, data centers, network infrastructure – are quietly becoming the decisive variables for Europe’s innovation capability. These technologies do not lend themselves to spectacular demos or viral videos, but without them, the AI revolution cannot take root. In the next few years, Europe’s competitiveness will depend both on how quickly its corporates adopt AI tools and on whether the continent can build and secure the physical and digital foundations required to support them.
The first challenge is an intensifying bottleneck in computing capacity. Advanced GPUs and accelerators have become both scarce and extraordinarily expensive, as global demand continues to outstrip supply. European firms face not only higher cloud bills, but also the deeper vulnerability of relying on non-European infrastructure for critical operations. This is not a mere cost issue; it is a structural dependency that gives foreign providers de facto influence over Europe’s pace of innovation.
Even when hardware is available, Europe confronts another obstacle: the capacity of its data centers. The region is constrained by energy availability, grid limitations, stringent environmental regulation, and lengthy permitting processes. Building or expanding data centers can take years. While the United States and parts of Asia are accelerating large-scale buildouts, many European facilities are already operating near their limits, and new projects are increasingly hitting political or regulatory resistance. This means that Europe’s AI ambitions may soon outgrow the very spaces intended to host them.
Compounding these challenges is the state of Europe’s network infrastructure. AI-driven applications are not simply computationally demanding; many require low-latency, high-bandwidth connections to function effectively. Yet Europe still suffers from uneven fiber deployment, comparatively slow 5G rollout, and a patchwork of regulatory frameworks that discourage large, coordinated investment. As industries shift toward real-time AI services (such as autonomous logistics), the continent’s digital backbone begins to look less like modern infrastructure and more like a constraint. Europe cannot build an AI economy on networks designed for an era of streaming video.
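To see why latency, and not just bandwidth, is the binding constraint, a simple back-of-the-envelope sketch helps. The Python snippet below uses purely illustrative round-trip times, inference times and deadlines (assumed placeholder numbers, not measured figures) to show when a remotely hosted model can still close a real-time control loop.

```python
# Illustrative latency budget for a real-time AI control loop.
# All round-trip times, inference times and deadlines are hypothetical
# placeholder values chosen for illustration, not measurements.

def end_to_end_latency_ms(network_rtt_ms: float, inference_ms: float,
                          overhead_ms: float = 2.0) -> float:
    """Send sensor data to a remote model, run inference, return a decision."""
    return network_rtt_ms + inference_ms + overhead_ms

DEADLINE_MS = 100.0  # assumed reaction deadline, e.g. for autonomous logistics

scenarios = {
    "nearby edge node (~5 ms RTT)": 5.0,
    "in-region cloud (~25 ms RTT)": 25.0,
    "overseas cloud region (~120 ms RTT)": 120.0,
}

for name, rtt in scenarios.items():
    total = end_to_end_latency_ms(network_rtt_ms=rtt, inference_ms=30.0)
    verdict = "meets" if total <= DEADLINE_MS else "misses"
    print(f"{name}: {total:.0f} ms total -> {verdict} the {DEADLINE_MS:.0f} ms deadline")
```

The arithmetic is deliberately trivial; the strategic point is that uneven fiber coverage and slow 5G rollout erode exactly this budget.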
This convergence of computing scarcity, data center limitations, and network constraints is creating a new form of dependency. Increasingly, the “computer” that companies and individuals rely on is no longer the device on the desk or the server in the office, but an enormous remote cluster controlled by a handful of global players. And those who own the computing capacity will shape the rules of the game.
For European corporates, this shift raises strategic questions that extend far beyond IT. What does it mean to rely on foreign infrastructure for mission-critical operations? How vulnerable does this make companies to pricing shocks, service disruptions, political pressure, or export controls? The recent chip cold war between China and the Netherlands – centered on ASML’s lithography machines – illustrates that technological infrastructure is now a geopolitical asset. Europe cannot afford to treat these dependencies as trivial.
The technologies usually ignored in public debate therefore deserve far more attention, because they may offer Europe a way to reshape the underlying economics of AI. In the 12th edition of our HIT Radar, we examine several emerging systems that could play a decisive role in Europe’s ability to build a resilient, competitive digital ecosystem.
Looking at the emerging alternatives
Europe’s most commercially aligned deep-tech story right now is photonic computing – the use of light instead of electrons to perform mathematical operations. For businesses, the implications could be very tangible: if photonic accelerators deliver on their promises, they could reshape the economics of cloud computing, offering faster computation with dramatically lower power consumption. Germany’s Q.ANT and the Dutch ecosystem around PhotonDelta have become focal points for this movement, trying to build supply chains that could position Europe as an indispensable player in next-generation AI infrastructure. What makes this especially significant is that Europe already has a deep legacy in optics and semiconductor equipment: photonic computing fits directly into industrial strengths the continent has been cultivating for decades. Several European pilot lines have demonstrated the potential for photonic accelerators to deliver far greater energy efficiency for AI workloads than electronics alone. Yet the bottleneck is the absence of a scalable industrial pathway: photonic chip design remains fragmented, fabrication capacity is limited, and the continent still lacks the equivalent of a TSMC-class foundry prepared to mass-manufacture photonic processors. Unless Europe can bridge this gap between research excellence and industrial scale, photonics may become another field in which breakthroughs are exported rather than commercialized at home.
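For readers who want a concrete intuition, the sketch below is a plain numerical model in Python (not a simulation of any real photonic chip) of the operation these accelerators target: the matrix-vector multiplications at the heart of AI workloads, which a photonic mesh performs as the light propagates, with detectors summing signals that have already been scaled along each path.

```python
# Conceptual sketch of the operation photonic accelerators target.
# An ordinary numerical model, not a simulation of real photonic hardware;
# real designs also need tricks (e.g. for negative weights) that are omitted here.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)        # input activations, conceptually encoded as optical signals
W = rng.random((3, 4))   # weights, conceptually encoded as path transmissions

# On an electronic chip, y = W @ x is built from many sequential multiply-accumulates.
y_electronic = W @ x

# In a photonic mesh, each output detector sums the light arriving from every input
# channel, already scaled by its path - the multiply-accumulate happens as the light
# travels, which is where the claimed speed and energy advantages come from.
y_detector_model = np.array([sum(W[i, j] * x[j] for j in range(x.size))
                             for i in range(W.shape[0])])

assert np.allclose(y_electronic, y_detector_model)
```

The mathematics is identical either way; what changes is the physical cost of carrying it out, which is precisely the economics at stake in the paragraph above.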
A second frontier where Europe has made important contributions is neuromorphic computing. Neuromorphic chips attempt to mimic the structure and energy efficiency of the human brain: this architecture allows for computation at extremely low power, making it ideal for the millions of sensors, robots and intelligent devices expected to populate the next generation of factories, cities and supply chains. Europe is home to two of the world’s most advanced neuromorphic systems – BrainScaleS in Heidelberg and SpiNNaker in Manchester. These platforms were initially designed for neuroscience research, but they have evolved into experimental computing environments that can run ultra-fast, energy-efficient neural networks. Although still largely confined to laboratories, they offer glimpses of what future edge AI hardware could look like.
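To make the contrast with conventional deep learning concrete, the sketch below simulates a single leaky integrate-and-fire neuron, the basic building block of most spiking neuromorphic systems. The parameters are generic textbook defaults chosen for illustration, not values taken from BrainScaleS or SpiNNaker.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking
# neuromorphic systems. Parameters are illustrative, not from any specific chip.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate the input over time; emit a spike (1) whenever the membrane
    potential crosses the threshold, then reset. Returns the spike train."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: the potential decays toward rest while accumulating input.
        v += (-(v - v_rest) + i_t) * (dt / tau)
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset  # fire, then reset
        else:
            spikes.append(0)
    return np.array(spikes)

# Weak input never crosses the threshold; stronger input yields sparse, event-driven
# output. Work (and energy) is spent only when spikes occur - the key contrast with
# dense matrix arithmetic on a GPU.
print("spikes (weak input):  ", simulate_lif(np.full(200, 0.3)).sum())
print("spikes (strong input):", simulate_lif(np.full(200, 1.5)).sum())
```

This event-driven sparsity is what makes the architecture attractive for sensors and edge devices, and also why it does not map neatly onto today’s mainstream software stacks.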
Neuromorphic computing is not yet a field that executives need to operationalize, but it is one worth watching quietly. As industries move toward pervasive automation and intelligent edge systems, the cost of running AI within tight power budgets will become a strategic constraint. Neuromorphic systems, however, currently face two significant bottlenecks: they are not yet compatible with mainstream programming frameworks, and they require specialized algorithms that differ from those used in today’s dominant deep-learning models. As long as corporate AI ecosystems remain optimized for GPU-centric architectures, neuromorphic solutions will struggle to move beyond the lab. Yet if neuromorphic technologies do escape the lab – and Europe is arguably the region most likely to make that happen – they could become the enabling layer for real-time robotics, autonomous machines and next-generation IoT platforms. Europe must therefore develop the software tooling, industrial testbeds and commercial partnerships required to translate a scientific lead into meaningful market impact.
Connectivity tells a similar story of both promise and constraint. Europe has been active in defining the evolution of 5G Advanced, with strong involvement from Nokia, Ericsson, and multiple national research institutions in shaping standards and developing industrial pilots. These technologies offer capabilities – such as ultra-reliable low-latency communication and network slicing – that are essential for real-time AI and its future applications, starting with autonomous driving. But progress remains uneven. Deployment is slowed by fragmented spectrum policy, inconsistent investment incentives and, crucially, the financial weakness of Europe’s telecom operators. Without stronger coordination and more predictable returns on infrastructure spending, 5G Advanced risks becoming a patchwork rather than a foundation for Europe’s industrial AI future.
The same dynamics are even more pronounced in the early moves toward 6G, where Europe has again shown scientific ambition. Unlike the jump from 4G to 5G, 6G will not be primarily about faster download speeds. Its ambitions are more systemic: hyper-efficient industrial networks, continent-wide digital twins, and AI-powered network management capable of predicting and adjusting to demand in real time. Europe’s involvement here is not incidental: initiatives like Hexa-X and Hexa-X-II, led by European industry players and funded by the European Commission, aim to position the region at the forefront of global 6G standard-setting. Research centers from Oulu to Dresden are exploring key components of 6G, such as sub-THz communication. Yet this scientific strength masks two structural vulnerabilities. First, Europe lacks the scale of investment seen in the U.S. and Asia, where 6G programs are tied to broader industrial strategies and defense initiatives. Second, there is a growing disconnect between research and the telecom operators who will ultimately deploy these technologies; many operators remain financially constrained and risk-averse after years of squeezed margins. This raises a critical question: even if Europe designs the future of connectivity, will it be able to build it?
Final thoughts
Together, these emerging technologies illustrate the paradox Europe now faces. The continent possesses world-class research capabilities and active participation in next-generation standards. Yet it struggles to convert leadership in discovery into leadership in deployment. Photonic and neuromorphic computing offer pathways to reduce Europe’s reliance on energy-intensive, GPU-dominated architectures, but both require industrial scaling and closer alignment between research and corporate demand. Similarly, 5G Advanced and 6G could form the nervous system of Europe’s future digital economy, but only if regulatory fragmentation, investment hurdles and telecom-sector weakness are addressed.
Europe’s ability to compete in the AI age will depend not only on how quickly (and how well) firms adopt new tools, but on whether the continent can seize these emerging technologies and embed them into a coherent, sovereign digital infrastructure. The foundations for such a strategy exist. The question now is whether Europe will act on them – or watch others do so first.


