
The most recent empirical evidence dismantles one of the dominant narratives about artificial intelligence: the problem is not that AI “doesn’t work,” but that organizations fail to turn it into systemic value.
The PwC Global CEO Survey 2025–2026 finds that more than half of CEOs have yet to observe tangible economic benefits from AI adoption, and only a minority report simultaneous improvements in costs and revenues. What separates the two groups is not investment in advanced models, but the degree to which AI is integrated into core processes and corporate strategy.
This finding is consistent with the McKinsey Global State of AI 2025: AI use is now widespread across most firms, yet scalability and enterprise-level impact remain limited. AI is adopted, but rarely institutionalized.
The most recent experimental studies confirm that AI can increase productivity, but unevenly, and not to everyone's equal benefit. A large-scale field experiment in e-commerce (2025) shows that introducing generative AI tools produces measurable gains in sales and productivity, but with strong heterogeneity: the benefits are most pronounced for less experienced operators and less mature segments.
Similarly, a randomized longitudinal study of 6,000 knowledge workers (2025) documents significant reductions in time spent on email and routine tasks, but weaker effects on activities requiring coordination and complex decision-making. AI accelerates individual work; it struggles to transform collective and organizational work.
But the evidence is not uniform. A recent randomized controlled study of experienced software developers (METR, 2025) finds a counterintuitive result: AI use can slow experienced developers down, increasing the time required to complete complex tasks. AI does not automatically replace expertise; at times, it interferes with it.
The empirical message is that AI is not a neutral multiplier of performance. If productivity effects are heterogeneous, governance becomes the truly critical variable.
The PwC Responsible AI Survey 2025 shows that firms with structured Responsible AI frameworks (model monitoring, use-case inventories, cross-functional accountability) achieve better economic outcomes and a measurable reduction in operational risk.
Similarly, the McKinsey Global AI Trust Maturity Survey 2025 indicates that most organizations still sit at intermediate or low levels of AI governance maturity, with significant gaps in data governance, risk indicators, and incident response. Here a structural point emerges: AI adoption is moving faster than the capacity to govern it. The risk is not only operational; it is institutional.
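What such a governance structure looks like in practice can be sketched concretely. The snippet below is a purely illustrative sketch of a use-case inventory of the kind these surveys describe: every name, field, and review interval here is hypothetical, not drawn from the PwC or McKinsey frameworks. The idea is simply that each AI deployment is registered with a named accountable owner, a risk tier, a monitoring flag, and an auditable review trail.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory (hypothetical schema, for illustration)."""
    name: str                        # e.g. "customer-email triage"
    accountable_owner: str           # a named person, not a team: who answers for the outputs
    risk_tier: RiskTier              # drives review frequency and required controls
    data_sources: list[str]          # what the system sees; basis for bias and privacy review
    monitored: bool = False          # is output quality tracked in production?
    last_review: date | None = None  # last governance review, if any
    incidents: list[str] = field(default_factory=list)  # incident log for traceability

    def needs_review(self, today: date) -> bool:
        """Example policy: high-risk use cases reviewed every 90 days, others yearly."""
        if self.last_review is None:
            return True
        interval = 90 if self.risk_tier is RiskTier.HIGH else 365
        return (today - self.last_review).days > interval


# Registering a use case and flagging overdue reviews
inventory = [
    AIUseCase(
        name="customer-email triage",
        accountable_owner="j.doe",
        risk_tier=RiskTier.HIGH,
        data_sources=["crm_tickets"],
        monitored=True,
        last_review=date(2025, 1, 15),
    ),
]
overdue = [uc.name for uc in inventory if uc.needs_review(date(2025, 6, 1))]
print(overdue)  # ['customer-email triage']: the 90-day review window has passed
```

The point of such a registry is not technical sophistication but precisely what the surveys measure: every algorithmic decision path has a named owner, a documented data footprint, and a review trail that can be audited.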
Further empirical evidence reinforces this diagnosis. The Global Trust in AI Report 2025 (University of Melbourne – KPMG) documents that more than half of workers use AI without disclosing it to their employers, often uploading sensitive data and failing to verify the accuracy of outputs. This produces a crisis of responsibility attribution:
- Who is accountable for AI-assisted decisions?
- Who certifies output quality?
- Who controls data and bias?
The issue is constitutional: it concerns the ownership of decision-making power and the traceability of choices in environments increasingly mediated by algorithmic systems. The most recent evidence suggests that AI is becoming an infrastructure for coordination and the exercise of power, not merely a tool for efficiency. It reshapes information flows, redefines who decides, anticipates or replaces human judgment, and entrenches operational practices that end up acquiring implicit normative force.
When explicit governance is absent, an “algorithmic material constitution” takes shape: a set of rules never deliberately designed, but produced by the accumulation of automated routines, operational shortcuts, and technological dependencies.
The empirical evidence converges on a single point: the value of AI depends not on the power of the models, but on the quality of the institutions that govern them. Without accountability architectures, impact controls, and clear attribution of responsibility, AI does not strengthen organizations. It reorganizes power within them without declaring it.



