
AI governance and new regulations: how corporate liability is changing

The most recent legislative measures, at both the national and European levels, have progressively defined a reference framework designed to support companies in the responsible management of AI, providing criteria and principles to guide its integration into organizational processes. While the structural and ubiquitous adoption of the technology in production and decision-making processes is undeniable, the real challenge today lies in its governability: the paradigm shift needed to foster, within organizations, a cultural and managerial evolution capable of ensuring oversight, transparency, and traceability of algorithmic tools. Attention cannot be limited to formal compliance; it must extend to the construction of governance frameworks commensurate with the complexity of the ongoing transformation, treating accountability grounded in business ethics, the quality of decision-making, and the protection of stakeholders as central elements of corporate competitiveness and sustainability.
Law No. 132/2025, read in conjunction with Regulation (EU) 2024/1689 (the AI Act), the Digital Omnibus Package, and Legislative Decree 231/2001, defines the regulatory framework within which to move definitively beyond compliance understood as a formal obligation and to inaugurate a phase of substantive governability of automated processes. The use of artificial intelligence systems thus shifts from a neutral technological variable to a structural factor of legal, social, and reputational risk that directly affects a company’s organizational structure and the criteria for attributing liability to the entity. This entails a profound transformation of the 231 model (the organizational and compliance framework adopted by Italian companies under Legislative Decree 231/2001 to prevent certain crimes and limit corporate liability), which is called upon to develop an effective capacity to govern algorithmic systems throughout their entire life cycle. The very notion of the “adequacy” of the organizational model takes on, in this context, a dynamic and technological character: an adequate model is one that enables the organization to understand, oversee, and direct the use of artificial intelligence, integrating it consciously into decision-making processes and control systems. From this perspective, AI governance naturally intertwines with corporate sustainability and with renewed attention to business ethics, understood as a responsibility involving not only the company but the entire supply chain.
Law No. 132/2025 has outlined the national framework for AI governance, affecting the criminal law system and providing a legislative delegation to adapt criminal offenses to new technological risks. At the same time, the AI Act has introduced a regulatory model built on a risk-based approach, with graduated obligations and enhanced safeguards for high-risk systems, while the Digital Omnibus intervenes in the temporal and operational coordination of these disciplines, influencing corporate compliance planning. The result is a multilevel regulatory framework that requires a systemic and integrated reading, as the various sources converge in defining new organizational standards and new criteria of corporate liability.
It is within this context that Article 26 of Law No. 132/2025 intervenes, introducing a general aggravating circumstance for crimes committed through AI systems, as well as specific aggravating circumstances for certain offenses, including market rigging and market manipulation. The provision directly affects the scope of predicate offenses under Legislative Decree 231/2001, potentially expanding their relevance whenever the unlawful act is carried out or facilitated by algorithmic tools.
The use of AI may, in fact, intersect with cybercrimes, market abuse, money laundering offenses, and copyright violations, as well as corporate, environmental, or public administration offenses, wherever the algorithm constitutes the means of committing or facilitating the conduct.
The effect on the overall structure of the system is significant: the area of crime risk tends to expand, and the threshold of risk deemed acceptable is recalibrated in light of the greater social danger that the use of AI may entail. Consequently, the risk assessment pursuant to Legislative Decree 231/2001 must keep pace with this evolution, incorporating a careful mapping of AI use cases and a precise evaluation of the risk profiles connected to their actual deployment.
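By way of illustration only, the mapping exercise just described can be sketched as a minimal register of AI use cases. The Python fragment below is a hypothetical schema: the field names, the three-step risk scale, and the escalation rule are assumptions made for the example, not elements prescribed by Legislative Decree 231/2001 or by Law No. 132/2025.

from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    # Illustrative grading; a real 231 risk assessment applies the
    # entity's own methodology, not a fixed three-step scale.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class AIUseCase:
    # One entry in a hypothetical register of AI use cases, mirroring
    # what the text says the assessment should capture: where AI is
    # used, which predicate offenses it could touch, and the controls
    # that keep residual risk acceptable.
    name: str                         # e.g. "algorithmic trading engine"
    business_process: str             # process in which the system operates
    predicate_offenses: list[str]     # 231 predicate offenses potentially involved
    inherent_risk: RiskLevel          # risk before controls
    controls: list[str] = field(default_factory=list)
    residual_risk: RiskLevel = RiskLevel.HIGH

    def needs_escalation(self) -> bool:
        # Hypothetical rule: a use case whose residual risk remains HIGH
        # is flagged for review by the supervisory body.
        return self.residual_risk is RiskLevel.HIGH


# Example entry echoing the aggravated market-abuse scenario above.
trading_engine = AIUseCase(
    name="algorithmic trading engine",
    business_process="proprietary trading",
    predicate_offenses=["market manipulation"],
    inherent_risk=RiskLevel.HIGH,
    controls=["pre-trade limits", "human review of anomalous orders",
              "full order audit trail"],
    residual_risk=RiskLevel.MEDIUM,
)

assert not trading_engine.needs_escalation()

Whatever form the register takes, the point is that each use case becomes a traceable object of assessment rather than an undifferentiated “use of AI.”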
Also relevant is Article 86 of the AI Act, which recognizes, in relation to the high-risk AI systems listed in Annex III, a right of affected persons to an explanation of individual decision-making. This entails for companies the need to establish structures capable not only of preventing offenses, but also of responding effectively to transparency requests, inspections by competent authorities, and reporting obligations. The substantive governability of the algorithm is also measured by the ability to account for its decisions.
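That ability to account for decisions presupposes record-keeping at the level of the individual decision. The Python sketch below shows one hypothetical shape such a record might take; the schema is an assumption made for illustration and does not reproduce any format required by Article 86 of the AI Act.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    # Hypothetical audit-trail entry for one automated decision: enough
    # to reconstruct which system decided, on what inputs, and why.
    system_id: str           # internal identifier of the AI system
    system_version: str      # model/version actually in production
    timestamp: str           # when the decision was taken (UTC, ISO 8601)
    input_summary: dict      # inputs relevant to the decision (data-minimized)
    outcome: str             # the decision produced
    main_factors: list[str]  # factors that drove the outcome
    human_reviewer: str | None = None  # person who validated it, if any

    def to_json(self) -> str:
        # Serialized form suitable for a tamper-evident log or for
        # answering a transparency request from the person concerned.
        return json.dumps(asdict(self), indent=2)


record = DecisionRecord(
    system_id="credit-scoring-01",
    system_version="2.4.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_summary={"income_band": "C", "payment_history": "regular"},
    outcome="application declined",
    main_factors=["debt-to-income ratio above internal threshold"],
    human_reviewer="credit officer",
)
print(record.to_json())

A company that keeps records of this kind can answer an explanation request from evidence rather than from after-the-fact reconstruction.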
Furthermore, the framework outlined by the AI Act affects the configuration of the value chain by distinguishing roles and responsibilities among providers, deployers, importers, and distributors. This articulation entails a multilevel distribution of compliance obligations according to a logic of interdependence that makes risk management intrinsically network-based.
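In deliberately simplified form, this role-based distribution can be pictured as a mapping from operator roles to families of duties. The Python sketch below is indicative only: the obligation lists are a rough, non-exhaustive paraphrase for illustration, and the exact duties remain those set out in the AI Act itself.

from enum import Enum


class Role(Enum):
    # The four operator roles the AI Act distinguishes along the value chain.
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


# Indicative, non-exhaustive summary of obligation families for
# high-risk systems; a paraphrase for illustration, not legal text.
OBLIGATIONS: dict[Role, list[str]] = {
    Role.PROVIDER: ["risk management system", "technical documentation",
                    "conformity assessment", "post-market monitoring"],
    Role.DEPLOYER: ["use per provider instructions", "human oversight",
                    "log retention"],
    Role.IMPORTER: ["verify that conformity assessment was carried out",
                    "verify documentation and marking"],
    Role.DISTRIBUTOR: ["verify CE marking and documentation",
                       "withhold non-conforming systems"],
}


def duties_for(role: Role) -> list[str]:
    # Look up the indicative obligation set for a given operator role.
    return OBLIGATIONS[role]


print(duties_for(Role.DEPLOYER))

The interdependence noted above shows up here as well: a deployer's ability to discharge its duties depends on documentation and information flowing down from the provider.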
AI governability thus assumes a systemic dimension that is realized through the construction of a structured ecosystem of internal controls, protected reporting mechanisms, and safeguards throughout the supply chain.
Integrated compliance, therefore, takes shape as a multilevel architecture: internal (231 model, reporting channels, audits), external (contractual relationships and supply chain controls), and institutional (interaction with national and European authorities). It is within this interaction that the difference between merely declared governance and the substantive governability of the algorithmic enterprise is measured.
From this perspective, AI governability also becomes a parameter of the adequacy of organizational arrangements and of proper management pursuant to Article 2086 of the Civil Code, requiring management bodies to integrate automated systems into decision-making processes in a conscious and controllable manner. A lack of AI governance safeguards may be relevant not only for the purposes of liability under Legislative Decree 231/2001, but also as an indicator of organizational fault and of the inadequacy of the organizational structure, affecting the assessment of professional diligence and the overall sustainability of the enterprise.
The new regulatory framework requires that AI governance be considered not as a specialized segment of compliance, but as an organizing criterion of the entire system of corporate liability. The substantive governability of automated processes thus becomes a qualifying parameter of the adequacy of the 231 model, an indicator of the robustness of organizational arrangements, and of the credibility of the ESG strategy.
The topics addressed in this article will be further explored in the course Compliance aziendale, modelli 231 e sostenibilità (in Italian).


