

During 2025, AI has confirmed its role as the environment in which most other technologies develop and operate (link to the opening article of this trend topic).
Our perspective on next year builds on this evidence: as Generative AI (GenAI) increasingly enters the corporate world, organizations will face several significant open challenges in execution.
Among the many open points, we see two areas where companies might be struggling:
- Understanding the real structure of the GenAI market.
- Managing the practical challenges of customizing GenAI for business value.
Both topics draw on our recent studies: one examining 248 open models, and one based on 18 cross-industry practitioner interviews. In the paragraphs below, we provide an overview of the key considerations for companies.
The "Open Source" Confusion: what the open-source GenAI market truly looks like
For years, open source was a relatively clear concept in the software domain. With GenAI, this clarity has faded away. Providers increasingly market models as "open" while keeping critical components opaque or legally constrained.
Our analysis of 248 real-world models shows that, with GenAI, openness is not binary. It spans three domains, each with very different strategic implications:
- Technical openness (are weights, code, and training data truly accessible?).
- Governance (who controls the model’s development and evolution?).
- Licensing and usage rights (what are you legally allowed to do?).
When we clustered these models, six distinct archetypes emerged. Each reflects a different balance between transparency, control, and usage rights, from Corporate Open and Corporate Restrictive solutions to Open-Science Community Models and niche derivatives. This diversity shows that there is no single “best” approach: different configurations serve different purposes, and understanding these differences is essential for making informed technology choices.
For instance, Corporate Restrictive models dominate market share even though they impose limits on commercial use, derivative creation, or model improvement. Some models are ideal for experimentation and transparency; others, despite restrictions, may offer stability, performance, or support structures that fit specific needs. The real risk arises when decisions are made without weighing these trade-offs.
By clarifying the technical, governance, and licensing dimensions behind each archetype, we aim to help managers navigate an increasingly complex GenAI landscape and select solutions that align with their objectives, constraints, and business strategy.
Customizing GenAI: where companies might get into trouble
While adoption is growing rapidly across industries, many organizations struggle to translate GenAI into tangible business value. The core issue is not the lack of tools, but the need to contextualize general-purpose models so they work within the specific realities, workflows, and constraints of each company. This translation from generic capability to practical execution is where the real challenges begin.
Companies typically customize GenAI across three layers, each with its own potential pitfalls:
- Data layer (e.g. RAG, prompting).
- Model layer (e.g. fine-tuning).
- Infrastructure layer (e.g. integrating GenAI into workflows and legacy systems).
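As an illustration of the data layer, the sketch below shows a bare-bones retrieval-augmented prompting step: rank internal documents against a query and ground the prompt in the best matches. The documents and keyword scoring are hypothetical simplifications; production systems typically use embedding-based retrieval.

```python
# Minimal data-layer sketch: retrieve the most relevant internal documents
# and ground the prompt in them (a simplified RAG step).

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund requests must be approved by the finance team.",
    "Office hours are 9 to 5 on weekdays.",
    "Travel expenses require a receipt for refund processing.",
]
prompt = build_prompt("How are refund requests handled?", docs)
```

The resulting prompt would then be sent to whichever model the company uses; the point is that contextualization happens in the data fed to the model, not in the model itself.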
Across industries, seven recurring challenges surfaced, along with potential mitigation strategies.
Challenge 1: "Data overload" from messy unstructured content. Companies sometimes feed GenAI vast archives of PDFs, reports, emails, and technical documents, assuming that more data equals more value. In reality, unstructured data is often overly abundant, informally maintained, contradictory, and inconsistently formatted. Involving domain experts early to curate what truly matters is essential to mitigate this risk and develop pragmatic use cases.
Challenge 2: Hidden contradictions embedded in data. GenAI can mask content inconsistencies, producing fluent but incorrect outputs, so companies risk discovering contradictions only after deployment. Domain-expert-guided data assessments and automated GenAI-driven consistency checks can mitigate this challenge. Together with Challenge 1, this points to the need for a deliberate approach to unstructured data preparation and governance.
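A consistency check of this kind can be sketched in a few lines. The example below is a deliberately simplified, deterministic version: it flags fields that appear with conflicting values across documents before they ever reach the model. The "key: value" extraction and the file names are illustrative assumptions; real pipelines would use GenAI itself or more robust parsing to extract comparable facts.

```python
# Simplified consistency check: flag fields whose values conflict across
# documents. Extraction is reduced to "key: value" lines for illustration.
from collections import defaultdict

def find_contradictions(documents):
    """Return every field that appears with more than one distinct value."""
    seen = defaultdict(set)
    for doc_id, text in documents.items():
        for line in text.splitlines():
            if ":" in line:
                key, value = line.split(":", 1)
                seen[key.strip().lower()].add(value.strip())
    return {key: values for key, values in seen.items() if len(values) > 1}

# Hypothetical corpus: two sources disagree on the refund window.
docs = {
    "policy_2023.txt": "refund window: 30 days",
    "faq.txt": "refund window: 14 days",
}
conflicts = find_contradictions(docs)
```

Surfacing such conflicts before deployment lets domain experts resolve them, instead of letting the model silently pick one answer.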
Challenge 3: The control–performance trade-off. The more companies constrain a model (guardrails, restrictive prompts, narrow input formats), the more generative capability they trade for predictability. To find the proper balance, companies can apply a risk-tiered governance model: not every use case requires the same level of restriction, and a creative internal use case cannot be controlled the same way as a customer-facing medical chatbot.
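In practice, risk-tiered governance can be as simple as a configuration table that maps a use case's risk tier to guardrail settings. The tier names and control values below are hypothetical examples, not a prescribed standard.

```python
# Illustrative risk-tiered governance table: guardrail strictness is set per
# use-case tier instead of one-size-fits-all. Tiers and settings are examples.
GOVERNANCE_TIERS = {
    "low":    {"human_review": False, "output_filter": "none",   "temperature": 1.0},
    "medium": {"human_review": False, "output_filter": "basic",  "temperature": 0.7},
    "high":   {"human_review": True,  "output_filter": "strict", "temperature": 0.2},
}

def controls_for(use_case_risk: str) -> dict:
    """Look up the controls for a risk tier; unknown tiers default to strictest."""
    return GOVERNANCE_TIERS.get(use_case_risk, GOVERNANCE_TIERS["high"])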
Challenge 4: Unpredictable outcomes. Even well-customized models occasionally hallucinate, misinterpret, or behave unexpectedly. The implementation of structured, multi-phase output testing involving business experts before roll-out and while the system is operating in production environments can increase confidence and mitigate the risk.
Challenge 5: Performance inconsistency across sub-tasks. Gen AI might excel at one part of a process but fail on some cases, exceptions, or task variations. A pragmatic combination of Gen AI with rule-based components, validators, or deterministic logics instead of expecting a single model to do everything is a more mature mitigation strategy to reduce risks. The right tool needs to be used to solve the right problem, or part of the problem.
Challenge 6: Rapid technological obsolescence. The rate of technological obsolescence in the Gen AI domain is very high. Building a modular GenAI stack that allows components to be swapped without rebuilding the solution protects from technological risks.
Challenge 7: Employee-driven "shadow customization". Tools like Copilot Agents and Custom GPTs empower employees to create Gen AI workflows with no oversight. The definition of clear boundaries for acceptable self-customization and the enforcement of a governance for anything that touches core systems is a potential road to enable employees to experiment within clear safe boundaries.
Final thoughts
Gen AI is becoming an enterprise infrastructure in multiple application areas, and the organizations more likely to capture value are those able to put in place the proper procedures and controls to translate its potential into a well contextualized execution. In a world full of solicitations about AI and Gen AI, our two research streams point to a very pragmatic consideration: companies must understand what they are building on and how they are adapting it to their organizations to extract value.
This means developing the capability to evaluate what Gen AI technological choices to make and building the organizational maturity to contextualize general‑purpose technologies within their own data, workflows, and risk environments.


