- Start Date: 14 Oct 2024
- Duration: 3 days
- Format: Class
- Language: Italian
When Amazon launched Just Walk Out in 2016, the idea seemed like a very good one. In Amazon Go and Amazon Fresh stores, customers would be able to make their purchases without stopping at a cash register: they would simply identify themselves with their phone, an AI-supported CCTV system would scan their items, and the bill would go straight to their Amazon account. But in April 2024, Amazon announced it was scaling back the system in most of its stores. Media reports claimed that, beyond various technical difficulties, the artificial intelligence required support from hundreds of employees in India tasked with fine-tuning product recognition.
Today, overblown enthusiasm for AI is an even bigger risk than it was in 2016, and few companies in the world can match Amazon's economic might and technological prowess. So, learning from Amazon's experience, every manager should ponder a few basic questions before launching any AI-related project.
The “zero question” we must ask ourselves is the most obvious one, but in some ways also the trickiest: do we really need an AI solution? It’s easy to get swept up in today’s AI craze, so we need a cool-headed appraisal of whether it’s truly the best path to take. This is step zero, and one we can’t afford to skip.
After that, the first question digs deeper into the kind of benefits we want to get out of artificial intelligence. Even before the explosion of generative AI, many artificial intelligence systems could improve the efficiency of various business processes (making them faster and cheaper), generating economic benefits for the organization. For specific types of businesses (often tech companies), these benefits could become genuine strategic differentiators. The same distinction comes up in the field of Gen-AI, where many of the projects currently being trialed aim to produce economic benefits by boosting efficiency, while others, more ambitious but for now less certain, envisage a more strategic use of Gen-AI.
The second question is about striking a balance between expected impact and technical feasibility. Obviously, the impact we get must be worth the cost and effort required to achieve it. But what we are prone to neglect, amid all the hype and psychological pressure to adopt AI solutions, is a serious feasibility study.
If many AI projects still stall before scaling up, it’s because few people are able to see an artificial intelligence system in its entirety. The risk is focusing only on the training component: the model, the underlying algorithm, and the data needed to make it work. Yet we tend to forget that all of this must be grafted onto pre-existing technological infrastructure, that the operators working in the field must accept the system, and that it must communicate effectively with (or be integrated directly into) the other technological tools that form the backbone of company systems. All of this holds true for generative AI just as much as for more traditional predictive AI solutions.
In short, artificial intelligence must be integrated into existing business processes. If this integration isn’t planned, and people aren’t encouraged to adopt the system, it will function in a very limited, isolated way at best. For example, if the system needs data collected and recorded by our salespeople in the field, we have to make sure they’re willing to help and that they understand just how crucial it is to report those data accurately. Likewise, if we set up a system for drafting contracts, the staff in the legal department must trust it. And if the system needs homogeneous data, we have to check the data we have in hand to make sure they don’t differ semantically and don’t come from business units with incompatible legacy systems.
The final consideration is the need for customization (and how to achieve it). There are essentially four possible approaches (focusing on generative systems and Large Language Models, LLMs).