Zoom on

Can we trust Artificial Intelligence?

The context

An opportunity for progress, but at the same time a potential threat to individuals and to society as a whole: this is the common perception of the onset of the fourth industrial revolution, as more and more advanced devices equipped with Artificial Intelligence are developed. And this ‘common wisdom’ is shared by ordinary people, opinion makers and politicians alike. In fact, according to a recent Ipsos MORI survey, 66% of British MPs believe that automation will have a positive impact on the economy, but almost half (45%) think that the next 15 years of technological innovation will eliminate more jobs than they create.

At the heart of the matter lies the question of trust: Can consumers trust organizations that leverage digital technologies to collect and process an ever-growing volume of their personal data? Is the legislation that regulates all this adequate? Or too weak? Or even too strong, potentially curbing innovation? And more broadly speaking, can we trust Artificial Intelligence as such? These are the questions that consumers and citizens keep asking, and questions that companies need to keep in mind if they want to successfully activate digital transformation processes.

Busting the myth

Stephen Hawking, in a 2014 interview with the BBC, warned that the coming of pervasive artificial intelligence, with autonomous cognitive capabilities that can match our own, could spell the end of all humankind. Whether or not we agree with this dire prediction, the fact is that today AI is a long way from achieving such capabilities, since unlike us it has no cognition: AI is not self-aware, nor is it capable of sense-making in terms of the analyses it generates. In other words, AI consists of a series of tools that can enhance an individual’s cognitive capacity, but not autonomously replicate it.

From a business standpoint, this means that even in the Era of AI, it would be a serious mistake to underestimate the human element. Instead, companies need to develop a number of skills – both hard and soft – as quickly as possible, so they can implement appropriate AI-related initiatives. When designing systems, modelling problems and programming artificial intelligence, the human touch is indispensable, drawing on expertise in myriad areas beyond computer science and data science. Equally essential for managers is a deep understanding of the structures, the needs and the specificities of the relevant business context, along with the communication skills to convey AI outputs to a broader set of stakeholders.

So the challenge of AI in the business world is not simply a technical and engineering one. It is also – if not above all – an organizational and, more generally, a governance challenge. Naturally, trust in AI-based tools can’t be unconditional; today’s AI still requires human supervision to contend with problems, deal with exceptions, and guarantee total process accountability.

It’s a question of privacy

As we mentioned before, the need to build trust in AI isn’t just a matter for company personnel, but for external stakeholders as well, such as consumers and citizens as a whole. A case in point with regard to privacy is intelligent vision. This algorithm-based technology enables machines to recognize and classify images, along with their patterns and details, at or beyond human-level accuracy. What’s more, the precision of this technology can improve over time without the need for rules-based programming. The areas of application for intelligent vision are countless: from automatically recognizing products stocked on warehouse shelves to identifying counterfeit luxury items, from enhancing cutting-edge security devices to analyzing the emotions and opinions of in-store consumers in real time.
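To give a concrete feel for how such systems are typically built, the minimal Python sketch below classifies a single photograph with a pretrained neural network. It is purely illustrative: it assumes the open-source PyTorch and torchvision libraries and a hypothetical local file, product_photo.jpg, and is not a description of any specific vendor’s intelligent-vision product.

import torch
from PIL import Image
from torchvision import models, transforms

# Load an image classifier pretrained on ImageNet (illustrative choice of model).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard preprocessing expected by the pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("product_photo.jpg")    # hypothetical input image
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():                      # inference only, no training
    probabilities = torch.softmax(model(batch), dim=1)

confidence, class_index = probabilities.max(dim=1)
print(f"Predicted class {class_index.item()} with confidence {confidence.item():.1%}")

Note that no explicit rules describing what a product looks like are written anywhere in this sketch: the model’s behaviour comes entirely from the examples it was trained on, which is what is meant above by improving precision without rules-based programming.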

On the flipside, however, technologies that can automatically identify people while they’re shopping in a store and ‘read’ their emotions in near real time naturally tend to raise suspicion and concern about privacy protection. Processing images and photographs that belong to the sphere of personal data, potentially profiling individuals, storing sensitive information – all of these are extremely delicate aspects of the advent of intelligent vision. In light of this, to win the trust of customers and citizens alike, companies must respect regulations and codes of conduct.

Europe has reached a crucial milestone on the path of privacy protection with the implementation of the General Data Protection Regulation (GDPR, EU Regulation 2016/679). This regulation establishes rigorous obligations for organizations with regard to how they process and store personal data; companies must also ensure transparency in the algorithms they use. Specifically, one of the core principles of the GDPR is data ‘pseudonymization’. This means that personal data must be stored in such a way that they can’t be traced back to a specific person without additional information.
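To make the idea concrete, here is a minimal sketch of one common pseudonymization approach: replacing a direct identifier with a keyed hash whose secret key is held separately from the dataset. The field names and key handling are illustrative assumptions, not requirements spelled out in the GDPR itself.

import hashlib
import hmac

# The secret key is kept apart from the pseudonymized records (e.g., in a
# key-management system); without it, the pseudonym cannot be traced back
# to the original identifier.
SECRET_KEY = b"example-key-loaded-from-secure-storage"  # placeholder value

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10492", "purchase": "sunglasses", "amount": 129.90}

# The stored dataset keeps the pseudonym instead of the raw customer ID.
stored = {**record, "customer_id": pseudonymize(record["customer_id"])}
print(stored)

The additional information needed to re-identify a person (here, the secret key) is kept under separate, stricter controls, which is precisely the separation the regulation describes.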

How will the new approach set down in the GDPR affect development opportunities for AI technologies in concrete terms? How deeply will this impact be felt? The answers are not entirely clear. But one thing is certain: corporate decision-makers should constantly and conscientiously interface with AI experts to make sure that any future initiatives they roll out are in line with the legislative framework now in force. The hope is that the new principles in the GDPR will find practical applications that reinforce consumer trust in AI-enhanced technological innovation, without undermining opportunities for further AI development.
