Does AI have a positioning problem?

John Galpin, Co-Founder of Design by Structure, looks at the state of AI today, its role over the next decade, and its positioning problems around clarity, perception and choice of language.

The term 'AI' was first coined by American computer scientist Prof. John McCarthy in 1955. He said: 'Our ultimate objective is to make programs that learn from their experience as effectively as humans do.' In the 66 years since, AI has only come into its own in the past decade, as technology has caught up with the theory. In 2016, Amazon, Apple, DeepMind, Google, IBM and Microsoft formed the 'Partnership on AI' to set societal and ethical best practice for artificial intelligence research. So, where is AI today?

What is AI in the 2020s? 

IBM’s definition of AI is: ‘Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.’ The widely accepted reality is that true AI doesn’t exist in business yet. What does exist are AI applications, which learn from interactions and make recommendations based on those learnings, namely:

● speech recognition (such as Siri and Alexa),

● virtual agents (Slack, Facebook Messenger),

● recommendation engines, 

● and self-driving tech. 

The uptake of these applications* has grown during the pandemic as people were forced to become more connected during lockdown.

AI has made inroads into enterprise technology, streamlining workflows and the like, but while there is big ambition around the adoption of AI, the reality is that uptake among global companies is not where it should be, according to Forbes Insights. A 2020 KPMG report points to an AI ‘trust gap’, which has ‘prevailed amid a lack of quality data and an ensuing reluctance to hand critical business decisions over to machines’. This lack of data is perceived as a barrier to adoption. Similarly, the 2021 McKinsey AI report states that its ‘findings show no increase in AI adoption’.

What’s the problem with AI?

As a branding agency working in the technology sector, we have seen many AI-driven propositions over the years: some interesting and cool work that isn’t accelerating as fast as perhaps it should.

Overall, it seems to us that AI has a positioning problem in the following three areas:

1. Language used – exclusive and not very clear.

2. Perception – fear and distrust.

3. Clarity – what is the problem being solved?

1. Language used

AI is sometimes positioned as a silver bullet or a panacea, but that doesn’t make it real for end-users who aren’t clear on how it works or what problem it’s solving for them. The word ‘artificial’ doesn’t evoke trust or security, and the term hasn’t changed since its inception in the 1950s. There are many competing definitions of what AI is, which adds to the confusion, and on top of that there are subfields (machine learning and deep learning) often used in conjunction with AI, which muddy the water even further.

AI washing in marketing

This isn’t helped by the many products and services that claim to use AI but simply don’t, a marketing effort known as ‘AI washing’. We see much of this kind of activity in FMCG, where a fast turnaround is often driven by ‘new and improved’ messaging in a bid to shift more products. 

For example, in 2019, an ‘AI’ toothbrush was launched, claiming to track brushing and supply feedback; it was sold on the marvels of new AI tech. Does the brush decide the brushing technique based on what it has learned? No, it doesn’t. This type of activity can confuse consumers about the reality and capabilities of AI.

Marketers may be guilty of feeding the mistrust when they launch products that appear to be AI-enabled and future-forward but which, in reality, are simply tech-enabled devices. The power of language is therefore important, as is how brands use it to hit cues of connection and understanding with consumers. Let’s examine other categories, such as the car industry and self-driving cars, where we can see genuine strides to make something in the arena of AI, with the tech looking at and reading situations in order to make decisions. One of the big players in this sector is Tesla with its Autopilot tech.

Tesla’s use of ‘Autopilot’ is interesting: it’s a word we all feel familiar and even safe with (from our experience of flying), but in this context the cars always require a driver. Following well-documented Autopilot crashes, the need for clarity about what the tech actually does is more important than ever. The brand has already had a warning about this: a German court found Tesla’s ad copy misled consumers and banned the car company from using the phrases ‘full potential for autonomous driving’ and ‘Autopilot inclusive’ in its advertising materials.

Clear language is paramount for safety and understanding, because when consumers read about crashes and misinformation, fear and distrust set in.

2. Perception: fear and distrust

A global survey by Statista revealed that only 23% of UK respondents said they trust AI. There are many reasons for this. Some distrust is fuelled by what people read about AI and their perception of what it is doing, or is going to do. The mainstream media is trusted to supply true and faithful narratives and is where consumers often source their information as ‘fact’, so it shapes their perception. But it is also guilty of sensationalism. So while people are unsure about AI, they are acutely aware of when it goes wrong, with headlines about Tesla’s driver-assisted cars being involved in fatal accidents, a Microsoft chatbot spewing racism, or smart speakers caught recording and analysing private conversations. All of this has a negative impact on trust.

AI has been the subject matter of many fantastical movies, which always seem to take a fatalistic point of view; think of Terminator, Transcendence or AI, where the human is always being persecuted. In the real world, it doesn’t help either when industry leaders such as Elon Musk publicly fuel the fear that AI will quickly evolve from being a benefit to human society to taking it over. He has been quoted as saying: ‘mark my words… AI is far more dangerous than nukes’. The irony of the Tesla owner’s remark hasn’t gone unnoticed.

There are some very cool and interesting developments in AI and its subfields, but they are being undermined by the negative impact of sensational press stories and pop culture, which shape a skewed narrative that damages both the perception and understanding of AI. We need to be clear about what it is, what it does and how it helps us, and celebrate those stories.

3. Clarity – what is the problem being solved? 

Are humans being augmented, or are they being replaced? There’s a big difference between the two, and given that AI is unproven as a replacement for people, we think it would be wiser in the near term to position AI as an aid rather than a replacement; that is, it’s about helping humans do something better.

Currently, there are too many unknowns, so defining a clear narrative on AI and the implications and limitations of adoption will increase the chance of success in the long term. We need to talk about AI as an aid, not as a decision-maker or as something that takes control away from us. We need to market AI as something that enhances our choices and decisions; this could help drive adoption and ultimately move us to a place where AI could become fully autonomous in a way that is more acceptable to the masses. Marketing should focus on addressing consumer pain points rather than the technology itself, as with ‘park assist’: a useful tool for a specific problem.

Moving forward

To reposition AI, we need to rethink the terminology and how we talk about it, to make it accessible and transparent. The solution pivots around a people-first approach to AI-enabled tech, with a clearer definition of what AI is and what it isn’t. Let’s describe it as it is, to make it feel more real: for now, and in the near future, it’s about using smarter technology to speed things up, enhance performance and drive better outcomes faster.

*Smart speaker ownership in the UK stands at 38%, up from 23% in 2018, with the pandemic cited as a purchase driver. (UK Smart Speaker Consumer Adoption Report 2021, VoiceBot, 2021)

John Galpin

John Galpin is the Co-Founder of Design by Structure