
The Detriments of Rushed AI Implementation


As part of the conversation around AI, we’ve been reflecting on the dangers of avoiding AI and why it’s important to get ahead of the curve. Our experts – Senior User Researcher Daniel Finnigan, Head of Data Analysis Michael Baines, Director of Service Design Dave Jackson, and User Researcher Lucy Hutchinson – pick the conversation back up this week to discuss the snowball effect of avoidance. Join us as we look at the detriments of rushed AI implementation.

What will drive a rush into AI?


There are many reasons why AI can’t be avoided, and why it will fairly rapidly become ubiquitous. The first is that the big productivity suites are rolling it out: Google and Microsoft are both working to integrate leading generative AI models into their products (Gemini and ChatGPT respectively). Secondly, generative AI is not a flash in the pan; research has shown considerable productivity gains resulting from it, and, thirdly, those gains create competitive pressures that ramp up over time for late adopters. The fourth reason is that as AI tools become more commonplace, clients and the wider business system will grow less reluctant to engage with them, which will in turn drive the working through of compliance and copyright issues to clear the way for commercial use. Together, these factors are likely to create a slower-then-faster dynamic in how these tools are rolled out, fuelling a sense of needing to catch up as things start to accelerate.



Rushing to catch up leads to bad implementation


The worst-case scenario when rushing into AI is that you start replacing people and functions with AI to save money, only to find, several months down the line, that the cracks start to show. Think about when automated supermarket checkouts were introduced: retailers rushed into removing staffed tills, only to realise customers hated the process. The same thing could happen at scale with AI, with customer service suffering because customers are left wrestling with large language models and can no longer get hold of a person.

Another example is job searching, where employers use automated systems to sift through enormous numbers of CVs. The issue is that this creates bias without any real accountability. Algorithmic CV checkers have slipped under the radar, but Daniel thinks we’re going to start seeing studies about how they’ve led to forms of ageism, racism, or sexism. These low-profile implementations of AI are almost more dangerous, in a sense, because there’s less scrutiny of them.

Lucy also points to the DPD chatbot that, within a day or two, had been coerced into writing poems disparaging DPD’s services, and to Air Canada, which lost money after being legally required to honour the refund and discount policy its chatbot made up. “When you don’t anticipate the ways in which users might interact with the system, AI hallucination becomes another real concern.”

From a data perspective, any AI product or service must be underpinned by solid data foundations, so it’s vitally important to have reliable, robust data in place for AI models to learn from. If the data sourcing and preparation behind these AI products is rushed, the end product will invariably be poor. To quote Michael, with his extensive experience of rolling out data projects: “Rubbish in, rubbish out.”


How organisations can start to educate and experiment with AI...

We know behaviour change happens more slowly than technological change, so starting early with AI puts organisations – and, more importantly, individuals – in a better place to keep pace with AI change.

Dave feels AI experimentation should be mandatory within organisations, and that funding is needed to give employees access to the better AI services. Equally, employees should be given the time to experiment.

Right now, many businesses aren’t ready to implement AI, but we’d recommend identifying use cases where AI could make a significant impact on either the business or the customer experience, so you’re ready when the time is right.

All of this should be backed up by setting rules to stop teams from walking into danger areas such as GDPR breaches. Rules and guidance are the best way to empower teams to make the most of AI research, because they ease employee anxiety around experimenting as well as reducing the risks for businesses.

Daniel’s advice for organisations starting their AI journey is to understand the technologies operationally and their impact from the end user’s point of view. You don’t have to implement anything straight away, but learn now so that you’re in a position to roll out tools productively. If you miss the boat, you’ll only be pushed to implement down the line, but without the know-how.


... without feeling overwhelmed


Dave, Mike, Lucy and Daniel left us with six pieces of advice to avoid being overwhelmed by AI.

If you're starting out on your AI journey and want more hands-on advice on where to start and how to avoid some of the pitfalls we’ve discussed above, we have just the thing. On 30th April, we’re running our half-day Transform Academy session, “AI Fundamentals 101”, to share an overview of our experience of running AI projects, bust some myths, and give practical, interactive insight into creating AI use cases for your business. For more details on what you can expect from the session, click here.

To allow attendees to participate actively in this face-to-face workshop, spaces are strictly limited. If you'd like to join us, or to suggest someone from your team, email us at transformation@transformUK.com.
