The advancement is remarkable, and yet beneath the shiny surface there are some concerning issues inherent in generative AI systems and platforms as they currently exist. “These platforms are incredible, and they'll set off a creative revolution through their accessibility, but I think it's important that their pitfalls are also acknowledged to increase understanding, so we can aid their development in a positive way.”
Director of Research & Service Design, Dave Jackson agrees. Hallucination, for one, is a real pitfall with AI, especially when we’re working with large volumes of data and the hallucinations are not easily identifiable. With AI-generated imagery, we can see shortcomings easily, such as stereotypes that could be politically incorrect or misspelled words, but when you’re using AI for larger tasks or on unfamiliar subjects, you run the risk of using erroneous data to inform decisions. So, it’s important for AI not to be taken at face value and the work needs to be supported by facts and research to mitigate risks.
Lucy agrees that AI systems can be easily manipulated and are often riddled with biases and inconsistencies, which can produce incorrect or problematic insights.
“In research, a lot of what we do for clients is to mitigate bias as much as we can.” AI models are often trained on historical data sets that are filled with misrepresentations and underrepresent certain social groups, which is why it’s important to ask: what data are we training them on? What research do we need? How do we authentically represent people? This matters because inauthentic portrayals of lived experience erode trust and will not provide accurate, actionable insights.
Attempts to fix these issues by diversifying data sets have also caused controversy, as we saw most recently with Google Gemini generating images with racist undertones, like the viral image of Elon Musk as a black man. Though unintended, this plays into existing biases and histories of blackface, so it’s about balancing the advancement of these systems without perpetuating discrimination. It also shows that retrofitting data sets isn’t easy, and demonstrates how important high-quality research and testing are when developing these systems in the first place.
In addition to the content of data sets, research and other user-centred design expertise should be involved in these systems' design from the start. Other factors that shape system development, such as framing the problem space, data labelling, UI design, algorithm selection and feedback loops, are all subject to bias, posing new challenges for businesses looking to engage with AI.
From a research perspective, it’s a tempting prospect to pump primary data into AI, but that is loaded with GDPR risk. Free AI tools, and some paid-for ones, will learn from your data, which could then inadvertently surface in an AI answer elsewhere. It is therefore critical to spend time understanding which paid-for AI services will not learn from or share your data, so you can make decisions that are right for your organisation. It is also best to experiment with AI on low-risk scenarios first, to get a feel for its power.
Lucy agrees that internal guidelines on system usage might help clarify the rules when it comes to data privacy. Ethical frameworks could be established in a way that encourages responsible development, addressing the concerns without letting them become blockers. This way you’re tapping into the benefits of AI whilst minimising the risk to civil liberties; it’s important for policy makers to be the first to demonstrate responsible practice.
AI is power hungry. For example, Google estimates that 34 million images are generated around the world per day, and The Register found it takes around 1.35 kWh to generate each one. If those sources are to be trusted, the energy used for image generation each day could power a Tesla Model 3 to travel over 91 million miles. Of course, there are ways to mitigate that usage with green energy in data centres, but the impact on the environment is still significant.
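To make that mileage claim concrete, here’s a rough back-of-envelope sketch. The daily image count and per-image energy come from the sources above; the roughly 0.5 kWh-per-mile consumption figure is our own deliberately conservative assumption (the Model 3’s rated consumption is lower, which would only push the equivalent mileage higher).

```python
# Back-of-envelope estimate of daily image-generation energy,
# expressed as Tesla Model 3 mileage. Figures are estimates and
# assumptions, not measurements.

IMAGES_PER_DAY = 34_000_000   # Google's estimate of daily AI image generation
KWH_PER_IMAGE = 1.35          # The Register's per-image energy figure
KWH_PER_MILE = 0.5            # assumed conservative Model 3 consumption,
                              # allowing for charging losses and real-world use

daily_energy_kwh = IMAGES_PER_DAY * KWH_PER_IMAGE    # ~45.9 million kWh
equivalent_miles = daily_energy_kwh / KWH_PER_MILE   # ~91.8 million miles

print(f"Daily energy: {daily_energy_kwh / 1e6:.1f} GWh")
print(f"Equivalent Model 3 mileage: {equivalent_miles / 1e6:.1f} million miles")
```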
Language models like ChatGPT can use up to 500ml of water for data centre cooling every time you ask a series of 5-50 questions. If users are doing that daily, they’d each use the same amount of water in three months as the average person would use for washing, bathing and drinking in an entire year. It’s important to understand these wider impacts on communities and the planetary ecosystems that accommodate them, though that’s not to say there aren’t a whole host of benefits to using AI.
Lucy points out that the term ‘Artificial Intelligence’ can be a pitfall in itself: it’s easy to overestimate how close to human intelligence these systems are. When you’re prompting LLMs, the interaction feels like you’re engaging with a real person with encyclopedic knowledge, which makes it feel very powerful, but that impression is heavily shaped by the UX design. In reality, the typing effect is simulated to make it feel more conversational and human-like. LLMs are trained to perform certain tasks and make associations based on language, but we know that language accounts for only a small part of human experience. It’s important to keep in mind that their ‘intelligence’ is actually quite limited. A recent study from New York University found that babies outperformed AI in crucial psychology tasks: when it came to perceiving the motives behind gestures, babies were much better at judging this than any artificial intelligence could.
AI cannot understand context, which is vital in user research as it helps us gain authentic insights that inform possible designs for our clients. That said, we have found AI incredibly useful for finding and summarising information at pace, but again it comes back to the context lost through that process. In the end, it becomes a trade-off between efficiency and effectiveness. At Transform, we’d never want to undermine the integrity of our work in the quest for greater productivity, so it’s important that organisations similarly understand the risk.
For any sort of AI transformation to be successful, you need to bring people with you, which means workforces need to see the value of these technologies. Many people are concerned for their own futures or worried about the change, and that might deter them from engaging in a way that would add the most value. This makes it vital for organisations to keep their people at the heart of the change, so they feel comfortable with what the company is trying to achieve.
Then, of course, no one can accurately predict the future of AI, which is itself a challenge for businesses investing time and money. Though AI is working well as a co-pilot and will likely continue to improve in that role, it’s worth considering whether AI is overhyped and we’re at risk of getting carried away.
We’d love to close this piece with some practical help in avoiding these pitfalls.
If you're starting out on your AI journey and want some more hands-on advice on where to start and how to avoid some of the pitfalls we’ve discussed above, we have just the thing. On 30th April, we’re running our half-day Transform Academy session, “AI Fundamentals 101”, to share an overview of our experience of running AI projects, bust some myths, and give practical, interactive insight into creating AI use cases for your business. For more details on what you can expect from the session, click here.
To allow attendees to actively participate in this face-to-face workshop, spaces are strictly limited. If you'd like to join us, or to suggest someone from your team, email us at transformation@transformUK.com.