
Public Sector Innovation Conference: Ethics and AI

On the 13th of March, RSA House hosted a series of conversations about how people across the industry are using AI to bring innovation to our public services. Our Managing Director, Johan Hogsander, went along to see what insights the Public Sector Innovation Conference had to offer, and he's now ready to share his thoughts.


I had the privilege of attending the PSI conference last week, a small but (in my opinion) very high-quality event. I have to say I was particularly impressed that they were able to deliver so much value for a ticket price of zero! The theme was, of course, AI – what else? – and I was genuinely pleased to get such deep insight from many of the speakers, although, as always, there were more questions than answers.

The panel discussions turned quickly to the bias problems that have been making headline news over the last few weeks. Are we doing anything about it? Can we? The answer to both questions is yes, and some organisations have started running “unintended consequences” identification sessions when considering the impact of using the internet data that underlies the LLMs (data largely generated by white males over the last 30-plus years). Many panellists were worried that the current drive towards efficiency (especially in the cash-strapped public sector) will mean ethics gets left behind or becomes an afterthought – a legitimate worry. Most of us want to ensure that predictions or recommendations based on this data don't disadvantage parts of society. In some instances, synthetic data can be used to address this problem, at least partly, but that might cause other unintended consequences. Unsurprisingly, it turns out you probably have to include humans in the loop at some stage – something I see myself when I use AI in my own job.
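
To make that “humans in the loop” point concrete, here is a minimal sketch in Python – purely illustrative, not something shown at the conference – of how a team might measure whether favourable predictions are spread evenly across demographic groups, and escalate to a human reviewer when they are not. The demographic parity measure and the 0.2 threshold are my own assumptions for the example.

from collections import defaultdict

def demographic_parity_gap(predictions):
    # predictions: list of (group, outcome) pairs, where outcome 1 = favourable.
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in predictions:
        totals[group] += 1
        favourable[group] += outcome
    rates = [favourable[g] / totals[g] for g in totals]
    # Gap between the best- and worst-treated groups (0 = perfect parity).
    return max(rates) - min(rates)

# Hypothetical model output: (demographic group, predicted approval).
predictions = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

gap = demographic_parity_gap(predictions)
if gap > 0.2:  # tolerance chosen purely for illustration
    print(f"Parity gap {gap:.2f} - escalating these decisions to a human reviewer")
else:
    print(f"Parity gap {gap:.2f} - within tolerance")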

But even if we try to adjust for historic bias in the data to influence the behaviour of the AI, the speakers acknowledged that the current toxic culture wars – where different versions of reality are fighting for supremacy – mean there is a big risk of toxic data entering the models. Some of this comes from genuine disagreement about what the world looks like, and some comes from bad-faith actors with an interest in destabilising society. Again, an area with many challenges and few answers.

One of the most interesting panels was the one on AI and security. I was quite encouraged to hear that the message from the NCSC was, on balance, positive: bad actors will try to use AI against us, but our ability to use AI for countermeasures will allow us to stay ahead of them. If this is true, it would be a good thing. The panel identified several threat areas of increased focus, including increased automation to find personal data, voice cloning and phishing, and the use of image flooding to subtly poison LLMs. The basics remain key – multifactor authentication, for example, will still be good protection even against new varieties of hacking.
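
On that last point, it is worth spelling out why MFA holds up: even a perfectly phished or cloned password is useless without the short-lived second factor. Here is a minimal sketch of a time-based one-time password (TOTP) check using the pyotp library – my own example, not anything demonstrated at the conference.

import pyotp  # pip install pyotp

# In a real system the secret is generated once at user enrolment and
# stored server-side; here we generate it on the fly for the demo.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # what the user's authenticator app would display
print("Current code:", code)

# Server-side check: the code rotates every 30 seconds, so a stolen
# password alone gets an attacker nowhere.
print("Accepted:", totp.verify(code))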

Speaking of basics, I always enjoy hearing someone on stage bring up one of my old tropes: it’s the basics, stupid. Although the army representatives’ metaphor was much better than that: “we are good at chasing shiny baubles, but not so good at getting a Christmas tree to hang them on”. The backbone of successful AI usage is well-functioning basic systems, processes and data that are all fit for purpose. Without these, your AI will go the garbage-in, garbage-out route and be unlikely to be transformational, or even supportable, in the long run.

As a final reflection, it was good to hear about the frameworks coming from organisations such as the AI Safety Institute and the CDDO for managing the challenges of AI around ethics, inclusion, mitigation in case of misuse and the many other factors that must be considered. However, I know I’m not the only audience member who felt the discussion veered towards the defensive and negative, focusing on risks, bias, and challenges. AI could be a great leveller of the playing field, in my opinion, helping people get better and fairer access to faster, better, and cheaper government services, regardless of their background or education level. It’s not a given that it will do this, but the potential is there. It's up to us and our leaders to put in place the policies that will allow us to exploit the full potential of AI for good. Our track record with new technologies is… mixed – but past results are not a good predictor of future performance, especially not in a world changing as fast as ours is now. And change is, on balance, a good thing. At least to annoying neophiles like me!

If you’re a neophile like Johan and you’re looking for free AI training that cuts through the hype and starts from the foundations, look no further. On Tuesday 30th April we’re running our newest Transform Academy: AI Fundamentals 101, from 9am to 1.15pm in our London office (W1W 7RT).

Attendees will leave with a clearer idea of the next steps in their AI journey and will get the chance to build their own use cases, with feedback and advice from our subject matter experts.

Spaces are strictly limited, so be quick and book your place by dropping us a note at Transformation@TransformUK.com
