How much AI is ‘enough’?

‘So, it begins,’ you might say, if you’re feeling melodramatic. A U.S. defence company has now announced that its drone ‘AI’ software has defeated a human pilot in simulations every time.

This, of course, is very significant for the aircraft and defence industries. New fighter craft cost billions – in the case of the F-35 Lightning II, the programme’s estimated lifetime cost is reportedly $1.5 trillion, allegedly making it the most expensive weapons system ever. Drones tend to be cheaper, especially when you consider that one of the most expensive components of a fighter jet is the pilot. It takes a significant amount of time and effort to train a pilot, whereas a drone can be rolled out much faster. Once you’ve written the code controlling combat, it can simply be uploaded to drone after drone after drone. The article doesn’t mention it, but ultimately the drone would also gladly sacrifice itself to defeat its enemy – if, for instance, the other craft were about to launch a nuclear attack. This is not as far-fetched as it may seem.

The company has cracked a key problem in handling multiple inputs quickly, although it’s interesting to note that the actual AI is very limited – hence my quotes around the term. In my opinion, ‘AI’ is over-used and should be reserved for something that actually approaches intelligence and decision-making in highly complex environments. Funnily enough, air combat is much less complex than, say, cleaning a house! You operate in an open environment with clear inputs from radar, satellite positioning systems and the like, and your definition of success is straightforward: destroy the other craft whilst avoiding destruction, using the limited range of weaponry at your disposal. Instead, this form of advanced automation might be best paired with some human intervention – for example, a ‘swarm’ of highly automated drones taking the tactical decisions (how to win specific fights) while a human oversees the overall strategic objectives – deciding where to strike, monitoring data on the enemy and then pushing the button to order the attack – at which point the machines, with their superhuman speed and freedom from fear and doubt, execute their programmes and deliver results.
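To make that division of labour concrete, here is a minimal Python sketch of the split I have in mind. Everything here – class names, methods, the list of manoeuvres – is my own illustrative invention, not anything from the article: the human layer picks the target and authorises the strike; each drone then makes its own tactical choices within that envelope.

```python
from dataclasses import dataclass
import random

@dataclass
class Objective:
    """Strategic intent set by the human overseer."""
    target_area: str
    authorised: bool = False

class Drone:
    """Tactical layer: decides *how* to fight, never *whether* to."""
    def __init__(self, drone_id: int):
        self.drone_id = drone_id

    def engage(self, objective: Objective) -> str:
        if not objective.authorised:
            return f"drone {self.drone_id}: holding, no authorisation"
        # Hypothetical tactical choice, made at machine speed.
        manoeuvre = random.choice(["high yo-yo", "barrel roll", "lag pursuit"])
        return f"drone {self.drone_id}: engaging over {objective.target_area} with {manoeuvre}"

class HumanOverseer:
    """Strategic layer: picks the target and pushes the button."""
    def order_strike(self, swarm: list[Drone], area: str) -> list[str]:
        objective = Objective(target_area=area, authorised=True)
        return [drone.engage(objective) for drone in swarm]

swarm = [Drone(i) for i in range(3)]
for report in HumanOverseer().order_strike(swarm, "sector 7"):
    print(report)
```

The point of the structure is that authorisation lives in exactly one place: no drone can engage unless the human layer has set the flag.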

I will leave it to the defence experts to draw the roadmap for future warfare, but there are lessons in the article for us civilians as well. It pays to remember that more civilian applications than we might like to admit began as military inventions – nuclear power, radar and so on. Take the old chestnut of self-driving cars. Ideally these should be totally autonomous, but it might be much more palatable to have some human oversight, at least until we have access to (very) robust real AI. Imagine a city with ‘swarms’ of autonomous cars and an overseer for each individual ‘swarm’. The overseer monitors overall performance but, crucially, is ready to take more advanced decisions in a number of cases: a breakdown, a complaining customer, or a car acting erratically due to, say, broken sensors or a software or hardware failure. They can order a replacement car, talk to the customer about any complaint and rapidly get emergency units in place if needed.
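The same pattern sketched in Python, again with every name being hypothetical: the cars drive themselves, and the overseer’s whole job is exception handling – anything outside normal operation gets escalated to the human, while healthy cars need no attention at all.

```python
from dataclasses import dataclass, field

@dataclass
class Car:
    car_id: int
    sensor_ok: bool = True
    broken_down: bool = False
    complaints: list[str] = field(default_factory=list)

class SwarmOverseer:
    """One human monitoring a whole swarm; acts only on exceptions."""
    def review(self, fleet: list[Car]) -> list[str]:
        actions = []
        for car in fleet:
            if car.broken_down:
                actions.append(f"car {car.car_id}: dispatch replacement and recovery unit")
            elif not car.sensor_ok:
                actions.append(f"car {car.car_id}: pull over safely, flag for maintenance")
            elif car.complaints:
                actions.append(f"car {car.car_id}: call customer about {car.complaints[0]!r}")
            # Cars running normally generate no work for the overseer.
        return actions

fleet = [Car(1), Car(2, sensor_ok=False), Car(3, complaints=["rough braking"])]
for action in SwarmOverseer().review(fleet):
    print(action)
```

One overseer per swarm scales because the loop above only ever surfaces the exceptional cases, not the routine driving decisions.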

I wonder if this points to some of those new jobs that we are not even training the kids for yet. Swarm herder? Drone overseer? Auto-consultant manager? Hmmm…