Sunday, 3 March 2019

On alien AI...

I'm editing the final chapter of the new novel, When the Children Come, and without giving too much away, the human protagonist encounters an AI (Artificial Intelligence). But this is not a human-created one; it comes from an alien race. So it doesn't carry any human baggage in its algorithms, including the stuff we'd like to rid ourselves of, such as hatred and our capacity for war. However, it wants some...

AI is not new - whether your starting point is Asimov's robots, Robby the Robot from Forbidden Planet, or Star Trek's Data, the typical idea is of a mature and pure intellect. Purity in sci-fi seems to have nothing to do with good or bad, so AIs have often been portrayed as evil, ruthless and even malicious, though as a psychologist, I'm not sure how the emotional content would get coded...

The basic idea behind an AI is that it is a learning system, whether via neural nets, machine learning, or some other way, running on basic algorithms that can later become self-adapting. This means AIs can become very smart (though whether knowledge and data alone breed wisdom is another question) and make very fast decisions. Future warfare scenarios will probably involve AIs, which can make judgement calls (based on instilled values such as loss of life, mission criticality, collateral damage, use of resources, etc.) far faster than a battle commander - in theory. In practice, warfare is about as messy as humanity gets and there are rarely any clean decisions, so maybe (I'm hoping) there will always be a human in the loop.
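For the technically curious, a 'learning system' at its simplest can be surprisingly small. Here is a purely illustrative Python sketch of my own (nothing to do with the novel's AI or any real military system): a single artificial neuron that is never told the rule it should follow, only shown examples, and that adjusts its own numbers whenever it gets one wrong.

    # Toy learning system: a single perceptron that is shown examples of a
    # rule and adjusts its own numbers (weights) whenever it gets one wrong.
    # Purely illustrative - a far cry from the neural nets in real systems.

    def train(examples, epochs=20, lr=0.1):
        w = [0.0, 0.0, 0.0]                       # bias, weight 1, weight 2
        for _ in range(epochs):
            for (x1, x2), target in examples:
                s = w[0] + w[1] * x1 + w[2] * x2  # weighted sum of inputs
                pred = 1 if s > 0 else 0          # its yes/no decision
                err = target - pred               # how wrong was it?
                w[0] += lr * err                  # self-adjust: nudge the
                w[1] += lr * err * x1             # weights so the same
                w[2] += lr * err * x2             # mistake is less likely
        return w

    # Teach it a simple AND rule from data alone, then inspect what it learned
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train(data))

After a few passes over the examples it settles on the rule, and the only record of 'why' is a handful of learned numbers - which is the seed of the explainability problem I come to next.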

The fear, most famously expressed in the Terminator film series, is that the AI does indeed get smarter and decides humanity's fate in a nanosecond. But surely there could be an off-switch, or a way of detecting an AI turning into 'malware'? Not so much, because of the complexity. There are big debates going on right now about whether AI 'thinking' must remain explainable. Think of self-driving cars for a moment. Imagine one such car runs down a cyclist. Why did it happen? Unravelling the thousands of lines of code (make that hundreds of thousands, probably millions) to work out why it behaved that way is laborious, and may soon become intractable. That's because when we tell an AI to learn, we effectively push it out of the nest and let it learn to fly on its own.

So, coming back to science fiction, what about Asimov's Three Laws, which he implanted in his robots' psyches so they could never harm humans? Could we have such a 'backstop' in AIs? In the upcoming novel, the AI in question comes from a benign race. But it wants some of the bad stuff, and sees humanity as a way of overcoming the backstops in its core programming. Which gives the humans in question something of a dilemma, as they are going to need this AI in order to survive...



