
Monday, 15 June 2015

Is Artificial Intelligence a smart idea?

I watched Ex Machina last night, and it got me thinking again about Artificial Intelligence and what it will mean for humanity. Will it be a boon, heralding robots and thinking machines that can help us, or will it sound the death knell for mankind - a new intelligence that, as the character Nathan notes in the film, will look upon us as we look upon Neanderthals, and represent a singularity in our timeline?

I used to code, a long time ago, and later on worked with coders who developed cognitive simulations of the way we think about certain tasks. Nothing remotely emotional - just how experts used knowledge and heuristics to solve problems, from medical diagnoses to nuclear power crises to air traffic controllers avoiding mid-air collisions. This much can be done. But humans are still better at it. And the cognitive systems have no awareness; they are just tools, just another program that might one day support us and help us out in a tight spot.

The recent film about the breaking of the Enigma machine's war-time communication codes highlighted one of the brightest brains of the last century. But his legacy is not only modern computers; it is also a burning question about Artificial Intelligence, a question that occupies a large part of the theme of Ex Machina - how do you tell if a machine has intelligence and self-awareness?

The problem is that we inevitably think of intelligence as human intelligence. It is not merely intellect: there is emotional intelligence as well as logic, a difference worked to death in all sorts of tropes, from Star Trek's Spock to the robots and androids of countless films and books. The latter are seen as unemotional, the usual sub-text being that this makes their intelligence somehow inferior and flawed, and that humans are better. But is this simply hubris on our part?

Humans have organs, and hormones, that account for the emotional part of us, the part that is difficult to understand and, ultimately, to program. Can a machine ever have emotions? It can be programmed to act emotionally - maybe - but is that the same thing? The android David in the film Prometheus is a more contemporary depiction of a self-aware, questioning thinking machine, but he feels nothing emotionally, is ultimately nihilistic, and there is a (presumably intentional) sense of emptiness about him. An android that has no emotions has no drive, no goal; one can wonder what the point of its existence would be. Of course, a drive, a reason to go on, can be programmed, but if the intelligence is self-aware, then sooner or later self-questioning will take place, and with it self-determination.

Perhaps we should change the age-old maxim from

I think, therefore I am

to

I think and feel, therefore I am

Can we know what a truly independent artificial intelligence might want, other than what we program it to want? If we're lucky, it might want to look after us, or, luckier still, might venture outwards and leave us to our own devices. Screenwriters, from The Terminator to The Matrix, usually portray a darker scenario, however, one where the machines seek to replace us, becoming the next generation: homo machinas.

I grew up reading and loving Isaac Asimov's Robot series, but the simple idea that 'do no harm to humans' could be hard-coded seems nowadays to be a stretch. Asimov himself spent many books exploring this idea, trying to find ways in which the robots in question could out-manoeuvre this primary constraint in their programming.

In my own writing, I focused on this question in Eden's Endgame. Near the start of the book, two people are sent to retrieve a remnant of a long-dormant machine race, one that almost took over the galaxy two million years earlier. This race, the Xera, was put down at great cost, but in Endgame they are woken up, and begin to rise again.

One question asked in the film Ex Machina, and in a number of others, is why we try to create something more intelligent than ourselves, an evidently questionable pursuit. The answers range from 'because we can' to the one given in the film, one programmer to another: 'wouldn't you, if you could?' In our case, there is clearly a distinction between intelligence and wisdom.

But in this vein, I use a scientist, Pierre, in the book as the one who tries to liaise with the Xera, and ultimately tries to convince them not to wage war on 'impure, chaotic organic species', including humans. Because he is a scientist, Pierre is fascinated by the Xera, indeed drawn to them, and so is willing to make the ultimate sacrifice. He values intelligence, the ability to think and comprehend, above all else. But the Xera need something else from him. Pierre knows they don't care about him, per se. But he has an ace up his sleeve. The chapter is called 'Goliaths', and Pierre is David... Here's an extract from the chapter, where Pierre crosses the threshold.


Pierre stood on the surface of the Machine asteroid as it hurtled through space. He was on an intercept course with one of Qorall’s Orbs, standing inside a small bubble of tailored atmosphere shielding him from cold vacuum and hard radiation. The asteroid-sized Machine remnant – all that was left of the race after Hellera’s deception – was solid metal, so there was nowhere else for him to go. The ground beneath him was flat and featureless. Since his childhood, he’d always looked to the stars; they had been his friends, the constellations a landscape sketching his hopes and dreams. But it was different seeing them this way, with the naked eye as opposed to via a screen or porthole back on Ukrull’s Ice Pick, or through layers of atmosphere back on Earth. Now the stars looked starker, stabbing at him through silent space. Somehow they felt hostile, accusing, as if they knew what he was planning. He started walking on the grainy metal surface. There was nowhere to go, but he needed to move to help him think this through one last time, before there was no going back.
           
 
He stopped walking, squatted down, and touched the hard surface. Did he trust the Machines? Of course not. That made no sense, they were logical creatures. Jen had used the right words – it needed to be hard-coded into them. But how? The Machines were in survival mode. Kalaran had awoken them, Hellera had used them and tried to eliminate them immediately afterwards. But the Machines would not feel anger, nor would they feel any remorse if they took over the galaxy by killing a few trillion organics, and not just those turned by Qorall. For the Machines, it was all about utility, propagation, and logical order rather than chaos. The idea of a new galaxy was an enticing prospect, and they had originally been designed by the Tla Beth to explore remote galaxies, but there was a lot of risk, and why should they leave this galaxy when they had what they needed here? Qorall's Orbs were a significant threat to the Machines, but if they could be destroyed, then the Machines could sit and wait out the final battle between Qorall and Hellera and then make their move.
Pure, cold logic.
The ground beneath his fingers rippled, and he glanced upwards, and saw it, a star that looked brighter than any other, with a golden light. Qorall's Orb, come to destroy the Machines, and him. He stood up, and opened his mind.
He tensed. This was the agreement. He wouldn’t sign the contract with blood, but with his DNA, and his organic mind’s force of will. That’s what they needed to survive and defeat the Orb, which was nothing more than a super-virus that worked by re-writing organic software. Most organic species – following Kalaran’s template – had strong will, but their emotional and intellectual ‘software’ was beneath their conscious control, and so was vulnerable; their coding was weak, so re-coding via the Orbs was easy. In contrast, the Machine race’s coding was very strong, ultra-disciplined, and had inbuilt intelligent monitoring and resilience. But the Machines lacked any organic sense of will; they had been designed by the Tla Beth to be servants, and so it was only a matter of time before the Orb exploited such a basic flaw, a trap-door in their metal-clad coding.
That was where Pierre came in. But there was the risk: he was about to try to help a dangerous force become even more powerful. The lesser of two evils? He couldn’t be sure. But he had made up his mind, he was committed.
 
He felt pressure on his feet, and glanced down. It had begun. Black metal vines crawled up his calves, his feet and ankles already encased. There was no turning back now. He took one last look at the Orb, at the stars he knew he would never see again this way, then closed his eyes, and held the vision of Kat and Petra in his mind, the only two people who had ever meant anything to him, knowing he would never meet them again. The metal around his legs invaded his flesh, freezing pinpricks stabbing into him, making his upper body shake as his core temperature freefell. He opened his eyes, gasped and cried out. He could no longer feel his legs, only a wave of ice creeping up his chest. He lifted his arms in front of him, watched metal gloves wrap around his hands, and felt the ice-metal on his neck. Whichever way he analysed it, he was dying, there would be no more Pierre after this. He just had time to taste the names on his lips of the two women who had meant more to him than anyone else. He spoke their names into the echoless void.
“Petra, Kat, forgive me.”

He could no longer see, the metal ice tightening like a tourniquet around his face, freezing tendrils drilling through his skull, skewering inwards. Pierre concentrated on one last thought, with all his being, with all his will, one single line of code.


 