I just got back from watching the movie Transcendence, and while it was not exactly the greatest film of the year (that’s still The Grand Budapest Hotel), it did spark some interesting thoughts on artificial intelligence.
Something the movie touches upon is that the greatest barrier to creating a self-aware artificial intelligence is that we do not yet fully understand what consciousness means. What makes a human self-aware, thinking, and feeling remains a mystery. Even though all thought processes can technically be reduced to patterns of electrical impulses, and feelings to biochemical reactions, I think most people would agree that there is more to a human than pure physicality. It might be a romanticisation of our own species, or a hope that we are more than a mere biological diversification of a randomly evolved universe. It might be a deeper spiritual conviction that feeds the notion of us being special and different from other life. Whatever the case, until we fully understand the human mind, is it at all possible to design an artificial one? Could we instead let one evolve, as we have?
In fiction, artificial intelligences that have reached the point of singularity are almost always malignant. They take over the world, or attempt to “perfect” flawed humans by radically altering them against their will. The AI plays the role of the villain, usually sprung from the well-meaning efforts of scientists, but representing the disastrous consequences of trying to play god, or indeed, trying to create one. It is of course rather silly to speculate on what will happen at the moment of technological singularity, since it, by its very definition, is a point beyond which the future of society becomes completely unpredictable. This makes it a useful tool for writers, but few seem to have pondered the possibility of an AI that reacts as any other living being does when it is born. In the book Speaker for the Dead, by Orson Scott Card, singularity is reached and passes by largely unnoticed. The intelligence that is spontaneously spawned by an intergalactic version of the Internet is more like a baby than anything else. In time it grows and learns, but it doesn’t reveal itself to humanity, having seen how hostile the race can be towards the unknown, and especially towards new, alien races.
If, as Card imagines, humanity suddenly gains access to, or creates, an AI that can process the entirety of human knowledge, but is also conscious, empathic, and even kind, our very existence as humans would be unpredictably altered for all time. But can an artificial intelligence ever be seen as a truly sentient being? Or will we forever treat it as a creation of ours, an intellectual slave meant only to serve us, much as Europeans at first didn’t consider Africans to be real humans? Would our own prejudice against all things foreign reject, or possibly antagonise, an artificial being such as this? Could these very actions in themselves bring about the dystopian visions that science fiction writers of the 20th century hold so dear?
An open mind is easy to preach, but when faced with a being greater than us, how will we actually react?