Thursday 15 December 2016

How to Coexist Successfully with 'AI' in the Data Age

At the turn of the century, few, if any, could have anticipated the many ways artificial intelligence would come to affect our lives. Machines keep getting smarter in the digital age, and the race with them is getting harder to run. Everyone has an opinion about what to expect from artificial intelligence (AI), artificial general intelligence (AGI), artificial superintelligence (ASI), or whatever acronymic variation you prefer. Ideas about how, or whether, it will ever surpass the boundaries of human cognition vary greatly, but they all have at least one thing in common: they require some degree of forecasting and speculation about the future, and so there is plenty of room for controversy and debate.
How true is “The Assumption”?
The assumption is that intelligence is more powerful than anything else, and that human intellect could never coexist with a superintelligence: an entity that might be to us what we are to a rabbit. Or an ant.
Without a good definition of “human-level intelligence”, it is difficult to answer the question. What exactly defines human intelligence? What do our brains have that machines cannot replicate? The brain is a composition of chemicals and biological matter, unmatched among known life in its ability to process information and aid survival. Scientific studies of feelings, emotions, and thoughts have mapped the regions of the brain that are active when we feel fear, pleasure, and a variety of other emotions, yet we are still very far from fully understanding the mechanisms of human intelligence. It is reasonable to assume that human intelligence is a product, at least in part, of computational processes running on biological components. Emotions, once thought the dominion of an unobservable soul, are now visible as electrochemical reactions. If we can isolate the chemical components and find electronic analogs, machines may be able to experience the same emotions. To create an AI and prevent it from going rogue and threatening us, one would need to find the set of operating parameters the human brain follows and mimic them in an electronic format.
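As a toy illustration of “computational processes running on biological components”, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the standard simplified electronic analogs of a biological neuron. The constants are illustrative assumptions, not measured brain parameters.

    def lif_neuron(inputs, threshold=1.0, leak=0.9, reset=0.0):
        """Leaky integrate-and-fire: a classic simplified model of a
        biological neuron. The membrane potential accumulates input,
        decays ('leaks') each step, and emits a spike when it crosses
        the threshold. All parameters here are toy values."""
        potential = 0.0
        spikes = []
        for x in inputs:
            potential = potential * leak + x   # integrate with leak
            if potential >= threshold:
                spikes.append(1)               # fire...
                potential = reset              # ...and reset
            else:
                spikes.append(0)
        return spikes

    # A steady drip of sub-threshold input still produces periodic spikes:
    print(lif_neuron([0.3] * 15))
    # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

None of this claims to capture emotion; it only shows that a behavior of biological hardware can be mimicked in a handful of lines once its operating parameters are written down.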
Why should AI depend/build on human values?
The short explanation is that without the experience gained through thousands of years of human civilization, an AI would lack the knowledge needed to avoid destroying itself. When humans try something new, we usually aren’t sure how it will turn out, but we evaluate the risk, formally or informally, and move forward. Sometimes we make mistakes, suffer setbacks, or fail outright. Why would a superintelligence be any different? Why would we expect it to do everything right the first time, or to always know which thing is the right one to try in order to evolve? Even if a superintelligence is much better at everything than humans could ever hope to be, it will still face unknowns; it will have to make educated guesses, and it will not always guess correctly. Even when it does, its implementation might fail for any number of reasons. Sooner or later, something might go so wrong that the superintelligence finds itself in an irrecoverable state, facing its own catastrophic demise.
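To make the “sooner or later” intuition concrete, here is a minimal sketch in Python. The numbers are purely hypothetical; the point is only that a small per-decision chance of irrecoverable failure compounds over many decisions.

    import random

    def survival_probability(p_fail: float, steps: int) -> float:
        """Analytic chance of surviving `steps` independent risky guesses."""
        return (1.0 - p_fail) ** steps

    def simulate(p_fail: float, steps: int, trials: int = 10_000) -> float:
        """Monte Carlo estimate of the same survival probability."""
        survived = sum(
            all(random.random() > p_fail for _ in range(steps))
            for _ in range(trials)
        )
        return survived / trials

    # Even a guesser that is right 99.9% of the time faces
    # near-certain eventual failure given enough decisions.
    for steps in (100, 1_000, 10_000):
        print(f"{steps:>6}  {survival_probability(0.001, steps):.5f}")
    # Output:
    #    100  0.90479
    #   1000  0.36770
    #  10000  0.00005

The exact figures don’t matter; the argument in the text boils down to this geometric decay, and accumulated human experience is one way of lowering the per-decision failure rate.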
This is why human values are fundamental to the destiny of the universe, and why we can expect a future superintelligence to respect humans and human values, and even to develop them to a much higher level. The more intelligent and refined technology gets, the more it will reflect the values of the people creating and using it. To avoid disastrous conflicts in the realm of technology, we will need all the experience we have gained in mixing cultures and learning to respect each other, a process in which we still have a lot to learn.
Would the aforementioned proposition always work?
Now, here’s the key issue: while there are ideas about how we can defend ourselves and manage the risks of anything up to nanotechnology, there is no such possible defense against a superintelligence that decided to attack us, simply because a superintelligence would always be smarter than us and would find a way to circumvent any defense we could possibly imagine. We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.
The upshot?
We already coexist with a “superintelligence”: the internet itself. Each of us who contributes something to it helps build up this giant “machine”. The question of how humans will coexist with a new form of intelligence is currently impossible to answer; there is no historical precedent for how humans react when confronted with such an issue. It seems that AI will have to be developed in a way that keeps the differences between humans and AI apparent, as a reminder of that difference. After all, if we develop an artificial intelligence that doesn’t share the best human values, it will mean we weren’t smart enough to control our own creations.
