Sunday, October 30, 2016

Can we build Artificial Intelligence without losing control over it?

Last Thursday, October 27, I had the chance to listen to João Valente Cordeiro, Professor of Health Law and Ethics, Science and Technology, as keynote speaker at SAS FORUM Portugal 2016.

"On the next day, no one died - Digital Revolution, Health and (i)mortality" was the title of his captivating speach, starting on the Pale Blue Dot (taken on Feb 1990 by Voyager 1), Professor Cordeiro led the audience on a trip about our past, our present and our (possible) future and on how the exact same incredible technology developments can impact, in a myriad of forms, both positively and negatively in humanity.

Gone is the old debate over "strong vs. weak artificial intelligence" - just consider that, at a minimum, an AI system must be able to reproduce (mimic) aspects of human intelligence (human cognitive functions), and you will find that this is the current state of the art. AI agents are now designed to perform specific tasks such as speech recognition, natural language processing, facial recognition, internet search and driving a car. This "specific-domain AI" allows "machines" to outperform humans at whatever specific task one can imagine; playing chess, Jeopardy! or Go are examples that let us foresee applications of current AI systems. From personal assistants (like Siri) to equation solvers and self-driving cars, AI is progressing rapidly, encompassing a wide range of capabilities, from Google's search algorithms to IBM's Watson and autonomous weapons.

Though most people involved in the field of artificial intelligence and cognitive computing are excited about these developments, many worry that, without proper planning and reflection, advanced AI could destroy humanity. In recent years, philosophers and ethicists - like Professor Cordeiro - have taken a step forward in the sincere and open discussion of these worries about the long-term future of artificial intelligence, as it presents fascinating controversies for humanity.

The threat of uncontrolled AI, Sam Harris argues in a recently released TED Talk, is one of the most pressing issues of our time. Yet most people "seem unable to marshal an appropriate emotional response to the dangers that lie ahead." Harris explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:
  • Intelligence is a product of information processing in physical systems.
  • We will continue to improve our intelligent machines.
  • We do not stand on the peak of intelligence or anywhere near it.

