Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest. – Isaac Asimov
Even a superintelligent computer needs input from humans to build a program. It is great if you are a scientist or a computer professional and can provide mathematical models or algorithms. But what if you don’t know how to specify what you need from the program? Can a computer really understand us? Can people trust computers to build correct systems for their needs? Will communication with a computer be comfortable and effective?
We should consider four important areas of software creation to answer these questions:
- Understanding – can a computer comprehend our language and complex ideas?
- Engagement – can a computer effectively involve us in communication?
- Guiding – can a computer help us understand our needs, direct our thinking and retrieve useful information?
- Trust – can we trust that a computer will serve our human interests, obey rules and do no harm?
Alan Turing proposed the first test of machine intelligence: a computer is intelligent if, in conversation, you cannot distinguish it from a human (John Searle argues in his Chinese Room thought experiment that this is not enough). To pass the test, a computer must possess intelligence, master language and understand the meaning of words.
- Ludwig Wittgenstein’s beetle-in-a-box argument suggests that computers must be part of our lives, social interactions and cultural context to truly understand us and construct similar meaning.
- Noam Chomsky argued that every human language is built on a limited set of organizing rules – a universal grammar. Therefore, it may be possible for computers to master language without learning an infinite number of possible constructions and word combinations.
- However, to possess strong AI (comparable with human intelligence), a computer must learn to go beyond the initial rules and axioms provided by humans and define its own (a point suggested by Gödel’s incompleteness theorems). Strong AI would allow the computer to become our partner in constructing new knowledge and concepts.
The best source of knowledge and a potential birthplace of computer intelligence is the Internet. Tim Berners-Lee, the father of the World Wide Web, envisions marking up web content with special tags for computer consumption – the Semantic Web. However, effective implementation faces several drawbacks: it requires extra effort, reflects the limited perspective of the people who tag the information, makes relationships difficult to identify, and is easy to mislead and manipulate.
Tim O’Reilly supports another approach: harnessing collective intelligence (Web 2.0), where “meaning was already being encoded unconsciously by web page creators when they linked one page to another”. Google uses this approach in PageRank. The Semantic Web creates meaning for computers by adding something to web content; Web 2.0 relies on the meaning implicitly encoded in the millions of ways people use and link that content. For example, computers could identify objects in pictures based on Google Image search results. I believe that combining natural language processing with Web 2.0 is a better path to computer intelligence than the Semantic Web.
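PageRank’s core idea – that every link is an unconscious “vote” for the page it points to – can be sketched in a few lines. This is a minimal textbook-style illustration, not Google’s production algorithm; the tiny link graph and the damping factor of 0.85 are standard illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute page ranks for a dict mapping
    page -> list of pages it links to (every page must link somewhere)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Base rank models a random surfer jumping to any page.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # A page shares its rank equally among pages it links to.
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A tiny hypothetical web: everything links to "hub", so it ranks highest.
graph = {
    "hub": ["a"],
    "a":   ["hub", "b"],
    "b":   ["hub"],
}
ranks = pagerank(graph)
```

No page author here declared “hub is important”; its importance emerges purely from how others link to it – exactly the implicit meaning O’Reilly describes.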
How broad must knowledge be to understand humans? Good human programmers are fluent in the customer’s domain language, the problem context and concepts far beyond programming. That is much more than current AI achieves in mastering the simple rules and language of chess, stock trading or air traffic control.
In the past, some people dreamed of fully automated, humanless services, where customers would deal only with machines. It didn’t work out that way – today, while automation reduces the number of people in most jobs, we have more and more people working in services.
There are a few reasons why people prefer communicating with other people rather than machines:
- social – our biological need for social contact; we cannot live without other people. Aristotle said that man is a social animal.
- specialization in understanding other people – we are excellent at reading nonverbal signs and nuances of expression, especially given the often-cited estimate that words carry only 7% of meaning in human communication.
- empathy – other people can make us feel better, comfortable and confident.
- humor, fun and the joy of conversation with other people.
- shared experience – we all have similar bodies, senses and desires, and we all deal with the same physical world. We share culture, history, and common problems and interests.
Humanoid robots could enhance communication. Rodney Brooks and his team at MIT have made great progress designing and building realistic robots that operate in real-world contexts. Marvin Minsky even goes so far as to say that human-like robots such as Cog (a humanoid that interacts with the world) and Kismet (a sociable humanoid robot) could, in a sense, be regarded as conscious.
We tend to anthropomorphize, admire and attribute personality to some machines, like cars – much the same way we love animals. Perhaps we can overcome technophobia and become comfortable communicating with computers.
Our biological nature complicates matters for computers. They will encounter our irrational behavior – politics, the desire for power, bias, self-interested pursuit of personal advancement, and so on.
Software requirements are often difficult, unclear and open-ended. In addition, people care about aesthetics and usability, which are hard to master even for human experts. Finally, people with different backgrounds, knowledge and experience will describe the same problem in completely different ways.
A computer should learn to distill, refine and analyze human input to extract useful information. It should build a shared theory, then explore and test it together with the human by creating prototypes and visuals.
Another effective and natural way to describe needs is storytelling and examples, but a computer may have difficulty picking up clues and meaning from verbal information alone. Communication could be improved by making it more direct: Microsoft Research is building a scheme to let computers access human brains, and MIT is developing a device to translate the thoughts of a paralyzed person.
In the end, we cannot allow computers to write programs if we don’t trust them. It could be dangerous if our privacy, security, well-being and lives were threatened by the accidental or even intentional consequences of these programs. Our morals, values and principles could also be easily undermined by an insensitive computer. Can you imagine an effective AI for creating porn sites, spam programs or bank-hacking tools?
Isaac Asimov’s Three Laws of Robotics could become relevant.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm. A computer should understand what is good and what is bad – questions that are difficult even for humans and depend on culture, society and situation. What should a computer do if it must choose between sacrificing the lives of a few people and potentially saving many more?
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law. A computer should follow orders, but it should resist becoming an evil instrument in the wrong hands. Taboos, morals and ethical principles should be embedded in its thinking.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. A computer’s self-interest matters: focusing on goals, avoiding breakdowns and preserving its integrity. But if we give a computer a sense of self and intentionality, it could become conscious and independent. Do we want that?
Computers would have a tough time interacting with humans even if we knew exactly what we needed, behaved completely rationally and were ready to cooperate with machines. But we will not: we have difficulty expressing ideas, struggle to understand ourselves, distrust others, have personalities and ambitions, play political games and make mistakes. Even the smartest computer, one that could overcome all these problems, would still face challenges in building convenient, reliable and useful programs. Any human programmer can confirm it. That is the topic of the next post.