Can machines think? The Chinese room argument
For centuries, thinking and solving complex problems were regarded as the proper domain of man, distinguishing him from other beings. Already in antiquity, the capabilities of the human mind were held to originate from an order other than the material world. The modern era, whose philosophical foundations were laid by René Descartes, turned out to be a triumphant moment for the human mind. Descartes made the body and the mind two completely separate and independent substances, and made man the only being on Earth in which the body, as an extended thing, and the soul, as a thinking thing, are combined.
The consequences of treating the ability to think as an exclusively human trait, often defended with theological arguments, are visible to this day. Even in the face of the reflections on other minds prompted by new discoveries about the cognitive abilities of other animals, many strive to maintain the hegemony of the human mind. Even so, our privileged position is becoming shakier and shakier. While we can still come up with ever more sublime arguments for why our thinking surpasses that of other animals, the spectacular successes of machines, such as the famous chess victory of Deep Blue over grandmaster Garry Kasparov, seem to embarrass us more and more often.
Machines replace humans not only in physical activities, as animals and less advanced technical tools used to do, but also in what until now was an exclusively human power: predicting the effects of actions, making key decisions, recognising phenomena, and reacting on the basis of acquired knowledge. Self-learning algorithms write movie scripts, paint pictures, translate languages, make jokes, and diagnose cancer. It’s not uncommon for us at Samurai Technology to use these and many other applications of machine learning to solve challenging problems. But can we say that the actions of these algorithms are based on thinking?
As early as the 1950s, Alan Turing argued that if a machine could use human language well enough to convince us that we were dealing with a human being, we could consider it to be thinking. Although today’s machines are still tested for intelligence using the Turing test, this criterion has also attracted many critics. One of them is the American philosopher of mind John Searle, whose counterargument is presented in a thought experiment known as the Chinese room.
Imagine, says Searle, that you neither speak nor read any Chinese dialect, but end up in a room where you have access to a rulebook for manipulating Chinese symbols. By following its instructions, you are able to convince a native Chinese speaker that they are talking to someone who also knows Chinese. A computer following the same instructions would pass the Turing test. However, according to Searle, there is no question of thinking in this case. Even though the instructions make it possible to simulate a Chinese speaker, we could not say of either the person or the computer in the room that their actions were based on an understanding of the Chinese language.
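The rulebook scenario can be sketched in a few lines of code. This is only a toy illustration of Searle’s point, not an implementation of anything real: the rule entries and the fallback reply below are invented, and a genuine Turing-test contender would need vastly more than a lookup table. What the sketch does show is that the procedure maps symbol strings to symbol strings with no representation of their meaning anywhere in it.

```python
# A toy "Chinese room": the operator follows a rulebook that maps
# input symbol strings to output symbol strings. Nothing in the
# procedure encodes what any of the symbols mean.
# (The rule entries and fallback are invented for illustration.)

RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def room_operator(symbols: str) -> str:
    """Return the reply the rulebook prescribes for the given symbols.
    The operator understands neither the question nor the answer."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(room_operator("你好吗?"))  # the room replies, but nothing here "knows" Chinese
```

Whether the operator is a person or a CPU makes no difference to the procedure, which is exactly the intuition Searle’s experiment trades on.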
As Searle says, a computer is by definition a symbol-manipulating device. Any process that can be described formally can therefore be simulated by a digital computer. However, according to the philosopher, manipulating symbols does not imply having mental states, which, in his view, are necessary for us to be able to talk about thinking. In the case of computers we can only speak of their use of syntax, while the human mind is also characterized by semantics (meaning, content). A machine can pass the Turing test, Searle says, but it cannot know that it has passed anything. People and other organisms with a nervous system, apart from following rules, understand meanings: they can interpret the context of specific situations and what that context means for them.
According to Searle, minds are products of biological brains, and the decisive factor in the existence of cognitive processes is the consciousness those brains produce. Here the philosopher argues against various views that consciousness is merely a computational tool in the brain, or that nothing can be said about it due to its subjective nature. Although we do not yet have a complete understanding of how the mind works, Searle argues that we must accept common-sense facts about consciousness and recognize its characteristics: it is irreducible, all its states are qualitative and subjective, it is unified (we experience its contents as one field, not separately), and it manifests itself causally in our behavior.
The belief that creating a program capable of performing computationally advanced operations also amounts to creating some form of consciousness is, according to Searle, an illusion to which we succumb. Machines merely simulate the processes taking place in the human mind. We may just as well simulate a downpour, but no one will expect the simulation to actually get us wet, says the philosopher.
According to Searle, consciousness is therefore a necessary condition for thinking; its origin is biological and cannot be reproduced even by a suitably complex program. It should be remembered, however, that the Chinese room argument is only the beginning of a discussion about the capabilities of computers that has been going on for several decades. Supporters of strong Artificial Intelligence (Strong AI) predict that within the next decade a machine will finally pass the Turing test. Some forecasts also predict that by 2045 there will be machines more intelligent than people. What is certain is that we do not yet know everything about the capabilities of our own minds or of their digital counterparts.
Samurai Technology uses machine learning and artificial intelligence to create world-changing projects. Samurai cooperates with companies from Poland and around the world and implements a number of projects in cooperation with the National Center for Research and Development. More information: samurai.technology