Artificial Intelligence and Approaches in the Philosophy of Mind

Logos Publishing


Artificial Intelligence (AI) has lately become one of the most fascinating subjects in contemporary science and technology. Thinkers, psychologists, and neuroscientists are already debating the intersection between AI and the human mind, creating fertile ground for the philosophy of mind. Some philosophers of the past might think we live in a golden age of knowledge, in which any device with internet access opens the door to a true oracle, ready to provide the most varied answers from a catalog of virtually all the information humanity has ever recorded. Others might hold a more skeptical view, on which machines could never satisfactorily answer philosophical questions. Within the Philosophy of Mind, we can explore how technology seeks to imitate and understand human behavior and mental processes.

What is Artificial Intelligence?

Etymologically, "intelligence" derives from the Latin inter legere, "to choose between." In this sense, intelligence is the capacity to select the most efficient way to perform a task. "Artificial," in turn, refers to something produced by human beings. Artificial Intelligence, therefore, is intelligence engineered by humans to endow machines with abilities that simulate human intelligence.

Although we already find the distinction between the mind (res cogitans — thinking substance) and the body (res extensa — material substance) in Descartes’ Discourse on the Method, or the example of the character Talos (a giant bronze automaton that protected the island of Crete in Greek myths), the development of more in-depth ideas on the subject dates to the mid-20th century, following technological advances.

John McCarthy (computer scientist) and Marvin Minsky (cognitive scientist), both then at MIT, define AI, respectively, as "the science of studying the emulation of human intelligent behavior through machines" and "the science of making machines do things that would require intelligence if done by men" (Nakabayashi 2009). Both answer the question "Can machines think?" in the affirmative; and Herbert Simon, a scientist at Carnegie Mellon, holds that human thought itself can be understood as a process of manipulating symbols and information. Intelligence would not be a metaphysical mystery, but rather a set of operations that a computer can replicate (Simon 1969).
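Simon's thesis, that thinking is rule-governed symbol manipulation, can be made concrete with a minimal sketch. The rule table and symbols below are invented purely for illustration; they are not drawn from Simon's own programs:

```python
# Toy symbol-manipulation system in Simon's spirit: "intelligence" modeled
# as the rule-governed transformation of symbol structures.
# The rules are arbitrary placeholders, not a real cognitive model.
RULES = {("A", "B"): "C", ("C", "D"): "E"}

def derive(symbols: list) -> list:
    """Repeatedly rewrite adjacent symbol pairs according to the rule table
    until no rule applies. The procedure is purely formal: it never asks
    what the symbols mean, only what shape they have."""
    changed = True
    while changed:
        changed = False
        for i in range(len(symbols) - 1):
            pair = (symbols[i], symbols[i + 1])
            if pair in RULES:
                symbols[i:i + 2] = [RULES[pair]]
                changed = True
                break
    return symbols

print(derive(["A", "B", "D"]))  # ["A","B"] -> "C", then ["C","D"] -> "E"
```

On Simon's view, what makes such a process "intelligent" is nothing over and above the operations themselves; scaling the rule set up, not changing its kind, is what separates this toy from a mind.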

AI and Philosophy of Mind: Dialogues and Critiques of the Main Theses

1. Dualist Approach: The Mind and the Body

For a philosopher with a rationalist, dualist worldview, AI would face serious obstacles to acquiring consciousness. Classical authors such as Descartes and Leibniz would suggest that the mind (connected to the soul and even to divine substrates) differs from the physical body, making it unlikely that hardware and software (material entities) could ever genuinely instantiate, rather than merely simulate, consciousness.

In Cartesian philosophy, dualism divides the human being into the mind (res cogitans), the thinking, non-extended, immaterial substance, and the body (res extensa), the extended, material substance. The two are independent but interact, and this interaction is necessary for a satisfactory epistemological experience. On this view, no matter how impressively an AI simulates intelligence and conscious behavior, it would never have genuine subjective experience or "feel" anything: such attributes belong exclusively to the soul, the immaterial substance.

2. Biological Naturalism: A Physicalist (Materialist) Approach to Consciousness as an Emergent Property of the Human Being

The analytic philosopher John Searle, who has made significant contributions to Social Philosophy and the Philosophy of Mind, is a noted critic of the idea that computers can have genuine understanding or consciousness. Searle takes consciousness to be an emergent biological property of the human brain, as natural an organic function as digestion. His most famous critique is the "Chinese Room" argument. Its thesis is that syntax (symbol manipulation) does not guarantee semantics (understanding of meaning); that is, a computer can manipulate symbols and simulate understanding without actually understanding anything:

"Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese [...] suppose that I am given a second batch of Chinese script together with a set of rules in English [...] The rules allow me to correlate one set of formal symbols with another set of formal symbols [...] the rules instruct me to give back certain Chinese symbols with certain sorts of shapes in response to certain other Chinese symbols. The input symbols are called 'questions' by the people who give them to me, and the output symbols are called 'answers' to the questions. From the external point of view—from the point of view of somebody reading my 'answers'—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements." (Searle 1980)
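The room Searle describes is, at bottom, a lookup procedure, and that is exactly what makes it so easy to program. The sketch below is a deliberately trivial illustration: the "rule book" maps a few question strings to answer strings (the entries are invented for this example), and the function produces fluent-looking answers without any understanding of them:

```python
# A toy rendering of Searle's room: the whole "mind" is a lookup table.
# The entries are invented examples; a real rule book would be vastly larger,
# but Searle's point is that size changes nothing about understanding.
RULE_BOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I am fine"
    "你会思考吗": "当然",  # "Can you think?" -> "Of course"
}

def chinese_room(question: str) -> str:
    """Return an 'answer' by purely formal symbol matching.

    The function never interprets the symbols; it only matches their
    shapes, which is the syntax-without-semantics Searle describes."""
    return RULE_BOOK.get(question, "对不起")  # fallback: "Sorry"

print(chinese_room("你好吗"))  # fluent output, zero understanding
```

From the outside, the answers may be indistinguishable from a competent speaker's; Searle's claim is that no amount of such rule-following adds up to understanding.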

3. Functionalist Approach: The Mind as a Functional Process

This approach likewise assumes a materialist worldview, in which physical reality is fundamental. Functionalist theses, however, align with the optimism of the MIT researchers and open the door to the idea that AI can acquire consciousness. Hilary Putnam and Daniel Dennett are among its primary proponents. Putnam defines mental states by their causal functions: the mind plays a mediating role between sensory inputs and behavioral outputs, a role that could, in principle, be replicated by non-biological structures.

Pain, for example, is a type of mental state that need not be tied to any one biological realization, which supports the idea that pain could also be felt by artificial brains. In his article "Psychological Predicates" (1967), Putnam illustrates the point:

"If every living being capable of feeling pain had a certain type of neurophysiology (for example, if all had excited C-fibers), then we could identify pain with the excitement of C-fibers. But the available evidence suggests that this is false: there are many types of animals that feel pain, and their neurophysiologies differ widely." (Putnam 1967)

If what matters for a mental state is not the material it is made of, but rather the functional role it plays (how it interacts with inputs, outputs, and other internal states), then a machine that replicates these functions can, in principle, have that same mental state.
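This idea of multiple realizability can be sketched in code: two agents built from different "substrates" share nothing but the same causal profile, and the functionalist test inspects only that profile. The class names, threshold, and behaviors below are hypothetical stand-ins, not a serious model of pain:

```python
# Sketch of multiple realizability: one functional role ("pain"),
# two different realizations. All names and values are illustrative.
class BiologicalAgent:
    def react_to_damage(self, intensity: int) -> str:
        # a C-fiber-like realization of the role
        return "withdraw" if intensity > 5 else "ignore"

class SiliconAgent:
    def react_to_damage(self, intensity: int) -> str:
        # same input/output mapping, entirely different substrate
        return "withdraw" if intensity > 5 else "ignore"

def realizes_pain_role(agent) -> bool:
    """Functionalist test: only the causal input/output profile matters,
    never the material the agent happens to be made of."""
    return (agent.react_to_damage(9) == "withdraw"
            and agent.react_to_damage(1) == "ignore")

print(realizes_pain_role(BiologicalAgent()), realizes_pain_role(SiliconAgent()))
```

Note that `realizes_pain_role` cannot even see which class it was handed; that blindness to substrate is precisely the functionalist's point.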

Daniel Dennett, meanwhile, seeks a demystified explanation of how the human brain works. He proposes that consciousness is not an immaterial mystery but a physical phenomenon arising from distributed brain processes, one that can "delude the user" in certain situations. He states: "There is no 'central point' in the brain where 'it all comes together'" (Dennett 1991), and suggests that the mind could be mapped and understood through technology and computation.

Can Machines Think?

After this brief journey through the main theses on the relationship between Artificial Intelligence and the philosophy of mind, it is clear that the discussion is far from over. With every new technological advance, we are invited to revisit fundamental questions about what it means to be intelligent and, ultimately, what makes us human. The future of AI lies not only in the creation of more efficient machines but also in the continued exploration of the mysteries of our own consciousness. Many of our upcoming articles will be dedicated to these questions, and we can explore them together. And you: which theses, in your view, present the best arguments? The big question remains: can machines think?

Peter Webster - Logos Publishing Editorial Staff

References

Nakabayashi, Luciana Akemi. 2009. "A contribuição da inteligência artificial (IA) na filosofia da mente" [The Contribution of Artificial Intelligence (AI) to the Philosophy of Mind]. Master’s thesis, Pontifícia Universidade Católica de São Paulo.

Simon, Herbert A. 1969. The Sciences of the Artificial. Cambridge, MA: MIT Press.

Searle, John R. 1980. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (3): 417–457.

Putnam, Hilary. 1967. "Psychological Predicates." In Art, Mind, and Religion, edited by W. H. Capitan and D. D. Merrill, 37–48. Pittsburgh: University of Pittsburgh Press.

Dennett, Daniel C. 1991. Consciousness Explained. Boston: Little, Brown and Co.
