Functionalism (or Computationalism) is the idea that consciousness is merely a byproduct of complex information processing. It is the dominant view in modern neuroscience, but it faces severe philosophical challenges.
The Hard Problem of Consciousness
The most famous refutation comes from philosopher David Chalmers, who distinguishes between the Easy Problems and the Hard Problem.
The Easy Problems: These involve explaining functions: how the brain discriminates stimuli, integrates information, or controls behavior. Complex processing can, in principle, explain all of these. We can build a robot that detects heat damage and moves its hand away.
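The reflex described above can be sketched in a few lines of Python. This is an illustrative sketch, not a real robotics controller; the function name and threshold value are invented for the example. The point is that the program discriminates a stimulus and controls behavior, yet there is plainly nothing it is like to be this program.

```python
# A minimal stimulus-response loop: it "detects damage" and "withdraws",
# solving an Easy Problem without any accompanying experience.
PAIN_THRESHOLD = 50  # arbitrary units; an invented value for illustration

def react_to_heat(sensor_reading: int) -> str:
    """Discriminate a stimulus and select a behavior."""
    if sensor_reading > PAIN_THRESHOLD:
        return "withdraw hand"
    return "stay"

print(react_to_heat(75))  # -> withdraw hand
print(react_to_heat(20))  # -> stay
```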
The Hard Problem: This asks why that processing is accompanied by a subjective experience (qualia).
Why doesn't the processing just happen "in the dark," like a computer script running in the background?
The Refutation: You can fully explain the mechanism (the complex processing) without ever explaining the experience. Therefore, consciousness is something over and above the processing.
The Chinese Room
This thought experiment, proposed by philosopher John Searle, attacks the idea that syntax (processing symbols) creates semantics (understanding meaning).
The Scenario: Imagine a man in a closed room who doesn't speak Chinese. He has a rulebook (the program) that tells him how to manipulate Chinese characters. If he receives a certain symbol, the book tells him to output another specific symbol.
The Result: To an observer outside, the man appears to understand Chinese perfectly; he is passing the Turing Test. However, the man actually understands nothing. He is just manipulating symbols based on shape.
The Implications: Digital computers are just faster versions of this man. They manipulate 1s and 0s (syntax) but have no understanding of what those symbols represent (semantics). Therefore, no amount of complex processing of syntax will ever magically turn into understanding.
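Searle's rulebook is, in effect, a lookup table, and the whole argument can be made vivid with a few lines of Python. The mapping below is fabricated for illustration; the characters mean nothing to the program, just as they mean nothing to the man in the room.

```python
# The rulebook as a lookup table: purely syntactic symbol shuffling.
# The entries are invented for this example.
RULEBOOK = {
    "你好": "你好！",          # "if you receive this shape, emit that shape"
    "你会说中文吗？": "会。",
}

def room(symbol: str) -> str:
    """Return whatever output symbol the rulebook pairs with the input."""
    return RULEBOOK.get(symbol, "？")

print(room("你好"))  # emits a fluent-looking reply with zero understanding
```

To an outside observer the replies look competent, but nothing in the program refers to anything; that gap between shape-matching and meaning is exactly what the argument exploits.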
The Knowledge Argument (Mary's Room)
Proposed by philosopher Frank Jackson, this argument suggests that knowing all the physical facts about processing is not the same as having the experience.
The Scenario: Mary is a brilliant neuroscientist who knows everything there is to know about the physics of color and how the brain processes color wavelengths.
However, she has lived her entire life in a black-and-white room.
The Event: One day, she steps outside and sees a red rose.
The Question: Does she learn something new?
The Refutation: Most people agree she learns what it is like to see red. If she learns something new, then her previous "complete" knowledge of the physical processing was actually incomplete. Therefore, conscious experience is not reducible to physical processing.
The Philosophical Zombie Argument
This is a logical possibility argument.
It is logically possible to conceive of a "Philosophical Zombie": a creature that is atom-for-atom identical to you and processes information exactly as you do, but has zero inner experience. It screams when hit, but feels no pain.
If such a creature is logically conceivable (even if not physically possible in our world), it proves that processing and consciousness are conceptually distinct. You can have one without the other, meaning they are not the same thing.
The Binding Problem
Information processing in computers is discrete and fragmented.
The Fragmented Processor: In a computer, data is stored in different addresses and processed sequentially or in parallel threads that don't "know" about each other.
The Unified Mind: Conscious experience, however, is unified.
You don't experience "red" + "shape" + "motion" as separate data streams; you experience a moving red ball.
The Refutation: There is no known mechanism by which billions of discrete processing events in the brain "bind" together into a single, unified subjective field. Merely adding more complexity to the processing doesn't explain how the unity emerges.
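The fragmented-processor picture above can be made concrete with a toy sketch. The field names and values below are invented for illustration: each feature lives in its own slot, and while a program can concatenate them into one string, concatenation is not unification.

```python
# Feature detectors as independent data streams: color, shape, and motion
# are stored at different "addresses" and computed separately.
percept = {
    "color":  "red",       # one structure...
    "shape":  "sphere",    # ...another...
    "motion": "leftward",  # ...and another.
}

# Joining the fields produces a description, not an experience: no part of
# this system undergoes a single, unified "moving red ball".
report = f"a {percept['motion']}-moving {percept['color']} {percept['shape']}"
print(report)
```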