Wow, solid wiki article! It’s very hard to say anything on the subject that hasn’t been said.
I didn’t see the simple phrasing:
“What if the human brain is a Chinese Room?”
but that seems to fall under eliminative materialism replies.
Part of the Chinese Room program (both in our heads and in an AI) could be dedicated to creating the experience of consciousness.
Searle has no substantial logical reply to this criticism. He openly takes it on faith that humans have consciousness, which is funny because an AI could say the same thing.
The whole point of the Chinese room is that it doesn’t need anything “dedicated to creating the experience of consciousness”. It can pass the Turing test perfectly well without such a component. Therefore passing the Turing test - or any similar test based solely on algorithmic output - is not the same as possessing consciousness.
The problem with the Chinese room thought experiment is that it does not show that, at all, not even a little bit. The thought experiment is nothing more than a stupid magic trick that depends on humans assuming other humans are the only creatures in the universe that can understand. Thus, when the human in the room is revealed to not understand anything, there must be no understanding anywhere near the room.
But that’s a stupid argument. It does not answer the question of whether the room understands or not. Quite the opposite: since the room by definition passes all tests we can throw at it, the only logical conclusion should be that it understands. Any other claim is not supported by the argument.
For the argument to be meaningful, it would have to define “understand”, “consciousness” and all the other aspects of human intelligence clearly and show how the room fails them. But the thought experiment does not do that. It just hopes that you buy into the premise because you already believe it.
“The room understands” is a common counterargument, and it was addressed by Searle by proposing that a person memorize the contents of the book.
And while the room passes the Turing test, that does not mean that “it passes all the tests we can throw at it”. Here is one test that it would fail: it contains various components that respond to the word “red”, but it does not contain any components that exclusively respond to any use of the word “red”. This level of abstraction is part of what we mean by understanding. Internal representation matters.
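To make that “red” test concrete, here is a purely hypothetical sketch in Python (not anything Searle or the comment above actually proposed). Both setups return identical answers, but only the second contains a single component that is consulted for every use of the symbol for “red” (红); the made-up rule table and ConceptNode class are illustration only.

    # Hypothetical rule table: the symbol for "red" (红) is handled piecemeal
    # by unrelated entries; no single component fires on every use of it.
    RULE_TABLE = {
        "红灯是什么意思？": "红灯表示停止。",   # "What does a red light mean?" -> "Stop."
        "血是什么颜色？": "血是红色的。",       # "What colour is blood?" -> "Red."
    }

    class ConceptNode:
        """One component that responds to every use of its symbol."""
        def __init__(self, symbol):
            self.symbol = symbol
            self.activations = 0
        def sees(self, text):
            if self.symbol in text:
                self.activations += 1

    red = ConceptNode("红")   # a dedicated internal representation of "red"

    def answer_plain(question):
        # Bare table lookup: nothing tracks "red" across its different uses.
        return RULE_TABLE.get(question, "不知道。")   # fallback: "I don't know."

    def answer_with_concept(question):
        red.sees(question)   # the same node is consulted for every use of 红
        return RULE_TABLE.get(question, "不知道。")

    # From the outside the two answer functions behave identically;
    # the difference is purely in the internal structure.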
it was addressed by Searle by proposing that a person memorize the contents of the book.
It wasn’t addressed, he just added a layer of nonsense on top of a nonworking thought experiment. A human remembering and executing rules is no different from reading those rules in a book. It doesn’t mean a human understands them just because he remembers them. The human intuitive understanding works at a completely different level than the manual execution of mechanical rules.
it contains various components that respond to the word “red”, but it does not contain any components that exclusively respond to any use of the word “red”.
The human intuitive understanding works at a completely different level than the manual execution of mechanical rules.
This is exactly Searle’s point. Whatever the room is doing, it is not the same as what humans do.
If you accept that, then the rest is semantics. You can call what the room does “intelligent” or “understanding” if you want, but it is fundamentally different from “human intelligence” or “human understanding”.
This is exactly Searle’s point. Whatever the room is doing, it is not the same as what humans do.
He fails to show that. All he has shown is that the human+room system is something different than just the human by itself. Well, doh, nobody ever assumed otherwise. Running a NES emulator on my modern x86-64 CPU is something different from running an original NES too. That doesn’t mean that the emulator is more or less capable than the real NES or that the underlying rules driving the emulator are different from the real thing. You have to actually test the systems and find ways in which they differ. Searle’s experiment utterly fails here.
It’s a thought experiment involving a room where people write letters and shove them under the door of the Chinese kid’s dorm room. He doesn’t understand what’s in the letters, so he just forwards the mail randomly to his Russian and Indian neighbours, who sometimes react angrily or happily depending on the content. Over time the Chinese kid learns which symbols make the Russian happy and which symbols make the Indian kid happy, and so forwards the mail correspondingly, until he starts dating and gets a girlfriend who tells him that people really shouldn’t be shoving mail under his door, and he shouldn’t be forwarding mail he doesn’t understand for free.
Imagine that you’re locked in a room. You don’t know any Chinese, but you have a huge instruction book written in English that tells you exactly how to respond to Chinese writing. Someone outside the room slides you a piece of paper with Chinese writing on it. You can’t understand it, but you can look up the characters in your book and follow the instructions to write a response.
You slide your response back out to the person waiting outside. From their perspective, it seems like you understand Chinese because you’re providing accurate responses, but actually, you don’t understand a word. You’re just following instructions in the book.
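If it helps to see the mechanics, here is a toy Python sketch of that setup, with a made-up two-entry rulebook standing in for the giant instruction book. The operator produces correct-looking answers by pure lookup, with no idea what any of it means; the entries and the fallback reply are invented for illustration.

    # Toy stand-in for the instruction book: Chinese input -> Chinese output.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
        "天空是什么颜色？": "天空是蓝色的。",     # "What colour is the sky?" -> "The sky is blue."
    }

    def operator(note):
        # Follow the book mechanically; no understanding is involved.
        return RULEBOOK.get(note, "请再说一遍。")   # fallback: "Please say that again."

    print(operator("你好吗？"))   # looks like fluent Chinese from outside the room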
What is a Chinese room?
https://en.wikipedia.org/wiki/Chinese_room
Man, I love coming across terms like this.
Chinese Room, Chinese Walls, Dutch Treat, Dutch Uncle, Dutch Oven.
Wow! Me, too! What is a Dutch Oven!?
A covered pot.
Or a fart in a blanket :)
*Satisfied nod.*