originally posted in: Secular Sevens
Ever heard of the Chinese Room experiment? The idea behind it, if I remember right, was that until the person acts unlike a human, it still counts as a human. Then again, I'm not really good at these kinds of subjects, so I dunno.
-
That's alright. It's an open-ended question. So, if I understand correctly, the experiment essentially asserts that it is our behavior that defines us as human. Then it becomes a matter of what we define as human behavior, or even HOW we define human behavior.
-
Edited by Grey Shield: 3/1/2014 5:40:10 AM
Pretty much, I think. So if we were to take a robot and give it an AI capable of perfect human imitation, people would not be able to prove it wasn't a human, nor would they even suspect it. So, what if it didn't look like a human but acted like one? I would still say yes, as long as the behaviors are the same.
-
It's a thought experiment about whether we would be able to tell a digital consciousness from a human one, and, using the same language, whether a human can be thought of as a digital consciousness, just following programming without context as to what it's processing. The crux of the matter is whether a human who doesn't speak Chinese, ordered to sit in a room sorting the Chinese characters going in and out of the room, could be said to be engaged in conscious deliberation. The key point is context. Would a machine programmed to simulate our various processes in the smallest detail actually be conscious, or would it just be following its programming protocol? Obviously we don't have an answer to that question. But it does mean we need to stop approximating consciousness and find a way to concretely define it. And that process has infinite merit. Also, great topic.
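The rule-following picture above can be sketched in a few lines of Python. This is a toy illustration only: the rulebook and its entries are made-up stand-ins, and the point is that the program matches symbol shapes without any grasp of their meaning, just as the person in the room does.

```python
# Toy sketch of the Chinese Room as pure rule-following.
# The RULEBOOK entries are hypothetical; the program maps input
# symbols to output symbols with no understanding of either.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",  # "How are you?" -> "I'm fine, thanks."
    "你是谁?": "我是一个朋友.",  # "Who are you?" -> "I am a friend."
}

def room_reply(symbols: str) -> str:
    """Return the scripted output for a given input.

    The function never interprets the symbols; it only matches
    them against the rulebook, which is exactly the point of the
    thought experiment. Unknown input gets a stock reply.
    """
    return RULEBOOK.get(symbols, "请再说一遍.")  # "Please say that again."

if __name__ == "__main__":
    print(room_reply("你好吗?"))
```

From the outside, the room's answers can look fluent; the question the experiment raises is whether anything in this lookup process could ever amount to understanding.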
-
Thanks for the explanation. I think there's a bit of a dilemma with an AI imitating human thinking. If it is programmed to give human-like responses, but as an automatic response rather than through actual reasoning (the definition of which is up for debate), then it can be differentiated from a human mind, whether easily or not. But if it is somehow programmed to reason like humans, without the human experience of development and mortality and everything involved with our bodies, it could never behave or think like real humans, in my opinion. The movie "Her" made me ponder this kind of scenario. If an AI's mind is just like our own but lacks our biology, could it possibly be a part of humanity; could it truly share the human experience?
-
Yeah, that is one of the questions people thinking about this are trying to figure out. The first one, as noted, is what consciousness really is. Our definitions are approximate and arbitrary. We need a better foundation to define consciousness before we can even hope to answer any of these other questions. Whether that new definition would be accurate, though...that is a tough one. As to the Asimovian question of what we consider a person, we're still figuring out what all that will entail. I think there will be initial resistance to the idea of considering a digital persona a "person", but it will eventually be widely accepted in the spirit of inclusion and diversification. I suspect it will mirror the current cultural realignment with open homosexuality: questions, usually founded on subjective moral leanings, leading into general and open acceptance.
-
That's interesting. So, as these new possibilities come about, we could expand our definition of what a human or a consciousness is. Though we may not consider a machine with a human mind to be a human today, in the future (in the context of future values and whatnot, because of course these things change), we may include them as one of us. Ha! I guess that means this question could have two completely different answers depending on when you ask it.
-
Yeah. I think that actually is the case with pretty much every meaningful question. Just as Einstein defined a metric based on the speed of light, the only actual metric defining everything else, when we ask or answer a question seems to prime our answer. Perhaps that then is a useful metric for civilization: when you ask that question, whether the answer aligns with whatever preconceived notion you think of as correct. On that note, I personally think "human" should mean Homo sapiens, but a person can be anything that fits whatever criteria are used to define consciousness. One is simply ontological, for taxonomy's sake. The other is obviously more philosophical, but one Asimov himself could plainly see, and wrote about quite a bit. As I said, great topic. Happy to finally have something interesting to discuss round these parts.