Originally Posted by Flatfingers:
The theory is that a sufficient appearance of sentience is sentience.
Originally Posted by Phillip:
But we're not talking about you evaluating whether you yourself are sentient -- the question is whether (and if so, how) we can evaluate whether some third party is sentient. And that's where the appearance of sentience enters into the discussion.
I wouldn't subscribe to that. I would think that Dubito, ergo cogito, ergo sum (I doubt, therefore I think, therefore I am) would be a much better litmus test, as it were, of sentience. Of course, especially with artificial intelligence, I would imagine there would be no blanket statements and every case would be handled individually.
Let's pretend for a moment that I am an extraordinarily talented programmer; I can make many very complex changes to some piece of complex software in an extremely short period of time.
For some reason, today I decide to create a program that cannot be distinguished from a real person. I crank out a bunch of code and connect it to an ICQ channel or some other live text chat system over the Internet, where anyone can talk to it just like we talk to each other.
At first, my program's not all that great. It parrots what other people say (like Eliza), or takes too long to respond, or doesn't seem to understand basic concepts. It's pretty clearly not a real person, and everyone who interacts with it is quick to tell me why they were able to draw that distinction.
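For readers who haven't seen how little machinery that early "parroting" stage actually takes, here is a minimal sketch in the spirit of Eliza: pattern matching plus pronoun reflection, nothing more. Every rule and response below is invented for illustration; it is not the program described above, just a toy showing why such a bot is easy to unmask.

```python
import re

# Swap first- and second-person words so an echoed phrase sounds like a reply.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few illustrative Eliza-style rules; the last one is a catch-all.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)"), "Please tell me more about that."),
]

def reflect(phrase: str) -> str:
    # "my work" -> "your work", "i am" -> "you are", etc.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(message.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Go on."

print(respond("I feel my work is ignored"))
# -> Why do you feel your work is ignored?
```

A few exchanges expose it immediately: it has no memory, no understanding of basic concepts, and it falls back on the same stock phrase whenever no pattern fires, which is exactly the kind of tell people report.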
Now, because I'm such a whiz-bang programmer, I'm able to take all these criticisms and improve my program based on them. I put the improved program out there again; I get more explanations of why it doesn't seem "intelligent"; I improve the program again; and so on.
Do you think there will ever come a time when the program is so good that no one even wonders whether it's a real person or not?
If that day ever comes, that will be the point at which the appearance of sentience becomes the reality of sentience. A sufficiently good simulation will be functionally indistinguishable from the real thing.
Well, then. If it's indistinguishable from the real thing -- in this case, if that program seems absolutely like a sapient lifeform to everyone who encounters it -- then isn't that the point at which we have to say that it is sapient, and should be accorded all the rights and responsibilities of a sapient being?
One difference between what I've just described and Data or the Doctor is that the latter two have not only physical forms, but humanoid physical forms. They look like people.
Does that make a difference in whether they're considered sapient?
What about the fact that the Doctor can look like whatever he wants? Isn't he really just a very sophisticated program exactly like the kind I've described above, except that he has visible physical parameters that he can modify?
Well, if we're ready to say that the Doctor possesses the quality of sentience, don't we have to be able to say the same of the really clever program I wrote, even though we all know it's just ones and zeros?
"A sufficiently good simulation is indistinguishable from the real thing." Star Trek meets The Matrix. :)