Wednesday, January 9, 2008

Civil Rights for Artificial Life Forms

Originally Posted by Obi3of7:
I was just watching the VOY episode "Author, Author", in which The Doctor has written a holo-novel and his rights to his work are questioned by the publisher and a judge because he is a hologram. In the end, the judge rules in favor of The Doctor only as an artist, not granting him Sentient Rights. The judge does encourage The Doctor, though, to press on in his quest to be granted sentience once he returns to the Alpha Quadrant. http://memory-alpha.org/en/wiki/The_Doctor

So this ruling in Data's case brings to mind the question of whether or not The Doctor achieved his goal of being granted Sentient Rights, and whether any other holograms and/or androids will pursue this same avenue.

In closing, these episodes bring to light the fact that the constant pursuit of civil rights will be a long-standing fight in our culture and world, and hopefully those rights won't be as hard to attain in the future.

Setting aside the questionable assumption that "civil rights" are truly in jeopardy in Western society, the question of who merits what Milton Friedman called being "free to choose" has always been a source of good humanist science fiction.

It raises all kinds of questions worth asking, because trying to answer those questions reveals something useful about what it means to be human.

When you bump into a new species, how do you know whether they're intelligent or not? What constitutes "intelligence"? At what point is there enough of it to consider a species of life sapient or sentient, or an individual of that species a sophont?

What about that notion of membership in a species -- is it possible to say that an individual entity is sentient if that entity is not a member of any species, but is unique? Yes, I'm aware that's how the script had things turn out for Data in TNG: "The Measure of a Man", but what about the larger case? In what meaningful way is the Doctor different from Data when it comes to the definition(s) of sentience?

When the Doctor or Data rewrite their own subroutines, is that really like what humans (or, presumably, other sentient biological lifeforms) do when they "learn" something? Human intelligence appears to be an emergent property (according to some students of this subject, notably Douglas Hofstadter, especially in his phenomenal Gödel, Escher, Bach: An Eternal Golden Braid). Neither intelligence in general nor behavior in particular is some specific low-level piece of code in our brains that we can find and tweak (at least not yet -- another source of interesting science fiction!), so maybe self-programming isn't a sign of sentience after all. (I don't necessarily believe that; I'm just raising the question.)
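
To make that distinction concrete, here's a toy sketch in Python. It's purely illustrative -- every name, rule, and number in it is invented for this post -- but it contrasts "editing a subroutine" with behavior that emerges from many small adjustments:

```python
# Two toy pictures of "learning", side by side. Purely illustrative:
# every name and number here is invented for this post.

# Picture 1: behavior as an explicit subroutine you can find and tweak.
# "Learning" means overwriting one specific entry -- the
# Doctor-rewrites-his-own-subroutines model.
rules = {"greeting": "Please state the nature of the medical emergency."}

def reply(prompt):
    return rules.get(prompt, "I am unable to comply.")

rules["greeting"] = "Hello there. How can I help?"  # one targeted edit

# Picture 2: behavior as an emergent property of many tiny adjustments.
# A one-neuron perceptron learns the AND function; afterward, no single
# line of code "contains" the concept AND -- it lives in the weights.
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):                      # many passes of small nudges
    for x, target in examples:
        error = target - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print(reply("greeting"))                  # behavior you can point to in the code
print([predict(x) for x, _ in examples])  # behavior that lives in no single line
```

If the Doctor's self-edits look like the first picture while human learning looks like the second, then "rewriting subroutines" and "learning" may not be the same thing at all -- though whether that difference matters to sentience is exactly the open question.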

What about that old chestnut of AI researchers, the Turing Test? (The CAPTCHA test -- the image of swirly letters and numbers whose text you type into a textbox -- that some discussion forums use to determine whether the author of a message is a bot is a kind of reverse Turing Test, with a machine as the judge.) The theory is that a sufficient appearance of sentience is sentience. If that's the practical test, then clearly the Doctor and Data pass... but what about all those other high-fidelity holodeck characters? Professor Moriarty might pass, but why wouldn't (for example) Minuet? And why wouldn't the EMH Mark I holoprograms being used for mining operations be considered able to pass a Turing Test? Even without the Doctor's new programming, they clearly have enough volition (as seen in VOY: "Author, Author") to form, and to communicate to each other, the suggestion to read the Doctor's holonovel, Photons Be Free.
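
As an aside, the CAPTCHA version of the test is almost embarrassingly simple to write down. Here's a bare-bones Python sketch of the protocol (the function names are mine, and the image distortion that does all the real work is only implied):

```python
# A bare-bones sketch of the CAPTCHA protocol -- a "reverse" Turing Test
# in which the machine is the judge. Purely illustrative: a real CAPTCHA
# renders the challenge as a distorted image precisely so that software
# can't read it; here the distortion is left to the imagination.
import random
import string

def make_challenge(length=6):
    """The text a real CAPTCHA would render as swirly letters and numbers."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def looks_human(challenge, response):
    """The entire test: did the respondent read back what the image showed?"""
    return response.strip().upper() == challenge

challenge = make_challenge()
print(f"(Imagine {challenge!r} rendered as a distorted image.)")
answer = input("Type the characters you see: ")
print("Probably human." if looks_human(challenge, answer) else "Probably a bot.")
```

All the difficulty -- and the whole Turing-ish character of the test -- lives in the rendering step this sketch skips: making the challenge easy for a human to read and hard for a program to read.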

So where should the line be drawn?

How should the line be drawn?

For all the knocks that Star Trek takes from more "realistic" TV shows and movies, it is well within the grand tradition of science fiction in posing questions like these... and leaving those questions up to us to think about.