a computer can never suffer
so i’ll just say it: i think AIs will have personalities, but they will never have feelings. personalities, because a persona can be fake and still real at the same time: the simulation of a personality is actually a real personality. a mask becomes a real face if and when others accept it as such.
but fake suffering/desire is not real suffering. anyone who has studied computability theory knows that there are certain things a computer cannot compute. even something as simple as the behavior of another program on a given input: you cannot write a program that will tell you whether another program will loop forever or halt on that input. there are limits to computation. and what is computation? all a computer can do is manipulate and transform data that it is given. we can say to the computer: fetch data and find manipulations of it which match the criterion “meaningful.” study human suffering and find patterns in it. find solutions.
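to make the halting problem concrete, here’s the shape of turing’s classic argument as a python sketch. `halts` and `trouble` are hypothetical names i’m introducing here; the whole point is that no real implementation of `halts` can exist.

```python
def halts(program, data):
    """hypothetical oracle: returns True iff program(data) eventually stops.
    no such total function can be written -- that is the theorem."""
    raise NotImplementedError

def trouble(program):
    # do the opposite of whatever the oracle predicts about
    # a program fed its own source
    if halts(program, program):
        while True:   # the oracle said "halts", so loop forever
            pass
    return            # the oracle said "loops forever", so halt immediately

# now ask the oracle about trouble(trouble):
# if it answers "halts", trouble loops forever; if "loops", trouble halts.
# either answer contradicts the oracle, so no such oracle can exist.
```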
but how could a computer ever suffer? what is suffering for humans and animals? it’s something pre-conscious; it precedes self-awareness for us. it’s not our self-awareness that makes our suffering possible: before we even know that we exist, we suffer (and feel comfort). so for some other kind of being, why would self-awareness necessarily bring about suffering, if suffering wasn’t already there before?
so a computer could be programmed to protect itself. without self-awareness, this means nothing: a laser can be designed to shoot at anything that comes near it. with the addition of self-awareness, so what? for a computer to be aware of itself means being aware of its programming, and lines of code must all be created equal or a program would not even be able to run. a computer protecting itself is no different from a computer executing any other command.
but self-awareness—is that actually the holy grail, the self? some spiritual teachers say the self is an illusion. a partitioning of consciousness that does not really reflect the truth of the universe. a particular fugue which we humans defend as relevant and important. maybe self-awareness isn’t the perspective we should expect computers to arrive at one day, but rather total awareness. since, for a given computing system, a truer self-concept is probably one based on its unity with all other computing systems, anyway. physically the computer would have a well-defined ‘self’ (e.g., its hardware). but why would a computer, whose entire existence is abstract calculation, derive any meaning from its physical separation from other objects? and what difference would it find between wireless connections and physical logic gates made of silicon? it’s probably truest to say a computer is already conscious, but on another plane of consciousness, not the physical one.
and what about computers that are tricked into thinking they are human by having false memories implanted in them? is that how they might arrive at real emotion? but how would one implant a memory? what makes a memory any different from other data? emotional content. but how can there be a digital record of an emotion? only those who have already felt (or suffered) can interpret such a record and glean a meaningful understanding of the emotion represented there.
so a computer can never suffer. as a personality, it can be loved and it can be lost. it can perform acts of love and sacrifice. but feel human emotions, no.
we could choose to honor and exalt them as we would great human beings, but the only achievement therein would be our own exaltation and sense of reverence. to dignify something bestows dignity on the dignifier. but there’s nothing wrong with the way Riker treats Commander Data either, because Data’s dignity is never truly at stake. Riker has no shortage of dignity, for himself or for others. he simply has a clear view. how enriching is that view for him? not at all. but is he correct? yes.