Would a robot be affected by ‘Karma’?
Would they adhere to the same ‘karmic wheel’ as we natural things do?
I am dedicating this week to this question.
Thank you in advance.
Answer by Geoffrey Klempner
On the face of it, the possibility of artificial intelligence, or androids who possess consciousness and have a sense of self like you or me, poses a significant challenge to religious beliefs such as the belief in karma held in Hinduism and Buddhism. At the risk of ignoring the various important differences between accounts of karma (for example, whether karma is accounted for in cause-and-effect terms, or is dispensed by a divine entity who judges our actions), there is a common thread in the idea that this life I am now living is not my only life. I will be reborn, perhaps many times, and my actions in this life can affect what happens to me in the next.
One can also regard the doctrine of karma less literally, as an account of how our deeds make us the persons we are in this world – a concept which Plato would have well understood, in his account (in the dialogue Republic) of the nature of the soul and deeds which improve or harm the harmony of its integral parts. However, I shall start by focusing on the more literal interpretation.
Imagine you are an android who believes in karma. Your belief serves as a motivation to act ethically, because if you do not, then maybe in the next cycle you will live as a Windows PC. The problem is that while we have some idea of, or can at least imagine, how it might be if my immaterial soul or atman ‘leaves’ my dying body and enters, say, the body of a beetle, it is difficult to see what the connecting thread could be if mind-body dualism is rejected.
Then again, androids are just like you and me. You can get an android to believe anything that a human being can be made to believe. If a human being can believe they have an immortal soul then so can an android:
KRYTEN: He’s an android. His brain could not handle the concept of there being no silicon heaven.
LISTER: So how come yours can?
KRYTEN: Because I knew something he didn’t.
KRYTEN: I knew that I was lying. Seriously, sir. ‘No silicon heaven’? Where would all of the calculators go?
‘The Last Day’, Episode 18, Red Dwarf Series III by Rob Grant and Doug Naylor (1989)
The idea of artificial intelligence assumes that consciousness and the sense of self can be accounted for in purely material terms. One possibility is that human beings run a ‘program’ that can, in principle, be uploaded to a storage disk and downloaded into a new body. This opens the prospect of everlasting life (at least, until the end of the universe) but also raises the question of identity. How can I ‘be’, for example, each of a hundred clones who have had the GK program downloaded into them? What does it mean to ‘survive’ in these terms? What is the difference between truly believing that I am GK, and being under the illusion that I am GK? Perhaps, ultimately, there is none.
However, it is not necessary to make the questionable assumption that human beings run a program. It would be sufficient, in order to create a copy of GK, to reproduce the architecture of my brain in some functionally isomorphic structure. Imagine a scenario similar to the Ship of Theseus, where my malfunctioning body parts and organs are replaced by contrivances of metal and plastic, and then, finally, each dying brain cell is replaced by a silicon substitute. I would have become an android version of my former self. Am I still me, GK, or merely under the illusion that I am? If I am merely under the illusion that I am GK, when did I ‘die’? (See my YouTube video What is death?)
Either way, there does seem to be some mileage in the idea that this life I am now living, whether in fact I am a human being or an android, might not be my only life. There might, for all I know, be indefinitely more. Seen from a certain perspective, the possibility that life goes on and on with no letup is as terrifying as the prospect of hell. For the non-believer, death releases us from the consequences of our evil deeds. The longer we live, the greater the prospect that the harm we have done to our ‘self’, the program or structure that has the potential to continue indefinitely into the future, will be sufficient punishment for the wrongs we have done. That’s a kind of karma a robot can believe in. And maybe a human being too.