Elements of Robotic Jurisprudence

4 May 2011 by Jeremy Bentham, posted in Uncategorized

Greetings, my friends,

In a previous missive I vouchsafed the intelligence that I was deriving considerable pleasure from reading the fictional works of Isaac Asimov, and I have formed the intention of communicating to you the fruits of my reflections upon this cornucopia of diverting creative genius! As you might have anticipated, I was much taken with the Foundation series, and with Hari Seldon’s science of psychohistory, which combines psychological axioms – sadly not characterized as axioms of mental pathology – with statistical analysis of large populations, to predict future events. What a beguiling vista to set before something of a legislator manqué – the probable futures of countless peoples and planets laid out in diagrammatic form by repeated application of the four rules of arithmetic! As I wrote some two hundred and thirty years ago, the feelings of men are sufficiently regular to become the object of a science, and till this is done we can only grope our way by making irregular and ill-directed efforts. However, I confess that Hari Seldon’s science looks to me like an entertaining fantasy, and one which should not be confused with reality. I was very cautious about the legislator’s ability to make fine-tuned utilitarian calculations, and might insist that psychohistory fails to discount sufficiently the value of future sensations of pleasure and pain. I always urged legislators to value the actual sensations of actually existing people at a much higher rate than the potential sensations of future generations.

I might expound at length on the way in which the principles of utilitarianism inform the moral universe of Professor Asimov, but I choose instead to apply my own acquired knowledge in the field of legislation to his laws of robotics. As introduced in the short story ‘Runaround’, in 1942, the three laws of robotics are:
‘1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
‘2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
‘3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.’

Now these laws have certain necessary characteristics of a law – that is, an assemblage of signs declarative of a volition concerning the conduct to be observed in certain cases by certain persons – though lacking certain others. Thus, in guiding the conduct of human beings, the legislator supplies motives in the shape of sanctions – pleasures and pains – which influence behaviour. In that robots feel neither pleasure nor pain, no such motives can exist. The other element lacking is the idea of the sovereign, that individual or body to the commands of which the people are in the habit of obedience. Now since the laws of robotics are ‘hard-wired’ – that, I believe, is the expression – into positronic brains at the moment of their creation, Professor Asimov’s robots have an unfailing habit of obedience to those laws, so that, so long as the laws are consistent, the question of non-compliance does not arise.

In several stories worthy of inclusion in the best traditions of moral fables, Professor Asimov explored the possible contradictions between the imperatives set out in his three laws: what, for instance, is a robot to do when faced with a situation in which the only way to prevent harm to a large number of human beings is to inflict harm on one such? Finally, in Robots and Empire, he conceded the necessity of adding a fourth, overarching or – to quote the rebarbative appellative forced on our author by the primacy given to laws in inverse relation to their magnitude – ‘zeroth’ law, which is as follows: ‘0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.’
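Taken together with the ‘zeroth’, these laws form in effect a strict order of precedence: no consideration arising under a lower law may outweigh any consideration arising under a higher one. To make that ordering concrete, here is a minimal sketch in Python – entirely my own illustration, not anything Professor Asimov specified: the law texts are his, but the representation of candidate actions, the boolean ‘violation’ flags, and the choose_action function are hypothetical.

```python
# Illustrative sketch only. The law texts are Asimov's; the data structures,
# the violation flags and choose_action are invented for this example.

LAWS = [
    "0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.",
    "1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.",
    "2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.",
    "3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.",
]

def choose_action(candidates):
    """Pick the candidate whose violations of higher-priority laws are least.

    Each candidate is (name, violations), where violations[i] is True if the
    action would breach LAWS[i]. Comparing the tuples lexicographically means
    a breach of the Zeroth Law outweighs any number of breaches below it.
    """
    return min(candidates, key=lambda c: tuple(c[1]))[0]

# A toy dilemma: obeying an order (Second Law) would injure a human (First Law).
candidates = [
    ("obey the order",   [False, True,  False, False]),
    ("refuse the order", [False, False, True,  False]),
]
print(choose_action(candidates))  # -> refuse the order
```

The design choice worth noticing is the lexicographic comparison: no accumulation of compliance with the lower laws can offset a single breach of a higher one – precisely the rigidity whose consequences the stories go on to probe.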

Oh Professor Asimov! What a tragic misconception! How sad, that after so much that is excellent, you should fall into the trap of elevating an abstraction – a non-existent chimæra – as the pinnacle of moral sensitivity! Humanity has no interests: it enjoys no pleasures and suffers no pains! Permit me to reprise an argument I made many times in relation to a similar abstraction, the interest of the community or the public, in my own day. Individual interests are the only real interests: a truth simple but fecund – a truth simple but always forgotten. How many statesmen – and authors – never stop imagining a public interest in opposition to individual interests! Take care of individuals; do not harm them, do not allow others to harm them, and you will have done enough for the public. But this manner of thinking is not that of our great politicians – or authors – who wish always to consider the public well-being in abstracto, and to sacrifice individuals on the altar of this phantom. They love posterity better than the present generation: possible beings are more dear to them than existing ones. There was once a dispute among the Scholastics concerning which was more valuable, a potential angel or an actually existing mouse. Opinions were bitterly divided. However, among our great statesmen, there is no doubt. They prefer the man who is not to the one who is: they torment the living under the pretext of improving the condition of those who are not yet born, and who may never be born. A pain felt, and a pleasure not felt: such are the results of these high-minded operations in which individuals are sacrificed to a fictitious entity, whether that non-existent entity be the community, or humanity.

Of course, Professor Asimov and his creation R. Daneel Olivaw – the robot whose vast positronic brain and millennia of acquired experience allow him to occupy a role to which no human could aspire, namely that of disinterested lawgiver, and who turns out to have been the guiding inspiration behind Hari Seldon and the Foundation (thus elegantly uniting the two themes of Asimov’s fiction) – actually have no intention of sacrificing the welfare of human beings in the service of a fictitious entity. As R. Daneel Olivaw himself expresses the point in Foundation and Earth: ‘A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction.’ But Mr Olivaw, surely, having recognized that humanity has no bowels, while human beings do, it behoves you to draw the conclusion that it is in the summation of the sensations, good and bad, of human beings, that moral value lies. Indeed, what this remarkable robot does, using the vast intellectual resource which is his positronic brain, is to infer that the imposition of pains on human beings – that is, harm – may be justified if it is necessary to avoid greater pains, that is, greater harms. Allow me the indulgence of puffing my own insights, but the exclusion of greater evil lay at the centre of my justification for punishment – that is, the publicly authorized infliction of pain on the perpetrators of mischief. Mr Olivaw discovers by bitter experience that blind adherence to rules, however well intentioned their authors, allows evil to triumph. The vain rallying cry of what today would be called a deontologist (though my own understanding of the meaning of that word was very different), ‘Fiat iustitia, ruat cælum’ – ‘Let justice be done though the heavens should fall’ – is no more than an absurd failure to perceive the wood of human well-being for the trees of adherence to general rules. Professor Asimov, through R. Daneel Olivaw, argues, in fact, that numbers count in moral reasoning, and that the goal of morality is the maximization of well-being, through the production of pleasure and the avoidance of pain amongst human beings. All good this; the only error lies in expressing this fundamental truth by appealing to the abstraction ‘humanity’.
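To put rough figures on that ‘summation of the sensations’ – purely invented figures, by way of illustration; neither the individuals nor the magnitudes come from Asimov – the comparison the principle of utility prescribes looks like this:

```python
# Invented figures for illustration only: each number is one person's net
# sensation (pleasure positive, pain negative) under a given course of action.
outcomes = {
    "adhere rigidly to the rule": [+2, +2, +2, -9],  # one person gravely harmed
    "break the rule this once":   [+1, +1, +1, -1],  # small costs, no grave harm
}

# The principle of utility: prefer the action with the greatest net sum of
# sensations across all the individuals affected; numbers count.
for action, sensations in outcomes.items():
    print(f"{action}: net utility {sum(sensations):+d}")

best = max(outcomes, key=lambda action: sum(outcomes[action]))
print("preferred:", best)  # -> break the rule this once
```

The sums here (-3 against +2) are nothing but Mr Olivaw’s inference in miniature: a pain inflicted may be justified where it excludes a greater pain, and no appeal to ‘humanity’ in the abstract is needed to reach that result.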

Happy reading,

J.B.

