Machine learning transforms computers from tools performing rote operations into devices sharing attributes we associate with human creativity, such as finding novel solutions to problems. Computers have bested top competitors at chess and Go, and they have beaten champions at Jeopardy. Artificial intelligence (AI) is transforming how we work and play. Yet few of us really understand how it does what it does.
Whether AI replaces human creativity or merely augments the work we do, the fact remains that many of us are now routinely in situations where the devices on which we rely make representations on our behalf to others. At what point will our devices expose us to criminal liability?
A simple example suffices to show you what is at stake:
You drive a car filled with automated devices, from a GPS unit that navigates for you, to automatic braking, to, in some instances, the capacity to steer the car itself, assisting, for example, with parallel parking. Suppose you are behind the wheel when your car causes injury to another. Are you at fault?
The answer is not as easy as you think.
Consider automatic braking. You purchased the car because of its safety features. You know your attention lapses from time to time, so you rely on a setting that programs your car to brake automatically if it gets too close to another vehicle. As you cruise down the highway, doing the speed limit and no more, the car suddenly brakes. Your passenger, a fifteen-year-old sitting beside you in the front seat, is thrown forward into the windshield and seriously injured. She was not wearing her seatbelt.
Have you committed a crime?
Risk of injury to a minor, a felony in Connecticut, requires proof that a person exposed a minor to a risk of harm. Your failure to see to it that the child wore a seatbelt will, in the eyes of some prosecutors, be sufficient to warrant prosecution.
But suppose that the car’s “decision” to brake was mistaken. There was no vehicle ahead of you. The computer made a mistake.
Are you liable for trusting the machine?
This is a more difficult question than you might think. To be guilty of a crime, a person must typically commit, or permit to be committed, some affirmative act while acting under the influence of a culpable mental state. There are different levels of culpability: specific intent, general intent, recklessness, and negligence. And then there are crimes that require mere acts, the so-called strict liability offenses.
Parsing murder from manslaughter or criminally negligent homicide can be difficult.
Even more difficult is determining what the criminal law prohibits when two or more people’s conduct results in a violation of the law. Are the participants co-conspirators? Is one person culpable and the other an innocent participant? What about the many forms of accessorial liability? It is a crime, after all, to solicit another to break the law, or to aid or abet another in unlawful activity. These problems fall under the general rubric of inchoate liability; much of a first-year law student’s criminal law class is devoted to acquiring the ability to spot the issues involved in such cases.
Our computers are now our accomplices in the ordinary affairs of daily life. As these devices become more sophisticated, they do more than assist us. Often, they perform tasks in ways that mystify us. How often do our computers expose us to criminal liability?
As near as I can tell, only one book attempts a systematic overview of the topic: Gabriel Hallevy’s Liability for Crimes Involving Artificial Intelligence Systems (Springer, 2015). Hallevy, a law professor in Israel, has written extensively in the field, and in 2013 published a book on AI and the criminal law for general readers, When Robots Kill: Artificial Intelligence Under Criminal Law (Northeastern University Press). His thesis in both books is simple: the general concepts of the criminal law can easily cover independent assessments of criminal liability for computers.
Perhaps he’s right, I thought. But the more interesting question, at least to me, is whether our computers’ autonomous learning capacity can expose us to inchoate criminal liability.
I read When Robots Kill a few years ago and wanted something more systematic, so I was eager to read his treatise.
I am not sorry I read the work, but I am deeply disappointed in it. Springer did not edit the work, and Hallevy, apparently, is not a native English speaker. The net result is page after page of confusing prose, prose so bad that more than once I wondered whether an inferior machine learning program had composed the book as some sort of lark. If so, the book failed the Turing test: no academic, and no academic publishing house, would produce a book written in such sloppy prose. Here’s but one example: “The condition of immediate and actual danger to the protected interest is neither depended on the offender.” Huh?
So why do I recommend that you read this book if you have an interest in a systematic overview of artificial intelligence and the criminal law? Because there is nowhere else to go. You need to start somewhere.
Hallevy is no fool. He knows the law’s terrain. But he’s a law professor; he doesn’t have a practitioner’s feel for the law. And the ample footnotes to cases shed no light on the arguments he advances. Still, he touches on the topics that need addressing. Because this law is a wide-open field just now, you’ll be on your own making sense of it.
I’m hoping that someone at Springer reads this review. As edited, this necessary book is a professional embarrassment. I will have to think long and hard before ordering another work from Springer. Why pay top dollar for a book even the editors appear not to have read?
The answer: There is nothing else out there, so this must be read.
So are we criminally liable for the acts our computers perform? I don’t know. But I am a little better prepared to think about it after having struggled through this impenetrable book. Read it. You won’t thank me, but you will be glad you read it.