By Pablo Venezian (Student of Law at the Pontifical Catholic University of Valparaíso).
Powderhorn, Minneapolis, Minnesota. May 25th, 2020. Derek Chauvin pressed his knee into George Floyd’s neck for 8 minutes and 46 seconds, killing him by asphyxiation. Within days, the recorded crime generated a wave of outrage and protests throughout the United States and evoked stunned reactions worldwide. Helplessness, bewilderment, confusion, anguish and speechlessness were the immediate emotional reactions.
Then, the law steps in. Our judgment tells us that a crime has been committed, so a punishment must follow.
Now, in a similar case, imagine that instead of a human being, the perpetrator is a machine. How would our reaction to the crime change? Who should be responsible for it? Would we look for responsibility in the machine? Its creators? What would be the punishment? How should the legal system react?
Until now, laws have only organized and regulated humans and their institutions. The rise of Artificial Intelligence is changing the paradigms of current legal systems, straining and pushing their boundaries on issues to which the law usually arrives late, with little or no preparation.
AI has begun to be introduced into legal systems through algorithms that help predict judicial decisions or help choose a state, court or judge more likely to yield a favorable outcome. Machine learning assists lawyers with predictive coding and technology-assisted review, and data analytics helps lawyers inspect and process information, supporting their decision making and suggesting conclusions.
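To make the first of these concrete, below is a minimal, purely illustrative sketch of judicial-outcome prediction framed as text classification. The sample opinions, labels and model choice are all hypothetical stand-ins; real systems are far more elaborate, but the basic shape, learning from past decisions in order to score a new case, is the same.

```python
# A toy sketch of outcome prediction as text classification.
# All training data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical excerpts from past rulings, labeled with the winning side.
opinions = [
    "plaintiff presented clear evidence of breach and resulting damages",
    "claim dismissed for lack of standing and insufficient evidence",
    "defendant failed to rebut the presumption of negligence",
    "court found no causal link between conduct and injury",
]
outcomes = ["plaintiff", "defendant", "plaintiff", "defendant"]

# Bag-of-words features plus a linear classifier: a common baseline in
# judicial-outcome prediction studies.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(opinions, outcomes)

# Estimate which side a new, unseen case most resembles.
print(model.predict(["strong evidence of breach and resulting damages"]))
```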
These relatively simple tasks carried out by AI have brought efficiency and effectiveness to the legal field. AI’s growing autonomy and its capacity for self-development and self-learning have initiated the debate on the regulation needed to avoid the potential risks that may arise.
Going back to the beginning of this paper, our aim here is not to discuss the criminal liability that can be attributed to third parties for acts carried out by AI, but the criminal liability that can be attributed to AI for its own acts.
For this to happen, the elements of the crime must be met, which in simple words means that AI must display an actus reus (a guilty act) and a relevant mens rea (a guilty mind), and there must be a causal link between the act and its effect. The mental requirements necessary to commit a crime differ among countries, legal systems and specific crimes, but the focus is generally on the defendant’s state of mind: what he believes and intends to do1.
Policies or guidelines such as the prohibition of certain behaviors, respect for law and society, the prevention of new crimes, and the purposes of sanctions and of social reintegration imply, if they are to be respected, a certain level of awareness or consciousness.
The definition of consciousness continues to elude philosophers, neurologists and computer scientists, but it can be defined as “the recognition by the thinking subject of its own acts or affections” (Hamilton)2 or, from the point of view of the elements required to be conscious, “the entity must be capable of (i) sensing stimuli, (ii) perceiving sensations and (iii) having a sense of self, namely a conception of its own existence in space and time”3.
The debate on whether an artificial consciousness could arise from complex programs or machines remains open, but the majority opinion is inclined to deny the possibility that a computer could give rise to something genuinely indistinguishable from a “human” consciousness.
Nevertheless, there are several examples that could begin to tip the balance of opinion in the opposite direction. For example, in 2017, Google’s AlphaZero program, using the most advanced machine learning techniques, trained by playing against itself for four hours, without any human input, before defeating Stockfish, the champion among computer chess programs4.
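The principle behind that result, learning from self-play alone rather than from human examples, can be shown in miniature. The sketch below is emphatically not AlphaZero, which pairs deep neural networks with tree search; it is a toy value-learner for the game of Nim (take 1 or 2 stones from a pile, and whoever takes the last stone wins) that improves only by playing against itself.

```python
# A toy self-play learner for 5-stone Nim: no human data, only games
# the program plays against itself.
import random

values = {}  # stones left (from the mover's view) -> estimated win rate
counts = {}

def choose(stones, explore=0.1):
    # Pick the move whose resulting position looks worst for the opponent.
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return min(moves, key=lambda m: values.get(stones - m, 0.5))

for episode in range(20000):
    stones, history = 5, []
    while stones > 0:
        history.append(stones)  # position the current mover faced
        stones -= choose(stones)
    # The player who just moved took the last stone and won. Walking the
    # history backwards, positions alternate between winner and loser.
    result = 1.0
    for state in reversed(history):
        counts[state] = counts.get(state, 0) + 1
        old = values.get(state, 0.5)
        values[state] = old + (result - old) / counts[state]
        result = 1.0 - result

print({s: round(v, 2) for s, v in sorted(values.items())})
# With 5 stones the first mover can always win (take 2, then keep leaving
# a multiple of 3), so the learned value of state 5 should approach 1.0.
```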
Based on the above, and on many other examples that could be mentioned were it not for the limited size of this paper, there are reasonable grounds to believe that AI is capable of conscious behavior.
So, assuming that a “human-like” mental state can be proven, the next question we must ask, in relation to the possible criminal liability of AI, is how to apply criminal principles to it. The problems that arise relate to AI rights, responsibility and ethics5.
First, giving legal personality to AI, much as we do with companies, would allow us to qualify them as “artificial people” and would serve as a technical label for a bundle of rights and responsibilities6. In this way, AI would be able to act on its own initiative. Among the reasons that have been discussed for granting rights and protections to at least some types of AI, we find their potential capacity for suffering, their value to humans, and situations where AI and humans are combined in some way7.
Moreover, and in relation to the previous argument, granting legal personality and rights to AI would at the same time allow us to hold it civilly and criminally responsible for its own acts. The rights of AI, like the individual rights of each person, should be subordinated to the wider rights of the community when justified. The justifications for punishment, for humans and AI alike, include retribution, reform, deterrence and the protection of society.
Retribution refers to “punishment motivated by a feeling that someone, or something, which has caused harm or transgressed an agreed standard should suffer detriment in return”8. Here, legal philosopher John Danaher9 analyzes the gap between the human expectation that someone will be held responsible for harm and our current inability to punish AI, calling it a “retribution gap”. In Danaher’s words, apart from treating AI acts as “acts of God” or holding a human being responsible for them, the only solution that could fill the gap in the retributive mechanism is a “kill switch”.
Reform refers to the “shut down with a view to fixing AI and releasing it back again into the world”10, while deterrence occurs where “a known punishment operates as a signal to discourage a certain kind of behavior, either by the perpetrator or others”11. For deterrence to work, AI must be able to control its own actions and to make decisions on the basis of perceived risks and rewards.
Last but not least, in relation to the protection of society, the “off button” or “kill switch”12 fulfills the same role as those human punishments which restrain the perpetrator, preventing them on a practical level from inflicting on wider society the same harm they have previously committed.
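As a purely architectural illustration of that “off button”, a software kill switch can be as simple as a check the agent cannot alter. Everything in the sketch below, the monitor, the threshold and the agent itself, is a hypothetical placeholder; the point is only the design principle that the stop condition is imposed from outside the agent rather than chosen by it.

```python
# A hypothetical kill-switch wrapper around an agent loop.
# The harm monitor and threshold stand in for whatever external
# oversight a real deployment would use.
import random

HARM_THRESHOLD = 0.9  # set by the human overseer, not by the agent

def harm_monitor(action):
    # Stand-in for an external assessment of an action's potential harm.
    return random.random()

def run_agent():
    step = 0
    while True:
        action = f"action-{step}"  # whatever the agent decides to do next
        if harm_monitor(action) > HARM_THRESHOLD:
            # The agent is halted from outside; it is not consulted.
            print(f"kill switch triggered at step {step}: {action} blocked")
            return
        step += 1

run_agent()
```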
Third, the issue of ethics, i.e., how AI should make decisions, which decisions it should and should not make, and how it should make them, is one of the most relevant discussions for the future coexistence of humans and machines. It is a complicated issue to address, as it encompasses security, political, military, economic and social questions.
As Jacob Turner says, the biggest question is “how humanity should live alongside AI”. Some experts believe that the survival of the human race could depend on solving this sort of issue.
In this sense, it is necessary to consider the need for, and the opportunity of, consistent international regulation, as well as effective worldwide enforcement, to provide certainty, security and progress in the development of AI.
Governments have not yet reached definitive positions on how AI should be governed. For now, national policies have focused on the regulation to which AI may be subjected, the promotion of local AI growth, and the problem of growing unemployment due to task automation13.
Amir Husain, founder and CEO of the Austin, Texas-based SparkCognition, a leading U.S. artificial intelligence company, and John R. Allen, president of the Brookings Institution and a former commander of the NATO International Security Assistance Force and U.S. Forces in Afghanistan, argue in their article “The Next Space Race Is Artificial Intelligence”14 for the importance of investment, development and regulation in AI, presenting it as the decisive element in becoming a world-leading country in the future.
The U.S. is the birthplace of AI and has historically been home to the most important innovations and research institutions in this space. The Obama administration15 produced a major report on the future of Artificial Intelligence, along with an accompanying strategy document. In late 2016, a large group of U.S. universities published “A Roadmap for US Robotics: From Internet to Robotics”, which included calls for further work on AI ethics, safety and liability.
The Trump Administration16, for its part, established the American AI Initiative via Executive Order 13859 in February 2019, which included increasing AI research investment, unleashing Federal AI computing and data resources, setting AI technical standards, building America’s AI workforce, and engaging with international allies. These lines of effort were codified into law as part of the National AI Initiative Act of 2020.
Lastly, the Biden Administration17 launched the National AI Research Resource Task Force, which “will serve as a Federal advisory committee to help create and implement a blueprint for the National AI Research Resource (NAIRR) — a shared research infrastructure providing AI researchers and students across all scientific disciplines with access to computational resources, high-quality data, educational tools, and user support” and “will provide recommendations for establishing and sustaining the NAIRR, including technical capabilities, governance, administration, and assessment”.
Despite all the above, the efforts of successive American administrations in recent years may prove insufficient compared to the progress made by countries such as China or Russia.
China18 recently announced a multibillion-dollar AI development plan aimed at leading the world in the technology by 2030. The country is attracting top AI talent, buying American tech companies and publishing a growing number of papers on deep learning and supercomputers.
Russia19, despite not being a main driver of AI innovation itself, prioritizes investing in and capitalizing on developments in AI and autonomy. In terms of education, training and the country’s technological structure, Russia’s foundation is growing at an accelerated pace.
In conclusion, the rise of Artificial Intelligence, the consequences of its own acts against human beings in a broad sense, the importance it will have in the international sphere in the coming years, and the possible conflicts it will originate, among many other considerations, will require broad, effective regulation, both national and international, based on shared civil and criminal principles, to provide certainty and security regarding its development, rights and responsibilities. The challenge is clear and the tools are at our disposal; the question is not whether we can, but whether we will20.
References
1. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 118.
2. Oxford English Dictionary. 1989.
3. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 147.
4. Kirpichnikov, D., Pavlyuk, A., Grebneva, Y., Okagbue, H. “Criminal Liability of the Artificial Intelligence”. Page 6. URL: https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/19/e3sconf_btses2020_04025.pdf
5. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 37.
6. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 175.
7. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 145.
8. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 358.
9. Danaher, J. 2016. “Robots, Law and the Retribution Gap”. Ethics and Information Technology, Vol. 18, No. 4, pages 299-309.
10. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 360.
11. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 361.
12. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 362.
13. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 225.
14. Husain, A., Allen, J. 2017. “The Next Space Race Is Artificial Intelligence”. Foreign Policy. URL: https://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/
15. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Page 230.
16. The White House (Trump Administration archive). URL: https://trumpwhitehouse.archives.gov/ai/
18. Husain, A., Allen, J. 2017. “The Next Space Race Is Artificial Intelligence”. Foreign Policy. URL: https://foreignpolicy.com/2017/11/03/the-next-space-race-is-artificial-intelligence-and-america-is-losing-to-china/
19. Edmonds, J., Bendett, S., Fink, A., Chesnut, M., Gorenburg, D., Kofman, M., Stricklin, K., Waller, J. “Artificial Intelligence and Autonomy in Russia”. Page 8. URL: https://www.cna.org/CNA_files/centers/CNA/sppp/rsp/russia-ai/Russia-Artificial-Intelligence-Autonomy-Putin-Military.pdf
20. Turner, J. 2019. “Robot Rules: Regulating Artificial Intelligence”. Foreword.