Anna Hovsepyan

King's College London
PhD, MPhil in Law Research
Huys Scholar 2022 (John Aroutiounian Scholarship for Service to Humanity)

[Image: Headshot of Huys Scholar Anna Hovsepyan]

The interest in artificial intelligence, blockchain, and their implications for society is understandable. There can be little doubt that these technologies will be a primary catalyst of legal, political, and social change in the 21st century. Nevertheless, while law schools from King’s College London to the University of Cambridge have shown increasing interest in AI and blockchain and have introduced new modules, and even pathways, for studying AI, law, and their ramifications for society, another malicious use of AI has not received the attention it deserves: synthetic media, better known as deepfakes.

Over the last few years, deepfakes have become a massive tool for disinformation and have been used to create a type of revenge porn known as deepfake pornography. While the quality of current deepfaked content still allows us to identify that it is computer-modified, researchers argue that within two to three years even the most sophisticated detection technology may be unable to assure us “beyond reasonable doubt” that content is genuine. What is more, the liar’s dividend, the idea that bad actors will abuse the system and try to clear their names by falsely claiming that genuine content is fake, has been argued by various scholars to be one of the most dangerous potential consequences of deepfakes.

Furthermore, deepfakes do not only threaten elections as a sophisticated tool for disinformation. They can also be used to create child pornography, pose incalculable risks to national security, drive an increase in financial fraud and, most importantly, threaten justice itself when deepfaked video and audio recordings are introduced as courtroom evidence.

The new wave of deepfaked courtroom evidence is a tragedy for justice and requires research into how such falsifications can be combated, which my research will attempt to provide. To those unfamiliar with the topic, this might seem like an exaggeration, since doctored videos have always existed; yet there have already been cases of deepfaked recordings appearing in courtrooms. What sets deepfaked evidence apart is its scale, its quality, and its accessibility to the general public. If the forecasts for deepfake detection are correct, what does this mean for the future of justice?

Aims: This research aims to identify to what extent synthetic media, in particular deepfakes, are compatible with justice, the rule of law, free speech, and democracy. It also aims to investigate whether legal systems are ready to adapt to the changes this technology brings and the extent to which regulatory models can be adapted to the unique challenges it presents (e.g. from malicious deepfaked courtroom evidence and deepfaked child pornography to its use in the metaverse and as a tool to assist people with special needs).

The European Union, the United States, and the United Kingdom have already attempted either to regulate deepfakes or to understand how to combat the problem. From the General Data Protection Regulation (GDPR) and the establishment of the High-Level Expert Group on Artificial Intelligence to the Online Harms Bill in the U.K., all of these attempts address, if not always directly, the issues surrounding deepfakes. Still, we are miles away from what needs to be done. As Lord Kitchin and Mr Francis Gurry said during the ‘AI: Decoding IP’ Conference in 2019, the law is facing challenges it has never encountered before. My research argues that deepfakes have become one of those challenges. If we want our justice system, and a democracy with a properly functioning public sphere, to be secure and to operate on the basis of fairness and trustworthiness, we need to implement better regulations and enforcement mechanisms and advance our understanding of the possible consequences of deepfakes.

I am aware that the Huys Foundation is granting this scholarship to me in anticipation of my good-faith pursuit and implementation of the projects and undertakings described in this letter, to which I hereby commit.