Human Rights and Artificial Intelligence: Programme

Schedule

 

Thursday 06 July

09.30 registration, coffee

10.00 – 12.00 Wolfgang Benedek – The debate on new digital rights

12.00 – 14.00 Sejla Maslo Cerkic – Artificial intelligence and its impact on journalists and human rights defenders – the human rights perspective

15.00 – 17.00 Nuno Garcia – Can we protect rights in the fundamentally anarchic Internet?

 

Friday 07 July

09.45 coffee

10.00 – 12.00 Lana Bubalo – Legal protection against discrimination by AI

12.00 – 14.00 Mart Susi – AI and trust from the human rights perspective

15.00 – 17.00 Jukka Viljanen – AI and freedom of expression: European perspective

 

Saturday 08 July

09.45 coffee

10.00 – 12.00 Vygante Milasiute – The use of AI in courts and its possible impact on the right to fair trial

12.00 – 14.00 Matthias C. Kettemann – Beyond the “Black Box”: potentials and limits of explaining AI-based recommendations and decisions in the Digital Services Act and the AI Act

15.00 – 17.00 Arturs Kucs – Third-party liability: does the Delfi principle still stand?

 

 

The programme and speakers

 

Wolfgang Benedek (Institute of International Law and European Training and Research Centre for Human Rights and Democracy of the University of Graz)

The debate on new digital rights

Several recent initiatives have proposed new human rights for the digital sphere. They aim to respond to new challenges to human rights in an increasingly digitalized world. However, is there really a need for genuinely new digital human rights, or would it suffice to develop existing rights by interpretation to deal with the new threats from cyberspace? Given that the proposals claim the emergence of new principles and rights, at what stage is the development of new digital human rights? Can all of the proposed digital rights also be considered human rights? How have European institutions reacted to the proposals, and which regulatory efforts have they undertaken? If the new rights are to be protected at the European level, what about the universal level of a global cyberspace? Specific attention will be given to the regulation of artificial intelligence from a human rights perspective.

 

Lana Bubalo (Associate Professor of Law at the University of Stavanger Business School, Norway)

Legal protection against discrimination by AI

The use of AI is inescapable in modern society. It has many benefits, but it also raises many concerns. Even though AI can be more objective than humans, in reality algorithms make decisions based on human input, which can be biased and exclusionary. AI used for automated decision-making can lead to discrimination based on race, age and gender, undermining equal opportunities and causing serious social consequences, particularly for vulnerable groups. This is why discrimination by AI has been identified as a serious issue and a challenge at the forefront of legal discussions on AI. In this lecture, we will explain what makes AI discrimination different from other types of discrimination and whether the current legal framework is sufficient to ensure effective protection of the right to non-discrimination. Real-life examples of AI discrimination (in health care, hiring processes and criminal justice) will be used as a basis for in-class discussions, in order to uncover answers to the many questions that the use of new technology poses for fundamental human rights, such as the lack of algorithmic transparency, contestability, accountability and personhood issues.

 

Sejla Maslo Cerkic (Organisation for Security and Co-operation in Europe; “Dzemal Bijedic” University of Mostar, Faculty of Law)

Artificial intelligence and its impact on journalists and human rights defenders – the human rights perspective

Artificial intelligence has a major impact on the work of journalists and human rights defenders. From obvious benefits, which have made journalistic work and the efforts of human rights defenders more efficient and impactful, to serious concerns that human journalists may be replaced by AI in the near future, the debate has become more relevant with the emergence and wide use of new tools such as chatbots. Furthermore, the challenges posed by different surveillance technologies have made the work of journalists and human rights defenders more dangerous in light of the global political crisis and unsafe working environments. Many of these concerns relate to the protection of freedom of expression, the right to privacy, and issues of safety and labor rights. After a general introduction to the risks and benefits of AI for journalism and the media in a contemporary setting, the lecture will focus on the human rights implications of AI, as well as some liability concerns with regard to automatically created journalistic content. The impact of AI on investigative journalism and human rights will be presented, with an overview of the most relevant policies developed by international human rights bodies on the subject.

 

Nuno M. Garcia (Full Professor at the University of Lisbon)

Can we protect rights in the fundamentally anarchic Internet?

The Internet, the technological machine that supports the World Wide Web (and other data continents), is inherently anarchic by design. In this short course we will address some of the basics of the technology that makes data flow around the world, and we will try to determine whether it is possible to protect rights in this field. Spoiler alert: the answer is “maybe”.

 

Matthias Kettemann (Leibniz Institute for Media Research, Research Program Head; University of Innsbruck, Department of Legal Theory and Future of Law, Professor of Innovation, Theory and Philosophy of Law – Head of Department)

Beyond the “Black Box”: potentials and limits of explaining AI-based recommendations and decisions in the Digital Services Act and the AI Act

The presentation explores the growing need for transparency and explainability in artificial intelligence (AI) systems used in digital services, especially recommender systems. The lecture will discuss the benefits and limitations of explainability, along with the regulatory framework provided by the DSA and the AI Act. Highlighting how AI systems often operate as a “black box”, making decisions that are difficult to understand or explain, the lecture will show how this lack of transparency can lead to issues such as bias and discrimination. The lecture will also show how the DSA and the AI Act approach “explainable AI”, which regulatory issues persist, and how to overcome them, including through ethical approaches to AI that go beyond regulatory obligations.

 

Artūrs Kučs (Judge at the Constitutional Court of Latvia, Associate Professor at the University of Latvia)

Third-party liability: does the Delfi principle still stand?

The lecture will explore the broader context of how the Internet has changed the ways we receive and impart information, the notion of media, and the way media operate. What are the developments regarding intermediary liability in the EU and the Council of Europe? Some key ECtHR judgments will be explored, in particular those following the Delfi v. Estonia judgment: MTE and Index.hu v. Hungary, Pihl v. Sweden, and Høiness v. Norway. The lecture will also explore developments and discussions about intermediary liability in the US.

 

Vygantė Milašiūtė (Associate Professor at Vilnius University)

The use of AI in courts and its possible impact on the right to fair trial

AI is already used in the courts of some countries to simplify and speed up certain tasks, and tasks of different natures raise different challenges. The use of AI for administrative tasks seems to be widely accepted, whereas its use for judicial decision-making would have a negative impact on the right to a fair trial, which is why such tasks are still reserved for human judges. Different types of court proceedings (civil, criminal, administrative), because of their specific characteristics, also pose different challenges for AI use. For example, AI-generated data could be used as evidence in court proceedings, raising distinct questions about its reliability in different types of cases. The lecture gives an overview of existing examples of the use of AI in courts and discusses the reasons why AI-based judicial decision-making would be problematic and whether these risks could be mitigated to a sufficient extent.

 

Mart Susi (Professor of Human Rights Law at Tallinn University, Chair of Global Digital Human Rights Network)

AI and trust from the human rights perspective

The proposition of turning unfair normativity into a fair judicial or quasi-judicial outcome cannot be validated for the digital domain, where automated solutions or artificial intelligence are allowed to make decisions. There are two reasons for this. The first is a phenomenon, primarily intuitionistic, which holds that the concept of trust in the digital domain turns a blind eye to the question of normative fairness. This is related to the objection from Radbruch’s disavowal formula. Digital domain operators – automated systems and artificial intelligence – take for granted the positive features of positive or quasi-positive rules. Judges who refuse to apply a legal norm that would lead to an extremely unfair outcome do so because they can exercise distrust towards the law. Artificial intelligence and automated systems, on the other hand, are designed to trust the law. To put it figuratively, the degree of our trust in human judges is tied to our expectation that they are capable of distrusting the law, while our trust in automated systems and artificial intelligence is weaker because we assume that they trust the law completely. This is the context that will be discussed in the lecture.

 

Jukka Viljanen (Professor of Public Law at Tampere University, Finland)

AI and freedom of expression: European perspective

Discussion of AI and freedom of expression has arisen from several perspectives. Questions have been raised about the role of search engines and about algorithms polarizing political discourse, and even the first defamation cases involving ChatGPT are now before the courts. There is also discussion of the positive sides of using AI, e.g. in combating hate speech. In this lecture, we focus on some of the main issues. How should we review the freedom of expression doctrine in light of AI? What are the key elements of the prevailing freedom of expression doctrines that need to be developed because of AI? What are the roles and responsibilities of different actors (authorities, tech companies, individuals) in the age of AI?