AI and human rights: forging the universal source code


Emphasising collaboration among diverse stakeholders (governments, businesses, human rights defenders, and intergovernmental organisations), the FRA Forum 2024 highlighted the need for a human rights vision to address technological challenges such as AI. Despite differences of opinion, a common ground emerged: the call for a digital future rooted in dignity and justice.

AI tools, like all technologies, must be designed, developed and deployed so as to be consistent with the obligations of States and the responsibilities of private actors under international human rights law.

David Kaye (former Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression)

The grand halls of the Hofburg Palace gathered a diversity of voices (human rights defenders, European institutions, tech innovators, policymakers, and beyond) on 11 and 12 March 2024. Amidst the chandeliers and frescoed ceilings, Vienna's renowned Rathaus (City Hall) welcomed the Fundamental Rights Forum 2024 (FRA Forum) to discuss, among other topics, one of the digital era's most pressing challenges: how can we harness Artificial Intelligence (AI) and digitalisation to uplift humanity while ensuring these very creations do not violate fundamental rights and freedoms?

Organised by the European Union Agency for Fundamental Rights (FRA), the forum, entitled ‘Rights in Motion: Embracing the Human Rights for Europe’s Future’, became a dynamic space to reflect on the most challenging issues for Europe and, I would say, for many other regions of the world, around three thematic areas: 1) protecting democracy and civic space; 2) shaping a socially and environmentally sustainable Europe; and 3) ensuring rights-compliant digitalisation.

In recent years, there has been intense discussion about the impact of AI on society: for example, concerns about social media platforms whose content moderation harms marginalised communities, about inequities exacerbated through algorithmic discrimination, about moderation practices steeped in bias, and about pervasive data extraction.

However, with OpenAI’s introduction of ChatGPT in 2022 and the ensuing race to regulate AI, the topic has gained far greater prominence. The European Union presented and, in March 2024, adopted the AI Act (‘the first regulation on AI’); President Joe Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence to keep pace; and China announced a regulation on Generative Artificial Intelligence (GenAI). These efforts are also in line with the Council of Europe (CoE) Draft Framework Convention on artificial intelligence, human rights, democracy and the rule of law, as well as United Nations General Assembly resolution A/78/L.49 of March 2024 on ‘seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development’.

States are working to shape AI governance while civil society actors call for meaningful and transparent participation in the regulatory discussions and for robust protection of human rights.

The FRA Forum 2024 shifted its focus from the excitement surrounding AI to examining its risks, the regulatory intricacies, and the necessity of meaningfully including marginalised groups while considering the adverse effects on human rights. Opinions differed, but a common goal also emerged: if technological advancement is to explore new avenues of human potential, these powerful innovations must be firmly anchored in a human rights normative framework.

At the forum table called 'Code to Conscience, Ensuring Rights Compliant Digitalisation’, participants raised several relevant points in the discussions on AI and the adoption of technological change. First, in light of regulatory initiatives on AI in the European Union, there is a need for clarity on existing human rights standards and states’ related obligations. This means establishing a connection between current standards and those proposed to govern AI. The challenge is to be clear about the transposition of existing international and regional human rights frameworks into national laws and regulations, without forgetting the obligation of states to uphold human rights and ensure accountability in the development and deployment of AI and digitalisation.

This transposition, amid a proliferation of pre-existing and new standards, is creating an even more complex regulatory landscape, one that could offer companies and governments a pick-and-choose approach. In such a scenario, some voices drew attention to the need for more coordination and coherence among stakeholders. To dismantle this complexity and the risk that the application of human rights standards becomes selective under the new generation of regulatory frameworks governing AI and digitalisation, it is important to remember that adherence to existing international and regional standards is vital to ensuring accountability for governments and companies.

It is worth remembering the work of the United Nations Special Rapporteur on the promotion and protection of freedom of opinion and expression, who has developed several reports on the intersection of human rights and technology. In 2018, the former rapporteur, David Kaye, presented at the UN Human Rights Council the report on ‘Artificial Intelligence Technologies and Implications for the Information Environment’. At that time, he stated that

codes and guidelines issued by both public and private sector bodies should emphasise that human rights law provides the fundamental rules for the protection of individuals in the context of AI, while ethics frameworks may assist in further developing the content and application of human rights in specific circumstances.

While alignment with the human rights framework remains relevant when developing policy frameworks to govern AI, education is another key component of addressing the challenges of AI, digitalisation, and their relation to human rights. Not just digital but also algorithmic literacy is needed, with a focus on educating the young, the elderly, and groups in situations of vulnerability.

However, in the shifting landscape of regulation and of defining responsibilities, stakeholders need to be aware of the risk that companies could put the onus on individuals to make choices about their own privacy and security. Yes, civic education is important, but it should not replace human-centred regulation based on meaningfully inclusive processes.

States and companies must move beyond the tokenism of 'ticking boxes' on inclusion. Minority groups can be engaged in meaningful ways, and whether their voices are amplified in the design, implementation, and oversight of these powerful systems can determine whether those systems uplift them at last or deepen cycles of oppression. This must be done carefully: GenAI, with its powerful capabilities to create images, can reinforce stereotypes, and companies can also distort history, for example by depicting Black Nazis when trying to address bias.

As conversations sprawled across the FRA Forum 2024, proposals also emerged. There were calls for bolder human rights impact assessments for AI systems, robust algorithmic auditing to root out bias and discrimination, collective redress and legal remedies for those harmed, recognition of diversity, and data protection principles safeguarding digital privacy.

Human rights play a dual role: they set a vision for education and regulation, and they serve as the threshold that restrains potential harm. There is thus room to be both idealist and pragmatic, and human rights should embrace their idealist role: the world we want to create.

The FRA Forum 2024 represented an opportunity to engage with diverse stakeholders, from governmental bodies to businesses, human rights defenders, and intergovernmental organisations. It made clear that no sector should work in isolation. The challenges of promoting and defending human rights in light of technological advances make it necessary to work together. There may be discrepancies, but there is also a common ground driven by a human rights vision.

Most crucially, this quest could sustain a spirit of unprecedented cooperation across sectors to enshrine human rights as the universal source code underlying our collective digital destiny.

The grand alliance gathered at the forum, of human rights defenders, authorities, companies, and academics, has dared to envision a digital age elevated by the highest ideals rooted in human rights: an era in which innovation exists to expand human dignity, liberty and justice for all, not to erode them in the cold name of power and profit.

Written by Vladimir Cortés Roshdestvensky

Vladimir Cortés Roshdestvensky is a researcher and activist specialising in the intersection of human rights and technology, with a focus on freedom of expression, privacy, and AI. He has worked with Article 19 Mexico and Central America and Freedom House, researching internet access, disinformation, and AI policies in Mexico. Additionally, he serves as a policy advisor for Meta. He is currently enrolled on the European Master’s Programme on Human Rights and Democratisation (EMA).

Cite as: Cortés Roshdestvensky, Vladimir. "AI and human rights: forging the universal source code", GC Human Rights Preparedness, 8 July 2024, https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/ai-and-human-rights-forging-the-universal-source-code.html


Disclaimer

This site is not intended to convey legal advice. Responsibility for opinions expressed in submissions published on this website rests solely with the author(s). Publication does not constitute endorsement by the Global Campus of Human Rights.

 CC-BY-NC-ND. All content of this initiative is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
