Navigating new technologies in activism: challenges and opportunities for human rights


Harnessing new technologies, activists confront challenges like algorithmic bias and misinformation. Ethical and responsible use of these tools is essential to advance human rights. Despite the risks, technology can empower activism, promoting collective action, accountability and democracy worldwide.

In today's rapidly evolving digital landscape, new technologies have emerged as powerful tools for activism, enabling individuals and organisations to mobilise, document human rights abuses, and hold perpetrators accountable. However, alongside these opportunities come significant challenges, including concerns about data protection, the spread of misinformation, and the manipulation of digital content.

These issues were among the topics discussed during the Fundamental Rights Forum 2024, organised by the European Union Agency for Fundamental Rights (FRA), under one of the three main thematic areas, namely ‘protecting democracy and civic space’. The forum highlighted the importance of using technology ethically to safeguard human rights and combat misinformation. In the thematic panel ‘protecting civic space and human rights defenders’, Doaa Al Zamel (Abo Nabout) and Martin Andreasson (State Secretary to the Minister for Gender Equality and Working Life) emphasised the need for strong support systems for human rights defenders facing increasing risks. Andreasson shared statistics on harassment faced by civil society organisations, mirroring global trends. Balazs Denes (Executive Director of Civil Liberties Union for Europe) and Domagoj Hajduković (Member of Parliament, Parliamentary Assembly of the Council of Europe) stressed the need for collaboration between civil society, tech companies, and policymakers to protect civic spaces and promote media literacy, aligning with the core issue of technology's role in activism and accountability.

In exploring the intersection between new technologies and activism, it is worth addressing how technology has reshaped the landscape of human rights advocacy and the complex dynamics at play. As an activist with experience working in Russia, my primary focus is on examples from that region to illustrate the struggles faced by activists in an authoritarian context.

Amplifying activism through technology
New technologies have revolutionised the way activism is conducted, providing innovative avenues for organising, raising awareness, and effecting change. From social media platforms such as Instagram or Twitter and hashtags like #metoo to crowdfunding websites such as GoFundMe or Just Giving, digital tools have empowered activists to reach broader audiences, mobilise resources, coordinate actions in real time and highlight the struggles of vulnerable groups.

In a repressive state like Russia, where protests and demonstrations are unsafe, many NGOs have shifted their efforts online. Particularly hard-hit communities, such as LGBTQ+ activists, used websites and social networks to spread information until the LGBTQ+ movement was declared extremist on November 30, 2023. Now even that use has become risky and can lead to up to 12 years of imprisonment under article 282.2 of the Russian Criminal Code.

Nonetheless, activists in seemingly freer states such as the United States have also made successful use of these developing technologies. The Black Lives Matter (BLM) movement has effectively used social media platforms to raise awareness about police brutality and systemic racism. Following the killing of George Floyd in May 2020, viral videos of police violence circulated on platforms like Twitter and Instagram, sparking nationwide protests and calls for racial justice. BLM organisers utilised hashtags like #BlackLivesMatter to amplify their message and mobilise supporters, leading to widespread demonstrations and policy reforms.

Similarly, the #MeToo movement, initiated by Tarana Burke in 2006, gained global visibility in 2017 after Alyssa Milano’s tweet, sparking a worldwide movement that challenged the silence surrounding sexual harassment and began holding perpetrators accountable.

The use of livestreaming and citizen journalism has also played a crucial role in documenting human rights abuses and exposing injustices. For instance, eyewitnesses used smartphones to capture and share footage of atrocities, such as the massacre in Bucha, Ukraine. This instant dissemination of information sparked global outrage and catalysed calls for accountability.

Technological challenges and risks
While new technologies offer unprecedented opportunities for activism, they also present significant challenges and risks. For instance, the spread of misinformation and the manipulation of digital content, including the rise of deep fakes, are noteworthy. Deep fakes, which are AI-generated videos that convincingly depict individuals saying or doing things they never did, pose a grave threat to truth and authenticity. As the technology behind deep fakes advances, the potential for malicious actors to manipulate public opinion and undermine trust in institutions grows.

One recent example is a one-minute deep fake video of the Ukrainian President Volodymyr Zelenskiy asking his army to lay down their weapons and stop fighting. Even though the video did not have any major consequences, as the real president reacted swiftly, it gained 5 million views before it was removed. According to Deep Media, the number of deep fakes doubles every six months and will reach 8 million by 2025. Such a worrying development calls for increased education about these technologies and additional protection mechanisms to avoid possible disasters in terms of social trust and security. Deep fakes have the potential to undermine public confidence in authentic communications, erode trust in institutions, and incite unrest by spreading misinformation. Therefore, proactive measures, including public awareness campaigns and advancements in detection technology, are essential to mitigate the risks associated with this rapidly evolving threat.

Data protection is another major concern in the digital age, with personal information often vulnerable to breaches and exploitation. While the European Union has enacted stringent data protection laws, such as the General Data Protection Regulation (GDPR), enforcement and compliance remain uneven. This variability in data protection standards poses challenges for activists and human rights defenders who rely on digital platforms to communicate and organise securely.

The complexities of media and propaganda
Human rights activists must also navigate the complex landscape of media and propaganda. While traditional media outlets play a crucial role in amplifying activists’ messages and shining a spotlight on human rights abuses, they are not immune to bias and manipulation. State-controlled media in authoritarian regimes often serve as instruments of propaganda, spreading disinformation and silencing dissenting voices.

In Russia, media control is exerted through censorship laws. The most recent legislation concerns coverage of the Russian-Ukrainian war: the so-called ‘fake news law’, which can lead to up to 15 years of imprisonment for content deemed to threaten the state. The inclusion of journalists on the lists of ‘foreign agents’, namely those who receive foreign funds to ‘influence’ domestic affairs, and, as of February 2024, the prohibition of any advertisement by people with that status, undermine their primary source of income and reduce the number of independent journalists able to continue working in the country. Such restrictions significantly diminish the availability of objective information to Russian citizens.

The assassination of investigative journalist Anna Politkovskaya in 2006 as well as the poisoning and imprisonment of the opposition figure Alexei Navalny, culminating in his death on February 16, 2024, underscore the high risks faced by those who challenge the Russian government's narrative.

Furthermore, algorithmic bias perpetuates racism in digital spaces, as highlighted by the research of Safiya Umoja Noble. Noble's work demonstrates how search engine algorithms often reinforce harmful stereotypes, disproportionately presenting negative content related to race. The findings of a US study analysing 200 facial recognition algorithms confirm this, showing a higher probability of misidentification of Asians, Blacks, and Native Americans. These results underscore concerns regarding discrimination, wrongful prosecutions and convictions. Addressing algorithmic bias and promoting algorithmic transparency are essential steps toward fostering a more inclusive and democratic digital sphere.

Toward an ethical and responsible use of technology for human rights
In confronting the challenges posed by new technologies, human rights activists and advocates must prioritise an ethical and responsible use. This entails critically evaluating the impact of digital tools on human rights, privacy, and democratic values, advocating for safeguards against abuse and exploitation. As discussed at the FRA Forum, collaboration between civil society, tech companies, and policymakers is essential to develop robust regulatory frameworks and industry standards that protect users while preserving innovation and freedom of expression.

Moreover, media literacy and digital literacy initiatives are crucial to empower individuals to discern facts from fiction and critically evaluate information sources. By promoting media literacy skills, educators and activists can help shield the public from disinformation and equip them with the tools to navigate the digital landscape safely and responsibly.

While new technologies offer immense potential for activism and accountability, they also pose significant risks. Ethical and responsible use of technology empowers human rights activists to amplify their voices, expose abuses, and drive meaningful change for justice and equality. Achieving this vision requires collective action, informed advocacy, and ongoing vigilance to safeguard digital rights and democratic values in an increasingly interconnected world.

Written by Nadeshda Arontschik

Nadeshda Arontschik is a human rights defender and is currently finishing her master’s degree in the European Master’s Programme in Human Rights and Democratisation (EMA) at the Global Campus of Human Rights. She worked for the Friedrich Ebert Foundation (FES) in Russia, managing projects on gender issues, trade unions and social policy matters. Her primary research interests include LGBTQ+, gender studies and civic society movements in Eastern Europe.

Cite as: Arontschik, Nadeshda. "Navigating new technologies in activism: challenges and opportunities for human rights", GC Human Rights Preparedness, 1 July 2024,




This site is not intended to convey legal advice. Responsibility for opinions expressed in submissions published on this website rests solely with the author(s). Publication does not constitute endorsement by the Global Campus of Human Rights.

 CC-BY-NC-ND. All content of this initiative is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
