Written by Rebecca Nanono
Introduction
Across today’s digital platforms, algorithms promise
personalization, relevance, and convenience. However, beneath this promise lies
a growing risk. When algorithms repeatedly serve users more of the same
content, they can intensify polarization, amplify harmful ideologies, and
accelerate pathways to radicalization.
For digital rights advocates, feminists, and social justice
actors, this is not just a technical flaw. It is a structural governance
problem with deeply gendered and political consequences.
How Algorithmic Repetition Works
Most social media and content platforms rely on engagement-optimizing
algorithms. These systems learn from users’ digital footprints, such as clicks,
likes, shares, watch time, and comments, and then prioritize content that
maximizes attention.
Over time, this creates the following:
- Feedback loops, where users are repeatedly exposed to similar views
- Echo chambers, limiting exposure to alternative perspectives
- Escalation effects, where increasingly extreme content is recommended to sustain engagement
Rather than offering diversity of thought, platforms often
reinforce a narrow worldview, because controversy, outrage, and fear keep users
online longer.
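The feedback loop described above can be sketched in a toy simulation. This is a deliberately simplified illustration, not any platform's actual ranking logic; the topics, numbers, and function names are all invented. The point is only that a system which boosts whatever a user last engaged with quickly collapses onto a single topic:

```python
from collections import Counter

TOPICS = ["politics", "sports", "music", "outrage"]

def rank(items, engagement):
    # Engagement-optimizing step: order items by how much the user
    # has engaged with each topic so far (Python's sort is stable,
    # so early ties break in favor of whatever came first).
    return sorted(items, key=lambda topic: engagement[topic], reverse=True)

def simulate(steps=50, feed_size=2):
    engagement = Counter({topic: 1 for topic in TOPICS})  # uniform starting point
    history = []
    for _ in range(steps):
        feed = rank(TOPICS, engagement)[:feed_size]  # platform serves top-ranked items
        clicked = feed[0]                            # user engages with the top item...
        engagement[clicked] += 1                     # ...which boosts that topic further
        history.append(clicked)
    return history

history = simulate()
# Even from a uniform start, the loop locks onto one topic and never leaves it.
print(set(history))
```

Nothing in this toy loop ever reintroduces diversity: the only signal is past engagement, so the narrowing is a property of the objective itself, not of any one user's choices.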
From Personalization to Radicalization
Radicalization rarely happens all at once. Algorithmic
systems facilitate it incrementally through the following:
- Normalizing extreme narratives. Repeated exposure makes fringe ideas appear common, acceptable, or “truthful.”
- Rewarding divisive content. Content that targets “enemies,” typically women, migrants, marginalised communities, journalists, or activists, often performs well under engagement-based ranking.
- Creating affective intensification. Algorithms privilege emotionally charged content, which accelerates anger, fear, and resentment. These are key drivers of radicalization.
What begins as curiosity or frustration can quickly evolve
into ideological entrenchment.
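The “rewarding divisive content” mechanism above can be made concrete with a single toy ranking step. The posts and engagement scores below are invented for illustration; the point is that a ranker whose only objective is predicted engagement will surface the divisive item first, because nothing in that objective accounts for harm:

```python
# Toy illustration: under pure engagement ranking, the item with the
# highest predicted engagement wins placement, regardless of its content.
posts = [
    {"text": "local news update", "predicted_engagement": 0.04},
    {"text": "nuanced policy explainer", "predicted_engagement": 0.02},
    {"text": "outrage bait targeting a minority", "predicted_engagement": 0.11},
]

ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# The divisive post ranks first; the explainer ranks last.
for post in ranked:
    print(post["text"])
```

The ordering follows mechanically from the objective: no moderator decided to promote the outrage post, and no moderator needs to for it to win.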
A Feminist Lens on Algorithmic Harm
Radicalization online is not gender-neutral.
From a feminist digital rights perspective:
- Women and gender-diverse people are disproportionately targeted by radicalized content, including misogynistic ideologies, anti-feminist movements, and coordinated harassment campaigns.
- Algorithms frequently amplify gendered disinformation, portraying feminists as threats to “culture,” “family,” or “national identity.”
- Online radical spaces often use gendered grievances, such as perceived loss of male authority, to recruit and mobilize users.
Moreover, women, especially in the Global South, are more
likely to face the offline consequences of online radicalization,
including political exclusion, violence, and silencing.
Algorithmic Power and the Global South
In contexts like Uganda and other African countries,
algorithmic radicalization intersects with the following:
- Weak platform accountability mechanisms
- Limited regulatory oversight
- Linguistic and cultural blind spots in content moderation
- High youth unemployment and political frustration
Algorithms trained largely on Global North data often
misinterpret or ignore local contexts, while still aggressively optimizing for
engagement. This leaves already marginalized communities more vulnerable to
manipulation and harm.
Why “Neutral Technology” Is a Myth
Platforms often frame algorithms as neutral tools. Feminist
technology studies challenge this narrative by showing that algorithms reflect:
- The values of their designers
- The incentives of surveillance capitalism
- Existing power hierarchies around gender, race, and geography
When profit-driven systems are left unchecked, they
prioritize engagement over safety, growth over dignity, and scale over care.
Toward Rights-Based and Feminist Alternatives
Addressing algorithmic radicalization requires more than
content moderation. It demands structural change, including:
- Algorithmic transparency and accountability
- Human rights–based platform governance
- Gender-responsive AI impact assessments
- Support for cooperative, public-interest, and community-governed digital platforms
- Meaningful inclusion of feminist, Global South, and youth voices in tech policymaking
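To make one of these demands concrete: in ranking terms, a rights-based assessment could translate into adding a harm penalty to the scoring objective, so that harmful-but-engaging content no longer wins by default. The sketch below is a hypothetical illustration; the posts, scores, and HARM_WEIGHT are invented and do not describe any existing system:

```python
# Toy sketch of a rights-based adjustment: rank by predicted engagement
# minus a weighted harm score. All numbers are illustrative assumptions.
HARM_WEIGHT = 2.0

posts = [
    {"text": "outrage bait", "engagement": 0.11, "harm": 0.08},
    {"text": "local news", "engagement": 0.04, "harm": 0.00},
    {"text": "policy explainer", "engagement": 0.02, "harm": 0.00},
]

def score(post):
    # Penalize predicted harm instead of optimizing engagement alone.
    return post["engagement"] - HARM_WEIGHT * post["harm"]

reranked = sorted(posts, key=score, reverse=True)
print([p["text"] for p in reranked])
```

Of course, who defines the harm score and who sets the weight are exactly the governance questions this post raises; the sketch only shows that “engagement over safety” is a choice of objective, not a technical inevitability.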
A feminist approach insists that technology must serve
collective well-being rather than exploit division.
Conclusion
When algorithms continually serve “more of the same,” they
do more than personalize. They polarize. They harden identities, reward
harm, and quietly reshape political realities.
Challenging algorithmic radicalization is therefore a
digital rights issue, a feminist issue, and a democratic issue. The question is
no longer whether algorithms influence society but whether societies will
reclaim the power to govern them.
Reading & References
- Zeynep Tufekci – “YouTube, the Great Radicalizer” (The New York Times)
- Shoshana Zuboff – The Age of Surveillance Capitalism
- Safiya Umoja Noble – Algorithms of Oppression
- Mozilla Foundation – YouTube Regrets reports
- Amnesty International – Surveillance Giants: How Facebook and Google Threaten Human Rights
- Center for Countering Digital Hate (CCDH) – reports on the algorithmic amplification of extremism
- UN Special Rapporteur on Freedom of Expression – reports on online radicalization and platform responsibility
- Data & Society Research Institute – work on disinformation, gender, and platform governance
- Feminist Internet Research Network (FIRN) – feminist analyses of algorithmic harm
- AlgorithmWatch – research on automated systems and democracy
