At the Digital Rights and Inclusion Forum (DRIF) 2026, a high-level session titled “From Bias to Violence: Feminist Governance of AI and the Fight Against Tech-Enabled Gender Harm” brought renewed attention to conversations on Artificial Intelligence (AI) and gender justice. It was held at the Radisson Blu Hotel, Abidjan Airport, Côte d’Ivoire, on April 15, 2026.
The session, co-hosted by Naija Feminists Media and Human Rights Journalists Network (HRJN), explored how AI bias and online misogyny are connected, drawing on recent findings from Nigeria’s digital space.
Panellists included the Founding Director of Naija Feminists Media, Simbiat Bakare; Israel Olatunji Tijani, Founder/Data Scientist, ChatVE; Executive Director of BO Foundation for Inclusive Media, Blessing Oladunjoye; AI ethicist and journalist at Naija Feminists Media, Kosisochukwu Ani; and Bisola Adediji, tech policy and privacy expert. Kehinde Adegboyega, Executive Director of Human Rights Journalists Network, moderated the session.
The 60-minute session examined how biased datasets and algorithmic systems are fueling new forms of gender-based violence, particularly in Nigeria and across West Africa. Setting the tone, Kehinde introduced the concept of a "pipeline of harm", explaining how bias in AI systems can escalate into real-world abuse, including the spread of deepfakes and AI-generated harassment on digital platforms such as WhatsApp.
Speaking on the realities of tech-enabled violence, Simbiat Bakare highlighted how online abuse is often dismissed as merely virtual, despite its serious physical and professional consequences for women journalists and activists. She noted that generative AI tools are increasing the scale of such attacks, further normalising violence against women.
Drawing on the systemic nature of the issue, Blessing Oladunjoye noted that research shows female journalists and civic advocates are disproportionately targeted by online harm. She highlighted that emerging technologies such as deepfakes and "nudification" tools violate rights to privacy and bodily autonomy, and added that the media has a role to play in challenging AI-driven abuse by engaging relevant platforms and stakeholders to take down harmful content.
From a technical perspective, Israel pointed to the dangers of datasets that inherit societal misogyny, explaining how these biases are amplified when models are deployed across digital platforms. He emphasised that, without intervention, AI systems risk perpetuating existing inequalities, reinforcing discriminatory gender stereotypes, and producing unequal treatment. Citing the recently released Google WAXAL dataset, a new large-scale, openly accessible speech dataset for 21 Sub-Saharan African languages, he warned of a significant performance disparity: any model trained on that dataset will be systematically more accurate for male users than for female users. He further highlighted the primary potential biases, including higher error rates for women and female voices being misinterpreted as background noise.
Shifting toward solutions, Kosi noted that existing Nigerian legal frameworks, such as the Violence Against Persons Prohibition (VAPP) Act, struggle to fully capture and respond to these evolving forms of harm. She recommended a policy review to ensure accountability for digital perpetrators, and advised that governments across West Africa collaborate in holding tech platforms accountable for protecting their citizens.
Bisola unpacked what feminist governance of AI means in practice, stressing that it requires more than representation. According to her, it involves rethinking the principles that guide how AI systems are designed, built, and regulated. She urged that feminists be included as critical stakeholders in the design of AI systems, so that their perspectives help ensure these tools are not weaponised against women.
In the closing segment, panellists and participants acknowledged gaps in feminist governance of AI models and stressed the need for stronger policy responses and a broader cultural shift in how digital safety is understood.






