Semester Award Granted

Spring 2025

Submission Date

May 2025

Document Type

Thesis

Degree Name

Master of Arts (MA)

Thesis/Dissertation Advisor [Chair]

Aaron Veenstra

Abstract

Algorithmic processing and classification of racism-related content have become increasingly prevalent across Meta's platforms. While these algorithms aim to create safer online environments, their effectiveness and impact on fairness remain understudied. This research examines how Meta's algorithmic processes for classifying racist content affect fairness, bias, and social justice across Facebook, Instagram, and WhatsApp.

Using a qualitative research methodology, the study conducted focus groups with diverse platform users to understand their experiences with content moderation. It also analyzed theoretical frameworks related to algorithmic bias, fairness metrics, and social justice in digital spaces.

Key findings revealed significant variations in algorithmic effectiveness across different languages and cultural contexts, with implications for fairness and user experience. The study identified patterns in false positives and negatives, transparency issues, and challenges in handling intersectional content.

These findings contribute to the growing body of research on algorithmic fairness and inform recommendations for improving content moderation systems. They offer valuable insights for technology companies, policymakers, and civil rights advocates working to create more equitable digital spaces.
