U of T researchers developing AI system to tackle harmful social media content

Hate speech and misinformation on social media can have a devastating impact, particularly on marginalized communities. But what if we used artificial intelligence to combat such harmful content?

That’s the goal of a team of University of Toronto researchers who were awarded a Catalyst Grant by the Data Sciences Institute (DSI) to develop an AI system to address the marginalization of communities in data-centric systems – including social media platforms such as Twitter. 

The team consists of three faculty members. Syed Ishtiaque Ahmed is an assistant professor in the department of computer science in the Faculty of Arts & Science and a fellow of the Schwartz Reisman Institute for Technology and Society. Shohini Bhattasali is an assistant professor in the department of language studies at U of T Scarborough. Shion Guha is an assistant professor cross-appointed between the department of computer science and the Faculty of Information, the director of the Human-Centered Data Science Lab and a faculty affiliate of the Schwartz Reisman Institute for Technology and Society.

Their goal is to make content moderation more inclusive by involving the communities affected by harmful or hateful content on social media. The project is a collaboration with two Canadian non-profit organizations: the Chinese Canadian National Council for Social Justice (CCNC-SJ) and the Islam Unravelled Anti-Racism Initiative. 

Historically marginalized groups are most affected by content moderation failings because they have lower representation among human moderators and their data is less available to algorithms, Ahmed explains.
