DISARM: Detecting the Victims Targeted by Harmful Memes

Published in NAACL’22 (Findings), 2022

Recommended citation: Shivam Sharma, Md Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2022. DISARM: Detecting the Victims Targeted by Harmful Memes. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1572–1588, Seattle, United States. Association for Computational Linguistics. https://aclanthology.org/2022.findings-naacl.118


This paper addresses the misuse of internet memes to harm individuals, communities, or society at large. We introduce DISARM, a deep neural network framework that detects harmful memes and identifies the entities they target. DISARM outperforms competing systems, reducing the error rate for harmful target identification by up to 9%.