Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
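For reference, here is a minimal sketch of that setting as it would appear in a Jekyll _config.yml (a hypothetical excerpt; the surrounding keys depend on your site's configuration):

    # _config.yml (Jekyll site configuration)
    # When future is false, posts dated in the future are excluded from the build.
    future: false

You can also override this at build time with jekyll build --future, which includes future-dated posts without editing the file.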

Blog Post number 4

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

portfolio

publications

Detecting Harmful Memes and Their Targets

Published in ACL’21 (Findings), 2021

This study explores the use of internet memes in social media, particularly their rise in conveying political and socio-cultural opinions. Harmful memes, often complex and satirical, have become a concern. The study introduces two tasks: detecting harmful memes and identifying their target (individual, organization, etc.). We present HarMeme, a dataset of COVID-19-related memes, and emphasize the importance of multimodal models for these tasks while acknowledging existing limitations and the need for more research.

Recommended citation: Shraman Pramanick, Dimitar Dimitrov, Rituparna Mukherjee, Shivam Sharma, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2021. Detecting Harmful Memes and Their Targets. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2783–2796, Online. Association for Computational Linguistics. https://aclanthology.org/2021.findings-acl.246/

MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets

Published in EMNLP’21 (Findings), 2021

Internet memes are a powerful vehicle for communication, but harmful ones are on the rise and pose detection challenges. The study introduces MOMENTA, a neural framework that detects harmful memes and identifies their targets in a multimodal context. MOMENTA outperforms competing systems while offering interpretability and generalizability.

Recommended citation: Shraman Pramanick, Shivam Sharma, Dimitar Dimitrov, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2021. MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4439–4455, Punta Cana, Dominican Republic. Association for Computational Linguistics. https://aclanthology.org/2021.findings-emnlp.379/

Findings of the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes

Published in Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations (CONSTRAINT22), ACL’22, 2022

The CONSTRAINT 2022 Workshop shared task focused on understanding harmful memes by labeling the roles of entities within them as hero, villain, victim, or none. We curated the HVVMemes dataset, containing 7000 memes related to COVID-19 and US Politics. Despite attracting 105 participants, only 6 submissions were made, with the top submission achieving an F1-score of 58.67.

Recommended citation: Shivam Sharma, Tharun Suresh, Atharva Kulkarni, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. Findings of the CONSTRAINT 2022 Shared Task on Detecting the Hero, the Villain, and the Victim in Memes. In Proceedings of the Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situations, pages 1–11, Dublin, Ireland. Association for Computational Linguistics. https://aclanthology.org/2022.constraint-1.1

Detecting and Understanding Harmful Memes: A Survey

Published in IJCAI’22 (Survey), 2022

The paper addresses the challenge of identifying harmful online content, specifically harmful memes that often mix text, visuals, and audio. It introduces a new typology for harmful memes and highlights gaps in research, like the lack of suitable datasets for some types of harmful memes. The study also discusses challenges in understanding multimodal content and the need for further research in this area.

Recommended citation: Shivam Sharma, Firoj Alam, Md. Shad Akhtar, Dimitar Dimitrov, Giovanni Da San Martino, Hamed Firooz, Alon Halevy, Fabrizio Silvestri, Preslav Nakov, and Tanmoy Chakraborty. 2022. Detecting and Understanding Harmful Memes: A Survey. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22. https://doi.org/10.24963/ijcai.2022/781

DISARM: Detecting the Victims Targeted by Harmful Memes

Published in NAACL’22 (Findings), 2022

This paper addresses the misuse of internet memes for harmful purposes, particularly targeting individuals, communities, or society. We introduce DISARM, a framework that detects and classifies harmful meme targets using deep neural networks. DISARM outperforms other systems, reducing harmful target identification errors by up to 9%.

Recommended citation: Shivam Sharma, Md Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2022. DISARM: Detecting the Victims Targeted by Harmful Memes. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1572–1588, Seattle, United States. Association for Computational Linguistics. https://aclanthology.org/2022.findings-naacl.118

Domain-aware Self-supervised Pre-training for Label-Efficient Meme Analysis

Published in AACL’22 (Main), 2022

The paper presents two self-supervised pre-training methods, Ext-PIE-Net and MM-SimCLR, for multi-modal tasks like meme analysis. These methods use specialized pretext tasks and outperform fully supervised approaches in various meme-related tasks, demonstrating their generalizability and the importance of better multi-modal self-supervision methods.

Recommended citation: Shivam Sharma, Mohd Khizir Siddiqui, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. Domain-aware Self-supervised Pre-training for Label-Efficient Meme Analysis. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 792–805, Online only. Association for Computational Linguistics. https://aclanthology.org/2022.aacl-main.60

Characterizing the Entities in Harmful Memes: Who is the Hero, the Villain, the Victim?

Published in EACL’23 (Main), 2023

The paper discusses the importance of understanding the intent and potential harm associated with viral memes. It focuses on identifying the roles of entities within memes, such as 'hero,' 'villain,' or 'victim.' The study introduces a multi-modal framework called VECTOR for this task, which outperforms standard models. The research also highlights challenges in semantically labeling roles within memes and provides comparative analyses.

Recommended citation: Shivam Sharma, Atharva Kulkarni, Tharun Suresh, Himanshi Mathur, Preslav Nakov, Md. Shad Akhtar, and Tanmoy Chakraborty. 2023. Characterizing the Entities in Harmful Memes: Who is the Hero, the Villain, the Victim?. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2149–2163, Dubrovnik, Croatia. Association for Computational Linguistics. https://aclanthology.org/2023.eacl-main.157

MEMEX: Detecting Explanatory Evidence for Memes via Knowledge-Enriched Contextualization

Published in ACL’23 (Main), 2023

The paper discusses the challenge of understanding the context of memes and introduces the MEMEX task. We create a dataset called MCC and propose a multimodal neural framework called MIME, which outperforms other models by 4% in F1-score. The study also provides detailed performance analyses and insights into cross-modal contextual associations.

Recommended citation: Shivam Sharma, Ramaneswaran S, Udit Arora, Md. Shad Akhtar, and Tanmoy Chakraborty. 2023. MEMEX: Detecting Explanatory Evidence for Memes via Knowledge-Enriched Contextualization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5272–5290, Toronto, Canada. Association for Computational Linguistics. https://aclanthology.org/2023.acl-long.289

What Do You MEME? Generating Explanations for Visual Semantic Role Labelling in Memes

Published in AAAI’23 (Main), 2023

The paper discusses the importance of memes in social media marketing and introduces a task called EXCLAIM, which generates explanations for semantic roles in memes. We create a dataset, ExHVV, and a multi-task learning framework called LUMEN, which outperforms baselines in natural language generation. The study shows that cues for semantic roles in memes also help generate explanations effectively.

Recommended citation: Sharma, S., Agarwal, S., Suresh, T., Nakov, P., Akhtar, M. S., & Chakraborty, T. (2023). What Do You MEME? Generating Explanations for Visual Semantic Role Labelling in Memes. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9763-9771. https://doi.org/10.1609/aaai.v37i8.26166

Emotion-Aware Multimodal Fusion for Meme Emotion Detection

Published in IEEE Transactions on Affective Computing, 2024

Memes are widely used on social media to express opinions, but their complex, modality-specific cues make them hard to analyze, and current methods struggle to capture their emotional dimensions, relying on large datasets and generalizing poorly. We introduce MOOD (Meme emOtiOns Dataset), covering six emotions, and ALFRED (emotion-Aware muLtimodal Fusion foR Emotion Detection), a neural framework that explicitly models visual-emotional cues and cross-modal fusion. ALFRED outperforms existing methods by 4.94% F1, excels on the Memotion task, and generalizes well to the HarMeme and Dank Memes datasets, while offering interpretability through attention maps.

Recommended citation: S. Sharma, R. S, M. S. Akhtar and T. Chakraborty, "Emotion-Aware Multimodal Fusion for Meme Emotion Detection," in IEEE Transactions on Affective Computing, doi: 10.1109/TAFFC.2024.3378698. https://ieeexplore.ieee.org/document/10475492

MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing

Published in ACL’24 (Findings), 2024

Memes, widely used for humor and propaganda, warrant deeper exploration of their potential harm. Previous studies have focused on detecting harm and providing explanations in closed settings. We introduce MemeMQA, a multimodal question-answering framework designed to provide accurate responses and coherent explanations for structured questions about memes. We present MemeMQACorpus, a dataset with 1,880 questions related to 1,122 memes, including answer-explanation pairs. Our proposed ARSENAL framework, leveraging LLMs, outperforms baselines by ~18% in answer prediction accuracy and shows superior text generation in lexical and semantic alignment. We evaluate ARSENAL's robustness through diverse question sets and modality-specific assessments, enhancing our understanding of meme interpretation in multimodal communication.

Recommended citation: Siddhant Agarwal, Shivam Sharma, Preslav Nakov, and Tanmoy Chakraborty. 2024. MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing. In Findings of the Association for Computational Linguistics: ACL 2024, Bangkok, Thailand. Association for Computational Linguistics. https://arxiv.org/abs/2405.11215

Factuality Challenges in the Era of Large Language Models and Opportunities for Fact-Checking

Published in Nature Machine Intelligence, 2024

The emergence of tools based on Large Language Models (LLMs), such as OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered immense public attention. These incredibly useful, natural-sounding tools mark significant advances in natural language generation, yet they exhibit a propensity to generate false, erroneous, or misleading content – commonly referred to as "hallucinations." Moreover, LLMs can be exploited for malicious applications, such as generating false but credible-sounding content and profiles at scale. This poses a significant challenge to society in terms of the potential deception of users and the increasing dissemination of inaccurate information. In light of these risks, we explore the kinds of technological innovations, regulatory reforms, and AI literacy initiatives needed from fact-checkers, news organizations, and the broader research and policy communities. By identifying the risks, the imminent threats, and some viable solutions, we seek to shed light on navigating various aspects of veracity in the era of generative AI.

Recommended citation: Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer, Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, and Giovanni Zagni. 2024. Factuality Challenges in the Era of Large Language Models and Opportunities for Fact-Checking. Nature Machine Intelligence. https://arxiv.org/abs/2310.05189

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.