PUBLICATIONS
2023:
1) ConvEx: A Visual Conversation Exploration System for Discord Moderators
Frederick Choi, Tanvi Bajpai, Sowmya Pratipati, Eshwar Chandrasekharan | CSCW 2023 | Paper (forthcoming)
- Moderators are at the core of maintaining healthy online communities. For these moderators, who are often volunteers from the community, filtering through content and responding to misbehavior in a timely manner has become increasingly challenging as online communities continue to grow. To address such challenges of scale, recent research has looked into designing better tools for moderators of various platforms (e.g., Reddit, Twitch, Facebook, and Twitter). In this paper, we focus on Discord, a platform where communities typically engage in large, synchronous group chats, creating an environment that is faster-paced and less structured than previously studied platforms. To tackle the unique challenges presented by Discord, we developed a new human-AI system called ConvEx for exploring online conversations. ConvEx is an AI-augmented version of the standard Discord interface designed to help moderators be proactive in identifying and preventing potential problems. It provides visual embeddings of conversational metrics, such as activity and toxicity levels, and can be extended to visualize other metrics. Through a user study with eight active moderators of Discord servers, we found that ConvEx supported several high-level strategies for monitoring a server and analyzing conversations. ConvEx allowed moderators to obtain a holistic view of activity across multiple channels on the server while guiding their attention towards problematic conversations and messages within a channel; it also helped them identify the contextual information needed to judge the reliability of the AI analysis and to pick up on contextual nuances that the AI missed. We conclude with design considerations for integrating AI into future interfaces for moderating synchronous, unstructured online conversations.
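To make the kind of conversational metrics ConvEx visualizes concrete, here is a minimal sketch (not the ConvEx implementation) that aggregates per-channel activity and toxicity from a stream of chat messages; the message schema, the channel_metrics helper, and the toxicity_scorer callable are assumptions for illustration.

    # Illustrative sketch only, not ConvEx's code: per-channel activity and toxicity,
    # the two conversational metrics named in the abstract. The message schema and the
    # toxicity_scorer callable are hypothetical stand-ins.
    from collections import defaultdict
    from statistics import mean

    def channel_metrics(messages, toxicity_scorer, window_minutes=10):
        """messages: iterable of dicts like {"channel": str, "minute": int, "text": str}."""
        by_channel = defaultdict(list)
        for m in messages:
            by_channel[m["channel"]].append(m)
        summary = {}
        for channel, msgs in by_channel.items():
            latest = max(m["minute"] for m in msgs)
            recent = [m for m in msgs if m["minute"] >= latest - window_minutes]
            summary[channel] = {
                "activity": len(recent) / window_minutes,                      # messages per minute
                "toxicity": mean(toxicity_scorer(m["text"]) for m in recent),  # mean score in window
            }
        return summary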
2) Measuring User-Moderator Alignment on r/ChangeMyView
Vinay Koshy, Tanvi Bajpai, Eshwar Chandrasekharan, Hari Sundaram, Karrie Karahalios | CSCW 2023 | Paper (forthcoming)
- Social media sites like Reddit, Discord, and Clubhouse utilize a community-reliant approach to content moderation. Under this model, volunteer moderators are tasked with setting and enforcing content rules within the platforms' sub-communities. However, few mechanisms exist to ensure that the rules set by moderators reflect the values of their community. Misalignments between users and moderators can be detrimental to community health, yet little quantitative work has been done to evaluate the prevalence or nature of user-moderator misalignment. Through a survey of 798 users on r/ChangeMyView, we evaluate user-moderator alignment at the level of policy-awareness (do users know what the rules are?), practice-awareness (do users know how the rules are applied?), and policy-/practice-support (do users agree with the rules and how they are applied?). We find that policy-support is high, while practice-support is low: using a hierarchical Bayesian model, we estimate the correlation between community opinion and moderator decisions to range from .14 to .45 across subreddit rules. Surprisingly, these correlations were only slightly higher when users were asked to predict moderator actions, demonstrating low awareness of moderation practices. Our findings demonstrate the need for careful analysis of user-moderator alignment at multiple levels. We argue that future work should focus on building tools to empower communities to conduct these analyses themselves.
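As a simplified stand-in for the hierarchical Bayesian model mentioned above, the sketch below computes a per-rule correlation between survey respondents' remove/keep judgments and moderators' actual decisions; the per_rule_alignment helper and its input format are assumptions, not the paper's code.

    # Simplified stand-in, not the paper's hierarchical Bayesian model: a per-rule
    # Pearson correlation between user judgments and moderator decisions (both coded 0/1).
    from statistics import correlation  # Python 3.10+

    def per_rule_alignment(responses):
        """responses: dict mapping rule -> list of (user_judgment, mod_decision) pairs."""
        return {
            rule: correlation([u for u, _ in pairs], [m for _, m in pairs])
            for rule, pairs in responses.items()
            if len(pairs) >= 2  # correlation needs at least two (non-constant) pairs
        }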
2022:
1) Conversational Resilience: Quantifying and Predicting Conversational Outcomes Following Adverse Events
Charlotte Lambert, Ananya Rajagopal, Eshwar Chandrasekharan | ICWSM 2022 | Paper
- Online conversations, just like offline ones, are susceptible to influence by bad actors. These users have the capacity to derail neutral or even prosocial discussions through adverse behavior. Moderators and users alike would benefit from more resilient online conversations, i.e., those that can survive the influx of adverse behavior to which many conversations fall victim. In this paper, we examine the notion of conversational resilience: what makes a conversation more or less capable of withstanding an adverse interruption? Working with 11.5M comments from eight mainstream subreddits, we compiled more than 5.8M comment threads (i.e., conversations). Using 239K relevant conversations, we examine how well comment, user and subreddit characteristics can predict conversational outcomes. More than half of all conversations proceed after the first adverse event. Six out of ten conversations that proceed result in future removals. Comments violating platform-wide norms and those written by authors with a history of norm violations lead to not only more norm violations, but also fewer prosocial outcomes. However, conversations in more populated subreddits and conversations where the first adverse event's author was initially a strong contributor are capable of minimizing future removals and promoting prosocial outcomes after an adverse event. By understanding factors that contribute to conversational resilience we shed light onto what types of behavior can be encouraged to promote prosocial outcomes even in the face of adversity.
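A minimal sketch of the prediction setup described above, assuming a generic feature matrix (e.g., the adverse comment's score, the author's prior removals, subreddit size) and binary outcome labels; fit_outcome_model is a hypothetical helper, not the paper's model.

    # Minimal sketch, not the paper's model: predict a conversational outcome (e.g.,
    # "a future removal occurs after the first adverse event") from comment, user,
    # and subreddit features. Feature choices here are hypothetical.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def fit_outcome_model(X, y):
        """X: (n_conversations, n_features) feature matrix; y: 0/1 outcome labels."""
        model = LogisticRegression(max_iter=1000)
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        return model.fit(X, y), auc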
2) Harmonizing the Cacophony with MIC: An Affordance-Aware Framework for Platform Moderation
Tanvi Bajpai, Drshika Asher, Anwesa Goswami, Eshwar Chandrasekharan | CSCW 2022 | Paper
- Online social platforms are evolving at a rapid pace. With the addition of new features like real-time audio, the landscape of online communities and the moderation work done on these communities is being outpaced by platform development. In this paper, we present a novel framework that allows us to represent the dynamic moderation ecosystems of social platforms using a base set of 12 platform-level affordances, along with inter-affordance relationships. These affordances fall into three categories: Members, Infrastructure, and Content. We call this the MIC framework, and apply MIC to analyze several social platforms in two case studies. First, we analyze individual platforms using MIC and demonstrate how MIC can be used to examine the effects of platform changes on the moderation ecosystem and identify potential new challenges in moderation. Next, we systematically compare three platforms using MIC and propose potential moderation mechanisms that platforms can adapt from one another. Moderation researchers and platform designers can use such comparisons to uncover where platforms can emulate established, successful, and better-studied platforms, as well as learn from the pitfalls other platforms have encountered.
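A toy data structure, not an artifact from the paper, showing how a platform's moderation ecosystem could be recorded MIC-style: affordances tagged with one of the three categories plus directed inter-affordance relationships. The MICProfile class and any affordance names supplied to it are placeholders.

    # Illustrative only: recording a platform's affordances under the three MIC categories
    # (Members, Infrastructure, Content) and the relationships between them.
    from dataclasses import dataclass, field

    @dataclass
    class MICProfile:
        platform: str
        affordances: dict = field(default_factory=dict)   # name -> category
        relationships: set = field(default_factory=set)   # (source affordance, target affordance)

        def add(self, name, category):
            assert category in {"Members", "Infrastructure", "Content"}
            self.affordances[name] = category

        def relate(self, source, target):
            self.relationships.add((source, target))

        def missing_from(self, other):
            """Affordances this platform has that another lacks, for cross-platform comparison."""
            return self.affordances.keys() - other.affordances.keys()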
2021:
1) Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, Eric Gilbert | TOCHI 2021 | Paper
- Should social media platforms override a community's self-policing when it repeatedly breaks rules? What actions can they consider? In light of this debate, platforms have begun experimenting with softer alternatives to outright bans. We examine one such intervention, called quarantining, which impedes direct access to and promotion of controversial communities. Specifically, we present two case studies of what happened when Reddit quarantined the influential communities r/TheRedPill (TRP) and r/The_Donald (TD). Using over 85M Reddit posts, we apply causal inference methods to examine the quarantine's effects on TRP and TD. We find that the quarantine made it more difficult to recruit new members: new user influx to TRP and TD decreased by 79.5% and 58%, respectively. Despite the quarantine, existing users' misogyny and racism levels remained unaffected. We conclude by reflecting on the effectiveness of this design friction in limiting the influence of toxic communities and discuss broader implications for content moderation.
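A back-of-the-envelope sketch of the headline quantity, assuming weekly counts of first-time posters are available; the influx_drop_percent helper is hypothetical, and the paper's actual estimates come from causal inference methods rather than this raw pre/post comparison.

    # Rough pre/post comparison only; the paper's estimates come from causal inference,
    # not this naive difference.
    def influx_drop_percent(weekly_new_users, quarantine_week):
        """weekly_new_users: counts of first-time posters per week;
        quarantine_week: index of the week the quarantine took effect."""
        pre = weekly_new_users[:quarantine_week]
        post = weekly_new_users[quarantine_week:]
        pre_mean = sum(pre) / len(pre)
        post_mean = sum(post) / len(post)
        return 100 * (pre_mean - post_mean) / pre_mean  # ~79.5 for TRP, ~58 for TD per the paper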
2) Conversations Gone Alright: Quantifying and Predicting Prosocial Outcomes in Online Conversations
Jiajun Bao, Junjie Wu, Yiming Zhang, Eshwar Chandrasekharan, David Jurgens | WWW 2021 | Paper
- Online conversations can go in many directions: some turn out poorly due to antisocial behavior, while others turn out positively to the benefit of all. Research on improving online spaces has focused primarily on detecting and reducing antisocial behavior. Yet we know little about positive outcomes in online conversations and how to increase them: is a prosocial outcome simply the lack of antisocial behavior, or something more? Here, we examine how conversational features lead to prosocial outcomes within online discussions. We introduce a series of new theory-inspired metrics to define prosocial outcomes such as mentoring and esteem enhancement. Using a corpus of 26M Reddit conversations, we show that these outcomes can be forecasted from the initial comment of an online conversation, with the best model providing a relative 24% improvement over human forecasting performance at ranking conversations by predicted outcome. Our results indicate that platforms can use these early cues in their algorithmic ranking of early conversations to prioritize better outcomes.
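A sketch of the forecast-then-rank idea, assuming training data of initial comments labeled with whether a prosocial outcome later occurred; the rank_by_forecast helper and the generic text classifier below are illustrative stand-ins, not the paper's model.

    # Generic sketch, not the paper's model: score conversations from the initial comment
    # alone and rank them by predicted prosocial outcome.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def rank_by_forecast(train_first_comments, train_labels, new_first_comments):
        """train_labels: 1 if the conversation later showed a prosocial outcome, else 0."""
        model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
        model.fit(train_first_comments, train_labels)
        scores = model.predict_proba(new_first_comments)[:, 1]
        return sorted(zip(new_first_comments, scores), key=lambda pair: -pair[1])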
2020:
1) Still out there: Modeling and Identifying Russian Troll Accounts on Twitter
Jane Im, Eshwar Chandrasekharan, Jackson Sargent, Paige Lighthammer, Taylor Denby, Ankit Bhargava, Libby Hemphill, David Jurgens, Eric Gilbert | WebSci 2020 | Paper
- There is evidence that Russia's Internet Research Agency attempted to interfere with the 2016 U.S. election by running fake accounts on Twitter—often referred to as "Russian trolls". In this work, we: 1) develop machine learning models that predict whether a Twitter account is a Russian troll within a set of 170K control accounts; and, 2) demonstrate that it is possible to use this model to find active accounts on Twitter still likely acting on behalf of the Russian state. Using both behavioral and linguistic features, we show that it is possible to distinguish between a troll and a non-troll with a precision of 78.5% and an AUC of 98.9%, under cross-validation. Applying the model to out-of-sample accounts still active today, we find that up to 2.6% of top journalists' mentions are occupied by Russian trolls. These findings imply that the Russian trolls are very likely still active today.
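To illustrate what "behavioral and linguistic features" can look like in practice, here is a hypothetical feature-construction helper; the account_features function and the tweet schema are assumptions, and the actual feature set and models are described in the paper, not here.

    # Hypothetical features in the spirit of the paper's behavioral/linguistic split;
    # the tweet schema is an assumption for illustration.
    def account_features(tweets, account_age_days):
        """tweets: list of dicts like {"text": str, "mentions": list, "hashtags": list,
        "is_retweet": bool}."""
        n = len(tweets) or 1
        return {
            "tweets_per_day": len(tweets) / max(account_age_days, 1),    # behavioral
            "retweet_ratio": sum(t["is_retweet"] for t in tweets) / n,   # behavioral
            "mentions_per_tweet": sum(len(t["mentions"]) for t in tweets) / n,
            "hashtags_per_tweet": sum(len(t["hashtags"]) for t in tweets) / n,
            "avg_tweet_length": sum(len(t["text"]) for t in tweets) / n,  # crude linguistic proxy
        }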
2) Synthesized Social Signals: Computationally-Derived Social Signals from Account Histories
Jane Im, Sonali Tandon, Eshwar Chandrasekharan, Taylor Denby, Eric Gilbert | CHI 2020 | Paper
- In this paper, we propose a new idea called synthesized social signals (S3s): social signals computationally derived from an account's history, and then rendered into the profile. To demonstrate and explore the concept, we built Sig, an extensible Chrome extension that computes and visualizes S3s. Results from field deployments show that Sig reduced receiver costs, added important signals beyond conventionally available ones, and that a few users felt safer using Twitter as a result.
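A toy illustration of deriving one synthesized social signal from an account history; Sig's real signals and rendering live in the Chrome extension, and the synthesize_signal helper and flag callable below are placeholders.

    # Toy S3 computation, not Sig's code: reduce an account's history to a small signal
    # that could be rendered next to its profile.
    def synthesize_signal(posts, flag):
        """posts: texts from the account's history; flag: callable marking posts a
        receiver might want to be warned about (placeholder for a real classifier)."""
        flagged = [p for p in posts if flag(p)]
        return {
            "history_size": len(posts),
            "flagged_share": len(flagged) / len(posts) if posts else 0.0,
            "examples": flagged[:3],  # surfaced so receivers can verify the signal themselves
        }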
Doctoral Thesis:
Combatting Abusive Behavior in Online Communities Using Cross-Community Learning | Georgia Tech | Thesis
- Defended PhD thesis on March 3, 2020. Graduated with a PhD in CS from Georgia Tech on May 1, 2020.
2019:
1) Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators
Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, Eric Gilbert | CSCW 2019 | Paper
- In this paper, we introduce a novel sociotechnical moderation system for Reddit called Crossmod. Through formative interviews with 11 active moderators from 10 different subreddits, we learned about the limitations of currently available automated tools and built a new system that extends their capabilities. To the best of our knowledge, Crossmod is the first open source, AI-backed sociotechnical moderation system to be designed using participatory methods.
2) A Just and Comprehensive Strategy for Using NLP to Address Online Abuse
David Jurgens, Eshwar Chandrasekharan, Libby Hemphill | ACL 2019 | Paper
- Online abusive behavior affects millions, and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse. However, current methods have largely focused on a narrow definition of abuse, to the detriment of victims who seek both validation and solutions. In this position paper, we argue that the community needs to make three substantive changes: (1) expanding our scope of problems to tackle both more subtle and more serious forms of abuse, (2) developing proactive technologies that counter or inhibit abuse before it harms, and (3) reframing our efforts within a framework of justice to promote healthy communities.
3) Prevalence and Psychological Effects of Hateful Speech in Online College Communities
Koustuv Saha, Eshwar Chandrasekharan, Munmun De Choudhury | WebSci 2019 | Paper
- We employ a causal-inference framework to study the psychological effects of hateful speech in online college communities on Reddit, particularly in the form of individuals' online stress expression. Our findings suggest that exposure to hate leads to greater stress expression. However, not everyone exposed is equally affected: some show lower psychological endurance to hate than others. Low-endurance individuals are more vulnerable to emotional outbursts and are more neurotic than those with higher endurance.
4) Hybrid Approaches to Detect Comments Violating Macro Norms on Reddit
Eshwar Chandrasekharan, Eric Gilbert | (under submission) | Paper on arXiv | Dataset
- In this dataset paper, we present a three-stage process to collect Reddit comments that were removed by moderators of several subreddits for violating subreddit rules and guidelines. Working with over 2.8M removed comments collected from 100 different communities on Reddit, we identify 8 macro norms (i.e., norms that are widely enforced on most parts of Reddit). We extract these macro norms by employing a hybrid approach (classification, topic modeling, and open-coding) on comments identified to be norm violations within at least 85 out of the 100 study subreddits. Finally, we label over 40K Reddit comments removed by moderators according to the specific type of macro norm being violated, and make this dataset publicly available.
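The thresholding step described above can be sketched as follows, assuming removed comments have already been mapped to violation categories by the earlier classification, topic-modeling, and open-coding stages; the macro_norms helper is illustrative, not the released pipeline.

    # Simplified final stage only: a violation category counts as a macro norm if it is
    # enforced in at least 85 of the 100 study subreddits.
    from collections import defaultdict

    def macro_norms(removals, threshold=85):
        """removals: iterable of (subreddit, violation_category) pairs."""
        subreddits_per_category = defaultdict(set)
        for subreddit, category in removals:
            subreddits_per_category[category].add(subreddit)
        return {c for c, subs in subreddits_per_category.items() if len(subs) >= threshold}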
2018:
1) The Internet’s Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales
Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, Eric Gilbert | CSCW 2018 | Paper
- In this paper, we study community norms on Reddit in a large-scale, empirical manner. Via 2.8M comments removed by moderators of 100 top subreddits over 10 months, we use both computational and qualitative methods to identify three types of norms: macro norms that are universal to most parts of Reddit; meso norms that are shared across certain groups of subreddits; and micro norms that are specific to individual, relatively unique subreddits. Given the size of Reddit's user base, we argue this represents the first large-scale study of norms across disparate online communities. In other words, these findings shed light on what Reddit values, and how widely held those values are.
2017:
1) You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech
Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, Eric Gilbert | CSCW 2017 | Paper
- In 2015, Reddit closed several subreddits—foremost among them r/fatpeoplehate and r/CoonTown—due to violations of Reddit's anti-harassment policy. However, the effectiveness of banning as a moderation approach remains unclear: banning might diminish hateful behavior, or it may relocate such behavior to different parts of the site. We study the ban of r/fatpeoplehate and r/CoonTown in terms of its effects on both participating users and affected subreddits. Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown "migrants", those subreddits saw no significant change in hate speech usage. In other words, other subreddits did not inherit the problem.
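A minimal lexicon-counting sketch of the measurement above, assuming a hate lexicon and a user's comments split around the ban date; the hate_rate and relative_change helpers are illustrative, while the paper's analysis generates its lexicons from the banned subreddits and uses causal inference with matched controls rather than this raw before/after rate.

    # Naive before/after measurement only; lexicon contents and the causal design are
    # the paper's, not this sketch's.
    def hate_rate(comments, lexicon):
        """comments: list of comment texts; lexicon: set of lowercase hate terms."""
        hits = sum(any(word in lexicon for word in c.lower().split()) for c in comments)
        return hits / len(comments) if comments else 0.0

    def relative_change(pre_comments, post_comments, lexicon):
        before = hate_rate(pre_comments, lexicon)
        after = hate_rate(post_comments, lexicon)
        return (after - before) / before if before else None  # roughly -0.8 per the paper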
2) The Bag of Communities: Identifying Abusive Behavior Online with Preexisting Internet Data
Eshwar Chandrasekharan, Mattia Samory, Anirudh Srinivasan, Eric Gilbert | CHI 2017 | Paper
- We introduce a novel computational approach for identifying abusive behavior online called the Bag of Communities (BoC)—a technique that leverages large-scale, preexisting data from other Internet communities. Using this conceptual and empirical work, we argue that the BoC approach may allow communities to deal with a range of common problems, like abusive behavior, faster and with fewer engineering resources.
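A sketch of the cross-community idea under stated assumptions: posts from known abusive source communities serve as positive examples and posts from prosocial source communities as negatives, so a target community can be scored with no labels of its own. The bag_of_communities_scores helper and the generic text classifier are stand-ins, not the paper's models.

    # Generic stand-in for the BoC classifiers described in the paper.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def bag_of_communities_scores(abusive_posts, prosocial_posts, target_posts):
        texts = list(abusive_posts) + list(prosocial_posts)
        labels = [1] * len(abusive_posts) + [0] * len(prosocial_posts)
        model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
        model.fit(texts, labels)
        return model.predict_proba(list(target_posts))[:, 1]  # higher = more abuse-like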
3) Situated Anonymity: Impacts of Anonymity, Ephemerality, and Hyper-Locality on Social Media
Ari Schlesinger, Eshwar Chandrasekharan, Christina Masden, Amy Bruckman, W Keith Edwards, Rebecca Grinter | CHI 2017 | Paper
- We conducted an interview-based study with 18 Yik Yak users on an urban university campus to examine the factors that were integral to the success and popularity of Yik Yak during its initial deployment.
2015:
1) Footprints on Silicon: Explorations in Gathering Autobiographical Content
Eshwar Chandrasekharan, Sutanu Chakraborti | CICLing (IJCLA) 2015 | Paper
- We built a system that identifies emails containing autobiographical content, in order to aid autobiographical summarization of a user's mail inbox over the years. This content can be used to generate a story of the user's life, an autobiography of sorts.