SELECTED PUBLICATIONS
2025:
1) Does Positive Reinforcement Work?: A Quasi-Experimental Study of the Effects of Positive Feedback on Reddit
Charlotte Lambert, Koustuv Saha, Eshwar Chandrasekharan | CHI 2025 | Paper (forthcoming)
- Social media platform design often incorporates explicit signals of positive feedback. Some moderators provide positive feedback with the goal of positive reinforcement, but are often unsure of their ability to actually influence user behavior. Despite its widespread use and theory touting positive feedback as crucial for user motivation, its effect on recipients is relatively unknown. This paper examines how positive feedback impacts Reddit users and evaluates its differential effects to understand who benefits most from receiving positive feedback. Through a causal inference study of 11M posts across 4 months, we find that users who received positive feedback made more frequent (2% per day) and higher quality (57% higher score; 2% fewer removals per day) posts compared to a set of matched control users. Our findings highlight the need for platforms and communities to expand their perspective on moderation and complement punitive approaches with positive reinforcement strategies.
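To make the matched-control design concrete, here is a minimal, hypothetical sketch (not the authors' code) of matching users who received positive feedback to control users on pre-treatment covariates and then comparing post-treatment posting rates; the data, covariates, and column names are invented for illustration.

```python
# Hypothetical sketch of a matched-control comparison (NOT the paper's code):
# match treated users (received positive feedback) to control users on
# pre-treatment covariates, then compare post-treatment posting rates.
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
users = pd.DataFrame({
    "treated": rng.integers(0, 2, 1000),               # 1 = received positive feedback
    "prior_posts_per_day": rng.gamma(2.0, 1.0, 1000),  # pre-treatment activity
    "prior_score": rng.normal(10, 5, 1000),            # pre-treatment average score
    "post_posts_per_day": rng.gamma(2.1, 1.0, 1000),   # outcome of interest
})

covariates = ["prior_posts_per_day", "prior_score"]
treated = users[users.treated == 1]
control = users[users.treated == 0]

# One-to-one nearest-neighbor matching on standardized covariates.
scale = users[covariates].std()
nn = NearestNeighbors(n_neighbors=1).fit(control[covariates] / scale)
_, idx = nn.kneighbors(treated[covariates] / scale)
matched_control = control.iloc[idx.ravel()]

effect = treated["post_posts_per_day"].mean() - matched_control["post_posts_per_day"].mean()
print(f"Estimated effect on posts per day: {effect:+.3f}")
```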
Fred Choi, Charlotte Lambert, Vinay Koshy, Sowmya Pratipati, Tue Do, Eshwar Chandrasekharan | CHI 2025 | Paper (forthcoming)
- Much of the research in online moderation focuses on punitive actions. However, emerging research has shown that positive reinforcement is effective at encouraging desirable behavior on online platforms. We extend this research by studying the “creator heart” feature on YouTube, quantifying its primary effects on comments that receive hearts and on videos where creators have given out hearts. We find that creator hearts increased the visibility of comments and increased the amount of positive engagement they received from other users. We also find that the presence of a creator-hearted comment soon after a video is published can incentivize viewers to comment, increasing the total engagement with the video over time. We discuss how creators can use hearts to shape behavior in their communities by highlighting, rewarding, and incentivizing desirable behaviors from users, and we outline avenues for extending our study to understand positive signals from moderators on other platforms.
Xianyang Zhan*, Agam Goyal*, Yilun Chen, Eshwar Chandrasekharan, Koustuv Saha | NAACL 2025 (main conf.) | Paper (forthcoming)
- Large language models (LLMs) have shown promise in many natural language understanding tasks, including content moderation. However, these models can be expensive to query in real time and do not allow for a community-specific approach to content moderation. To address these challenges, we explore the use of open-source small language models (SLMs) for community-specific content moderation tasks. We fine-tune and evaluate SLMs (fewer than 15B parameters), comparing their performance against much larger open- and closed-source models in both zero-shot and few-shot settings. Using 150K comments from 15 popular Reddit communities, we find that SLMs outperform zero-shot LLMs at content moderation—11.5% higher accuracy and 25.7% higher recall on average across all communities. Moreover, few-shot in-context learning yields only a marginal increase in LLM performance, which still falls short of SLMs. We further show the promise of cross-community content moderation, which has implications for new communities and the development of cross-platform moderation techniques. Finally, we outline directions for future work on language model-based content moderation.
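As a rough illustration of what fine-tuning a small open-source model as a per-community moderation classifier can look like, here is a hedged sketch using Hugging Face transformers; the model choice (distilroberta-base), label scheme, and hyperparameters are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch of fine-tuning a small open-source model as a per-community
# "remove vs. keep" classifier. Model choice, columns, and hyperparameters are
# illustrative assumptions, not the paper's setup.
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

comments = pd.DataFrame({
    "text": ["example rule-breaking comment", "example acceptable comment"],
    "label": [1, 0],  # 1 = removed by this community's moderators
})

model_name = "distilroberta-base"  # stand-in for a small language model (<15B params)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = Dataset.from_pandas(comments).map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=256),
    batched=True,
)

args = TrainingArguments(output_dir="mod-clf", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=dataset).train()
```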
2024:
1) Understanding Community Resilience: Quantifying the Effects of Sudden Popularity via Algorithmic Curation
Jackie Chan, Charlotte Lambert, Frederick Choi, Stevie Chancellor, Eshwar Chandrasekharan | ICWSM 2024 | Paper
- The sudden popularity communities receive via algorithmically-curated "trending" or "hot" social media feeds can be beneficial or disruptive. On one hand, increased attention often brings new users and promotes community growth. On the other hand, the unexpected influx of newcomers can burden already overworked moderation teams. To examine the impact of sudden popularity, we studied 6,306 posts that reached Reddit's front page—a feed called r/popular that millions of users browse daily—and the effects of sudden popularity within 1,320 subreddits. We find that on average, r/popular posts have 45 times the comments, 42 times the removed comments, and 70 times the number of newcomers compared to posts that did not reach r/popular from the same community. Additionally, r/popular posts led to a peak 85% median increase in the subreddit's comment rate, and these effects lingered for about 12 hours. Our regression analysis shows that stricter moderation and previous r/popular appearances were associated with shortened and less intense effects on the community. By quantifying the differential effects of sudden popularity, we provide recommendations for moderators to promote stability and resilience in the face of unexpected disruptions.
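A simplified sketch of the before/after comparison behind a reported change in comment rate might look like the following; the DataFrame columns, timestamps, and window length are hypothetical placeholders.

```python
# Toy sketch of a before/after comparison of a subreddit's comment rate around
# the time one of its posts hits r/popular. Columns, times, and the 12-hour
# window are placeholders.
import pandas as pd

comments = pd.DataFrame({   # one row per comment in the subreddit
    "subreddit": ["a", "a", "b"],
    "created": pd.to_datetime(["2024-01-01 10:05", "2024-01-01 13:10", "2024-01-01 09:00"]),
})
popular_hits = pd.DataFrame({   # when each subreddit's post reached r/popular
    "subreddit": ["a", "b"],
    "hit_time": pd.to_datetime(["2024-01-01 12:00", "2024-01-01 10:00"]),
})

def comment_rate(sub, start, hours):
    """Comments per hour for `sub` in the window [start, start + hours)."""
    mask = ((comments.subreddit == sub)
            & (comments.created >= start)
            & (comments.created < start + pd.Timedelta(hours=hours)))
    return mask.sum() / hours

WINDOW = 12  # hours; the abstract reports effects lingering for about 12 hours
rows = []
for _, hit in popular_hits.iterrows():
    before = comment_rate(hit.subreddit, hit.hit_time - pd.Timedelta(hours=WINDOW), WINDOW)
    after = comment_rate(hit.subreddit, hit.hit_time, WINDOW)
    rows.append({"subreddit": hit.subreddit,
                 "relative_change": (after - before) / max(before, 1e-9)})

print(pd.DataFrame(rows)["relative_change"].median())
```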
Charlotte Lambert, Frederick Choi, Eshwar Chandrasekharan | CSCW 2024 | Paper
- The role of a moderator is often characterized as solely punitive; however, moderators have the power not only to execute reactive and punitive actions but also to create norms and support the values they want to see within their communities. One way moderators can proactively foster healthy communities is through positive reinforcement, but we do not currently know whether moderators on Reddit enforce their norms by providing positive feedback to desired contributions. To fill this gap in our knowledge, we surveyed 115 Reddit moderators to build two taxonomies: one of the content and behavior that moderators want to encourage, and another of the actions moderators take to encourage desirable contributions. We found that prosocial behavior, engaging with other users, and staying within the topic and norms of the subreddit are the behaviors moderators most frequently want to encourage. We also found that moderators are taking actions to encourage desirable contributions, specifically through built-in Reddit mechanisms (e.g., upvoting), replying to the contribution, and explicitly approving the contribution in the moderation queue. Furthermore, moderators reported taking these actions specifically to reinforce desirable behavior to the original poster and other community members, even though many of the actions are anonymous, so recipients are unaware that they are receiving feedback from moderators. Importantly, some moderators who do not currently provide feedback do not object to the practice; rather, they are discouraged by the lack of explicit tools for positive reinforcement and the fact that their fellow moderators are not engaging in methods of encouragement. We draw on the taxonomy of actions moderators take, the reasons moderators are deterred from providing encouragement, and suggestions from the moderators themselves to discuss implications for designing tools that provide positive feedback.
Dominic Zaun Eu Jones, Eshwar Chandrasekharan | CSCW 2024 | Paper
- Trust is crucial for the functioning of complex societies. Testimony, from one speaker to another, underlies many social systems. Epistemic trust, or testimonial credibility, is the likelihood of accepting a speaker's claim based on belief in their competence or sincerity. Epistemic trust is closely related to several "pathological epistemic phenomena": democratic (il)legitimacy, the spread of misinformation, and echo chambers. To the best of our knowledge, this theoretical contribution is novel in the field of social computing. We further argue that epistemic trust is no philosophical novelty: it is measurable. Weakly supervised text classification approaches achieve F1 scores of around 80 to 85 per cent on detecting epistemic distrust. This is also, to the best of our knowledge, a novel task in natural language processing. We measure expressions of epistemic distrust across 954 political communities on Reddit. We find that expressions of epistemic distrust are relatively rare, although there are substantial differences between communities. Conspiratorial communities and those focused on controversial political topics tend to express more distrust, while communities with strong epistemic norms enforced by moderation tend to express low levels of distrust. While we find users to be an important potential source of contagion of epistemic distrust, community norms appear to dominate. It is likely that epistemic trust is more useful as an aggregated risk factor. Finally, we argue that policymakers should be aware of epistemic trust, given their reliance on legitimacy underwritten by testimony.
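One plausible, deliberately simplified way to set up weakly supervised detection of epistemic distrust is to seed noisy labels from distrust phrases and train a standard text classifier on them; the seed list, comments, and model below are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative weak-supervision setup (not the paper's pipeline): seed phrases
# provide noisy labels for epistemic distrust, and a standard text classifier
# generalizes beyond the exact phrases. Seeds and comments are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

DISTRUST_SEEDS = ["i don't believe you", "you're lying", "fake news", "paid shill"]

def weak_label(text: str) -> int:
    """1 if any seed phrase for distrust appears in the text, else 0 (noisy)."""
    return int(any(seed in text.lower() for seed in DISTRUST_SEEDS))

comments = [
    "That's fake news and you know it.",
    "Thanks for sharing, this is a helpful source.",
    "You're lying, there is no way that happened.",
    "Interesting point, I had not considered that.",
]
labels = [weak_label(c) for c in comments]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(comments, labels)
print(clf.predict(["i refuse to believe anything this outlet reports"]))
```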
Evey Huang, Abhraneel Sarma, Sohyeon Hwang, Eshwar Chandrasekharan, Stevie Chancellor | DIS 2024 | Paper (Honorable Mention!)
- Given the scale at which online harassment occurs, researchers and practitioners alike have turned to computationally driven approaches to address it. However, because harassment is highly contextual and personal, designing effective solutions to this problem can be extremely challenging. This paper examines how harassment-mitigation systems studied in human-computer interaction (HCI) consider victim-centered principles in their design. Through a scoping literature review and close reading of 17 papers, we contribute—(1) a characterization of how novel and existing systems consider victims' identity characteristics, definitions of harassment, and preferred strategies for dealing with harassment; (2) challenges faced by the systems along these dimensions to surface limitations, gaps, and tensions; (3) practical recommendations for researchers, designers, and practitioners to overcome these challenges. In doing so, we offer potential new directions to positively design computational approaches to addressing online harassment with victim-centered principles in mind.
2023:
1) ConvEx: A Visual Conversation Exploration System for Discord Moderators
Frederick Choi, Tanvi Bajpai, Sowmya Pratipati, Eshwar Chandrasekharan | CSCW 2023 | Paper
- Moderators are at the core of maintaining healthy online communities. For these moderators, who are often volunteers from the community, filtering through content and responding to misbehavior in a timely manner has become increasingly challenging as online communities continue to grow. To address such challenges of scale, recent research has looked into designing better tools for moderators of various platforms (e.g., Reddit, Twitch, Facebook, and Twitter). In this paper, we focus on Discord, a platform where communities typically engage in large, synchronous group chats, creating an environment that is faster-paced and less structured than previously studied platforms. To tackle the unique challenges presented by Discord, we developed a new human-AI system called ConvEx for exploring online conversations. ConvEx is an AI-augmented version of the standard Discord interface designed to help moderators be proactive in identifying and preventing potential problems. It provides visual embeddings of conversational metrics, such as activity and toxicity levels, and can be extended to visualize other metrics. Through a user study with eight active moderators of Discord servers, we found that ConvEx supported several high-level strategies for monitoring a server and analyzing conversations. ConvEx allowed moderators to obtain a holistic view of activity across multiple channels on a server while guiding their attention toward problematic conversations and messages within a channel, helping them identify the contextual information needed to interpret the AI analysis reliably while also picking up on contextual nuances the AI missed. We conclude with design considerations for integrating AI into future interfaces for moderating synchronous, unstructured online conversations.
Vinay Koshy, Tanvi Bajpai, Eshwar Chandrasekharan, Hari Sundaram, Karrie Karahalios | CSCW 2023 | Paper (Best Paper Award!)
- Social media sites like Reddit, Discord, and Clubhouse utilize a community-reliant approach to content moderation. Under this model, volunteer moderators are tasked with setting and enforcing content rules within the platforms' sub-communities. However, few mechanisms exist to ensure that the rules set by moderators reflect the values of their community. Misalignments between users and moderators can be detrimental to community health, yet little quantitative work has been done to evaluate the prevalence or nature of user-moderator misalignment. Through a survey of 798 users on r/ChangeMyView, we evaluate user-moderator alignment at the level of policy-awareness (do users know what the rules are?), practice-awareness (do users know how the rules are applied?), and policy-/practice-support (do users agree with the rules and how they are applied?). We find that policy-support is high while practice-support is low: using a hierarchical Bayesian model, we estimate the correlation between community opinion and moderator decisions to range from .14 to .45 across subreddit rules. Surprisingly, these correlations were only slightly higher when users were asked to predict moderator actions, demonstrating low awareness of moderation practices. Our findings demonstrate the need for careful analysis of user-moderator alignment at multiple levels. We argue that future work should focus on building tools that empower communities to conduct these analyses themselves.
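For readers unfamiliar with partial pooling, a toy hierarchical Bayesian model of per-rule user-moderator agreement could be sketched as follows using PyMC; the data, priors, and rule structure are invented for illustration and do not reproduce the paper's model.

```python
# Toy partial-pooling (hierarchical Bayesian) model of per-rule agreement
# between survey respondents and moderator decisions, written with PyMC.
# Data, priors, and the number of rules are invented for illustration.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_rules, n_obs = 5, 400
rule_idx = rng.integers(0, n_rules, n_obs)   # which rule each judgment concerns
agree = rng.integers(0, 2, n_obs)            # 1 = user agrees with moderator decision

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.5)           # population-level (logit) agreement
    sigma = pm.HalfNormal("sigma", 1.0)      # between-rule variation
    theta = pm.Normal("theta", mu, sigma, shape=n_rules)  # per-rule agreement (logit)
    pm.Bernoulli("obs", logit_p=theta[rule_idx], observed=agree)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Posterior mean agreement (on the logit scale) for each rule, partially pooled.
print(trace.posterior["theta"].mean(dim=("chain", "draw")))
```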
2022:
1) Conversational Resilience: Quantifying and Predicting Conversational Outcomes Following Adverse Events
Charlotte Lambert, Ananya Rajagopal, Eshwar Chandrasekharan | ICWSM 2022 | Paper
- Online conversations, just like offline ones, are susceptible to influence by bad actors. These users have the capacity to derail neutral or even prosocial discussions through adverse behavior. Moderators and users alike would benefit from more resilient online conversations, i.e., those that can survive the influx of adverse behavior to which many conversations fall victim. In this paper, we examine the notion of conversational resilience: what makes a conversation more or less capable of withstanding an adverse interruption? Working with 11.5M comments from eight mainstream subreddits, we compiled more than 5.8M comment threads (i.e., conversations). Using 239K relevant conversations, we examine how well comment, user and subreddit characteristics can predict conversational outcomes. More than half of all conversations proceed after the first adverse event. Six out of ten conversations that proceed result in future removals. Comments violating platform-wide norms and those written by authors with a history of norm violations lead to not only more norm violations, but also fewer prosocial outcomes. However, conversations in more populated subreddits and conversations where the first adverse event's author was initially a strong contributor are capable of minimizing future removals and promoting prosocial outcomes after an adverse event. By understanding factors that contribute to conversational resilience we shed light onto what types of behavior can be encouraged to promote prosocial outcomes even in the face of adversity.
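A bare-bones sketch of predicting a post-adverse-event outcome from comment, user, and subreddit characteristics might look like this; the feature names, labels, and model are placeholders rather than the paper's feature set.

```python
# Placeholder sketch of predicting whether a thread accrues further removals
# after an adverse event, from comment-, user-, and subreddit-level features.
# Feature names, labels, and the model are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
threads = pd.DataFrame({
    "adverse_comment_toxicity": rng.uniform(0, 1, n),  # comment-level feature
    "author_prior_removals": rng.poisson(1.0, n),       # user-level feature
    "author_prior_score": rng.normal(20, 10, n),         # user-level feature
    "subreddit_size_log": rng.normal(12, 2, n),          # subreddit-level feature
    "future_removal": rng.integers(0, 2, n),             # outcome label
})

X = threads.drop(columns="future_removal")
y = threads["future_removal"]
clf = GradientBoostingClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```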
Tanvi Bajpai, Drshika Asher, Anwesa Goswami, Eshwar Chandrasekharan | CSCW 2022 | Paper
- Online social platforms are evolving at a rapid pace. With the addition of new features like real-time audio, the landscape of online communities and the moderation work done on them is being outpaced by platform development. In this paper, we present a novel framework that represents the dynamic moderation ecosystems of social platforms using a base set of 12 platform-level affordances, along with inter-affordance relationships. These affordances fall into three categories: Members, Infrastructure, and Content. We call this the MIC framework and apply it to analyze several social platforms in two case studies. First, we analyze individual platforms using MIC and demonstrate how it can be used to examine the effects of platform changes on the moderation ecosystem and to identify potential new challenges in moderation. Next, we systematically compare three platforms using MIC and propose moderation mechanisms that platforms can adapt from one another. Moderation researchers and platform designers can use such comparisons to uncover where platforms can emulate established, successful, and better-studied platforms, as well as learn from the pitfalls other platforms have encountered.
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, Eric Gilbert | TOCHI 2021 | Paper (Editor's pick for Notable Paper!)
- Should social media platforms override a community's self-policing when it repeatedly breaks rules? What actions can they consider? In light of this debate, platforms have begun experimenting with softer alternatives to outright bans. We examine one such intervention, quarantining, which impedes direct access to and promotion of controversial communities. Specifically, we present two case studies of what happened when Reddit quarantined the influential communities r/TheRedPill (TRP) and r/The_Donald (TD). Using over 85M Reddit posts, we apply causal inference methods to examine the quarantine's effects on TRP and TD. We find that the quarantine made it more difficult to recruit new members: new user influx to TRP and TD decreased by 79.5% and 58%, respectively. Despite the quarantine, existing users' misogyny and racism levels remained unaffected. We conclude by reflecting on the effectiveness of this design friction in limiting the influence of toxic communities and discuss broader implications for content moderation.
2021:
1) Conversations Gone Alright: Quantifying and Predicting Prosocial Outcomes in Online Conversations
Jiajun Bao, Junjie Wu, Yiming Zhang, Eshwar Chandrasekharan, David Jurgens | WWW 2021 | Paper
- Online conversations can go in many directions: some turn out poorly due to antisocial behavior, while others turn out positively to the benefit of all. Research on improving online spaces has focused primarily on detecting and reducing antisocial behavior. Yet we know little about positive outcomes in online conversations and how to increase them—is a prosocial outcome simply the lack of antisocial behavior or something more? Here, we examine how conversational features lead to prosocial outcomes within online discussions. We introduce a series of new theory-inspired metrics to define prosocial outcomes such as mentoring and esteem enhancement. Using a corpus of 26M Reddit conversations, we show that these outcomes can be forecasted from the initial comment of an online conversation, with the best model providing a relative 24% improvement over human forecasting performance at ranking conversations by predicted outcome. Our results indicate that platforms can use these early cues in their algorithmic ranking of early conversations to prioritize better outcomes.
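To illustrate the forecasting setup in miniature, the following hedged sketch trains a text classifier on the initial comment of a conversation to predict a prosocial-outcome label; the comments, labels, and model are synthetic stand-ins, not the paper's metrics or models.

```python
# Synthetic sketch of forecasting a conversation's outcome from its first
# comment alone: a text classifier over the initial comment with a
# prosocial-outcome label. Comments, labels, and the model are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

first_comments = [
    "Can anyone explain how this works? I'm new and a bit lost.",
    "This is the dumbest take I've seen all week.",
    "Here's a detailed walkthrough for beginners, happy to answer questions.",
    "Nobody cares, stop posting this garbage.",
]
prosocial_outcome = [1, 0, 1, 0]  # e.g., thread later showed mentoring / esteem enhancement

X_train, X_test, y_train, y_test = train_test_split(
    first_comments, prosocial_outcome, test_size=0.5,
    random_state=0, stratify=prosocial_outcome)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # forecasted probability of a prosocial outcome
```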
2020:
1) Still out there: Modeling and Identifying Russian Troll Accounts on Twitter
Jane Im, Eshwar Chandrasekharan, Jackson Sargent, Paige Lighthammer, Taylor Denby, Ankit Bhargava, Libby Hemphill, David Jurgens, Eric Gilbert | WebSci 2020 | Paper (Best Paper Runner Up!)
- There is evidence that Russia's Internet Research Agency attempted to interfere with the 2016 U.S. election by running fake accounts on Twitter—often referred to as "Russian trolls". In this work, we: 1) develop machine learning models that predict whether a Twitter account is a Russian troll within a set of 170K control accounts; and, 2) demonstrate that it is possible to use this model to find active accounts on Twitter still likely acting on behalf of the Russian state. Using both behavioral and linguistic features, we show that it is possible to distinguish between a troll and a non-troll with a precision of 78.5% and an AUC of 98.9%, under cross-validation. Applying the model to out-of-sample accounts still active today, we find that up to 2.6% of top journalists' mentions are occupied by Russian trolls. These findings imply that the Russian trolls are very likely still active today.
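As a toy illustration of combining behavioral and linguistic features for troll-vs-control classification under cross-validation, consider the following sketch; the accounts, features, and labels are fabricated placeholders, not the study's data.

```python
# Toy sketch of combining behavioral and linguistic features to classify troll
# vs. control accounts under cross-validation. Accounts, features, and labels
# are fabricated placeholders, not the study's data.
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

accounts = [
    {"tweets": "breaking: shocking truth the media hides", "tweets_per_day": 40, "followers": 120, "is_troll": 1},
    {"tweets": "lovely hike this weekend with the family", "tweets_per_day": 2, "followers": 300, "is_troll": 0},
    {"tweets": "they are lying to you wake up", "tweets_per_day": 55, "followers": 80, "is_troll": 1},
    {"tweets": "new blog post about sourdough baking", "tweets_per_day": 1, "followers": 500, "is_troll": 0},
]

text = [a["tweets"] for a in accounts]  # linguistic features
behavior = csr_matrix([[a["tweets_per_day"], a["followers"]] for a in accounts], dtype=float)
y = [a["is_troll"] for a in accounts]

X = hstack([TfidfVectorizer().fit_transform(text), behavior]).tocsr()
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=2, scoring="roc_auc").mean())
```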
Jane Im, Sonali Tandon, Eshwar Chandrasekharan, Taylor Denby, Eric Gilbert | CHI 2020 | Paper
- In this paper, we propose a new idea called synthesized social signals (S3s): social signals computationally derived from an account's history, and then rendered into the profile. To demonstrate and explore the concept, we built Sig, an extensible Chrome extension that computes and visualizes S3s. Results from field deployments show that Sig reduced receiver costs, added important signals beyond conventionally available ones, and that a few users felt safer using Twitter as a result.
Doctoral Thesis:
Combatting Abusive Behavior in Online Communities Using Cross-Community Learning | Georgia Tech | Thesis
- Defended PhD thesis on March 3, 2020. Graduated with a PhD in CS from Georgia Tech on May 1, 2020.
2019:
1) Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators
Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, Eric Gilbert | CSCW 2019 | Paper
- In this paper, we introduce a novel sociotechnical moderation system for Reddit called Crossmod. Through formative interviews with 11 active moderators from 10 different subreddits, we learned about the limitations of currently available automated tools and built a new system that extends their capabilities. To the best of our knowledge, Crossmod is the first open-source, AI-backed sociotechnical moderation system to be designed using participatory methods.
David Jurgens, Eshwar Chandrasekharan, Libby Hemphill | ACL 2019 | Paper
- Online abusive behavior affects millions, and the NLP community has attempted to mitigate this problem by developing technologies to detect abuse. However, current methods have largely focused on a narrow definition of abuse, to the detriment of victims who seek both validation and solutions. In this position paper, we argue that the community needs to make three substantive changes: (1) expanding our scope of problems to tackle both more subtle and more serious forms of abuse, (2) developing proactive technologies that counter or inhibit abuse before it harms, and (3) reframing our efforts within a framework of justice to promote healthy communities.
Koustuv Saha, Eshwar Chandrasekharan, Munmun De Choudhury | WebSci 2019 | Paper
- We employ a causal-inference framework to study the psychological effects of hateful speech in college subreddits, particularly in the form of individuals' online stress expression. Our findings suggest that exposure to hate leads to greater stress expression. However, not everyone exposed is equally affected; some show lower psychological endurance to hate than others. Low-endurance individuals are more vulnerable to emotional outbursts and are more neurotic than those with higher endurance.
Eshwar Chandrasekharan, Eric Gilbert | (under submission) | Paper on arXiv | Dataset
- In this dataset paper, we present a three-stage process to collect Reddit comments that were removed by moderators of several subreddits for violating subreddit rules and guidelines. Working with over 2.8M removed comments collected from 100 different communities on Reddit, we identify 8 macro norms (i.e., norms that are widely enforced on most parts of Reddit). We extract these macro norms by employing a hybrid approach (classification, topic modeling, and open coding) on comments identified as norm violations within at least 85 of the 100 study subreddits. Finally, we label over 40K Reddit comments removed by moderators according to the specific type of macro norm being violated, and make this dataset publicly available.
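One stage of such a hybrid approach, topic modeling over removed comments to surface recurring themes (candidate norms), can be sketched as follows; the example comments, vectorizer settings, and topic count are illustrative only.

```python
# Sketch of one stage of a hybrid approach: topic modeling over removed
# comments to surface recurring themes (candidate norms). The comments,
# vectorizer settings, and topic count are illustrative only.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

removed_comments = [
    "stop spamming your referral link in every thread",
    "this is blatant self promotion, read the rules",
    "personal attacks like this are not tolerated here",
    "calling other users idiots is a personal attack",
    "misleading clickbait title, please repost with the original headline",
    "editorialized titles get removed, use the article's headline",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(removed_comments)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"candidate norm {k}: {', '.join(top_terms)}")
```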
2018:
1) The Internet’s Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales
Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, Eric Gilbert | CSCW 2018 | Paper
- In this paper, we study community norms on Reddit in a large-scale, empirical manner. Via 2.8M comments removed by moderators of 100 top subreddits over 10 months, we use both computational and qualitative methods to identify three types of norms: macro norms that are universal to most parts of Reddit; meso norms that are shared across certain groups of subreddits; and micro norms that are specific to individual, relatively unique subreddits. Given the size of Reddit's user base, we argue this represents the first large-scale study of norms across disparate online communities. In other words, these findings shed light on what Reddit values, and how widely held those values are.
2017:
1) You Can't Stay Here: The Efficacy of Reddit's 2015 Ban Examined Through Hate Speech
Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, Eric Gilbert | CSCW 2017| Paper
- In 2015, Reddit closed several subreddits—foremost among them r/fatpeoplehate and r/CoonTown—due to violations of Reddit's anti-harassment policy. However, the effectiveness of banning as a moderation approach remains unclear: banning might diminish hateful behavior, or it may relocate such behavior to different parts of the site. We study the ban of r/fatpeoplehate and r/CoonTown in terms of its effects on both participating users and affected subreddits. Working from over 100M Reddit posts and comments, we generate hate speech lexicons to examine variations in hate speech usage via causal inference methods. We find that the ban worked for Reddit. More accounts than expected discontinued using the site; those that stayed drastically decreased their hate speech usage—by at least 80%. Though many subreddits saw an influx of r/fatpeoplehate and r/CoonTown "migrants", those subreddits saw no significant change in hate speech usage. In other words, other subreddits did not inherit the problem.
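A minimal sketch of lexicon-based measurement, counting matches against a hate speech lexicon before and after a ban date, might look like the following; the lexicon terms, dates, and posts are placeholders (the study induced its lexicons from data).

```python
# Minimal lexicon-based measurement: the rate of hate-lexicon matches in a
# user's posts before vs. after the ban date. The lexicon terms, date, and
# posts are placeholders; the study induced its lexicons from data.
from datetime import datetime

LEXICON = {"slur_a", "slur_b"}      # placeholder tokens only
BAN_DATE = datetime(2015, 6, 10)    # illustrative ban date

posts = [
    {"author": "u1", "created": datetime(2015, 5, 1), "text": "slur_a something something"},
    {"author": "u1", "created": datetime(2015, 7, 1), "text": "a perfectly ordinary comment"},
]

def lexicon_rate(texts):
    """Fraction of tokens that match the hate lexicon."""
    words = [w for t in texts for w in t.lower().split()]
    return sum(w in LEXICON for w in words) / max(len(words), 1)

before = lexicon_rate(p["text"] for p in posts if p["created"] < BAN_DATE)
after = lexicon_rate(p["text"] for p in posts if p["created"] >= BAN_DATE)
print(f"hate lexicon usage rate: before={before:.4f}, after={after:.4f}")
```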
Eshwar Chandrasekharan, Mattia Samory, Anirudh Srinivasan, Eric Gilbert | CHI 2017 | Paper
- We introduce a novel computational approach to detecting abusive behavior called the Bag of Communities (BoC)—a technique that leverages large-scale, preexisting data from other Internet communities. Using this conceptual and empirical work, we argue that the BoC approach may allow communities to deal with a range of common problems, like abusive behavior, faster and with fewer engineering resources.
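The cross-community idea can be sketched as training an abuse classifier on labeled comments from other, pre-existing communities and applying it to a new target community; everything below (data, labels, model) is a toy placeholder, not the BoC implementation.

```python
# Toy cross-community setup: train an abuse classifier on labeled comments from
# other, pre-existing communities, then apply it to a new target community that
# has no labeled data yet. All data, labels, and the model are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Labeled comments drawn from pre-existing source communities (1 = abusive).
source_texts = [
    "you are worthless, get out of this forum",
    "great write-up, thanks for the sources",
    "nobody wants you here, idiot",
    "could you share the code for this analysis?",
]
source_labels = [1, 0, 1, 0]

boc_model = make_pipeline(TfidfVectorizer(),
                          SGDClassifier(loss="log_loss", random_state=0))
boc_model.fit(source_texts, source_labels)

# Apply to a brand-new target community before any local moderation data exists.
target_comments = ["welcome to the community!", "shut up, you clueless moron"]
print(boc_model.predict(target_comments))
```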
Ari Schlesinger, Eshwar Chandrasekharan, Christina Masden, Amy Bruckman, W Keith Edwards, Rebecca Grinter | CHI 2017 | Paper
- We conducted an interview-based study to examine the factors that were integral to the success and popularity of Yik Yak during its initial deployment, by interviewing 18 Yik Yak users on an urban university campus.
2015:
1) Footprints on Silicon: Explorations in Gathering Autobiographical Content
Eshwar Chandrasekharan, Sutanu Chakraborti | CICLing (IJCLA) 2015 | Paper
- We built a system that identifies emails containing autobiographical content to aid autobiographical summarization of a user's mail inbox over the years. This data can be used to generate a story about the user's life, or an autobiography of sorts.