We’ve all been there: scrolling through a comment section, minding our own business, when suddenly we encounter that person—the one who seems to exist solely to provoke, upset, or derail conversations. Trolling has become so pervasive that research suggests approximately 5-10% of internet users engage in some form of trolling behavior regularly. But here’s the uncomfortable truth I’ve observed in my years working with digital behavior: trolling isn’t always about malice. Sometimes it’s about loneliness, power, or even boredom. As our lives become increasingly digitized—particularly post-2020 when online interactions skyrocketed—understanding the psychology behind trolling has never been more urgent.
Why does this matter now? Because trolling isn’t just annoying anymore; it’s shaping political discourse and mental health outcomes, and even threatening democratic processes. In this article, you’ll discover the psychological mechanisms driving trolling behavior, the hidden motivations that fuel it, and—crucially—how to identify and respond to it constructively. As someone who leans toward a humanistic, progressive understanding of human behavior, I believe we need to look beyond simple condemnation and understand the systems and psychological needs that create trolls in the first place.
What exactly is trolling? A psychological definition
Before we dive into motivations, let’s establish what we’re actually talking about. Trolling refers to deliberately provocative online behavior intended to upset others, disrupt conversations, or elicit emotional reactions. It’s not simply disagreeing with someone—it’s the intentional pursuit of chaos or distress.
The spectrum of trolling behavior
Trolling exists on a continuum, from relatively harmless pranks to genuinely harmful harassment. Some trolls engage in what they consider “playful” disruption—posting absurd statements to see who takes the bait. Others engage in coordinated harassment campaigns that can devastate victims’ mental health and even physical safety. This distinction matters because it reflects different underlying motivations and psychological needs.
Trolling versus cyberbullying: Key differences
While related, these behaviors aren’t identical. Cyberbullying typically targets specific individuals repeatedly with intent to harm. Trolling, by contrast, often seeks a broader audience reaction and may not focus on a single victim. That said, the line blurs frequently, and both can cause significant psychological damage. In my practice, I’ve seen how victims often struggle to distinguish between “just trolling” and genuine threats—and honestly, should they have to?
The dark tetrad: Personality traits linked to trolling
Research has consistently identified certain personality characteristics associated with trolling behavior. Understanding these traits helps us see trolling not as random chaos, but as patterns emerging from specific psychological profiles.
Sadism: Finding pleasure in others’ distress
Perhaps the most uncomfortable finding in trolling research involves everyday sadism—the enjoyment of causing others pain. Studies examining the relationship between personality and trolling have found that individuals scoring higher on measures of sadism were significantly more likely to engage in trolling behavior. Think of it like this: while most of us feel empathic discomfort when we see someone upset, individuals high in everyday sadism appear to find others’ distress rewarding rather than aversive.
A 2021 study examining online behavior patterns found that sadistic pleasure was among the strongest predictors of trolling frequency and intensity. This doesn’t mean trolls are psychopaths (a different and much rarer profile), but rather that they show reduced empathic responses in online contexts.
Machiavellianism: Strategic manipulation
Some trolls operate from a calculated, strategic mindset. They view online interactions as games where manipulation is simply a valid tactic. These individuals, scoring high on Machiavellianism, don’t necessarily enjoy causing pain—they’re more interested in demonstrating their cleverness, controlling conversations, or achieving specific outcomes (like derailing discussions about topics they oppose).
Narcissism and psychopathy: Seeking attention and lacking empathy
Narcissistic traits drive trolling through an insatiable need for attention and validation. For these individuals, negative attention remains preferable to no attention at all. Meanwhile, psychopathic traits contribute through shallow affect and reduced empathy—these trolls simply don’t process others’ emotions the same way most people do.
Case example: In 2022, researchers analyzed trolling patterns on political forums and found that individuals with higher narcissistic traits were more likely to engage in “attention-seeking” trolling—making outrageous statements designed to generate maximum response—while those with higher Machiavellian traits engaged in more subtle, strategic disruption aimed at specific political outcomes.
Hidden motivations: Why people really troll
Beyond personality traits, trolling serves specific psychological functions. Understanding these motivations is crucial for developing effective interventions—and for recognizing that trolling often signals unmet needs.
Boredom and entertainment-seeking
Let’s be honest: sometimes people troll simply because they’re bored. Research into online disinhibition has shown that the internet provides unprecedented opportunities for low-stakes entertainment. For some individuals—particularly those feeling understimulated or disconnected—provoking reactions becomes a form of recreation.
This might seem trivial, but consider the broader context: we live in an era of profound social atomization. Traditional community structures have weakened, particularly in anglophone countries where individualism reigns supreme. When someone lacks meaningful social connection or purpose, trolling can fill that void, however dysfunctionally.
Power and control in powerless lives
Here’s where my progressive perspective becomes particularly relevant: trolling often reflects broader systems of disempowerment. When individuals feel powerless in their offline lives—economically precarious, socially marginalized, or politically voiceless—online spaces offer rare opportunities to affect others and “matter,” even negatively.
Research examining trolling motivations has found that perceived lack of control in offline life correlates with increased trolling behavior. This doesn’t excuse the behavior, but it contextualizes it. When young men (who, in most studies, make up the majority of trolls) face declining economic prospects, social isolation, and cultural narratives questioning their relevance, some channel that frustration into online aggression.
Ideology and political trolling
Since 2016, we’ve witnessed the weaponization of trolling for political purposes. What began as the work of isolated provocateurs has evolved into coordinated campaigns designed to suppress certain voices, spread disinformation, or shift discourse boundaries.
Political trolling serves multiple functions: it can silence opposition through exhaustion and harassment, normalize previously unacceptable viewpoints through constant repetition, and create false equivalencies that muddy productive debate. Research on coordinated inauthentic behavior has documented how organized trolling campaigns have targeted journalists, activists, and political candidates—disproportionately affecting women and people of color.
Case example: Analysis of harassment patterns on platforms like X (formerly Twitter) has shown that women in public roles receive disproportionate trolling, often sexually aggressive or appearance-focused. This isn’t random—it’s strategic silencing designed to push women out of public discourse.
Anonymity and the online disinhibition effect
The internet’s structure itself enables trolling through what researchers call the online disinhibition effect. When we’re anonymous or perceive limited accountability, our normal social inhibitions weaken. We say things we’d never say face-to-face.
This phenomenon isn’t inherently negative—it also allows marginalized individuals to explore identity and seek support safely. But for those with pre-existing antisocial tendencies or unmet psychological needs, reduced inhibition removes the guardrails that typically constrain harmful behavior.
The controversy: Are we creating trolls?
Here’s a debate worth having: To what extent does platform design actually encourage trolling? This remains contentious, but I’ll state my position clearly: social media companies have built engagement-maximizing algorithms that reward provocative content, including trolling.
Think about it: platforms profit from user engagement. Controversial, emotionally charged content generates more engagement than nuanced discussion. Trolling creates argument threads that keep users scrolling, clicking, and returning. Research examining algorithmic amplification has shown that content provoking strong negative emotions often receives greater algorithmic promotion than neutral or positive content.
Some researchers argue this overstates platform responsibility—trolls existed before social media, after all. But I’d counter that while trolling predates modern platforms, these platforms have industrialized it, providing tools, audiences, and incentive structures that enable trolling at unprecedented scale. We’ve essentially gamified antisocial behavior and then wondered why it proliferates.
This isn’t just academic—it’s a matter of social justice. When platforms enable coordinated harassment campaigns against activists working for racial justice, environmental protection, or LGBTQ+ rights, they’re not neutral spaces. They’re infrastructures of oppression that require systemic change, not just individual-level interventions.
How to identify trolling: Practical warning signs
Recognizing trolling helps you respond appropriately rather than feeding the behavior. Here are concrete indicators I’ve identified in both research and practice:
Behavioral red flags
| Trolling Indicator | What It Looks Like | Why It Matters |
|---|---|---|
| Topic derailment | Consistently steering conversations off-topic, especially toward inflammatory subjects | Indicates intentional disruption rather than genuine engagement |
| Exaggerated certainty | Absolute statements designed to provoke, lacking nuance | Genuine discussion involves some uncertainty; trolls seek reactions, not truth |
| Bad faith questions | “Just asking questions” that imply offensive premises | Creates plausible deniability while spreading harmful ideas |
| Pattern recognition | Similar provocative behavior across multiple threads or platforms | Suggests systematic trolling rather than one-off frustration |
| Emotionally charged language | Deliberately offensive or inflammatory word choices | Designed to trigger emotional rather than thoughtful responses |
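To make the indicators above concrete, here is a minimal sketch of how a rule-based flagger built on such signals might look. Everything in it is a hypothetical illustration I’ve invented for this article—the word list, thresholds, and field names are assumptions, not a real moderation system, which would need far richer context and human review:

```python
# Hypothetical sketch: scoring a comment against heuristic trolling indicators.
# Word lists, thresholds, and fields are illustrative only, not a real detector.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    account_age_days: int   # brand-new accounts are a context clue
    prior_flags: int        # similar behavior flagged in other threads

INFLAMMATORY = {"idiot", "sheep", "wake up", "pathetic"}  # toy word list

def indicator_score(c: Comment) -> int:
    """Count how many heuristic red flags a comment trips (0-5)."""
    score = 0
    lowered = c.text.lower()
    if any(w in lowered for w in INFLAMMATORY):
        score += 1  # emotionally charged language
    if c.text.isupper() or c.text.count("!") >= 3:
        score += 1  # exaggerated certainty / shouting
    if "just asking" in lowered:
        score += 1  # possible bad-faith questioning
    if c.account_age_days < 2:
        score += 1  # likely throwaway "burner" account
    if c.prior_flags >= 3:
        score += 1  # pattern across multiple threads
    return score

comment = Comment("WAKE UP, you're all sheep!!!", account_age_days=1, prior_flags=4)
print(indicator_score(comment))  # → 4 (route to human review, don't auto-ban)
```

A sketch like this should only ever triage comments for human review: each signal alone is weak, and, as noted below, context matters enormously.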
Context clues that suggest trolling
Pay attention to when provocative comments appear. Trolling often intensifies during high-emotion events (elections, tragedies, social movements) because trolls seek maximum impact. Similarly, brand-new accounts making extreme statements often indicate throwaway “burner” profiles created specifically for trolling without consequence.
Your emotional response as data
Here’s something I tell clients: your gut reaction provides information. If a comment makes you feel unusually angry, defensive, or upset despite seeming “reasonable” on a surface reading, you might be encountering sophisticated trolling. Skilled trolls have learned to package provocations in superficially civil language—a phenomenon sometimes called “sea-lioning,” in which persistent, bad-faith questioning masquerades as genuine curiosity.
Practical strategies: Responding to trolling constructively
So what do we actually do about trolling? Here are evidence-informed approaches that balance individual well-being with systemic change.
For individuals encountering trolls
“Don’t feed the trolls” remains valid advice, but with nuance. Research on online interaction patterns confirms that trolls seek responses—your anger, defensiveness, or lengthy rebuttals provide exactly the reward they’re seeking. However, complete silence isn’t always appropriate either, especially when trolling spreads misinformation or targets vulnerable groups.
Consider these calibrated responses:
- The single factual correction: For misinformation, post one clear, sourced correction—then disengage. This serves other readers without rewarding the troll with extended interaction.
- Supportive redirection: If trolling targets someone else, publicly support the target and redirect conversation productively, rather than engaging the troll directly.
- Strategic blocking: Don’t hesitate to block or mute. This isn’t weakness—it’s boundary-setting and self-care.
- Documentation: Screenshot serious harassment before blocking. This creates records useful for platform reporting or, in extreme cases, legal action.
For community moderators and platform designers
From a systemic perspective, addressing trolling requires structural changes. Effective moderation combines clear community guidelines, consistent enforcement, and restorative rather than purely punitive approaches.
Research on community moderation has identified several effective strategies: requiring minimal account history before participating in sensitive discussions; implementing “slow mode” features that prevent rapid-fire inflammatory posting; and creating trusted reporter systems where established community members can flag concerning behavior for priority review.
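The moderation mechanisms just described (minimum account history, slow mode, trusted reporters) can be sketched in code. This is a toy illustration under my own assumed names and thresholds, not any platform’s actual API:

```python
# Hypothetical sketch of the moderation gates described above. All names
# and thresholds are illustrative assumptions, not a real platform's API.

MIN_ACCOUNT_AGE_DAYS = 7    # required history before sensitive discussions
SLOW_MODE_SECONDS = 120     # minimum gap between posts in heated threads
TRUSTED_REPORTERS = {"mod_ana", "longtime_user_42"}  # established members

def may_post(account_age_days: int, last_post_ts: float, now: float) -> bool:
    """Gate a post on account history and slow-mode spacing."""
    if account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return False  # too little history for a sensitive discussion
    if now - last_post_ts < SLOW_MODE_SECONDS:
        return False  # slow mode: prevents rapid-fire inflammatory posting
    return True

def report_priority(reporter: str) -> str:
    """Reports from trusted community members jump the review queue."""
    return "priority" if reporter in TRUSTED_REPORTERS else "standard"

print(may_post(account_age_days=3, last_post_ts=0.0, now=1000.0))  # → False
print(report_priority("mod_ana"))                                  # → priority
```

The design point is friction, not censorship: raising the cost of rapid, anonymous provocation while leaving established, good-faith participation untouched.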
Importantly, effective anti-trolling measures must consider context. Automated systems often fail to distinguish between hate speech and marginalized groups discussing their experiences with that hate speech. This is why I advocate for moderation approaches that center affected communities’ expertise rather than relying solely on algorithmic enforcement.
For mental health professionals
In clinical practice, we’re increasingly seeing clients affected by trolling—both victims and, occasionally, perpetrators seeking change. For victims, treatment often addresses trauma symptoms similar to other forms of harassment: anxiety, hypervigilance, and sometimes depression or PTSD.
For individuals who troll, intervention requires understanding underlying needs. Is trolling compensating for social isolation? Expressing anger about genuine grievances through maladaptive means? Providing thrill-seeking that healthy activities could satisfy? Treatment plans might include social skills development, anger management, empathy cultivation, and addressing root causes like depression, trauma, or systemic disempowerment.
What does trolling reveal about society?
Here’s my more controversial take: trolling is a symptom, not just a problem. It reflects deeper societal issues—atomization, inequality, democratic decline, and the commercialization of human interaction through extractive platform capitalism.
Yes, individual trolls make choices for which they bear responsibility. But we’ve created systems that incentivize antisocial behavior, then act surprised when people behave antisocially. When economic systems leave young people without prospects, when political systems feel unresponsive to ordinary citizens, when technology companies profit from conflict—trolling becomes almost predictable.
From a progressive, humanistic perspective, addressing trolling ultimately requires addressing these root causes. We need stronger social safety nets that reduce desperation and resentment. We need platform regulation that prioritizes human well-being over engagement metrics. We need educational systems that teach digital literacy and emotional intelligence alongside traditional academics.
Moving forward: A call for systemic change
So where does this leave us? After examining the personality traits, hidden motivations, and structural factors driving trolling behavior, several key insights emerge:
First, trolling isn’t random chaos—it’s patterned behavior serving specific psychological and social functions. Understanding these patterns helps us respond more effectively.
Second, while individual personality traits matter, we cannot ignore how platform design and broader social conditions enable and incentivize trolling. Focusing exclusively on individual pathology while ignoring systems that cultivate antisocial behavior is both ineffective and unjust.
Third, effective responses require multiple levels of intervention: individual boundary-setting and self-care, community-level moderation and norm-setting, and systemic platform redesign and social reform.
Looking ahead, I’m cautiously optimistic. We’re seeing growing recognition that the “move fast and break things” ethos of early internet culture has broken real people. There’s increasing demand for platform accountability, digital rights legislation, and design ethics. Research continues expanding our understanding of online behavior, providing evidence for effective interventions.
But optimism requires action. For psychologists and mental health professionals, this means developing specialized competencies in digital behavior and advocating for policy changes. For technology workers, it means pushing back against engagement-maximizing designs that harm users. For all of us, it means cultivating online communities based on mutual respect rather than conflict-driven engagement.
Here’s my challenge to you: The next time you encounter trolling—whether directed at you or others—pause before responding. Ask yourself: What need is this behavior serving? What systemic factors enable it? How can I respond in ways that protect vulnerable people without rewarding the troll? And crucially: What would a healthier digital ecosystem look like, and how can I help build it?
We’ve created digital spaces that too often bring out the worst in human nature. But human nature also includes creativity, compassion, and cooperation. We built these systems; we can rebuild them better. That reconstruction starts with understanding why people engage in trolling—and refusing to accept it as inevitable.
The internet could be a space for genuine connection, democratic discourse, and collective flourishing. Getting there requires seeing trolling not as a few bad actors we can simply ban, but as a symptom of deeper problems requiring systemic solutions. That’s hard work. But isn’t it worth it?
References
Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97-102.
Coles, B. A., & West, M. (2016). Trolling the trolls: Online forum users’ constructions of the nature and properties of trolling. Computers in Human Behavior, 60, 233-244.
Fichman, P., & Sanfilippo, M. R. (2016). Online Trolling and Its Perpetrators: Under the Cyberbridge. Rowman & Littlefield Publishers.
Hardaker, C. (2010). Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions. Journal of Politeness Research, 6(2), 215-242.
March, E., & Steele, G. (2020). High esteem and hurting others online: Trait sadism moderates the relationship between self-esteem and internet trolling. Cyberpsychology, Behavior, and Social Networking, 23(7), 441-446.
Sest, N., & March, E. (2017). Constructing the cyber-troll: Psychopathy, sadism, and empathy. Personality and Individual Differences, 119, 69-72.
Shachaf, P., & Hara, N. (2010). Beyond vandalism: Wikipedia trolls. Journal of Information Science, 36(3), 357-370.
Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321-326.
Voggeser, B. J., Singh, R. K., & Göritz, A. S. (2018). Self-control in online discussions: Disinhibition effects in political discussions. Computers in Human Behavior, 84, 162-169.