Have you felt it? That flash of defensiveness when scrolling through your feed. Someone shares their feeding choice, their career pivot, their decision to be childfree, their joy about becoming a parent—and somehow it lands like a personal judgment of your different choice. The post wasn't about you, wasn't directed at you, maybe wasn't prescriptive at all. But it felt like an attack anyway.
This isn't a character flaw. It's a fascinating collision of human psychology, social media incentive structures, and the parasocial relationships we've formed with people we'll never meet. Understanding what's happening can help you recognize when you're taking things personally—and more importantly, give you tools to stop.
The Pattern: When Observation Feels Like Prescription
Here's the core dynamic: Someone shares "I do X" or "I'm happy about X," and we hear it as "You should do X" or "You're wrong if you don't do X." A mom posts about her feeding choice without mentioning anyone else's, and other moms feel attacked. Someone shares career satisfaction, and others interpret it as judgment about their different path. An anti-natalist posts about their childfree life, and people preparing for parenthood feel mocked.
The poster is often just living their life out loud. But the audience experiences it as prescriptive, as if every choice shared is implicitly a judgment of different choices.
Research shows that people automatically conflate descriptive norms (what people commonly do) with injunctive norms (what you're morally obligated to do). Even when someone just describes their own behavior, we tend to hear it as a prescription for how we should behave. Our brains blur the line between "this is what I do" and "this is what you should do" without any conscious intention on either side.
The Cognitive Slip: When you observe what others do, your brain unconsciously infers not just that this is common behavior, but that it's the "right" behavior. This automatic association happens even when no prescription was intended.
Why This Happens: Identity Threat and Norm Signaling
Several psychological mechanisms create this pattern:
Identity Threat
When we perceive that our identity or choices are being implicitly challenged, we experience what researchers call social identity threat—a psychological risk that our group membership or personal identity is being devalued. Your feeding choice, career path, family structure, or lifestyle becomes part of your identity. When someone else's different choice gets positive attention, it can trigger defensive reactions even when there's no actual attack.
Research shows that when social identity is threatened, people become more likely to select identity-reinforcing options and double down on their choices, especially when they feel publicly observed. That's why we sometimes see people getting more vocally defensive about their choices after encountering someone doing things differently.
The Descriptive-to-Prescriptive Slide
Studies with children demonstrate that humans are susceptible to "is-ought reasoning"—the tendency to move from descriptive statements about what is commonly done to prescriptive judgments about what should be done. When children learn that a group does something, they often negatively evaluate group members who do something different, even when no explicit rule was stated.
Even adults automatically associate regularities with norms. When we observe what others do, we unconsciously infer not just that this is common behavior, but that it's the "right" behavior. This pervasive cognitive pattern means that simply sharing "I do X" can trigger defensive reactions in people who do Y, because they've unconsciously translated your description into a prescription.
Parasocial Relationships Gone Wrong
Parasocial relationships are one-sided emotional connections we form with media figures, which now includes social media creators and influencers. The term was coined in the 1950s to describe how television viewers developed bonds with TV personalities. Because we consume so much content from these people, we develop a false sense of intimacy and connection.
The problem arises when fans develop such strong emotional investment that they interpret the creator's choices and statements as personal to them. When someone you feel parasocially connected to makes different life choices, it can feel like a friend judging you, even though they don't know you exist.
Research shows that social media has intensified parasocial relationships by providing unprecedented access to creators' lives and creating the perception of authentic two-way connection. As National Geographic reports, creators on platforms like YouTube, TikTok, and Instagram actively cultivate parasocial relationships with audiences as part of their business model, blurring the boundaries between the relationship people think they have and the actual one-sided nature of the connection.
The Intimacy Illusion: When you feel like you "know" a creator because you've watched hundreds of their videos or read their posts daily, your brain treats their life choices like a friend's choices—even though the relationship is entirely one-sided.
The Algorithm Makes It Worse
Now here is where it gets truly insidious: the platforms profit from this dynamic.
Recent research on Twitter's engagement-based algorithm found that it amplifies emotionally charged, angry, and out-group hostile content far more than neutral or positive content. Of political tweets chosen by Twitter's algorithm, 62% expressed anger and 46% contained out-group animosity, compared to 52% and 38% respectively in chronological feeds.
The algorithm is designed to maximize engagement, as measured by clicks, shares, and time on platform, which means it learns to serve content that triggers strong emotional reactions. Users' revealed preferences (what they engage with) often conflict with their stated preferences (what they say they want to see). The algorithm optimizes for the former, not the latter.
Algorithms have learned to amplify what researchers call "PRIME" information: Prestigious, In-group, Moral, and Emotional content. This oversaturates feeds with content designed to provoke reactions, regardless of whether that content is accurate or representative of actual group opinions.
Here's what this means practically: If you engage with posts that make you defensive about your choices, even to argue in the comments, the algorithm learns that you find this content engaging. It will show you more posts that trigger that same defensiveness. You're training your feed to upset you.
The Engagement Trap: Every time you comment on, share, or even pause to read content that makes you defensive, you're teaching the algorithm "I want more of this." The platform doesn't distinguish between positive and negative engagement; it just wants you scrolling.
The Anti-Natal Example
Let me share my own experience with this. I've been struggling with fertility issues while actively wanting to become a mother. During this vulnerable time, anti-natalist content started appearing frequently in my feed: people expressing joy about their childfree choices, sometimes with an edge of superiority about it.
My initial reaction: feeling mocked, feeling like my life choices were being judged, feeling defensive about wanting something they were rejecting.
Then I recognized the pattern. These people weren't talking about me. They were living their lives, making their choices, sometimes being cocky about it because that's their personality, not because it was directed at me. I had to get clear about what I wanted and why, separate from their choices. And I had to recognize that if I kept engaging with this content emotionally, I was teaching the algorithm to show me more of it.
When I take things personally now, I focus on what those feelings are telling me about myself, not about the other person. The defensiveness often points to areas where I'm less secure in my own choices, or where I'm seeking external validation rather than trusting my own judgment.
Your Consumption Should Serve You, Not Consume You
This is the critical insight: When you take someone else's choices personally, you become the product. Your emotional reaction, including your engagement, your defensiveness, and your time spent arguing, is exactly what the algorithm wants. You're being affected and triggered not for your benefit, but for the platform's advertising revenue.
Research shows that when users read tweets selected by engagement-based algorithms, they feel more negative about their political out-groups and more positive about their in-groups, increasing polarization. Yet users consistently report that they don't actually prefer the content the algorithm selects; they're just more likely to engage with it.
The Paradox: Studies have found that U.S. adults believe social media platforms amplify negative, emotionally charged, and out-group hostile content, and that they shouldn't. We know we don't like it, we say platforms shouldn't do it, yet our engagement behavior keeps rewarding it.
The uncomfortable truth is that rage bait serves the creator and the platform, not you. Even when the creator is innocently sharing their world, choices, and views without any intent to judge yours, the algorithm can weaponize that content to trigger your insecurities.
How to Recognize You're Taking Things Personally
Is this actually about me?
Did they mention you, your situation, your choices? Or are you inserting yourself into their story?
What am I defending?
Often the intensity of your reaction reveals insecurity about your own choice. If you were completely secure in your path, would someone else's different path feel threatening?
Am I seeing this a lot?
If yes, you've trained your feed to show you triggering content. The algorithm thinks you want to see it because you keep engaging with it, even if that engagement is anger or defensiveness.
Would I feel this way if my best friend shared this in person?
Parasocial relationships create false intimacy. Someone you don't know sharing their life on social media is not the same as a friend judging your choices.
How to Train Your Feed Instead of Letting It Train You
The algorithm is not showing you a representative sample of the world. It's showing you what will keep you scrolling. Users say they don't want negative, hostile content amplified, yet the engagement metrics suggest they keep consuming it. Here's how to take control:
Stop engaging with content that triggers defensiveness. Don't argue in comments. Don't quote-tweet to criticize. Don't share to your stories to vent about it. Every interaction teaches the algorithm "show me more of this."
Actively seek and engage with opposite content. If you're seeing a lot of anti-natal content, intentionally search for and engage with pro-parent content that feels supportive of your choices. Train your feed to show you what serves you.
Remember the sample size problem. Something going viral on social media doesn't mean it represents majority opinion. Algorithms amplify extreme or controversial content regardless of how representative it is of actual group opinions. The feed is showing you what triggers engagement, not what's common.
Recognize your own bias toward PRIME content. Humans are naturally drawn to content that is Prestigious, about In-groups, Moral, or Emotional. Algorithms have learned to exploit this bias by oversaturating feeds with PRIME information. Simply being aware of this can help you scroll past content that's designed to manipulate your attention.
Take Back Control: You need to train your feed and your own responses, or they will train you. If you're seeing a lot of negative posts about your life choices, it's because you're engaging with them in some way—even if that engagement is anger or hurt.
What's Actually Happening Here
This isn't victim-blaming. The platforms have built systems that exploit human psychology for profit. The misalignment between algorithm objectives (maximize engagement for ad revenue) and human wellbeing creates predictable negative effects: increased polarization, exhaustion, false perceptions of majority opinions, and the spread of extreme content.
But recognizing the pattern gives you power. When you understand that your defensiveness is partly algorithmic manipulation, partly identity threat, partly the automatic conflation of descriptive and prescriptive norms, you can step back and ask: "Is this feeling serving me? Is my engagement serving me? Or am I being consumed by a system designed to profit from my emotional reactions?"
The Freedom of Not Engaging
There's profound value in being comfortable enough to not engage when someone chooses differently than you. This is what it means for your consumption to serve you: you can scroll past content that triggers defensiveness without needing to argue, correct, or defend.
Someone else living their life isn't a referendum on yours. Their choice isn't a prescription for you. Their joy in their path doesn't diminish your different path. And if you find yourself unable to see that, if every different choice feels like an attack, that's valuable information about where you need to build more security in your own decisions.
It's Okay to Take Things Personally Sometimes
I want to be clear: I'm not saying you should never be affected by what you see online. Sometimes content that triggers you is pointing to genuine problems like systemic injustices, harmful rhetoric, or dangerous misinformation. The question is: Are you taking it personally because there's something genuinely wrong with what's being said, or because it's triggering insecurities about your own choices?
When I see anti-natal content now, I pause and ask myself: Is this person actually attacking people who want kids, or am I feeling defensive because I'm still working through my own fears and hopes about parenthood? Usually it's the latter. And when it's the former, when someone actually is being judgmental rather than just sharing their own path, I can recognize that and choose not to engage rather than letting it derail my day.
The goal isn't to become unaffected by everything. It's to become more aware of why you're affected, so you can respond consciously rather than reactively.
What This Means for Everyone
If you're a creator sharing your life: You're not responsible for managing everyone's insecurities. You don't need 47 disclaimers before sharing your choices. The fact that people now feel they need disclaimers ("This is just what works for me! I'm not saying anyone else should do this! This isn't judgment!") reveals how dysfunctional the dynamic has become.
If you're consuming content: You're responsible for recognizing when someone sharing their life has become a trigger for your own unresolved feelings. Your defensive reaction isn't the creator's problem to solve—it's information about your own relationship with your choices.
We're all navigating this together, each with our own reasons, views, lives, experiences, and needs. No one owes anyone else disclaimers, cover, or protection, because we are all unique, independent, and free to make our own choices. The question is: Can you extend that same freedom to others without taking their different choices as an indictment of yours?
Start Here
Next time you feel that flash of defensiveness scrolling through your feed, pause and ask:
- Did this person actually say anything about my choices, or did I insert myself?
- What am I defending? What insecurity is this triggering?
- If I engage with this emotionally, am I training my feed to show me more triggering content?
- Is my consumption serving me right now, or consuming me?
The algorithm will happily feed your defensiveness for as long as you're willing to engage with it. The creator might not even know you exist, much less intend judgment. And your choices are valid regardless of whether everyone on the internet makes the same ones.
Your feed doesn't have to be a constant emotional battleground. You can train it to show you content that supports rather than threatens your identity. You can practice scrolling past what triggers you without needing to argue about it. You can get curious about your own reactions rather than defensive about others' choices.
This is what it means for your consumption to serve you. The alternative—letting every different choice feel like a personal attack—only serves the algorithm.
Related: This connects directly to the framework in The Four Failure Modes: A Diagnostic for Why Conversations Collapse, particularly the Is/Should Confusion pattern. For more on how media consumption shapes our perception of reality, see my upcoming book This Is Not The Whole Story.