Dealing with Online Hate as a Company

Once democratic, now toxic? – How the internet has changed

Fifteen years ago, the internet was a different place – at least in the eyes of many. Web 2.0 had become the new reality. Insults, tastelessness, or cyberbullying certainly existed, but many users still shared a common vision in their communication: the internet as a democratic space for exchange, creativity, and positive community. Platforms like Facebook, Twitter, and YouTube promised to give everyone a voice.

It was a time when terms like netiquette* gained importance and evolved. The goal was respectful, polite interaction in digital spaces, a sort of unwritten code of conduct – non-binding and self-evident – for this new, interconnected public. In 2010, the “German Etiquette Council,” a private group of etiquette experts, published the “Social Media Etiquette Guide” for Web 2.0. It focused on social media as a different form of networking for heterogeneous communities compared to traditional newsgroups, mailing lists, or web forums. The twelve-point program aimed to prevent harassment and trolling at an early stage and to promote lively, humorous, and appropriately distanced communication.

"Web 2.0 began with a vision of the internet as a democratic space for exchange, creativity, and positive community."

Hate is no longer the exception – it’s the norm

Fourteen years later, German Family Minister Lisa Paus presented the results of the study "Louder Hate – Quieter Withdrawal: How Online Hate Threatens Democratic Discourse." This representative study shows that online hate is now a daily occurrence and increasingly normalized. Nearly half of all people in Germany (49%) have been insulted online. One in four (25%) has been confronted with threats of physical violence, and 13% with sexualized violence.

The consequences for users are immense, not only directly in the form of fear and frustration, but also indirectly: the study shows that online hate significantly reduces diversity on the internet and suppresses free expression. Moreover, online hate often fuels real-life violence.

This reality presents major challenges not only for individuals but also for public authorities and companies.

Between regulation and reality: The state’s fight against hate

In Germany, the federal government funds numerous initiatives against online hate, including the “Competence Network Against Online Hate” composed of five organizations: HateAid, Das Nettz, Neue Deutsche Medienmacher*innen, Gesellschaft für Medienpädagogik und Kommunikationskultur, and Jugendschutz.net.

A constitutional state offers the possibility to file criminal charges against hate speech. Even a single Facebook comment can constitute a criminal offense. Individuals can also assert their civil rights, such as the right to their own image, in court. However, how do these relatively bureaucratic processes work in practice? Counseling centers like HateAid help explore these options in detail and support individuals in taking appropriate action.

In Luxembourg, state support is essentially limited to one tool – BEE SECURE. In 2023 and 2024, more than 860 URLs were reported via this platform. In 575 cases, BEE SECURE forwarded the information to the relevant authorities. However, only a few cases resulted in convictions: from 2020 to 2024, only 23 legally binding rulings were issued.

States are also responsible for regulating the “places” where online hate occurs. The Digital Services Act (DSA), in effect in the EU since February 2024, obliges platform operators to moderate content. The European legal framework is comparatively strict, especially compared to the U.S. Hate speech must be deleted promptly once reported, and platforms must provide clear mechanisms for users to report violations.

Opinions on the DSA’s effectiveness are mixed. On one hand, transparency has increased regarding what actions platforms actually take against hate content, as they are now required to regularly report on their moderation efforts. On the other hand, reports of hate speech remaining online for extended periods persist. The EU is currently unable to ensure consistent enforcement of the DSA.

Between profit and responsibility: The role of platforms

Emotionally charged content is inherently desirable for major platforms because it drives engagement and reach – and that includes hate. Troll factories have long exploited this dynamic. Given the ambiguity around what exactly constitutes hate or insult in individual cases, the policies and algorithms of platforms walk a fine line between upholding human rights and maintaining their business models.

Platforms have revised their policies multiple times, most recently Meta, whose changes Mark Zuckerberg himself publicly defended.

Taking action: What individuals can do

Before reporting an online crime to the police, many turn to online counseling, reporting, or complaint centers. The LOVE-Storm initiative maintains a list of resources and offers a 10-point guide on how to respond practically to online hate. Hate does not have to be endured in silence – reliable help is available in many forms, including accessible formats such as YouTube videos [Example 1] [Example 2] [Example 3].

When hate hits the comment section – what companies can do

On their official channels, companies aim for what the “Social Media Etiquette Guide” from 2010 promoted: lively, humorous, and respectful discourse. No brand wants to discover hate speech – legal or not – in its comment sections.

A diagram illustrates the core elements of a moderation strategy in response to online hate: prevention, communication, post-incident measures, and analysis.

Prevention starts before the shitstorm

PREVENTION involves both technical and strategic measures. On the technical side, companies can use platform-specific algorithms to identify and even automatically hide hateful content. For example, YouTube Studio offers options to scan comments for “inappropriate content” and hold potential hate comments for manual review.
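The automatic hold-for-review idea described above can be sketched in a few lines. This is a minimal illustration, not YouTube's actual mechanism: the blocklist and threshold are hypothetical stand-ins for the maintained lexicons or toxicity classifiers a real moderation pipeline would use.

```python
import re

# Illustrative blocklist — a real deployment would rely on a maintained
# lexicon or a trained toxicity classifier, not a handful of terms.
BLOCKLIST = {"idiot", "loser", "trash"}

def review_action(comment: str, threshold: int = 1) -> str:
    """Return 'hold' if the comment contains enough flagged terms,
    otherwise 'publish'. Flagged comments go to manual review."""
    words = re.findall(r"[a-zäöüß']+", comment.lower())
    hits = sum(1 for w in words if w in BLOCKLIST)
    return "hold" if hits >= threshold else "publish"

print(review_action("Great video, thanks!"))  # publish
print(review_action("You absolute idiot"))    # hold
```

The important design point is that flagged comments are held, not deleted: automated filters make mistakes, so a human moderator gets the final say.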

Human moderation also needs to be well-organized, with proper scheduling or the support of external community management agencies.

Strategy over spontaneity: Responding to hate

Strategic preparation ensures that everyone involved in a brand’s social media presence is on the same page about how to handle different scenarios. When should comments be deleted immediately versus merely hidden? When is a reply appropriate, and who should draft it? How are text templates managed appropriately? When should an issue be escalated, and through what channel?
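Internal guidelines often end up as a decision matrix. The sketch below encodes the questions above as a simple rule order, purely as an illustration: the fields and priorities are hypothetical and would be defined in a company's own workshop, not taken from this code.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    DELETE = auto()
    HIDE = auto()
    REPLY = auto()
    ESCALATE = auto()

@dataclass
class Comment:
    text: str
    potentially_illegal: bool  # may constitute a criminal offense
    targets_person: bool       # attacks a specific individual
    is_criticism: bool         # substantive criticism of the company

def decide(c: Comment) -> Action:
    """Apply guideline rules in priority order (illustrative only)."""
    if c.potentially_illegal:
        return Action.ESCALATE  # preserve evidence, involve the legal team
    if c.targets_person:
        return Action.HIDE      # hiding is reversible; deletion is not
    if c.is_criticism:
        return Action.REPLY     # criticism deserves a considered answer
    return Action.DELETE        # plain spam with no discussion value
```

Encoding the rules this explicitly is less about automation and more about forcing the team to agree, in advance, on which condition outranks which.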

Creating internal community guidelines involves aligning moderation resources, company values, and communication goals. Often, it’s beneficial to develop these guidelines and social media strategies through workshops involving representatives from all relevant departments.

When hate appears on a company’s channel, prompt but measured action is key. Ideally, social media teams can follow a pre-established COMMUNICATION strategy. Experience shows that a calm, firm, and non-preachy tone is often the best way to respond, while also appealing to the “silent majority” of bystanders. A CURE Intelligence study for a telecommunications company in 2024 revealed that while hate speech calls for counterspeech, overly aggressive replies can backfire and fuel a toxic exchange – essentially a modern interpretation of “don’t feed the troll.”

In other situations – for example, when a clear mistake has occurred – an apology or explanation may help defuse criticism.

"A strategically guided use of the tools of deletion, reply, and silence is the modern interpretation of 'don't feed the troll' in many social media teams!"

After the incident: Support, reflection, adaptation

POST-INCIDENT measures include regularly reviewing the effectiveness of internal handling of online hate and adapting approaches to changing communication trends or platform features. Additionally, especially in cases of targeted attacks on individuals, affected staff must be offered or provided with support. Legal assistance or psychological counseling may be necessary in some cases.

Strong channels need smart strategies – and data

In the long run, continuous ANALYSIS of social media traffic is essential. A validated and structured data foundation that grows over time becomes an increasingly solid basis for adjusting PREVENTION and COMMUNICATION strategies when online hate occurs.
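Such a data foundation can start very small: a structured log of moderation incidents that is aggregated over time. The sketch below is a minimal illustration; the field names and values are assumptions, not a prescribed schema.

```python
from collections import Counter
from datetime import date

# Hypothetical structured log of moderation incidents.
incidents = [
    {"day": date(2024, 5, 1), "platform": "facebook", "action": "hide"},
    {"day": date(2024, 5, 1), "platform": "youtube",  "action": "reply"},
    {"day": date(2024, 5, 2), "platform": "facebook", "action": "hide"},
]

def actions_by_platform(log):
    """Count how often each action was taken per platform — the kind of
    view that informs future PREVENTION and COMMUNICATION strategy."""
    return Counter((rec["platform"], rec["action"]) for rec in log)

summary = actions_by_platform(incidents)
print(summary[("facebook", "hide")])  # 2
```

Because the log is structured rather than anecdotal, the same records can later be sliced by date, platform, or campaign, and the accumulated history survives staff turnover.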

This results in a unique pool of insights on constructive communication with a company’s specific audience and critics. It also ensures that institutional knowledge is retained even when staff changes occur.

CURE Intelligence is your partner in building a solid foundation to effectively deal with hate on your digital channels. Ask us about our Data Intelligence Services for post-incident analysis and evidence-based recommendations. Explore our Media Intelligence Services for 24/7 channel monitoring and personalized early warning systems, and use our Marketing Intelligence Services for professional support in day-to-day community management, campaign moderation, and social media strategy workshops.

 

* Netiquette is a portmanteau of “net” and “etiquette.” The term dates back to the 1980s and originally focused on proper behavior when using technical systems such as newsreaders, emphasizing readability and clarity.

Sources and References

  • https://www.cure-intelligence.com/loesungen/data-intelligence/
  • https://anwaltskanzlei-feuerhake.de/rechte-bei-rachevideos-und-nacktfotos-revenge-porn
  • https://www.bee-secure.lu/de/publikation/netiquette/
  • https://de.wikipedia.org/wiki/Netiquette
  • https://knigge-rat.de/mitglieder/frank-heinrich/
  • https://love-storm.de/10-tipps-gegen-hass-im-netz/
  • https://neuemedienmacher.de/wp-content/uploads/2019/10/Leitfaden-gegen-Hassrede-2019.pdf
  • https://www.anwalt.org/facebook-hasskommentare/
  • https://www.bmfsfj.de/bmfsfj/aktuelles/alle-meldungen/hass-im-netz-gefaehrdet-demokratie-236282
  • https://www.bosch-stiftung.de/de/story/gegen-hass-und-hetze-im-netz-was-unternehmen-wie-alba-dagegen-tun
  • https://www.handelsblatt.com/meinung/gastbeitraege/zuckerberg-verwandelt-facebook-in-eine-plattform-fuer-hass-und-hetze/100100904.html
  • https://www.jugendschutz.net/fileadmin/daten/publikationen/praxisinfos_reports/report_hass_gegen_junge_klimaaktivist_innen.pdf
  • https://www.taskcards.de/#/board/fe82ce6e-03af-4e62-b3a3-bac32101dcd0/view?token=4d350aeb-43ff-45d0-ac8e-2ffa6b32229c

Types of Online Hate (Overview)

“Online hate” serves as an umbrella term for various forms of verbal digital violence, including:

Hate speech:
Derogatory or violent statements targeting individuals or groups based on traits such as race, religion, gender, sexual orientation, or disability; often legally actionable

Toxic speech:
Provocative or demeaning language meant to insult or offend; often not legally actionable unless persistent

Dangerous speech:
Statements that incite or legitimize violence, especially in tense social climates

Shitstorms:
Mass online outrage often expressed through insults and public criticism

Revenge porn:
Sharing explicit, often manipulated, images without consent

Doxxing:
Publishing private or identifying information of individuals

Online stalking:
Persistent digital harassment or surveillance

Account or identity hacking:
Unauthorized access to digital accounts with the intent to steal, leak, or manipulate information

All these forms of violence aim to demean, intimidate, or socially exclude others.
