Skip to main content

😊 Sentiment Moderation

The Sentiment Moderation feature leverages AI to detect and act on messages with negative sentiment about the project or community. This automated system helps maintain a positive atmosphere and ensures that potentially harmful messages are addressed swiftly.

Overview

Monitor and moderate users who display consistent negative sentiment. The AI analyzes recent user messages to calculate an average sentiment score, and takes action when users consistently post negative content. Actions can be customized, and community members can even veto decisions through a voting system.

Key benefits:

  • Maintain positive community atmosphere
  • Automatic detection of toxic behavior
  • Community-driven oversight through veto system
  • Customizable sensitivity and actions
  • Reduces moderator workload

Configuration

To access Sentiment Moderation settings:

  1. Use the /settings command in your chat
  2. Navigate to 🤖 AI😊 Sentiment Moderation
  3. Use the Toggle Sentiment option to turn the feature 🟢 ON or 🔴 OFF

Sentiment Moderation Settings

Enabled/Disabled Toggle

Enable or disable sentiment-based moderation for the group.

  • 🟢 Enabled: Sentiment analysis is active and will take actions
  • 🔴 Disabled: Sentiment analysis is paused
Testing Period

Keep the feature disabled while you're calibrating the threshold and testing to avoid unintended actions.

Messages

The number of recent messages analyzed per user to calculate sentiment.

Available options:

  • 3 (default) - Quick assessment
  • 5 - Balanced analysis
  • 7 - More comprehensive
  • 10 - Deeper analysis
  • 15 - Extensive review
  • 20 - Maximum history
  • Other - Custom value

How it works: The bot analyzes the user's last X messages to calculate an average sentiment score. This ensures the bot looks at patterns rather than single messages.

Choosing the right number:

  • Fewer messages (3-5): Faster response to negative behavior, but more prone to false positives
  • More messages (10-20): More accurate assessment of consistent negativity, but slower to detect problematic users
Example

With Messages set to 5, the bot analyzes the user's last 5 messages. If the average sentiment score of those 5 messages falls below the threshold, the action is triggered.

Threshold

Defines the sentiment threshold (average score) below which action will be taken against a user. Lower values indicate negative sentiment.

Available options:

  • 0.7 - Very sensitive (catches mild negativity)
  • 0.75 - Moderately sensitive
  • 0.8 - Balanced (default)
  • 0.85 - Less sensitive
  • 0.9 - Only extremely negative
  • Other - Custom value

Understanding the threshold:

  • Sentiment scores typically range from 0 (extremely negative) to 1 (extremely positive)
  • The threshold represents the minimum acceptable average sentiment score
  • Lower threshold = more sensitive (catches more messages)
  • Higher threshold = less sensitive (only catches very negative messages)

💡 Be careful with low threshold settings, which will allow the bot to evaluate a wider variety of statements as negative (in other words, the bar for what is negative will be set very low).

Important consideration: A user experiencing an error or other difficulty (for example, "I can't complete my registration" or "Registration doesn't work") is technically a negative sentiment, and a very low threshold setting may result in action being taken against that user.

Threshold Calibration

You may need to take some time to tweak this setting to fit the general culture and sentiment of your community, or make specific changes to the threshold when you are running a campaign that might result in more reports of errors or problems.

Recommended settings by community type:

  • Professional/Business: 0.75-0.8 (moderate tolerance)
  • Casual/Social: 0.7-0.75 (catch more negativity)
  • Support/Technical: 0.85-0.9 (allow problem reports)
  • High-engagement: 0.8-0.85 (balanced approach)

Veto

Allow group members to vote and veto actions taken against users deemed negative.

Enable/Disable:

  • 🟢 Enabled: Community can vote to reverse bot actions
  • 🔴 Disabled: Bot actions are final

When enabled, configure:

Votes Needed

The minimum number of votes required to veto an action against a user.

Common configurations:

  • 3-5 votes: Small communities
  • 5-10 votes: Medium communities
  • 10-20+ votes: Large communities

How veto works:

  1. Bot takes action against user (mute/kick/ban/warn)
  2. Bot announces the action with veto option
  3. Community members can vote to "save" the user
  4. If enough votes are reached, the action is reversed
Community Participation

The veto system creates community buy-in and prevents false positives while still maintaining automated protection.

Example: If Votes Needed is set to 5, community members must collectively cast 5 "save" votes to overturn the bot's decision and restore the user.

Action

Specifies the action to take against users with consistently low sentiment scores.

Available actions:

  • Mute: Temporarily restrict the user from sending messages
  • Kick: Temporarily remove the user from the group (can rejoin)
  • Ban: Permanently remove the user from the group
  • Warn: Issue a warning to the user

Duration (for Mute action)

If Mute is selected, specify the duration:

Common durations:

  • 1 hour: Brief timeout
  • 6 hours: Extended timeout
  • 1 day: Day-long restriction
  • 7 days: Week-long restriction
  • Custom: Set your own duration
Action Severity

Start with less severe actions (Warn or short Mute) and monitor results before escalating to Kick or Ban. The goal is to maintain positivity, not alienate legitimate community members.

Common Use Cases

Protecting New Project Communities

Scenario: Crypto project launch needs to maintain positive sentiment.

Configuration:

  • Messages: 5
  • Threshold: 0.75
  • Veto: Enabled (5 votes)
  • Action: Mute (6 hours)

Result: Catches FUD while allowing community to defend legitimate concerns.

Support Channel Protection

Scenario: Technical support channel needs to allow problem reports.

Configuration:

  • Messages: 7
  • Threshold: 0.85 (less sensitive)
  • Veto: Enabled (3 votes)
  • Action: Warn

Result: Only catches extremely toxic behavior, allows technical issues to be reported.

High-Engagement Community

Scenario: Active discussion community with diverse opinions.

Configuration:

  • Messages: 10
  • Threshold: 0.8
  • Veto: Enabled (10 votes)
  • Action: Mute (1 hour)

Result: Focuses on consistent negativity patterns, not individual critical messages.

Zero-Tolerance for Toxicity

Scenario: Family-friendly or professional community.

Configuration:

  • Messages: 3
  • Threshold: 0.7
  • Veto: Disabled
  • Action: Kick

Result: Quick response to any negativity with no community override.

Best Practices

Effective Sentiment Moderation
  • Start Conservative: Begin with high threshold (0.85-0.9) and more messages (10+)
  • Monitor and Adjust: Track false positives and negatives, adjust accordingly
  • Enable Veto: Community input helps calibrate the system
  • Communicate Policy: Let members know sentiment moderation is active
  • Review Actions: Periodically check moderation logs
  • Context Matters: Adjust threshold during campaigns or events
  • Combine with Human Moderation: Use as first line, not sole enforcement
  • Test Thoroughly: Run in disabled mode first to see what would be caught
Avoiding False Positives
  • Don't Set Threshold Too Low: 0.7 or below can catch legitimate concerns
  • Account for Support Issues: Users reporting problems aren't being toxic
  • Cultural Differences: Sentiment analysis may vary across languages
  • Sarcasm Detection: AI may misinterpret sarcastic positivity
  • Event Planning: Temporarily adjust during troubleshooting periods
  • Community Size: Smaller communities need lower veto thresholds
  • Regular Audits: Review why actions were taken

Advanced Strategies

Graduated Response System

Combine sentiment moderation with other features:

Tier 1 (First Offense):

  • Sentiment triggers → Warn
  • Alert sent to user
  • No actual restriction

Tier 2 (Repeated Negativity):

  • Sentiment triggers → Mute (1 hour)
  • User gets time to cool off
  • Veto available

Tier 3 (Persistent Issues):

  • Sentiment triggers → Mute (1 day)
  • Extended timeout
  • Admin review triggered

Tier 4 (Extreme Cases):

  • Manual admin review
  • Kick or ban based on history
  • Community input considered

(Note: This requires manual tracking of violations)

Context-Aware Thresholds

Adjust settings based on community phase:

Launch Phase:

  • Threshold: 0.75 (catch FUD early)
  • Messages: 5
  • Action: Mute (short duration)

Growth Phase:

  • Threshold: 0.8 (balanced)
  • Messages: 7
  • Action: Warn first, then mute

Mature Community:

  • Threshold: 0.85 (less sensitive)
  • Messages: 10
  • Action: Warn (trust the community)

Integration with Other Features

Sentiment + Vote Ban:

  • Sentiment triggers initial flag
  • Community vote ban for final decision
  • Distributed moderation approach

Sentiment + Warns:

  • First negative sentiment → Auto-warn
  • Track warnings across features
  • Escalate based on total warns

Sentiment + Notifications:

  • Alert mods about sentiment triggers
  • Review patterns in flagged users
  • Manual intervention when needed

Veto Analysis

Monitor veto patterns to improve settings:

High Veto Rate (>50% of actions vetoed):

  • Threshold might be too low
  • Increase to 0.85 or higher
  • Analyze what's being falsely flagged

Low Veto Rate (<10% of actions vetoed):

  • Threshold might be appropriate
  • Community agrees with bot decisions
  • Consider reducing veto votes needed

No Veto Activity:

  • Community may not understand veto
  • Communicate the feature better
  • Consider whether veto is needed

Troubleshooting

Too many false positives?

  • Increase threshold (try 0.85 or 0.9)
  • Increase number of messages analyzed
  • Enable veto system
  • Review what's being flagged and adjust

Missing obvious toxic users?

  • Decrease threshold (try 0.75 or 0.7)
  • Decrease number of messages analyzed
  • Check if users are evading with positive messages
  • Consider combining with other moderation tools

Veto system not working?

  • Verify veto is enabled
  • Check votes needed isn't too high for community size
  • Ensure community understands how to veto
  • Test with a known scenario

Actions not being taken?

  • Verify sentiment moderation is enabled
  • Check bot has necessary permissions (mute/kick/ban)
  • Ensure threshold and messages are configured
  • Test with obviously negative messages

Technical issues being flagged?

  • Increase threshold temporarily during issues
  • Communicate with community about temporary changes
  • Use manual moderation during technical problems
  • Consider disabling during major incident responses

Sentiment detection seems inaccurate?

  • AI sentiment may need more context (increase messages)
  • Non-English content may be less accurate
  • Sarcasm and irony can be misinterpreted
  • Consider language and cultural factors

Understanding Sentiment Scores

The AI analyzes messages and assigns sentiment scores:

Score Ranges:

  • 0.9-1.0: Very positive, enthusiastic
  • 0.8-0.9: Positive, supportive
  • 0.7-0.8: Neutral to slightly positive
  • 0.6-0.7: Neutral to slightly negative
  • 0.5-0.6: Negative, critical
  • 0.0-0.5: Very negative, toxic

Example messages by score:

High Positive (0.9+):

  • "This project is amazing! So excited!"
  • "Great work team, loving the updates!"

Neutral (0.7-0.8):

  • "OK, I'll check that out"
  • "Thanks for the info"

Negative (0.5-0.7):

  • "This isn't working for me"
  • "I don't think this is a good idea"

Very Negative (0.0-0.5):

  • "This is terrible, waste of time"
  • "Complete scam, don't trust them"
tip

Remember to give the Fren One bot admin privileges in your chat for Sentiment Moderation to work properly. Start with conservative settings and adjust based on your community's specific needs and culture.