Overview of platform protection and moderation
Your platform has built‑in protections and moderation tools to ensure safe and respectful participation.
Key mechanisms include:
Community reporting for inappropriate content
Profanity filtering to block offensive language
AI-powered detection using Natural Language Processing (NLP)
Spam and throttling prevention to block malicious or automated abuse
Strong Terms & Conditions, Privacy Policy, and GDPR compliance
Together, these tools help admins maintain a healthy, constructive environment with minimal manual intervention.
Platform protection systems
Every GoVocal platform includes:
Spam and throttling detection: Automatically stops suspicious behavior (e.g., mass idea submissions or repeated spammy responses).
IP monitoring: Flags when multiple suspicious accounts are created in quick succession from the same IP address.
Downvote-based self-moderation: Poorly received content naturally sinks in ranking.
Admin digest emails: Weekly summaries of submitted content for easy monitoring.
Terms of Use enforcement: Administrators can remove accounts, ideas, or comments if improper use is detected.
Privacy Policy & GDPR alignment: Protects user data and ensures legal compliance.
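Spam and throttling detection of the kind described above is commonly built on a sliding-window rate limit: a user who submits more items than a threshold allows within a short window is flagged. The sketch below is a minimal, generic illustration of that idea; the class name, thresholds, and API are illustrative assumptions, not GoVocal's actual implementation.

```python
from collections import deque
import time

class SubmissionThrottle:
    """Illustrative sliding-window rate limiter: flags a user who submits
    more than `max_submissions` items within `window_seconds`."""

    def __init__(self, max_submissions=5, window_seconds=60):
        self.max_submissions = max_submissions
        self.window_seconds = window_seconds
        self.history = {}  # user_id -> deque of submission timestamps

    def allow(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        timestamps = self.history.setdefault(user_id, deque())
        # Drop timestamps that have fallen out of the window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_submissions:
            return False  # throttled: looks like mass submission
        timestamps.append(now)
        return True
```

A third submission within the window would be rejected, while a submission after the window has passed would be accepted again.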
How inappropriate content is detected
Your platform uses three complementary detection methods:
Community Reporting: Any participant can flag a post, proposal, or comment as inappropriate.
Profanity Filter: Blocks common offensive words in supported languages and prompts users to edit before posting.
AI/NLP Detection: Uses Natural Language Processing to detect abusive, toxic, or inappropriate language automatically.
Note: NLP detection is available in English, French, German, Spanish, and Portuguese.
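At its simplest, a profanity filter like the one described above matches each word of a submission against a per-language blocklist and reports any hits so the author can be prompted to edit. This is a minimal generic sketch, not GoVocal's filter; the word list and function name are placeholders.

```python
import re

# Placeholder terms; a real deployment loads a per-language word list.
BLOCKLIST = {"badword", "slur"}

def check_profanity(text, blocklist=BLOCKLIST):
    """Return the blocked words found in `text`, matching whole words
    case-insensitively, so the UI can ask the author to edit them."""
    words = re.findall(r"[\w'-]+", text.lower())
    return sorted(set(words) & blocklist)
```

An empty result means the text passes the filter; a non-empty result lists the words the author would be asked to change before posting.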
How to review and moderate inappropriate content?
Go to the Notifications tab in the admin panel.
Click on the content warning or report notification.
Review the reported item (post, comment, proposal).
Take moderation action based on your guidelines:
Edit or delete inputs/proposals.
Post an official update or a clarifying comment.
Hide items (for timeline projects) by deselecting their associated phases.
Important notes:
If you delete inputs/proposals, authors will not receive an email (notify separately if needed).
Comments cannot be edited, only deleted. If deleted, the author will receive an email explaining why.
What can be edited vs deleted?
Inputs (ideas, contributions) & Proposals: Can be edited or deleted by admins/managers.
Comments: Cannot be edited — only deleted.
When deleted, the author gets an email notification.
Hiding content: In timeline projects, you can hide items instead of deleting them by deselecting all associated phases.
How to enable/disable moderation features?
Go to Platform Settings.
Enable or disable:
Profanity filter (blocks common offensive words).
Detect inappropriate content (AI/NLP scanning).
Save changes.
You can toggle these features on or off anytime.
How to add custom inappropriate words?
Steps:
Prepare a list of single words (phrases are not supported).
Contact support via the chat bubble.
Send the list (for longer lists, prepare an .xlsx or similar file).
Support will add the words to your platform’s filter.
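Since the filter accepts single words only, it can help to tidy your list before sending it: trim whitespace, lowercase, drop phrases, and remove duplicates. The helper below is a hypothetical convenience sketch for preparing such a list, not part of the platform.

```python
def clean_word_list(raw_entries):
    """Keep only single words: phrases are not supported, so entries
    containing whitespace are dropped; duplicates are removed,
    preserving order."""
    seen = []
    for entry in raw_entries:
        word = entry.strip().lower()
        if word and " " not in word and word not in seen:
            seen.append(word)
    return seen
```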