Overview of platform protection and moderation
Your platform has built‑in protections and moderation tools to ensure safe and respectful participation.
Key mechanisms include:
Community reporting for inappropriate content
Profanity filtering to block offensive language
AI-powered detection using Natural Language Processing (NLP)
Spam and throttling prevention to block malicious or automated abuse
Strong Terms & Conditions, Privacy Policy, and GDPR compliance
Together, these tools help admins maintain a healthy, constructive environment with minimal manual intervention.
Platform protection systems
Every GoVocal platform includes:
Spam and throttling detection: Automatically stops suspicious behavior, such as mass idea submissions or repeated spammy responses.
IP monitoring: Flags unusual activity, such as multiple accounts created rapidly from the same IP address, while accounting for shared networks. (A conceptual sketch of both checks follows this list.)
Downvote-based self-moderation: Poorly received content naturally sinks in ranking, helping maintain constructive discussions.
Admin digest emails: Weekly summaries of submitted content for easy monitoring and quick identification of issues.
Manual spam flagging: Both users and administrators can flag ideas as spam and specify a reason for reporting.
Email and citizen verification: Email verification is enabled by default to confirm valid registrations. In certain regions, citizen verification (e.g., via ItsMe) is also supported; see the related support article for details.
Terms of Use enforcement: Administrators can remove accounts, ideas, or comments when improper use is detected.
Privacy Policy & GDPR alignment: Protects user data and ensures compliance with European data protection regulations.
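To make the first two mechanisms more concrete, here is a minimal TypeScript sketch of burst throttling and same-IP signup flagging. It is illustrative only: the class names, thresholds, and time windows are assumptions, not GoVocal's actual implementation.

```typescript
// Hypothetical sketch of throttling and IP monitoring; GoVocal's real
// logic, names, and thresholds are not public and will differ.

// Tracks recent submission timestamps per user and rejects bursts.
class SubmissionThrottle {
  private history = new Map<string, number[]>();

  constructor(
    private maxPerWindow = 5,   // assumed limit: 5 submissions...
    private windowMs = 60_000   // ...per 60-second window
  ) {}

  allow(userId: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.history.get(userId) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.maxPerWindow) return false; // throttle the burst
    recent.push(now);
    this.history.set(userId, recent);
    return true;
  }
}

// Flags an IP when many accounts register from it in a short span.
// This is only a signal: shared networks are legitimate, so a flag
// should lead to human review, not an automatic block.
class IpMonitor {
  private signups = new Map<string, number[]>();

  constructor(private maxSignups = 10, private windowMs = 3_600_000) {}

  registerAndCheck(ip: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.signups.get(ip) ?? []).filter(t => t > cutoff);
    recent.push(now);
    this.signups.set(ip, recent);
    return recent.length > this.maxSignups; // true = flag for review
  }
}

// Usage: the sixth rapid submission from the same user is rejected.
const throttle = new SubmissionThrottle();
for (let i = 0; i < 6; i++) console.log(throttle.allow("user-1"));
```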
How inappropriate content is detected
Your platform uses three complementary detection methods:
Community Reporting: Any participant can flag a post, proposal, or comment as inappropriate.
Profanity Filter: Blocks common offensive words in supported languages and prompts users to edit before posting (sketched after this list).
AI/NLP Detection: Uses Natural Language Processing to detect abusive, toxic, or inappropriate language automatically.
Note: NLP detection is available in English, French, German, Spanish, and Portuguese.
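As a rough illustration of how the word-list profanity filter works, here is a minimal sketch. The `blockedWords` entries and `findBlockedWords` function are hypothetical; the platform's actual lists, languages, and matching rules are managed by GoVocal.

```typescript
// Hypothetical word-list profanity check; the real lists and matching
// rules are managed by GoVocal and may differ.

const blockedWords = new Set(["badword", "worseword"]); // placeholder entries

// Returns the offending words so the author can be prompted to edit
// the text before posting, mirroring the platform's behavior.
function findBlockedWords(text: string): string[] {
  return text
    .toLowerCase()
    .split(/[^\p{L}]+/u) // split on any non-letter characters
    .filter(word => blockedWords.has(word));
}

const hits = findBlockedWords("This comment contains a badword, sadly.");
if (hits.length > 0) {
  console.log(`Please edit before posting. Blocked: ${hits.join(", ")}`);
}
```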
How to review and moderate inappropriate content?
Go to the Notifications tab in the admin panel.
Click on the content warning or report notification.
Review the reported item (post, comment, proposal).
Take moderation action based on your guidelines:
Edit or delete inputs or proposals.
Post an official update or a clarifying comment.
Hide items (for timeline projects) by deselecting their associated phases.
Important notes:
If you delete inputs or proposals, their authors will not receive an email (notify them separately if needed).
Comments cannot be edited, only deleted. If deleted, the author will receive an email explaining why.
What can be edited vs deleted?
Inputs (ideas, contributions) & Proposals: Can be edited or deleted by admins/managers.
Comments: Cannot be edited — only deleted.
When deleted, the author gets an email notification.
Hiding content: In timeline projects, you can hide items instead of deleting them by deselecting all associated phases.
How to enable/disable moderation features?
Go to Platform Settings.
Enable or disable:
Profanity filter (blocks common offensive words).
Detect inappropriate content (AI/NLP scanning).
Save changes.
You can toggle these features on or off at any time; the sketch below illustrates conceptually how the toggles gate each check.
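Conceptually, each toggle gates an independent check that runs before content is published. The sketch below shows that idea only; the setting names and check functions are hypothetical and do not reflect GoVocal's internal API.

```typescript
// Conceptual sketch of how the two toggles gate moderation checks.
// All names here are hypothetical illustrations, not GoVocal's API.

interface ModerationSettings {
  profanityFilterEnabled: boolean; // the "Profanity filter" toggle
  nlpDetectionEnabled: boolean;    // the "Detect inappropriate content" toggle
}

type CheckResult = { ok: true } | { ok: false; reason: string };

function containsProfanity(text: string): boolean {
  // Stand-in for the word-list check sketched earlier.
  return /\bbadword\b/i.test(text);
}

function looksInappropriate(text: string): boolean {
  // Stand-in for an NLP classifier; a real one would call a trained model.
  return text === text.toUpperCase() && text.length > 20;
}

function moderate(text: string, settings: ModerationSettings): CheckResult {
  if (settings.profanityFilterEnabled && containsProfanity(text)) {
    return { ok: false, reason: "profanity" };
  }
  if (settings.nlpDetectionEnabled && looksInappropriate(text)) {
    return { ok: false, reason: "inappropriate content" };
  }
  return { ok: true }; // all enabled checks passed
}

// Disabling a toggle skips that check entirely:
const settings = { profanityFilterEnabled: false, nlpDetectionEnabled: true };
console.log(moderate("a badword slips through", settings)); // { ok: true }
```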
How to add custom inappropriate words?
Prepare a list of single words (phrases are not supported); the sketch after these steps can help you validate a longer list.
Contact support via the chat bubble.
Send the list (for longer lists, prepare an .xlsx or similar file).
Support will add the words to your platform’s filter.
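For longer lists, a short script can catch entries that would otherwise be rejected. This sketch is purely a convenience and assumes you have pasted your entries into an array: it keeps unique single words and reports multi-word entries so you can fix them before contacting support.

```typescript
// Validates a custom word list before sending it to support:
// keeps unique single words and reports unsupported multi-word entries.
// The rawEntries array stands in for words pasted from your own file.

const rawEntries = ["spamword", "Spamword", "two words", "  another  "];

const singleWords = new Set<string>();
const rejected: string[] = [];

for (const entry of rawEntries) {
  const cleaned = entry.trim().toLowerCase();
  if (cleaned === "") continue;  // skip empty lines
  if (/\s/.test(cleaned)) {
    rejected.push(entry);        // phrases are not supported
  } else {
    singleWords.add(cleaned);    // dedupe, case-insensitively
  }
}

console.log("Ready to send:", [...singleWords]);  // ["spamword", "another"]
console.log("Fix these (multi-word):", rejected); // ["two words"]
```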