External providers and data processing
We use Microsoft Azure’s implementation of OpenAI models, specifically GPT-4 Turbo, via secure APIs; this setup ensures enterprise-level data handling. A minimal sketch of such an API call follows the list below.
Key data handling practices:
Purpose-limited use: Data is processed only to provide and support the AI services
No direct OpenAI access: The service is hosted entirely within Microsoft Azure and does not interact with external OpenAI platforms like ChatGPT or the public OpenAI API
Regional hosting: Data is processed in the region specified by the customer; we currently support seven regions (see the full list below)
Access control: Microsoft accesses data only for abuse monitoring purposes
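For illustration, here is a minimal sketch of what such an Azure-hosted call could look like using the openai Python SDK. The endpoint, deployment name, and API version shown are illustrative assumptions, not our actual configuration.

```python
# Minimal sketch using the openai Python SDK (>= 1.0) against Azure OpenAI.
# Endpoint, deployment name, and API version are illustrative assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    # The endpoint host pins the request to a specific Azure region,
    # e.g. a resource deployed in France Central stays in that region.
    azure_endpoint="https://example-francecentral.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4-turbo-deployment",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "Summarize the following survey answers."},
        {"role": "user", "content": "Answer 1: ...\nAnswer 2: ..."},
    ],
)
print(response.choices[0].message.content)
```

Because the request goes to a region-specific Azure endpoint, it never touches api.openai.com or any other public OpenAI platform.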
What data is sent to these sub-processors, and does it include PII?
By default, only textual user contributions (e.g., survey answers or ideas) are sent to the AI sub-processors, to support AI analysis features such as summarization and tagging
When is data sent?
Surveys: Text is sent when an admin visits the survey results page.
Ideation: Text is sent when an admin actively starts an AI analysis.
What kind of data is included?
Only the free-text content users wrote in their contributions
No structured user information is shared (e.g., email, username, profile picture, demographic data)
⚠️ If a user includes personal information (PII) in their own contribution text, that PII may be sent to sub-processors as part of the message content. This is not filtered out automatically.
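To make this boundary concrete, here is a hypothetical sketch (not our actual code) of how a payload could be assembled so that only the free-text body of each contribution is forwarded, while account fields stay behind:

```python
from dataclasses import dataclass


@dataclass
class Contribution:
    body_text: str     # free text written by the user; this is what gets sent
    author_email: str  # account data: never included in the payload
    author_name: str   # account data: never included in the payload


def build_ai_payload(contributions: list[Contribution]) -> list[str]:
    """Collect only the free-text bodies for AI analysis.

    Note: if a user typed PII into body_text itself, it is forwarded
    as-is; there is no automatic PII filtering.
    """
    return [c.body_text for c in contributions]
```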
Are these sub-processors using the data to train or improve their models?
No. Both sub-processors explicitly state that they do not use the data for this purpose.
Where is Microsoft processing the data?
Microsoft allows us to specify the region of processing. We currently use seven regions, and each customer’s data is processed in the region closest to them:
Frankfurt
UK
US
Canada
Brazil
Paris
Stockholm
Why are the AI’s answers not in my language?
Our AI features try to answer in the language of the input they receive. In exceptional cases, such as mixed-language input, very few inputs, or when the model simply gets it wrong, the answer may be generated in the wrong language. Retrying usually resolves this (a sketch of such a retry guard follows below).
⚠️ All core languages are supported, with the exception of Greenlandic.
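As an illustration only, here is a hypothetical retry guard; langdetect and the generate callable stand in for whatever detection and generation steps are actually used:

```python
from typing import Callable

from langdetect import detect  # pip install langdetect; stand-in language detector


def summarize_with_language_guard(
    inputs: list[str],
    generate: Callable[[list[str]], str],  # hypothetical, e.g. wraps the Azure call above
    max_retries: int = 2,
) -> str:
    """Regenerate the summary when it comes back in the wrong language."""
    expected = detect(" ".join(inputs))   # language of the combined input
    summary = generate(inputs)
    for _ in range(max_retries):
        if detect(summary) == expected:   # languages match: done
            break
        summary = generate(inputs)        # retrying usually resolves it
    return summary
```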
How accurate are the generated summaries?
Summarization always means discarding some details while retaining what seems most important.
AI models are strong at identifying common elements, but deciding what is most relevant requires context, domain knowledge, and subjective judgment.
Because of this, summaries can be very useful but are not 100% accurate. Human oversight remains essential.
To ensure correct conclusions, our approach emphasizes responsible AI use:
The human reviewer always remains in control.
Transparency is maximized so you can verify how summaries are created.
AI provides efficiency, while humans ensure quality and accuracy.
Our platform includes several features to help you assess and improve summary quality:
Expected accuracy indication: Before and after generating a summary, the system shows an accuracy estimate (percentage)
In-line references: Each summary links back to the original resident inputs it is based on
Full data access: All project inputs remain browseable, so you can always compare summaries with raw contributions.
Tagging: Manually or automatically segment inputs into smaller groups for more focused, accurate summaries (see the sketch after this list)
Auto-tagging options: Multiple methods are available; tags can always be overridden for maximum control
Source-available software: The code is available on GitHub, ensuring transparency into how the tool works
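As a sketch of how tagging narrows the summarization scope, assuming a hypothetical summarize callable (e.g., a wrapper around the Azure OpenAI call shown earlier):

```python
from collections import defaultdict
from typing import Callable, Iterable


def summarize_by_tag(
    tagged_inputs: Iterable[tuple[str, str]],   # (tag, text) pairs
    summarize: Callable[[list[str]], str],      # hypothetical summarization callable
) -> dict[str, str]:
    """Summarize each tag group separately instead of everything at once."""
    groups: dict[str, list[str]] = defaultdict(list)
    for tag, text in tagged_inputs:
        groups[tag].append(text)
    # One summary per tag: smaller, topically coherent groups tend to
    # yield more focused and more accurate summaries.
    return {tag: summarize(texts) for tag, texts in groups.items()}
```

Summarizing per tag keeps each prompt topically coherent, which is the design rationale behind the tagging features listed above.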
Summary: While AI-generated summaries are highly effective, they are never 100% accurate. Our design philosophy is human-centric: the AI assists with efficiency, while you retain full transparency and control over interpretation.
