Welcome to the comprehensive guide for Citizenlab's new AI Analysis feature. This innovative tool aims to revolutionize the way you understand, categorize, and analyze the plethora of input from residents participating in your community engagement initiatives. Compatible with both ideation methods and surveys, the AI Analysis tool offers a user-friendly interface and a range of functionalities to help you derive actionable insights from community feedback.
Access to this feature depends on your plan.
Getting Started
Our AI analysis tool can be used to process two types of textual inputs: ideas collected through our ideation method or open-ended survey responses.
Ideation
To access the AI Analysis feature, navigate to the input manager of your project that employs an ideation method. Here, you'll find the option to launch the AI Analysis tool.
Surveys
For survey-based projects, the AI Analysis feature can be accessed directly from the survey results page. Upon launching the tool, you'll be prompted to select the questions you wish to analyze. You can also add follow-up questions to the initial question for a more holistic analysis.
The AI Analysis tool currently summarizes only open-ended (textual) survey and ideation questions. An 'other' field attached to a quantitative question, such as multiple choice or single select, will not be summarized.
Process, Interface and Columns
The AI Analysis interface is divided into four main columns, each serving a specific purpose:
1. This is your control panel for creating and managing tags. Tags are essential for clustering inputs and facilitating more nuanced analysis.
2. This column displays a list of all the input you wish to analyze. It serves as your data pool.
3. Here, you can:
View the selected input and manually add or remove tags as needed.
Filter your inputs by another survey question.
4. This is where you can generate summaries and ask the AI questions for further clarification or insights.
Tagging Methods
Tagging is a crucial part of the analysis process, and our tool offers multiple ways to do it:
1. Fully Automated: Let the system do the work. It will scan through the data and apply common tags automatically.
2. By label: If you have specific criteria in mind, you can create your own tags and let the system apply them based on their labels.
3. By example: You can manually tag a few inputs as examples to teach the system how to tag the rest (see the simplified illustration below).
4. Sentiment: For a more nuanced understanding, you can also tag inputs based on sentiment, dividing them into positive and negative sentiment.
5. Language: Tag inputs based on their detected language.
If you prefer complete control, you can also tag every input manually.
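To make the 'by example' method more concrete, here is a minimal, hypothetical sketch of example-based tagging. It is not CitizenLab's actual implementation; it assumes the scikit-learn library and made-up example data, and simply assigns each untagged input the tag of its most similar manually tagged example.

```python
# Illustrative sketch only -- NOT CitizenLab's actual tagging implementation.
# "Tag by example": give each untagged input the tag of its most similar
# manually tagged example, using TF-IDF similarity as a stand-in for the
# platform's model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A few inputs tagged by hand (hypothetical examples).
examples = [
    ("More protected bike lanes on Main Street, please.", "cycling"),
    ("The evening buses are far too infrequent.", "public transport"),
    ("We need more benches and shade in the park.", "public space"),
]

# Untagged resident inputs that still need a tag.
new_inputs = [
    "Bike lanes near the school would make cycling much safer.",
    "Please add more evening buses on weekends.",
]

example_texts = [text for text, _ in examples]
example_tags = [tag for _, tag in examples]

# Represent all texts as TF-IDF vectors and compare new inputs to the examples.
vectorizer = TfidfVectorizer().fit(example_texts + new_inputs)
similarities = cosine_similarity(
    vectorizer.transform(new_inputs),
    vectorizer.transform(example_texts),
)

for text, row in zip(new_inputs, similarities):
    print(f"{text!r} -> suggested tag: {example_tags[row.argmax()]}")
```

In the platform itself you never write code; this sketch only illustrates why a handful of well-chosen examples can be enough to steer the automatic tagging.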
Filtering and Previewing Input
The second and third columns help you explore the input:
Use various filters to focus on input from specific time periods, engagement levels, or demographic fields.
Preview the overall distribution of answers across different demographic groups, giving you a snapshot of community engagement.
Summaries and Questions
The fourth column is your go-to place for summaries and further inquiries:
Use the 'Summarize' button to generate concise summaries of the selected input.
Use the 'Ask a Question' button to probe deeper into specific areas.
Our summaries include references to the original inputs so you can validate the AI's statements. To draw correct conclusions, it's crucial to keep a human in the loop and to be transparent about how the AI reaches its conclusions. Our AI Analysis has been designed from the ground up to let you use AI responsibly, giving you maximum transparency and control while the machine assists you with high efficiency.
Remember, the more data you feed into the system, the vaguer the summaries may become. Use tags and filters to maintain the quality of your summaries.
The AI-generated content may not be 100% accurate. Please review it and cross-reference it with the actual inputs. Be aware that accuracy is likely to improve when the number of selected inputs is reduced.
When a summary is generated, it will include clickable references to related ideas, allowing you to delve deeper into the context.
Missing In-Line References
Our AI summarization tool aims to provide accurate and concise summaries with appropriate in-line references. Occasionally, however, references may be missing for a few reasons:
1. Contextual Relevance: In some cases, the summaries produced do not require specific references, as they may pertain to general knowledge or widely accepted information. Our AI is designed to discern when a reference may not add substantial value, thus omitting it to maintain the summary's clarity and conciseness.
2. Balancing Reference Inclusion: Initially, our AI tended to include references extensively, sometimes resulting in overly lengthy lists that were not always helpful. To improve the utility and readability of the summaries, we refined the system to include references more selectively. This adjustment sometimes means that a summary might miss out on a reference that could be deemed useful, although it was an intentional design choice to prevent overwhelming the user with unnecessary citations.
3. Continuous Improvement: We recognize the importance of in-line references and are continually working to fine-tune our AI's performance in this area. The current system represents a balance we've struck based on extensive testing and user feedback. While we aim to include references where they add value, the dynamic nature of language and context means some omissions are inevitable.
Exporting Summaries & Building Reports
You can either copy the summary from the AI Analysis interface or integrate it directly into a report via the report builder. You will find your AI summaries in the AI tab on the left; simply drag the summary you need and drop it wherever you want in the report.
Any further questions or in need of support? Get in touch with us via the chat bubble on your platform. 💬