
How to Visualize and Analyze Activity Metrics

Once your Study is complete, it’s now time to dive into the data collected. Quantitative methodologies are meant to provide researchers with objective, measurable results. Many of the activities your participants complete will provide you with concrete data on what’s working and what isn’t working within your platform. 

Some of the quantitative metrics Userlytics offers are the System Usability Scale (SUS), Net Promoter Score (NPS), Single Ease Question (SEQ), success/failure questions, rating questions, time on task, card sorting, tree testing and more. 

Our fully re-imagined Metrics section allows you to visualize data with clear and dynamic charts, find what you need faster with advanced filters, and navigate effortlessly through the interface for a smoother, more efficient experience. Think of it as a landing place where your data becomes actionable insights allowing you to improve your brand’s customer experience.

In this guide, we’ll walk you through how to visualize each activity’s metrics and, where applicable, offer additional tips on how to analyze them.

Getting Started

To get started, locate your latest study and navigate to the left-hand panel. From there, you’ll find the Metrics section, which contains all the data associated with the study you ran.

Screenshot of the Userlytics platform showing the Study Info setup page. A red box and arrow highlight the Metrics option in the left-hand navigation menu, indicating where users can access the Metrics section.

Once there, you’ll be able to see up to four tabs at the top of the page, depending on the characteristics of your study:

  • Activities
  • Participants
  • Sentiment Analysis
  • AI UX Insights

Filtering Results

Filters allow you to focus on specific segments of your data and compare participant performance across different criteria. The filters you apply affect the information displayed across all activities within your study, and they remain active even when you switch between activities.

Screenshot of the Userlytics Metrics dashboard showing a Single Choice activity with a bar chart of participant responses. A red box and arrow highlight the Filter by: Participants option in the upper right corner, indicating where users can apply participant filters to the results.

Depending on the type of study, you may see the following filters at the top of the Analysis page:

  • Participants: Always available. Use this filter to select one or multiple participants and compare their results side by side.
  • Screeners: Available only if screener questions were included in your study. Use it to view responses that meet specific screener criteria.
  • Demographics: Available for PANEL studies only. Use it to filter participants by attributes such as age, gender, or location.

There are two types of filters in the Analysis section:

  • The top-right filters (Participants, Screeners, Demographics) apply to ALL results displayed in the dashboard.
  • The top-left “Responses” filter affects only the current activity’s metrics and visualizations.

When you apply a filter, the graphs, metrics, and response lists below update automatically to reflect your selections.

To download responses, locate the three-dot menu on the left side of the screen and you’ll see the option to download them as a CSV file.
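If you prefer to explore the exported data outside the platform, a minimal sketch like the one below loads the CSV into pandas and filters it to specific participants. The file name and column names here are assumptions for illustration; check your actual export for the exact headers.

```python
# Minimal sketch: load an exported responses CSV and filter by participant.
# "activity_responses.csv" and the column names are assumptions, not the exact export format.
import pandas as pd

df = pd.read_csv("activity_responses.csv")
subset = df[df["participant"].isin(["A. Thomson", "B. Rivera"])]
print(subset.head())
```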

Let’s review how to view and analyze metrics per activity type!

Activities

1. Scenario

The ‘Scenario’ activity is commonly used to show instructions to the participants. To visualize the results, simply click on the Scenario icon and the following screen will appear showing the average time on task per participant.

Screenshot of the Userlytics Metrics dashboard showing the Scenario activity. The interface displays participant time metrics, including average, maximum, and minimum completion times. A list of participants with their individual time spent and “Play” video buttons appears below. The top bar shows global filters for Demographics, Screener, and Participants.

The Scenario activity provides:

  • Average time all participants take to complete the activity.
  • Longest and fastest completion times. 
  • Participant list with individual completion times and play buttons to watch each participant’s activity clip, plus the option to sort the data in ascending or descending order. 

2. Task/Verbal

This activity type asks testers to complete an action without an associated performance metric. However, during setup in the Study Builder, you can enable optional self-report questions where testers indicate:

  • Success/Failure – Whether they completed the task successfully
  • Difficulty Level – A Single Ease Question (SEQ) rating on a scale from 1-7

To visualize responses, click on the Task/Verbal activity. 

Screenshot of the Userlytics Metrics dashboard showing a Task/Verbal activity with performance data. The screen displays average, maximum, and minimum task times, a donut chart indicating task success and failure, and a bar chart showing SEQ (Single Ease Question) scores from 1 to 7. Below, a participant list shows individual results, success status, SEQ scores, and “Play” buttons to view recordings.

The activity displays:

  • The average, max, and min response times
  • A donut chart showing the success/failure percentage
  • A bar chart showing SEQ difficulty scores (1-7)

Screenshot of the Userlytics Metrics dashboard highlighting the Task/Verbal activity view. Red boxes outline two main charts: a green and orange donut chart labeled Success/Failure, and a bar chart labeled SEQ (Single Ease Question) showing user ease-of-completion scores from 1 to 7. A red arrow points from the left-hand activity list to the visual data section, indicating the connection between the selected task and its metrics display.

Below the metrics, a participant list provides access to individual responses, time spent, and video clips. 

If you need to change how individual responses are categorized (success vs. failure), you can update these classifications by selecting the appropriate radio buttons for each response. The system will automatically recalculate all associated graphs and metrics based on your updated classifications.
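For reference, the headline numbers are simple aggregates and are easy to recompute from exported responses. A minimal sketch, with hypothetical field names and values, derives the success rate and average SEQ score:

```python
# Minimal sketch with hypothetical data: success rate and average SEQ score.
responses = [
    {"participant": "A. Thomson", "success": True,  "seq": 6},
    {"participant": "B. Rivera",  "success": True,  "seq": 7},
    {"participant": "C. Lee",     "success": False, "seq": 3},
]

success_rate = 100 * sum(r["success"] for r in responses) / len(responses)
avg_seq = sum(r["seq"] for r in responses) / len(responses)
print(f"Success rate: {success_rate:.0f}%  |  Average SEQ: {avg_seq:.1f}")
```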

3. Write In

The Write In activity collects open-ended text responses from participants, providing rich qualitative insights into their thoughts, opinions, and experiences.

At the top, view completion metrics displayed side by side, including the average, maximum and minimum response times recorded.

Below the metrics, a sortable table displays individual participant data including their written answer, time spent on the task and video recording.

Screenshot of the Userlytics Metrics dashboard showing the Write-In activity results. The interface displays average, maximum, and minimum completion times at the top, and a participant list below that includes written responses, timestamps, and “Play” buttons to view each video recording. The layout highlights qualitative feedback collected from participants.

Use the Participants filter at the top right to view responses from specific participants or groups. Click “Show All” to expand the full list.

This task is ideal for gathering detailed feedback, understanding user reasoning, collecting feature requests, or exploring participant perspectives in their own words. The combination of written responses and video playback provides context that purely quantitative data cannot capture.

4. Single Choice

For a single choice task, the system shows aggregated results.

Screenshot of the Userlytics Metrics dashboard displaying a Single Choice activity. The chart on the right shows participant responses with colored horizontal bars representing options such as “Attractive/beautiful design,” “Easy to find what I wanted,” and “Images and products I could see.” The left panel lists all available activities, and the top-right section shows participant filtering options.

By hovering over results, you can visualize both percentages and absolute numbers. While percentages provide a quick overview when you have a large sample, absolute values provide more context in smaller qualitative studies.

Screenshot of the Userlytics Single Choice activity results showing a bar chart of participant responses. A red box highlights a tooltip that appears when hovering over the green bar for “Attractive/beautiful design,” showing that 50% (3 out of 6 participants) selected this option. The tooltip provides percentage and participant count details.

You can also filter results by participant(s) using the Participants filter button on the upper right-hand side of the screen, or by specific answers using the “Responses” filter.

Screenshot of the Userlytics Single Choice activity results showing response data in a horizontal bar chart. Red boxes highlight two filter areas: the Responses filter within the activity panel and the Filter by: Participants option in the upper right corner. These filters allow users to refine data by specific responses or participant groups.

Additionally, below the graphs you can access a list showing each participant’s answer, completion time, and video response. Click the play button to watch any video clip.

Screenshot of the Userlytics Single Choice activity results showing the “List of participants” table highlighted in red. The table displays participant names, their selected responses, time spent completing the task, and “Play” buttons to view each participant’s session recording.

5. Multiple Choice

Similar to Single Choice, the system will display aggregated results for multiple choice tasks in a visual chart.

Screenshot of the Userlytics Metrics dashboard displaying a Multi Choice activity. The screen shows a horizontal bar chart summarizing participant selections such as “Easy to find what I wanted,” “Attractive/beautiful design,” and “Uncluttered.” Below, a participant list includes individual responses, time spent on the activity, and “Play” buttons to view each participant’s video recording.

To view data, hover over results to see both percentages (useful for large samples) and absolute numbers (better for smaller qualitative studies). Filter results as needed, as explained in the “Filtering Results” section above.

Below the chart, view a list of each participant’s answer, completion time, and video response. 

Screenshot of the Userlytics Multi Choice activity view within the Metrics dashboard. A horizontal bar chart displays participant selections such as “Easy to find what I wanted,” “Attractive/beautiful design,” “Uncluttered,” and “Colors.” Below the chart, a table lists participants with their selected responses, time spent, and “Play” buttons for viewing session recordings.

6. Rating

The Rating activity works similarly to the Single Choice activity, as participants select a single response. It’s a great way to gauge user sentiment on a website, interface, app, or prototype. 

The results are displayed as a color-coded chart that always sums to 100%, making trends easy to interpret at a glance.

Screenshot of the Userlytics Rating activity view in the Metrics dashboard. A horizontal bar chart shows participant ratings from “Very Innovative” to “Not innovative at all,” with an average rating of 4 out of 5 displayed in the center. The left sidebar lists other activity types, and the lower section includes a participant table with responses, time spent, and “Play” buttons for viewing session recordings.

Results can be filtered by Participants or by Responses as seen below. Simply select the responses you wish to filter by and they will be displayed in a bar chart format.

Screenshot of the Userlytics Rating activity with the Responses filter panel open on the right. The panel lists options including “Very Innovative,” “Somewhat innovative,” “Neutral,” “Not very innovative,” and “Not innovative at all,” all selected with checkmarks. Red boxes highlight the Responses filter button in the activity view and the open filter menu where users can apply or adjust visible response options.

In summary, this metric shows user sentiment, including:

  • A bar chart showing ratings distributions.
  • The average rating, offering a quick overview of sentiment trends.
  • A participant list showing each individual rating, the time taken to complete the task, and a play button to view the response.

This format makes it easy to identify satisfaction levels and usability issues at a glance.
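If you want to reproduce these numbers from raw responses yourself, a minimal sketch with hypothetical 1-5 ratings could look like this:

```python
# Minimal sketch with hypothetical data: average rating and rating distribution.
from collections import Counter

ratings = [5, 4, 4, 3, 5, 2]                 # one rating per participant
print("Average rating:", round(sum(ratings) / len(ratings), 1))
print("Distribution:", Counter(ratings))     # counts per rating value
```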

7. System Usability Scale (SUS)

The System Usability Scale (SUS) activity measures system usability through a standardized 10-question survey. Results are displayed in three sections:

1. Overall Score (Top)

  • Summary showing the overall SUS score (0-100 scale)
  • Acceptability rating indicating if the score falls within Acceptable, Marginal, or Not Acceptable ranges.
  • Adjective rating providing a qualitative descriptor (e.g., Best Imaginable)
  • A letter grade that converts the score to a familiar grading scale

Infographic explaining the System Usability Scale (SUS). It shows a 101-point scoring range divided into categories: Not Acceptable (0–39), Marginal Low (40–59), Marginal High (60–69), and Acceptable (70–100). The chart includes grade equivalents (F to A) and adjective ratings from “Worst Imaginable” to “Best Imaginable,” illustrating how SUS scores correspond to usability perceptions.

2. Response Distribution (Middle)

  • Horizontal bar charts displaying response distribution for each of the 10 SUS questions
  • Color-coded scale that ranges from Strongly Disagree (red) to Strongly Agree (blue), with Neutral (gray) in the middle
  • At-a-glance insights to quickly identify which statements received positive, negative, or mixed feedback

3. Individual Participant Data (Bottom)

  • Sortable table to view each participant’s responses to all 10 questions
  • Question columns showing individual ratings (1-5 scale) per question
  • Participant search to filter or locate specific testers

Screenshot of the Userlytics SUS (System Usability Scale) activity view. The screen shows an overall SUS score of 94.4, labeled as Acceptable, with the adjective Best Imaginable and letter grade A+. Below, a horizontal bar chart visualizes responses to 10 usability statements, ranging from Strongly Disagree to Strongly Agree. The lower section includes a participant table listing individual scores per question.
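The dashboard computes the overall score for you, but for reference, the standard SUS scoring rule is: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to land on the 0-100 scale. A minimal sketch with hypothetical answers:

```python
# Minimal sketch: standard SUS scoring from 10 answers on a 1-5 scale.
def sus_score(answers):
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)   # odd vs. even items
    return total * 2.5

participants = [
    [5, 1, 5, 2, 4, 1, 5, 1, 5, 2],   # hypothetical responses
    [4, 2, 4, 2, 5, 1, 4, 2, 4, 1],
]
scores = [sus_score(p) for p in participants]
print(scores, "average:", sum(scores) / len(scores))
```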

8. NPS

The NPS (Net Promoter Score) gauges how likely testers are to recommend the product they are testing, whether it’s a website, app, or another platform. The score reflects the overall sentiment towards the product based on their feedback. 

An infographic explaining how the NPS is calculated can be accessed by clicking the (i) icon next to the activity title. 

Once you click on the (i) icon, the following pop-up explaining the NPS score calculation will appear.

Infographic explaining Net Promoter Score (NPS) calculation. It shows that NPS = % Promoters − % Detractors. The scale ranges from 0 to 10, dividing respondents into Detractors (0–6), Passives (7–8), and Promoters (9–10). Each group is described: Detractors are dissatisfied customers who may harm growth, Passives are neutral and open to competitors, and Promoters are loyal advocates who drive positive word-of-mouth.

For the NPS, we provide the overall score as pictured in the pie chart below, as well as the breakdown between promoters, neutrals, and detractors, and an adjacent bar chart. Below the graphs, find individual participant information, including individual scores, time spent, and video playback.

Screenshot of the Userlytics NPS (Net Promoter Score) activity view in the Metrics dashboard. A circular chart displays an NPS score of 45, segmented by Promoters, Neutrals, and Detractors. A bar chart on the right shows the distribution of participant scores from 0 to 10. Below, a participant list includes individual NPS responses, labeled by category and color-coded, with corresponding time spent and “Play” buttons to view recordings.

Keep in mind:

  • A positive NPS indicates a strong base of loyal customers who are likely to recommend the product. 
  • A negative NPS suggests more detractors than promoters, pointing to potential dissatisfaction. 
  • An NPS of 0 means promoters and detractors are evenly balanced, signaling a neutral overall stance. 

The NPS is a useful metric to measure user satisfaction and loyalty. To boost the NPS, focus on turning Neutrals (7-8 scores) into Promoters (9-10) by improving the user experience, resolving pain points, and engaging with feedback.
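The score itself is easy to reproduce from raw responses using the formula in the infographic above. A minimal sketch with hypothetical 0-10 ratings:

```python
# Minimal sketch: NPS = % Promoters (9-10) - % Detractors (0-6); Passives (7-8) are not counted.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

scores = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]   # hypothetical participant ratings
print(nps(scores))  # 5 promoters, 2 detractors out of 10 -> NPS of 30
```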

9. Tree Testing

Tree Testing evaluates how intuitively users can find information within your site’s navigation structure. 

In the metrics section, you can access different views of your data, including: 

  1. Results: Success/failure rates and overall performance

The Results tab provides a high-level overview of task completion performance through two interactive donut charts. The Success chart displays the overall success rate as a percentage in the center, with the participant count shown below (e.g., 82/100). 

The chart is divided into Direct Success (dark green) for users who followed the expected path immediately, and Indirect Success (light green) for users who reached the correct answer through alternative routes. 

Screenshot of the Userlytics Tree Testing activity view in the Metrics dashboard. Two circular charts display 82% Success (including Direct and Indirect success) and 18% Failure rates. The interface tabs show options for Results, First Click, Final Selection, Paths, and Tree. Below, a participant table lists each user’s result type, visited paths, and time spent, with “Play” buttons to review session recordings.

Hover over each segment to view the exact percentage and participant count. The Failure chart (right) shows the overall failure rate as a percentage, with the total number of failed attempts. Below the charts, a sortable table displays individual participant results, including their outcome (Direct Success, Indirect Success, or Failure), first click, navigation path, completion time, and video playback option. Use the Responses filter as needed.

Screenshot of the Tree Testing activity view showing detailed success and failure tooltips. The Success donut chart displays 82% success, broken down into 75% Direct Success (75 out of 100 participants) and Indirect Success. The tooltip explains that direct success means users followed the expected path, while indirect success means they reached the goal through alternative routes. The Failure chart shows 18% failure (18 out of 100 participants). A participant table appears below with task paths and results.

  2. First Click: Analysis of users’ initial navigation choices

The First Click tab shows where participants clicked first when starting the task, helping identify which labels are more intuitive or misleading.

Screenshot of the Tree Testing activity in the Userlytics Metrics dashboard showing the First Click tab. Two bar charts display data on participant behavior — one labeled Visited First, showing which sections users clicked first (Computers, Furniture, Store) with corresponding percentages, and another labeled Visited During, showing how often each section was accessed during navigation. A participant table below lists individual responses, paths taken, and time spent, with “Play” buttons to view recordings.

The “Visited first” and “Visited during” metrics help you understand user navigation behaviors through the hierarchical structure you’ve set up.

Screenshot of the Tree Testing activity in the Userlytics dashboard, focusing on the First Click metrics view. Two tooltips are shown: one over Visited First, explaining that it represents the percentage of users who made their first click on a section (reflecting intuitiveness or appeal), and another over Visited During, showing the percentage of users who visited a section at any point during navigation. The chart displays menu sections such as Internet, Cell Phones & Plans, Help & Support, and Home Phone, with associated percentages.

Here’s how these percentages are calculated, assuming a panel of 100 participants.

Visited First

This metric indicates how many participants first clicked on a particular node when attempting to complete a task.

Formula:

Visited First % = (Number of participants who clicked this node first ÷ Total number of participants who attempted the task) × 100

Example:

If 25 out of 100 participants clicked on the node “Help Center” as their first action, then:

Visited First = (25 ÷ 100) × 100 = 25%

This means 25 out of 100 participants clicked the “Help Center” node first.

Visited During

This metric reflects how many participants clicked on a particular node at any point during their navigation path, not necessarily the first or final click.

Formula:

Visited During % = (Number of participants who visited this node at any point ÷ Total number of participants who attempted the task) × 100

Example:

If 60 out of 100 participants visited the “Pricing” node at any time while trying to complete the task, then:

Visited During = (60 ÷ 100) × 100 = 60%
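To make the two formulas concrete, here is a minimal Python sketch (the navigation paths are made up for illustration) that computes both percentages from each participant’s recorded path:

```python
# Minimal sketch with hypothetical data: "Visited First" vs. "Visited During".
paths = [
    ["Help & Support", "Knowledge base"],
    ["Internet", "Home internet plans (DSL)"],
    ["Help & Support", "Frequently asked questions"],
    ["Internet", "Pricing", "Home internet plans (DSL)"],
]

def visited_first(node):
    """% of participants whose first click was this node."""
    return 100 * sum(1 for p in paths if p and p[0] == node) / len(paths)

def visited_during(node):
    """% of participants who visited this node at any point."""
    return 100 * sum(1 for p in paths if node in p) / len(paths)

print(visited_first("Help & Support"))   # 50.0
print(visited_during("Internet"))        # 50.0
```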

Additional information:

  • These metrics are calculated per activity, so for multi-task tree tests, you’ll get different percentages for each task.
  • They help identify misleading labels, confusing hierarchy, or well-performing paths.
  • If participants drop out or skip the task, they’re generally excluded from the denominator.

  3. Final Selection: Where users ended their journey

The Final Selection tab shows which menu items participants chose as their final answer. The legend on the right identifies each menu item by color, making it easy to see where participants ended their navigation journey. Below the chart, a table lists each participant’s first click, complete path taken, and time spent. 

Screenshot of the Tree Testing activity’s Final Selection tab in the Userlytics dashboard.
A donut chart shows where participants made their final selection: Home internet plans (DSL) (50%), Pay as you go plans, Knowledge base, Frequently asked questions, and Others (2%). Tooltips display details for “Home internet plans (DSL)” and “Others.”
Below, a participant table lists users, their results (e.g., Direct or Indirect Success), the first section they visited, the path they took, and their time spent, with “Play” buttons for viewing recordings.

This view helps identify whether users are selecting the correct destination or ending up in wrong locations, revealing potential issues with menu structure or labeling. You may filter by participant(s) to isolate specific results.

  4. Paths: Complete navigation routes taken

The Paths tab displays a summary of all unique navigation routes taken by participants, showing how many users followed each specific path through the menu structure.

  5. Tree: Visual representation of the menu structure

The Tree tab displays the complete menu hierarchy, showing how many times each section and subsection was clicked during navigation and how many participants selected each item as their final answer.

Screenshot of the Tree Testing activity in the Userlytics dashboard, showing the Tree tab view.
The hierarchical menu structure is displayed with sections such as Help & Support, Knowledge base, Frequently asked questions, Cell Phones & Plans, and Internet on your cell phone.
The table includes columns for Times Visited and Final Selection, with color-coded indicators showing where users navigated successfully (green) or unsuccessfully (red). Below, a participant list shows results, visited paths, and playback options for each session.

10. Card Sorting

1. Cards

The first tab of results is the Cards tab, which provides an overview of how participants categorized each card in your study. You’ll see a sortable table displaying all cards, the categories participants sorted them into, and how often each category was used.

Cards sorted into fewer categories with high agreement indicate clear, intuitive groupings. Cards sorted into many different categories suggest confusion or ambiguity in labeling, signaling opportunities to refine your content structure.

Below the cards overview, access each participant’s complete sorting choices, time spent, and video recording to understand their reasoning and decision-making process. 

Screenshot of the Card Sorting activity view in the Userlytics dashboard, showing the Cards tab.
A list of cards such as Laptop, Books, Car, Mobile, Scarves, and Gloves is displayed, each with the number of times it was sorted and the categories it was placed into (e.g., Accessories, Technology, Science). The interface includes tabs for Categories, Result Matrix, Dendrogram, Similarity Matrix, and Recommendation, with participant filtering and search options at the bottom.

2. Categories 

The Categories tab shows which cards were placed in each category. Categories can be created by participants, predefined by researchers in the Study Builder, or a combination of both. 

Each category row displays the total number of cards it contains and lists all cards sorted into that category with their frequency counts (e.g., “4 (15)” means card “4” was placed in this category by 15 participants). Below, view individual participant responses and completion times to explore how each person organized the cards.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Categories tab.
A table lists categories such as Accessories, Computer, Technology, Science, Clothes, and Culture, along with the number of cards each contains and the cards most frequently sorted into them (e.g., Laptop, Books, Car, Mobile). Below, a participant list includes users, their response view buttons, and time spent on the activity.

3. Result Matrix 

The Result Matrix displays a color-coded heatmap showing how many participants sorted each card into each category. Rows represent cards, columns represent categories, and the intensity of the purple color indicates frequency:

  • Dark purple – The most-selected category for that card (highest agreement)
  • Medium purple – The second most-selected category (moderate agreement)
  • Light purple – The third most-selected category (lower agreement)

The numbers inside each cell show exactly how many participants placed that specific card in that category. This visualization quickly reveals strong patterns of agreement (dark cells with high numbers) and areas of confusion (cards spread across multiple categories with low numbers).

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Result Matrix tab.
A heatmap table displays how participants grouped cards like Laptops, Books, Car, Mobile, Mouse, Tablet, and Gloves into categories such as Accessories, Computer, Technology, Education, Science, Animal, and Clothes.
The cells are shaded in varying shades of purple to indicate frequency, with darker tones representing higher sorting frequency.
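If you export the raw sorting data, a matrix like this can be rebuilt with a pandas cross-tabulation. A minimal sketch (the column names and values below are hypothetical, not the exact export format):

```python
# Minimal sketch with hypothetical data: card-by-category counts, mirroring the Result Matrix.
import pandas as pd

responses = pd.DataFrame({
    "participant": ["P1", "P1", "P2", "P2", "P3", "P3"],
    "card":        ["Laptop", "Gloves", "Laptop", "Gloves", "Laptop", "Gloves"],
    "category":    ["Technology", "Clothes", "Computer", "Clothes", "Technology", "Accessories"],
})

matrix = pd.crosstab(responses["card"], responses["category"])
print(matrix)
# A purple heatmap similar to the dashboard: matrix.style.background_gradient(cmap="Purples")
```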

4. Dendrogram 

A brand-new hierarchical tree visualization that reveals how cards cluster together based on participant sorting patterns. Cards that were frequently grouped together appear closer in the tree structure, with branches showing the strength of their relationships.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Dendrogram tab.
A hierarchical dendrogram visualization appears, showing relationships between cards such as Falcon, Lion, Bobcat, Bear, Wolf, Coyote, Sheep, Rabbit, Cow, and Marmot. The chart uses purple connecting lines to represent grouping similarities based on participant responses. The left panel lists study sections including Metrics, Sessions, and Annotations.

This powerful visualization helps you identify natural groupings and understand which cards participants consistently associate with one another. The horizontal distance between branches indicates how strongly related the cards are, with shorter distances meaning stronger associations. Use this view to discover logical category structures that align with user mental models.
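Under the hood, this kind of chart is a hierarchical clustering of the cards. If you wanted to reproduce one from your own exported data, a minimal sketch with SciPy (the similarity values below are invented for illustration) could look like this:

```python
# Minimal sketch: hierarchical clustering of cards from a pairwise similarity matrix.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

cards = ["Falcon", "Lion", "Wolf", "Sheep", "Rabbit"]
# similarity[i][j]: fraction of participants who grouped cards i and j together (hypothetical)
similarity = np.array([
    [1.0, 0.8, 0.7, 0.2, 0.1],
    [0.8, 1.0, 0.9, 0.3, 0.2],
    [0.7, 0.9, 1.0, 0.2, 0.1],
    [0.2, 0.3, 0.2, 1.0, 0.8],
    [0.1, 0.2, 0.1, 0.8, 1.0],
])

distance = 1.0 - similarity            # cards always grouped together -> distance 0
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance), method="average")

dendrogram(tree, labels=cards, orientation="left")
plt.tight_layout()
plt.show()
```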

5. Similarity Matrix

The Similarity Matrix is a color-coded grid that shows how frequently participants grouped pairs of cards together. 

Each cell displays the percentage of participants who sorted both cards into the same category. Darker purple cells indicate stronger agreement, meaning most participants grouped those cards together, while lighter cells show weaker associations. 

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Similarity Matrix tab.
A triangular heatmap displays the similarity percentages between cards such as Laptop, Books, Car, Mobile, Scarves, Gloves, Tablet, Mouse, Microphone, and Socks.
The cells are shaded from light to dark purple to represent the degree of similarity, with higher percentages like 83% and 100% in darker shades. The left panel shows study navigation options including Metrics, Sessions, and Annotations.

This matrix helps you identify which cards have strong natural relationships in users’ minds (high percentages) and which cards don’t clearly belong together (low percentages). Use this data to validate potential category groupings and spot cards that consistently cluster together across participants.
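The percentages themselves are simple co-occurrence counts. A minimal sketch with made-up sorting data that computes how often each pair of cards was grouped into the same category:

```python
# Minimal sketch with hypothetical data: % of participants grouping each pair of cards together.
from itertools import combinations

# One dict per participant: card -> category they placed it in
sorts = [
    {"Laptop": "Technology", "Mouse": "Technology", "Scarves": "Clothes", "Gloves": "Clothes"},
    {"Laptop": "Computer",   "Mouse": "Computer",   "Scarves": "Accessories", "Gloves": "Clothes"},
    {"Laptop": "Technology", "Mouse": "Accessories", "Scarves": "Clothes", "Gloves": "Clothes"},
]

cards = sorted({card for sort in sorts for card in sort})
for a, b in combinations(cards, 2):
    together = sum(1 for s in sorts if s.get(a) is not None and s.get(a) == s.get(b))
    print(f"{a} / {b}: {100 * together / len(sorts):.0f}% grouped together")
```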

6. Recommendation 

The Recommendation tab provides smart, data-driven category suggestions based on participant sorting patterns. Cards are automatically tagged as Recommended (green indicators) when there’s strong agreement among participants about their category placement.

At the top, you’ll see a summary showing the total number of cards and categories, along with a count of recommended cards. Use the “Show empty categories” toggle to view or hide categories with no card assignments. This intelligent analysis helps you quickly identify which category structures have strong user consensus, making it easier to finalize your information architecture with confidence.

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The interface displays two automatically suggested groups: Numbers and Letters. Each category contains six cards — Numbers include 1 to 6, and Letters include A to F — all marked with green dots indicating recommended groupings. The panel on the left shows study navigation options such as Metrics, Sessions, and Annotations, and a participant list appears at the bottom showing response time details.

When you hover over icons or indicators in the Recommendation tab, small black tooltips appear to explain their meaning. 

For example, hovering over the Recommended tag displays the message “Based on users’ feedback, these are the cards we suggest including in this category.” 

This indicates that there is strong participant agreement on where each card belongs. Another tooltip appears near the category toggle, stating “All categories are displayed,” confirming that all available categories are currently visible. 

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The interface displays three automatically generated suggested categories based on participant feedback: My booking with two cards, Book & Prepare with two cards, and Accessories with one card. Each card is marked with a green dot indicating a recommended placement.
A tooltip explains that recommendations are based on user feedback and data analysis. The left panel lists study sections such as Metrics, Sessions, and Annotations, while the bottom section shows participant details with category counts and time spent.

The following view shows the Recommendation tab in a more typical scenario where cards have mixed classification statuses. The system automatically tags each card based on participant sorting behavior. Cards marked with a yellow indicator are tagged as Needs Review, suggesting that user responses were inconsistent or unclear. When hovering over the yellow tag, a tooltip appears that reads: “User responses suggest that these cards are unclear. It may be worth considering a different category or renaming the card.” This helps users understand why certain cards may require attention or adjustments.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Recommendation tab.
Three categories are shown: Book & Prepare with three cards, My booking with four cards, and Accessories, which is empty and labeled Empty Category.
Green dots mark recommended cards, while yellow dots mark cards that Need Review. A tooltip explains that unclear cards may need renaming or reassignment to a different category. The left panel lists study navigation items like Metrics, Sessions, and Annotations, while the participant list with time spent is visible at the bottom.

At the top, a summary shows the total number of cards and categories, along with a count of cards under each status. The Show empty categories toggle is on, so even empty categories (like “Accessories”) are displayed. This provides a complete overview of all possible categories, helping users see both assigned and unassigned cards in one place.

The following view shows the Recommendation tab with all three card statuses: Recommended (green), Needs Review (yellow), and Uncategorized (red). These tags indicate how consistently participants sorted each card.

Hovering over each label displays a tooltip explaining its meaning. For example, the Uncategorized tooltip says the system couldn’t place those cards into any category due to low participant agreement.

At the top, you’ll see the total number of cards and categories, along with counts for each status. Through the Show empty categories toggle you can see all categories being displayed, including the empty ones.

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The display includes several categorized groups — Book & Prepare, My booking, Accessories, Clothes, Technology, and Uncategorized.
Each category contains cards labeled numerically (e.g., Card 01–Card 06) with colored status indicators: green for Recommended, yellow for Needs Review, and red for Uncategorized. A tooltip explains that uncategorized cards lack user consensus. The left sidebar lists navigation options like Metrics, Sessions, and Annotations, and participant details are visible at the bottom.

Only six cards are shown per category by default. To see all cards in a category, click the “Show More” button and the full list for that category will appear.

Screenshot of the Card Sorting activity in the Userlytics dashboard, under the Recommendation tab.
The screen displays seven categories—Book & Prepare, My booking, Accessories, Clothes, Technology, and Uncategorized. Each contains a list of cards labeled numerically (e.g., Card 01–Card 10) marked with color indicators: green for Recommended, yellow for Needs Review, and red for Uncategorized.
A red arrow at the bottom right points to a “Show Less” button. The left panel lists navigation options like Metrics, Sessions, and Annotations, while participant information is visible at the bottom.

Lastly, this view shows the Recommendation tab when all cards are classified as Uncategorized (red). This means the system was unable to assign any cards to a category due to low participant agreement. The tooltip explains: “The system was unable to place these cards into any category due to a lack of user consensus.”

Screenshot of the Card Sorting activity in the Userlytics dashboard under the Recommendation tab.
The screen shows that all 40 cards have been grouped into a single Uncategorized section, marked with red icons. A toggle labeled Show empty categories is set to off. The participant list at the bottom displays names, number of categories created, and time spent per participant. The left sidebar contains navigation options such as Metrics, Sessions, and Annotations, and the header shows filters for Demographics, Screener, and Participants.

This scenario typically indicates that participants sorted the cards in very different ways, with little overlap between their choices. It signals that the categories or card labels may need to be reviewed or refined before finalizing the information architecture.

11. Matrix

The Matrix activity measures user satisfaction across multiple dimensions, allowing you to evaluate several aspects (such as product features, pricing, experience, or support) simultaneously using a consistent rating scale.

Userlytics dashboard displaying a Matrix question heatmap showing participant satisfaction levels across different product aspects.

The color-coded matrix shows response distribution across a 5-point satisfaction scale. Darker or highlighted areas indicate features that participants find most valuable, offering insights into areas of high engagement. Variations in response patterns across rows reveal differences in user preferences, helping you identify opportunities for improvement or customization.

Below the matrix, a sortable table displays each participant’s ratings for all questions and completion time. This allows you to dive into individual responses and understand the reasoning behind specific ratings. 

Screenshot of the Matrix activity screen in the Userlytics dashboard, displaying user satisfaction results. The central heatmap visualizes satisfaction levels across five categories — “The product meets my needs,” “Your experience at the sign-up process,” “The products offered,” “Price,” and “Interaction with the support team.”
Percentages are shown in purple gradient cells, ranging from Very Satisfied to Very Dissatisfied.
In the top-right corner, a dropdown menu is expanded from the Modify Participants button, showing options: Download (CSV), Clone, Move, and Edit Study.
The left sidebar includes navigation items such as Metrics, Sessions, and Annotations, and the participant list with response details is visible below the chart.

To download responses, locate the three-dot menu on the left side of the screen and you’ll see the option to download them as a CSV file.

12. X Second Test

The X Second Test evaluates first impressions by displaying an image or design for a limited time, then asking participants follow-up questions to assess what they recall.

The interface shows:

  • The design or image shown.
  • Viewing duration.
  • The participant list with time spent and video playback for each session.

This is ideal for testing visual hierarchy and immediate messaging.

Screenshot of the X Second Test activity in the Userlytics dashboard. The interface displays an image preview in the center, representing the design shown to participants for a short duration (10 seconds).
A label below the image reads “Image viewing time: 00:10.”
The left panel lists other activity types such as Scenario, Single Choice, Tree Testing, and Matrix.
At the bottom, the participant table lists respondents (e.g., A. Thomson) with corresponding Time Spent and a Play button for reviewing individual session recordings.
The top section includes navigation filters for Demographics, Screener, and Participants.

13. First Click

The First Click activity tracks where users click first in an interface, evaluating navigation and linking structure.

Toggle between two visualization modes:

Heat Map: The results are displayed as a heatmap showing the areas where participants clicked most frequently. Heat zones are color-coded to represent click density, with warmer colors (red, orange) indicating higher concentrations of clicks and cooler colors (green, blue) showing lower activity.

Screenshot of the First Click activity in the Userlytics dashboard showing a heat map of user interactions. The central image displays a webpage with a banner reading “Understand your users’ emotions behind every click, tap, and swipe” and a photo of a person holding two gaming controllers.
Colored heat spots (mainly red, yellow, and green) indicate where participants clicked most — concentrated near the top navigation bar and the Schedule a Free Demo button.

Click Map: Displays precise individual click positions overlaid on the design.

Screenshot of the Userlytics platform showing a “First Click” task within a usability study. The interface displays question types in a sidebar and a click map highlighting where participants clicked on the Userlytics homepage.

Below the visualization, a sortable table lists each participant with their time to first click.
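If you would rather build your own overlay from exported click coordinates, a minimal sketch with matplotlib (the file name and coordinates below are hypothetical) plots a click map over the tested design; swapping the scatter for a 2D histogram gives a rough heat-map view:

```python
# Minimal sketch with hypothetical data: overlay first-click positions on the tested design.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

img = mpimg.imread("homepage.png")          # the design shown to participants
clicks_x = [420, 435, 410, 900, 915]        # first-click coordinates in pixels
clicks_y = [120, 130, 118, 640, 655]

plt.imshow(img)
plt.scatter(clicks_x, clicks_y, s=80, c="red", alpha=0.6)   # click map
# For a density view, replace the scatter with: plt.hist2d(clicks_x, clicks_y, bins=40, cmap="hot")
plt.axis("off")
plt.show()
```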

Key Insights:

If first clicks concentrate on the correct element (e.g., a primary CTA or navigation item), your visual hierarchy is working. Scattered clicks or concentration in unexpected areas signal potential issues with visual prominence, labeling, or layout that may require adjustment.

Ideal for validating that users navigate intuitively from the start and that the most important elements stand out effectively.

Participants

The Participants tab provides a comprehensive view of all study participants, with detailed information about each individual’s session and responses. Head to the three-dot menu on the right-hand side of the screen to edit participants’ names, download data as a CSV file, and more.

Userlytics dashboard showing participant details for A. Thomson in the New Graphic Charts study, including session info (April 2, 2024, iOS, 12:05 min) and task responses with playback options.

Click any participant card to view their detailed information, such as screener answers, activity responses, and sentiment analysis (if available). You’ll have the option to download individual data by clicking on the three-dot menu on the left.

Userlytics Participants dashboard showing details for A. Thomson with an open dropdown menu offering options to Edit participant name, Download (CSV), and Go to Participant Session.

View overall sentiment breakdown with a donut chart showing Positive, Neutral, and Negative sentiment distribution, plus an average sentiment score.

Userlytics dashboard showing participant A. Thomson’s sentiment analysis. A doughnut chart indicates 50% positive sentiment, with remaining responses neutral or negative, and an average sentiment score of 50.0 displayed below.

Use the Demographics, Screener, and Participants filters at the top right to narrow down the participant list by specific criteria.

Sentiment Analysis

The Sentiment Analysis tab uses AI to analyze participant emotions and attitudes expressed during their sessions, providing insights into how users feel about your product or experience. 

Generating Sentiment Analysis:

Sentiment analysis is not automatically generated for all studies. When you navigate to this tab, you’ll see one of two states:

  1. Not Yet Generated – A message states “The sentiment analysis has not been generated yet” with a blue Request Sentiment Analysis button
  2. Analysis Ready – Sentiment data is displayed across all participants

Requesting Analysis:

Click Request Sentiment Analysis to begin. 

Userlytics participant view for A. Thomson displaying demographic details and system specs. The Sentiment Analysis tab shows a message stating that the sentiment analysis has not been generated yet, with an option to Request Sentiment Analysis.

A prompt will appear with the following options:

  • Email notification toggle – Enable to receive an email when the analysis is complete
  • Remaining transcriptions counter – Shows how many transcription credits you have available (e.g., “15/40”)
  • Plan limitation notice – If you’re running low on transcriptions, you’ll see “Your plan is limited. Do you need more transcriptions?” with a link to Contact Sales to increase the limit.

Click Start to begin processing, or Cancel to return without generating the analysis.

Popup window titled “Request Sentiment Analysis.” It shows an option to enable email notifications, a progress bar indicating 15 of 40 remaining transcriptions, and a note that the user’s plan is limited with a link to Contact Sales. Buttons at the bottom allow Cancel or Start.

Once generated, you can view aggregated insights across all participants, including:

  • Participants Analyzed – The percentage of participants with a completed analysis
  • Donut chart – Sentiment distribution (green = positive, gray = neutral, red = negative)
  • Average Score – The overall sentiment score with indicators

Userlytics dashboard displaying the Sentiment Analysis section for a study titled All Pan Unmod Test. A circular chart shows overall insights with green (positive), gray (neutral), and red (negative) segments. The analysis indicates an average score of 52.5, with all 20 participants analyzed. A list below displays individual participant sentiment bars combining positive, neutral, and negative proportions.

Each participant row displays a horizontal bar with color-coded segments showing their positive (green), neutral (gray), and negative (red) sentiment proportions, plus an overall emoji indicator. Click any participant for a detailed breakdown.

Use the top-right filters to analyze sentiment by Demographics, Screeners, or specific Participants.

AI UX Insights

With advanced AI algorithms, this feature is designed to analyze the qualitative video recordings of a UX study to summarize the key, actionable insights. 

This feature eliminates the need for manual review of session recordings and saves a great deal of time, which can be repurposed to review and analyze only the most significant video recording moments. 

Below, we’ll share how to quickly and easily generate an AI UX Analysis report from your UX studies.

Understanding Your UX AI Insights Report

Once you navigate to the UX AI Insights tab and transcriptions have been processed, you’ll see AI-generated insights from your study.

The UX AI Insights feature analyzes video and audio transcriptions to extract actionable insights such as:

  • Common themes across participant responses
  • Sentiment patterns and emotional reactions
  • Key quotes and critical moments
  • Behavioral insights

Each insight card highlights important themes, patterns, or observations discovered by analyzing the transcriptions. Longer insights may span multiple lines to provide complete context.

Userlytics UX AI Insights dashboard showing automatically generated insights from usability test data. The section titled Overall Insights lists five AI-generated observations about interface design and usability, including layout stretching on multi-monitor setups, style consistency, visual hierarchy, loading performance, and menu structure efficiency. Each insight card includes a small blue AI icon and a copy button for easy sharing.

Next to UX AI Insights, you can see how many participants have been analyzed (e.g., “4/6” means 4 out of 6 participants were analyzed).

Note that the report is based on the participants available at the time you click “Generate UX AI Insights.” If you add new participants to your study after generating insights, you’ll see a notification on screen: “New study results are available.” Simply click the “Refresh UX AI Insights” button to get an updated report that includes the latest participants.

Userlytics UX AI Insights dashboard showing a banner at the top with a blue button labeled “Refresh UX AI Insights” and a message reading “New study results available – Refresh your AI UX Insights.” Below, several AI-generated insights are listed, highlighting user feedback about navigation, onboarding flow, logo behavior, and visual hierarchy improvements.

The Copy All button (on the top right) exports all insights to your clipboard for easy sharing or documentation.

Userlytics UX AI Insights dashboard showing a tooltip that says “This report is based on 4 transcriptions from 8 participants.”

These insights are actionable starting points for product decisions, design iterations, and prioritizing improvements based on real user feedback.

Userlytics UX AI Insights dashboard with a tooltip explaining the feature. The tooltip reads: “These insights highlight patterns, opportunities, and areas of improvement derived from user interactions. Use them to guide decisions and enhance your product experience.” It also includes a link labeled “Learn more about how this works.” Below, the Overall Insights section displays several AI-generated insights.

Getting Started

When you first navigate to the UX AI Insights tab, you may see an empty screen with the following message: 

Userlytics UX AI Insights dashboard showing an empty state message. The page displays a gray inbox-style illustration with text stating, “It looks like some transcriptions are missing to generate the UX AI Insights.” Below, there’s an explanation that at least one session with a transcription is required to enable AI analysis and generate insights, followed by a link labeled “Learn more about how this works.”

This occurs because the AI requires transcriptions to generate insights. 

If no recordings have been transcribed, the AI has no input text to analyze, so the section remains empty.

To activate and populate this section, follow these steps.

  1. Generate transcriptions for Video or Audio recordings:
    • Go to your sessions.
    • Go to transcriptions.
    • Request the transcription.
  2. Wait for processing to complete:
    • Transcriptions may take a few minutes depending on the length of recordings.
    • Once complete, return to the “UX AI Insights” tab and click on the “Generate UX AI Insights” button to get results.

It may take a few minutes for insights to be ready. 

Userlytics UX AI Insights dashboard showing a loading screen with the message “Generating UX AI Insights...” and a blue progress bar above it. The accompanying text reads, “We’re generating your UX AI Insights report. This should only take a moment. Thank you for your patience!” followed by a link labeled “Learn more about how this works.”

Note: Transcription and AI analysis features consume account credits. Check your remaining credits in account settings or contact sales for additional capacity.

If you have any questions regarding the Metrics section, reach out to your account manager for support.
