
Different Activities Available in the Userlytics Study Builder

With Userlytics’ easy-to-use test builder, you can create custom UX tests for your own participants or for testers from our global panel of over 2 million.

The Activities tab on your test builder allows you to choose from a variety of different task and question types for your usability test.

Each task comes with a generic template that you can adjust and edit as you please. Strategically choosing tasks that will help draw the insights you are looking for is key to conducting a successful usability study.

Userlytics Study Builder - Activities

Below, we go over 12 different types of tasks and how they can help you get the UX insights you are looking for.

Userlytics Study Builder - Add Activity/Questions

Scenario

The “Scenario” task sets the stage for your test by asking participants to imagine they are doing something specific.

For example, if you are testing an e-commerce website for pet products, the scenario task might be: “Imagine you are browsing for dog food made from natural ingredients.” A well-written scenario helps participants provide better feedback about your product.

In addition to setting the stage, the scenario task typically reminds participants to speak out loud throughout the test and share their thoughts. This provides valuable insights and helps you draw powerful conclusions about your product’s overall usability.

Userlytics Study Builder - Scenario Task

Task / Verbal Response

The “Task / Verbal Response” activity prompts your participant either to answer a question out loud or to perform an action that can be measured. Which one should you choose?

Verbal Response (recommended in most cases): Participants can answer questions aloud while following the instructions alongside the digital asset.

Task: Choose this activity type when it’s essential that participants start the task only after thoroughly reading the instructions. This approach is useful for accurately measuring the time taken to complete the task itself, excluding the time spent reading the instructions.

Userlytics Study Builder - Task / Verbal Response

When you select the ‘task’ activity type, the process for participants is as follows:

  1. Read the Question: Participants will first be shown the task question in full screen. 
  2. Click Start: They must click the “Start” button to begin.
  3. Minimized Task Box: The task box or instruction box will be minimized after clicking “Start.”
  4. Full-Screen Prototype: Participants will then see the prototype/asset in full screen, ensuring they read the instructions first and have the full screen available for task completion. 
  5. Proceed to the Next Task: Once they have completed the task, participants expand the task box and click “Next” to move to the next question.

Using this method ensures participants are well-prepared before starting the task and that the recorded time accurately reflects their task performance. 

Single / Multiple Choice

The “Single / Multiple Choice” task gives you the option of:

  1. Creating questions with only one correct answer.
  2. Asking checkbox-style questions where the participant can select more than one answer.

Single-choice questions are useful for determining if an aspect of your platform or prototype is clear to the consumer. With only one correct answer and the rest serving as distractors, participants must fully understand the content to select the correct option. Each participant can provide only one response to a single-choice question.

Multiple-choice questions are effective for gaining insights into the overall sentiments of your audience. If your results show that two or three answer choices are more frequently selected, it can indicate which aspects of your platform are successful and which may need improvement. For multiple-choice questions, participants can choose from one up to the total number of specified options. For instance, if you create a multiple-choice question with five possible answers, participant responses might include one, two, three, four, or all five of those choices.

Userlytics Study Builder - Single / Multiple Choice

You will also notice that you have the option to use question logic, enabling a tester to either advance or be disqualified from a test based on your requirements.

Rating

“Rating” questions use a scale to measure opinions in a group.

We use rating-scale questions to measure attitudes, beliefs, preferences, and self-reported behavior. This data helps us understand how users see our product or service.

A common rating task is asking participants to rate an aspect of your website, app, or prototype on a scale, with the lowest number being the worst and the highest number being the best.

Rating questions can have as many answer choices and display values (numbers, words, etc.) as the test creator wants. For example, with five answer choices, each participant will give one response between “1” and “5.”

Userlytics Study Builder - Rating

Write-in Response

“Write-in Response” tasks let you ask the participant an open-ended question and get a written answer. Some participants may feel more comfortable answering honestly in writing than with verbal prompts.

Userlytics Study Builder - Write-in Response

Net Promoter Score (NPS)

The “Net Promoter Score (NPS)” task measures the percentage of customers who would recommend a company, product, or service to a friend or colleague. According to Netpromoter.com, you calculate your NPS using answers to a key question on a 0-10 scale: How likely are you to recommend [brand] to a friend or colleague?

Respondents are grouped as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, driving growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are open to competitive offers.
  • Detractors (score 0-6) are unhappy customers who can harm your brand and slow growth through negative word-of-mouth.

Userlytics Study Builder - Net Promoter Score

For this task, each participant’s answer is a number between 0 and 10. Subtracting the percentage of Detractors from the percentage of Promoters gives the Net Promoter Score (NPS). This score can range from -100 (if all customers are Detractors) to 100 (if all customers are Promoters).

If all customers give a score of 6 or lower, the NPS is -100. If all customers give a score of 9 or 10, the NPS is 100. The NPS helps you determine if you have correctly identified your customer’s needs in your prototype or platform.
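The arithmetic described above can be sketched in a few lines of Python. The scores list here is hypothetical sample data, not a Userlytics export; the platform computes NPS for you automatically:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # scores 9-10
    detractors = sum(1 for s in scores if s <= 6)  # scores 0-6
    # Passives (7-8) count toward the total but neither add nor subtract.
    return 100 * (promoters - detractors) / len(scores)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
scores = [10, 9, 9, 10, 9, 7, 8, 7, 5, 3]
print(nps(scores))  # 50% promoters - 20% detractors = 30.0
```

Note how the two extremes work out: all detractors yields -100, all promoters yields 100, matching the range described above.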

Userlytics Study Builder - NPS

System Usability Scale (SUS)

The “System Usability Scale (SUS)” task is a quick, reliable tool for measuring product usability. It includes a 10-item questionnaire with five response options ranging from “Strongly agree” to “Strongly disagree”. SUS can evaluate various products and services, including hardware, software, mobile devices, websites, and applications. Our platform automatically generates the SUS questionnaire, allowing you to specify the type of asset your participants are testing.

Benefits of using SUS include:

  • It is easy to administer.
  • It provides reliable results even with small sample sizes.
  • It is valid and can differentiate between usable and unusable systems.

SUS questions have five answer options, with “1” being the most negative response and “5” being the most positive. Each participant will provide one response per question between “1” and “5.”

To calculate the cumulative SUS score:

  1. Convert the scale into numbers for each of the 10 questions:
  • Strongly Disagree: 1 point
  • Disagree: 2 points
  • Neutral: 3 points
  • Agree: 4 points
  • Strongly Agree: 5 points
  2. Calculate:
  • X = Sum of the points for all odd-numbered questions – 5
  • Y = 25 – Sum of the points for all even-numbered questions
  • SUS Score = (X + Y) x 2.5

The rationale is intuitive: each question contributes up to 10 points, so the maximum possible score is 100.
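The standard SUS arithmetic (odd items contribute score − 1, even items contribute 5 − score, summed and multiplied by 2.5) translates directly to code. A minimal sketch using hypothetical responses; the platform scores SUS for you automatically:

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses (item 1 first)."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    odd_sum = sum(responses[0::2])   # items 1, 3, 5, 7, 9 (positive wording)
    even_sum = sum(responses[1::2])  # items 2, 4, 6, 8, 10 (negative wording)
    x = odd_sum - 5                  # each odd item contributes (score - 1)
    y = 25 - even_sum                # each even item contributes (5 - score)
    return (x + y) * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case: 100.0
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A uniformly "Neutral" response sheet ([3] * 10) scores 50.0, the midpoint of the scale.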

The average SUS score is 68, meaning a score of 68 places you at the 50th percentile. Here is a general guideline for interpreting SUS scores:

Userlytics Study Builder - System Usability Scale

Your SUS task will appear to your participants as follows:

Userlytics Study Builder - How SUS task appears to participants

Card Sorting

Card sorting is a popular method for understanding the user’s mental model. Instead of organizing a website by your corporate structure, you base it on how users think by having them sort items into categories.

You can create three different types of card sorting tasks:

  1. Open Card Sort: Participants create and name their own categories.
  2. Closed Card Sort: You predetermine the categories for sorting.
  3. Hybrid Card Sort: You predetermine some categories and allow participants to create additional categories if they wish.

Userlytics Study Builder - Card Sorting

Your participants’ card sorting results will be available in the “metrics” section of your Userlytics dashboard. This section includes six parts: Cards, Categories, Result Matrix, Dendrogram, Similarity Matrix, and Recommendation.

1. Cards

The first tab of results is the Cards tab, which provides an overview of how participants categorized each card in your study. You’ll see a sortable table that displays all cards, the categories participants sorted them into, and how often each card landed in each category.

Cards sorted into fewer categories with high agreement indicate clear, intuitive groupings. Cards sorted into many different categories suggest confusion or ambiguity in labeling, signaling opportunities to refine your content structure.

Below the cards overview, access each participant’s complete sorting choices, time spent, and video recording to understand their reasoning and decision-making process. 

Screenshot of the Card Sorting activity view in the Userlytics dashboard, showing the Cards tab.
A list of cards such as Laptop, Books, Car, Mobile, Scarves, and Gloves is displayed, each with the number of times it was sorted and the categories it was placed into (e.g., Accessories, Technology, Science). The interface includes tabs for Categories, Result Matrix, Dendrogram, Similarity Matrix, and Recommendation, with participant filtering and search options at the bottom.

2. Categories 

The Categories tab shows which cards were placed in each category. Categories can be created by participants, predefined by researchers in the Study Builder, or a combination of both. 

Each category row displays the total number of cards it contains and lists all cards sorted into that category with their frequency counts (e.g., “4 (15)” means card “4” was placed in this category by 15 participants). Below, view individual participant responses and completion times to explore how each person organized the cards.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Categories tab.
A table lists categories such as Accessories, Computer, Technology, Science, Clothes, and Culture, along with the number of cards each contains and the cards most frequently sorted into them (e.g., Laptop, Books, Car, Mobile). Below, a participant list includes users, their response view buttons, and time spent on the activity.

3. Result Matrix 

The Result Matrix displays a color-coded heatmap showing how many participants sorted each card into each category. Rows represent cards, columns represent categories, and the intensity of the purple color indicates frequency:

  • Dark purple – the most frequent placement for that card (highest agreement)
  • Medium purple – a moderately frequent placement (moderate agreement)
  • Light purple – an infrequent placement (lower agreement)

The numbers inside each cell show exactly how many participants placed that specific card in that category. This visualization quickly reveals strong patterns of agreement (dark cells with high numbers) and areas of confusion (cards spread across multiple categories with low numbers).
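Under the hood, the matrix is a tally of card-to-category placements. A minimal sketch, assuming a hypothetical list of per-participant sorts (not the actual Userlytics data format):

```python
from collections import Counter

# Hypothetical raw sorts: one dict per participant, mapping card -> category.
sorts = [
    {"Laptop": "Technology", "Books": "Culture", "Gloves": "Clothes"},
    {"Laptop": "Technology", "Books": "Culture", "Gloves": "Accessories"},
    {"Laptop": "Computer",   "Books": "Culture", "Gloves": "Clothes"},
]

# Rows are cards, columns are categories; each cell counts placements.
matrix = {}
for sort in sorts:
    for card, category in sort.items():
        matrix.setdefault(card, Counter())[category] += 1

for card, counts in matrix.items():
    print(card, dict(counts))
# Books lands in one category for every participant (high agreement);
# Laptop and Gloves are split across two (lower agreement).
```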

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Result Matrix tab.
A heatmap table displays how participants grouped cards like Laptops, Books, Car, Mobile, Mouse, Tablet, and Gloves into categories such as Accessories, Computer, Technology, Education, Science, Animal, and Clothes.
The cells are shaded in varying shades of purple to indicate frequency, with darker tones representing higher sorting frequency.

4. Dendrogram 

The Dendrogram is a hierarchical tree visualization that reveals how cards cluster together based on participant sorting patterns. Cards that were frequently grouped together appear closer in the tree structure, with branches showing the strength of their relationships.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Dendrogram tab.
A hierarchical dendrogram visualization appears, showing relationships between cards such as Falcon, Lion, Bobcat, Bear, Wolf, Coyote, Sheep, Rabbit, Cow, and Marmot. The chart uses purple connecting lines to represent grouping similarities based on participant responses. The left panel lists study sections including Metrics, Sessions, and Annotations.

This powerful visualization helps you identify natural groupings and understand which cards participants consistently associate with one another. The horizontal distance between branches indicates how strongly related the cards are: shorter distances mean stronger associations. Use this view to discover logical category structures that align with user mental models.
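One common way such a tree is built is single-linkage agglomerative clustering on card co-occurrence. The sketch below uses hypothetical sort data and illustrates the general technique, not Userlytics’ actual algorithm:

```python
from itertools import combinations

# Hypothetical sorts: each participant's categories mapped to sets of cards.
sorts = [
    {"Predators": {"Lion", "Wolf", "Bear"}, "Prey": {"Rabbit", "Sheep"}},
    {"Wild": {"Lion", "Wolf", "Bear", "Rabbit"}, "Farm": {"Sheep"}},
    {"Carnivores": {"Lion", "Wolf"}, "Others": {"Bear", "Rabbit", "Sheep"}},
]
cards = sorted({card for s in sorts for group in s.values() for card in group})

def together(a, b):
    """Fraction of participants who placed cards a and b in the same category."""
    return sum(
        any(a in group and b in group for group in s.values()) for s in sorts
    ) / len(sorts)

# Single-linkage agglomerative merging: repeatedly join the two clusters
# whose closest members co-occur most often. The merge order mirrors how
# a dendrogram joins branches bottom-up.
clusters = [{c} for c in cards]
merge_order = []
while len(clusters) > 1:
    i, j = max(
        combinations(range(len(clusters)), 2),
        key=lambda ij: max(
            together(a, b) for a in clusters[ij[0]] for b in clusters[ij[1]]
        ),
    )
    merged = clusters[i] | clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    merge_order.append(sorted(merged))

print(merge_order[0])  # the strongest pairing joins first: ['Lion', 'Wolf']
```

Real tools (e.g. scipy's `cluster.hierarchy`) do the same thing more efficiently and also record merge heights for drawing the branches.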

5. Similarity Matrix

The Similarity Matrix is a color-coded grid that shows how frequently participants grouped pairs of cards together. 

Each cell displays the percentage of participants who sorted both cards into the same category. Darker purple cells indicate stronger agreement, meaning most participants grouped those cards together, while lighter cells show weaker associations. 

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Similarity Matrix tab.
A triangular heatmap displays the similarity percentages between cards such as Laptop, Books, Car, Mobile, Scarves, Gloves, Tablet, Mouse, Microphone, and Socks.
The cells are shaded from light to dark purple to represent the degree of similarity, with higher percentages like 83% and 100% in darker shades. The left panel shows study navigation options including Metrics, Sessions, and Annotations.

This matrix helps you identify which cards have strong natural relationships in users’ minds (high percentages) and which cards don’t clearly belong together (low percentages). Use this data to validate potential category groupings and spot cards that consistently cluster together across participants.
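The percentages in the matrix can be reproduced from raw sorts with a simple pairwise count. A sketch with hypothetical data (not the platform’s internal format):

```python
from itertools import combinations

# Hypothetical sorts: each participant's card -> category assignment.
sorts = [
    {"Laptop": "Tech", "Mouse": "Tech", "Scarves": "Clothes", "Gloves": "Clothes"},
    {"Laptop": "Tech", "Mouse": "Accessories", "Scarves": "Clothes", "Gloves": "Clothes"},
    {"Laptop": "Tech", "Mouse": "Tech", "Scarves": "Accessories", "Gloves": "Clothes"},
    {"Laptop": "Tech", "Mouse": "Tech", "Scarves": "Clothes", "Gloves": "Accessories"},
]

# For each pair of cards, the percentage of participants who put both
# cards in the same category.
cards = sorted(sorts[0])
similarity = {}
for a, b in combinations(cards, 2):
    same = sum(1 for s in sorts if s[a] == s[b])
    similarity[(a, b)] = round(100 * same / len(sorts))

for pair, pct in sorted(similarity.items()):
    print(f"{pair[0]} / {pair[1]}: {pct}%")
```

High percentages (Laptop/Mouse here) suggest cards that belong together; zeros flag pairs participants never associate.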

6. Recommendation 

The Recommendation tab provides smart, data-driven category suggestions based on participant sorting patterns. Cards are automatically tagged as Recommended (green indicators) when there’s strong agreement among participants about their category placement.

At the top, you’ll see a summary showing the total number of cards and categories, along with a count of recommended cards. Use the “Show empty categories” toggle to view or hide categories with no card assignments. This intelligent analysis helps you quickly identify which category structures have strong user consensus, making it easier to finalize your information architecture with confidence.

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The interface displays two automatically suggested groups: Numbers and Letters. Each category contains six cards — Numbers include 1 to 6, and Letters include A to F — all marked with green dots indicating recommended groupings. The panel on the left shows study navigation options such as Metrics, Sessions, and Annotations, and a participant list appears at the bottom showing response time details.

When you hover over icons or indicators in the Recommendation tab, small black tooltips appear to explain their meaning. 

For example, hovering over the Recommended tag displays the message “Based on users’ feedback, these are the cards we suggest including in this category.” 

This indicates that there is strong participant agreement on where each card belongs. Another tooltip appears near the category toggle, stating “All categories are displayed,” confirming that all available categories are currently visible. 

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The interface displays three automatically generated suggested categories based on participant feedback: My booking with two cards, Book & Prepare with two cards, and Accessories with one card. Each card is marked with a green dot indicating a recommended placement.
A tooltip explains that recommendations are based on user feedback and data analysis. The left panel lists study sections such as Metrics, Sessions, and Annotations, while the bottom section shows participant details with category counts and time spent.

The following view shows the Recommendation tab in a more typical scenario where cards have mixed classification statuses. The system automatically tags each card based on participant sorting behavior. Cards marked with a yellow indicator are tagged as Needs Review, suggesting that user responses were inconsistent or unclear. When hovering over the yellow tag, a tooltip appears that reads: “User responses suggest that these cards are unclear. It may be worth considering a different category or renaming the card.” This helps users understand why certain cards may require attention or adjustments.

Screenshot of the Card Sorting activity in the Userlytics dashboard, displaying the Recommendation tab.
Three categories are shown: Book & Prepare with three cards, My booking with four cards, and Accessories, which is empty and labeled Empty Category.
Green dots mark recommended cards, while yellow dots mark cards that Need Review. A tooltip explains that unclear cards may need renaming or reassignment to a different category. The left panel lists study navigation items like Metrics, Sessions, and Annotations, while the participant list with time spent is visible at the bottom.

At the top, a summary shows the total number of cards and categories, along with a count of cards under each status. The Show empty categories toggle is on, so even empty categories (like “Accessories”) are displayed. This provides a complete overview of all possible categories, helping users see both assigned and unassigned cards in one place.

The following view shows the Recommendation tab with all three card statuses: Recommended (green), Needs Review (yellow), and Uncategorized (red). These tags indicate how consistently participants sorted each card.

Hovering over each label displays a tooltip explaining its meaning. For example, the Uncategorized tooltip says the system couldn’t place those cards into any category due to low participant agreement.

At the top, you’ll see the total number of cards and categories, along with counts for each status. The Show empty categories toggle lets you display all categories, including empty ones.

Screenshot of the Card Sorting activity in the Userlytics dashboard, showing the Recommendation tab.
The display includes several categorized groups — Book & Prepare, My booking, Accessories, Clothes, Technology, and Uncategorized.
Each category contains cards labeled numerically (e.g., Card 01–Card 06) with colored status indicators: green for Recommended, yellow for Needs Review, and red for Uncategorized. A tooltip explains that uncategorized cards lack user consensus. The left sidebar lists navigation options like Metrics, Sessions, and Annotations, and participant details are visible at the bottom.

Only six cards are shown per category. To see all of them, click the “Show More” button and the full list of cards for that category will appear.

Screenshot of the Card Sorting activity in the Userlytics dashboard, under the Recommendation tab.
The screen displays six categories—Book & Prepare, My booking, Accessories, Clothes, Technology, and Uncategorized. Each contains a list of cards labeled numerically (e.g., Card 01–Card 10) marked with color indicators: green for Recommended, yellow for Needs Review, and red for Uncategorized.
A red arrow at the bottom right points to a “Show Less” button. The left panel lists navigation options like Metrics, Sessions, and Annotations, while participant information is visible at the bottom.

Lastly, this view shows the Recommendation tab when all cards are classified as Uncategorized (red). This means the system was unable to assign any cards to a category due to low participant agreement. The tooltip explains: “The system was unable to place these cards into any category due to a lack of user consensus.”

Screenshot of the Card Sorting activity in the Userlytics dashboard under the Recommendation tab.
The screen shows that all 40 cards have been grouped into a single Uncategorized section, marked with red icons. A toggle labeled Show empty categories is set to off. The participant list at the bottom displays names, number of categories created, and time spent per participant. The left sidebar contains navigation options such as Metrics, Sessions, and Annotations, and the header shows filters for Demographics, Screener, and Participants.

This scenario typically indicates that participants sorted the cards in very different ways, with little overlap between their choices. It signals that the categories or card labels may need to be reviewed or refined before finalizing the information architecture.

Tree Testing

A “Tree test” is another term for a reverse card sort. It evaluates a hierarchical category structure, or tree, by having users find the locations where specific tasks can be completed. Including a tree testing task in your usability test can help verify the validity of the information structure and the findability of different products or themes on a site, mobile app, or prototype.

To build your Tree Testing task:

  1. Customize the text your testers will see. Define the product or theme your participants will sort into a specific category.
  2. Define your categories or menu hierarchy. You can also define subcategories to get more specific.
  3. Participants will sort through these categories and choose where they believe the specified product fits best.

Userlytics Study Builder - Tree Testing task

After naming your categories, click the toggle bar on the right-hand side to select the correct response to the task. This will allow you to view the success rate of your Tree Testing task once your participants complete the usability test.

Userlytics Study Builder - Tree Testing Toggles

Your participants’ tree testing results will be available in the “metrics” section of your Userlytics dashboard. This section includes three parts: Option Selected, First Click, and Other Info.

Option Selected

Clicking “Option Selected” on the left-hand side will display the percentage of participants who chose each answer option for your Tree Testing question.

First Click

The middle section, “First Click,” provides information about whether any participants initially clicked on one category but then switched to another before making a final decision.

The pie chart shows the percentage of participants who entered a branch of the tree and then backed out of it. This is helpful because it indicates whether participants hesitated before selecting a final category, suggesting that your categories may need fine-tuning to be clearer and more intuitive.

Other Info

The third section on the right-hand side, “Other Info,” provides data about the overall results of your Tree Testing task. This includes the percentage of participants who found the correct path without any backtracking, the number of participants who selected the correct option on the tree, and the average time participants spent on the task.

Depending on the results, you may see a red “WARNING” button. It appears when the success rate of your task falls below the industry average; if it does, consider revising your categories for added clarity.
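As an illustration of how these metrics relate, here is a sketch over hypothetical participant logs. The path/correct/seconds fields are invented for the example; Userlytics reports these figures for you:

```python
# Hypothetical tree-test logs: the sequence of category clicks each
# participant made, whether their final choice was correct, and time spent.
results = [
    {"path": ["Shop", "Dog", "Food"], "correct": True, "seconds": 14},
    {"path": ["Shop", "Cat", "Shop", "Dog", "Food"], "correct": True, "seconds": 31},
    {"path": ["Blog", "Guides"], "correct": False, "seconds": 22},
]

n = len(results)
success = sum(r["correct"] for r in results) / n
# "Direct" success = correct without backtracking (no category revisited).
direct = sum(
    r["correct"] and len(r["path"]) == len(set(r["path"])) for r in results
) / n
avg_time = sum(r["seconds"] for r in results) / n

print(f"success rate: {success:.0%}")   # 67%: 2 of 3 chose correctly
print(f"direct success: {direct:.0%}")  # 33%: only 1 did so without backtracking
print(f"avg time: {avg_time:.1f}s")     # 22.3s
```

A large gap between overall and direct success, as here, is exactly the hesitation signal the First Click pie chart surfaces.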

Matrix Questions

A matrix question groups together relevant questions on a particular topic in a row-column format. This simplified format allows participants to view and answer survey questions at a glance. Matrix-style answer choices are often offered on a scale.

Userlytics Study Builder - Matrix Questions

Here’s why you should consider using Matrix questions for your study:

  1. Saves space: Matrix questions display multiple questions and answers on one page, reducing survey length.
  2. Reduces response time: Consistent response options make it easier and quicker for participants to answer.
  3. Increases responses: A clear layout and simple answer options encourage more participants to complete the survey.
  4. Eliminates monotony: Grouping related questions together reduces the repetitive feel of long surveys.
  5. Gives insight on sub-topics: Matrix questions provide detailed views of specific aspects of your product or service.

You can analyze the results of your matrix question in the “Metrics” section of your completed study.

X-Second Test

The X-second test quickly captures people’s immediate reactions. You show participants something for a brief period, usually 5 seconds, and then ask for their thoughts or feelings right away. By uploading images and screenshots to the Userlytics platform, you can learn what people notice first and capture their initial, often unconscious, responses.

Once you select this activity, a new window will appear prompting you to confirm your choice. Click “Add”.

Userlytics Study Builder - X-second Test

Once this is done, click on the activity you just created and fill out the details. These include the question that will appear on the participant’s screen, the file you want them to see, and the duration in seconds.

Userlytics Study Builder - Select test duration for X-second test

First Click Testing

This activity type helps you understand user behavior by tracking where they click first on an image. By analyzing these initial clicks, you can see if your design is intuitive and easy to navigate.

First Click Testing works smoothly on both mobile and desktop platforms, and is available in all Userlytics subscription plans. It supports every testing approach, from unmoderated to moderated, making it a versatile tool for all your UX research needs.

Userlytics Study Builder - First Click Testing

To set it up, you upload an image or screenshot of your website, app, or prototype. You’ll receive a comprehensive heatmap and click map showing where users clicked on specific areas.

After setting up your First Click activity test, you can view the results on your study Metrics dashboard, where you’ll be able to toggle between two visualization modes:

  1. Heat Map: The results are displayed as a heatmap showing the areas where participants clicked most frequently. Heat zones are color-coded to represent click density, with warmer colors (red, orange) indicating higher concentrations of clicks and cooler colors (green, blue) showing lower activity.

Screenshot of the First Click activity in the Userlytics dashboard showing a heat map of user interactions. The central image displays a webpage with a banner reading “Understand your users’ emotions behind every click, tap, and swipe” and a photo of a person holding two gaming controllers.
Colored heat spots (mainly red, yellow, and green) indicate where participants clicked most — concentrated near the top navigation bar and the Schedule a Free Demo button.

  2. Click Map: Displays precise individual click positions overlaid on the design.

Screenshot of the Userlytics platform showing a “First Click” task within a usability study. The interface displays question types in a sidebar and a click map highlighting where participants clicked on the Userlytics homepage.

Below the visualization, a sortable table lists each participant with their time to first click.

Key Insights:

If first clicks concentrate on the correct element (e.g., a primary CTA or navigation item), your visual hierarchy is working. Scattered clicks or concentration in unexpected areas signal potential issues with visual prominence, labeling, or layout that may require adjustment.

Ideal for validating that users navigate intuitively from the start and that the most important elements stand out effectively.

Hovering over the heat zones with your mouse provides a detailed breakdown of the number of participants who clicked in that specific location.
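Conceptually, a heat zone is just a spatial bin of click coordinates: nearby clicks are grouped into grid cells, and cell counts drive the coloring. A sketch with hypothetical click data (coordinates and zone size are invented for the example):

```python
from collections import Counter

# Hypothetical first-click coordinates in pixels on a 1200x800 screenshot.
clicks = [(110, 60), (130, 70), (620, 300), (115, 55), (900, 640)]

CELL = 100  # heat-zone size in pixels (an assumption for this sketch)

# Bin each click into a CELL x CELL zone and count click density per zone.
zones = Counter((x // CELL, y // CELL) for x, y in clicks)

for (gx, gy), count in zones.most_common():
    print(f"zone x={gx * CELL}-{gx * CELL + CELL}, "
          f"y={gy * CELL}-{gy * CELL + CELL}: {count} click(s)")
# The three clicks near (110-130, 55-70) fall into the same zone,
# which would render as the hottest spot on the map.
```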

Remember to Preview Tasks!

After building out your test tasks, be sure to preview each one to ensure they are easy to understand, well-written, and work well with your test asset.

You can choose to run a “Quick Preview,” which allows you to check the flow of tasks and questions through an interactive HTML interface without downloading the Userlytics Recorder or displaying your test asset.

Alternatively, you can run a “Recorder Preview,” which uses the Userlytics Recorder to simulate a real test without any actual recording or uploading.
