Can you predict user behaviour?

In the work we do at TestMate, a key focus is observing how users interact with websites and apps. The information gleaned from user testing helps steer decisions on both initial website/app designs and subsequent iterations of those designs. The ultimate goal is for users to achieve their aims as quickly and painlessly as possible.

User testing is obviously not the first step in this process. It is part and parcel of a web designer’s job (before making design suggestions or plans) to consider the user experience and imagine, sometimes even presume, what the user will do.

But can we predict user behaviour?

How we can predict what a user will do

Of course, it’s impossible to know exactly what a user will do next. This is why we conduct usability testing. If we could predict user behaviour with complete accuracy, we wouldn’t need user testing at all. But there are some UX principles that can give us a good idea of what will and won’t work in terms of usability. These guiding principles are a starting point when designing a new website/app, and also give us direction when fixing websites/apps that are failing in some area.

What are the principles that help us predict user behaviour?

We’re going to outline a few tried and tested principles that can be relied upon to enhance usability. These UX design principles are based upon typical user behaviour, and whilst we can never be certain a user will behave in a certain way, these principles eliminate a lot of guesswork and get us more quickly to a place where our website provides a successful experience for the user.

There are many UX design principles, so this list is by no means exhaustive. These are simply the principles we think are most important.

The Principle of Least Effort

None of us want to do more than necessary when it comes to looking up a service or making a purchase online. We’re generally time poor, so if a website or app requires too much effort, frustration sets in and we give up. This may be us giving up on getting a service we need, or giving up on a purchase we were attempting.

Here, the Principle of Least Effort can help us predict a user’s behaviour.

Users want to follow the path of least resistance. This means that if a website/app is convoluted, we can pretty accurately predict the user will quit. Predicting how long it takes them to quit a site is harder, but if the customer experience is too difficult, you can be reasonably sure that the user will bow out at some stage. Some may return when they have more time. Some will forget to return. And some will choose another service/product provider with a simpler customer journey.

Think of when you buy something online. There’s always a part of you that’s ready to change your mind. If there are too many steps involved to purchase the item you want, you’re asked too many questions that aren’t relevant, the payment method you prefer is not available, or something in the process is not functioning, then you have plenty of time to reconsider your purchase and not go ahead.

The principle of least effort provides solid ground for any UX designer. At every stage of the process, a designer should think about incorporating ways to reduce:

  • The time it takes to complete a task
  • The thought required to complete a task
  • The energy required (keystrokes, scrolling, searching).

They should also consider ways to make things easier on the user. Some ways to do this are:

  • Include a decent search function, so that the user can easily find what they need
  • Make menu headings clear, and store information in logical groupings
  • Reduce the amount of content the user needs to read. Use images and simple icons instead
  • Encourage scannability. For example, rather than a user needing to open up a new page to view information about a product, have a quick list of key points they can easily scan. They can then choose if they want to look further.

The Principle of Perpetual Habit

People have been using the internet to access products and services for a long time now, and are accustomed to seeing information presented in familiar structures.

Tried and true formats make it easier for users to find the information they need.

Examples are:

  • The Hamburger menu button, which users know to select to see a full menu of available options, rather than including all options on the page.
  • The F-Shaped Pattern for Reading. This pattern is based on eye-tracking studies that found users scan a page first horizontally across the upper area of the page (the top bar of the F), then vertically down the left-hand side of the page (the stem of the F), and then across the page again in the areas they’re interested in reading (forming the second, lower bar of the F). A standard example of the familiar F pattern is a menu at the top of the page, a second menu on the left-hand side, and product images to the right. (We know this format so well, it barely seems like a deliberate design choice.)

You can safely predict that, based on perpetual habit, a user will find it more time-consuming and difficult to find information if you deviate from standard web formats and place information in less predictable places.

This is not to say you can’t be creative as a designer, but you need to carefully weigh up what looks good with what serves the user. Sometimes, there’s a nice in-between.

(A note on the hamburger menu: whilst it’s a familiar tool users can rely on and it’s worth including on a web page, there is a suggestion that you should be wary of storing important information only there. A user won’t necessarily click on the hamburger menu, so if there’s information you want to champion, place it on the page, rather than hiding it away in a menu. See more about the pros and cons of the hamburger menu here.)

The Isolation Effect

If we have a collection of items on display, we can predict a user will notice and remember the one item that stands out as different from the others. This prediction is based on the Isolation Effect, also known as the Von Restorff Effect, named after the German psychiatrist Hedwig von Restorff.

In relation to UX design, we can take advantage of this effect by determining what the most important items are on a page, and adjusting design to make those items stand out from elements around them.

An example is highlighting a CTA button in a distinct colour and separating it from surrounding elements with white space. This makes it clear to users that they should click on it.

Another example of using the Isolation Effect is adding something like a ‘Best seller’ or ‘Most Popular’ banner to an item on a page. This not only makes the product or service stand out, but it also says to the customer that they don’t have to think as much: the thinking has been done already, and people have chosen that item.

Here’s an example. On this Vodafone page, they’ve added an ‘Our pick’ banner to the product they want to promote, and have situated the content a little higher up the page than the other options, so it stands out. We could safely predict that a user would read this option before reading the others.

This other example from Vodafone does the same, with a ‘Great value’ banner. It also changes the colour of the item from red to maroon, to make it stand out from the rest.

This Aldi page uses a similar technique. The red banners saying ‘Now with more data’ draw the eye to the two centre products.

The Isolation Effect is an easy tool to use when designing product pages. The only risk is overusing it. If you try to make too many elements stand out, none of them will. An example is having many different CTA buttons on one page. The user may not know which CTA you most want them to select. So, use the Isolation Effect sparingly.

The Principle of Cognitive Load

We can predict that if we require too much thinking from a user, we may lose them along the journey. This comes down to cognitive load.

With attention spans ever-diminishing and so much competing content online, users have little mental space for your website or app. If you overload them, they may disappear.

You may also see them disappear if:

  • your pages are taking too long to load
  • there is too much content to read
  • CTAs aren’t clearly available
  • your user has to look at separate instructions to perform an action, and gets lost along the way.

So, what do you do if you have thousands of products or service options you want to include in your page, and you don’t want to lose your customer?

Here are some ways to include your entire offering, without overloading your user.

Eliminate the appearance of choice.

Obviously your customer has many choices, but it’s a wise idea to reduce the number of choices in any given section.

Featuring three to four products across a page gives your user the space to absorb the images.

If you squeeze more products in, it’s:

  • harder to differentiate between them
  • harder for the user to remember which they liked, and
  • harder for them to ultimately make a choice.

Here’s a standard example of a website with thousands of products (Kmart), limiting their offering to four products across a page. If the user wants to see more, they can scroll, or use the search menus to locate the products they need. Reducing the options in this section allows for more white space. There is more time and space for the viewer to actually absorb the images, and think about the product.

Use chunking

One way to reduce options on a page is by chunking. This means placing products or services into logical categories.

In the example below, rather than a user searching for men’s clothing and seeing every item of men’s clothing available, clothing types are categorised and the user can select the category relevant to them.

They can also apply filters to further hone in on what they need. These options prevent them from being overwhelmed by choice.

Many of the streaming services also use categorisation to ease choice overload for viewers. By grouping genres of TV shows, and suggesting programs based on what a viewer has previously watched, they save the user a lot of time scrolling through every choice available to find what they need. See this example from Netflix.

Again, chunking/categorising is so ubiquitous, it doesn’t seem like a deliberate design choice. But there is method to it, and there are many websites and apps out there that fail to categorise at all, or place things in categories that don’t make sense, and the user never finds what they need.

Use progressive disclosure

Swamping users with too much information in one hit is always a bad idea, and again, overloads their brains.

To prevent this, it’s a good idea to use progressive disclosure.

For example, if a user needs to complete a form, you may break that form up into sections (within reason), so that the appearance of all content on one screen doesn’t overwhelm them. This is particularly pertinent for mobile users, who are using small screens and can’t scroll through an entire form easily.

Here are some tips on successfully using progressive disclosure.

  • Let users know throughout the process where they are at, and that they are making progress.
  • Use back/forward options or arrows to make it easy for users to move back to a previous step, and then to pick up where they left off. If they are dragged back to the beginning of a form or payment process when they want to correct a detail, it’s likely they’ll give up.
  • If a user needs more detail about a step they need to take, it makes it much easier for them if the detail is included on the same page (for example, with a help button and a pop-up). If you make a user go off the page to find help, they may not remember the instructions by the time they return to the form. It slows things down, and again, they may quit.

To be continued…
We’re going to leave it there for now, but we’d like to talk more about how we might predict user behaviour using other UX design principles. Stay tuned for our upcoming blogs!

TestMate User Testing
Maybe you’ve made a prediction or two about how users might respond to your website or app? Do you want to put those predictions to the test and see how users actually interact? If so, get in touch with TestMate.

What is a user-friendly website?

Spend any time on the internet and you’ll likely see the term “user-friendly” pop up in relation to a website. But what exactly does it mean and how can you make your own website more user friendly? Here we’ll explain all the ins and outs.

Put simply, a website is user friendly if it is easy to navigate, performs as it should, and does not create any issues for the reader. There are many aspects that determine how user friendly a website is, and they separate an easy-to-use website from one that people will try their best to avoid.

Navigation

User-friendly websites make it easy for you to know how to find things, with clear titles directing you to different parts of the website, and information laid out in a logical manner so that it is quick and easy to find what you’re looking for. If you have lots of menu items, rather than presenting each of them on the page, you can use drop-down menus to reduce the clutter.

Aesthetics

Speaking of clutter, how a website looks can greatly determine how user friendly it is. If you have, for example, a white background with yellow text on it, it can be very difficult to read. It can also be distracting if you use lots of different fonts on the page or if there are lots of large images with small amounts of text between them. On websites, less is often more.

Popups

You’ve probably experienced browsing on a website and then a popup interrupts you and impedes your view. These can be incredibly annoying and should be used sparingly, particularly if they require you to click on a little ‘x’ in the corner of the popup in order to get rid of it.

Speed

With high-speed internet becoming more commonplace for both Wi-Fi and mobile data, the last thing you want is for users to sit there and watch as your website slowly loads. One of the biggest culprits is images that are placed on the page at full size, rather than as optimised, web-friendly versions.

Accessibility

Accessibility is another aspect of a user-friendly website; it takes into account the different skills and limitations that users may have, such as vision impairment. The Web Content Accessibility Guidelines (WCAG) were developed to help website owners make their websites more accessible for people with disabilities, as well as for older users.

Mobile friendly

The website Statista.com reports that in 2018 over 50% of internet traffic was generated via mobile phones. It’s therefore vital that if you’re creating a website, it is tested to ensure it functions properly on mobile phones. If it doesn’t, most people will simply move on to something else, rather than sending you feedback.

TestMate are experts at making websites user friendly, drawing on our years of experience to help website owners in a range of industries, such as e-commerce, finance, government, health insurance and startups.

Are you interested in our user testing services? Get in touch or request a demo today!

How user testing can help your online business in 2021

When a website or app is created, it’s often difficult to know if there are issues with the code or functionality until someone tries to use it. This is where user testing comes in. But what exactly is user testing and how can it benefit your online business in 2021? Read on…

User testing is important because website and app developers need to know how their creations function in the “real world” when users interact with them. If a test user has an issue that causes a crash or a bug, or if the functionality is confusing or doesn’t work exactly as it should, then user testing can help to shed some light on these problems, before the website or app goes live.

User testing can be undertaken a number of ways, each producing different results. “Guerrilla testing” is usually done in the early stages of a product or app’s development, with the focus on gauging how people react to your idea or concept. Large numbers of the general public are ideally sampled during guerrilla testing, although they may not all be located within your preferred target market.

Lab usability testing involves a moderator obtaining in-depth information from users as they interact with your product. The main downside of this kind of user testing is that it tends to be expensive and time-consuming, and while it can lead to detailed responses, not every company can afford to utilise it. However, its main strength is that it will render a large amount of focused information that can quickly highlight any potential issues.

Another popular form is remote user testing, where participants are given tasks to complete by themselves on their own computers/devices. Remote testing offers the convenience of guerrilla testing and the ability to obtain results from large numbers of people, and as the product likely already exists, these results are specific to the product and are not simply general feedback. It’s also possible to specify certain demographics for users so that you can target the testing to the people most likely to use your product.

The first step for a company to take in user testing is to create a test plan which explains your objectives and the questions you need answered, such as how easily users are able to interact with your product or what functions need more work. As seen above, user testing can be implemented at almost any stage in a product’s development, so you don’t have to wait until you are ready for release; in fact, you want to know about your product’s usability or issues well before this time anyway.

After you have gained insights from users about their experiences, it comes time to analyse them and discover if the product can be enhanced at all. The greater the breadth and focus of the first step, the more information you will have to analyse at the completion of user testing.

TestMate is a leader in user testing, working closely and collaboratively with businesses to help them find potential issues, improve conversion rates, and discover new ideas. Get in touch for a quote or request a free demo today.

Why You Need Confidence Intervals in User Testing

What is a confidence interval? How do confidence intervals work? If you’re interested in learning all about confidence intervals and what they mean for your user research, read on!

When running usability tests, KPIs and other quantitative metrics can be powerful tools in evaluating UX. Success metrics like task completion rate, efficiency metrics like time-on-task, and standardised questionnaires like the System Usability Scale (SUS) can help you quickly assess and compare the usability of your designs. However, before you make any important business decisions, you need to determine whether your sample data is representative of your actual customer base. Finding the population mean, which is the average across every person who fits your demographic, would be unrealistic and costly. Therefore, any user test, especially one done on a small scale, is subject to some amount of sampling error. This error can lead to both overconfidence and unfounded scepticism about your data. To counteract this, user researchers use confidence intervals to determine what the population mean could look like based on the sample data.

What are confidence intervals?
A confidence interval tells us how precise our sample estimate is. Wondering how to calculate a confidence interval? It can seem challenging, but the idea is simple: you derive upper and lower limits from the sample data, which give you a range of values within which the population mean is likely to lie. Say we ask 10 people to rate how easy it is to complete a task on a scale of 1 to 7 and the average is 5. You can generate a confidence interval showing that the population mean is likely to sit somewhere between 4.2 and 5.8. The more people we ask, the narrower the range becomes.
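If you’re curious about the mechanics, here is a minimal Python sketch of one common way to calculate an interval like this, using a t-distribution around the sample mean. The ratings below are purely illustrative (they aren’t from a real study), but they average out to 5 and produce an interval close to the 4.2 to 5.8 range above.

    import math
    from statistics import mean, stdev

    # Illustrative 1-7 ease ratings from 10 testers; the average works out to 5
    ratings = [5, 6, 4, 5, 7, 5, 4, 6, 5, 3]

    n = len(ratings)
    sample_mean = mean(ratings)
    standard_error = stdev(ratings) / math.sqrt(n)   # how much the sample mean is likely to vary
    t_critical = 2.262                               # two-sided 95% t value for n - 1 = 9 degrees of freedom

    lower = sample_mean - t_critical * standard_error
    upper = sample_mean + t_critical * standard_error
    print(f"mean = {sample_mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")   # roughly (4.2, 5.8)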

How likely it is that the population mean falls within the confidence interval is determined by the level of confidence you choose. UX researchers typically calculate confidence intervals at 95% confidence. This means that there is a 95% chance that the population mean will lie within the interval you derived. It is important to choose what level of confidence you are willing to accept in order to draw conclusions from your data. Higher levels of confidence will give you wider intervals with the same sample size, so you need to test at a large enough scale to have confidence intervals that are narrow enough for you to properly assess your data. Confidence intervals are an important part of UX research and should be recognised as such.

Why do confidence intervals matter in user research?
You may be wondering how to interpret confidence intervals. Say you are conducting a usability test on a prototype to see the completion rate of your new feature. Your goal is to have a 90% completion rate at launch. After testing with 5 participants, you observe that only 3 out of 5 completed the test task. If you were to calculate the confidence interval at 95% confidence, you would find that the lower limit could be as low as 23% while the upper limit could be as high as 88% completion. While the sample size is small, you can still present these findings to stakeholders and show that this new feature needs improvement. Because the upper limit is below the 90% target, there is less than a 5% chance that the product would achieve a 90% completion rate if launched.
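Completion rates are proportions rather than averages, so they are usually handled with a slightly different calculation. One common choice for small usability samples is an adjusted-Wald (Agresti-Coull style) interval, which reproduces roughly the 23% to 88% range quoted above for 3 completions out of 5. Here is a rough Python sketch of that calculation, as an illustration rather than a definitive recipe:

    import math
    from statistics import NormalDist

    def adjusted_wald_ci(successes, n, confidence=0.95):
        # Adjusted-Wald (Agresti-Coull) interval for a binomial metric such as completion rate.
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # ~1.96 for 95% confidence
        n_adj = n + z ** 2                                    # add z^2 "pseudo trials"
        p_adj = (successes + z ** 2 / 2) / n_adj              # adjusted completion rate
        margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    low, high = adjusted_wald_ci(successes=3, n=5)
    print(f"95% CI for the completion rate: {low:.0%} to {high:.0%}")   # roughly 23% to 88%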

Utilising confidence intervals in user research can help you make informed decisions even when using small sample sizes, and it adds to the benefits of user research. Take the guesswork out of testing and instead discuss probabilities. If your confidence intervals are too wide to draw statistical conclusions, increase the sample size and see if your results change.

Making Comparisons with Confidence Intervals
One way you can utilise usability testing is to compare the effectiveness of two different design iterations. Say your team wants to know which iteration will have a higher completion rate at launch. In order to make statistically significant comparisons, you need to take into account the confidence intervals for both iterations. For example, consider completion rate data for designs A and B, collected first with 10 test participants per design and then with 100.

In this example, using only 10 testers per design resulted in an overlap between your confidence intervals. In theory, these two designs could have the same completion rates when launched to your entire customer base, despite design A performing better in the test. Increasing the sample size to 100 testers shows, with at least 95% confidence, that design A’s completion rate will be higher than design B’s. The larger sample size narrows the confidence intervals enough to show that the completion rates of these two iterations really are different from each other.
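To make the comparison concrete, the sketch below reuses the same adjusted-Wald calculation and checks whether the two intervals overlap at each sample size. The observed completion rates are hypothetical placeholders (roughly 80% for design A and 50% for design B) rather than real test data, but they show the same pattern: the intervals overlap at 10 testers and separate at 100.

    import math
    from statistics import NormalDist

    def adjusted_wald_ci(successes, n, confidence=0.95):
        # Same adjusted-Wald helper as in the earlier sketch.
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
        n_adj = n + z ** 2
        p_adj = (successes + z ** 2 / 2) / n_adj
        margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    for n in (10, 100):
        a_low, a_high = adjusted_wald_ci(round(0.8 * n), n)   # design A: ~80% of testers completed
        b_low, b_high = adjusted_wald_ci(round(0.5 * n), n)   # design B: ~50% of testers completed
        overlap = a_low <= b_high and b_low <= a_high         # do the two intervals overlap?
        print(f"n={n}: A = {a_low:.0%}-{a_high:.0%}, B = {b_low:.0%}-{b_high:.0%}, overlap = {overlap}")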

Dealing with wide confidence intervals
Does this mean that you can’t make effective comparisons with smaller sample sizes? That depends on how much potential error you are willing to accept. If designs A and B from the previous example were both early prototypes, then it makes more sense to continue developing design A. The upper limit of design B’s completion rate is only 68% whereas design A’s could be as high as 95%. Given that these are early prototypes, it would not be a big risk to pick design A over B to continue to develop. If you were making a decision between two iterations that are ready to launch, then you would need to conduct further user tests to be sure that you are presenting your users with the superior product.

Sometimes, using different usability metrics with smaller sample sizes can be more effective. Completion rate is a binomial metric, meaning that either the tester will complete the task or won’t, and this results in wider confidence intervals. Instead, you could measure task ease (SEQ), which is recorded on a 1-7 scale and is more nuanced than a simple yes or no. Even at small sample sizes, the confidence intervals will be narrower than what we saw with completion rates while still measuring the effectiveness of our design. Choose the metric and sample size that will give you narrow enough confidence intervals that you feel comfortable making decisions with.

Takeaways
Confidence intervals are another layer of user research that allows you to apply your small-scale user tests to your actual user base. The key to assessing data using confidence intervals is to ask yourself how wide a range you are willing to accept for your data. When researching early prototypes, it makes sense to conduct small-scale user tests to quickly figure out what isn’t working. When trying to reach usability goals at launch or making comparisons between designs, larger sample sizes are required to make statistically sound conclusions. Design a test plan that makes the most sense for your current goals, and leverage confidence intervals to add credibility to your conclusions.

5 Reasons Why Your Content isn’t Driving Engagement

Many marketing gurus today emphasise the importance of content, and engaging content at that. Yet despite your best efforts, you can’t seem to see what the content hype is all about. You write blogs, make videos, and actively post on social media and yet none of it generates the momentum you’re looking for to drive user engagement. If your content is not driving engagement and ultimately sales, then you’re wasting time, energy, and resources contributing to an already overcrowded web. So, what drives engagement? Here are 5 reasons why your content isn’t being engaged with and what you can do to help increase user engagement for your business.

1. Not the Right Audience

One of the main reasons your content might not be engaged with is because it is not aimed at the right audience. If your content does not directly speak to or address the pain points of the person reading it, chances are they will not engage in a meaningful way. In order to counteract this, developing detailed content personas can help you make sure your content is on the right track. Content personas are composite sketches of a target market based on validated data, not assumptions. Start off with a demographic in mind, and gather inputs from customer support representatives, sales reps, product managers, and most importantly – customers. Once you have a picture of your target persona’s goals and pain points you can create value propositions specific to their needs and compile keywords that are relevant to them. 

2. Doesn’t Provide Value

This goes hand in hand with serving the right audience. Even if your content is being served to the right audience, it needs to provide value to the reader in order to get results. Look for ways to gear your content towards a specific persona by offering valuable information that is directly relevant to their pain points. This content engagement strategy will allow you to better understand your target persona, and therefore create more relevant, engaging content. 

3. Not the Right Medium

Blogs are not the only form of content marketing. It is important to diversify the types of content you are putting out in order to find out how your audience prefers to consume content. Some types of content commonly used to gain attention from users include:

– Video
– Podcast
– Animations
– Checklists
– Infographics
– Webinars
– Books
– Email Series

Take the time to figure out what forms of content your personas would most likely respond to. Are your customers more active on social media or do they prefer receiving emails? Develop a content strategy that reflects your specific audience’s online habits.

4. Not Discoverable 

Sometimes the problem isn’t that your content isn’t relevant enough, but that your intended audience just can’t find it. Here are some ways that you can increase traffic to your page:

– Organic Search (SEO): Using the right keywords to appear in relevant search results
– Paid Social Media: Sponsored posts on LinkedIn, Facebook, Twitter, etc.
– 3rd Party Paid Media: Pushing your content through a paid third party service
– Email: Sharing your content directly to your email list
– Influencer Marketing: Leveraging tastemakers and industry experts to share your content with their audiences

It is important that you are consistently generating new content, writing content that is tailored to your audience, and promoting your content to reach the right people. In order to properly gauge the engagement of your content, you first need to generate a consistent flow of traffic to your site.

5. Not Being Measured

UX design teams have plenty of experience with testing their visual design components and UI, but an area that is often neglected when testing is content. Content engagement metrics are extremely important. Leverage techniques like card sorting to figure out the most intuitive organisation of information for your site. Ask testers to recall product descriptions or ad copy to see how well your messaging is getting across to your audience. Pay attention to the language and keywords they use, and the ones they don’t. Conduct A/B testing of social media copy to see what posts generate more traffic. Use measurable data to determine if your content is actually providing value to your target audience.

Need more help determining whether your content is user friendly and engaging? Would you like to test your website? TestMate offers user testing in Australia that will benefit your brand! For more information on our website usability testing services, contact us today!

How many User Testers do you need?

Say you went fishing and had one day to catch as many fish as you can from several different ponds. Each pond has some fish that are bigger and easier to catch than others. To maximise the number of fish you can catch, would you spend all day fishing in one pond or spend some time at each pond to catch the easiest fish out of each?

Fishing for Usability Problems

Testing for usability and maximising the problems you can discover is a lot like trying to catch fish. While you can spend all of your time in one pond and try to catch every fish, it is much more efficient to focus on the big fish in each pond. Likewise, the most effective way to catch usability problems is to conduct iterative, small-scale usability tests while continuously updating your testing conditions as your design evolves. Because the objective of usability testing is to improve the overall user experience, it cannot be a set-and-forget process. After conducting a usability test and fixing the discovered issues, it is necessary to retest to see if those issues were resolved and whether or not the solution presented any new issues. Spending big on a single, large-scale usability test is not as effective at discovering usability problems, because each tester you add gives you diminishing returns. So what is the ideal sample size when conducting a usability test?

The “Magic Number”

In order to determine the ideal sample size for a usability test, it is important to understand how to calculate the probability of discovering an issue. The most commonly used model to evaluate the effectiveness of a usability test is:

    P(problem found) = 1 - (1 - p)^n

In this formula, p is the likelihood of problem discovery (per tester) and n is the number of testers. Jakob Nielsen, a leader in user research, used this model to determine that if the problem discovery frequency is at least 30%, then about 85% of discoverable problems will be found by the first five testers and 95% by the first eight. Many research teams have since pointed to this study and use the “magic number 5” as the ideal sample size for usability tests.
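As a rough illustration, the short Python sketch below tabulates that model. The 31% per-tester discovery rate is the figure usually attributed to Nielsen’s original studies; your own rate may well differ.

    def problems_found(p, n):
        # Expected share of discoverable problems found by n testers: 1 - (1 - p)^n
        return 1 - (1 - p) ** n

    p = 0.31   # per-tester problem discovery rate often attributed to Nielsen's studies
    for n in (1, 3, 5, 8, 15):
        print(f"{n:>2} testers -> {problems_found(p, n):.0%} of discoverable problems found")
    # Five testers land at roughly 85% and eight at roughly 95%, in line with the figures above.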

The Goal of User Testing

Does this mean that testing with 5 users is enough? The answer is yes and no. Applying the same model at a 30% problem discovery frequency indicates that you would need to test with at least 15 users to discover nearly all of your product’s usability issues. However, this doesn’t mean that you need to conduct tests with 15 users at a time. While large-scale tests are great for evaluating the current state of your user experience and creating benchmarks, in order to make measurable design improvements you have to continuously search for new usability problems. Focus on the big-fish problems with each iteration and fine-tune your user experience along the way.

Different sites have different problem discovery frequencies

One of the key assumptions in Nielsen’s magic number calculations is that the frequency of usability issues encountered must be at least 30%. This is because the less likely a tester is to discover a problem, the lower the percentage of problems that will be discovered with the same number of testers. By rearranging the discovery model above to solve for n, you can calculate the number of testers you need to discover a certain percentage of usability issues, given your testers’ likelihood of uncovering a problem. Determine what percentage of discoverable issues you want to catch and build a test plan that best suits your needs.
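Rearranging the model to solve for n gives n = ln(1 - target) / ln(1 - p), rounded up to the next whole tester. Here is a small sketch of that calculation; the discovery rates used are illustrative only.

    import math

    def testers_needed(p, target):
        # Smallest n such that 1 - (1 - p)^n >= target.
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    for p in (0.15, 0.30, 0.50):   # illustrative per-tester discovery rates
        print(f"p = {p:.0%}: about {testers_needed(p, 0.85)} testers to find ~85% of problems")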

How does problem discovery frequency affect your test plan?

For government, insurance, and banking sites, where there are large amounts of information and complex functionality, there is typically a higher problem discovery rate. This means that these types of sites, especially if they have not done any UX design work before, will likely get the most out of conducting iterative, small-scale usability tests. The same goes for antiquated internal systems that are poorly designed and lack proper integration. However, for simpler websites with a limited number of tasks, or for ecommerce sites looking to improve conversion rates, running fewer usability tests with larger sample sizes is a more effective approach. Use the insights gathered from usability tests to develop hypotheses for possible design changes. Then conduct A/B tests on a large scale to gather data on which iteration increases conversions. Continue identifying improvements using further usability tests to catch big fish, but validate them with A/B tests.

Do you want help finding the right sample size for you? TestMate can help! Get in touch with our friendly staff to find out how we can tailor solutions to your business.

How To Conduct & Use User Testing: A Step by Step Guide

Wondering how to conduct user testing? Read our user testing guide below and you’ll know how to run and use a user test in no time!
There are many ways to use user testing. Despite many variations in methodology, user testing can be broken down into five general steps:

  • Define the objectives,
  • Develop a test plan,
  • Gather user insights,
  • Conduct analysis and
  • Understand the results.

Here is our Step By Step Guide on How to Run User Testing:

Step #1: Define your testing goals


Whether it’s a website, software, or an app, there is always room for customer feedback. User testing tasks can be used to measure how difficult it is to find information on your site, the responsiveness of a software feature, or the intuitiveness of your app interface. When it comes to how to use user testing effectively, your testing goals need to be specific and customer-centric, so it is important that you ask the right questions when planning.


Deciding on the KPIs or metrics you want to measure with your user testing tasks is a good place to start. For example, if you want to test how easy it is to find information on your website, some metrics you can use are the time taken to find the target information, the number of steps taken, or both.

Don’t know what your testing goal should be? Don’t worry, it is very common for organisations to get lost in their own visions and miss out on potential improvements. In this case, first identify the main needs or problems your product addresses, and those that it should address, to develop more focused goals for your user test.

Step #2: Design a test plan

Now it’s time to write out your test plan. This involves defining a testing technique, your testing audience, testing task, and testing scenario for the project.

Usability testing is the most common and effective method to test user experience and get a good understanding of how a user experiences your website, software, or app.

Usability testing can either be moderated in person or unmoderated remotely. According to Adobe XD Ideas, unmoderated user testing offers quick, robust, and inexpensive results and it’s best used to test very specific questions or observe and measure behaviour patterns of your users. Supplement remote user testing with surveys to get more complete feedback, similar to insights you gain from in-person moderation.

Your user testing participants should be reflective of your actual users, so determine what demographics your service targets. For example, if you are an Australian-based women’s clothing boutique, your participants should be female users aged 25 to 45 in Australia. Next, decide on what tasks you want to test. In order for your usability test to be relevant, make sure that you are choosing realistic situations that accurately depict your user flow. The goal is to make tangible improvements to your user’s experience, so you must start with a good understanding of how your users use your service.

Step #3: Find user testing participants

Once you’ve developed your test plan, you then need to find users to complete the test. As simple as it sounds, finding the right user testing participants can be an exhausting process.

Do your testers match the demographics of your actual users in terms of age, location, computer literacy, lifestyle, etc.? On top of that, do they understand the testing purpose and the tasks assigned? Do they have access to the testing platforms (e.g. iOS, Android, tablet)? Can they provide credible results in an unmoderated user testing setting?

Believe it or not, selecting the right audience can be the most crucial step in terms of the success of your project. In order to build user-centric experiences, your test audience needs to be reflective of your actual users.

To do this, you can make use of your existing customer database if you already have one, your social media networks, referrals from employees, or you can sign up for a user testing service. Luckily, there are many user testing agencies out there that can help you connect with the right target audience with ease, among other benefits. TestMate, a user testing agency in Melbourne, can connect you with real Australian users who are selected through a rigorous recruitment process to provide you with quality user insights.

Step #4: Conduct user testing

Almost there! Before sending out your tests, consider what format you want your responses in. Video recording of your participants interacting with your digital product is a popular choice, as it captures the real-time reactions of a user along with any subconscious actions and patterns that can be invaluable. Other response formats include audio recordings, written feedback, and questionnaires. You can also combine techniques, for example having participants talk out loud about their thought process while attempting to complete a task in a screen recording. This can give you valuable insight into not only what is or isn’t working but why.

Now you are set. Send out testing materials to all participants and ensure all requirements, such as the testing tasks and testing environment, are understood by each participant.

Step #5: Review results

Finally, you need to collect and analyse all the responses received from the participants. Understand whether each error is functional (related to the user interface) or experiential (related to the user experience); compare it with benchmark data for each performance area and identify areas for improvement. Compile your findings into actionable takeaways, work out how to use your user testing results to improve your digital product, and share them with your team.

Why Hire a User Testing Agency?

Outsourcing user testing has become a popular choice for many businesses. Here’s why:

  • Time and cost-efficient.
    User testing, unfortunately, is not a one-man job. Agencies leverage their team of experts to meet your marketing budget and deadlines. With a dedicated team at your disposal, testing agencies are able to provide you with a focused test plan that is suitable for your digital product and marketing spend.

  • Dedicated expertise.
    From test planning, finding user testers, to data analysis, you will have access to all the skills you need. Experienced UI and UX experts can ensure you are on the right track and provide you with actionable results based on their industry experience and technical expertise.

  • Connect with the right users.
    User testing agencies recruit a large group of qualified testers based on designated recruitment processes and review user feedback to ensure credibility and legitimacy. So you don’t have to worry about not having enough participants or unqualified testers.

Looking for Australian user testers? TestMate has connected many businesses with real Australian users to improve user experience and customer satisfaction. Find out how we can boost your UX.

Final Words

So now you know how to do user testing! It’s great to have empathy, but it’s impossible to understand all users and predict their behaviour patterns. When you conduct user testing with real people, you’re able to gather invaluable insights that you might otherwise never even have thought of.