21 July 2022 (updated: 25 October 2022)
A UX expert review helps you spot product issues that have a significant impact on your business results. Learn the why, how, and what of conducting a UX expert review.
It’s quite common to launch a new website or application and believe it will do just fine because it’s brand new. We are all biased in our own ways, designers and industry experts alike - and it’s difficult to spot the elements of the user experience that could be done a bit differently. A bit better. Quite often we just leave the website be for at least a couple of years, without a second thought about improving it. There are many ways - some simple, some complicated - of validating the user experience of your digital product, and a UX review is one of the most efficient ones.
A UX review (or UX audit) is a professional analysis of a digital product’s user experience. More on the details later, but for now let’s say it requires a professional UX designer or researcher to go through the application and list issues: how the website is structured, whether the correct UI patterns are used, and whether the copy might confuse users.
Employing a structured approach like this works to the business’s advantage on many levels.
Almost 74% of the top 10,000 sites in the world use Google Analytics, and almost every commercial website uses some sort of quantitative tracking of its users. The data might show you the most visited pages, the time spent on them, or the number of clicks on particular elements. However, it won’t give you anything deeper than that - without expert knowledge of user experience, you might not find the true reasons behind persistent issues. Quantitative data might tell you the symptoms, but not the underlying problems. And we all know that treating symptoms is unsustainable in the long run.
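To make this concrete, here is a minimal sketch of how such click tracking is typically wired up with Google Analytics 4 and gtag.js - the button selector, event name, and parameters are illustrative, not taken from any specific product:

```typescript
// Minimal sketch: reporting a button click to Google Analytics 4 via gtag.js.
// Assumes the standard GA4 snippet has already loaded and defined `gtag`;
// the selector, event name, and parameters are illustrative.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

const ctaButton = document.querySelector<HTMLButtonElement>("#contact-cta");

ctaButton?.addEventListener("click", () => {
  gtag("event", "cta_click", {
    element_id: "contact-cta",
    page_path: window.location.pathname,
  });
});
```

An event like this will tell you how often the button gets clicked, but nothing about why visitors hesitate, misread the label, or abandon the form that follows - which is exactly the gap an expert review fills.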
Issues may appear on many levels of the user experience - in the information architecture, in the UI patterns, or in the copy itself.
Or, if you’re very unlucky, on all of them at once.
A well-seasoned UX expert is trained to spot all of these types of issues, whatever their size or severity. They typically summarize their findings in an easy-to-read report, along with initial recommendations on how to improve the product. Sometimes fixing one small element has a tremendous impact on the whole website because it appears in every major flow - a contact form on a company website, for instance. It’s this precision in pinpointing problematic areas that is the hardest part of reviewing the user experience.
The first step to a successful UX review is identifying the primary issues of the website or app.
I have personally conducted dozens of expert audits that contained over 100 usability issues each. Of course, such a long list needs to be divided by page or screen and have priorities assigned in order to be used efficiently. But how do I go about performing an analysis like that?
Many beginner UX designers and other digital professionals use industry-standard heuristics or best practices from the world’s leading professionals. Perhaps the most popular method is Jakob Nielsen’s 10 Usability Heuristics (probably the most linked-to UX article ever). The heuristics are:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
It’s basically a checklist to help you review every page or function of your website or application. Some experts even create complex heuristic analysis templates for Google Sheets or Airtable to help gather the findings.
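As a rough illustration - the fields and the severity scale below are my own assumptions, not an industry standard - a single entry in such a findings template could look like this:

```typescript
// Sketch of one record in a heuristic-review findings log.
// Field names and the severity scale are illustrative assumptions.
type Severity = "low" | "medium" | "high" | "critical";

interface UsabilityFinding {
  screen: string;          // page or screen where the issue appears
  heuristic: string;       // which heuristic it violates
  description: string;     // what is wrong and for whom
  severity: Severity;      // used later for prioritization
  recommendation: string;  // initial suggestion on how to fix it
}

const finding: UsabilityFinding = {
  screen: "Checkout - payment step",
  heuristic: "Visibility of system status",
  description: "No feedback is shown while the payment is being processed.",
  severity: "high",
  recommendation: "Show a progress indicator and disable the pay button while processing.",
};
```

Grouping records like this by screen and sorting them by severity is what turns a list of 100+ issues into something a team can actually act on.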
There are many alternative approaches, from using other sets of heuristics to performing something called a “cognitive walkthrough”: an expert steps into the user’s shoes and goes through whole flows of the application, focusing on the most important elements of each screen. Because it is quite time-consuming, this often means omitting lower-priority screens or concentrating on a couple of main flows.
A typical flow of a shopper in an online store would be:
1. Landing on the home page or a category page
2. Searching or browsing for a product
3. Reviewing the product page
4. Adding the product to the cart
5. Completing the checkout - shipping details and payment
6. Receiving the order confirmation
As you can imagine, a UX professional going through a flow like this will often find dozens of usability issues on any given screen. This type of analysis is much more effective when performed regularly, with a focus on a particular flow that has recently launched.
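A cognitive walkthrough can be captured in a similarly lightweight way. The sketch below pairs each step of a flow with the four questions commonly used in the method; the flow steps and the data shape are illustrative:

```typescript
// Sketch of a cognitive-walkthrough worksheet for one flow.
// The four questions follow the commonly cited walkthrough method;
// the flow steps and the data shape are illustrative.
const WALKTHROUGH_QUESTIONS = [
  "Will the user try to achieve the right effect?",
  "Will the user notice that the correct action is available?",
  "Will the user associate the correct action with the effect they want?",
  "If the correct action is performed, will the user see progress being made?",
] as const;

interface WalkthroughStep {
  step: string;                     // e.g. "Add the product to the cart"
  answers: Record<string, string>;  // question -> observation or concern
}

function emptyStep(step: string): WalkthroughStep {
  const answers: Record<string, string> = {};
  for (const question of WALKTHROUGH_QUESTIONS) {
    answers[question] = ""; // to be filled in while walking through the flow
  }
  return { step, answers };
}

// Walking through the shopper flow described above:
const worksheet: WalkthroughStep[] = [
  "Search or browse for a product",
  "Review the product page",
  "Add the product to the cart",
  "Complete the checkout",
].map(emptyStep);
```

Every question that cannot be answered with a confident “yes” becomes a candidate issue for the report.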
When we started working with Hyphen (now Betterworks Engage), a US-based SaaS startup, the aim was to redesign the platform to make it more modern and nearly consumer-grade. With such a large interface, spanning tens of screens and hundreds of features, jumping straight into design would have been a false start.
We decided to start with a thorough UX review of the existing platform.
I went through the interface with my HR hat on (I actually got my MSc in HR Management) and explored all the nooks and crannies of the main user flows - employee survey creation, survey results analytics, and aggregated favorability dashboards across the whole company. Apart from the standard UI improvements, such as improving consistency, reorganizing the navigation menus, tidying up button labels, and removing dropdown menus where possible, I found much more.
It turned out that, with limited in-app help, users had trouble understanding the domain-specific language used in the app - favorability, heatmapping, driver analysis, sentiment.
The audit resulted in a thorough redesign of the interface, which in turn enabled Hyphen to win new clients and onboard them much more easily. Furthermore, in 2021 Hyphen was acquired by Betterworks (and renamed Betterworks Engage) and continues to be developed.
Identifying areas for improvement is quite intuitive; however, we all have our biases, and even an experienced UX practitioner may miss a thing or two. After all, they are highly tech-savvy, which you can’t say about many regular internet users.
While an expert audit might yield a hundred potential usability improvements, many of them may still lack customer insight. That’s why I always recommend combining several different research methods to get a full picture of the user experience. This concept is better known as research triangulation.
Research triangulation means using multiple methods to address one research question.
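A minimal way to picture triangulation in practice - assuming findings from each method are kept in a comparable format, which is my assumption rather than a formal requirement - is to check which issues are corroborated by more than one source:

```typescript
// Sketch of triangulating findings gathered with different research methods.
// The method names and issue labels are illustrative.
type Method = "expert review" | "analytics" | "user testing";

interface SourcedFinding {
  issue: string;   // short label of the problem, normalized across methods
  method: Method;  // which research method surfaced it
}

// Return the issues confirmed by two or more independent methods.
function corroborated(findings: SourcedFinding[]): Map<string, Set<Method>> {
  const byIssue = new Map<string, Set<Method>>();
  for (const { issue, method } of findings) {
    const methods = byIssue.get(issue) ?? new Set<Method>();
    methods.add(method);
    byIssue.set(issue, methods);
  }
  return new Map([...byIssue].filter(([, methods]) => methods.size >= 2));
}

const findings: SourcedFinding[] = [
  { issue: "Checkout: unclear shipping costs", method: "expert review" },
  { issue: "Checkout: unclear shipping costs", method: "analytics" },
  { issue: "Navigation: too many dropdown menus", method: "expert review" },
  { issue: "Checkout: unclear shipping costs", method: "user testing" },
];

console.log(corroborated(findings));
// -> Map { "Checkout: unclear shipping costs" => Set { all three methods } }
```

The issues that every method points at independently are the ones you can fix with the most confidence.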
User testing - testing a product with potential end users instead of experts - is one of the staple methods in user experience design. You observe users performing a set of core tasks in the product and note down what they do, what they say, and what they show with their body language. The output is rich data that takes a few hours to analyze, but the results are extremely valuable.
You get detailed feedback from your product’s target group, and it’s not just what they declare - you actually see them in action!
If you feel the need to thoroughly evaluate your current user experience - or even a prototype of your product - there are many ways to find areas for improvement. Whether it’s web analytics, expert UX reviews, or user testing, you will always find valuable insights in a way that suits your team and budget. However, combining multiple data sources creates true synergy and the certainty that whatever changes you make to the product will be informed by rich, objective data. And nothing beats objectivity in user experience design.
After finishing a UX review project, teams often end up with a long list of prioritized improvements and recommendations, usually in the form of a written report. The next steps depend on the business setup - some organizations have internal tech teams that can fit the recommendations into the next couple of sprints, while others hire a specialized software company with strong UX design competence (preferably the same one that did the review). In my experience, it’s always good to start with a few quick wins: small improvements that yield disproportionately large results.
Quick wins excite the whole team and the customers - and we all know that both can be a bit apprehensive about big changes happening suddenly. A digital product is never finished, but once you’ve gained momentum from a UX review, make sure to start improving straight away.
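One simple way to pick those quick wins - assuming each recommendation gets rough impact and effort estimates, which is my framing rather than a formal method from this article - is to sort by the impact-to-effort ratio:

```typescript
// Sketch: selecting quick wins from a list of recommendations.
// The 1-5 scales and the ratio threshold are illustrative assumptions.
interface Recommendation {
  title: string;
  impact: number; // estimated benefit, 1 (low) to 5 (high)
  effort: number; // estimated cost, 1 (low) to 5 (high)
}

function quickWins(recs: Recommendation[], minRatio = 2): Recommendation[] {
  return recs
    .filter((r) => r.impact / r.effort >= minRatio)
    .sort((a, b) => b.impact / b.effort - a.impact / a.effort);
}

const backlog: Recommendation[] = [
  { title: "Rewrite confusing button labels", impact: 4, effort: 1 },
  { title: "Redesign the whole navigation", impact: 5, effort: 5 },
  { title: "Add inline help for domain terms", impact: 4, effort: 2 },
];

console.log(quickWins(backlog).map((r) => r.title));
// -> ["Rewrite confusing button labels", "Add inline help for domain terms"]
```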