In UX design bootcamps, we are often taught to rely on qualitative research more than on quantitative data.
This makes sense: qualitative information helps foster empathy with our users, giving us a direct understanding of what does and doesn’t work for them, while quantitative information can feel impersonal and be harder to interpret.
At OVO we take a balanced approach, combining qualitative and quantitative research to create better user experiences and meet business goals.
Using data well
I work in OVO’s Digital Support Experience team; we’re building an online Help Centre so our customers can resolve the majority of their issues without needing to contact us directly. We use quantitative data to experiment with new ideas and measure the impact of features we release. Data also gives us a shared language with our stakeholders to help make decisions.

Quantitative data has limits, just like qualitative data. It’s a powerful way to expose problems and their scale, but less useful to explain why an issue happens or how to fix it.
So we look at our product through both quantitative and qualitative lenses. This gives us a more rigorous, measurable way to assess our designs, and it informs what we do next.
The key is using data in the right way, at the right time.
Quantitative data helps focus qualitative research
Our team’s focus is helping people resolve straightforward issues themselves, freeing up our phone lines for more complex problems. This is a key business need as we continue to grow and expand internationally without reducing the quality of our customer support.
We keep a record of Help Centre visitor sessions using Fullstory (anonymously, of course!), and this helps shape the aims of our qualitative research. Recently, we analysed 90 randomly sampled sessions to assess the effectiveness of our self-service features. For each feature, we tracked the proportion of visitors who used it without going on to click the buttons that reveal our contact details. This helped us understand which features lead to more successful self-help.
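For illustration, here’s a minimal sketch of that kind of per-feature analysis, assuming each session record simply notes which features were used and whether the visitor went on to click a contact button. The field names and data shape are hypothetical, not our actual Fullstory export:

```python
from collections import defaultdict

# Hypothetical session records: which self-service features a visitor used,
# and whether they went on to click a contact button in the same session.
sessions = [
    {"features_used": ["balance_checker"], "clicked_contact": False},
    {"features_used": ["meter_reading_guide"], "clicked_contact": True},
    {"features_used": ["balance_checker", "faq_search"], "clicked_contact": False},
    # ... one record per sampled session
]

used = defaultdict(int)         # sessions in which the feature was used
self_served = defaultdict(int)  # of those, sessions with no contact click

for session in sessions:
    for feature in session["features_used"]:
        used[feature] += 1
        if not session["clicked_contact"]:
            self_served[feature] += 1

# Self-help success rate per feature: used the feature, didn't contact us.
for feature, total in used.items():
    rate = self_served[feature] / total
    print(f"{feature}: {rate:.0%} of {total} sessions resolved without contact")
```

Features with a noticeably lower rate are the ones worth putting in front of users in qualitative sessions.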

This data gave us a good starting point for qualitative research to test our user journeys. To find out why certain features had lower self-help success rates, we focused our time and questions on just these features. Talking to our users and getting feedback helped us to create a backlog of ideas to optimise underperforming features, which will hopefully improve the performance of the Help Centre as a whole.
Quantitative data from A/B tests shows whether your designs work
In an ideal world, we’d implement a new feature and the data would tell us if the design was successful or not.
In the real world, when designing for websites with large audiences and maintained by multiple product teams, other factors—such as unexpected surges of new customers, or changes in other parts of key user journeys—can impact our product’s performance. This makes it hard to judge if a design has achieved its goal.
A/B testing can give us more clarity.
From working on Help Centres for several companies, I’ve learnt that when customers need help, they often struggle to distinguish between raising a case with an agent and making a complaint. This creates artificially high complaint rates and increases the waiting time for other customers’ issues to be addressed.
To solve this challenge on a previous team, we changed the complaints process from writing an email to completing a step-by-step form which asked for the customer’s case number. This helped customers to understand they need to raise a case first, before making a complaint.
We conducted a small round of qualitative research to make sure the design was user-friendly and to build confidence that it would help us lower the artificially high complaint rate.
We knew that variations in new customer numbers could affect case and complaint metrics, and when we were ready to release the new design there happened to be a surge in new customers. So we set up an A/B test to control for these conditions and see whether the new feature made a difference.
The results showed that the new design helped users better differentiate between ‘Ask an Agent’ and ‘Make a Complaint’. This gave us confidence that the reduction in unnecessary complaints was the result of the new design, rather than being caused by other business factors.
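As a rough sketch of how a result like this can be sanity-checked, a two-proportion z-test compares the complaint rate in each variant. The counts below are invented for illustration; they are not our actual numbers:

```python
# Sketch: two-proportion z-test comparing complaint rates between A/B variants.
# Counts are made up for illustration; they are not OVO's real figures.
from statsmodels.stats.proportion import proportions_ztest

complaints = [120, 85]     # complaints raised in control (A) and variant (B)
visitors = [4000, 4000]    # visitors who reached the complaints entry point

z_stat, p_value = proportions_ztest(count=complaints, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A small p-value (e.g. below 0.05) suggests the drop in complaints in the
# variant is unlikely to be explained by chance alone.
```

A check like this helps rule out the possibility that a drop in complaints is just noise from the surge in new customers.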
Takeaway
Collecting and analysing quantitative data at scale is a valuable source of insight for your design process. It can act as a lens that sets clearer aims for your qualitative research and shows more precisely where your designs succeed and fail.