OVO Tech Blog

Our journey navigating the technosphere




How I Layered my UX Research


As a user experience designer, I’ve always layered my work. Overlapping, refining and designing in chunks. For my first major project at Kaluza I asked myself, could I layer my research? What I discovered was an insanely easy way to squeeze more out of card sorts and workshops.

Note: italicised, underlined text means there's a definition further down the page.

The context

Smart Ops is a piece of software used by agents at OVO Energy to work with smart meters. It presents meter reads, supply info and activity tracking. And it empowers people to interact with a meter directly for actions such as changing how often a meter delivers a read.

At Kaluza, we quickly built the tool off the back of a requirement. As it grew, little time was available to reflect on how user-friendly it was. Feedback informed us that it was laborious for users to find the right thing quickly and important details were sometimes hidden. So we set out to give the Smart Ops tool a much-needed user experience makeover.

i) agents: for the purpose of this article, agents will refer to a range of customer service and technical specialists who assist customers and engineers in keeping smart meters working

A smart meter as viewed in Smart Ops

The outcome we needed

For this piece of work to succeed, we needed to design a Smart Ops UI that shows a person what they need, where they expect it, and empowers them to carry out day-to-day tasks without delay.

So we needed to understand which parts of the UI agents use most, and the mental models they bring to the tool:

i) UI: user interface, anything you look at or interact with on an app (e.g. a button)
i) mental model: a model of what users know (or think they know) about a system

The plan of action

In Kaluza’s Platform Design Team, we use nimble techniques to quickly gather and validate knowledge throughout the lifespan of a product. A lot of us follow methodology from Lean UX by Jeff Gothelf.

We had a close deadline on this project. Although we’d gather further feedback on the new design once live, we still needed to provide value and improve the user experience in our first version. The plan was:

  1. engage agents in a closed card sort to learn how often parts of the UI are used
  2. engage agents in an open card sort to reveal their mental models
  3. list every task carried out on Smart Ops by agents
  4. cross-reference tasks with card sorts to deepen empathy for the agents
  5. generate a new sitemap and validate/refine it with various specialists
  6. rebuild Smart Ops UI and measure feedback

i) card sort: participants sort cards containing information into groups. The results provide you with an insight into their mental model based on what they choose
i) closed card sort: participants sort cards into groups that are pre-defined (e.g. sorting food cards between fruit or vegetable)
i) open card sort: participants sort cards into custom groups they define (e.g. sorting food cards into memories they evoke)
i) sitemap: a diagram listing pages/elements in a system’s navigation

Sitemap for the win

At the core of this piece of work was the sitemap. Every time I learnt something new, it was expressed there. This kept my focus on the outcome of the project, squeezing actions out of insights. This sitemap would eventually be used to regroup everything in a new design.

The Smart Ops sitemap.

I'd created a sitemap of Smart Ops a few weeks after joining Kaluza and extended it here so every piece of information (e.g. a meter’s install date) was treated as its own page within the navigation. I wanted to be as granular as possible and avoid making any assumptions about how things should be grouped.

i) design system: guidance and rules on how a product should be designed. Can be as definitive as specific colours, sizes, layouts and styles

First step of research – unmoderated card sorts

Some activities seemed simple enough to be carried out remotely by agents in unmoderated sessions. This meant more flexibility and hopefully a higher rate of engagement. We used a tool called ProvenByUsers to share URLs to our card sorts. We set agents the following exercises:

Closed card sorts

  1. Sort these views by how often you use them: [frequently, infrequently, never]
  2. Sort these actions by how often you use them: [frequently, infrequently, never]

Open card sorts

  1. Sort these views in a way that makes sense to you: [custom groups]
  2. Sort these actions in a way that makes sense to you: [custom groups]

The closed card sorts surfaced some interesting areas of low agreement to be explored in moderated sessions. Participants reported using certain actions and views at different rates. For a card sort to give actionable results, you ideally want as many dominant trends as possible to provide certainty. The moderated follow-up sessions would aim to cement these findings.

The yellow highlighted text denotes cards with low agreement.
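To make "low agreement" concrete, here's a minimal sketch of how a per-card agreement score could be computed from closed card sort results. The cards, groups and threshold below are illustrative, not the real Smart Ops data:

```python
from collections import Counter

# Each participant's closed sort: card -> chosen group (illustrative data)
sorts = [
    {"Meter reads": "frequently", "Install date": "never", "Supply info": "frequently"},
    {"Meter reads": "frequently", "Install date": "infrequently", "Supply info": "frequently"},
    {"Meter reads": "frequently", "Install date": "never", "Supply info": "infrequently"},
]

def agreement(sorts, card):
    """Share of participants who placed `card` in its most popular group."""
    counts = Counter(s[card] for s in sorts)
    return counts.most_common(1)[0][1] / len(sorts)

for card in sorts[0]:
    score = agreement(sorts, card)
    flag = "low agreement" if score < 0.75 else "ok"
    print(f"{card}: {score:.0%} ({flag})")
```

Cards where no single group attracts a clear majority (here, anything under an assumed 75% cut-off) are the ones worth revisiting in a moderated session.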

In the open card sorts we’d cast the net too wide. The grouping criteria used by different agents were too varied to merge together. This stopped us from identifying any dominant trends, so we planned a complete redo in the workshops. At this point, we also flagged low agreement and frequency of use in our sitemap.

Low agreement and frequency of use flagged in our sitemap using Miro's card tagging.

i) views: used in this project to refer to individual pieces of information presented in a UI (such as the date a meter was installed)
i) low agreement: card sort results are too disparate to provide any dominant pattern of agreement
i) unmoderated: if a participant completes an activity alone, it is unmoderated. This is more flexible for participant routines
i) moderated: a moderated session features a facilitator present, who can answer questions and keep the participant(s) from straying from the brief

Next steps – moderated workshops

We invited agents to workshops featuring four activities:

1. Complete the closed card sort, grouping cards of low agreement from the unmoderated exercise.

A closed card sort before the agents sorted the cards of low agreement.
Low agreement could now be eliminated from our sitemap.

2. Sort views and actions in an open card sort as a team, to determine your collective mental model. This would provide us with early suggestions for a new sitemap.

An example of an open card sort from one workshop.
A new suggestive sitemap based on one workshop's open card sort.

3. List all the tasks that involve Smart Ops at some point.

An example of tasks listed by agents, during one of the workshops.

4. Sort the tasks into their suggested card sort groups depending on what Smart Ops UI is used. Change your card sort groups if necessary.

An example of tasks mapped to the open card sort, from one workshop.
One suggestive sitemap combined with tasks linked to their necessary areas of UI.

The final exercise was crucial. It gave agents the chance to test their groupings against the tasks they actually carry out. I took notes throughout these exercises, capturing reasons for choices and agent feature wish lists. I also made sure to validate the closed card sort results with events in Google Analytics where possible, to add more certainty to our results.
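The analytics cross-check can be sketched roughly like this: bucket observed event counts into the same frequency groups used in the closed card sort, then compare them with what agents reported. The action names, counts and thresholds here are all made up for illustration:

```python
# What agents said in the closed card sort (illustrative)
reported = {"Change read frequency": "frequently", "View firmware": "never"}

# Event counts pulled from analytics (illustrative numbers)
events_per_month = {"Change read frequency": 420, "View firmware": 3}

def bucket(count, hi=100, lo=10):
    """Map a raw event count onto the card sort's frequency groups."""
    return "frequently" if count >= hi else "infrequently" if count >= lo else "never"

for action, claim in reported.items():
    observed = bucket(events_per_month[action])
    match = "matches" if observed == claim else "differs from"
    print(f"{action}: analytics ({observed}) {match} card sort ({claim})")
```

Where the two disagree, you've found either a reporting bias or an analytics gap, and either one is worth a follow-up question.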

After the moderated exercises

I merged the suggested groupings from the open card sorts into one sitemap. Some groups had to be split, which had the potential to ‘split’ tasks across more than one group. So I ran through the sitemap with product managers, developers and two former OVO Energy Ops Agents who now work in our squad at Kaluza. They provide our Voice of Customer. All in all, this new sitemap had three reviews post-workshop.

The rough version of this sitemap, as presented to specialists in the squad.
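One common way to merge open card sorts from several workshops is a co-occurrence count: how often did each pair of cards land in the same group? Pairs that co-occur across most sorts are strong candidates to stay together in the final sitemap. A minimal sketch, with invented card and group names:

```python
from itertools import combinations

# Each workshop's open sort: group name -> cards placed in it (illustrative)
workshops = [
    {"Reads": ["Last read", "Read schedule"], "Setup": ["Install date", "Firmware"]},
    {"Data": ["Last read", "Read schedule", "Firmware"], "History": ["Install date"]},
]

def cooccurrence(workshops):
    """Count how often each pair of cards lands in the same group."""
    pairs = {}
    for sort in workshops:
        for cards in sort.values():
            for a, b in combinations(sorted(cards), 2):
                pairs[(a, b)] = pairs.get((a, b), 0) + 1
    return pairs

for (a, b), n in sorted(cooccurrence(workshops).items(), key=lambda kv: -kv[1]):
    print(f"{a} + {b}: grouped together in {n}/{len(workshops)} sorts")
```

Pairs grouped together in every sort can be merged with confidence; pairs that only sometimes co-occur mark the seams where a group may need splitting, which is exactly where tasks risk being split too.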

i) Ops Agent: a highly trained specialist in smart meters, who assists customer service agents and engineers in Smart-related issues
i) Voice of Customer: insights into needs of the customer, in their own words

Everyone was confident the new sitemap would not cause issues regarding the tasks. I designed a new version of Smart Ops and made sure to signpost these changes and invite agents to leave feedback. We are in the process of gathering this feedback from agents using the tool in their work.

A Figma design of the new Smart Ops navigation.

To sum up

To commit to a design, teams need certainty. They have to be sure they're not harming the quality of a product by building something based on an assumption. We reduced uncertainty by layering what we learned. Like pass the parcel, each stage of research removed another layer and brought us closer to the solution.

This approach, along with consolidating insights into an actionable format, let the project run smoothly. I’d recommend this approach to anyone, regardless of experience.

I’ve created a template of the layered card sort workshops we carried out here. Feel free to duplicate and use!


Michael Lever
