At Kaluza, our intelligent platform empowers OVO Energy’s customer care agents to support their customers. Because they’re focused on delivering an award-winning service, agents don’t always have the time to participate in research.
As product designers we thrive on research; our ideal world centres on observing and analysing real-life feedback. To reach busy agents without disrupting their workflow, we had to raise our game.
Why did we need to talk to agents?
We have a screen where it’s difficult to understand the difference between industry data and agent data. Industry data comes from a collection of third-party companies and industry regulators, whereas agent data is created or edited by customer care agents.
Our aim was to understand why agents needed to create or edit this data with two clear goals: informing how the platform could reconcile the data and identifying opportunities to improve the experience for agents.
To find out more we needed to talk to OVO Energy’s second-line agents. These agents are experts in their field, tasked with tackling a backlog of complex cases. Their time is precious and we were super-pleased to secure two seasoned participants for our research.
The in-depth agent interviews revealed we had a UX iceberg on our hands. The qualitative research had only scratched the surface: far more agents were using the data than we had first suspected. We needed to scale up our learnings to pinpoint which types of agents were editing the data, how they were using it, and for what types of customer cases.
We now had to adapt our approach to entice as many agents as possible to share their valuable knowledge with us, without interrupting their workflow. Thanks to Kaluza’s super-smart Nebula design system, we could easily insert a system message on the screen in question.
The message informed agents that we were considering removing edit access to the data, linking to a short survey where they could raise any concerns. The survey captured data about the agents: names, emails, roles/departments, which fields they needed to edit, and any comments or questions.
Within a couple of days, over thirty agents completed our questionnaire.
The learnings were magical; the kind that user-driven dreams are made of. We now understood which groups of agents were editing the page, how they were doing it, and why, all in a much wider context. These learnings raised more questions...
We identified a star responder to our survey: an agent who both provided a wealth of information and was in a team we could approach for an in-depth interview. This took our understanding to a whole new level, validating things we had already discovered and identifying new opportunities in agent processes and training documentation.
We also reached out to our initial respondents with follow-up questions, embedding a couple of short questions into an email to learn what agents did after editing the data. By capturing agent actions beyond the screen itself, any potential solution could consider the agent’s task as a whole, giving us the ability to suggest automations.
When 70% of our agents responded within a couple of hours, we were blown away.
Our research meant we could spot meaningful improvements without negatively impacting the agents’ ability to resolve cases. It evidenced ‘quick-win’ opportunities as well as longer-term benefits.
So, although our qualitative research was helpful, blending quantitative methods into our approach is what truly unlocked our insights and scope.
We could now accurately calculate the impact of our work and prioritise its value accordingly. Most importantly, without disrupting agents!