The discovery phase is often overlooked in agile delivery, but it can be incredibly valuable.
Discovery is like an investigation: a short period of time spent understanding the landscape of our users, the business, the technology and other external factors such as legislation. A good outcome of a discovery may be to do nothing at all, or it might be to set up a product team to proceed to alpha and test some hypotheses generated by the phase.
We’re beginning to implement more discovery phases in the work we do. We’ve had good success following a leaner approach of jumping straight into hypothesis testing; however, some problems need further investigation before we can define good hypotheses.
Energy Usage graphs
A common pattern among energy suppliers is to illustrate how much energy you’re using in a graph format.
But why are these useful? How are they used? What do people use them for? We were struggling to answer these questions. We had heard anecdotal comments like “I like looking for spikes”, but why is that important?
Ultimately we didn’t know what needs the graphs and the energy usage data were meeting.
We had started sketching ideas of how we might improve on how we display energy usage data. However we found that our conversations kept coming back to trying to improve the graph solely based on our opinions. This wasn’t helping us. Without knowing the user need(s) graphs meet, we couldn’t have an informed conversation about how we might improve the display of energy usage.
If we had attempted to start by researching our proposed sketches we would have struggled to design valid research. We needed to learn more about how graphs are used today in order to design contextual usability research to test our proposed ideas.
We saw this as an opportunity to conduct a discovery. We adopted the Government Digital Service (GDS) definition of a discovery. Before initiating this discovery I thought the aim of it would be something like “understand the needs people have when reading their energy usage data”. But it seems that the best discoveries are not centred around gaining understanding but focused on being able to make a decision; reading these tips from Will Myddelton really helped frame how to kick off this phase.
A cross-disciplinary team got together for a 90 minute workshop to thrash out what we wanted to learn, what we didn’t want to learn and what we needed to find out:

- Learn about the tasks that users complete with the help of their usage graphs.
- Understand what technical limitations there were.
- Find out which regulations we needed to be aware of.
We ended our workshop with this shared goal:
to understand the tasks our customers complete with their energy usage data so that we can decide if there is an opportunity for OVO to better help our customers.
Over the course of 3 weeks we:

- Found answers to the business and technical questions we had.
- Conducted depth interviews with OVO customers.
- Held a conversation on the OVO forum.
- Analysed the findings from our user research.
We intentionally sampled users who use their usage data more frequently than average. We hypothesised that by doing so we would learn about the advanced tasks that can be completed with energy usage data. In understanding these more advanced needs, we hoped we’d learn how we could make them simpler so more users could benefit.
Because research is a team sport it wasn’t just me, the researcher, conducting these depth interviews and analysing the findings. I was joined by different team members to listen and take notes during the research sessions, and then joined again to conduct the analysis.
This is important: it’s more effective than any playback of research can be at building a shared understanding of the user needs. It reduces bias in the research analysis, and exposing team members to research has been proven to directly improve the experience that we deliver.
Tasks that are completed with energy usage data
As with any discovery, we were surprised by what we found. We spoke with people who were able to complete a number of really sophisticated tasks with their energy usage data.
We heard how people used the graphs we provide to:
- Check that there is nothing untoward happening in their house.
- Control their monthly outgoings.
- Decide if there are any actions they can take to use less energy.
- Measure the impact a change made has had.
However, we also learnt that it’s not just about the graphs: we heard stories of how people collated their own data into alternative formats, or paired it with other sources of data, such as their solar panel outputs, to make more meaning from their usage.
Doing so helped people to:
- Make a decision about when to use an appliance.
- Get better informed estimates of what other deals would cost.
- Tune household appliances to get to the optimum settings.
- Ascertain whether something was worth investing in.
- Decide if they should change an appliance.
Understanding that this range of tasks can be, and needs to be, completed in the pursuit of energy efficiency is helping us define new hypotheses that we want to test. As suspected, it’s not really about making great graphs but about presenting the data in more usable ways.
Taking the time to really investigate an area so that we can define better hypotheses is certainly something that we’ll be doing more of. By understanding the tasks that users are completing with their energy usage data, we are having better conversations about what improved usage data could look like. Before, we were too heavily centred on trying to improve the graphs, which would not have had a great impact on meeting our user needs.
From this discovery we’ve learnt (maybe somewhat obviously) that trying to spin off a discovery on the side of the product team’s main focus is not the most effective way to build a shared understanding. We were left balancing this work alongside other competing team requirements. But this discovery has now given us a model on which to base and evolve the next one.