OVO Tech Blog

Our journey navigating the technosphere

Hope Thomas
Author

Product Lead at OVO


From Problem to Product

How we developed a new carbon cutting feature at OVO from a Product perspective



My name is Hope, I’m a Product Lead at OVO, and I work with the team that makes OVO Greenlight, OVO’s collection of carbon cutting tools.

I wanted to share a little bit about how one of our newest features, Energy Saving Challenges, came to be as an example of how our process works, and talk about the role of the Product Manager at each stage of the process. I hope this helps people understand a bit about some of the different things someone in a Product role might do to make sure that the team is maximising value for customers.

About the team

The OVO Greenlight team is a cross-functional team full of people with expertise in different domains, from data science to development, UX and marketing, so leaning on their expertise has been key to our success. Each member of the team is responsible for their own area, but shares responsibility with the whole team for the success of the product.

The team were working fully remotely for most of the time this feature was being developed, so all of the workshops mentioned happened remotely via Google Meet and Miro. You’ll notice a lot of collaborative sessions throughout the process. We default where possible to multiple shorter collaboration sessions when we work remotely rather than single longer sessions, as we’ve found this works better, both for people’s attention spans and for fitting around busy schedules.

Step 1: Research


Our ‘Energy Saving Challenges’ feature started with a piece of formative research. As a product person in the team, part of my role is to think about the vision for the product and where we are going, and then work with our UX researchers to decide which customer problem areas we need to know more about before we start. This is a key part of the process, as without understanding what problems customers have, we can’t make sure that we are solving them with the products we are building.

As with pretty much all of these stages, this is a collaborative process. In this case, it became clear that we didn’t understand enough about how people were going about reducing their energy use, and how the tools we made could support them. Our User Researcher suggested that we talk to users over a number of sessions to understand how they go about reducing their energy use, and what problems they encounter. We did this through interviews, observation and a diary study. Over a month we interviewed customers twice and observed them using tools which would help them understand their energy use, with the participants completing a diary study of what they did to reduce energy use in between.

For each of the user sessions, all of the members of the cross-functional team were able to attend, to get a better understanding of our members. It doesn’t matter if your job title is data scientist, product manager or developer, your choices will inform what we build, so it’s important that everyone knows who we are building for.

That being said, it’s absolutely key that the Product Manager in the team understands who their customers are, so I made sure I attended as many of the sessions as possible and contributed to the note taking.

Once all of the sessions were complete, we then had a collaborative session run by our User Researcher to understand what we had learnt from the research, and to make sure we understood the problems customers were experiencing. The Researcher then combined this insight with a number of other data sources, including external research and usability reviews of our existing product to create a list of customer problems.

The outcomes of this piece of research have been at the heart of a lot of the changes we’ve made. We identified six key user problems that have since helped us work out which work we need to do, and two of those six problems led directly to this feature.

Once we had our list of customer problems, our User Researcher ran a survey to understand how prevalent these problems were with a wider audience, so we could be confident that the problems we were solving were reflected in our wider member base. Seeing that they were, and that these two particular problems were echoed by a large number of customers, gave me the data I needed to decide that this was what we would focus on.

Step 2: Prioritisation

Once we had our list of customer problems, it was time to work out which of the six customer problems we wanted to solve, and how to solve them. This step was primarily run by Product, with the input of the whole team. The first part was running a workshop with the team, thinking about the problems and the various ways we could solve them.

We tried a new approach for this, looking back at the research and coming up with ‘How Might We’ (HMW) statements. Each member of the team took two of the statements and spent a minute thinking up potential solutions to them. We then swapped, and spent a minute adding ideas to the next two statements. We continued until we had all added new ideas to each statement, building on the ideas that others had before us.

What our 'How Might We' board looked like after we finished coming up with ideas

As the product person in the team, I then used the outputs from this session to prioritise which problems we could solve. The two HMW statements we prioritised, which led to this feature, were:

‘How might we help customers to take high impact actions rather than low impact ones?’

and

‘How might we make it easy for people to build habits?’

Both of these addressed key issues we had seen in user testing, and each had a number of potential solutions we could test that were achievable and potentially impactful, both for the user and for driving the team’s objectives.

Finally, I suggested a success measure for solving these customer problems and, after some feedback from the team and our main stakeholder, we agreed how work on this problem would feed into our quarterly OKRs.

Step 3: Design

Once we had agreed that habit building was the problem we would attempt to solve, our UX designer ran an ideation session for the team. She reminded us of the research and best practice, then we came up with ideas for the kinds of energy saving habits we could help members build, and put all of the ideas on a matrix of impact vs effort.

Our impact vs effort matrix

After the workshop, the UX designer and I identified two of the most promising ideas from the ideation and created hypothesis statements for each of them.

Each hypothesis statement set out the change we had decided to make, the value we expected it to drive, and the outcomes we expected if it was successful. It also included how we would test whether those outcomes had been achieved. This meant that we were all clear up front why we were building the feature, and how we would know whether it had been successful or not.

Our Hypothesis was:

If a member can join an energy saving challenge

Then they’ll be more likely to visit OVO Greenlight more often and reduce their energy usage

Because by joining a challenge they will feel like they are making a commitment and have info on what to do

This will help drive Engagement, Return Visits, Energy and Carbon Reduction

Our Hypothesis template

With the hypothesis agreed, the team then did a sketching session where everyone came up with ideas for how it could be implemented. The UX designer then took away these ideas and created wireframe designs of the three features, one of which was Energy Saving Challenges.

Finally, when the wireframing was done, our UX designer took the designs through user testing, seeing how people understood what they were being asked to do and making some tweaks and changes to the designs. She ran two rounds of testing, looking at how successful users were in each part of the tasks we had set them, and improving the areas that performed badly.

The initial wireframes

It’s worth noting that there are a lot of collaboration sessions in this phase, and we discussed as a team whether we were spending too much time on joint sessions rather than on our specialisms. Ultimately, we decided that in order to build the right thing in the right way, we needed a range of expertise, and although this took time at this point, it made sure we had team buy-in and avoided wasted work down the line on building the wrong thing.

Step 4: Build

Once the designs were complete, the developers who would be working on the feature talked them through with the UX designer, and started to break them down into tasks to complete.

As we talked about how the feature would be implemented, it became clear that there were two potential technical approaches: a more lightweight approach needing less development time, which could work while we tested the feature but would have to be replaced if it was successful, and a second, more fully featured version that we could test and then, if successful, launch to all customers straight away.

Our two options were tracking opt-ins with cookies on the front end, so that users on the same device would see the challenges they had opted in to, versus keeping track in a database of which people had joined which challenges.
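To make the trade-off a little more concrete, here’s a minimal TypeScript sketch of the two approaches. The names, shapes and the `membershipStore` persistence layer are illustrative assumptions, not our production code.

```typescript
// Option 1: lightweight, front-end only. Joined challenges live in a cookie,
// so they are only visible on the device where the member opted in.
function getCookie(name: string): string | undefined {
  const match = document.cookie
    .split("; ")
    .find((c) => c.startsWith(`${name}=`));
  return match ? decodeURIComponent(match.split("=")[1]) : undefined;
}

function setCookie(name: string, value: string): void {
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; max-age=31536000`;
}

function joinChallengeWithCookie(challengeId: string): void {
  const joined: string[] = JSON.parse(getCookie("joinedChallenges") ?? "[]");
  if (!joined.includes(challengeId)) {
    joined.push(challengeId);
    setCookie("joinedChallenges", JSON.stringify(joined));
  }
}

// Option 2: fully featured. Membership is stored against the account, so we
// can later report on who joined a challenge and how much energy they saved.
interface ChallengeMembership {
  accountId: string;
  challengeId: string;
  joinedAt: Date;
}

// Hypothetical persistence layer; in reality this would sit behind an API.
declare const membershipStore: {
  save(membership: ChallengeMembership): Promise<void>;
};

async function joinChallengeWithDatabase(
  accountId: string,
  challengeId: string
): Promise<void> {
  await membershipStore.save({ accountId, challengeId, joinedAt: new Date() });
}
```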

As a Product Manager, I definitely have a view on how I think things should be done, but ultimately it is up to the team. I made sure they were aware of my considerations - that we test as fast as possible, and that we answer the questions posed by the hypothesis that we had generated as part of the design phase. Ultimately, the team decided that in order to fully answer the hypothesis and tell if customers had saved energy, we would need to build the more fully featured version of the design so we could track which customers had joined and how much energy they had saved.

The tasks then went into development. I checked in with the team at daily standups to understand where we were and what issues the team had uncovered when developing this feature, as well as communicating regularly on Slack with the developers and designer to work out what should happen in any cases that hadn’t been covered in the original design, such as what would happen if we didn’t have all the information we needed to see whether someone had completed a challenge.
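As an example of the kind of edge case we had to agree on, here’s a hedged sketch of how challenge completion might be decided when usage data is missing. The names, the kWh comparison and the ‘inconclusive’ outcome are illustrative assumptions rather than the real implementation.

```typescript
type ChallengeOutcome = "completed" | "not-completed" | "inconclusive";

interface ChallengePeriodUsage {
  baselineKwh?: number;  // usage before the challenge, if we have it
  challengeKwh?: number; // usage during the challenge, if we have it
}

function challengeOutcome(usage: ChallengePeriodUsage): ChallengeOutcome {
  // If either reading is missing we can't judge the challenge either way,
  // so we report it as inconclusive rather than marking it as failed.
  if (usage.baselineKwh === undefined || usage.challengeKwh === undefined) {
    return "inconclusive";
  }
  return usage.challengeKwh < usage.baselineKwh ? "completed" : "not-completed";
}
```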

One of the screens from the final feature

Step 5: Hypothesis Testing

Of course, it’s not worth putting all of this effort into a feature and then not learning anything from it. Using our hypothesis template as a guide, we set the feature up to show to a small percentage of people looking at OVO Greenlight, so we could see whether it helped them to save energy or not. I monitored the test and realised that we needed more people to engage with it in order to get a robust result, so we made some tweaks to the experiment.
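For anyone curious how showing a feature to only a small percentage of visitors can work in practice, the sketch below shows one common technique: stable bucketing on an account identifier. It’s an assumption about the general approach rather than a description of the experimentation tooling we actually used, and the 5% figure is purely illustrative.

```typescript
import { createHash } from "crypto";

// Hash the account ID into a stable bucket from 0-99, so each member
// consistently sees (or doesn't see) the feature across visits.
function isInTestCohort(accountId: string, rolloutPercentage: number): boolean {
  const digest = createHash("sha256").update(accountId).digest();
  const bucket = digest.readUInt32BE(0) % 100;
  return bucket < rolloutPercentage;
}

// Example: show Energy Saving Challenges to 5% of accounts.
const showChallenges = isInTestCohort("account-123", 5);
```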

As it stands, we’re still getting data in on how effective the feature is for driving long term habits, but we are confident that customers are positively engaging with the feature and it is helping them understand where to reduce energy in the short term - the majority of customers in our test cohort saved energy over their challenge period. We’ll continue to monitor to make sure that this feature is helping people understand and reduce their energy usage in the long term, and its effect on engagement with our tools.

Step 6: Marketing

The OVO Greenlight team is lucky to have a Marketing Manager as part of the cross functional team, who works on helping our members access the tools that the team produce. Our Marketing Manager was involved throughout the ideation process for the feature, and so understood the aim of the feature and how it could benefit customers. Once we had initial indications that the feature was working and our hypothesis was correct, they included the feature in our communications with OVO members, so more people could discover and use the functionality, and we could gather more data on how the feature was performing.

Step 7: Continuous Improvement

We continue to monitor a number of metrics around the feature, such as how many people are looking at it and what percentage of them go on to commit to a challenge. We can already see scope for improvement here, which has given us some ideas on how to improve the feature. These will be incorporated into future prioritisation sessions as we look at how best to continuously improve our carbon cutting tools.
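As a simple illustration of the kind of metric we watch, the sketch below computes a view-to-join conversion rate; the interface name and the example numbers are made up for illustration.

```typescript
interface ChallengeFunnel {
  viewedChallenges: number; // members who saw the challenges screen
  joinedChallenge: number;  // members who went on to join a challenge
}

// Percentage of viewers who went on to commit to a challenge.
function joinConversionRate({ viewedChallenges, joinedChallenge }: ChallengeFunnel): number {
  return viewedChallenges === 0 ? 0 : (joinedChallenge / viewedChallenges) * 100;
}

// e.g. 1,200 viewers and 180 joins gives a 15% conversion rate.
console.log(joinConversionRate({ viewedChallenges: 1200, joinedChallenge: 180 }));
```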


I hope that gives you some insight into how a Product Manager can be involved in each stage of product development, and what their role is in trying to maximise customer value. If you’re interested in OVO’s carbon cutting tools, check out OVO Greenlight, or message me on LinkedIn if you have any questions.



