OVO Tech Blog

Rocket science in the smart home


Alex DeCastro

Alex DeCastro is an AI Engineer with VCharge, a subsidiary of OVO Energy that develops technologies for smart homes.

Posted by Alex DeCastro on .

You may not have heard of Kalman or of his filter, but the trackpad or mouse you're using to scroll down this page may well be benefiting from it. Other everyday applications of the Kalman filter that you may have used include satellite navigation devices, every smartphone, and many computer games.

Rudolf Kalman was a Hungarian-born scientist whose work in the 1960s contributed to the Apollo 11 mission that took Neil Armstrong to the Moon and (most importantly) brought him back. He received a National Medal of Science, presented by President Barack Obama, for that feat. The problem Kalman tackled in the 60s is an old question: how do you get accurate information out of inaccurate (noisy) data? Every day, scientists around the world devise new tricks to tackle that old question.

A filter in engineering parlance is a mathematical procedure to sift unwanted features, or noise, out of a stream of data. In Kalman's case that stream of data was a radio signal from which the approximate position of the Apollo spaceship would be indirectly estimated. When passing through Earth's atmosphere a radio signal would often be corrupted and therefore a mathematical way to extract signal from the noise was decidedly important to keep our space crew afloat.
[Image: courtesy of MathWorks.]

Let's fast forward 50 years in time, and look at another unexpected place where the Kalman filter is being used these days: to make storage heaters smarter.

Smarter storage heaters at VCharge

VCharge is a subsidiary of OVO Energy operating in the smart home space. One of our most popular products is the VCharge Dynamo, a control gadget that our field engineers retrofit to existing storage heaters to make them 'smarter'. The important question, then, is: how do you make a storage heater smarter?

Let's first understand the default behavior of storage heaters:

Storage heaters are very heavy because they are filled with bricks, and it is the bricks that store the heat. The idea is that the heaters run overnight on cheap-rate electricity (Economy 7 or Economy 10) to heat the bricks, which then retain that heat and release it gradually during the day.

There are two problems with this approach:

  • Storage heaters release much of their heat during the day, when people are out, so there is often not enough heat left by the evening, when temperatures are falling.
  • As a result, expensive daytime electricity has to be used to top up the heaters in the evening.

This is where the Dynamo comes in. The Dynamo smartens your storage heater by letting it listen to various data streams, such as weather forecasts, electricity market prices, temperature telemetry and user settings, and uses them to generate a heating strategy that is cheaper and keeps our users warm when they want to be warm. For instance, if I set the comfort settings of my living room to 21-23 degrees Celsius between 19:00 and 21:00, the heater will receive a list of instructions for when it should be turned on/off in order to meet those comfort requirements.
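To make the idea concrete, here is a minimal sketch of how a comfort window might be turned into on/off instructions. The names and the scheduling rule are entirely hypothetical, for illustration only; they are not the Dynamo's actual logic:

```python
from dataclasses import dataclass

@dataclass
class ComfortWindow:
    # Hypothetical representation of a user's comfort settings.
    start_hour: int   # e.g. 19 for 19:00
    end_hour: int     # e.g. 21 for 21:00
    min_temp: float   # lower comfort bound, Celsius
    max_temp: float   # upper comfort bound, Celsius

def naive_strategy(window, forecast_temp, preheat_hours=2):
    """Toy rule: switch the heater on a couple of hours before the
    comfort window (and during it) whenever the forecast room
    temperature is below the lower comfort bound."""
    instructions = []
    for hour in range(24):
        in_preheat = window.start_hour - preheat_hours <= hour < window.start_hour
        in_window = window.start_hour <= hour < window.end_hour
        needs_heat = forecast_temp[hour] < window.min_temp
        state = "on" if (in_preheat or in_window) and needs_heat else "off"
        instructions.append((hour, state))
    return instructions

window = ComfortWindow(start_hour=19, end_hour=21, min_temp=21.0, max_temp=23.0)
forecast = [16.0] * 24  # flat, pessimistic forecast, for illustration
print(naive_strategy(window, forecast)[17:21])
```

The real strategy also weighs electricity prices and the heater's charge state, but the shape of the output is the same: a list of timed on/off instructions.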

[Diagram: a typical electric storage heater model. Courtesy of the Open University.]

Back to Kalman

One of the most challenging tasks in creating a more intelligent storage heater is designing a model for how your living room warms up when you turn the heater on and cools down when you turn it off. Physics is our friend here, and people have been thinking about cooling laws since at least Isaac Newton's time (yes, the same gentleman who had an epiphany after seeing an apple fall from a tree).

One of the simplest models for how the ambient temperature of a room changes as a function of the power input and the weather outside is:

T(t+1) = T(t) + a * (T_out(t) - T(t)) + b * u(t)

Even if your A-level maths is rusty you can tag along. The equation says that the temperature of the room a short time step from now can be inferred from the temperature now (T(t)), whether the heater is on or off (u(t) is 1 or 0), and how cold or warm it is outside (T_out(t)).

So in theory, if you know the weather, the heater state and the current temperature of the room, you should be able to estimate the temperature of the room in the future. Not so fast. If you look back at the equation above, there are two parameters -- call them a and b -- that embody the physics of the room/heater system and that cannot be directly measured, at least not without peeping at someone's heater or living room. We don't want that.
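If we did know those parameters, simulating the room would be a few lines of Python. The sketch below uses a discrete-time model of the kind described above, T_next = T + a*(T_out - T) + b*u, with made-up values for a and b chosen purely for illustration:

```python
def step(T_room, T_out, heater_on, a=0.1, b=2.0):
    """One time step of the simple thermal model:
    T_next = T + a*(T_out - T) + b*u, where u is 1 if the heater
    is on and 0 otherwise. The values of a and b are illustrative
    guesses, not measured parameters."""
    u = 1.0 if heater_on else 0.0
    return T_room + a * (T_out - T_room) + b * u

# Simulate two hours in 10-minute steps: heater on, then off.
T = 18.0
trace = []
for t in range(12):
    T = step(T, T_out=5.0, heater_on=(t < 6))
    trace.append(round(T, 2))
print(trace)
```

Running this shows the room warming while the heater is on and then relaxing back towards the outside temperature once it switches off, which is exactly the behavior the real parameters have to capture.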


This is where the notion of grey-box modeling is useful. We have only partial knowledge of the system, so the remaining parameters of our thermal model need to be estimated indirectly. Being able to estimate how your heater warms up and cools down allows us here at OVO to determine whether your comfort settings are being met, and whether we can meet them while also saving you energy, since we can simulate when your heater should be turned on or off.

Back to filters: the Kalman filter we discussed earlier is really useful in this sort of partial-knowledge scenario -- where some variables of the system can be directly observed (e.g. via the temperature sensors of the room and the heater) and others cannot, and therefore need to be estimated. The more temperature telemetry we collect, the more accurate the models we can build and the better the service we can provide. For the AI enthusiasts out there, you can think of the Kalman filter as a type of unsupervised algorithm for learning latent (hidden) variables in time-series models. It is also a recursive generalization of the least-squares method that astronomers have used for over two centuries to track the trajectories of comets.

Try the Kalman filter yourself

Let's say you're listening to a stream of numbers (you can generate one at the random.org website). The list goes on and on: 2, 10, 58, and so on. The numbers are random and bounded between 1 and 100.
Let's say you decide to summarize the numbers you've heard so far by computing their average. For the first three that average is
m_old = (2 + 10 + 58)/3 = 23.33.
If you receive a new number, say 65, how do you recalculate the average? Most people would go ahead and compute
m_new = (2 + 10 + 58 + 65)/4 = 33.75
Now, how would you use your previous knowledge of the average to compute the new one? It turns out (you can verify it yourself) that the new average is:
m_new = m_old + (65 - m_old)/4.
Do you see how this generalizes? We've updated our knowledge of the average without recomputing it over the whole stream. This is what makes online/real-time learning so efficient. If you're into coding, try it yourself at home:

from time import sleep
from random import randint

time = 0
x0 = 0

while True:
    # draw the next number from the stream
    y = randint(1, 100)
    # update the running average using only the old average
    # and the new number: m_new = m_old + (y - m_old)/n
    x1 = x0 + (y - x0) / (time + 1)
    # the new average becomes the old one for the next iteration
    x0 = x1
    print('average now:', x1, ', time:', time)
    time += 1
    # slow down the printing in the terminal
    sleep(1)

Every second, an updated average will pop up in your terminal. Where do you think that average will end up? (Since the numbers are uniform between 1 and 100, it will settle around 50.5.)

Kalman for a storage heater (back to rocket science)

You may still be wondering how this framework applies to storage heaters. The typical information flow in the Kalman filter looks as follows (courtesy of bzar's blog):
The mechanics of the information workflow in a typical cycle of Kalman filtering are beyond the scope of this blog. However, the gist of the idea is still accessible to us.
As you can see in the diagram, there are two phases:

  1. prediction
  2. update

If you look back at the update line in the code snippet above, `x1 = x0 + (y - x0)/(time + 1)`, you'll see that the new estimate (for the average) was a combination of the old estimate and the new data. That's the crux of Kalman filtering: new estimates are a combination of old estimates and new data, with weights proportional to our degree of belief in the model and in the data source.

In the case of heaters and rooms, when new telemetry comes in we weigh both the model predictions and the new measurements to create an updated estimate. We don't waste data.
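That predict/update cycle fits in a few lines for a single variable. The sketch below is a scalar Kalman filter tracking room temperature with a deliberately simple "temperature persists" prediction model and illustrative noise values; it is a toy, not our production filter:

```python
def kalman_step(x, P, z, Q=0.05, R=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: previous temperature estimate and its variance.
    z: new (noisy) sensor reading.
    Q, R: assumed process and measurement noise variances."""
    # Predict: the toy model says "temperature persists", so the
    # estimate carries over and our uncertainty grows by Q.
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement. The Kalman gain K
    # weights them by our relative trust in model vs. sensor.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 18.0, 4.0  # initial guess and its (large) variance
for z in [20.1, 19.7, 20.4, 19.9, 20.0]:  # noisy readings near 20
    x, P = kalman_step(x, P, z)
print(round(x, 2), round(P, 3))
```

After a handful of readings the estimate converges towards the true temperature and its variance shrinks, just as the running-average example did, except that here the gain K adapts at every step instead of being fixed at 1/n.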

For example, for a typical heater, after estimating some of the hidden physical parameters, we can obtain a fairly good forecast of how the heater will respond to actuation (power on/off commands):

[Plot: forecast of heater temperature response over 24 hours.]

There are 48 ticks along the x-axis because we use half-hourly intervals. The further out the forecast, the less accurate it becomes; for the first few hours the trends are accurate to within +/- 1 °C. Not bad!

Final remarks

I hope the reader finishes this blog convinced that a 50-year-old algorithm is present in so many layers of our daily lives, and that it continues to help make life more secure, practical and comfortable. We here at OVO Energy/VCharge are continuing that tradition by leveraging this and similar technologies to help create the smart home. Thanks for your time, and Happy Holidays!
