OVO Tech Blog

Our journey navigating the technosphere




Monorepo Microservices in Go

A retrospective of one team's first large scale cloud project, using Golang, IaC and Kubernetes.

We recently completed a sizable piece of work to roll out a brand new service, the PSR (Priority Service Register) service. This post will go into how the service was designed, rolled out, and lessons learned from the process.

This post is written by and targeted at developers. It is helpful but not essential to have some knowledge of Golang, Kubernetes, and infrastructure as code (IaC).

Firstly, a bit of background.

What is the Priority Service Register?

First things first, what is the Priority Service Register, and why do OVO need a service to support it?

The Priority Service Register is a register for vulnerable customers, so that they can be supported in vulnerable situations. For example, a person with electrical medical equipment may need knowledge of planned outages.

All energy suppliers are required to provide this service. You can find out more about the Priority Service Register here.

Why did we need a new service?

We already provide a PSR service to our customers. However, we are in the process of migrating our members to a new platform that allows us to better serve them. New platforms are built iteratively, and so to enable the migration of all customers to the new platform, we would need to provide the PSR on this new platform.

The existing service could be used to provision customers on the PSR, but a longer term solution was required. Furthermore, because the existing service was tied into our CRM platform, we wanted to create the service in a more robust and scalable environment.

Architecting a microservice

These were the high-level requirements for the service:

From these requirements, we structured the following high-level architecture:

One of the key objectives for the developers was to make this as maintainable as possible, whilst also allowing this to be packaged as a service that could be deployed by anyone.

On maintainability, we decided to develop the PSR service in monorepo format, whilst breaking the separate components into their own microservices. A monorepo is a single repository containing multiple related parts. For the PSR service this makes perfect sense, as it allows us to package the service as one entity, without adding unnecessary complexity and coupling in the separate parts of said service.
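As an illustration, such a monorepo might be laid out along these lines (directory names are hypothetical, not our exact structure):

```
psr-service/
├── api/            # REST API for creating and retrieving PSRs
├── consumer/       # event bus consumers
├── cron/           # scheduled jobs
├── internal/psr/   # shared PSR schema and helpers
├── helm/           # shared Helm charts and per-environment values
├── terraform/      # IaC for the whole service
└── .circleci/      # a single CI pipeline for all components
```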

The individual components were:

Most of these components shared data, for example the schema for a PSR. Rather than define this everywhere and keep it in sync across the microservices, it made sense to reference it in one place. We have a core Golang library that we also reference, and found that with Golang's "early-access" module support, constantly having to keep it up to date caused many issues.
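To make the shared-schema idea concrete, here is a hedged sketch of a PSR record type that every microservice could import from a single shared package rather than redefine. The field names are hypothetical, not the real schema:

```go
package main

import (
	"fmt"
	"time"
)

// In the real monorepo these types would live in one shared package
// (e.g. internal/psr) imported by the API, the event consumers, and
// the cron jobs, so the schema is defined exactly once.

// Need is a single vulnerability category a customer can register,
// e.g. dependence on electrical medical equipment.
type Need string

// Record is the canonical PSR entry shared across the microservices.
type Record struct {
	CustomerID string    `json:"customerId"`
	Needs      []Need    `json:"needs"`
	UpdatedAt  time.Time `json:"updatedAt"`
}

// HasNeed reports whether the record includes the given need.
func (r Record) HasNeed(n Need) bool {
	for _, need := range r.Needs {
		if need == n {
			return true
		}
	}
	return false
}

func main() {
	r := Record{CustomerID: "c-1", Needs: []Need{"medical-equipment"}, UpdatedAt: time.Now()}
	fmt.Println(r.HasNeed("medical-equipment")) // prints "true"
}
```

Every service compiles against the same struct, so a schema change is made once and the compiler flags every consumer that needs updating.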

The other benefit of this approach was in the CI pipeline. At one point, we had 5 or 6 almost identical CircleCI configurations amongst the microservices, as well as separate-yet-similar Helm, Terraform, and Docker configurations. Combining these all into one yielded much less duplicated boilerplate, and an ease in rolling out any improvements or patches to these components. Given the individual parts wouldn't provide much value without one another, it made sense to deploy them "as one".

On deployment by anyone, we wanted our architecting process to be centred around the principle that any other team could pick up, understand, and deploy the service with little hassle. For this, we would use a combination of Terraform and Helm to automatically provision the service for whoever wanted to pick it up - more details on these later.


As for platform, we chose Google Cloud Platform, and specifically its Kubernetes solution. This would allow us to create Dockerised images that we could deploy, the added benefit being that by creating Docker containers as the output of our build process, we could package our services in a manner that makes them transferable should we ever need to change to a different platform.
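As a sketch of that build output, a multi-stage Dockerfile for one of the Go services might look like this (paths, versions, and names are illustrative):

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/api ./cmd/api

# Runtime stage: a minimal image containing only the binary,
# deployable to GKE or any other container platform
FROM alpine:3.9
COPY --from=build /bin/api /bin/api
ENTRYPOINT ["/bin/api"]
```

The resulting image carries no toolchain, just the binary, which keeps it small and portable across platforms.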

The final key decision would be which language to create the microservice in. Our one hard requirement was that it had to be supported on GCP. We decided on Golang, as it was a language we've had some experience with, is well supported by Google, and gives us everything we need.

Infrastructure as code

One of the first steps we take when spinning up a new service is to define the infrastructure, and "scaffolding" around the service we're building. If you are creating a service that could be spun up by any team without them needing to know step-by-step setups for each tiny component, it makes most sense to define your infrastructure as code, or IaC.

You can think of IaC as defining what you want, then handing that definition to a provider that does all the legwork for you. A good IaC provider will handle creation from scratch, incremental upgrades, and teardown of your services.

IaC can go beyond just defining infrastructure for individual services, and could be used to define all of the infrastructure for your organisation. A system with as complex an infrastructure as Facebook or Amazon.com could be defined entirely using IaC - and they are for the most part.

IaC provides a number of other benefits:

For us, that came in two parts.

Helm for Kubernetes

Our target system, Kubernetes, has many working parts, the scope of which is well beyond this post. Consider that you need at least a cluster and some nodes. Consider further that a complex application with REST endpoints, communication with an event bus, and cron jobs running what are essentially lambdas has a significant number of moving parts.

We used Helm to define the infrastructure we wanted to set up in Kubernetes. Helm allows you to define things such as containers, pods, and cron schedules. Coupled with this are your configuration values, which can be defined separately for production and development environments. And because a few of our microservices were very similar (for example we had two event consumers), once you've written one set of Helm files, it's simple to port them for reuse elsewhere.

Helm allows you to automatically tag components with the current release, which lets us publish images with specific tags. We tied this into our CI pipeline, so that we can automatically point Kubernetes to the latest released image. This isn't best practice, as it continuously tags different images with the same label, in this case "LATEST". Ideally you'd tag with the Git hash, or some other unique identifier, to allow rollback and auditing. Lessons learned!

As an example, this is what the Helm charts looked like for one of the microservices, which defined the REST API endpoints for retrieving or creating PSRs:
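In spirit, the chart's deployment template looked something like the following - treat this as an illustrative sketch, not our production chart; the names and values are made up:

```yaml
# templates/deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
  labels:
    release: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-api
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-api
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080
```

The `{{ .Values.* }}` placeholders are filled from per-environment values files, which is how the same template serves both development and production.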

Once values are passed in, it's really as simple as building our Docker image, pushing it to Google's container registry, referencing it in Helm, then running a couple of Helm commands to deploy it. We take this one step further by defining these steps in CircleCI, which also allows us to plug into our version-control and release processes.
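Sketching those CI steps as a CircleCI job (the image names, project, and paths here are hypothetical):

```yaml
# .circleci/config.yml (illustrative sketch)
version: 2
jobs:
  deploy:
    docker:
      - image: google/cloud-sdk
    steps:
      - checkout
      - setup_remote_docker
      # Build the image and push it to Google's container registry
      - run: docker build -t eu.gcr.io/psr-project/psr-api:latest .
      - run: docker push eu.gcr.io/psr-project/psr-api:latest
      # Deploy (or upgrade) the Helm release pointing at that image
      - run: helm upgrade --install psr ./helm -f ./helm/values-prod.yaml
```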

The infrastructure for the application is defined. But we can take this further.


Terraform

Helm allows us to define and create our infrastructure specifically in Kubernetes. Terraform allows us to define and create our infrastructure almost anywhere. In fact, you can even Terraform your Helm release, as we did in this service.

Terraform can go as far as defining the architecture and services for an entire organisation. Imagine the steps it would take to terraform a planet - stabilising the atmosphere, creating a balanced ecosystem, creating water, plant-life, and so on; Terraform itself has the power to do that for your cloud infrastructure.

As I'm raving perhaps a bit too hard about it, disclaimer - other IaC tools do exist. We just happen to like Terraform.

This is where having the application logic defined in a monorepo comes in handy. At the base of our monorepo, we defined a single Terraform folder, containing Terraform files for the parts we cared about. To demonstrate its usefulness, here's a rundown of the files we defined, and their purpose:

With all this defined, when terraform plan is run, it will show exactly what Terraform plans to create or modify (or destroy, though we don't run this). Running terraform apply has Terraform carry out the plan, automatically provisioning your requested services. Terraform maintains the current state of the system, allowing it to act appropriately, only performing the required actions.
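As an illustration of the shape of those files - the resource names and values below are hypothetical, not our actual configuration - a cluster and a Terraformed Helm release look like:

```hcl
# main.tf (illustrative sketch)
variable "image_tag" {}

provider "google" {
  project = "psr-project"   # hypothetical project name
  region  = "europe-west1"
}

# The GKE cluster the service runs on
resource "google_container_cluster" "psr" {
  name               = "psr-cluster"
  initial_node_count = 3
}

# Terraforming the Helm release itself
resource "helm_release" "psr" {
  name  = "psr"
  chart = "./helm"

  set {
    name  = "image.tag"
    value = var.image_tag
  }
}
```

Running `terraform plan` against this shows the cluster and release to be created; `terraform apply` then provisions them and records the state for future runs.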

Terraform is not a magic solution for spinning up services instantly, but it is very powerful. One thing it can't do is write your services for you.


Golang

Golang is a relatively new programming language to our team, and yet has been quickly adopted as our language of choice.

Golang promotes a functional programming approach, though "objects" are supported. Typically you will end up with some hybrid of the two, passing simple struct pointers around to mutate state. One downside of this hybrid approach is that no two libraries seem to do things the same way. Some expect you to pass pointers to empty slices (arrays), while others require a slice pre-allocated to the correct size. We found this while working with one particular library, which caused a lot of stress, particularly as the functions were not well documented.
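To make the slice point concrete, here is a sketch (with hypothetical function names) of the two calling conventions we ran into: one style appends to a slice you pass by pointer, the other expects you to pre-allocate the slice to the right length and fills it in place:

```go
package main

import "fmt"

// appendReadings follows the "pass a pointer, we append" style:
// the callee grows the slice for you, so an empty slice is fine.
func appendReadings(dst *[]int, n int) {
	for i := 0; i < n; i++ {
		*dst = append(*dst, i)
	}
}

// fillReadings follows the "pre-allocate, we fill" style:
// the callee assumes dst already has the correct length.
func fillReadings(dst []int) {
	for i := range dst {
		dst[i] = i
	}
}

func main() {
	var a []int // nil slice is fine for the appending style
	appendReadings(&a, 3)

	b := make([]int, 3) // caller must size this correctly up front
	fillReadings(b)

	fmt.Println(a, b) // prints "[0 1 2] [0 1 2]"
}
```

Both calls produce the same result, but mixing up the two styles - passing an empty slice to a function that expects a pre-sized one - fails silently or panics, which is exactly the stress described above.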

I would describe it as something of a cross between C and JavaScript - the power of the former, with the simplicity and flexibility of the latter. It is built for performance, and doesn't take any nonsense; one of Golang's quirks is its hard requirement that public properties be commented correctly, and an expectation that each file has a unit test file associated with it.

Yes, it takes a while to get used to these nuances. But once you're there, it's really simple to spin things up quickly. Go does things its way, and it forces you to do it too. You can't argue that this doesn't create consistency in code, as well as enforcing solid design principles, but we've had a few cases of "that's a stupid way of doing things" uttered on the team (toned down from what we actually said).
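One such enforced nuance in concrete form: the linter expects every exported identifier to carry a doc comment beginning with that identifier's name (the identifiers below are made up for illustration):

```go
package main

import "fmt"

// MaxNeeds is the maximum number of needs a single record may hold.
// The linter flags any exported identifier whose comment is missing
// or does not start with the identifier's name.
const MaxNeeds = 10

// Greet returns a greeting for the given name.
func Greet(name string) string {
	return "Hello, " + name
}

func main() {
	fmt.Println(Greet("gopher")) // prints "Hello, gopher"
}
```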

As a team, we'd actually built up a "core library" of common functionality we could reuse across services. The benefits of this approach are files with good test coverage, a significant reduction in boilerplate, and a standardised way of approaching problems. With this core, we were able to use a standard set of tools for accessing the likes of Datastore and Apache Kafka. As a side-effect, team members were able to work on separate parts of the service, port code into our core library where necessary, then share knowledge with other developers. This meant we only ever needed to solve a problem once, and didn't need to refactor the same boilerplate logic across multiple subrepos whenever a minor issue was solved or performance improvement made.

Go has mature libraries for most things Google, including GCP, so we were mostly covered and able to use Google-developed packages for the majority of tasks. We also used open-source libraries from the likes of Uber and LinkedIn. OVO itself runs regular open-source days to help contribute to these projects, and the usefulness and power of the open-source community is something we endeavor to be a part of.

Go isn't the finished package. It is missing a few features - for example, its module support is limited and considered beta right now, and generics are a long-touted missing feature. If you were thinking of giving Go a... go, I can recommend A Tour Of Go, which takes you through all the essentials, whilst also providing a playground to mess around in as you learn.

Lessons learnt and "do's and don'ts"

The project in total lasted one quarter, and was a great success for our team, who started as relative noobs in the IaC and Golang realms. And it's worth noting off the bat - we could have done this any way we wanted. One of the most fantastic things about working at OVO is the freedom to do things the way you want. We weren't forced to use a particular language, and we as a team were responsible for ensuring the product we delivered was quality.

With that said, here are a few lessons learnt from the project:


This has been a review of one particular monorepo project at OVO, and has hopefully provided some insight and starting blocks for those looking to better architect their cloud solutions.

We are constantly learning and trying new technologies at OVO. The lessons learned and retrospective above are from a team of developers who started out with little to no knowledge of creating a project of this scale and complexity with the tools we wanted to use. As such, this is by no means a "definitive guide", but hopefully helps avoid at least one or two gotchas.

As I've stated throughout, none of the solutions provided above are silver bullets. The team sat down and refined the requirements of the service early on to determine what we would need to build. The decision to create a monorepo came after experiencing pain with multiple separate repositories. The key takeaway is to always use the right tools for the job; it may sound obvious, but it's surprising how often people stick to something out of familiarity or loyalty.

All in all, the tools we used above are highly recommended, and resulted in a service which is live today, delivered on time, and is robust and trustworthy. Here's to the next one!

Gopher image by Renee French, licensed under Creative Commons 3.0 Attributions license.


Chris Waldron
