OVO Tech Blog

Our journey navigating the technosphere

Cloud Networking

A Shared Network

Connecting cloud projects to each other and to on-prem services is a common problem for organisations. At OVO we use the 10.0.0.0/8 CIDR block for our internal network. This range is split into subnets for each office, AWS or GCP project, and on-prem network.

When a new cloud project wants to join the network, it is allocated a block from the correct range for its project type (AWS/GCP/Azure).
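
For illustration, checking that a candidate block sits inside 10.0.0.0/8 and doesn't overlap existing allocations can be done with Python's ipaddress module. The allocations below are invented values, not our real ranges:

```python
import ipaddress

# Hypothetical existing allocations from the 10.0.0.0/8 space.
ALLOCATED = [
    ipaddress.ip_network("10.130.1.0/24"),  # an AWS project
    ipaddress.ip_network("10.140.2.0/24"),  # a GCP project
]

OVO_NETWORK = ipaddress.ip_network("10.0.0.0/8")


def is_free(candidate: str) -> bool:
    """True if the candidate sits inside 10.0.0.0/8 and overlaps no allocation."""
    block = ipaddress.ip_network(candidate)
    return block.subnet_of(OVO_NETWORK) and not any(
        block.overlaps(existing) for existing in ALLOCATED
    )


print(is_free("10.130.2.0/24"))    # True  - unused block
print(is_free("10.130.1.128/25"))  # False - overlaps an existing project
```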

An overview of the network looks like this:
[Diagram: Cloud Networking - Overview]

We have a cross-project-network GCP project that has a transit VPC. This VPC has a VPN connection to our data center, and a VPN connection to AWS. GCP projects can connect to the network by creating a VPN connection into the cross-project-network VPC.

AWS provides a managed solution for the transit VPC design: the AWS Transit Gateway service. AWS VPCs can be attached to the transit gateway for connectivity to the OVO network. The transit gateway terminates the VPN from the cross-project-network and has a VPN to our data center. New AWS projects can attach to the transit gateway to get connectivity to the OVO network.

In this way traffic can be routed efficiently within and between cloud providers and on-prem.

AWS

Let's take a closer look at the AWS side:
[Diagram: Cloud Networking - AWS]

Each team at OVO will have one (or more) AWS accounts.
The Transit Gateway exists in an aws-network account and is shared with other accounts using an ovo-network Resource Access Manager resource share.
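
Once a team's account has accepted the resource share, attaching a VPC is a single API call. A minimal boto3 sketch, with placeholder IDs and an assumed region (the real transit gateway ID comes from the ovo-network share):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Attach this account's VPC to the shared Transit Gateway.
# All IDs below are placeholders for illustration.
response = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",  # from the ovo-network share
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"],
    TagSpecifications=[{
        "ResourceType": "transit-gateway-attachment",
        "Tags": [{"Key": "Name", "Value": "my-team-ovo-network"}],
    }],
)
print(response["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```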

The Transit Gateway is not configured to propagate any routes. We do this so each team is able to fully manage their accounts and attach/detach from the Transit Gateway as necessary, without the risk of a routing mishap. The aws-network account has a lambda function that periodically checks attachments to the Transit Gateway and validates that the correct routes exist, adding routes if necessary based on the CIDR block assigned to the owning AWS account.
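
The lambda is roughly along these lines. This is a simplified sketch, not the actual function: the account-to-CIDR mapping, the route table ID, and the lack of error handling are all stand-ins:

```python
import boto3

ec2 = boto3.client("ec2")

# CIDR block allocated to each account, keyed by AWS account ID.
# In practice this mapping lives somewhere authoritative; it is
# hard-coded here purely for illustration.
ALLOCATIONS = {"111111111111": "10.130.1.0/24"}

TGW_ROUTE_TABLE_ID = "tgw-rtb-0123456789abcdef0"  # placeholder


def handler(event, context):
    attachments = ec2.describe_transit_gateway_attachments(
        Filters=[{"Name": "resource-type", "Values": ["vpc"]},
                 {"Name": "state", "Values": ["available"]}],
    )["TransitGatewayAttachments"]

    for attachment in attachments:
        cidr = ALLOCATIONS.get(attachment["ResourceOwnerId"])
        if cidr is None:
            continue  # no block allocated to this account, skip it

        # Is there already a route for this CIDR in the gateway route table?
        routes = ec2.search_transit_gateway_routes(
            TransitGatewayRouteTableId=TGW_ROUTE_TABLE_ID,
            Filters=[{"Name": "route-search.exact-match", "Values": [cidr]}],
        )["Routes"]

        if not routes:
            ec2.create_transit_gateway_route(
                DestinationCidrBlock=cidr,
                TransitGatewayRouteTableId=TGW_ROUTE_TABLE_ID,
                TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"],
            )
```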

The aws-network account also contains a VPC attached to the transit gateway in the same way. This is used for health monitoring and also contains outbound endpoints for a shared Route53 Resolver rule. The rule is included in the ovo-network resource share, and teams can associate it with a VPC to enable name resolution using the on-prem DNS servers.
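
Associating the shared rule with a VPC is again one API call. A sketch with placeholder IDs; the real rule ID comes from the ovo-network share:

```python
import boto3

resolver = boto3.client("route53resolver")

# Associate the shared forwarding rule with this team's VPC so that lookups
# for on-prem names are forwarded to the shared outbound endpoints.
# Both IDs below are placeholders.
resolver.associate_resolver_rule(
    ResolverRuleId="rslvr-rr-0123456789abcdef0",
    VPCId="vpc-0123456789abcdef0",
    Name="ovo-on-prem-dns",
)
```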

VPC Design

Having a single shared network is great for easy connectivity, but not everything should be openly accessible. With the number of cloud projects constantly growing, we also want to conserve the allocated address space.

We recommend that the VPC's primary IP range is a private range that is never routed on the OVO network. This could be one of the other RFC1918 ranges, but we also reserve 10.145.0.0/16 for this purpose. This range will never be routed by the transit gateway or the cross-project-network, and may be used by multiple projects.

This means that we only need to allocate a secondary /24 block to each project, which should be plenty. Since the primary range doesn't need to be unique, we don't have to allocate blocks 'just in case' a project needs OVO network connectivity in the future.
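
In AWS terms this means creating the VPC with the shared 10.145.0.0/16 primary range and then associating the project's allocated /24 as a secondary range. A sketch with boto3; the /24 shown is an invented example, not a real allocation:

```python
import boto3

ec2 = boto3.client("ec2")

# Primary range: the shared, never-routed private block.
vpc = ec2.create_vpc(CidrBlock="10.145.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Secondary range: the /24 allocated to this project for OVO network traffic.
ec2.associate_vpc_cidr_block(VpcId=vpc_id, CidrBlock="10.130.1.0/24")
```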

Let's zoom in again to see how we achieve this in AWS:

[Diagram: Cloud Networking - VPC]

The subnets are the same in each availability zone, and break down as follows (a sketch of one possible layout appears after the list):

Private

These subnets should contain your services. There is outbound connectivity to the internet and the OVO network. Inbound connections must go through a load balancer in the Internet Public or OVO Public subnets.

Internet Public

These subnets should contain resources that need direct public internet access, such as load balancers for public services.

OVO Public

These subnets should contain resources that need direct access to the OVO network, such as load balancers for services available to other projects.

Internal

These subnets should contain services that don't require access to the internet or the OVO network, such as databases. There is only connectivity to the rest of your VPC.
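
As an illustration of how the two ranges could be carved up per zone, here is a small Python sketch. The subnet sizes, zone names and offsets are assumptions for the example, not our standard layout:

```python
import ipaddress

ZONES = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]

# Carve the shared primary range into Private, Internet Public and Internal
# subnets, and the project's secondary /24 into small OVO Public subnets.
primary = list(ipaddress.ip_network("10.145.0.0/16").subnets(new_prefix=20))
ovo_public = list(ipaddress.ip_network("10.130.1.0/24").subnets(new_prefix=27))

for i, zone in enumerate(ZONES):
    print(zone,
          "private:", primary[i],
          "internet-public:", primary[i + 3],
          "internal:", primary[i + 6],
          "ovo-public:", ovo_public[i])
```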

This is a fairly typical private/public subnet design, where compute resources are in a private subnet. Outbound internet access is through a NAT gateway in a public subnet, and inbound access must go through a load balancer with listeners in the public subnet.

We've taken this idea and added an additional OVO Public subnet, which contains the NAT gateway and load balancers for the OVO network. This is the only subnet that is routed to the transit gateway, and the only subnet that needs a non-overlapping CIDR block. This allows a team to provide or consume services on the OVO network without exposing their entire VPC.
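
The routing for the OVO Public subnets is the only place the transit gateway appears. A sketch of that single route with boto3, using placeholder IDs and assuming the whole 10.0.0.0/8 is sent via the transit gateway:

```python
import boto3

ec2 = boto3.client("ec2")

# Route table used only by the OVO Public subnets (placeholder IDs).
# The more-specific local routes for the VPC's own ranges still win, so only
# traffic for the rest of the OVO network goes via the transit gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",
    TransitGatewayId="tgw-0123456789abcdef0",
)
```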

Author

Daniel Flook
