Cross-region sharing of live data

Luna Ricci

This article explores different alternatives for sharing live data between EC2 instances across regions. By live data we mean data that is meant to be consumed or updated immediately, as opposed to long-term storage or disaster recovery. Many of these solutions also apply to other compute options and other AWS services, but those are out of the scope of this article.

Keep in mind that cross-region data transfer costs apply to all of these solutions, and some carry (sometimes substantial) extra charges. Also, due to the CAP theorem, when setting up an active-active multi-region architecture you will need to choose between consistency and availability.

One EFS in each region, sync them with AWS DataSync

While a single EFS file system cannot be mounted from multiple regions, it is possible to create one EFS file system per region and use DataSync to keep the files synchronized. The main disadvantage of this approach is that the DataSync agent runs on an EC2 instance provisioned by the customer (you), which introduces significant extra charges and can become both a bottleneck and a single point of failure.


This alternative also works for FSx.
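
As an illustration for the EFS case, a minimal boto3 sketch of wiring two existing file systems together with DataSync might look like the following. All ARNs, subnets, and security groups are placeholders you would replace with your own:

    import boto3

    # DataSync tasks are managed from the region where the task runs;
    # locations can point at EFS file systems in different regions.
    datasync = boto3.client("datasync", region_name="us-east-1")

    # Register the source EFS file system as a DataSync location.
    source = datasync.create_location_efs(
        EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-aaaa1111",
        Ec2Config={
            "SubnetArn": "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-aaaa1111",
            "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111111111111:security-group/sg-aaaa1111"],
        },
    )

    # Register the destination EFS file system (in another region) the same way.
    destination = datasync.create_location_efs(
        EfsFilesystemArn="arn:aws:elasticfilesystem:eu-west-1:111111111111:file-system/fs-bbbb2222",
        Ec2Config={
            "SubnetArn": "arn:aws:ec2:eu-west-1:111111111111:subnet/subnet-bbbb2222",
            "SecurityGroupArns": ["arn:aws:ec2:eu-west-1:111111111111:security-group/sg-bbbb2222"],
        },
    )

    # Create the task and kick off a sync run. For live data you would
    # trigger this on a schedule, as often as DataSync allows.
    task = datasync.create_task(
        SourceLocationArn=source["LocationArn"],
        DestinationLocationArn=destination["LocationArn"],
        Name="efs-cross-region-sync",
    )
    datasync.start_task_execution(TaskArn=task["TaskArn"])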

A single EFS accessed through peered VPCs

An alternative to using multiple EFS file systems is to use VPC Peering to give instances in one region access to the resources of a VPC in another region, in particular the EFS mount targets in that VPC. This way, the instances use the EFS mount targets in the other VPC as if they were in the same VPC (and in the same region).


Scaling this up beyond two or three VPCs is possible but hard to maintain, since every VPC must be peered directly with the VPC that contains the mount targets (VPC Peering is not transitive).


Keep in mind that there is only one EFS file system handling all the load, so performance might be an issue.


This alternative also works for FSx.
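
As a rough sketch, the cross-region peering itself can be set up with boto3 along these lines (VPC IDs, route table IDs, and CIDRs are placeholders). Note that EFS DNS names typically don't resolve across a peering connection, so instances in the peered VPC usually mount using the mount target's IP address:

    import boto3

    # Request a peering connection from the VPC with the instances (us-east-1)
    # to the VPC that contains the EFS mount targets (eu-west-1).
    ec2_east = boto3.client("ec2", region_name="us-east-1")
    ec2_west = boto3.client("ec2", region_name="eu-west-1")

    peering = ec2_east.create_vpc_peering_connection(
        VpcId="vpc-instances1111",
        PeerVpcId="vpc-efs2222",
        PeerRegion="eu-west-1",
    )
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accept the request on the EFS side.
    ec2_west.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Route each VPC's traffic for the other VPC's CIDR through the peering.
    ec2_east.create_route(
        RouteTableId="rtb-instances1111",
        DestinationCidrBlock="10.1.0.0/16",  # CIDR of the EFS VPC
        VpcPeeringConnectionId=pcx_id,
    )
    ec2_west.create_route(
        RouteTableId="rtb-efs2222",
        DestinationCidrBlock="10.0.0.0/16",  # CIDR of the instances' VPC
        VpcPeeringConnectionId=pcx_id,
    )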

A centralized EFS in a Shared Services account, accessed through Transit Gateway

This is an evolution of the previous solution. Instead of placing the EFS file system in the same account and region as some of the instances and granting instances in another region access to it through VPC Peering, we consider a multi-account setup where the EFS file system lives in the shared services account, which all other accounts already reach through Transit Gateway. Neither the separate account nor Transit Gateway is a technical requirement; they are best practices that result in a solution that is easier to maintain.


This does not mean that EFS performance will scale any better; it just makes a shared EFS easier to set up for multi-account organizations.
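
For illustration, attaching a workload VPC to an existing Transit Gateway looks roughly like this with boto3. All IDs and CIDRs are placeholders, and in a real multi-account setup the Transit Gateway would first have to be shared with the account (for example through AWS RAM):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Attach the workload VPC to the (already shared) Transit Gateway.
    attachment = ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId="tgw-0123456789abcdef0",
        VpcId="vpc-workload1111",
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    )

    # Send traffic destined for the shared services VPC (where the EFS
    # mount targets live) through the Transit Gateway.
    ec2.create_route(
        RouteTableId="rtb-workload1111",
        DestinationCidrBlock="10.100.0.0/16",  # CIDR of the shared services VPC
        TransitGatewayId="tgw-0123456789abcdef0",
    )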


As part of StackZone, we provide a multi-account setup that includes a shared services account and a Transit Gateway to connect the accounts.

S3 with cross-region replication

If block storage isn't needed, S3 can be used as object storage. The main disadvantages are that there is a cost for each request, any change to an object requires re-uploading the whole object, and a basic replication setup works between two buckets, so it only covers two regions.


The main advantages are that it's easy to set up (just create two buckets and enable replication), you only pay for the storage you actually use, and it is extremely scalable performance-wise.
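
As a sketch (bucket names and the IAM role are placeholders), enabling replication with boto3 involves turning on versioning on both buckets, which replication requires, and then attaching a replication rule to the source bucket:

    import boto3

    s3 = boto3.client("s3")

    # Replication requires versioning on both the source and destination bucket.
    for bucket in ("my-live-data-us-east-1", "my-live-data-eu-west-1"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    # Replicate everything from the source bucket to the destination bucket,
    # using an IAM role that S3 can assume to perform the copies.
    s3.put_bucket_replication(
        Bucket="my-live-data-us-east-1",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "replicate-all",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::my-live-data-eu-west-1"},
                }
            ],
        },
    )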

Amazon RDS/Aurora with read replicas or Aurora Global Database

Amazon RDS and Amazon Aurora support read replicas in different regions. In a single-master configuration, instances can read from the read replica in their own region, but they need to send writes to the primary instance, possibly in a different region. Aurora's multi-master configuration only supports writers within a single region, so for cross-region setups the option is Aurora Global Database: secondary regions get low-latency reads, and with write forwarding (available for Aurora MySQL) instances can issue writes through their local endpoint, although those writes are still executed in the primary region.


Of course, this is only possible with data that fits in a relational database, and using it to share short-lived data might not be the most cost-effective option. But the main advantage is that in any multi-region environment you should already have at least cross-region read replicas, possibly even a global database, so the building blocks are likely already in place.
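
For example, creating a cross-region read replica with boto3 (identifiers and the source ARN are placeholders) is a single call issued in the destination region:

    import boto3

    # The replica is created by calling RDS in the *destination* region and
    # pointing it at the source instance's ARN in the primary region.
    rds_west = boto3.client("rds", region_name="eu-west-1")

    rds_west.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-eu",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111111111111:db:app-db",
        DBInstanceClass="db.r5.large",
    )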

DynamoDB Global Tables

DynamoDB supports a configuration called Global Tables, where the same table has multiple masters in multiple regions. With this configuration, instances can read from and write to the replica in their own region.

This solution is extremely scalable, but it's limited to DynamoDB's data model. It's also very cost-efficient, since DynamoDB can autoscale very quickly and in very small increments. Keep in mind that replication between regions is asynchronous, so reads in one region can briefly miss writes made in another.
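
As an illustration with boto3 (table and region names are placeholders), adding a replica to an existing table and then writing against the local region looks like this. This assumes the current version of Global Tables (2019.11.21), and the table must have DynamoDB Streams enabled with new and old images:

    import boto3

    # Add a replica in eu-west-1 to an existing table in us-east-1.
    dynamodb = boto3.client("dynamodb", region_name="us-east-1")
    dynamodb.update_table(
        TableName="live-data",
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )

    # Instances in each region read and write against their local replica;
    # DynamoDB replicates the changes to the other regions asynchronously.
    local = boto3.client("dynamodb", region_name="eu-west-1")
    local.put_item(
        TableName="live-data",
        Item={"pk": {"S": "session-123"}, "payload": {"S": "hello"}},
    )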


Want to know more about StackZone and how to make your cloud management simple and secure?

Check our How it Works section with easy-to-follow videos, or just create your own StackZone account here.
