TvE 2100

At 2100 feet above Santa Barbara

Deploying Many Rails Sites Onto Amazon EC2

One of our customers is deploying many Rails sites onto EC2 — more precisely, many instances of virtually the same site. Basically they have a Rails application and they tweak it for each individual site they set up. EC2 is a wonderful deployment platform for this type of business because there is very little friction in adding customers: it takes just one button press to get more servers.

The overall architecture concept we’re using for this customer is to build a number of app+database clusters and to load multiple sites onto each one. The number of sites per cluster can be adjusted such that the database portion of each cluster is loaded up optimally, and it’s designed such that sites can be moved around easily, for example to offload a cluster that may have become too heavily loaded as some of the sites on it have grown.

In the end, the architecture boils down to having two instances running a MySQL master/slave set-up managed by our Manager for MySQL, plus two instances running load balancers and Rails/Mongrel as redundant app servers. This makes for a fully redundant cluster on which a number of sites can be hosted. It is also easy to add a few more EC2 instances running Rails, depending on the Rails vs. MySQL workload balance.
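From the app servers' point of view, all of this reduces to a database connection that is addressed purely by DNS name. A minimal sketch of what each Rails app's connection settings amount to (the hostname, database name, and username here are illustrative, not the customer's actual values):

```ruby
# Each site's Rails app only ever knows the site's database DNS name;
# which EC2 instance that name points at is handled entirely in DNS.
db_config = {
  adapter:  "mysql",
  host:     "db.site1.com",      # illustrative CNAME into the cluster
  database: "site1_production",  # the site's own logical database
  username: "site1",
}
```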

Each site on a cluster has its own logical database (i.e. what MySQL calls a “database”). This makes it easy to back up and restore a site individually and, most importantly, to move a site to another cluster in order to free up resources on the original one. The sites on a cluster can also share the app servers as long as there is no HTTPS involved. The reason for this caveat is that each Amazon EC2 instance has only a single IP address, and it is not possible to do “virtual hosting” with HTTPS sites. With HTTP, all the site hostnames (www.site1.com, www.site2.com, etc.) point to the same two load balancing instances (using what’s called “round-robin DNS” for fail-over purposes) and the load balancer (or front-end Apache, if used) figures out which site the user is visiting based on the “Host” header included in every HTTP 1.1 request.
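The Host-header dispatch can be sketched in a few lines of Rack-style Ruby — this is only an illustration of the idea, with made-up site and database names, not the actual front-end configuration:

```ruby
# Map each site's hostname to its own logical MySQL database.
# Site names and database names are hypothetical examples.
SITE_DATABASES = {
  "www.site1.com" => "site1_production",
  "www.site2.com" => "site2_production",
}

# Given a Rack-style env hash, pick the site's database from the
# HTTP/1.1 Host header (stripping any :port suffix).
def database_for(env)
  host = env["HTTP_HOST"].to_s.split(":").first
  SITE_DATABASES.fetch(host) { raise "unknown site: #{host}" }
end
```

A request arriving with `Host: www.site1.com` is thus routed to that site’s database, while the app code itself stays identical across sites.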

What’s really nice about these 4-6 machine clusters is that they’re very powerful yet so simple. There’s no “infinitely scalable” magic under the hood that breaks at the worst moments. No, it’s a plain set-up that anyone with a bit of experience can fully understand. The magic is that it’s so easy to set these clusters up with Amazon EC2 plus RightScale so you can really take advantage of the same “horizontal scaling” as the big guys (Google, Yahoo!, etc.).

One of the interesting design decisions in all this is how to set up DNS. For example, the app for site1 needs to locate the IP address of the database it’s supposed to talk to. We use DNS as follows (hostnames illustrative):

* the app connects to the site’s database hostname (e.g. db.site1.com)
* that hostname resolves to a CNAME for the cluster’s master record (e.g. master.cluster1.example.com)
* the cluster’s master record resolves to the IP address of the instance that currently hosts the master
* DNS for the cluster record is set up with a low TTL (we use 75 secs) and supports dynamic updates
* if the DB master crashes or is otherwise replaced, the DNS entry is automatically updated by the RightScale MySQL manager, which switches all the sites hosted by that cluster over in one stroke
* if site1 is moved to a different cluster, the CNAME has to be updated to point to the correct cluster DB
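The two levels of indirection in this DNS set-up can be modeled as two lookup tables — per-site CNAMEs and per-cluster master records. A toy Ruby sketch (all hostnames and IPs are hypothetical) showing why failover touches one record while a site move touches another:

```ruby
# Level 1: each site's DB hostname is a CNAME to its cluster's record.
site_cnames = {
  "db.site1.com" => "master.cluster1.example.com",
  "db.site2.com" => "master.cluster1.example.com",
}
# Level 2: each cluster record holds the current master's IP address.
cluster_hosts = { "master.cluster1.example.com" => "10.0.0.5" }

# Resolution follows the CNAME, then the cluster record.
resolve = ->(name) { cluster_hosts[site_cnames[name]] }

# Master failover: one dynamic-DNS update moves every site on the cluster.
cluster_hosts["master.cluster1.example.com"] = "10.0.0.9"

# Moving site2 to another cluster only touches site2's own CNAME.
cluster_hosts["master.cluster2.example.com"] = "10.0.1.5"
site_cnames["db.site2.com"] = "master.cluster2.example.com"
```

After the failover, every remaining site on cluster1 resolves to the new master without any per-site change; after the move, site2 resolves into cluster2 while site1 is untouched.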

For the web sites themselves it’s also nice to use CNAMEs (again with illustrative hostnames):

* www.site1.com points to the cluster’s load balancer record (e.g. lb.cluster1.example.com)
* the load balancer record resolves to the IP addresses of all the load balancer instances
* if a load balancer instance is restarted, the entry is dynamically updated
* if the site is moved to a different cluster, the CNAME needs to be updated
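Round-robin DNS simply means the load balancer record returns all balancer addresses and clients spread their requests across them. A simplified Ruby sketch (IPs hypothetical; real resolvers rotate the record order rather than counting attempts):

```ruby
# The cluster's lb record answers with every load balancer's address.
lb_records = ["10.0.0.10", "10.0.0.11"]

# A client (or retrying resolver) cycles through the returned addresses,
# so a dead balancer is skipped once its entry is dynamically removed.
pick = ->(records, attempt) { records[attempt % records.size] }
```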

Wow, it’s amazing all this is actually possible and not just a dream! Amazon EC2 enables it and RightScale makes it possible to manage without an army of sysadmins running around and tweaking servers all the time.