The key to a successful site setup on Amazon EC2 is scalability and redundancy. RightScale makes this easy by providing server templates and multi-server deployments. To get started, let’s take the simplest case: a single-server set-up. We have a free “Rails all-in-one” server template that is excellent not just to play around with, but also to use as a development server, a staging server, or even a production server for small sites that don’t need more horsepower or much redundancy.
Our Rails all-in-one is described in more detail elsewhere, but you can see on the right what’s involved: it runs Apache as a reverse proxy in front of 4 Mongrel/Rails processes, all backed by a simple MySQL installation. Last but not least, we set up cron jobs that run a mysqldump every 10 minutes and push the dump to Amazon S3, so your data is safe in case the instance dies unexpectedly. Apache in front can be set up to serve static and cached pages, it can do HTTP and/or HTTPS, it can canonicalize the hostname (e.g. redirect http://mysite.com to http://www.mysite.com), and it can serve a maintenance page while you’re updating your app. Oh, and of course Apache load balances across the 4 Mongrels too!
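To give a feel for the backup piece, here is a minimal sketch of such a cron job. The bucket name, paths, and the use of s3cmd as the upload tool are assumptions for illustration; the actual RightScale scripts differ:

```shell
# /etc/cron.d/mysql-backup -- every 10 minutes, dump all databases and
# push the compressed dump to S3 (bucket name is a placeholder)
*/10 * * * * root mysqldump --single-transaction --all-databases | \
    gzip > /tmp/db-$(date +\%Y\%m\%d\%H\%M).sql.gz && \
    s3cmd put /tmp/db-*.sql.gz s3://my-backup-bucket/
```

Note the escaped `%` signs: cron treats a bare `%` in the command field as a newline.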
Ready for more? You’re almost ready to launch for real, you expect some traffic soon and don’t want to be reliant on a single server anymore. Time to upgrade to a fully redundant site architecture using 4 servers! The set-up almost all our customers use consists of two front-end servers and two back-end database servers giving us full redundancy. We use this ourselves for the RightScale site itself! Let’s walk through the set-up from beginning to end.
It all starts when a user types http://www.mysite.com into the browser. The browser does a DNS lookup and gets back two IP addresses, the public IPs of the two front-end instances. The browser picks one and tries to connect; if that fails, it rather quickly tries the other, which gives you the fault tolerance you need in case one of the instances dies or has other problems. Also, having multiple IP addresses for your site is the only form of fail-over that browsers support; see this page for additional details.
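In DNS terms this is simply two A records for the same hostname. A zone-file sketch (the IPs are documentation examples, and the short TTL is an assumption that makes sense on EC2, where instance IPs change when you relaunch):

```
; two A records for one hostname: browsers will fail over between them
www.mysite.com.   300   IN   A   203.0.113.10
www.mysite.com.   300   IN   A   203.0.113.20
```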
The first thing the request from your browser hits is Apache, which has the same roles as in the all-in-one server: dealing with SSL, canonicalizing the hostname, serving static files, putting up a maintenance page, and anything else you might want a full-fledged web server for. For requests destined for your application, Apache acts as a reverse proxy and forwards the request to HAproxy on the same machine.
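A virtual-host sketch of that front-end role, assuming HAproxy listens on local port 8000 (the paths, ports, and hostnames are illustrative, not our actual config):

```apache
<VirtualHost *:80>
    ServerName www.mysite.com
    DocumentRoot /home/rails/current/public

    RewriteEngine On
    # canonicalize the hostname: mysite.com -> www.mysite.com
    RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mysite.com$1 [R=301,L]

    # serve static and cached files straight from disk;
    # proxy everything else to HAproxy on localhost
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
    RewriteRule ^/(.*)$ http://127.0.0.1:8000/$1 [P,QSA,L]
</VirtualHost>
```

The `[P]` flag hands the request to mod_proxy, so mod_rewrite and mod_proxy both need to be loaded.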
HAproxy is a very nice piece of software that proxies and load balances requests to back-end servers. We use it for HTTP here, but it can also do plain TCP load balancing, for example for mail servers. We chose HAproxy because it has good support for health checks and the ability to redispatch requests to alternate servers if a back-end fails mid-way. HAproxy is set up to periodically send a request to each back-end process (Mongrel/Rails in our example) to ensure that it’s running properly, and it only forwards requests to servers that respond. Apache can load balance across multiple back-end servers as well using mod_proxy_balancer, but it does not include health checks: when a server goes down, Apache has to keep sending live customer requests to it every few seconds to see whether it has come back up. This means that while any Mongrel process is down on any server, some of your customers’ requests are being sent into a black hole. Not nice…
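A sketch of what the health-checking part of the HAproxy config looks like; the listener port, Mongrel ports, server names, and check intervals are assumptions for illustration:

```
# haproxy.cfg fragment: health-check each Mongrel and only route
# requests to the ones that answer
listen rails 127.0.0.1:8000
    mode http
    balance roundrobin
    # if a back-end dies mid-request, retry the request elsewhere
    option redispatch
    # poll each Mongrel with an HTTP request every 2s;
    # mark it down after 3 consecutive failures
    option httpchk GET /
    server fe1-m1 10.0.0.1:8001 check inter 2000 fall 3
    server fe1-m2 10.0.0.1:8002 check inter 2000 fall 3
    server fe2-m1 10.0.0.2:8001 check inter 2000 fall 3
    server fe2-m2 10.0.0.2:8002 check inter 2000 fall 3
```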
HAproxy forwards the request to one of the Mongrel/Rails processes on either of the two servers. Load balancing across both servers is nice because it means you can shut down the Mongrels on one server to update the code without impacting customers at all.
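With mongrel_cluster, such a rolling update can be as simple as the following, run on one front-end at a time (the config path is an assumption); HAproxy’s health checks take the stopped Mongrels out of rotation while the other server keeps serving traffic:

```shell
# take this server's Mongrels down; HAproxy stops routing to them
mongrel_rails cluster::stop  -C /etc/mongrel_cluster/myapp.yml
# ... deploy the new code here ...
# bring them back; HAproxy's checks put them back in rotation
mongrel_rails cluster::start -C /etc/mongrel_cluster/myapp.yml
```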
Everything on the front-end servers is open source software except for your application. So we need a way to get your app code onto the instance at boot time, and a way for you to update the code. Note that for major upgrades we always recommend launching fresh instances so you can keep the old ones around for a day, just in case you want to switch back. (Hey, that’s really cheap insurance at only $2.40 per day per server!) We provide two different RightScripts to do minor code updates: one pulls the code from a tarball located on S3, the other does an svn export from your subversion repository. We recommend the S3 route for production use because otherwise starting new servers depends on the availability of your svn repository, and the svn export is often the slowest part of the entire instance boot process. But sometimes the svn route is just so much more convenient, especially if you’re playing with a test set-up where you change the code frequently. In addition, for Rails, we set up the app code directory structure the same way Capistrano does, so you can point your Capistrano config file at your instance and do a “cap update”. Again, something we don’t recommend for production servers but really handy for test and dev boxes.
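The S3-tarball route can be sketched roughly as follows, laid out Capistrano-style so “cap update” keeps working later; the bucket, paths, and the use of s3cmd are placeholders, not the actual RightScript:

```shell
# boot-time code install sketch: fetch the app tarball from S3 and
# unpack it into a timestamped Capistrano-style release directory
RELEASE=/home/rails/releases/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE"
s3cmd get s3://my-app-bucket/myapp.tgz /tmp/myapp.tgz
tar -xzf /tmp/myapp.tgz -C "$RELEASE"
# point the "current" symlink at the new release to switch the live code
ln -sfn "$RELEASE" /home/rails/current
```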
Behind the front-end servers we place two replicated MySQL instances managed through our Manager for MySQL, with backups to Amazon S3. We take frequent backups from the slave server, where the load of the backup itself doesn’t affect production, and daily backups from the master as added security.
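One common way to take a consistent backup from the slave is to pause the SQL replication thread for the duration of the dump; a sketch (bucket and paths are placeholders, and this is not the actual Manager for MySQL procedure):

```shell
# pause replication apply so the dump sees a consistent snapshot
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases | gzip > /tmp/slave-backup.sql.gz
# resume replication; the slave catches up from the relay log
mysql -e "START SLAVE SQL_THREAD;"
s3cmd put /tmp/slave-backup.sql.gz s3://my-backup-bucket/
```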
Scalable redundant site
For a fully redundant and scalable site we recommend an architecture that is a natural extension of the 4-server set-up, using more of the same components. We basically add a number of Mongrel/Rails application servers and hook them into the load balancing rotation on the two front-end servers. This array of app servers can now be expanded and contracted as warranted by the load on the web site: expand to handle surges in traffic when your PR and marketing efforts land a success, contract at night when the load on your site goes down and you’d rather hold on to your $$. The wonderful thing is that with this set-up you are paying for the average cost of your hosting needs, not for a once-a-month peak!
If you look closely, we’re running the app server on the two front-end load balancing instances. We find that the load balancing takes very few resources and that there’s room for some application cycles. Using HAproxy it’s easy to have less traffic go to the local app servers than to the remote dedicated instances. The reason we keep the app on the front-end instances (as opposed to switching to pure load balancing instances) is that this way there are always two app servers available even if the array is scaled back to zero servers. Or, put differently, when your site is under minimal load at 4am it scales down to 4 instances as opposed to 6. If the load balancing or serving of static files becomes a significant load, it is of course possible to switch off the app serving on the front-end or, alternatively, to add 2 additional front-end load balancing instances.
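In HAproxy this skew is just a matter of per-server weights; a fragment with assumed names, addresses, and weight values:

```
# lighter weight for the Mongrels sharing the front-end instance,
# heavier for the dedicated app-server array
server local-m1 127.0.0.1:8001 check weight 1
server array-m1 10.0.0.5:8001  check weight 4
server array-m2 10.0.0.6:8001  check weight 4
```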
The way we currently handle changes to the load balancer config when servers come online is to edit the config file automatically using operational RightScripts and then do a seamless restart of HAproxy, which ensures that no connections are dropped during the change.
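The seamless restart relies on HAproxy’s `-sf` option: a new process takes over the listening sockets while the old one finishes its in-flight connections. A sketch, where `generate_haproxy_cfg` stands in for whatever regenerates the config (it is a hypothetical helper, and the paths are assumptions):

```shell
# rewrite the config with the current set of app servers
generate_haproxy_cfg > /etc/haproxy/haproxy.cfg
# start a new HAproxy; -sf tells the old process (by pid) to stop
# accepting new connections and exit once existing ones finish
haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```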
If you are interested in using our site set-ups, please don’t hesitate to try out the free Rails all-in-one server template, and please contact us at email@example.com for more. The multi-server set-ups are not available in pre-packaged form with the free RightScale accounts.