TvE 2100

At 2100 feet above Santa Barbara

MySQL Manager Backup/Restore

The MySQL Manager backup and restore procedures are designed to provide full-system backups of the database so that it can be restored onto fresh servers, whether after a failure or wherever a clone of the database is desired. The backup/restore mechanism is also used to provide the initial database state when setting up slave servers. The overall design goal has been reliable and fast operation; in particular, the restore should be as fast as possible so that additional database servers can be launched quickly. This need for restore speed makes operations based on mysqldump less attractive than an approach based on filesystem snapshots.

Here’s a short synopsis of how it works:

Instance Set-up

  1. the 160GB /mnt ephemeral disk is formatted using LVM2 (the Linux Logical Volume Manager), leaving about 20% of the space free so that filesystem snapshots can be created
  2. the MySQL data files are set up on the /mnt partition to take advantage of the snapshot feature
  3. typically an initial database, e.g. from a prior backup, is restored
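The set-up steps above can be sketched roughly as follows. The device and volume names (/dev/sdb, volume group "data", logical volume "mysql") are illustrative, not the actual RightScript contents; the script defaults to a dry run that only prints the commands it would execute:

```shell
# Sketch of the /mnt volume set-up (device/volume names are illustrative).
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run pvcreate /dev/sdb                  # turn the ephemeral disk into an LVM physical volume
run vgcreate data /dev/sdb             # one volume group covering the 160GB disk
run lvcreate -l 80%VG -n mysql data    # use ~80% of the space, keep ~20% free for snapshots
run mkfs.ext3 /dev/data/mysql          # filesystem for the MySQL data files
run mount /dev/data/mysql /mnt
run mkdir -p /mnt/mysql/data           # point the MySQL datadir here in my.cnf
```

Keeping roughly 20% of the volume group unallocated matters because LVM snapshots need spare extents to hold copy-on-write data while the snapshot exists.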

Database backup

  1. acquire a global read lock by issuing FLUSH TABLES WITH READ LOCK, with a timeout of a few seconds
  2. retry a few times if the lock times out (we don’t want to block the database while a long update is running)
  3. flush the binary logs using FLUSH LOGS
  4. create an LVM snapshot
  5. get the list of data files used by the database (e.g. SHOW VARIABLES LIKE 'innodb_data_file_path')
  6. release the read lock
  7. tar up the files from the snapshot, optionally with compression
  8. delete the LVM snapshot
  9. upload the tar archive to S3 (with a timestamp in the filename)
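The backup sequence above can be sketched like this. The volume and bucket names are illustrative, the S3 upload is shown via s3cmd as one possible tool, and holding the lock across the snapshot via the mysql client's built-in "system" command is one way to implement it, not necessarily how the actual scripts do it. As before, DRY_RUN=1 only prints the commands:

```shell
# Sketch of the backup sequence (volume group "data", LV "mysql", bucket
# "my-backups" are illustrative). DRY_RUN=1 (default) prints instead of running.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

stamp=$(date -u +%Y%m%d-%H%M%S)        # timestamp for the archive name

# Hold the read lock only while the snapshot is created: the mysql client's
# "system" command runs lvcreate from inside the locked session. Retrying a
# few times keeps us from stalling behind a long-running update.
for attempt in 1 2 3; do
  run mysql -e "FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;
system lvcreate --snapshot --size 10G --name backup /dev/data/mysql;
UNLOCK TABLES;" && break
done

run mkdir -p /mnt/backup-snap
run mount /dev/data/backup /mnt/backup-snap          # mount the frozen copy
run tar czf /mnt/mysql-$stamp.tgz -C /mnt/backup-snap mysql/data
run umount /mnt/backup-snap
run lvremove -f /dev/data/backup                     # drop the snapshot promptly
run s3cmd put /mnt/mysql-$stamp.tgz s3://my-backups/mysql-$stamp.tgz
```

Note that the archive is written to /mnt first and only then uploaded, which is why the free-space caveat below applies.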

Database restore

  1. download the tar archive from S3 onto /mnt
  2. stop MySQL
  3. move the data directory to .old to preserve anything that may have been there
  4. unpack the tar archive into the data directory
  5. start MySQL
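The restore steps can be sketched as follows. The archive name and bucket are hypothetical placeholders, s3cmd stands in for whatever S3 tool is used, and the init-script path is distribution-dependent; DRY_RUN=1 (the default) just prints the commands:

```shell
# Sketch of the restore sequence (archive/bucket names are hypothetical).
# DRY_RUN=1 (default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

archive=mysql-20080101-000000.tgz      # hypothetical backup to restore

run s3cmd get s3://my-backups/$archive /mnt/$archive
run /etc/init.d/mysql stop
run mv /mnt/mysql/data /mnt/mysql/data.old    # keep whatever was there
run tar xzf /mnt/$archive -C /mnt             # recreates /mnt/mysql/data
run chown -R mysql:mysql /mnt/mysql/data      # MySQL must own its files
run /etc/init.d/mysql start
```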


  • Compressing the backup tends to be slower than backing up the data uncompressed, and it imposes a high load on the machine being backed up; on the other hand, uncompressed storage is of course more expensive…
  • To do a backup there has to be enough free space on /mnt to hold the backup archive itself (compression can help here), because the backup cannot simply be streamed to S3: S3 requires a Content-Length header up-front.
  • These backup/restore operations are designed to be whole-database-system operations, not per-database or per-table ones.
  • The my.cnf files and other database config files are not backed up; it is assumed that these are taken care of by the RightScripts that set up the servers.