Aiven Blog

May 27, 2019

Introducing MyHoard, your solution to MySQL backups and restoration

Getting the details just right to fully automate MySQL backups isn't simple. Find out how we accomplished it with our Open Source tool, MyHoard.


Rauli Ikonen


Staff Software Engineer at Aiven

We at Aiven are big believers in Open Source. Not only do we provide Open Source databases as a service, we also contribute to several of them and to many related projects. Additionally, whenever we can, we open source the in-house tools we develop to provide Aiven services.

Available under Apache License Version 2.0, our latest tool is MyHoard, the daemon we use to create, manage and restore MySQL backups. In this post, we cover our requirements and how we achieved fully automated MySQL backups with MyHoard.

Aiven's operational model and requirements

The model with which Aiven operates services differs quite a bit from traditional on-premises database hosting, where relational database master servers are expected to remain operational for extended periods of time and replacing them is an exceptional situation.

This often means the replacement process is not well automated, safe or repeatable. We, on the other hand, expect servers to potentially fail at any time, and we also perform major updates in a rolling-forward fashion that replaces the old master servers.

To cope with both expected and unexpected server failures, we work hard towards perfecting the failover scenarios so that they're as seamless as possible.

For controlled failovers, we perform various steps on both the old and new masters before and after the promotion to ensure no data loss and as little downtime as possible. At the time of writing, controlled failovers are usually completed with only around 5 seconds of service unavailability.

One of the properties of seamless failovers is keeping backups working like nothing out of the ordinary happened. We provide point-in-time recovery (PITR) that allows recovering to any point in time within the backup history window (2-30 days depending on the service plan), and this also needs to work for points in time just around a failover.

Also, should the newly promoted master itself fail soon after promotion, the backup must remain valid and include nearly all of the changes made on it. Consequently, requiring a full backup on the newly promoted master is not an option because taking one could take an extended period of time.

Of course, everything needs to be compressed and encrypted: we run an encrypted file system and don't rely on the MySQL encryption features that were introduced in the recent 8.0.x versions. All major cloud object storages must also be supported as backup targets and sources. None of the existing solutions -- not that there were many to choose from -- fit our requirements just right, so we created our own.

MyHoard's main functionalities

To get started, you first need a full snapshot of the database. MyHoard uses Percona XtraBackup for this part because it does a great job. Both backup and restore operations use direct streaming so that no extra disk space is needed, which required extending XtraBackup to fully support streaming in both directions.
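As a rough illustration of the streaming approach, here is a minimal Python sketch that invokes XtraBackup with its streaming output format. This is not MyHoard's actual implementation: MyHoard compresses, encrypts and uploads the stream to object storage on the fly, whereas this sketch simply writes the xbstream output to a local file, and the defaults file and output path are placeholders.

```python
# Minimal sketch: stream a full snapshot with Percona XtraBackup.
# Assumes xtrabackup is on PATH and credentials come from the given
# defaults file; paths are placeholders. MyHoard itself pipes this
# stream through compression and encryption straight to object storage.
import subprocess

def stream_full_snapshot(defaults_file: str, out_path: str) -> None:
    cmd = [
        "xtrabackup",
        f"--defaults-file={defaults_file}",
        "--backup",
        "--stream=xbstream",  # write the backup to stdout as an xbstream archive
    ]
    with open(out_path, "wb") as out:
        subprocess.run(cmd, stdout=out, check=True)  # raises if xtrabackup fails

stream_full_snapshot("/etc/mysql/my.cnf", "/var/tmp/snapshot.xbstream")
```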

However, creating and restoring the snapshot only gets you so far. You still need scheduling, removing old backups based on backup count / age, retrying operations, monitoring, etc. -- all of which MyHoard handles.

Once done with the snapshot, you need to take care of the continuous changes happening on top of the snapshot, i.e. binary logs. The way binary logs are managed tends to be somewhat naive: usually, new binary logs are just uploaded as is when new files are created on disk. On the surface, this is what MyHoard does as well. But, there is more to this than meets the eye.

MyHoard not only uploads, compresses and encrypts binary logs, it also scans the files to keep track of which transactions (GTIDs) the binary log files contain, as well as the timestamp of the first and last transactions in the file. This metadata can later be used when performing PITR to automatically pick the right target file based on recovery target time.
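To make the idea concrete, here is a small Python sketch of how such per-file metadata can drive target file selection for PITR. The field names and values are purely illustrative and not MyHoard's actual metadata schema.

```python
# Illustrative only: pick the binary log to restore up to for a given
# point-in-time recovery target, based on per-file transaction timestamps.
# The metadata structure below is a made-up example, not MyHoard's schema.
from datetime import datetime, timezone

binlog_metadata = [
    {"name": "binlog.000012",
     "first_ts": datetime(2019, 5, 20, 10, 0, tzinfo=timezone.utc),
     "last_ts": datetime(2019, 5, 20, 10, 7, tzinfo=timezone.utc)},
    {"name": "binlog.000013",
     "first_ts": datetime(2019, 5, 20, 10, 7, tzinfo=timezone.utc),
     "last_ts": datetime(2019, 5, 20, 10, 15, tzinfo=timezone.utc)},
]

def pick_recovery_target_file(metadata, target_time):
    """Return the last binary log whose first transaction is not after target_time."""
    candidates = [m for m in metadata if m["first_ts"] <= target_time]
    if not candidates:
        raise ValueError("recovery target time predates the available binary logs")
    return max(candidates, key=lambda m: m["first_ts"])["name"]

print(pick_recovery_target_file(
    binlog_metadata, datetime(2019, 5, 20, 10, 10, tzinfo=timezone.utc)))
# -> binlog.000013
```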

MyHoard also ensures that binary logs are rotated within a reasonable amount of time.

By default, MySQL creates a new binary log file when a certain size threshold is exceeded. But in environments with a fairly low change rate, it can take a long time to exceed that threshold, and a long window of changes could be lost if the server failed before the binary log got uploaded. MyHoard requests MySQL to switch to the next binary log if no changes were performed in a configured amount of time (5 minutes by default), giving single-node services a known maximum data loss window.
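A minimal sketch of this rotation logic might look like the following, assuming PyMySQL and a GTID-enabled server. The polling scheme is simplified and is not MyHoard's actual implementation.

```python
# Simplified sketch: if the server has been idle for a full interval but the
# current binary log still contains transactions that haven't been rotated,
# ask MySQL to switch to the next binary log so it can be uploaded promptly.
# Assumes PyMySQL and GTIDs; not MyHoard's actual implementation.
import time
import pymysql

def rotation_loop(conn_kwargs, rotate_interval=300):  # 300 s mirrors the 5 minute default
    conn = pymysql.connect(**conn_kwargs)
    try:
        gtid_at_last_check = None
        gtid_at_last_rotation = None
        while True:
            with conn.cursor() as cur:
                cur.execute("SELECT @@GLOBAL.gtid_executed")
                gtid_executed = cur.fetchone()[0]
                idle = gtid_executed == gtid_at_last_check
                pending = gtid_executed != gtid_at_last_rotation
                if idle and pending:
                    cur.execute("FLUSH BINARY LOGS")  # switch to the next binary log
                    gtid_at_last_rotation = gtid_executed
                gtid_at_last_check = gtid_executed
            time.sleep(rotate_interval)
    finally:
        conn.close()
```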

To ensure all of the binary logs can be backed up and replicated, a reasonably long binary log retention period should be used so that the files aren't removed before they're processed. MySQL's default value is as high as 30 days. Once the binary logs have been replicated and backed up, they're typically no longer needed on the master, but nothing cleans them up until the MySQL retention threshold is exceeded.

While keeping old binary logs for long periods of time may not be an issue in environments with a low number of changes, some environments may generate more data in a day in binary logs than there is data in the actual database.

To avoid excessive disk usage by binary logs that are no longer needed, MyHoard automatically purges binary logs that have been backed up. It is even possible to pass MyHoard the replication state of the other cluster nodes to ensure it never purges binary logs that might still be required by some standby or standalone read replica server.
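For illustration, the purging itself boils down to a single SQL statement once the oldest still-needed file is known. The sketch below assumes PyMySQL; how that file name is determined is deliberately left out, since MyHoard derives it from its backup state and the replication state it is given for the other cluster nodes.

```python
# Illustrative sketch: remove binary logs that are already backed up and
# replicated everywhere. PURGE BINARY LOGS TO removes all files older than
# the named one (the named file itself is kept). Assumes PyMySQL.
import pymysql

def purge_backed_up_binlogs(conn_kwargs, oldest_still_needed):
    conn = pymysql.connect(**conn_kwargs)
    try:
        with conn.cursor() as cur:
            cur.execute("PURGE BINARY LOGS TO %s", (oldest_still_needed,))
    finally:
        conn.close()

# e.g. everything before binlog.000042 is safe to delete
purge_backed_up_binlogs(
    {"host": "127.0.0.1", "user": "myhoard", "password": "secret"},
    "binlog.000042",
)
```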

Standby servers must also be configured to write binary logs, because when the master fails and a standby is promoted, there might be another standby server or a separate read replica that had not replicated as far as the new master had. If the new master didn't have binary logging enabled, the servers that were behind it in replication couldn't catch up and would need to be re-initialized somehow, causing a potentially long period of lower-than-expected high availability (or none at all).

The old master might also have managed to replicate transactions to standby servers that it had not yet backed up. In order to keep the backup consistent, the new master needs to have the capability to back up those transactions.

MyHoard must also be configured to run on standby servers. It handles purging binary logs that have been backed up and replicated to all servers in the cluster.

When a failover happens, MyHoard starts backing up binary logs from the first binary log containing transactions that the old master had not backed up. It also updates the backup metadata to indicate the failover point, so that any stray updates from the old master are properly ignored when the backup is processed later: the old master may have still been alive and accepting writes that never got replicated because the new master had already been chosen, and it could still have backed up those transactions before it was eventually shut down.

Restoring binary logs is perhaps the most interesting part of the whole flow. The official MySQL documentation suggests recovering binary logs using mysqlbinlog, but mentions one major caveat:

Restoring is only safe if you restore everything with a single invocation of mysqlbinlog; otherwise temporary tables may be dropped unexpectedly and recovery might fail or recover invalid data.

This means you need to have all binary logs on disk simultaneously -- a potentially very large amount of disk space for change-heavy servers -- and the entire operation either succeeds or fails as a whole. If it fails, the only thing that can be done is starting over from scratch. The mysqlbinlog based processing also prevents parallelizing the operation.

MyHoard works around these problems by making use of MySQL's normal SQL slave thread, which is the same thing that processes updates from the MySQL master during normal replication: MySQL splits the processing into the IO thread, which is responsible for receiving updates from the master, and the SQL thread, which is responsible for applying them. During restoration, MyHoard keeps the IO thread stopped and emulates what it would be doing, directly feeding the backed up binary logs as relay logs for the SQL thread.

This allows batching the processing so that only part of the binary logs are stored on local disk at any point in time and any possible errors can be handled more gracefully. This would also allow using multithreading when processing the binary/relay logs, though this is currently not supported.
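As a very rough sketch of the relay log injection idea (not MyHoard's actual restore code), the SQL side of one batch could look something like the following, assuming PyMySQL, that the decrypted binary log files have already been copied into the relay log directory under the configured relay log basename, and that the relay log index file lists them:

```python
# Heavily simplified sketch of feeding backed-up binary logs to the SQL
# thread as relay logs. Assumes the files are already in place on disk and
# listed in the relay log index; the exact statements MyHoard issues differ.
import pymysql

def apply_injected_relay_logs(conn_kwargs, first_relay_log):
    conn = pymysql.connect(**conn_kwargs)
    try:
        with conn.cursor() as cur:
            cur.execute("STOP SLAVE")  # ensure neither replication thread runs
            # Point the SQL thread at the injected relay logs. Position 4 is
            # the first event after the relay log header. MASTER_HOST is a
            # placeholder; the IO thread is never started, so no connection
            # to it is ever attempted.
            cur.execute(
                "CHANGE MASTER TO MASTER_HOST = 'dummy', "
                "RELAY_LOG_FILE = %s, RELAY_LOG_POS = 4",
                (first_relay_log,),
            )
            cur.execute("START SLAVE SQL_THREAD")  # apply events, don't fetch any
    finally:
        conn.close()
```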

Running and controlling MyHoard

MyHoard is a daemon intended to be run as a systemd service, and it likewise expects MySQL to be managed via systemd. It takes a JSON configuration file that defines MySQL-related configuration, cloud object storage configuration, various backup-related options, monitoring-related options, and so forth. MyHoard also runs an HTTP server that provides a simple management API.

Once the configuration has been created and the service started, management is very simple. On the first ever cluster node, you make one PUT request to put MyHoard into active mode so it starts taking backups; on any other node, you start by putting it into restore mode so it gets fully restored from the backup; and should the master fail, you make another request on one of the standby servers to promote it as the new master.
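As an illustration, driving that API from Python could look something like the sketch below. The port, paths and payloads here are placeholders rather than MyHoard's documented API; see the GitHub page mentioned next for the real one.

```python
# Illustrative placeholders only -- see MyHoard's GitHub page for the real
# management API. The idea: one PUT request switches the node's mode.
import requests

BASE_URL = "http://127.0.0.1:16001"  # hypothetical management address

# First ever cluster node: go active and start taking backups.
requests.put(f"{BASE_URL}/status", json={"mode": "active"}, timeout=10).raise_for_status()

# Any later node: restore from backup before doing anything else.
requests.put(f"{BASE_URL}/status", json={"mode": "restore"}, timeout=10).raise_for_status()

# If the master fails: promote one of the standbys to be the new master.
requests.put(f"{BASE_URL}/status", json={"mode": "promote"}, timeout=10).raise_for_status()
```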

The full configuration and control API is explained on MyHoard's GitHub page.

Requirements and assumptions

MyHoard requires binary logging with ROW format and GTIDs to be enabled. Only a single server is expected to be accepting writes. It only works with MySQL 8.0.x+ and on Linux.
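These prerequisites can be sanity checked with a few system variables. The sketch below assumes PyMySQL and is only an illustration; MyHoard performs its own validation.

```python
# Sketch: verify the prerequisites -- binary logging enabled, ROW format,
# GTIDs on. Assumes PyMySQL; MyHoard does its own validation internally.
import pymysql

def check_prerequisites(conn_kwargs):
    conn = pymysql.connect(**conn_kwargs)
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT @@GLOBAL.log_bin, @@GLOBAL.binlog_format, @@GLOBAL.gtid_mode")
            log_bin, binlog_format, gtid_mode = cur.fetchone()
            if not log_bin:
                raise RuntimeError("binary logging is not enabled")
            if binlog_format != "ROW":
                raise RuntimeError(f"binlog_format must be ROW, got {binlog_format}")
            if gtid_mode != "ON":
                raise RuntimeError(f"gtid_mode must be ON, got {gtid_mode}")
    finally:
        conn.close()
```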

MyHoard is considered stable, but there are numerous things that can still be improved. For example, supporting parallel binary log restoration would bring big improvements to restoration speed. Currently, binary logs always need to be created on all servers in order to guarantee a consistent state, which adds a fair amount of overhead and could possibly be optimized.

We welcome contributions, bug reports and improvement suggestions. Please get in touch via the GitHub project or opensource@aiven.io!


Related resources

  • Solving issues for everyone

    Jul 6, 2023

    Why use Aiven for MySQL service when you could just roll your own? The simple answer: managed services make economies of scale work for you.

  • Data and disaster recovery illustration

    Mar 17, 2021

    We don’t like to think about disasters, but sometimes they just happen. Find out how you should prepare your data for the worst, even while hoping for the best.

  • Making robust and future-proof decisions about your data setup can be complex. The SOFT methodology is here to guide you to the right choices for your needs.

    Dec 8, 2022

    Making robust and future-proof decisions about your data setup can be complex. The SOFT methodology is here to guide you to the right choices for your needs.