{"id":32063,"date":"2018-11-04T15:35:53","date_gmt":"2018-11-04T10:05:53","guid":{"rendered":"https:\/\/www.wikitechy.com\/technology\/?p=32063"},"modified":"2018-11-04T15:37:00","modified_gmt":"2018-11-04T10:07:00","slug":"backup-mysql-database-to-object-storage-with-percona-on-ubuntu-16-04","status":"publish","type":"post","link":"https:\/\/www.wikitechy.com\/technology\/backup-mysql-database-to-object-storage-with-percona-on-ubuntu-16-04\/","title":{"rendered":"To Backup MySQL Databases to Object Storage with Percona on Ubuntu 16.04"},"content":{"rendered":"<p><span style=\"color: #000080;\"><strong>To Backup MySQL Databases to Object Storage with Percona on Ubuntu 16.04<\/strong><\/span><\/p>\n<ul>\n<li>The databases square measure store a number of the foremost valuable data in your infrastructure.<\/li>\n<li>As a result of it&#8217;s necessary to own reliable backups to protect against information loss within the event of Associate in Nursing accident or hardware failure.<\/li>\n<\/ul>\n<p>The <strong>Percona XtraBackup<\/strong> backup tools offer a technique of playing &#8220;<strong>hot<\/strong>&#8221; backups of <a href=\"https:\/\/www.wikitechy.com\/angularjs\/angularjs-delete-using-php-mysql\" target=\"_blank\" rel=\"noopener\">MySQL<\/a> information whereas the system is running. 
They accomplish this by copying the data files at the filesystem level and then performing a crash recovery to achieve consistency within the dataset.<\/p>\n<ul>\n<li>In a previous guide, we installed <strong>Percona&#8217;s backup utilities<\/strong> and created a series of scripts to perform rotating local backups.<\/li>\n<li>This works well for backing up data to a different drive or a network-mounted volume to handle problems with your database machine.<\/li>\n<\/ul>\n<p>However, in most cases, data should be backed up off-site, where it can be easily maintained and restored.<\/p>\n<ul>\n<li>We can extend our previous backup system to upload our compressed, <strong>encrypted backup files<\/strong> to an object storage service.<\/li>\n<\/ul>\n<p>We will be using DigitalOcean Spaces as an example in this guide. However, the basic procedures are likely applicable to other S3-compatible object storage solutions as well.<\/p>\n<h3 id=\"prerequisites\"><span style=\"color: #ff0000;\"><strong>Prerequisites<\/strong><\/span><\/h3>\n<p>Before you begin this guide, you will need a <strong>MySQL database server<\/strong> configured with the local Percona backup solution outlined in our previous guide. 
The full set of guides you need to follow are:<\/p>\n<ul>\n<li>Initial Server Setup with <a href=\"https:\/\/www.wikitechy.com\/technology\/how-to-install-kde-on-ubuntu-16-04-and-ubuntu-16-10\/\" target=\"_blank\" rel=\"noopener\">Ubuntu<\/a> 16.04: This guide will help you configure a user account with sudo privileges and set up a basic firewall.<\/li>\n<\/ul>\n<h3 id=\"one-of-the-subsequent-mysql-installation-guides\"><span style=\"color: #003366;\">One of the following MySQL installation guides:<\/span><\/h3>\n<ul>\n<li>How To <strong>Install MySQL<\/strong> on Ubuntu 16.04: Uses the default package provided and maintained by the Ubuntu team.<\/li>\n<li>How To <strong>Install the Latest MySQL<\/strong> on Ubuntu 16.04: Uses updated packages provided by the MySQL project.<\/li>\n<li>How To <strong>Configure MySQL Backups with Percona XtraBackup on Ubuntu 16.04<\/strong>: This guide sets up a local MySQL backup solution using the Percona XtraBackup tools.<\/li>\n<\/ul>\n<p>In addition to the above tutorials, you will also need to generate an access key and a secret key to interact with your object storage account using the <strong>API<\/strong>. If you are using DigitalOcean Spaces, you can find out how to generate these credentials by following our How To Create a DigitalOcean Space and API Key guide. 
You will need to save both the <strong>API access key and the API secret value<\/strong>.<\/p>\n<p>When you are finished with the previous guides, log back in to your <a href=\"https:\/\/www.wikitechy.com\/tutorials\/apache\/apache-web-server\" target=\"_blank\" rel=\"noopener\">server<\/a> as your sudo user to get started.<\/p>\n<h3 id=\"installing-the-dependencies\"><span style=\"color: #003366;\">Installing the Dependencies<\/span><\/h3>\n<ul>\n<li>To generate our backups, we will use <strong>Python and Bash scripts<\/strong> and then upload the results to remote object storage for safekeeping.<\/li>\n<li>We will need the <strong>boto3<\/strong> Python library to interact with the object storage API. We can download this with pip, Python&#8217;s package manager.<\/li>\n<li>Refresh our local package index and then install the <a href=\"https:\/\/www.wikitechy.com\/tutorials\/python\/install-python\" target=\"_blank\" rel=\"noopener\">Python<\/a> 3 version of <strong>pip<\/strong> from Ubuntu&#8217;s default repositories using <strong>apt-get<\/strong> by typing:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo apt-get update<br\/>$ sudo apt-get install python3-pip<\/code><\/pre> <\/div>\n<ul>\n<li>Because Ubuntu maintains its <strong>own package life cycle<\/strong>, the version of pip in Ubuntu&#8217;s repositories is not kept in sync with recent releases. 
However, we can update to a newer version of pip using the tool itself.<\/li>\n<li>We will use sudo to install globally and include the<strong> -H flag<\/strong> to set the <strong>$HOME<\/strong> variable to a value pip expects:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -H pip3 install --upgrade pip<\/code><\/pre> <\/div>\n<p>Afterward, we can install boto3 along with the <strong>pytz module<\/strong>, which we will use to compare times accurately using the offset-aware format that the object storage API returns:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -H pip3 install boto3 pytz<\/code><\/pre> <\/div>\n<p>We should now have all of the Python modules we need to interact with the object storage API.<\/p>\n<h3 id=\"create-associate-in-nursing-object-storage-configuration-file\"><span style=\"color: #003366;\">Create an Object Storage Configuration File<\/span><\/h3>\n<ul>\n<li>Our backup and download scripts will need to interact with the object storage API in order to upload files and download older backup artifacts when we need to restore.<\/li>\n<li>They will use the access keys we generated in the prerequisite section. 
Rather than keeping these values in the scripts themselves, we will place them in a dedicated file that can be read by our scripts.<\/li>\n<li>This way, we can <strong>share our scripts<\/strong> without fear of exposing our credentials, and we can lock down the credentials more heavily than the script itself.<\/li>\n<li>In the last guide, we created the<strong> \/backups\/mysql<\/strong> directory to store our backups and our encryption key. We will place the configuration file here alongside our other assets. Create a file called <strong>object_storage_config.sh<\/strong>:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo nano \/backups\/mysql\/object_storage_config.sh<\/code><\/pre> <\/div>\n<p>Inside, paste the following contents, changing the access key and secret key to the values you obtained from your object storage account and the bucket name to a unique value. 
Set the endpoint URL and region name to the values provided by your object storage service (we will use the values associated with DigitalOcean&#8217;s <strong>NYC3<\/strong> region for Spaces here):<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <span class=\"code-embed-name\">\/backups\/mysql\/object_storage_config.sh<\/span> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">#!\/bin\/bash<br\/><br\/>export MYACCESSKEY=&quot;my_access_key&quot;<br\/>export MYSECRETKEY=&quot;my_secret_key&quot;<br\/>export MYBUCKETNAME=&quot;your_unique_bucket_name&quot;<br\/>export MYENDPOINTURL=&quot;https:\/\/nyc3.digitaloceanspaces.com&quot;<br\/>export MYREGIONNAME=&quot;nyc3&quot;<\/code><\/pre> <\/div>\n<p>These lines define two environment variables called <strong>MYACCESSKEY<\/strong> and <strong>MYSECRETKEY<\/strong> to hold our access and secret keys respectively.<\/p>\n<p>The <strong>MYBUCKETNAME<\/strong> variable defines the object storage bucket we want to use to store our backup files.<\/p>\n<p>Bucket names must be universally unique, so you must choose a name that no other user has selected. 
Our script will check the bucket value to see whether it is already claimed by another user and automatically create it if it is available.<\/p>\n<p>We export the variables we define so that any processes we call from within our scripts will have access to these values.<\/p>\n<h3 id=\"myendpointurl-and-myregionname\"><span style=\"color: #003366;\"><strong>MYENDPOINTURL and MYREGIONNAME:<\/strong><\/span><\/h3>\n<p>The <strong>MYENDPOINTURL and MYREGIONNAME<\/strong> variables contain the API endpoint and the specific region identifier offered by your object storage provider.<\/p>\n<p>For DigitalOcean Spaces, the endpoint will be <strong>https:\/\/region_name.digitaloceanspaces.com<\/strong>. You can find the available regions for Spaces in the DigitalOcean panel (at the time of this writing, only &#8220;<strong>nyc3<\/strong>&#8221; is available).<\/p>\n<p>Save and close the file when you are finished.<\/p>\n<p>Anyone who can access our API keys has complete access to our object storage account, so it is important to restrict access to the configuration file to the backup user. 
We can give the backup user and group ownership of the file and then revoke all other access by typing:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chown backup:backup \/backups\/mysql\/object_storage_config.sh<br\/>$ sudo chmod 600 \/backups\/mysql\/object_storage_config.sh<\/code><\/pre> <\/div>\n<p>Our <strong>object_storage_config.sh<\/strong> file should now only be accessible to the backup user.<\/p>\n<h3 id=\"creating-the-remote-backup-scripts\"><span style=\"color: #0000ff;\">Creating the Remote Backup Scripts<\/span><\/h3>\n<p>Now that we have an object storage configuration file, we can go ahead and start creating our scripts. We will be creating the following scripts:<\/p>\n<ul>\n<li><span style=\"color: #800080;\"><strong>object_storage.py:<\/strong> <\/span>This script is <strong>responsible for interacting<\/strong> with the object storage API to create buckets, upload files, download content, and prune older backups. Our other scripts will call this script any time they need to interact with the remote object storage account.<\/li>\n<li><span style=\"color: #800080;\"><strong>remote-backup-mysql.sh:<\/strong> <\/span>This script <strong>backs up the MySQL databases by encrypting<\/strong> and compressing the files into a single artifact and then uploading it to the remote object store. It creates a full backup at the beginning of each day and then an incremental backup every hour afterwards. It automatically prunes all files from the remote bucket that are older than thirty days.<\/li>\n<li><span style=\"color: #800080;\"><strong>download-day.sh:<\/strong> <\/span>This script <strong>allows us to download all of the backups<\/strong> associated with a given day. 
Because our backup script creates a full backup each morning and then incremental backups throughout the day, this script can download all of the assets necessary to restore to any hourly checkpoint.<\/li>\n<\/ul>\n<p>Along with the new scripts above, we will leverage the <strong>extract-mysql.sh<\/strong> and <strong>prepare-mysql.sh<\/strong> scripts from the previous guide to help restore our files. You can view the scripts in the repository for this tutorial on GitHub at any time. If you do not want to copy and paste the contents below, you can download the new files directly from GitHub by typing:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ cd \/tmp<br\/>$ curl -LO https:\/\/raw.githubusercontent.com\/do-community\/ubuntu-1604-mysql-backup\/master\/object_storage.py<br\/>$ curl -LO https:\/\/raw.githubusercontent.com\/do-community\/ubuntu-1604-mysql-backup\/master\/remote-backup-mysql.sh<br\/>$ curl -LO https:\/\/raw.githubusercontent.com\/do-community\/ubuntu-1604-mysql-backup\/master\/download-day.sh<\/code><\/pre> <\/div>\n<ul>\n<li>Be sure to inspect the scripts after downloading to make sure they were retrieved successfully and that you approve of the actions they will perform. 
If you are satisfied, mark the scripts as executable and then move them into the <strong>\/usr\/local\/bin<\/strong> directory by typing:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-markup code-embed-code\">$ chmod +x \/tmp\/{remote-backup-mysql.sh,download-day.sh,object_storage.py}<br\/>$ sudo mv \/tmp\/{remote-backup-mysql.sh,download-day.sh,object_storage.py} \/usr\/local\/bin<\/code><\/pre> <\/div>\n<ul>\n<li>Next, we will set up each of these scripts and discuss them in more detail.<\/li>\n<\/ul>\n<h3 id=\"create-the-object_storage-py-script\"><span style=\"color: #0000ff;\">Create the object_storage.py Script<\/span><\/h3>\n<ul>\n<li>If you did not download the <strong>object_storage.py<\/strong> script from GitHub, create a new file in the <strong>\/usr\/local\/bin<\/strong> directory called <strong>object_storage.py:<\/strong><\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo nano \/usr\/local\/bin\/object_storage.py<\/code><\/pre> <\/div>\n<ul>\n<li>Copy and paste the script contents into the file:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <span class=\"code-embed-name\">\/usr\/local\/bin\/object_storage.py<\/span> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">#!\/usr\/bin\/env python3<br\/><br\/>import argparse<br\/>import os<br\/>import sys<br\/>from datetime import datetime, timedelta<br\/><br\/>import boto3<br\/>import pytz<br\/>from botocore.client import ClientError, Config<br\/>from dateutil.parser import parse<br\/><br\/># &quot;backup_bucket&quot; must be a 
universally unique name, so choose something<br\/># specific to your setup.<br\/># The bucket will be created in your account if it does not already exist<br\/>backup_bucket = os.environ[&#039;MYBUCKETNAME&#039;]<br\/>access_key = os.environ[&#039;MYACCESSKEY&#039;]<br\/>secret_key = os.environ[&#039;MYSECRETKEY&#039;]<br\/>endpoint_url = os.environ[&#039;MYENDPOINTURL&#039;]<br\/>region_name = os.environ[&#039;MYREGIONNAME&#039;]<br\/><br\/><br\/>class Space():<br\/>    def __init__(self, bucket):<br\/>        self.session = boto3.session.Session()<br\/>        self.client = self.session.client(&#039;s3&#039;,<br\/>                                          region_name=region_name,<br\/>                                          endpoint_url=endpoint_url,<br\/>                                          aws_access_key_id=access_key,<br\/>                                          aws_secret_access_key=secret_key,<br\/>                                          config=Config(signature_version=&#039;s3&#039;)<br\/>                                          )<br\/>        self.bucket = bucket<br\/>        self.paginator = self.client.get_paginator(&#039;list_objects&#039;)<br\/><br\/>    def create_bucket(self):<br\/>        try:<br\/>            self.client.head_bucket(Bucket=self.bucket)<br\/>        except ClientError as e:<br\/>            if e.response[&#039;Error&#039;][&#039;Code&#039;] == &#039;404&#039;:<br\/>                self.client.create_bucket(Bucket=self.bucket)<br\/>            elif e.response[&#039;Error&#039;][&#039;Code&#039;] == &#039;403&#039;:<br\/>                print(&quot;The bucket name \\&quot;{}\\&quot; is already being used by &quot;<br\/>                      &quot;someone.  
Please try using a different bucket &quot;<br\/>                      &quot;name.&quot;.format(self.bucket))<br\/>                sys.exit(1)<br\/>            else:<br\/>                print(&quot;Unexpected error: {}&quot;.format(e))<br\/>                sys.exit(1)<br\/><br\/>    def upload_files(self, files):<br\/>        for filename in files:<br\/>            self.client.upload_file(Filename=filename, Bucket=self.bucket,<br\/>                                    Key=os.path.basename(filename))<br\/>            print(&quot;Uploaded {} to \\&quot;{}\\&quot;&quot;.format(filename, self.bucket))<br\/><br\/>    def remove_file(self, filename):<br\/>        self.client.delete_object(Bucket=self.bucket,<br\/>                                  Key=os.path.basename(filename))<br\/><br\/>    def prune_backups(self, days_to_keep):<br\/>        oldest_day = datetime.now(pytz.utc) - timedelta(days=int(days_to_keep))<br\/>        try:<br\/>            # Create an iterator to page through results<br\/>            page_iterator = self.paginator.paginate(Bucket=self.bucket)<br\/>            # Collect objects older than the specified date<br\/>            objects_to_prune = [filename[&#039;Key&#039;] for page in page_iterator<br\/>                                for filename in page[&#039;Contents&#039;]<br\/>                                if filename[&#039;LastModified&#039;] &lt; oldest_day]<br\/>        except KeyError:<br\/>            # If the bucket is empty<br\/>            sys.exit()<br\/>        for object in objects_to_prune:<br\/>            print(&quot;Removing \\&quot;{}\\&quot; from {}&quot;.format(object, self.bucket))<br\/>            self.remove_file(object)<br\/><br\/>    def download_file(self, filename):<br\/>        self.client.download_file(Bucket=self.bucket,<br\/>                                  Key=filename, Filename=filename)<br\/><br\/>    def get_day(self, day_to_get):<br\/>        try:<br\/>            # Attempt to parse the date format the user 
provided<br\/>            input_date = parse(day_to_get)<br\/>        except ValueError:<br\/>            print(&quot;Cannot parse the provided date: {}&quot;.format(day_to_get))<br\/>            sys.exit(1)<br\/>        day_string = input_date.strftime(&quot;-%m-%d-%Y_&quot;)<br\/>        print_date = input_date.strftime(&quot;%A, %b. %d %Y&quot;)<br\/>        print(&quot;Looking for objects from {}&quot;.format(print_date))<br\/>        try:<br\/>            # create an iterator to page through results<br\/>            page_iterator = self.paginator.paginate(Bucket=self.bucket)<br\/>            objects_to_grab = [filename[&#039;Key&#039;] for page in page_iterator<br\/>                               for filename in page[&#039;Contents&#039;]<br\/>                               if day_string in filename[&#039;Key&#039;]]<br\/>        except KeyError:<br\/>            print(&quot;No objects currently in bucket&quot;)<br\/>            sys.exit()<br\/>        if objects_to_grab:<br\/>            for object in objects_to_grab:<br\/>                print(&quot;Downloading \\&quot;{}\\&quot; from {}&quot;.format(object, self.bucket))<br\/>                self.download_file(object)<br\/>        else:<br\/>            print(&quot;No objects found from: {}&quot;.format(print_date))<br\/>            sys.exit()<br\/><br\/><br\/>def is_valid_file(filename):<br\/>    if os.path.isfile(filename):<br\/>        return filename<br\/>    else:<br\/>        raise argparse.ArgumentTypeError(&quot;File \\&quot;{}\\&quot; does not exist.&quot;<br\/>                                         .format(filename))<br\/><br\/><br\/>def parse_arguments():<br\/>    parser = argparse.ArgumentParser(<br\/>        description=&#039;&#039;&#039;Client to perform backup-related tasks with<br\/>                     object storage.&#039;&#039;&#039;)<br\/>    subparsers = parser.add_subparsers()<br\/><br\/>    # parse arguments for the &quot;upload&quot; command<br\/>    parser_upload = 
subparsers.add_parser(&#039;upload&#039;)<br\/>    parser_upload.add_argument(&#039;files&#039;, type=is_valid_file, nargs=&#039;+&#039;)<br\/>    parser_upload.set_defaults(func=upload)<br\/><br\/>    # parse arguments for the &quot;prune&quot; command<br\/>    parser_prune = subparsers.add_parser(&#039;prune&#039;)<br\/>    parser_prune.add_argument(&#039;--days-to-keep&#039;, default=30)<br\/>    parser_prune.set_defaults(func=prune)<br\/><br\/>    # parse arguments for the &quot;download&quot; command<br\/>    parser_download = subparsers.add_parser(&#039;download&#039;)<br\/>    parser_download.add_argument(&#039;filename&#039;)<br\/>    parser_download.set_defaults(func=download)<br\/><br\/>    # parse arguments for the &quot;get_day&quot; command<br\/>    parser_get_day = subparsers.add_parser(&#039;get_day&#039;)<br\/>    parser_get_day.add_argument(&#039;day&#039;)<br\/>    parser_get_day.set_defaults(func=get_day)<br\/><br\/>    return parser.parse_args()<br\/><br\/><br\/>def upload(space, args):<br\/>    space.upload_files(args.files)<br\/><br\/><br\/>def prune(space, args):<br\/>    space.prune_backups(args.days_to_keep)<br\/><br\/><br\/>def download(space, args):<br\/>    space.download_file(args.filename)<br\/><br\/><br\/>def get_day(space, args):<br\/>    space.get_day(args.day)<br\/><br\/><br\/>def main():<br\/>    args = parse_arguments()<br\/>    space = Space(bucket=backup_bucket)<br\/>    space.create_bucket()<br\/>    args.func(space, args)<br\/><br\/><br\/>if __name__ == &#039;__main__&#039;:<br\/>    main()<\/code><\/pre> <\/div>\n<p>This script is responsible for managing the backups in your object storage account. It can upload files, remove files, prune old backups, and download files from object storage. Instead of interacting with the object storage API directly, our other scripts will use the functionality defined here to interact with remote resources. The commands it defines are:<\/p>\n<ul>\n<li><span style=\"color: #800080;\"><strong>upload:<\/strong> <\/span>Uploads to <strong>object storage<\/strong> each of the files that are passed in as arguments. Multiple files may be specified.<\/li>\n<li><span style=\"color: #800080;\"><strong>download<\/strong>:<\/span> Downloads <strong>a single file from remote object storage<\/strong>, which is passed in as an argument.<\/li>\n<li><span style=\"color: #800080;\"><strong>prune<\/strong>:<\/span> <strong>Removes every file older than a certain age<\/strong> from the object storage location. By default, this removes files older than thirty days. You can adjust this by specifying the --days-to-keep option when calling prune.<\/li>\n<li><span style=\"color: #800080;\"><strong>get_day<\/strong>:<\/span> Pass in the day to <strong>download as an argument<\/strong> using a standard date format (using quotation marks if the date has whitespace in it). 
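<\/li>\n<\/ul>\n<p>As an illustration, once the configuration values have been sourced into the environment, the subcommands can be invoked as follows (a sketch only; the .xbstream filename below is a hypothetical example, so substitute the name of a real local backup file):<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ source \/backups\/mysql\/object_storage_config.sh<br\/>$ object_storage.py upload \/backups\/mysql\/working\/full-01-01-2018_00-00-00.xbstream<br\/>$ object_storage.py prune --days-to-keep 14<br\/>$ object_storage.py get_day &quot;01-01-2018&quot;<\/code><\/pre> <\/div>\n<ul>\n<li>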
The tool will then attempt to parse it and download all of the files from that date.<\/li>\n<\/ul>\n<p>The script attempts to read the object storage credentials and bucket name from <strong>environment<\/strong> variables, so we will need to make sure those are <strong>populated<\/strong> from the <strong>object_storage_config.sh<\/strong> file before calling the <strong>object_storage.py script.<\/strong><\/p>\n<p>When you are finished, save and close the file.<\/p>\n<p>Next, if you have not already done so, make the script executable by typing:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chmod +x \/usr\/local\/bin\/object_storage.py<\/code><\/pre> <\/div>\n<p>Now that the <strong>object_storage.py<\/strong> script is available to interact with the API, we can create the Bash scripts that use it to back up and download files.<\/p>\n<h3 id=\"create-the-remote-backup-mysql-sh-script\"><span style=\"color: #0000ff;\">Create the remote-backup-mysql.sh Script<\/span><\/h3>\n<ul>\n<li>Next, we will create the <strong>remote-backup-mysql.sh<\/strong> script. 
This will perform many of the same functions as the original <strong>backup-mysql.sh<\/strong> local backup script, with a more basic organization structure (since maintaining rotating backups on the <strong>local<\/strong> filesystem is not necessary) and <strong>a few extra<\/strong> steps to <strong>upload<\/strong> to object storage.<\/li>\n<li>If you did not <strong>download<\/strong> the script from the repository, <strong>create<\/strong> and open a file called <strong>remote-backup-mysql.sh<\/strong> in the <strong>\/usr\/local\/bin directory:<\/strong><\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo nano \/usr\/local\/bin\/remote-backup-mysql.sh<\/code><\/pre> <\/div>\n<p><strong>Inside, paste the following script:<\/strong><\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <span class=\"code-embed-name\">\/usr\/local\/bin\/remote-backup-mysql.sh<\/span> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">#!\/bin\/bash<br\/><br\/>export LC_ALL=C<br\/><br\/>days_to_keep=30<br\/>backup_owner=&quot;backup&quot;<br\/>parent_dir=&quot;\/backups\/mysql&quot;<br\/>defaults_file=&quot;\/etc\/mysql\/backup.cnf&quot;<br\/>working_dir=&quot;${parent_dir}\/working&quot;<br\/>log_file=&quot;${working_dir}\/backup-progress.log&quot;<br\/>encryption_key_file=&quot;${parent_dir}\/encryption_key&quot;<br\/>storage_configuration_file=&quot;${parent_dir}\/object_storage_config.sh&quot;<br\/>now=&quot;$(date)&quot;<br\/>now_string=&quot;$(date -d&quot;${now}&quot; 
+%m-%d-%Y_%H-%M-%S)&quot;<br\/>processors=&quot;$(nproc --all)&quot;<br\/><br\/># Use this to echo to standard error<br\/>error () {<br\/>    printf &quot;%s: %s\\n&quot; &quot;$(basename &quot;${BASH_SOURCE}&quot;)&quot; &quot;${1}&quot; &gt;&amp;2<br\/>    exit 1<br\/>}<br\/><br\/>trap &#039;error &quot;An unexpected error occurred.&quot;&#039; ERR<br\/><br\/>sanity_check () {<br\/>    # Check user running the script<br\/>    if [ &quot;$(id --user --name)&quot; != &quot;$backup_owner&quot; ]; then<br\/>        error &quot;Script can only be run as the \\&quot;$backup_owner\\&quot; user&quot;<br\/>    fi<br\/><br\/>    # Check whether the encryption key file is available<br\/>    if [ ! -r &quot;${encryption_key_file}&quot; ]; then<br\/>        error &quot;Cannot read encryption key at ${encryption_key_file}&quot;<br\/>    fi<br\/><br\/>    # Check whether the object storage configuration file is available<br\/>    if [ ! -r &quot;${storage_configuration_file}&quot; ]; then<br\/>        error &quot;Cannot read object storage configuration from ${storage_configuration_file}&quot;<br\/>    fi<br\/><br\/>    # Check whether the object storage configuration is set in the file<br\/>    source &quot;${storage_configuration_file}&quot;<br\/>    if [ -z &quot;${MYACCESSKEY}&quot; ] || [ -z &quot;${MYSECRETKEY}&quot; ] || [ -z &quot;${MYBUCKETNAME}&quot; ]; then<br\/>        error &quot;Object storage configuration are not set properly in ${storage_configuration_file}&quot;<br\/>    fi<br\/>}<br\/><br\/>set_backup_type () {<br\/>    backup_type=&quot;full&quot;<br\/><br\/><br\/>    # Grab date of the last backup if available<br\/>    if [ -r &quot;${working_dir}\/xtrabackup_info&quot; ]; then<br\/>        last_backup_date=&quot;$(date -d&quot;$(grep start_time &quot;${working_dir}\/xtrabackup_info&quot; | cut -d&#039; &#039; -f3)&quot; +%s)&quot;<br\/>    else<br\/>            last_backup_date=0<br\/>    fi<br\/><br\/>    # Grab today&#039;s date, in the same format<br\/> 
   todays_date=&quot;$(date -d &quot;$(date -d &quot;${now}&quot; &quot;+%D&quot;)&quot; +%s)&quot;<br\/><br\/>    # Compare the two dates<br\/>    (( $last_backup_date == $todays_date ))<br\/>    same_day=&quot;${?}&quot;<br\/><br\/>    # The first backup each new day will be a full backup<br\/>    # If today&#039;s date is the same as the last backup, take an incremental backup instead<br\/>    if [ &quot;$same_day&quot; -eq &quot;0&quot; ]; then<br\/>        backup_type=&quot;incremental&quot;<br\/>    fi<br\/>}<br\/><br\/>set_options () {<br\/>    # List the xtrabackup arguments<br\/>    xtrabackup_args=(<br\/>        &quot;--defaults-file=${defaults_file}&quot;<br\/>        &quot;--backup&quot;<br\/>        &quot;--extra-lsndir=${working_dir}&quot;<br\/>        &quot;--compress&quot;<br\/>        &quot;--stream=xbstream&quot;<br\/>        &quot;--encrypt=AES256&quot;<br\/>        &quot;--encrypt-key-file=${encryption_key_file}&quot;<br\/>        &quot;--parallel=${processors}&quot;<br\/>        &quot;--compress-threads=${processors}&quot;<br\/>        &quot;--encrypt-threads=${processors}&quot;<br\/>        &quot;--slave-info&quot;<br\/>    )<br\/><br\/>    set_backup_type<br\/><br\/>    # Add option to read LSN (log sequence number) if taking an incremental backup<br\/>    if [ &quot;$backup_type&quot; == &quot;incremental&quot; ]; then<br\/>        lsn=$(awk &#039;\/to_lsn\/ {print $3;}&#039; &quot;${working_dir}\/xtrabackup_checkpoints&quot;)<br\/>        xtrabackup_args+=( &quot;--incremental-lsn=${lsn}&quot; )<br\/>    fi<br\/>}<br\/><br\/>rotate_old () {<br\/>    # Remove previous backup artifacts<br\/>    find &quot;${working_dir}&quot; -name &quot;*.xbstream&quot; -type f -delete<br\/><br\/>    # Remove any backups from object storage older than 30 days<br\/>    \/usr\/local\/bin\/object_storage.py prune --days-to-keep &quot;${days_to_keep}&quot;<br\/>}<br\/><br\/>take_backup () {<br\/>    find &quot;${working_dir}&quot; -type f -name 
&quot;*.incomplete&quot; -delete<br\/>    xtrabackup &quot;${xtrabackup_args[@]}&quot; --target-dir=&quot;${working_dir}&quot; &gt; &quot;${working_dir}\/${backup_type}-${now_string}.xbstream.incomplete&quot; 2&gt; &quot;${log_file}&quot;<br\/><br\/>    mv &quot;${working_dir}\/${backup_type}-${now_string}.xbstream.incomplete&quot; &quot;${working_dir}\/${backup_type}-${now_string}.xbstream&quot;<br\/>}<br\/><br\/>upload_backup () {<br\/>    \/usr\/local\/bin\/object_storage.py upload &quot;${working_dir}\/${backup_type}-${now_string}.xbstream&quot;<br\/>}<br\/><br\/>main () {<br\/>    mkdir -p &quot;${working_dir}&quot;<br\/>    sanity_check &amp;&amp; set_options &amp;&amp; rotate_old &amp;&amp; take_backup &amp;&amp; upload_backup<br\/><br\/>    # Check success and print message<br\/>    if tail -1 &quot;${log_file}&quot; | grep -q &quot;completed OK&quot;; then<br\/>        printf &quot;Backup successful!\\n&quot;<br\/>        printf &quot;Backup created at %s\/%s-%s.xbstream\\n&quot; &quot;${working_dir}&quot; &quot;${backup_type}&quot; &quot;${now_string}&quot;<br\/>    else<br\/>        error &quot;Backup failure! 
If available, check ${log_file} for more information&quot;<br\/>    fi<br\/>}<br\/><br\/>main<\/code><\/pre> <\/div>\n<ul>\n<li>This script handles the actual <strong>MySQL backup<\/strong> procedure, controls the backup schedule, and automatically removes older backups from remote storage.<\/li>\n<li>You can choose how many days of backups to keep on hand by adjusting the days_to_keep variable.<\/li>\n<li>The local <strong>backup-mysql.sh<\/strong> script we used in the last article maintained separate directories for each day&#8217;s backups.<\/li>\n<li>Since we are storing backups remotely, we will only keep the latest backup locally in order to minimize the disk space dedicated to backups.<\/li>\n<li>Previous backups can be downloaded from object storage as needed for restoration.<\/li>\n<li>As with the previous script, after checking that a few basic requirements are satisfied and configuring the type of backup that should be taken, we encrypt and compress each backup into a single file archive.<\/li>\n<li>The previous archive file is removed from the local filesystem, and any remote backups older than the value defined in <strong>days_to_keep<\/strong> are removed.<\/li>\n<li>Save and close the file when you are finished. 
Afterward, make sure that the script is executable by typing:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chmod +x \/usr\/local\/bin\/remote-backup-mysql.sh<\/code><\/pre> <\/div>\n<p>This script can be used as a replacement for the <strong>backup-mysql.sh<\/strong> script from the previous guide to switch from making local backups to remote backups.<\/p>\n<h3 id=\"create-the-download-day-sh-script\"><span style=\"color: #800080;\">Create the download-day.sh Script<\/span><\/h3>\n<ul>\n<li>Finally, download or create the download-day.sh script within the <strong>\/usr\/local\/bin<\/strong> directory. This script can be used to download all of the backups associated with a particular day.<\/li>\n<li>Create the script in your text editor if you did not download it earlier:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo nano \/usr\/local\/bin\/download-day.sh<\/code><\/pre> <\/div>\n<ul>\n<li><strong>Inside, paste the following contents:<\/strong><\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <span class=\"code-embed-name\">\/usr\/local\/bin\/download-day.sh<\/span> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">#!\/bin\/bash<br\/><br\/>export LC_ALL=C<br\/><br\/>backup_owner=&quot;backup&quot;<br\/>storage_configuration_file=&quot;\/backups\/mysql\/object_storage_config.sh&quot;<br\/>day_to_download=&quot;${1}&quot;<br\/><br\/># Use this to echo to standard error<br\/>error () {<br\/>    printf &quot;%s: %s\\n&quot; 
&quot;$(basename &quot;${BASH_SOURCE}&quot;)&quot; &quot;${1}&quot; &gt;&amp;2<br\/>    exit 1<br\/>}<br\/><br\/>trap &#039;error &quot;An unexpected error occurred.&quot;&#039; ERR<br\/><br\/>sanity_check () {<br\/>    # Check user running the script<br\/>    if [ &quot;$(id --user --name)&quot; != &quot;$backup_owner&quot; ]; then<br\/>        error &quot;Script can only be run as the \\&quot;$backup_owner\\&quot; user&quot;<br\/>    fi<br\/><br\/>    # Check whether the object storage configuration file is available<br\/>    if [ ! -r &quot;${storage_configuration_file}&quot; ]; then<br\/>        error &quot;Cannot read object storage configuration from ${storage_configuration_file}&quot;<br\/>    fi<br\/><br\/>    # Check whether the object storage configuration is set in the file<br\/>    source &quot;${storage_configuration_file}&quot;<br\/>    if [ -z &quot;${MYACCESSKEY}&quot; ] || [ -z &quot;${MYSECRETKEY}&quot; ] || [ -z &quot;${MYBUCKETNAME}&quot; ]; then<br\/>        error &quot;Object storage configuration are not set properly in ${storage_configuration_file}&quot;<br\/>    fi<br\/>}<br\/><br\/>main () {<br\/>    sanity_check<br\/>    \/usr\/local\/bin\/object_storage.py get_day &quot;${day_to_download}&quot;<br\/>}<br\/><br\/>main<\/code><\/pre> <\/div>\n<p>This script can be called to download all of the archives from a specific day. Since each day starts with a full backup and accumulates incremental backups throughout the remainder of the day, this will download all of the relevant files necessary to restore to any hourly snapshot.<\/p>\n<p>The script takes a single argument: a date or day. It uses Python&#8217;s <strong>dateutil.parser.parse<\/strong> function to read and interpret a date string provided as an argument.<\/p>\n<p>The function is fairly flexible and can interpret dates in a variety of formats, including relative strings like &#8220;<strong>Friday<\/strong>&#8221;, for example. 
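<\/p>\n<p>As a quick illustration of that flexibility, the snippet below (a sketch, assuming the python-dateutil package is installed, which the helper script relies on) shows how <strong>dateutil.parser.parse<\/strong> reads a few date strings:<\/p>

```python
# Sketch: how dateutil.parser.parse reads the date strings that
# download-day.sh passes along (assumes python-dateutil is installed).
from dateutil import parser

# A well-defined date parses unambiguously
day = parser.parse("Oct. 17 2017")
print(day.year, day.month, day.day)  # 2017 10 17

# Looser formats resolve to the same calendar day, which is why
# quoting and unambiguous dates are recommended
print(parser.parse("10-17-2017").date() == day.date())  # True
```

<p>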
To avoid ambiguity, however, it&#8217;s best to use more well-defined dates. Be sure to wrap dates in quotes if the format you want to use contains whitespace.<\/p>\n<p>When you are ready to continue, save and close the file. Make the script executable by typing:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chmod +x \/usr\/local\/bin\/download-day.sh<\/code><\/pre> <\/div>\n<p>We now have the ability to download the backup files from object storage for a specific date when we want to restore.<\/p>\n<h3 id=\"testing-the-remote-mysql-backup-and-transfer-scripts\"><span style=\"color: #800080;\">Testing the Remote MySQL Backup and Download Scripts<\/span><\/h3>\n<ul>\n<li>Now that we have our scripts in place, we should test to make sure they perform as expected.<\/li>\n<\/ul>\n<h3 id=\"perform-a-full-backup\"><span style=\"color: #3366ff;\">Perform a Full Backup<\/span><\/h3>\n<ul>\n<li>Begin by calling the remote-backup-mysql.sh script as the backup user. Since this is the first time we are running this command, it should create a full backup of our MySQL data.<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -u backup remote-backup-mysql.sh<\/code><\/pre> <\/div>\n<h4 id=\"note\"><span style=\"color: #993300;\">Note:<\/span><\/h4>\n<p>If you receive an error indicating that the bucket name you chose is already in use, you will need to choose a different name. 
Change the value of MYBUCKETNAME in the <strong>\/backups\/mysql\/object_storage_config.sh<\/strong> file and delete the local backup directory (<strong>sudo rm -rf \/backups\/mysql\/working<\/strong>) so that the script can attempt a full backup with the new bucket name. When you are ready, rerun the command above to try again.<\/p>\n<p>If everything goes well, you will see output like the following:<\/p>\n<h3 id=\"output\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">Uploaded \/backups\/mysql\/working\/full-10-17-2017_19-09-30.xbstream to &quot;your_bucket_name&quot;<br\/>Backup successful!<br\/>Backup created at \/backups\/mysql\/working\/full-10-17-2017_19-09-30.xbstream<\/code><\/pre> <\/div>\n<p>This indicates that a full backup has been created within the <strong>\/backups\/mysql\/working<\/strong> directory. 
It has also been uploaded to remote object storage using the bucket defined in the <strong>object_storage_config.sh<\/strong> file.<\/p>\n<p>If we look within the <strong>\/backups\/mysql\/working<\/strong> directory, we can see files similar to those created by the <strong>backup-mysql.sh<\/strong> script from the last guide:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ ls \/backups\/mysql\/working<\/code><\/pre> <\/div>\n<h3 id=\"output-2\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">backup-progress.log  full-10-17-2017_19-09-30.xbstream  xtrabackup_checkpoints  xtrabackup_info<\/code><\/pre> <\/div>\n<p>The backup-progress.log file contains the output from the xtrabackup command, while xtrabackup_checkpoints and xtrabackup_info contain information about the options used, the type and scope of the backup, and other metadata.<\/p>\n<h3 id=\"perform-a-progressive-backup\"><span style=\"color: #0000ff;\">Perform an Incremental Backup<\/span><\/h3>\n<p>Let&#8217;s make a small change to our equipment table in order to create additional data not found in our initial backup. 
We can insert a new row into the table by typing:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ mysql -u root -p -e &#039;INSERT INTO playground.equipment (type, quant, color) VALUES (&quot;sandbox&quot;, 4, &quot;brown&quot;);&#039;<\/code><\/pre> <\/div>\n<p>Enter your database&#8217;s administrative password to add the new record.<\/p>\n<p>Now, we can take another backup. When we call the script again, an incremental backup should be created, as long as it is still the same day as the previous backup (according to the server&#8217;s clock):<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -u backup remote-backup-mysql.sh<\/code><\/pre> <\/div>\n<h3 id=\"output-3\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">Uploaded \/backups\/mysql\/working\/incremental-10-17-2017_19-19-20.xbstream to &quot;your_bucket_name&quot;<br\/>Backup successful!<br\/>Backup created at \/backups\/mysql\/working\/incremental-10-17-2017_19-19-20.xbstream<\/code><\/pre> <\/div>\n<p>The output above indicates that the backup was created within the same local directory, and was again uploaded to object storage. 
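<\/p>\n<p>The full-versus-incremental decision can be sketched in isolation. The snippet below is a simplified Python sketch of the set_backup_type logic (the script itself performs this comparison in Bash using date; the function name here is illustrative only):<\/p>

```python
# Sketch of the set_backup_type decision: the first backup of a new
# day is "full"; later backups on the same day are "incremental".
from datetime import datetime

def choose_backup_type(last_backup: datetime, now: datetime) -> str:
    # Compare calendar days, not full timestamps, mirroring the
    # script's day-granularity date comparison
    if last_backup.date() == now.date():
        return "incremental"
    return "full"

first = datetime(2017, 10, 17, 19, 9, 30)
second = datetime(2017, 10, 17, 19, 19, 20)
next_day = datetime(2017, 10, 18, 0, 5, 0)

print(choose_backup_type(first, second))    # incremental
print(choose_backup_type(second, next_day)) # full
```

<p>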
If we check the \/backups\/mysql\/working directory, we&#8217;ll find that the new backup is present and that the previous backup has been removed:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ ls \/backups\/mysql\/working<\/code><\/pre> <\/div>\n<h3 id=\"output-4\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">backup-progress.log  incremental-10-17-2017_19-19-20.xbstream  xtrabackup_checkpoints  xtrabackup_info<\/code><\/pre> <\/div>\n<p>Since our files are uploaded remotely, deleting the local copy helps reduce the amount of disk space used.<\/p>\n<h3 id=\"download-the-backups-from-a-given-day\"><span style=\"color: #3366ff;\">Download the Backups from a Given Day<\/span><\/h3>\n<ul>\n<li>Since our backups are stored remotely, we will need to pull down the remote files when we need to restore our files. To do this, we can use the <strong>download-day.sh<\/strong> script.<\/li>\n<li>Begin by creating and then moving into a directory that the backup user can safely write to:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -u backup mkdir \/tmp\/backup_archives<br\/>$ cd \/tmp\/backup_archives<\/code><\/pre> <\/div>\n<p>Next, call the download-day.sh script as the backup user. Pass in the day of the archives you wish to download. 
The date format is fairly flexible, but it is best to be unambiguous:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -u backup download-day.sh &quot;Oct. 17&quot;<\/code><\/pre> <\/div>\n<p>If there are archives that match the date you provided, they will be downloaded to the current directory:<\/p>\n<h3 id=\"output-5\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">Looking for objects from Tuesday, Oct. 17 2017<br\/>Downloading &quot;full-10-17-2017_19-09-30.xbstream&quot; from your_bucket_name<br\/>Downloading &quot;incremental-10-17-2017_19-19-20.xbstream&quot; from your_bucket_name<\/code><\/pre> <\/div>\n<p>Verify that the files have been downloaded to the local filesystem:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-markup code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-markup code-embed-code\">$ ls<\/code><\/pre> <\/div>\n<h3 id=\"output-6\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">full-10-17-2017_19-09-30.xbstream  incremental-10-17-2017_19-19-20.xbstream<\/code><\/pre> <\/div>\n<p>The compressed, encrypted archives are now back on the server.<\/p>\n<h3 id=\"extract-and-prepare-the-backups\"><span style=\"color: #993366;\">Extract and Prepare the 
Backups<\/span><\/h3>\n<p>Once the files are collected, we can process them the same way we processed the local backups.<\/p>\n<p>First, pass the .xbstream files to the extract-mysql.sh script as the backup user:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo -u backup extract-mysql.sh *.xbstream<\/code><\/pre> <\/div>\n<p>This will decrypt and decompress the archives into a directory called restore. Enter that directory and prepare the files with the prepare-mysql.sh script:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ cd restore<br\/>$ sudo -u backup prepare-mysql.sh<\/code><\/pre> <\/div>\n<h3 id=\"output-7\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">Backup looks to be fully prepared.  
Please check the &quot;prepare-progress.log&quot; file<br\/>to verify before continuing.<br\/><br\/>If everything looks correct, you can apply the restored files.<br\/><br\/>First, stop MySQL and move or remove the contents of the MySQL data directory:<br\/><br\/>        sudo systemctl stop mysql<br\/>        sudo mv \/var\/lib\/mysql\/ \/tmp\/<br\/><br\/>Then, recreate the data directory and  copy the backup files:<br\/><br\/>        sudo mkdir \/var\/lib\/mysql<br\/>        sudo xtrabackup --copy-back --target-dir=\/tmp\/backup_archives\/restore\/full-10-17-2017_19-09-30<br\/><br\/>Afterward the files are copied, adjust the permissions and restart the service:<br\/><br\/>        sudo chown -R mysql:mysql \/var\/lib\/mysql<br\/>        sudo find \/var\/lib\/mysql -type d -exec chmod 750 {} \\;<br\/>        sudo systemctl start mysql<\/code><\/pre> <\/div>\n<p>The full backup within the \/tmp\/backup_archives\/restore directory should now be prepared. We can follow the instructions in the output to restore the MySQL data on our system.<\/p>\n<h3 id=\"restore-the-backup-information-to-the-mysql-information-directory\"><span style=\"color: #993366;\">Restore the Backup Data to the MySQL Data Directory<\/span><\/h3>\n<p>Before we restore the backup data, we need to move the current data out of the way.<\/p>\n<p>Start by shutting down MySQL to avoid corrupting the database or crashing the service when we replace its data files.<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo systemctl stop mysql<\/code><\/pre> <\/div>\n<p>Next, we can move the current data directory to the <strong>\/tmp<\/strong> directory. This way, we can easily move it back if the restore has issues. 
Since we moved the files to <strong>\/tmp\/mysql<\/strong> in the last article, we will move the files to <strong>\/tmp\/mysql-remote<\/strong> this time:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo mv \/var\/lib\/mysql\/ \/tmp\/mysql-remote<\/code><\/pre> <\/div>\n<p>Next, recreate an empty <strong>\/var\/lib\/mysql<\/strong> directory:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo mkdir \/var\/lib\/mysql<\/code><\/pre> <\/div>\n<p>Now, we can type the xtrabackup restore command that the prepare-mysql.sh command provided to copy the backup files into the <strong>\/var\/lib\/mysql<\/strong> directory:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo xtrabackup --copy-back --target-dir=\/tmp\/backup_archives\/restore\/full-10-17-2017_19-09-30<\/code><\/pre> <\/div>\n<p>Once the process completes, modify the directory ownership and permissions to ensure that the MySQL process has access:<\/p>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chown -R mysql:mysql \/var\/lib\/mysql<br\/>$ sudo find \/var\/lib\/mysql -type d -exec chmod 750 {} \\;<\/code><\/pre> <\/div>\n<p>When this finishes, start MySQL again and verify that our data has been properly restored:<\/p>\n<div class=\"code-embed-wrapper\"> <div 
class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo systemctl start mysql<br\/>$ mysql -u root -p -e &#039;SELECT * FROM playground.equipment;&#039;<\/code><\/pre> <\/div>\n<h3 id=\"output-8\"><span style=\"color: #008000;\">Output:<\/span><\/h3>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">+----+---------+-------+--------+<br\/>| id | type    | quant | color  |<br\/>+----+---------+-------+--------+<br\/>|  1 | slide   |     2 | blue   |<br\/>|  2 | swing   |    10 | yellow |<br\/>|  3 | sandbox |     4 | brown  |<br\/>+----+---------+-------+--------+<\/code><\/pre> <\/div>\n<ul>\n<li>The knowledge is on the market, that indicates that it&#8217;s been with success remodeled.<\/li>\n<li>After restoring your knowledge, it&#8217;s vital to travel back and delete the restore directory. Future progressive backups cannot be applied to the total backup once it&#8217;s been ready, therefore we should always take away it. what is more, the backup directories shouldn&#8217;t be left unencrypted on disk for security reasons:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ cd ~<br\/>$ sudo rm -rf \/tmp\/backup_archives\/restore<\/code><\/pre> <\/div>\n<ul>\n<li>The next time we want clean copies of the backup directories. 
we can extract them again from the backup archive files.<\/li>\n<\/ul>\n<h3 id=\"creating-a-cron-job-to-run-backups-hourly\"><span style=\"color: #008080;\">Creating a Cron Job to Run Backups Hourly<\/span><\/h3>\n<ul>\n<li>We created a <strong>cron job<\/strong> to automatically back up our data locally in the last guide. We will set up a new <strong>cron job<\/strong> to take remote backups and then disable the local backup job.<\/li>\n<li>We can easily switch between local and remote backups as necessary by enabling or disabling the cron scripts.<\/li>\n<li>To start, create a file called remote-backup-mysql in the <strong>\/etc\/cron.hourly<\/strong> directory:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo nano \/etc\/cron.hourly\/remote-backup-mysql<\/code><\/pre> <\/div>\n<ul>\n<li>Inside, we will call our <strong>remote-backup-mysql.sh<\/strong> script with the backup user through the systemd-cat command, which allows us to log the output to journald:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <span class=\"code-embed-name\">\/etc\/cron.hourly\/remote-backup-mysql<\/span> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">#!\/bin\/bash <br\/>sudo -u backup systemd-cat --identifier=remote-backup-mysql \/usr\/local\/bin\/remote-backup-mysql.sh<\/code><\/pre> <\/div>\n<ul>\n<li>Save and close the file when you are finished.<\/li>\n<li>We can enable our new cron job and disable the old one by manipulating the executable permission bit on each file:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div 
class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo chmod -x \/etc\/cron.hourly\/backup-mysql<br\/>$ sudo chmod +x \/etc\/cron.hourly\/remote-backup-mysql<\/code><\/pre> <\/div>\n<ul>\n<li>Test the new remote backup job by executing the script manually:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo \/etc\/cron.hourly\/remote-backup-mysql<\/code><\/pre> <\/div>\n<ul>\n<li>Once the prompt returns, we are able to check the log entries with journalctl:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo journalctl -t remote-backup-mysql<\/code><\/pre> <\/div>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">[seconary_label Output]<br\/>-- Logs begin at Tue 2017-10-17 14:28:01 UTC, end at Tue 2017-10-17 20:11:03 UTC. 
--<br\/>Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Uploaded \/backups\/mysql\/working\/incremental-10-17-2017_22-16-09.xbstream to &quot;your_bucket_name&quot;<br\/>Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Backup successful!<br\/>Oct 17 20:07:17 myserver remote-backup-mysql[31422]: Backup created at \/backups\/mysql\/working\/incremental-10-17-2017_20-07-13.xbstream<\/code><\/pre> <\/div>\n<ul>\n<li>Check back in a few hours to make sure that additional backups are being taken on schedule.<\/li>\n<\/ul>\n<h3 id=\"backing-up-the-extraction-key\"><span style=\"color: #008080;\">Backing Up the Encryption Key<\/span><\/h3>\n<ul>\n<li>One final consideration you will need to handle is how to back up the encryption key (found at <strong>\/backups\/mysql\/encryption_key<\/strong>).<\/li>\n<li>The encryption key is required to restore any of the files backed up using this method. However, storing the encryption key in the same location as the database files eliminates the protection provided by encryption.<\/li>\n<li>Because of this, it is important to keep a copy of the encryption key in a separate location so that you can still use the backup archives if your database server fails or needs to be rebuilt.<\/li>\n<li>While a complete backup solution for non-database files is outside the scope of this article, you can copy the key to your local computer for safekeeping. To do so, view the contents of the file by typing:<\/li>\n<\/ul>\n<div class=\"code-embed-wrapper\"> <div class=\"code-embed-infos\"> <\/div> <pre class=\"language-bash code-embed-pre line-numbers\"  data-start=\"1\" data-line-offset=\"0\"><code class=\"language-bash code-embed-code\">$ sudo less \/backups\/mysql\/encryption_key<\/code><\/pre> <\/div>\n<ul>\n<li>Open a text file on your local computer and paste the value inside. 
If you ever need to restore backups onto a different server, copy the contents of the file to <strong>\/backups\/mysql\/encryption_key<\/strong> on the new machine, set up the system outlined in this guide, and then restore using the provided scripts.<\/li>\n<\/ul>\n<h3 id=\"conclusion\"><span style=\"color: #333399;\">Conclusion<\/span><\/h3>\n<ul>\n<li>In this guide, we have covered how to take hourly backups of a MySQL database and upload them automatically to a remote object storage space.<\/li>\n<li>The system takes a full backup each morning and then hourly incremental backups afterwards, providing the ability to restore to any hourly checkpoint. Each time the backup script runs, it checks for backups in object storage that are older than thirty days and removes them.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>To Backup MySQL Databases to Object Storage with Percona on Ubuntu 16.04 The databases square measure store a number of the foremost valuable data in your infrastructure. As a result of it&#8217;s necessary to own reliable backups to protect against information loss within the event of Associate in Nursing accident or hardware failure. 
The Percona [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[23,27,4148,85748],"tags":[86277,86278,86279,86280,86281,86282,86283,86284,86285,86286,86287,86288,86289,86290,86291,86292,86293,86294],"class_list":["post-32063","post","type-post","status-publish","format-standard","hentry","category-database","category-mysql","category-python","category-ubuntu","tag-backup-mysql-database-command-line","tag-backup-mysql-database-ubuntu","tag-digitalocean-dump-mysql","tag-export-import-database-ubuntu","tag-get-dump-of-mysql-database-ubuntu","tag-how-to-export-mysql-database-using-command-line-in-ubuntu","tag-how-to-mysql-dump-ubuntu","tag-how-to-restore-mysql-database-in-ubuntu","tag-how-to-take-backup-of-database-in-ubuntu","tag-mysql-auto-backup-ubuntu","tag-mysql-backup-database-script","tag-mysql-backup-database-to-file","tag-mysql-dump","tag-mysql-import-database-command-line","tag-mysql-restore-all-databases","tag-mysql-restore-table-from-dump-file","tag-mysqldump-multiple-databases","tag-ubuntu-mysql-backup-restore"],"_links":{"self":[{"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/posts\/32063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/comments?post=32063"}],"version-history":[{"count":0,"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/posts\/32063\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/media?parent=32063"}],"wp:term":[{"taxonomy":"category","embeddable"
:true,"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/categories?post=32063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.wikitechy.com\/technology\/wp-json\/wp\/v2\/tags?post=32063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}