
PRIV D75 Server Backup Howto

Responsible
r.karadjov

Page no: D75

When is a manual server backup done?

We do a manual restore of the database when Backup Buddy does not work.

What does it restore:

  • Files
  • Database
  • Links in the database

 

Source: Link

Server Information

Gather Information about the Source System

Before we begin migrating, we should take the initial steps to set up our target system to match our source system.

We will want to match as much as we can between the current server and the one we plan on migrating to. If you want the migration to go smoothly, then you shouldn’t take this as an opportunity to upgrade to the newest version or try new things. Making changes can lead to instability and problems down the line.

Most of the basic information that will help you decide which server system to create for the new machine can be retrieved with a simple uname command:
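The uname invocations themselves are not shown above; on a typical Linux system they would be:

```shell
# Kernel release of the running system (e.g. "3.2.0-24-virtual")
uname -r

# Machine hardware architecture ("i686" = 32-bit, "x86_64" = 64-bit)
uname -m
```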

This is the version of the kernel that our current system is running. In order to make things go smoothly, it’s always a good idea to try to match that on the target system.

This is the system architecture. i686 indicates that this is a 32-bit system. If the returned string was x86_64, this would mean that this is a 64-bit system.

You should also try to match the distribution and version of your source server. If you don’t know the version of the distribution that you have installed on the source machine, you can find out by typing:
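The command is not shown here; on most distributions it is something like this (lsb_release may need to be installed, so /etc/os-release is used as a fallback):

```shell
# Report the distribution name and release version
lsb_release -a 2>/dev/null || cat /etc/os-release
```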

You should create your new server with these same parameters if possible. In this case, we would create a 32-bit Ubuntu 12.04 system. If possible, we’d also attempt to match the kernel version on the new system.

Set Up SSH Key Access between Source and Target Servers

We’ll need our servers to be able to communicate so that they can transfer files. The easiest way to do this is with SSH keys. You can learn how to configure SSH keys on a Linux server here.

We want to create a new key on our target server so that we can add that to our source server’s authorized_keys file. This is cleaner than the other way around, because then the new server will not have a stray key in its authorized_keys file when the migration is complete.

First, on your destination machine, check that your root user doesn’t already have an SSH key (you should be logged in as root) by typing:
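A simple directory listing is enough for this check:

```shell
# List any existing SSH key files for the root user
ls -l ~/.ssh
```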

If you see files called id_rsa.pub and id_rsa, then you already have keys and you’ll just need to transfer them.

If you don’t see those files, create a new key pair by typing:
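A typical key-generation command is:

```shell
# Generate a new RSA key pair; press Enter at each prompt for the defaults
ssh-keygen -t rsa
```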

Press “Enter” through all of the prompts to accept the defaults.

Now, transfer the key to the source server by typing:
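The usual way to do this is with ssh-copy-id (the IP address here is a placeholder for your source server's address):

```shell
# Append the new public key to the source server's authorized_keys file
ssh-copy-id root@111.222.333.444
```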

You should now be able to SSH freely to your source server from the target system by typing:
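For example (again, the address is a placeholder):

```shell
# Log in to the source server; no password prompt should appear
ssh root@111.222.333.444
```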

You should not be prompted for a password if you configured this correctly.

Create a List of Requirements

This is actually the first part where you’re going to be doing in-depth analysis of your system and requirements.

During the course of operations, your software requirements can change. Sometimes old servers have some services and software that were needed at one point, but have been replaced.

While unneeded services should be disabled and, if completely unnecessary, uninstalled, this doesn’t always happen. You need to discover what services are being used on your source server, and then decide if those services should exist on your new server.

The way that you discover services and runlevels largely depends on the type of “init” system that your server employs. The init system is responsible for starting and stopping services, either at the user’s command or automatically.

 

Discovering Services and Runlevels on System V Servers

System V is one of the older init systems still in use on many servers today. Ubuntu has attempted to switch to the Upstart init system, and in the future may be transitioning to a Systemd init system.

Currently, both System V style init files and the newer Upstart init files can be found on the same systems, meaning you’ll have more places to look. Other systems use System V as well. You can see if your server uses System V by typing:
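The command referred to here is most likely:

```shell
# List every service the System V init system knows about, with its state
service --status-all
```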

This will list all of the services that the System-V init system knows about. The “+” means that the service is started, the “-” means it is stopped, and the “?” means that System-V doesn’t know the state of the service.

If System-V doesn’t know the state of the service, it’s possible that it is controlled by an alternative init system. On Ubuntu systems, this is usually Upstart.

Other than figuring out which services are running currently, another good piece of information to have is what runlevel a service is active in. Runlevels dictate which services should be made available when the server is in different states. You will probably want to match the source server’s configuration on the new system.

You can discover the runlevels that each service will be active for using a number of tools. One way is through tools like chkconfig or sysv-rc-conf.

On an Ubuntu or Debian system, you can install and use chkconfig to check which System V services are available at different runlevels (most RHEL-based systems already have this software installed):
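A sketch of the installation and usage (on RHEL systems you can skip the install step):

```shell
# Install the tool on Ubuntu/Debian
apt-get install chkconfig

# Show each System V service and the runlevels it is active in
chkconfig --list
```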

Another alternative is sysv-rc-conf, which can be installed and run like this:
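For example:

```shell
# Install and print a table of services against runlevels
apt-get install sysv-rc-conf
sysv-rc-conf --list
```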

If you would like to manually check instead of using a tool, you can do that by checking a number of directories that take the form of /etc/rc*.d/. The asterisk will be replaced with the number of the runlevel.

For instance, to see what services are activated by System V in runlevel 2, you can check the files there: (see screen)
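In place of the screenshot, the listing looks like this:

```shell
# Symbolic links for runlevel 2; "S" scripts start services, "K" scripts kill them
ls /etc/rc2.d
```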

These are links to configuration files located in /etc/init.d/. Each link that begins with an “S” means that it is used to start a service. Scripts that start with a “K” kill services off at that runlevel.

 

Discovering Services and Runlevels on Upstart Servers

Ubuntu and Ubuntu-based servers are pretty much the only servers that implement the Upstart init system by default. Upstart is typically used as the main init system, with System V configured for legacy services.

To see if your server has an Upstart init system, type:
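One simple check is whether the Upstart control tool is on the path:

```shell
# If this prints a path (e.g. /sbin/initctl), Upstart is available
which initctl
```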

If you receive a path to the executable as we did above, then your server has Upstart capabilities and you should investigate which services are controlled by Upstart.

You can see which services are started by Upstart by typing:
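The listing command is:

```shell
# Show the current state of every Upstart-managed job
initctl list
```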

This will tell you the current state of all Upstart managed services. You can tell which services are being run currently and maybe see if there are services that provide the same functionality where one has taken over for a legacy service that is no longer in use.

Again, you should become familiar with what services are supposed to be available at each runlevel.

You can do this with the initctl command by typing:
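A likely invocation is:

```shell
# Dump the start/stop configuration of every Upstart job,
# including the runlevels each one is active in
initctl show-config
```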

This spits out a lot of configuration information for each service. The part to look for is the runlevel specification.

If you would rather gather this information manually, you can look at the files located in the /etc/init directory (notice the omission of the “.d” after the “init” here).

Inside, you will find a number of configuration files. Within these files, there are runlevel specifications given like this:
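A typical stanza reads `start on runlevel [2345]`; you can search for them all at once:

```shell
# Find the runlevel specification in every Upstart job definition
grep -R "start on runlevel" /etc/init
```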

You should have a good idea of different ways of discovering Upstart services and runlevels.

Discovering Services and Runlevels on Systemd Servers

A newer init style that is increasingly being adopted by distributions is the systemd init system.

Systemd is rather divergent from the other types of init systems, but is incredibly powerful. You can find out about running services by typing:
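On a systemd machine, the equivalent query would be:

```shell
# List every service unit that systemd is currently running
systemctl list-units --type=service --state=running
```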

Systemd doesn’t exactly replicate the runlevels concept of other init systems. Instead, it implements the concept of “targets”. While systems with traditional init systems can only be in one runlevel at a time, a server that uses systemd can reach several targets at the same time.

Because of this, figuring out what services are active when is a little bit more difficult.

You can see which targets are currently active by typing:
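For example:

```shell
# Show the targets that are currently active
systemctl list-units --type=target
```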

You can list all available targets by typing:
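For example:

```shell
# Show every target available on the system
systemctl list-unit-files --type=target
```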

From here, we can find out which services are associated with each target. Targets can have services or other targets as dependencies, so we can see what policies each target implements by typing:

For instance, you might type something like this:
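A concrete example, using the common multi-user target:

```shell
# Recursively list the services and targets pulled in by multi-user.target
systemctl list-dependencies multi-user.target
```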

This will list the dependency tree of that target, giving you a list of services and other targets that get started when that target is reached.

Double Checking Services Through Other Methods

While most services will be configured through the init system, there are possibly some areas where a process or service will slip through the cracks and be controlled independently.

We can try to find these other services and processes by looking at the side effects of these services. In most cases, services communicate with each other or outside entities in some way. There are only a specific number of ways that services can communicate, and checking those interfaces is a good way to spot other services.

One tool that we can use to discover network ports and Unix sockets that are being used by processes to communicate is netstat. We can issue a command like this to get an overview of some of our services:
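A typical invocation (run as root so that process names are visible) is:

```shell
# Numeric listening TCP/UDP ports and Unix sockets, with owning processes
netstat -nlp
```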

The port numbers in the first section are associated with the programs on the far right. Similarly, the bottom portion focuses on Unix sockets that are being used by programs.

If you see services here that you do not have information about through the init system, you’ll have to figure out why that is and what kind of information you’ll need to gather about that service.

You can get similar information about the ports services are making available by using the lsof command:
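For example:

```shell
# Open network sockets, with hosts and ports shown numerically
lsof -nPi
```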

You can get some great information from the ss command on what processes are using what ports and sockets:
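For example:

```shell
# Listening TCP and UDP sockets, numeric, with the owning processes
ss -plunt
```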

Gathering Package Versions

After all of that exploration, you should have a good idea about what services are running on your source machine that you should be implementing on your target server.

You should have a list of services that you know you will need to implement. For the transition to go smoothly, it is important to attempt to match versions wherever possible.

You obviously won’t be able to go through every single package installed on the source system and attempt to replicate it on the new system, but you should check the software components that are important for your needs and try to find the version number.

You can try to get version numbers from the software itself, sometimes by passing -v or --version flags to the commands, but usually this is easier to accomplish through your package manager. If you are on an Ubuntu/Debian based system, you can see which version of the packages are installed from the package manager by typing:
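A sketch of the query ("package_name" is a placeholder for the package you care about):

```shell
# Show the installed version of a package on Ubuntu/Debian
dpkg -l | grep package_name
```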

If you are instead on a RHEL-based system, you can use this command to check the installed version instead:
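Again, "package_name" is a placeholder:

```shell
# Show the installed version of a package on RHEL-based systems
rpm -qa | grep package_name
```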

This will give you a good idea of the program version you are looking to install.

Keep a list of the version numbers of the important components that you wish to install. We will attempt to acquire these on the target system.

 

Create the backup

Creating a Migration Script

We will be making these decisions as we go, and adding them to a migration script.

This will give you a number of important advantages. It will allow you to easily re-run the commands again if there is a problem or in order to capture data changes on the source system after the first run. It will self-document the commands you used to transfer the data. It will also allow your source server to continue onto the next item of data transfer without user interaction.

As you write the script, you should be able to run it multiple times, refining it as you go. Most of the files will be transferred through rsync, which will only transfer file changes. If the other data transfer portions take a long time, you can safely comment them out until you are fairly sure your script is in its final state.

This article will mostly be a guide on what to add to your migration script to make your migration successful. It will provide general guidelines more often than specifics.

We can create a simple migration script in the root user’s home directory on the target system. We will use this to automate a large portion of our data migration operations:
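The file name used here is an assumption; any name in root's home directory works:

```shell
# Create the migration script in root's home directory
nano /root/sync.sh
```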

 

Inside the file, begin with a standard script heading (we will use “sh” to make this more portable, but you can use “bash” if you would like to use the extended features it offers and have it available on both systems):
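A minimal heading, using sh as discussed, is just the shebang line:

```shell
#!/bin/sh
```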

We will add to this as we continue. For now though, let’s exit the file quickly so that we can make it executable.

Back on the command line, make the script executable by typing:
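Assuming the script was created as /root/sync.sh:

```shell
# Make the script executable by root only
chmod 700 /root/sync.sh
```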

To run the script at any time, you can now call it using its absolute path:

Or its relative path:
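Both invocations, assuming the script lives at /root/sync.sh:

```shell
# Absolute path
/root/sync.sh

# Or, from within /root, its relative path
./sync.sh
```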

You should test the script regularly as you go along to see if there are issues that come up.

Install Needed Programs and Services

The first step that we need to take prior to automation is to acquire the packages that you need to get these services up and running. We could also add this to the script, but it is easier to just do this portion by hand and document it in our script.

The configuration details will come later. For now, we need these applications installed and basic access configured so that we can get to work. You should have a list of required packages and versions from your source machine.

Add Additional Repositories if Necessary

Before we attempt to get these versions from our package manager, we should inspect our source system to see if any additional repositories have been added.

On Ubuntu/Debian machines, you can see if alternative software sources are present on your source system by investigating a few locations:

This is the main source list. Additional source lists can be contained in the sources.list.d directory:
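The locations to inspect are:

```shell
# Main package source list
cat /etc/apt/sources.list

# Additional source lists
ls /etc/apt/sources.list.d
```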

If you need to, add the same sources to your target machine to have the same package versions available.

On a RHEL-based system, you can use yum to list the repositories configured for the server:
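For example:

```shell
# List the repositories the server is configured to use
yum repolist enabled
```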

You can then add additional repositories to your target system by typing:
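One way is with yum-config-manager from the yum-utils package ("repo_url" is a placeholder for the repository's URL):

```shell
# Add a repository definition to the target system
yum-config-manager --add-repo repo_url
```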

If you make any changes to your source list, add them as comments at the top of your migration script. This way, if you have to start from a fresh install, you will know what procedures need to happen before attempting a new migration.

Save and close the file.

Specifying Version Constraints and Installing

You now have the repositories updated to match your source machine.

On Ubuntu/Debian machines, you can now attempt to install the version of the software that you need on your target machine by typing:
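The package name and version here are placeholders:

```shell
# Request a specific version with "="; apt will refuse if it is unavailable
apt-get install package_name=version_number
```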

Many times, if the version of the package is older, it will have been removed from the official repositories. In this case, you may have to manually hunt down the older version of the .deb files and their dependencies and install them manually with:
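For example:

```shell
# Install a manually downloaded .deb file (and repeat for its dependencies)
dpkg -i package.deb
```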

This is necessary if matching the software version is important for your application. Otherwise, you can just install regularly with your package manager.

For RHEL-based systems, you can install specific versions of software by typing:
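Again, the names are placeholders:

```shell
# On RHEL-based systems, pin the version with a "-" separator
yum install package_name-version_number
```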

If you need to hunt down rpm files that have been removed from the repository in favor of newer versions, you can install them with yum after you’ve found them like this:
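For example:

```shell
# Install a manually downloaded .rpm file, resolving dependencies via yum
yum install package_name.rpm
```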

Install any relevant software that is available from your package manager into the new system. In the event that the software you need is not available through a repository or other easy means and has been installed by source or pulled in as a binary from a project’s website, you will have to replicate this process on the target system.

Again, keep track of what operations you are performing here. We will include them as comments in a script we are creating:

Again, save and close the file.

Data Transfer to New System

We transfer for example from

  • Site (snbchf) to Test
    User test has 3 subdomains
  • Each user has its own database(s) and files.
    Reason: Security against hackers

Start Transferring Data

The actual transfer of data can easily be the most time-intensive part of the migration. If you are migrating a server with a lot of data, it is probably a good idea to start transferring data sooner rather than later. You can refine your commands later on, and rsync only transfers the differences between files, so this shouldn’t be a problem.

We can begin by starting an rsync of any large chunks of user data that need to be transferred. In this context, we are using “user” data to refer to any significant data needed by your server except database data. This includes site data, user home directories, configuration files, etc.

Installing and Using Screen

To do this effectively, we’re going to want to start a screen session on our target system that you can leave running while you continue to work.

You can install screen using your distribution’s package manager. On Ubuntu or Debian, you could type this:
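For example:

```shell
apt-get install screen
```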

You can find out how to operate screen by checking out this link.

Basically, you need to start a new screen session like this on your target server:
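Starting a session is as simple as:

```shell
screen
```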

A screen session will start, and drop you back into a command line. It will probably look like nothing has happened, but you’re now operating a terminal that is contained within the screen program.

All of the work that we will do during our migration will happen within a screen session. This allows us to easily jump between multiple terminal sessions, and allows us to pick up where we left off if we have to leave our local terminal or we get disconnected.

 

You can issue commands here and then disconnect the terminal, allowing it to continue running. You can disconnect at any time by typing:

You can reconnect later by typing:

If you need to create another terminal window within your screen session, type:

To switch between windows, type these two to cycle through windows in either direction:

Destroy a window by typing:
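The screen commands referenced in the preceding paragraphs are, in order (CTRL-a means holding Ctrl and pressing "a", then the following key):

```shell
# CTRL-a d    detach from the current session
# screen -r   reattach from a normal shell
# CTRL-a c    create a new window inside the session
# CTRL-a n    switch to the next window
# CTRL-a p    switch to the previous window
# CTRL-a k    kill (destroy) the current window
```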

Begin File Transfers Early

Inside of your screen session, start any rsync tasks that you anticipate taking a long time to complete. The time scale here depends on the amount of significant (non-database) data you have to transfer.

The general command you’ll want to use is:
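A sketch, where the IP address and paths are placeholders for your own values:

```shell
# Archive mode, compressed, with progress reporting
rsync -avz --progress 111.222.333.444:/path/to/source/data /path/to/destination
```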

You can find out more about how to create appropriate rsync commands by reading this article. You may have to create the directories leading up to the destination in order for the command to execute properly.

When you have your rsync session running, create a new screen window and switch to it by typing:

Check back periodically to see if the syncing is complete and perhaps to start a subsequent sync by typing:
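These are the same screen keybindings as before:

```shell
# CTRL-a c   create a fresh window to keep working in
# CTRL-a n   cycle back to the window with the running rsync
```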

Adjusting the Script to Sync Data and Files

Now, you should add the same rsync command that you just executed into the script you are creating. Add any additional rsync commands that you need in order to get all of your significant user and application data onto your target server.

We will not worry about database files at this point, because there are better methods of transferring those files. We will discuss these in a later section.

You should add any rsync commands that you need to transfer your data and configurations off of the source system.

This does not need to be perfect, because we can always go back and adjust it, so just try your best. If you’re unsure of whether you need something right now, leave it out for the time being and just add a comment instead.

We will be running the script multiple times, allowing you to modify it to pick up additional files if you end up needing them. Being conservative about what you transfer will keep your target system clean of unnecessary files.

We are trying to replicate the functionality and data of the original system, and not necessarily the mess.

Modifying Configuration Files

Although many pieces of software will work exactly the same after transferring the relevant configuration details and data from the original server, some configuration will likely need to be modified.

This presents a slight problem with our syncing script. If we run the script to sync our data, and then modify the values to reflect the correct information for its new home, these changes will be wiped out the next time we run the script again.

Remember, we will likely be running the rsync script multiple times to catch up with changes that have occurred on the source system since we’ve started our migration. The source system can change significantly during the course of migrating and testing the new server.

There are two general paths that we can take to avoid wiping out our changes. First, I’ll discuss the easy way, and follow up with what I consider the more robust solution.

 

The Quick and Dirty Way

The easy way of addressing this is to modify the files as needed on the target system after the first sync operation. Afterwards, you then can modify the rsync commands in your script to exclude the files that you adjusted.

This will cause rsync to not sync these files on subsequent runs, which would overwrite your changes with the original files again.

This can be accomplished by commenting out the previous sync command and adding a new one with some exclude statements like this:
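A sketch, with the address, paths, and file names as placeholders:

```shell
# Previous command, commented out:
# rsync -avz 111.222.333.444:/path/to/source/directory /path/to/target/directory

# Re-sync the directory but leave locally modified files untouched
rsync -avz --exclude='modified_file1' --exclude='modified_file2' \
    111.222.333.444:/path/to/source/directory /path/to/target/directory
```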

You should add exclusion lines for any files under the rsync directory specification that have been modified. It would also be a good idea to add a comment as to what was modified in the file, in case you actually do need to recreate it at any point.

While the above method addresses the problem in some ways, it’s really just avoiding the issue instead of solving it. We can do better.

Linux systems include a variety of text manipulators that are very useful for scripting. In fact, most of these programs are made specifically to allow their use in a scripted environment.

The two most useful utilities for this task are sed and awk. You can click here to learn how to use the sed stream editor, and check out this link to see how to use awk to manipulate text.

The basic idea is that we can script any changes that we would be making manually, so that the script itself will perform any necessary modifications.

So in the previous example, instead of adding an exclusion for the file we modified after the fact, we could keep that rsync command and make that change automatically using a sed command:
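A self-contained illustration on a throwaway file (in the real script, the same sed line would point at the synced configuration file, with your own socket paths):

```shell
# Create a sample config file with the old socket path
printf 'socket = /tmp/old.sock\n' > /tmp/demo.conf

# Replace the old socket path with the new one, in place
sed -i 's|/tmp/old.sock|/var/run/new.sock|g' /tmp/demo.conf

cat /tmp/demo.conf
```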

This will change the socket location in every instance of the file, each time the file is transferred. Make sure that the text manipulation lines come after the lines that sync the files that they operate on.

In a similar way, we can easily script changes made to tabular data files using awk. For instance, the /etc/shadow file is divided into columns delimited by the colon (:) character. We could use awk to remove the hashed root password from the second column like this:
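Illustrated here on a sample copy (in the migration script the same awk command would operate on the synced /etc/shadow file):

```shell
# Sample shadow-style data
printf 'root:HASH:15000:0:99999:7:::\nalice:HASH2:15000:0:99999:7:::\n' > /tmp/shadow.demo

# Use ":" as both input and output delimiter; blank column 2 for root
awk 'BEGIN { OFS=FS=":" } $1=="root" { $2="" } 1' /tmp/shadow.demo > /tmp/shadow.tmp
mv /tmp/shadow.tmp /tmp/shadow.demo

cat /tmp/shadow.demo
```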

This command is telling awk that both the original and the output delimiter should be “:” instead of the default space. We then specify that if column 1 is equal to “root”, then column 2 should be set to an empty string.

Up until fairly new versions of awk, there was no option to edit in place, so here we are writing this file to a temporary file, overwriting the original file, and then removing the temporary file.

We should do our best to script all of the changes needed in our files. This way, it will be easy to reuse some of the lines from our migration script for other migrations, with some easy modification.

An easy way of doing this is to go through your script and add comments to your script for each file that needs to be modified. After you know your requirements, go back and add the commands that will perform the necessary operations.

Add these changes to your script and let’s move on.

Transfer DB Files

Dump and Transfer your Database files

If your system is using a database management system, you will want to dump the database using the methods available for your system. This will vary depending on the DBMS you use (MySQL, MariaDB, PostgreSQL, etc.).

For a regular MySQL system, you can export the database using something like this:
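A sketch matching the options explained below (the dump file path is an assumption):

```shell
# Dump all databases to a single file (run on the source system)
mysqldump -Q -q -e -R --add-drop-table -A -u root -p > /root/mysqldump.sql
```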

MySQL dump options are highly dependent on the context, so you’ll have to explore which options are right for your system before deciding. This is beyond the scope of this article.

Let’s go over what these options will do for the database dump.

  • -Q: This option is enabled by default, but is added here for extra safety. It puts identifiers like database names inside quotes to avoid misinterpretation.
  • -q: This stands for quick and can help speed up large table dumps. In actuality, it is telling MySQL to operate on a row-by-row basis instead of trying to handle the entire table at once.
  • -e: This creates smaller dump files by grouping insert statements together instead of handling them individually when the dump file is loaded.
  • -R: This allows MySQL to also dump stored routines along with the rest of the data.
  • –add-drop-table: This option specifies that MySQL should issue a DROP TABLE command prior to each CREATE TABLE to avoid running into an error if the table already exists.
  • -A: This option specifies that MySQL should dump all of the databases.
  • -u: This details the MySQL user to use for the connection. This should be root.
  • -p: This is the password needed for the MySQL root account.

This will create a MySQL dump of the source system’s MySQL data on the original system. We can wrap this in an SSH command to have it execute remotely:
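For example (the address is a placeholder; -t allocates a terminal so the password prompt works):

```shell
# Run the dump remotely on the source system over SSH
ssh -t root@111.222.333.444 \
    "mysqldump -Q -q -e -R --add-drop-table -A -u root -p > /root/mysqldump.sql"
```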

We can then use a normal rsync command to retrieve the file when it is finished:
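For example:

```shell
# Pull the finished dump file over to the target system
rsync -avz 111.222.333.444:/root/mysqldump.sql /root/
```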

After that, we can import the dump into the target system’s MySQL instance:
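For example:

```shell
# Load the dump into the target system's MySQL instance
mysql -u root -p < /root/mysqldump.sql
```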

Another option is to configure a replication setup between the original database and the target system’s database. This can allow you to simply swap the master and the slave when you are finished, in order to finalize the database migration.

This is also beyond this article’s scope, but you can find details about how to configure master-slave replication here.

If you go this route, make sure to add comments to your script specifying your configuration. If there is a big issue, you want to be able to have good information on what you did so that you can avoid it on a second attempt.

Migrate Users and Groups

Although your primary concern may be for your services and programs, we need to pay attention to users and groups as well.

Most services that need specific users to operate will create these users and groups at installation. However, this still leaves users and groups that have been created manually or through other methods.

Luckily, all of the information for users and groups is contained within a few files. The main files we need to look at are:

  • /etc/passwd: This file defines our users and basic attributes. Despite its name, this file no longer contains any password information. Instead, it focuses on username, user and primary group numbers, home directories, and default shells.
  • /etc/shadow: This file contains the actual information about passwords for each user. It should contain a line for each of the users defined in the passwd file, along with a hash of their password and some information about password policies.
  • /etc/group: This file defines each group available on your system. Basically, this just contains the group name and the associated group number, along with any usernames that use this as a supplementary group.
  • /etc/gshadow: This file contains a line for each group on the system. It basically lists the group, a password that can be used by non-group members to access the group, a list of administrators and non-administrators.

While it may seem like a good idea to just copy these files directly from the source system onto the new system, this can cause complications and is not recommended.

One of the main issues that can come up is conflicting group and user id numbers. If software that creates its own users and groups is installed in a different order between the systems, the user and group numbers can be different, causing conflicts.

It is instead better to leave the majority of these files alone and only adjust the values that we need. We can do this in a number of ways.

Creating Migration Files

Regardless of the method we’d like to use to add users to our new system, we should generate a list of the users, groups, etc. that should be transferred and added.

A method that has been floating around the internet for a while is outlined below:

We will create a file associated with each of the above files that we need to modify. They will contain all of the appropriate transfer information.

First, figure out what the ID limit between regular and system users is on your machine. This is typically either 500 or 1000 depending on your system. If you have a regular user, an easy way to find out is to inspect the /etc/passwd file and see where the regular user accounts start:
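For example:

```shell
# Regular accounts start after the system accounts; note the first
# UID (third column) that belongs to a normal user
less /etc/passwd
```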

Afterwards, we can use this number (the first regular user ID number, in the 3rd column) to set the limit on our command. We won’t be exporting users or groups below this limit. We will also exclude the “nobody” account that is given the user ID of “65534”.

We can create a sync file for our /etc/passwd file by typing this. Substitute the limit# with the lowest regular user number you discovered in the /etc/passwd file:
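A sketch, with limit# left as the placeholder to substitute:

```shell
# Export regular accounts (UID >= limit#), excluding "nobody" at 65534
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/passwd.sync
```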

Afterwards, we can do a similar thing to make a group sync file:
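The same idea applied to groups:

```shell
# Export regular groups (GID >= limit#), excluding 65534
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > /root/group.sync
```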

We can use the usernames within the range we’re interested in from our /etc/passwd file to get the values we want from our shadow file:
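One way to do this is to turn the usernames into anchored grep patterns (limit# is again the placeholder):

```shell
# Build "^username:" patterns from passwd, then pull matching shadow lines
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534) {print "^"$1":"}' /etc/passwd \
    | grep -f - /etc/shadow > /root/shadow.sync
```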

For the /etc/gshadow file, we’ll do a similar operation:
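For example:

```shell
# Same approach, keyed on the exported group names
awk -v LIMIT=limit# -F: '($3>=LIMIT) && ($3!=65534) {print "^"$1":"}' /etc/group \
    | grep -f - /etc/gshadow > /root/gshadow.sync
```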

Once we know the commands we want to run, we can add them to our script after a regular SSH command and then rsync them off, like this:
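A sketch for one of the four files (the address and limit# are placeholders; repeat for the others):

```shell
# Generate the sync file remotely on the source system, then pull it over
ssh root@111.222.333.444 \
    "awk -v LIMIT=limit# -F: '(\$3>=LIMIT) && (\$3!=65534)' /etc/passwd > /root/passwd.sync"
rsync -avz 111.222.333.444:/root/passwd.sync /root/
```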

Optional: Adding Users

Manually Add Users

If we want to just add a comment to our script file and do this manually, the vipw and vigr commands are recommended, because they lock the files while editing and guard against corruption. You can edit the files manually by typing:
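The four invocations are:

```shell
vipw        # edit /etc/passwd safely
vipw -s     # edit /etc/shadow
vigr        # edit /etc/group
vigr -s     # edit /etc/gshadow
```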

Passing the -s flag to either command edits the associated shadow file; vipw covers the passwd file and vigr covers the group file.

You may be tempted to just add the lines from the files directly onto the end of the associated file on the new system like this:
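For example:

```shell
# Append the exported lines wholesale (beware of UID/GID conflicts)
cat /root/passwd.sync >> /etc/passwd
```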

If you choose to go this route, you must be aware that there can be ID conflicts if the ID is already taken by another user on the new system.

You can also add each username using the available tools on the system after getting a list from the source computer. The useradd command can allow you to quickly create user accounts to match the source computer:
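A sketch, where the shell, home directory, and username are placeholders taken from your passwd.sync file:

```shell
# Recreate an account, letting the system pick the IDs
useradd -s /bin/bash -m -d /home/username username
```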

You can use the *.sync files for reference and add them in this way.

Automatically Add Users

If we instead want to script the user and group additions within our file, we can easily do that too. We’ll want to comment these out after the first successful run though, because the script will attempt to create users/groups multiple times otherwise.

There is a command called newusers that can bulk add users from a file. This is perfect for us, but we want to modify our files first to remove the user and group IDs. The command will then assign the next available user and group IDs on the new system.

We can strip the group and user IDs from the passwd file like this:
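Illustrated on a sample line (run the same awk over your real passwd.sync file):

```shell
# Sample passwd-style line
printf 'demo:x:1001:1001:Demo User:/home/demo:/bin/bash\n' > /tmp/passwd.sync

# Drop columns 3 and 4 (UID and GID) so newusers can assign fresh ones
awk 'BEGIN { OFS=FS=":" } {$3=""; $4=""} 1' /tmp/passwd.sync > /tmp/passwd.sync.mod

cat /tmp/passwd.sync.mod
```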

 We can apply this new modified file like this:
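For example:

```shell
# Bulk-create the accounts; each user's group is created automatically
newusers /root/passwd.sync.mod
```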

This will add all of the users from the file to the local /etc/passwd file. It will also create the associated user group automatically. You will have to manually add any additional groups that aren’t associated with a user to the /etc/group file. Use your migration files to edit the appropriate files.

For the /etc/shadow file, you can copy the second column from your shadow.sync file into the second column of the associated account in the new system. This will transfer the passwords for your accounts to the new system.

You can attempt to script these changes, but this may be one case where it is easier to do it by hand. Remember to comment out any user or group lines after the users and groups are configured.

Transfer Mail and Jobs to New System

Now that your users have been transferred from the old system and their home directories have been populated by the rsync commands that have been running, you can migrate each user's mail as well. We want to replicate the cron jobs too.

We can begin by doing another rsync command for the spool directory. Within the spool directory on our source system, we can usually see some important files:
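Listing it shows what is there; the exact contents vary by distribution, but mail and cron are the entries we care about here:

```shell
# Inspect the spool directory on the source system
ls /var/spool
```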

We want to transfer the mail directory to our target server, so we can add an rsync line that looks like this to our migration script:
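A sketch, assuming SSH key access is already set up and using target_server as a placeholder for the new machine's address:

```shell
# Sync user mail spools to the new server (run from the source machine)
rsync -avz --progress /var/spool/mail/ target_server:/var/spool/mail/
```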

Another directory within /var/spool that we want to pay attention to is the cron directory. This directory keeps cron and at jobs, which are used for scheduling. The crontabs directory within it contains each individual user's crontab, which is used to schedule that user's jobs.

We want to preserve the automated tasks that our users have assigned. We can do this with yet another rsync command:
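A sketch along the same lines, again with target_server as a placeholder for the new machine:

```shell
# Sync per-user crontabs (and at jobs) to the new server
rsync -avz --progress /var/spool/cron/ target_server:/var/spool/cron/
```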

This will get each individual user's crontab onto our new system. However, there are other crontabs that we need to move. Within the /etc directory, there is a crontab file and a number of other directories that contain cron information.

The crontab file contains system-wide cron details. The other items are directories that contain other cron information. Look into them and decide if they contain any information you need.

Once again, use rsync to transfer the relevant cron information to the new system.
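A sketch covering the system-wide pieces — review each file first, as paths and contents may need adjusting for the new machine, and sync only the /etc cron directories you decided you need:

```shell
# Sync the system-wide crontab and any relevant /etc cron directories
rsync -avz --progress /etc/crontab target_server:/etc/crontab
rsync -avz --progress /etc/cron.d/ target_server:/etc/cron.d/
```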

Once you have your cron information on your new system, you should verify that it works. This is a manual step, so you’ll have to do this at the end.

The only way of doing this correctly is to log in as each individual user and run the commands in each user's crontab manually. This will make sure that there are no permission issues or missing file paths that would cause these commands to fail silently when run automatically.

Links

This video shows how we modify the links/URLs for the new installation.

Origin page


Testing

Restart Services

At the end of your migration script, you should make sure that all of the appropriate services are restarted, reloaded, flushed, etc. You need to do this using whatever mechanisms are appropriate for the operating system that you are using.

For instance, if we’re migrating a LAMP stack on Ubuntu, we can restart the important processes by typing:
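On a 12.04-era Ubuntu system these are service commands; a sketch, assuming the stack's default package names:

```shell
# Restart the database and web servers after the migration completes
sudo service mysql restart
sudo service apache2 restart
```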

You can add these to the end of your migration script as-is, and they should operate as expected.

Test Sites and Services

After you have finished your migration script and run it with all of the syncing and modifications, and have performed all of the necessary manual steps, you should test out your new system.

There are quite a few areas that you’ll want to check. Pay attention to any associated log files as you’re testing to see if any issues come up.

First, you’ll want to test the directory sizes after you’ve transferred. For instance, if you have a /data partition that you’ve rsynced, you will want to go to that directory on both the source and target computers and run the du command:
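For example, from within the synced directory on each machine:

```shell
# Summarize the total size of the current directory tree
cd /data && du -sh
```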

Verify that the sizes are close to the same. There might be slight differences between the original and the new system, but they should be close. If there is a large disparity, you should investigate as to why.

Next, you can check the processes that are running on each machine. You can do this by looking for important information in the ps output:
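For example:

```shell
# List all running processes with their full command lines;
# compare the service list on the source and target machines
ps auxw
```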

You also can replicate some of the checks that you did initially on the source machine to see if you have emulated the environment on the new machine:
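These mirror the uname checks from the information-gathering step:

```shell
uname -r    # kernel release; should match the source system
uname -m    # architecture, e.g. i686 (32-bit) or x86_64 (64-bit)
```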

Another option is to check the distribution release:
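A sketch — lsb_release where available, falling back to the os-release file:

```shell
# Confirm the distribution and release match the source server
lsb_release -a 2>/dev/null || cat /etc/os-release
```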

You should go through the package versions of your important services, as we did in the first article, to verify that you matched versions for important packages. The way to do this will be system dependent.
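On a Debian-based system like Ubuntu, for instance, one way is to grep the installed package list; the package names here are examples, so substitute your own stack's:

```shell
# Show installed versions of the packages that matter to you
dpkg -l | grep -E 'apache2|mysql-server|php5'
```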

If you transferred a web server or a LAMP stack, you should definitely test your sites on the new server.

You can do this easily by modifying your hosts file (on your local computer) to point to your new server instead of the old one. You can then test to see if your server accepts requests correctly and that all of the components are operating together in the correct way.

The way that you modify your local hosts file differs depending on the operating system you are using. If you are using an operating system with a *nix-based design, like OS X or Linux, you can modify the hosts file on your local system like this:
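For example, with any editor you like:

```shell
# Open the local hosts file for editing (requires root privileges)
sudo nano /etc/hosts
```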

Inside, you need to add an entry to point your domain name to the IP address of your new server, so that your computer intercepts the request and routes it to the new location for testing.

The lines you can add may look something like this:
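A sketch, using 203.0.113.10 as a stand-in for your new server's IP address and domain.com as a placeholder for your domain:

```
203.0.113.10    domain.com
203.0.113.10    www.domain.com
```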

Add any subdomains that are used throughout your site configuration as well (images.domain.com, files.domain.com, etc.). Once you have added the host lines, save and close the file.

If you are on OS X, you will need to flush your DNS cache for your computer to see the new content:
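A sketch for OS X of that era:

```shell
# Flush the OS X DNS cache so the new hosts entries take effect
sudo dscacheutil -flushcache
```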

On Linux, this should work automatically.

On Windows, you'll have to edit the C:\Windows\System32\Drivers\etc\hosts file as an administrator. Add the lines in the same fashion that we did above for the *nix versions.

After your hosts file is edited on your local workstation, you should be able to access the test server by going to your domain name. Test everything you possibly can and make sure that all of the components can communicate with each other and respond in the correct way.

After you have completed testing, remember to open the hosts file again and remove the lines you added.

Migrate Firewall Rules

Remember that you need to migrate your firewall rules to your new server. To learn how to do this, follow this tutorial: How To Migrate Iptables Firewall Rules to a New Server.

Keep in mind that, prior to loading the rules into your new server, you will want to review them for anything that needs to be updated, such as changed IP addresses or ranges.


Change DNS Settings

When you’ve thoroughly tested your new server, look through your migration script and make sure that no portion of it is going to be reversing modifications you’ve made.

Afterwards, run the script one more time to bring over the most recent data from your source server.

Once you have all of the newest data on your target server, you can modify the DNS servers for your domain to point to your new server. Make sure that every reference to the old server’s IP is replaced with the new server’s information.

The DNS servers will take some time to update. After all of the DNS servers have gotten your new changes, you may have to run the migration script a final time to make sure that any stray requests that were still going to your original server are transferred.

Look closely at your MySQL commands to ensure that you are not throwing away or overwriting data that has been written to either the old or new servers.

Conclusion

If all went well, your new server should now be up and running, accepting requests and handling all of the data that was on your previous server. You should continue to closely monitor the situation and keep an eye out for any anomalies that may come up.

Migrations, when done properly, are not trivial, and many issues can come up. The best chance of successfully migrating a live server is to understand your system as best as you can before you begin. Every system is different and each time, you will have to work around new issues. Do not attempt to migrate if you do not have time to troubleshoot issues that may arise.


See more for D7x Backup Strategy