AppAssure Backup

Backups are one of those necessary evils in every organization. In an enterprise environment, backup solutions have historically been expensive and complicated, with a large footprint, heavy resource requirements, and a high initial investment. Moreover, backups are usually not very visible throughout an organization unless there is a need for restores, so justifying the purchase to management is a bit tougher than for a solution that would immediately put a “product” into users’ hands.

Traditionally, backup software has been based on selecting files and folders into backup sets, which are then scheduled on a recurring basis against target destinations. That is the very basic concept, and the traditional method works well. However, businesses and their dynamics have changed in recent years; with the emergence of virtual infrastructures and platforms, and with so much of business now needing to be available online, the concept of a maintenance window (a.k.a. the backup window) is no longer as practical as it once was.

Backup software companies have been rethinking backup from the ground up with the new business models in mind. Veeam Backup and AppAssure are two examples of products doing it differently. Veeam works at the virtual disk level (i.e., VMDK or VHD), whereas AppAssure works at the volume level; both perform block-level backups. The main difference between these two products and the traditional ones is that they eliminate the need for a backup window altogether, employing technologies that make backups more efficient, more frequent, and, most importantly, non-disruptive.

My backup background:
Being at a K-12 organization, and being in fire-fighting mode most of the time, allocating a budget for backups was rarely a priority; there were always more important and basic things to spend it on (e.g., refreshing old, dying servers, or upgrading an email system). As an admin, though, regardless of the circumstances and resources, I knew that having some sort of backup for the data was implicitly mandated to me, so I was using open-source products and Windows backup tools (ntbackup on 2003 and the Windows Server Backup utility on 2008 R2). This worked OK, but it was obviously neither streamlined nor reliably manageable.

What is AppAssure?

[Screenshot: AppAssure main screen]

I have looked at and worked with earlier backup solutions, from the traditional (e.g., BackupExec, NetBackup) to the somewhat less traditional, like Veeam Backup, which backs up VMDKs.

So far I’ve mostly talked about other backup solutions, so let’s move on to AppAssure. In a nutshell, this solution is a bit different in that it processes backups on an interval-based schedule, eliminating the need for an actual backup window; backups also happen on a per-volume basis. This has some advantages and disadvantages. One of the main disadvantages is that bringing AppAssure into an existing environment may trigger the need to reorganize data across some of your volumes so you can be more selective about what you back up. On the upside, AppAssure has a very powerful engine that compresses and deduplicates, at the block level, all data backed up to it. There are, of course, caveats to that, which I’ll discuss later in the article.

The AppAssure backup software is also storage agnostic, meaning you can use any existing storage you have (up to 255 repositories per core engine): Direct Attached Storage, iSCSI volumes, or SMB shares mapped via UNC. This is a big advantage for those with limited or decentralized storage, as it allows them to use bits and pieces of storage; not ideal, but certainly doable.

In the next few sections, I would like to walk you through my experience installing it and give you an unbiased opinion on what I think is good and bad about the AppAssure solution. Hopefully that will let you make a more informed decision about what is appropriate for your organization.

How it works on a basic level, and how it’s licensed:
The AppAssure solution is licensed per processor socket. Quite honestly, I don’t have any physical servers being backed up, so I can’t say much about licensing for those; for virtual hosts, however, the licenses are per processor socket. So, for instance, if you have 5 virtual hosts, each with 2 quad-core processors, you would end up paying for 10 AppAssure licenses. How many VMs can you back up with that? Well, that depends on how many VMs you can fit on 5 hosts.
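
As a quick sanity check of that math, here is the calculation in Python. The host and socket counts are just the ones from my example above, not anything from AppAssure’s price list:

```python
# Back-of-the-envelope license count for per-socket licensing.
# Illustrative numbers only, taken from the example in the text.
hosts = 5              # virtual hosts in the cluster
sockets_per_host = 2   # dual-proc boxes; cores per socket don't matter

licenses_needed = hosts * sockets_per_host
print(f"{licenses_needed} licenses for {hosts} dual-socket hosts")
# -> 10 licenses for 5 dual-socket hosts
```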

An agent gets installed on each server/VM/workstation you want backed up (agents are available for Windows and for Linux: Red Hat, SUSE, Ubuntu, and CentOS). You also install a core server, which runs the AppAssure engine. Though the core server requires a license, the license is not limited to one server, meaning you can install as many cores as you like with the license you have. This gives the solution a tremendous advantage: you can install as many core servers as you like on different physical sites and replicate (and deduplicate) data to those sites without any additional cost. This is a huge feature which, with other solutions, would cost a lot of money. The only requirements for your disaster recovery (or replication) sites are some hardware to install the core server on and enough storage to hold whatever you want to replicate from your main site. From that point on, you can have as many cores as you like, whether on the same site or in different physical locations, at no added cost.

For every agent installed and added to the core server, the settings for transfer rates, scheduled snapshots, and data retention can either be inherited from the server settings or overridden on a per-agent basis.

For each agent, an initial base image is taken, with incremental snapshots thereafter. The engine has an algorithm to determine when a new base image is needed; none of this involves the user, it all happens in the background. Every night, SQL attachability checks run for servers that have databases, to ensure that a restore would be possible. It is also possible to enable log truncation for SQL databases that are set to the Simple Recovery Model.
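
To make the base-plus-incremental idea concrete, here is a minimal Python sketch of how a block-level incremental snapshot can work in principle. This is my own toy illustration, not AppAssure’s engine; the block size and hashing scheme are assumptions:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative block size, not AppAssure's actual value

def take_snapshot(path, previous_hashes=None):
    """Return (hashes, changed): a full base image when previous_hashes is
    None, otherwise only the blocks whose hashes differ from last time."""
    hashes, changed = [], {}
    with open(path, "rb") as volume:
        index = 0
        while block := volume.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            hashes.append(digest)
            if previous_hashes is None or index >= len(previous_hashes) \
                    or previous_hashes[index] != digest:
                changed[index] = block  # only changed blocks get transferred
            index += 1
    return hashes, changed

# base_hashes, base_blocks = take_snapshot("volume.img")          # base image
# new_hashes, delta = take_snapshot("volume.img", base_hashes)    # incremental
```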

Once snapshots are taken, it is as simple as choosing a recovery point at any point in time and mounting it on the server. This happens quite fast, as no restore is actually performed at that point, but rather a dynamic reconstruction of the recovery point. AppAssure prides itself on its “Live Recovery” technology, which allows near-instantaneous restoration of volumes by initially making the volume and file headers available to clients, and dynamically prioritizing the availability of the data based on user requests.

Some warnings about requirements:
Since I said this is an unbiased review, I need to mention some items to be careful about. The core server does work on a virtual machine, but AppAssure will flat-out tell you that you need a beefy server for it: about 10 GB of RAM and at least a dual core to handle the backups. Good network throughput also helps.

One word of caution: even if you think you have the resources on your virtual cluster to run AppAssure, think twice before doing so. The I/O required by the core server is quite high, and running it next to production servers risks creating I/O issues; in my case, I had a few All Paths Down (APD) events on my iSCSI volumes. I spent hours troubleshooting my VMware hosts, which were randomly not responding and had to be cold rebooted, until I figured out it was the AppAssure server. So do yourself a favor and install your core server elsewhere. You can still install the core on a VM if you like, but put it on an ESXi host that is outside your production cluster.

Installation:
My experience with the installation was fairly pleasant. The process was extremely simple for both the agent and the core. The agent, though relatively small on its own (about 106 MB), requires an additional 220 MB of prerequisites (.NET Framework 4.0, C++ redistributables, etc.). Don’t be too alarmed, though: the relatively large storage footprint of the agent does not mean it is equally heavy on resources. As a matter of fact, it consistently takes only about 60 MB of RAM, no more, no less. On no server did I feel that the agent was causing a resource shortage.

The core server was equally easy to install. It requires the same prerequisites, but it too is as simple as clicking Next through the installer, and you will have a functional core. Upgrades to both the core and the agent are just as simple, and no specific settings are needed. For the SQL attachability checks, a SQL engine instance is required on the core server. AppAssure support will tell you that only a full version of the SQL engine is supported; I am running it just as successfully with SQL Express 2008 R2. Keep in mind that the SQL version you install on your core has to be equal to or higher than that of your highest-version production SQL server, or attachability checks may not function.

Once the core is installed, it is possible to deploy agents (singly or in bulk) directly from the core. However, I noticed that for servers that don’t already have the prerequisites (and even assuming the firewall on your servers is correctly configured), these installations failed for me 100% of the time, so I had to install the agents manually or via other means. Subsequent installs of the agents (e.g., for an upgrade) work fine, as the prerequisites are already in place.

The second part of the installation is the repositories, which are the main part of the backup engine. Adding them is quite easy: for local drives, you simply point the repository to the drive letter and specify the size of the repository. The space specified is immediately consumed (AppAssure calls the resulting file a blob). For UNC shares, adding a repository is just as easy: simply give it the UNC share name and credentials with write access to the share, and a blob gets created the same way.

Agents:
As I mentioned earlier, installing the agents is fairly simple and can easily be done remotely if desired. On OSes that support delayed service start (Windows Server 2008 and newer), the agent is configured for a delayed start. In my experience, the agents don’t always start up with the server, so I had to change the service’s “Recovery” options to restart the service on the 1st, 2nd, and subsequent failures. That works most of the time, though in some cases I still need to go and start the agent manually. AppAssure stated they would be working on this after I opened a ticket with them; in the meantime, I’m considering running a script on an interval that checks for the agent service and starts it if it’s not running, along the lines of the sketch below.
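
For what it’s worth, such a watchdog can be a few lines of Python. The service name below is a guess (verify the actual name in services.msc); sc is the standard Windows service control tool, and the script needs rights to start services:

```python
import subprocess
import time

SERVICE = "AppAssureAgent"   # hypothetical service name; verify in services.msc
CHECK_EVERY = 300            # seconds between checks

def is_running(name: str) -> bool:
    # "sc query" prints the service state; RUNNING appears in its output
    result = subprocess.run(["sc", "query", name],
                            capture_output=True, text=True)
    return "RUNNING" in result.stdout

while True:
    if not is_running(SERVICE):
        subprocess.run(["sc", "start", SERVICE])  # requires admin rights
    time.sleep(CHECK_EVERY)
```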

The Core Server:
The core server is very easy to figure out. Once the agents are installed, it is just a matter of going to the core, choosing to protect a machine, and choosing its repository. The agent’s retention policies are, by default, inherited from the main server configuration, though individual settings are possible directly in the agent config.

The Core Server on a Replication Site:
This is as easy as setting up the core on the main site. The process is as follows:

  • Build a core on your remote site
  • Connect it to a large enough repository
  • Choose to replicate one or more agents to the remote site (you will have the option to choose which core to replicate to, and which repository on that particular core to use for storing the replicated data)

As you can see, the process above is quite simple, and for the most part it works very well. The core server also has a very detailed event notification system, which can easily keep you abreast of any abnormalities in any of your agents or snapshots.

[Screenshot: compression ratios and repositories]

Compression and Deduplication:
Most people are going to look at AppAssure for its bold claims about deduplication and compression, and there I have to give them kudos. Because deduplication happens at the block level, it is fairly important to be selective about the type of data you back up; set it up correctly and you can get up to 80% compression. Yes, that is pretty amazing. As an example, my ERP SQL server needed about 900 GB of storage to back up, including its transaction logs, retained for 5 days. With AppAssure, I backed up that same SQL server in about 350 GB of space, and the incremental snapshots are so small that you can hardly tell the size difference. On top of that, this comes with a much longer retention policy and a much more frequent backup schedule.
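
Running my own figures from above through a quick check (900 GB and 350 GB are from my ERP server; the “up to 80%” is the best case, not this workload):

```python
before_gb, after_gb = 900, 350          # my ERP SQL server, from above
savings = 1 - after_gb / before_gb
print(f"Space saved: {savings:.0%}")    # -> Space saved: 61%
```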

Restores of really large files, when not restoring a whole volume, can be quite slow: depending on the distance between the last base image and the most recent incremental, these restores can run as slow as 5-8 MB/s, even though a 1 Gb iSCSI connection should optimally deliver a good 60 or 70 MB/s. According to AppAssure, that is to be expected, for the reason stated above. When rolling back a full volume, however, the Live Recovery technology kicks in and the data becomes available much more quickly. Unfortunately, for most restores you will likely be using the former method rather than the latter.

Repositories, Replication and Encryption Keys:
Some of the misconceptions I had about AppAssure were due to its extremely poor documentation. It’s unfortunate that a lot of my knowledge of the product came from trial and error, a feat I could happily have done without had the documentation been more adequate. Hopefully you will pick up some of those missing details from this article.

Compression and deduplication are great; however, they do have limitations, ones your AppAssure sales engineer may not casually mention unless you ask specifically.

You see, repositories are a bunch of disk blocks that snapshots get written to, and their size is predetermined when you set up the repository. So, if you set up a repo that is 1 TB large and you run out of space, your options are either to relocate some of your snapshots to a different repository or to enlarge the current one. The first option is probably better than the second, though it can be very time-consuming: we’re talking about a process that could take upwards of 12 hours to complete for a single agent, while moving potentially hundreds of GBs across the wire to the new repository. The second option, extending the size of the repository, is certainly viable and fairly quick, provided you have the space to give it. Each repository can have extents added to it, so you can extend a 1 TB repo to 10 TB if you wish. However, and this is important, you may think that because you are backing up a bunch of servers to one repository, your data will be deduplicated across that repository and all its extents. Do not be fooled: the deduping only happens within each individual extent, even when the extents all belong to the same repository. Similarly, and probably obviously, the deduping will not happen across multiple repositories either. So I would recommend, if you have a number of servers running Windows 2003 and AD, for example, that you put them on the same repository, as they would benefit from a high level of deduplication.
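
To illustrate why the extent boundary matters, here is a toy Python model of per-extent block deduplication. It is only meant to show the scoping behavior described above; AppAssure’s actual data structures are certainly different:

```python
import hashlib

class Extent:
    """Toy repository extent with its own dedup index (scope: this extent only)."""
    def __init__(self):
        self.index = {}  # block hash -> stored block

    def write(self, block: bytes) -> bool:
        key = hashlib.sha256(block).hexdigest()
        if key in self.index:
            return False          # duplicate: deduped away, nothing stored
        self.index[key] = block   # first copy: physically stored
        return True

extent_a, extent_b = Extent(), Extent()
common_block = b"identical Windows 2003 system file block" * 1024

print(extent_a.write(common_block))  # True: stored in extent A
print(extent_a.write(common_block))  # False: deduped within extent A
print(extent_b.write(common_block))  # True: stored AGAIN in extent B
```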

When you replicate data to a remote physical site on a different core, the deduping process happens at that destination. Even if you are replicating multiple agents from the source core across different repositories, the remote replication storage is a single store, and the rollup process will therefore deduplicate this data on the replication site.

One additional feature I have not yet mentioned, which may be quite important depending on your organizational needs: encryption. You can opt to create different encryption keys for one or more backup agents. This ends up separating the snapshotted data and prevents potential data leaks through the deduplication process. Though this is a security measure many organizations would welcome, and at first glance it may prompt some to create a different encryption key for each of their servers, keep in mind that a set of agents under one encryption key will NOT have their data deduplicated against a set of agents protected with a different key. So use your judgement as to what’s more important in your case. (By the way, using an encryption key at all is optional.)
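
A toy model makes the trade-off obvious. Below, a keyed hash (HMAC) stands in for encryption: the same block protected under two different keys no longer looks identical to a dedup index, so it can never dedupe across key boundaries. This illustrates the principle only, not AppAssure’s actual cryptography; the keys are hypothetical:

```python
import hashlib
import hmac

block = b"the same plaintext block on two protected servers"
key_a = b"encryption-key-for-agent-group-A"   # hypothetical key
key_b = b"encryption-key-for-agent-group-B"   # hypothetical key

# Under different keys, identical data produces different stored forms,
# so a hash-based dedup index can never match them to each other.
stored_a = hmac.new(key_a, block, hashlib.sha256).hexdigest()
stored_b = hmac.new(key_b, block, hashlib.sha256).hexdigest()
print(stored_a == stored_b)  # False: no dedup across encryption keys
```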

Operational Experience:
I have talked about a lot of the theory behind AppAssure, but I haven’t really touched on what it’s like day to day. This is usually the most important part for any admin who is going to maintain it. I do have to disclose that my installation was originally somewhat problematic, so I would think most users will not have the trouble I’ve had with the product. That said, I can also say that the backup engine is pretty sensitive: after any kind of unclean shutdown or weirdness with the engine, the repositories tend to become unclean and require a check. Granted, it’s not the end of the world, but it is something to keep an eye on.

Also, a word to the wise if you are getting into AppAssure (and I’m sure this goes for a lot of other backup solutions too): patience is key. Don’t keep repeating steps expecting immediate results. Remember that the UI is a web interface, which means that when you click a link you have created a web request, and that request stays in effect even if you close the browser; the more you click around, the busier the server gets and the less it responds to your requests. So, overall, be patient and trust that the product is working. 95% of the time it actually is, and most of the errors come down to being “patience errors”.

Another piece of advice: if you’re backing up more than 35 servers or so, depending of course on your retention policies and, more importantly, your backup intervals, you will definitely want to consider having more than one core performing backups.

Unfortunately, even though the product is fairly self-explanatory, the documentation for it is absolute rubbish. The knowledge base articles are a little better, but my experience was, for the most part, trial and error, and I had absolutely no guidance in correctly sizing the servers, the repositories, the retention policies, or the transfer rates. So unless AppAssure cleans up its act in that respect, I would say you need to be a hands-on person who is fairly knowledgeable about their infrastructure and knows a thing or two about bandwidth and the other components involved.

Conclusion:
Overall, I think AppAssure is a solution that gives a lot of value for the money. It is fairly easy to set up, use, and understand. The documentation is lacking, but fortunately the product is fairly simple to figure out. You will really want to do some trial-and-error testing to size things correctly for your environment and figure out the real space required for your repositories and your retention policies.

In my opinion, the solution is still not quite as enterprise-grade as others available, but if you have some 50-100 servers to back up, I think it’s viable. The flexibility of its storage requirements and its replication model make it a versatile solution, one I would advise for an organization that doesn’t have hundreds of thousands of dollars to spend on an enterprise backup product.

The best part is that AppAssure is almost a “set it and forget it” solution. The support isn’t stellar; I have yet to get a timely call back for an issue I considered critical. I’ve been told they are experiencing a growth spurt and are dealing with it, which is affecting their support. Make of that what you will, as I have yet to experience anything different, but I will still give them the benefit of the doubt, largely because of the product’s performance. It really is a good product; it needs some tune-ups and a bit more robustness, and the support could also be worked on.

Finally, if you decide to give it a shot, you can always get a trial license and try it. As of my purchase of the product, AppAssure also had a 30-day money-back guarantee if you are not satisfied. Hopefully they will keep that for the long run.
