A modern backup solution with the following features:
- Extensive reporting
- Administrative graphical user interface (GUI)
- Reduced backup times
- Ability to keep all backups for two months and occasionally keep a backup for 10 years
- High availability
- No operational impact on the clusters being backed up
- Multiple copies of the data without moving data over the corporate IP data network
Figure 1 shows the solution that was designed for this
customer's needs. The solutions chosen to address individual needs are
discussed in the sections following the figure.
Figure 1 Production Cluster and Development Cluster Connections to SANs and CDLs
A Modern Backup Solution
EMC Legato NetWorker V7 was chosen because of its flexibility
to deliver a solution that meets all the basic requirements. In conjunction
with NetWorker Management Console (NMC), this solution provides a secure
environment where the operations staff can be limited to performing only
certain functions using a Java-based, web-start application. NMC also provides
sophisticated and extensive reporting, and all actions taken on NetWorker are logged.
Reduced Backup Times and Storage Capacity
It was determined that a disk-to-disk solution would
provide the highest possible throughput for both restores and backups. However,
with the amount of data required to be stored, EMC Symmetrix disk was too
expensive to use as a backup device. The solution required lower cost disks
such as serial ATA disks. EMC's mid-range disk solution is CLARiiON, which, in
the spring of 2005, did not support OpenVMS. However, EMC had just launched
the CLARiiON Disk Library (CDL), which was a perfect solution.
The CDL is a virtual tape library. It is essentially a
collection of fibre-attached, serial ATA disks in a cabinet fronted by a
high-availability pair of Linux servers that emulate a tape library using
FalconStor VirtualTape Library (VTL) software. HP had just launched its
equivalent, the StorageWorks 6000 Virtual Library System.
A CDL was implemented at each site with enough space to
keep all data for two months. To ensure that the CDL is always available using
fibre channel, two ports were dedicated to local data backup.
Each CDL was set up to emulate an ATL P3000 tape
library, which is physically the same as an HP ESL9000, with 8 SDLT 320 drives.
Site 1 also had a smaller library set up, which was connected to the local MDS
(the MDS is described later).
The NetWorker server, which holds the media database and
client file indexes and controls the execution of backups, was installed on a
Sun Solaris server at Site 1. To comply with the customer's standards, high
availability was achieved by having a second server available at Site 1 and
having the metadata stored on a partition on the Symmetrix; this data was then
duplicated to the remote site where a third server was set up and ready to take
over from Site 1 if necessary.
As already noted, each CDL contains a high-availability pair of Linux servers. All the
disks are configured in RAID 5 sets, giving the CDLs a much better MTBF (mean
time between failures) than can be provided by a real tape library.
No Operational Impact on the Clusters Being Backed Up
The customer had recently consolidated their OpenVMS systems,
moving many servers onto fewer GS Series servers. This move had freed up a
number of smaller AlphaServers, which could now be used as backup appliances in
each cluster and as NetWorker storage nodes. An AlphaServer 4000 could push
four data streams at once before saturating the CPU, at which point any more
streams would degrade overall performance. Each storage node was connected to
the local CDL using two fibre channel host-based adapters. At Site 2, one
storage node was also connected to the local MDS so it could see the small
virtual tape library at Site 1.
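On NetWorker, a per-client stream cap like this is normally expressed through the client's parallelism attribute. As an assumed illustration only (the server and client names are hypothetical, and the nsradmin resource syntax should be checked against the installed release), the setting could be scripted as:

```shell
#!/bin/sh
# Hedged sketch: cap a storage node's concurrent save streams at four,
# matching the point where an AlphaServer 4000 saturated its CPU.
# SERVER and CLIENT are hypothetical names; DRY_RUN=1 only prints the
# command instead of contacting a live NetWorker server.

SERVER="nsrserv-site1"
CLIENT="alpha4000-node1"
DRY_RUN=1

make_nsradmin_script() {
    # nsradmin input: select the client resource, then set parallelism.
    printf '. type: NSR client; name: %s\nupdate parallelism: 4\n' "$CLIENT"
}

if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: nsradmin -s $SERVER"
    make_nsradmin_script
else
    make_nsradmin_script | nsradmin -s "$SERVER"
fi
```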
Multiple Copies of the Data Without Moving Data Over the Corporate IP Data Network
Because the production data is replicated at both sites, all
that was required was to split the business continuation volumes (BCVs)
simultaneously at both sites and back up the data locally. This was done using
the preprocessing and postprocessing capabilities of NetWorker, which allow you
to run command procedures before and after the backup on a particular client.
The steps are as follows:
- Preprocessing breaks each disk in the backup saveset out from the BCVs.
- Preprocessing mounts each disk locally with the correct logical name, so that it supersedes the SYSTEM logical name.
- The backup runs and backs up the mounted BCV disks.
- When the backup is complete, postprocessing dismounts the disks and puts them back into the BCVs.
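The steps above can be sketched in outline. The site's actual procedures were OpenVMS DCL; the POSIX-shell version below is only an illustration with echo stubs standing in for the real commands, and the TimeFinder symmir calls, device names, and mount point are assumptions rather than details from the source:

```shell
#!/bin/sh
# Hedged sketch of the NetWorker pre/post flow around a BCV backup.
# The real procedures were OpenVMS DCL; the symmir (EMC TimeFinder)
# and mount commands, group, and device names here are illustrative
# assumptions, printed rather than executed.

DEVICE_GROUP="PROD_DG"       # hypothetical Symmetrix device group
MOUNT_POINT="/backup/bcv0"   # hypothetical local mount point

pre_backup() {
    # Split the BCV mirrors to freeze a point-in-time copy.
    echo "symmir -g $DEVICE_GROUP split"
    # Mount the split BCV disk where the saveset expects it.
    echo "mount /dev/bcv0 $MOUNT_POINT"
}

post_backup() {
    # Dismount once NetWorker has finished the save.
    echo "umount $MOUNT_POINT"
    # Re-establish the BCVs so they resynchronize with production.
    echo "symmir -g $DEVICE_GROUP establish"
}

pre_backup
echo "save: backing up $MOUNT_POINT"   # NetWorker runs its save here
post_backup
```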
Because clients at both sites can split their BCVs at the same time, duplicate
backups are achieved, with each copy written directly to the local CDL.
The development data was a different matter because it is not
replicated from Site 2 to Site 1. Again, the preprocessing and postprocessing
capabilities were used to split the BCVs just as with the production data.
However, after the backups are finished, NetWorker initiates an automatic clone
operation to perform a copy of all the development savesets from the CDL in
Site 2 to the small CDL in Site 1.
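A clone like this can also be driven from the command line with NetWorker's nsrclone utility; the sketch below is a hypothetical dry-run wrapper, in which the server name, destination pool, and saveset IDs are placeholders rather than values from this deployment:

```shell
#!/bin/sh
# Hedged sketch: after the Site 2 backups finish, clone each development
# saveset to the pool backed by the small CDL at Site 1.
# Server name, pool name, and saveset IDs are all hypothetical;
# DRY_RUN=1 prints the nsrclone command instead of running it.

SERVER="nsrserv-site1"
DEST_POOL="Site1 Clone"
DRY_RUN=1

clone_saveset() {
    ssid="$1"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "nsrclone -s $SERVER -b '$DEST_POOL' -S $ssid"
    else
        nsrclone -s "$SERVER" -b "$DEST_POOL" -S "$ssid"
    fi
}

# Placeholder saveset IDs; real IDs would be queried with mminfo.
for ssid in 1000001 1000002; do
    clone_saveset "$ssid"
done
```

In practice the saveset IDs would be obtained from the media database (for example with mminfo) before the clone operation is started.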
It took about three weeks to develop and test the
preprocessing and postprocessing command procedures.
All the customer requirements were met, and the
solution went into production. With no special tuning, one product system was
backed up and cloned in the time it used to take to perform just the backup.
This was done with a single clone stream running. It is possible to initiate
multiple clone streams and cut cloning time by two thirds.