Guidelines for OpenVMS Cluster Configurations


11.6 State Transition Strategies

OpenVMS Cluster state transitions occur when a system joins or leaves an OpenVMS Cluster system and when the OpenVMS Cluster recognizes a quorum-disk state change. The connection manager handles these events to ensure the preservation of data integrity throughout the OpenVMS Cluster.

State transitions should be a concern only if systems are joining or leaving an OpenVMS Cluster system frequently enough to cause disruption.

A state transition's duration and its effect on users and applications are determined by the reason for the transition, the configuration, and the applications in use. By managing transitions effectively, system managers can control:

  • Detection of failures and how long the transition takes
  • Side effects of the transition, such as volume shadowing copy and merge operations

11.6.1 Dealing with State Transitions

The following guidelines describe effective ways of dealing with transitions so that you can minimize the actual transition time as well as the side effects after the transition.

  • Be proactive in preventing nodes from leaving an OpenVMS Cluster by:
    • Providing interconnect redundancy between all systems.
    • Preventing resource exhaustion of disks and memory as well as saturation of interconnects, processors, and adapters.
    • Using an uninterruptible power supply (UPS).
    • Informing users that shutting off a workstation in a large OpenVMS Cluster disrupts the operation of all systems in the cluster.
  • Do not use a quorum disk unless your OpenVMS Cluster has only two nodes.
  • Where possible, ensure that shadow set members reside on shared buses to increase availability.
  • The time to detect the failure of nodes, disks, adapters, interconnects, and virtual circuits is controlled by system polling parameters. Reducing polling time makes the cluster react quickly to changes, but it also results in lower tolerance to temporary outages. When setting timers, try to strike a balance between rapid recovery from significant failures and "nervousness" resulting from temporary failures.
    Table 11-5 describes OpenVMS Cluster polling parameters that you can adjust for quicker detection time. Compaq recommends that these parameters be set to the same value on every system in the OpenVMS Cluster. An illustrative SYSGEN example follows this list.

    Table 11-5 OpenVMS Cluster Polling Parameters
    Parameter Description
    QDSKINTERVAL Specifies the quorum disk polling interval.
    RECNXINTERVL Specifies the interval during which the connection manager attempts to restore communication to another system.
    TIMVCFAIL Specifies the time required for detection of a virtual circuit failure.
  • Include application recovery in your plans. When you assess the effect of a state transition on application users, consider that the application recovery phase includes activities such as replaying a journal file, cleaning up recovery units, and users logging in again.
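One way to examine and adjust the polling parameters in Table 11-5 is with the SYSGEN utility. The following DCL sequence is a minimal sketch only; the value shown for RECNXINTERVL is illustrative, not a recommendation, and in practice such changes are usually placed in MODPARAMS.DAT and applied with AUTOGEN so that they survive future parameter recalculations:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT                 ! Start from the current parameter set
    SYSGEN> SHOW QDSKINTERVAL           ! Display the quorum disk polling interval
    SYSGEN> SHOW TIMVCFAIL              ! Display the virtual circuit failure detection time
    SYSGEN> SET RECNXINTERVL 20         ! Illustrative value only; set identically clusterwide
    SYSGEN> WRITE CURRENT               ! Save; the new value takes effect at the next reboot
    SYSGEN> EXIT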

Reference: For more detailed information about OpenVMS Cluster transitions and their phases, system parameters, and quorum management, see OpenVMS Cluster Systems.

11.7 Migration and Warranted Support for Multiple Versions

Compaq provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that Compaq has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support is a superset of the Rolling Upgrade support provided in earlier releases of OpenVMS and is available for mixes that are not warranted. Migration support means that Compaq has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by Compaq. However, in exceptional cases, Compaq may request that you move to a warranted configuration as part of answering the problem.

Compaq supports only two versions of OpenVMS running in a cluster at the same time, regardless of architecture. Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments.

Table 11-6 shows the level of support provided for all possible version pairings.

Table 11-6 OpenVMS Cluster Warranted and Migration Support
                            Alpha/VAX V7.3   Alpha V7.2-xxx/VAX V7.2   Alpha/VAX V7.1
  Alpha/VAX V7.3            WARRANTED        Migration                 Migration
  Alpha V7.2-xxx/VAX V7.2   Migration        WARRANTED                 Migration
  Alpha/VAX V7.1            Migration        Migration                 WARRANTED

In a mixed-version cluster, you must install remedial kits on systems running earlier versions of OpenVMS. For OpenVMS Version 7.3, two new features, XFC and Volume Shadowing minicopy, cannot be used on any node in a mixed-version cluster unless all nodes running earlier versions of OpenVMS have the required remedial kits installed. Remedial kits are available for both features on all versions, except for Volume Shadowing minicopy on OpenVMS Alpha/VAX Version 7.1.

For a complete list of required remedial kits, see the OpenVMS Version 7.3 Release Notes.

11.8 Alpha and VAX Systems in the Same OpenVMS Cluster

OpenVMS Alpha and OpenVMS VAX systems can work together in the same OpenVMS Cluster to provide both flexibility and migration capability. You can add Alpha processing power to an existing VAXcluster, enabling you to run applications that are system specific or hardware specific.

Table 11-6 depicts the OpenVMS version pairs for which Compaq provides migration and warranted support.

11.8.1 OpenVMS Cluster Satellite Booting Across Architectures

OpenVMS Alpha Version 7.1 and OpenVMS VAX Version 7.1 enable VAX boot nodes to provide boot service to Alpha satellites and Alpha boot nodes to provide boot service to VAX satellites. This support, called cross-architecture booting, increases configuration flexibility and provides higher availability of boot servers for satellites.

Two configuration scenarios make cross-architecture booting desirable:

  • You want the Alpha system disk configured in the same highly available and high-performance area as your VAX system disk.
  • Your Alpha boot server shares CI or DSSI storage with the VAX boot server. If your only Alpha boot server fails, you want to be able to reboot an Alpha satellite before the Alpha boot server reboots.

11.8.2 Restrictions

You cannot perform OpenVMS operating system and layered product installations and upgrades across architectures. For example, you must install and upgrade OpenVMS Alpha software using an Alpha system. When you configure OpenVMS Cluster systems that take advantage of cross-architecture booting, ensure that at least one system from each architecture is configured with a disk that can be used for installations and upgrades.

System disks can contain only a single version of the OpenVMS operating system and are architecture specific. For example, OpenVMS VAX Version 7.1 cannot coexist on a system disk with OpenVMS Alpha Version 7.1.

11.9 Determining Backup and Storage Management Strategies

In any system, hardware and electrical failures as well as human errors occur. All important data must be backed up to limit the effects of these errors. You can do this in a number of ways, depending on the time and resources available.

11.9.1 Steps for Determining a Backup Strategy

Follow these steps to determine a backup strategy:
Step  Description
1     Decide how much lost work is acceptable in the event of a failure. This determines how often the data needs to be backed up.
2     Decide how long the data can remain unavailable while it is being backed up. This determines the methods of backup.
3     Establish a backup schedule, including the frequency and the times of the day and week that backups will occur. Consider the following:
        • How much data will be backed up daily, weekly, and monthly?
        • Will you conduct full or incremental backups? How often for each?
4     Make sure that sufficient backup media are available. Determine both the initial amount of backup media needed and its growth rate.
5     Determine if your backup strategy requires backup media to be stored off site.

11.10 Disk Backup

Table 11-7 describes ways to provide a copy of data for backup.

Table 11-7 Backup Methods for Data
Type of Data Backup Method
Database is continually changing; transactions cannot be lost. Use a combination of database backup (at a time when it is known to be static) and journaling transactions to the database.

Reference: See the following manuals for additional information:

  • RMS Journaling for OpenVMS Manual
  • Guide to OpenVMS File Applications
  • DEC Rdb Guide to Database Design and Definition
  • DEC DBMS Database Design Guide
  • DEC DBMS Database Maintenance and Performance Guide
Data must be accessible at all times, including nights and weekends. Use Volume Shadowing for OpenVMS software to accomplish rapid disk backup. Remove a member from a three-member shadow set by dismounting the shadow set, remounting the shadow set with two members, and copying the third disk to magnetic tape. After this, the third disk can be included again in the shadow set. (An illustrative DCL sequence follows this table.)
Data can be unavailable for an extended period of time for backup. Use the OpenVMS Backup utility (BACKUP) to make an image backup of a volume or a file-by-file copy of specified sets of files. BACKUP can make a copy to another disk (or set of disks) or to magnetic tape. Restoring from an image copy requires that the entire image be written to a disk; when you restore specific files, they are copied from the restored disk to the intended destination.

An image copy is, however, faster than a file-by-file copy, which copies files one at a time. A file-by-file copy makes restoring a single file easy, and a file-by-file restore greatly reduces fragmentation of the restored disk.

Data is static. Archiving copies of the data on magnetic tape and excluding the online files from other backup procedures may be sufficient. Examples are program sources, documentation files, and distribution kits.
Scratch files and intermediate files. You can choose not to provide any backup for these files.
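The shadow-set method in Table 11-7 can be sketched in DCL as follows. The device names (DSA1, $1$DKA100 through $1$DKA300, MKA500) and the volume label USERDATA are illustrative only and must be replaced with names from your configuration:

    $ DISMOUNT DSA1:                                   ! Dismount the three-member shadow set
    $ MOUNT/SYSTEM DSA1:/SHADOW=($1$DKA100:,$1$DKA200:) USERDATA    ! Remount with two members
    $ MOUNT/NOWRITE/OVERRIDE=SHADOW_MEMBERSHIP $1$DKA300: USERDATA  ! Mount removed member read-only
    $ BACKUP/IMAGE/LOG $1$DKA300: MKA500:USERDATA.BCK/SAVE_SET      ! Copy it to tape
    $ DISMOUNT $1$DKA300:
    $ MOUNT/SYSTEM DSA1:/SHADOW=($1$DKA300:) USERDATA  ! Return the member to the shadow set

Note that returning the member triggers a shadow copy operation, one of the transition side effects discussed in Section 11.6.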

11.11 Tape Backup

Backup tape storage is the least expensive storage medium. Tapes are the most common medium for offline storage and provide a range of capacities, costs, and shelf lives. Tape storage is removable and is generally kept off line.

11.11.1 For More Information

Backup procedures are described in detail in the following manuals:

  • OpenVMS System Manager's Manual
  • OpenVMS System Management Utilities Reference Manual

11.11.2 Benefits of Unattended Backup

With current tape-drive technology, you can initiate a large backup operation that completes without operator intervention (that is, without changing tapes). Such unattended backups can save significant time and reduce staffing costs. Cartridge tape loaders with tape magazines, such as the Tx8x7 or the TA91, allow unattended backups of nearly 42 GB of online storage. Backups can also be performed on robot-accessible media, such as the StorageTek 4400 ACS through the TC44 interconnect adapter, which provides terabyte capacity for backup archives.

11.11.3 Archive/Backup System for OpenVMS

Archive/Backup System for OpenVMS is a replacement for the Storage Library System (SLS). Archive/Backup provides lower system management costs, reduced equipment costs, and data security. It uses the POLYCENTER Media Library Manager (MLM) and the POLYCENTER Media Robot Manager (MRM) to move data to inexpensive tapes, and allows you to find and restore backed up and archived data easily. POLYCENTER MLM and MRM are the first Compaq products to provide OpenVMS users secure, highly reliable, fully automated access to tape and optical removable media through cost-effective media robots, such as the Odetics 5480 and the Tx8x7 family.

11.11.4 StorageTek 4400 ACS

You can attach the StorageTek 4400 ACS, a storage silo, to either an HSC using the TC44 adapter or directly to the XMI bus of a system using a KCM44 adapter. The StorageTek Silo automates access to a library of IBM 3480-compatible cartridge tapes. The library can contain up to 16 library storage modules. Each module can hold up to 1.2 TB of data in 6000 tape cartridges. A robotic arm can find and mount a requested tape within 45 to 90 seconds. Data movement for tape applications, such as the OpenVMS Backup utility, is performed the same way as with a TA90 tape drive.

11.11.5 Tape-Drive Performance and Capacity

Table 11-8 describes the performance and capacity of various tape drives and the interconnects to which they attach.

Table 11-8 Tape-Drive Performance and Capacity
Interconnect Description
CI (STI tapes) The TA92 can transfer at a rate of 2.6 MB/s. Its magazine of IBM 3480-compatible cartridge tapes lets it back up 38 GB unattended. To achieve highest performance, connect the TA92 through a KDM70 controller or configure it with multiple CI adapters, so that the path to the tape drives is separate from the path to the disk drives.
DSSI The TF867 offers the best tape performance. Its magazine of half-inch cartridge tapes can hold up to 42 GB of data for unattended backup. Its transfer rate is 0.8 MB/s. The TF857 can read TK50 and TK70 tapes, and its magazine can hold up to 18 GB of data.
SCSI The TSZ07 allows SCSI configurations to access 9-track reel-to-reel tapes. It has a capacity of 140 MB per reel and a 750 KB/s transfer rate. The TZK10 offers a less expensive but slower-performing tape solution for SCSI configurations. It uses a quarter-inch cartridge that holds 525 MB and can transfer at a rate of 200 KB/s.

Appendix A
SCSI as an OpenVMS Cluster Interconnect

One of the benefits of OpenVMS Cluster systems is that multiple computers can simultaneously access storage devices connected to an OpenVMS Cluster storage interconnect. Together, these systems provide high performance and highly available access to storage.

This appendix describes how OpenVMS Cluster systems support the Small Computer Systems Interface (SCSI) as a storage interconnect. Multiple Alpha computers, also referred to as hosts or nodes, can simultaneously access SCSI disks over a SCSI interconnect. Such a configuration is called a SCSI multihost OpenVMS Cluster. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.

The discussions in this appendix assume that you already understand the concept of sharing storage resources in an OpenVMS Cluster environment. OpenVMS Cluster concepts and configuration requirements are also described in the following OpenVMS Cluster documentation:

  • OpenVMS Cluster Systems
  • OpenVMS Cluster Software Software Product Description (SPD 29.78.xx)

This appendix includes two primary parts:

  • Section A.1 through Section A.6.6 describe the fundamental procedures and concepts that you need in order to plan and implement a SCSI multihost OpenVMS Cluster system.
  • Section A.7 and its subsections provide additional technical detail and concepts.

A.1 Conventions Used in This Appendix

Certain conventions are used throughout this appendix to identify the applicable ANSI standard and the elements in figures.

A.1.1 SCSI ANSI Standard

OpenVMS Cluster systems configured with the SCSI interconnect must use standard SCSI-2 or SCSI-3 components. The SCSI-2 components must be compliant with the architecture defined in the American National Standards Institute (ANSI) Standard SCSI-2, X3T9.2, Rev. 10L. The SCSI-3 components must be compliant with approved versions of the SCSI-3 Architecture and Command standards. For ease of discussion, this appendix uses the term SCSI to refer to both SCSI-2 and SCSI-3.

A.1.2 Symbols Used in Figures

Figure A-1 is a key to the symbols used in figures throughout this appendix.

Figure A-1 Key to Symbols Used in Figures

A.2 Accessing SCSI Storage

In OpenVMS Cluster configurations, multiple VAX and Alpha hosts can directly access SCSI devices in any of the following ways:

  • CI interconnect with HSJ or HSC controllers
  • Digital Storage Systems Interconnect (DSSI) with HSD controller
  • SCSI adapters directly connected to VAX or Alpha systems

You can also access SCSI devices indirectly using the OpenVMS MSCP server.

The following sections describe single-host and multihost access to SCSI storage devices.

A.2.1 Single-Host SCSI Access in OpenVMS Cluster Systems

Prior to OpenVMS Version 6.2, OpenVMS Cluster systems provided support for SCSI storage devices connected to a single host using an embedded SCSI adapter, an optional external SCSI adapter, or a special-purpose RAID (redundant arrays of independent disks) controller. Only one host could be connected to a SCSI bus.

A.2.2 Multihost SCSI Access in OpenVMS Cluster Systems

Beginning with OpenVMS Alpha Version 6.2, multiple Alpha hosts in an OpenVMS Cluster system can be connected to a single SCSI bus to share access to SCSI storage devices directly. This capability allows you to build highly available servers using shared access to SCSI storage.

Figure A-2 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect (for example, a local area network [LAN]) is required for host-to-host OpenVMS Cluster (System Communications Architecture [SCA]) communications.

Figure A-2 Highly Available Servers for Shared SCSI Access

You can build a three-node OpenVMS Cluster system using the shared SCSI bus as the storage interconnect, or you can include shared SCSI buses within a larger OpenVMS Cluster configuration. A quorum disk can be used on the SCSI bus to improve the availability of two- or three-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices.
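As an illustration of the quorum disk mentioned above, each host that polls the quorum disk names it through the DISK_QUORUM and QDSKVOTES system parameters. This is a minimal sketch; the device name is hypothetical, and in practice these values would be placed in MODPARAMS.DAT and applied with AUTOGEN:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET DISK_QUORUM "$1$DKA100" ! Hypothetical shared SCSI disk used as the quorum disk
    SYSGEN> SET QDSKVOTES 1             ! Votes contributed by the quorum disk
    SYSGEN> WRITE CURRENT               ! Takes effect at the next reboot
    SYSGEN> EXIT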

A.3 Configuration Requirements and Hardware Support

This section lists the configuration requirements and supported hardware for multihost SCSI OpenVMS Cluster systems.

A.3.1 Configuration Requirements

Table A-1 shows the requirements and capabilities of the basic software and hardware components you can configure in a SCSI OpenVMS Cluster system.

Table A-1 Requirements for SCSI Multihost OpenVMS Cluster Configurations
Requirement Description
Software All Alpha hosts sharing access to storage on a SCSI interconnect must be running:
  • OpenVMS Alpha Version 6.2 or later
  • OpenVMS Cluster Software for OpenVMS Alpha Version 6.2 or later
Hardware Table A-2 lists the supported hardware components for SCSI OpenVMS Cluster systems. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.
SCSI tape, floppy, and CD-ROM drives You cannot configure SCSI tape drives, floppy drives, or CD-ROM drives on multihost SCSI interconnects. If your configuration requires SCSI tape, floppy, or CD-ROM drives, configure them on single-host SCSI interconnects. Note that SCSI tape, floppy, and CD-ROM drives may be MSCP or TMSCP served to other hosts in the OpenVMS Cluster configuration.
Maximum hosts on a SCSI bus You can connect up to three hosts on a multihost SCSI bus. You can configure any mix of the hosts listed in Table A-2 on the same shared SCSI interconnect.
Maximum SCSI buses per host You can connect each host to a maximum of six multihost SCSI buses. The number of nonshared (single-host) SCSI buses that can be configured is limited only by the number of available slots on the host bus.
Host-to-host communication All members of the cluster must be connected by an interconnect that can be used for host-to-host (SCA) communication; for example, DSSI, CI, Ethernet, FDDI, or MEMORY CHANNEL.
Host-based RAID (including host-based shadowing) Supported in SCSI OpenVMS Cluster configurations.
SCSI device naming The name of each SCSI device must be unique throughout the OpenVMS Cluster system. When configuring devices on systems that include a multihost SCSI bus, adhere to the following requirements:
  • A host can have, at most, one adapter attached to a particular SCSI interconnect.
  • All host controllers attached to a given SCSI interconnect must have the same OpenVMS device name (for example, PKA0), unless port allocation classes are used (see OpenVMS Cluster Systems).
  • Each system attached to a SCSI interconnect must have a nonzero node disk allocation class value. These node disk allocation class values may differ as long as either of the following conditions is true:
    • The SCSI interconnect has a nonzero port allocation class.
    • The only devices attached to the SCSI interconnect are accessed by HSZ70 or HSZ80 controllers that have a nonzero controller allocation class.

    If you have multiple SCSI interconnects, you must consider all the SCSI interconnects to determine whether you can choose a different value for the node disk allocation class on each system. Note also that adding a SCSI device to an existing SCSI interconnect requires a reevaluation of whether the node disk allocation classes can still differ. Therefore, Compaq recommends that you use the same node disk allocation class value for all systems attached to the same SCSI interconnect (see the SYSGEN sketch following this table). For more information about allocation classes, see OpenVMS Cluster Systems.
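As a minimal sketch of the recommendation above, the node disk allocation class is set with the ALLOCLASS system parameter. The value 1 is illustrative only and would be set identically on every host attached to the shared SCSI interconnect:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET ALLOCLASS 1             ! Illustrative nonzero value; same on each host on the bus
    SYSGEN> WRITE CURRENT               ! Takes effect at the next reboot
    SYSGEN> EXIT

With ALLOCLASS set to 1, a disk on the shared bus is named with the allocation class prefix, for example, $1$DKA100.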
