HP OpenVMS Systems Documentation


Guidelines for OpenVMS Cluster Configurations


5.4 Determining Disk Availability Requirements

For storage subsystems, availability is determined by the availability of the storage device as well as the availability of the path to the device.

5.4.1 Availability Requirements

Some costs are associated with optimizing your storage subsystems for higher availability. Part of analyzing availability costs is weighing the cost of protecting data against the cost of unavailable data during failures. Depending on the nature of your business, the impact of storage subsystem failures may be low, moderate, or high.

Device and data availability options reduce, and in some cases eliminate, the impact of storage subsystem failures.

5.4.2 Device and Data Availability Optimizers

Depending on your availability requirements, choose among the availability optimizers described in Table 5-4 for applications and data with the greatest need.

Table 5-4 Storage Availability Optimizers
Availability Optimizer Description
Redundant access paths: Protect against hardware failures along the path to the device by configuring redundant access paths to the data.
Volume Shadowing for OpenVMS software: Replicates data written to a virtual disk by writing the data to one or more physically identical disks that form a shadow set. With replicated data, users can access data even when one disk becomes unavailable. If one shadow set member fails, the shadowing software removes the drive from the shadow set, and processing continues with the remaining drives. Shadowing is transparent to applications and allows data storage and delivery during media, disk, controller, and interconnect failure.

A shadow set can contain up to three members, and shadow set members can be anywhere within the storage subsystem of an OpenVMS Cluster system.

Reference: See Volume Shadowing for OpenVMS for more information about volume shadowing. A brief MOUNT example follows this table.

System disk redundancy: Place system files judiciously on disk drives with multiple access paths. OpenVMS Cluster availability increases when you form a shadow set that includes the system disk. You can also configure an OpenVMS Cluster system with multiple system disks.

Reference: For more information, see Section 11.2.

Database redundancy: Keep redundant copies of certain files or partitions of databases that are, for example, updated overnight by batch jobs. Rather than using shadow sets, which maintain a complete copy of the entire disk, it might be sufficient to maintain a backup copy of selected files or databases on another disk or even on a standby tape.
DECevent: DECevent, in conjunction with volume shadowing, can detect most imminent device failures with sufficient lead time to move the data to a spare device.

Enhance device reliability with appropriate software tools. Use device-failure prediction tools, such as DECevent, where high availability is needed.

Newer devices: Protect against failure by choosing newer devices. Typically, newer devices provide improved reliability and mean time between failures (MTBF). Newer controllers also improve reliability by employing updated chip technologies.
Implement thorough backup strategies: Frequent and regular backups are the most effective way to ensure the availability of your data.
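
The Volume Shadowing entry above can be illustrated with a short DCL sketch. The command below mounts two physically identical disks as a single shadow set; the device names, volume label, and the choice of a systemwide mount are assumptions for illustration only.

  $ ! Form a two-member shadow set presented as virtual unit DSA1:
  $ ! (device names and volume label are examples only)
  $ MOUNT/SYSTEM DSA1: /SHADOW=($1$DUA10:,$1$DUA20:) DATADISK

If one member fails, the shadowing software removes it from the shadow set and I/O continues on the remaining member, transparently to applications.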

5.5 CI-Based Storage

The CI interconnect provides the highest OpenVMS Cluster availability with redundant, independent transmit-and-receive CI cable pairs. The CI offers multiple access paths to disks and tapes by means of dual-ported devices between HSC or HSJ controllers.

5.5.1 Supported Controllers and Devices

The following controllers and devices are supported by the CI interconnect:

  • HSJ storage controllers
    • SCSI devices (RZ, TZ, EZ)
  • HSC storage controllers
    • SDI and STI devices (RA, ESE, TA)
    • K.SCSI devices (RZ, TZ, EZ)

5.6 DSSI Storage

DSSI-based configurations provide shared direct access to storage for systems with moderate storage capacity. The DSSI interconnect provides the lowest-cost shared access to storage in an OpenVMS Cluster.

The storage tables in this section may contain incomplete lists of products.

5.6.1 Supported Devices

DSSI configurations support the following devices:

  • EF-series solid-state disks
  • RF-series disks
  • TF-series tapes
  • DECarray storage arrays
  • HSD storage controller
    • SCSI devices (RZ, TZ, EZ)

Reference: RZ, TZ, and EZ SCSI storage devices are described in Section 5.7.

5.7 SCSI-Based Storage

The Small Computer Systems Interface (SCSI) bus is a storage interconnect based on an ANSI industry standard. You can connect a total of 8 or 16 nodes to the SCSI bus, up to 3 of which can be CPUs.

5.7.1 Supported Devices

The following devices can connect to a single host or multihost SCSI bus:

  • RZ-series disks
  • HSZ storage controllers

The following devices can connect only to a single host SCSI bus:

  • EZ-series disks
  • RRD-series CD-ROMs
  • TZ-series tapes

5.8 Fibre Channel-Based Storage

The Fibre Channel interconnect is a storage interconnect that is based on an ANSI industry standard.

5.8.1 Storage Devices

The HSG storage controllers can connect to a single host or to a multihost Fibre Channel interconnect.

5.9 Host-Based Storage

Host-based storage devices can be connected locally to OpenVMS Cluster member systems using local adapters. You can make this locally connected storage available to other OpenVMS Cluster members by configuring a node as an MSCP server.
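
As a hedged sketch of MSCP serving, the MODPARAMS.DAT lines below load the MSCP server at boot and serve locally connected disks to other cluster members. The values shown are illustrative assumptions; choose values appropriate to your configuration, then run AUTOGEN and reboot.

  ! In SYS$SYSTEM:MODPARAMS.DAT (then run @SYS$UPDATE:AUTOGEN and reboot)
  MSCP_LOAD = 1        ! load the MSCP server
  MSCP_SERVE_ALL = 2   ! serve locally connected disks (1 serves all available disks)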

You can use local adapters to connect each disk to two access paths (dual ports). Dual porting allows automatic failover of disks between nodes.

5.9.1 Internal Buses

Locally connected storage devices attach to a system's internal bus.

Alpha systems use the following internal buses:

  • PCI
  • EISA
  • XMI
  • SCSI
  • TURBOchannel
  • Futurebus+

VAX systems use the following internal buses:

  • XMI
  • Q-bus
  • SCSI

5.9.2 Local Adapters

Following is a list of local adapters and their bus types:

  • PB2HA (EISA)
  • PMAZB (TURBOchannel)
  • PMAZC (TURBOchannel)
  • KDM70 (XMI)
  • KDB50 (VAXBI)
  • KDA50 (Q-bus)

Chapter 6
Configuring Multiple Paths to SCSI and Fibre Channel Storage

This chapter describes multipath SCSI support, which is available on OpenVMS Alpha Version 7.2 and later. The SCSI protocol is used on both the parallel SCSI interconnect and the Fibre Channel interconnect. The term SCSI is used to refer to either parallel SCSI or Fibre Channel (FC) devices throughout the chapter.


The V7.2-2S1 kit provides support for failover between local and MSCP served paths to SCSI disk devices. This capability is enabled by setting the MPDEV_REMOTE system parameter to 1. The default value of MPDEV_REMOTE is 0. MPDEV_REMOTE must stay set to 0 unless the V7.2-2S1 kit is installed.
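
The following DCL sketch shows one way to set MPDEV_REMOTE once the kit is installed on the appropriate nodes; the new value normally takes effect at the next reboot. Adding MPDEV_REMOTE = 1 to MODPARAMS.DAT and running AUTOGEN is an equivalent, more permanent alternative.

  $ ! Enable failover between local and MSCP served paths
  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT
  SYSGEN> SET MPDEV_REMOTE 1
  SYSGEN> WRITE CURRENT
  SYSGEN> EXIT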

The V7.2-2S1 kit includes fixes and changes that are beneficial even if MPDEV_REMOTE is left at 0, such as avoiding controller failover when a device is mounted.

This SCSI multipath feature may be incompatible with some third-party disk caching, disk shadowing, or similar products. Compaq advises that you not use such software on SCSI devices that are configured for multipath failover (for example, SCSI devices that are connected to HSZ70 and HSZ80 controllers in multibus mode) until this feature is supported by the producer of the software.

Refer to Section 6.2 for important requirements and restrictions for using the multipath SCSI function.

Note that the Fibre Channel and parallel SCSI interconnects are shown generically in this chapter. Each is represented as a horizontal line to which the node and storage subsystems are connected. Physically, the Fibre Channel interconnect is always radially wired from a switch, as shown in Figure 7-1. Parallel SCSI can be radially wired to a hub or can be a daisy-chained bus.

The representation of multiple SCSI disks and SCSI buses in a storage subsystem is also simplified. The multiple disks and SCSI buses that one or more HSZx or HSGx controllers present to a host as a logical unit are shown in the figures as a single logical unit.


6.1 Overview of Multipath SCSI Support

A multipath SCSI configuration provides failover from one path to a device to another path to the same device. Multiple paths to the same device increase the availability of that device for I/O operations. Multiple paths also offer higher aggregate performance. Figure 6-1 shows a multipath SCSI configuration. Two paths are configured from a computer to the same virtual storage device.

Multipath SCSI configurations can use either parallel SCSI or Fibre Channel as the storage interconnect, as illustrated by Figure 6-1.

Two or more paths to a single device are called a multipath set. When the system configures a path to a device, it checks for an existing device with the same name but a different path. If such a device is found, and multipath support is enabled, the system either forms a multipath set or adds the new path to an existing set. If multipath support is not enabled, then no more than one path to a device is configured.

The system presents a multipath set as a single device. The system selects one path to the device as the "current" path, and performs all I/O over this path until there is a failure or the system manager requests that the system switch to another path.
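
The following DCL sketch shows how a system manager might inspect a multipath set and request a switch to another path. The device name and path name are placeholders; use SHOW DEVICE/FULL to see the path names actually configured on your system.

  $ ! Display the paths in the multipath set for this device
  $ SHOW DEVICE/FULL $1$DGA23:
  $ ! Request a manual switch of the current path (path name is an example)
  $ SET DEVICE $1$DGA23: /SWITCH /PATH=PGB0.5000-1FE1-0001-0B92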

Multipath SCSI support provides the following types of failover:

  • Direct SCSI to direct SCSI
  • Direct SCSI to MSCP served
  • MSCP served to direct SCSI

Direct SCSI to direct SCSI failover requires the use of multiported SCSI devices. Direct SCSI to MSCP served failover requires multiple hosts per SCSI bus, but does not require multiported SCSI devices. These two failover types can be combined. Each type and the combination of the two are described next.

6.1.1 Direct SCSI to Direct SCSI Failover

Direct SCSI to direct SCSI failover can be used on systems with multiported SCSI devices. The dual HSZ70, the HSZ80, and the HSG80 are examples of multiported SCSI devices. A multiported SCSI device can be configured with multiple ports on the same physical interconnect so that if one of the ports fails, the host can continue to access the device through another port. This is known as transparent failover mode and has been supported by OpenVMS since Version 6.2.

OpenVMS Version 7.2 introduced support for a new failover mode in which the multiported device can be configured with its ports on different physical interconnects. This is known as multibus failover mode.

The HSx failover modes are selected by HSx console commands. Transparent and multibus modes are described in more detail in Section 6.3.
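
As a hedged illustration, the failover mode of a dual-redundant HSZ or HSG controller pair is typically selected with commands such as the following at the controller CLI prompt (the prompt and exact syntax vary with the controller model and firmware version, so treat this as a sketch and consult the HSx CLI documentation). The command

  HSG80> SET MULTIBUS_FAILOVER COPY=THIS_CONTROLLER

places the pair in multibus failover mode, while

  HSG80> SET FAILOVER COPY=THIS_CONTROLLER

selects transparent failover mode.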

Figure 6-1 is a generic illustration of a multibus failover configuration.


Configure multiple direct SCSI paths to a device only when multipath support is enabled on all connected nodes, and the HSZ/G is in multibus failover mode.

The two logical disk devices shown in Figure 6-1 represent virtual storage units that are presented to the host by the HSx controller modules. Each logical storage unit is "on line" to one of the two HSx controller modules at a time. When there are multiple logical units, they can be on line to different HSx controllers so that both HSx controllers can be active at the same time.

In transparent mode, a logical unit switches from one controller to the other when an HSx controller detects that the other controller is no longer functioning.

In multibus mode, as shown in Figure 6-1, a logical unit switches from one controller to the other when one of the following events occurs:

  • One HSx controller detects that the other controller is no longer functioning.
  • The OpenVMS multipath software detects that the current path has failed and issues a command to cause a switch.
  • The OpenVMS system manager issues a command to cause a switch.

Figure 6-1 Multibus Failover Configuration

Note the following about Figure 6-1:

  • Host has two adapters.
  • Interconnects can both be parallel SCSI (HSZ70 or HSZ80) or both be Fibre Channel (HSG80), but not mixed.
  • Storage cabinet contains two HSx controllers configured for multibus failover mode.

The multibus configuration offers the following advantages over transparent failover:

  • Higher aggregate performance with two host adapters and two HSx controller modules in operation.
  • Higher availability because the storage is still accessible when a host adapter, the interconnect, or the HSx controller module on a path fails.

6.1.2 Direct SCSI to MSCP Served Failover

OpenVMS provides support for multiple hosts that share a SCSI bus. This is known as a multihost SCSI OpenVMS Cluster system. In this configuration, the SCSI bus is a shared storage interconnect. Cluster communication occurs over a second interconnect (LAN, DSSI, CI, or MEMORY CHANNEL).

Multipath support in a multihost SCSI OpenVMS Cluster system enables failover from directly attached SCSI storage to MSCP served SCSI storage, as shown in Figure 6-2.

Figure 6-2 Direct SCSI to MSCP Served Configuration With One Interconnect

Note the following about this configuration:

  • Two hosts are connected to a shared storage interconnect.
  • Two hosts are connected by a second interconnect (LAN, CI, DSSI, or MEMORY CHANNEL) for cluster communications.
  • The storage devices can have a single port or multiple ports.
  • If node EDGAR's SCSI connection to the storage fails, the SCSI storage is MSCP served by the remaining host over the cluster interconnect.

Multipath support in such a multihost SCSI OpenVMS Cluster system also enables failover from MSCP served SCSI storage to directly attached SCSI storage. For example, the following sequence of events can occur on the configuration shown in Figure 6-2:

  • Node POE is using node EDGAR as an MSCP server to access some storage device on the shared storage interconnect.
  • On node EDGAR, the direct connection to the shared storage fails, node EDGAR is shut down, or node EDGAR becomes unreachable via the cluster interconnect.
  • Node POE switches to using its direct path to the shared storage.


In this document, the capability to fail over from direct SCSI to MSCP served paths implies the ability to fail over in either direction between direct and served paths.

6.1.3 Configurations Combining Both Types of Multipath Failover

In a multihost SCSI OpenVMS cluster system, you can increase storage availability by configuring the cluster for both types of multipath failover (direct SCSI to direct SCSI and direct SCSI to MSCP served SCSI), as shown in Figure 6-3.

Figure 6-3 Direct SCSI to MSCP Served Configuration With Two Interconnects

Note the following about this configuration:

  • Both nodes are directly connected to both storage interconnects.
  • Both nodes are connected to a second interconnect for cluster communications.
  • Each HSx storage controller is connected to only one interconnect.
  • Both HSx storage controllers are in the same cabinet.

This configuration provides the advantages of both direct SCSI failover and direct to MSCP served failover.

6.2 Configuration Requirements and Restrictions

The requirements for multipath SCSI and FC configurations are presented in Table 6-1.

Table 6-1 Multipath SCSI and FC Configuration Requirements
Component Description
Host adapter: For parallel SCSI, the KZPBA-CB must be used. It is the only SCSI host adapter that supports multipath failover on OpenVMS.
Alpha console firmware: For systems with HSZ70 and HSZ80, the minimum revision level is 5.3 or 5.4, depending on your AlphaServer. For systems with HSG80, the minimum revision level is 5.4.
Controller firmware: For HSZ70, the minimum revision level is 7.3; for HSZ80, it is 8.3; for HSG80, it is 8.4.
Controller module mode: Must be set to multibus mode. The selection is made at the HSx console.
Full connectivity: All hosts that are connected to an HSx in multibus mode must have a path to both HSx controller modules. This is because hosts that are connected exclusively to different controllers will switch the logical unit back and forth between controllers, preventing any I/O from executing.

To prevent this from happening, always provide full connectivity from hosts to controller modules. If a host's connection to a controller fails, then take one of the following steps to avoid indefinite path switching:

  • Repair the connection promptly.
  • Prevent the other hosts from switching to the partially-connected controller. This is done either by disabling switching to the paths that lead to the partially-connected controller (see Section 6.7.11 and the sketch following this table) or by shutting down the partially-connected controller.
  • Disconnect the partially-connected host from both controllers.
Allocation classes: For parallel SCSI, a valid HSZ allocation class is required (refer to Section 6.5.3). If a SCSI bus is configured with HSZ controllers only, and all the controllers have a valid HSZ allocation class, then it is not necessary to adhere to the older SCSI device naming rules for that bus. That is, the adapters do not require a matching port allocation class, or a matching node allocation class and matching OpenVMS adapter device names.

However, if there are non-HSZ devices on the bus, or HSZ controllers without an HSZ allocation class, then the standard rules for node and port allocation class assignments and controller device names for shared SCSI buses must be followed.

Booting from devices with an HSZ allocation class is supported on all AlphaServers that support the KZPBA-CB except for the AlphaServer 2x00(A).

The controller allocation class is not used for FC devices.
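
The following DCL sketch shows one way to carry out the second step in the full-connectivity list above: disabling switching to the paths that lead to a partially-connected controller. The device and path names are placeholders; Section 6.7.11 describes the supported procedure in detail.

  $ ! Disable switching to the path through the partially-connected controller
  $ ! (device and path names are examples only)
  $ SET DEVICE $1$DGA23: /PATH=PGA0.5000-1FE1-0000-0D04 /NOENABLED

Re-enable the path with the /ENABLED qualifier after the connection is repaired.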

The restrictions for multipath FC and SCSI configurations are presented in Table 6-2.

Table 6-2 Multipath FC and SCSI Configuration Restrictions
Component Description
Devices supported: DKDRIVER disk devices attached to HSZ70, HSZ80, and HSG80 controller modules are supported. Other device types, such as tapes, and generic class drivers, such as GKDRIVER, are not supported.

Note that under heavy load, a host-initiated manual or automatic switch from one controller to another may fail on an HSZ70 or HSZ80 controller. Testing has shown this to occur infrequently. This problem has been fixed for the HSZ70 with the firmware HSOF V7.7 and later versions. The problem will be fixed for the HSZ80 in a future release. This problem does not occur on an HSG80 controller.

Mixed-version and mixed-architecture clusters: All hosts that are connected to an HSZ or HSG in multibus mode must be running OpenVMS Version 7.2 or higher.

As long as MPDEV_REMOTE is set to 0, you can install the V7.2-2S1 kit on any subset of the V7.2-2 nodes in your cluster. This is the way to do a rolling upgrade of the kit in your cluster.

Before you set MPDEV_REMOTE to 1 on a system, all systems that share direct access with that system to any SCSI or Fibre Channel disk must also be running the V7.2-2S1 kit. Because the kit requires V7.2-2, all of these nodes must be running V7.2-2; in particular, such a node cannot be running V7.3.

If you enable MPDEV_REMOTE on one system, Compaq recommends that you enable it on all systems that have direct access to shared SCSI/Fibre Channel devices, which results in higher data availability. Perhaps more important, this is the configuration that has received the most testing. However, there are no known problems if MPDEV_REMOTE is enabled on only a subset of such nodes.

SCSI to MSCP failover and MSCP to SCSI failover: Multiple hosts must be attached to the SCSI disk devices via a shared SCSI bus (either parallel SCSI or Fibre Channel). All the hosts on the shared SCSI bus must be running V7.2-2S1, and the MPDEV_REMOTE system parameter must be set to 1 on these hosts.
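
As a hedged check of this restriction, you can display the current value of MPDEV_REMOTE on each host that shares the bus; the command below illustrates the check, not any required output.

  $ ! Display the current value of MPDEV_REMOTE on this host
  $ MCR SYSGEN SHOW MPDEV_REMOTE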
