HP OpenVMS Systems Documentation
HP OpenVMS Cluster Systems
Depending on the level of network security required, you might also want to consider how other security mechanisms, such as protocol encryption and decryption, can promote additional security protection across the cluster.
Reference: See the HP OpenVMS Guide to System Security.
To prepare a common user environment for an OpenVMS Cluster system that includes more than one common OpenVMS Integrity server system disk or more than one common OpenVMS Alpha system disk, you must coordinate the system files on those disks.
Rules: The following rules apply to the procedures described in Table 5-4:
5.8.2 Network Database Files
In OpenVMS Cluster systems on the LAN and in mixed-interconnect clusters, you must also coordinate the SYS$MANAGER:NETNODE_UPDATE.COM file, which is a file that contains all essential network configuration data for satellites. NETNODE_UPDATE.COM is updated each time you add or remove a satellite or change its Ethernet or FDDI hardware address. This file is discussed more thoroughly in Section 10.4.2.
In OpenVMS Cluster systems configured with DECnet for OpenVMS software, you must also coordinate NETNODE_REMOTE.DAT, which is the remote node network database.
When a computer joins the cluster, the cluster attempts to set the joining computer's system time to the current time on the cluster. Although it is likely that the system time will be similar on each cluster computer, there is no assurance that the time will be set. Also, no attempt is made to ensure that the system times remain similar throughout the cluster. (For example, there is no protection against different computers having different clock rates.)
An OpenVMS Cluster system spanning multiple time zones must use a single, clusterwide common time on all nodes. Use of a common time ensures timestamp consistency (for example, between applications and file-system instances) across the OpenVMS Cluster members.
Use the SYSMAN command CONFIGURATION SET TIME to set the time across the cluster. This command issues warnings if the time on all nodes cannot be set within certain limits. Refer to the HP OpenVMS System Manager's Manual for information about the SET TIME command.
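As a sketch, the clusterwide time can be set from the SYSMAN utility; the timestamp shown is illustrative:

```
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> CONFIGURATION SET TIME 15-JAN-2005:10:30:00.00
SYSMAN> EXIT
```

Setting the environment to /CLUSTER first causes the CONFIGURATION SET TIME command to act on all cluster members rather than on the local node only.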
| Method | Device Access | Comments | Illustrated in |
|---|---|---|---|
| Local | Restricted to the computer that is directly connected to the device. | Can be set up to be served to other systems. | Figure 6-3 |
| Dual ported | Using either of two physical ports, each of which can be connected to separate controllers. A dual-ported disk can survive the failure of a single controller by failing over to the other controller. | As long as one of the controllers is available, the device is accessible by all systems in the cluster. | Figure 6-1 |
| Shared | Through a shared interconnect to multiple systems. | Can be set up to be served to systems that are not on the shared interconnect. | Figure 6-2 |
| Served | Through a computer that has the MSCP or TMSCP server software loaded. | MSCP and TMSCP serving are discussed in Section 6.3. | Figures 6-2 and 6-3 |
| Dual pathed | Possible through more than one path. | If one path fails, the device is accessed over the other path. Requires the use of allocation classes (described in Section 6.2.1) to provide a unique, path-independent name. | Figure 6-2 |

Note: The path to an individual disk may appear to be local from some nodes and served from others.
When storage subsystems are connected directly to a specific system, the availability of the subsystem is lower due to the reliance on the host system. To increase the availability of these configurations, OpenVMS Cluster systems support dual porting, dual pathing, and MSCP and TMSCP serving.
Figure 6-1 shows a dual-ported configuration, in which the disks have independent connections to two separate computers. As long as one of the computers is available, the disk is accessible by the other systems in the cluster.
Figure 6-1 Dual-Ported Disks
Note: Disks can be shadowed using Volume Shadowing for OpenVMS. The automatic recovery from system failure provided by dual porting and shadowing is transparent to users and does not require any operator intervention.
Figure 6-2 shows a dual-pathed FC and Ethernet configuration. The disk devices, accessible through a shared SCSI interconnect, are MSCP served to the client nodes on the LAN.
Rule: A dual-pathed DSA disk cannot be used as a system disk for a directly connected CPU. Because a device can be online to only one controller at a time, only one of the server nodes can use its local connection to the device. The second server node accesses the device through the MSCP server (or the TMSCP server). If the computer that is currently serving the device fails, the other computer detects the failure and fails the device over to its local connection. The device thereby remains available to the cluster.
Dual-pathed disks or tapes can be failed over between two computers that serve the devices to the cluster, provided that:
Caution: Failure to observe these requirements can endanger data integrity.
You can set up HSG or HSV storage devices to be dual ported between two storage subsystems, as shown in Figure 6-3.
Figure 6-3 Configuration with Cluster-Accessible Devices
By design, HSG and HSV disks and tapes are directly accessible by all OpenVMS Cluster nodes that are connected to the same star coupler. Therefore, if the devices are dual ported, they are automatically dual pathed. Computers connected by FC can access a dual-ported HSG or HSV device by way of a path through either subsystem connected to the device. If one subsystem fails, access fails over to the other subsystem.
Note: To control the path that is taken during failover, you can specify a preferred path to force access to disks over a specific path. Section 6.1.3 describes the preferred-path capability.
See Chapter 6 of Guidelines for OpenVMS Cluster Configurations, Configuring Multiple Paths to SCSI and Fibre Channel Storage, for more information on FC storage devices.
6.1.3 Specifying a Preferred Path
The operating system supports specifying a preferred path for DSA disks, including RA series disks and disks that are accessed through the MSCP server. (This function is not available for tapes.) If a preferred path is specified for a disk, the MSCP disk class drivers use that path.
You can specify the preferred path by using the SET PREFERRED_PATH DCL command or by using the $QIO function (IO$_SETPRFPATH), with the P1 parameter containing the address of a counted ASCII string (.ASCIC). This string is the node name of the HSG or HSV, or of the OpenVMS system that is to be the preferred path.
Rule: The node name must match an existing node running the MSCP server that is known to the local node.
Reference: For more information about the use of the SET PREFERRED_PATH DCL command, refer to the HP OpenVMS DCL Dictionary: N--Z.
For more information about the use of the IO$_SETPRFPATH function,
refer to the HP OpenVMS I/O User's Reference Manual.
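As a hedged sketch, a preferred path might be set from DCL as follows; the device and node names are illustrative, and the exact qualifier syntax should be verified against the SET PREFERRED_PATH entry in the HP OpenVMS DCL Dictionary:

```
$ ! Prefer the path through HSG subsystem HSG001 for disk $1$DGA101:
$ ! (device and node names are illustrative)
$ SET PREFERRED_PATH $1$DGA101: /HOST=HSG001
```

The node name given must satisfy the rule above: it must match an existing node, known to the local node, that is running the MSCP server.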
6.2 Naming OpenVMS Cluster Storage Devices
The naming convention of Fibre Channel devices is documented in the Fibre Channel chapter of Guidelines for OpenVMS Cluster Configurations. The naming of all other devices is described in this section.
In the OpenVMS operating system, a device name takes the form of ddcu, where:

- dd is the device code, which identifies the device type (for example, DK for a SCSI disk)
- c is the controller designation letter
- u is the unit number
For SCSI, the controller letter is assigned by OpenVMS, based on the system configuration. The unit number is determined by the SCSI bus ID and the logical unit number (LUN) of the device.
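For example, under this convention a SCSI disk at bus ID 3, LUN 0, on a controller that OpenVMS has assigned the letter A, would be named as shown below (the common practice of deriving the unit number as bus ID x 100 + LUN is assumed here):

```
DK A 300
|  |  |
|  |  +-- unit number: (SCSI bus ID 3 x 100) + LUN 0
|  +----- controller letter A, assigned by OpenVMS
+-------- device code DK (SCSI disk)
```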
Because device names must be unique in an OpenVMS Cluster, and because every cluster member must use the same name for the same device, OpenVMS adds a prefix to the device name, as follows:

- If the node serving the device has a nonzero node allocation class, the device name is prefixed with the allocation class in the form $allocation-class$ (for example, $1$DUA17).
- If the node allocation class is 0, the device name is prefixed with the node name (for example, JUPITR$DUA17).
The purpose of allocation classes is to provide unique and unchanging device names. The device name is used by the OpenVMS Cluster distributed lock manager in conjunction with OpenVMS facilities (such as RMS and the XQP) to uniquely identify shared devices, files, and data.
Allocation classes are required in OpenVMS Cluster configurations where storage devices are accessible through multiple paths. Without the use of allocation classes, device names that relied on node names would change as access paths to the devices change.
Prior to OpenVMS Version 7.1, only one type of allocation class existed: the node-based type, known simply as the allocation class. OpenVMS Version 7.1 introduced a second type, the port allocation class, which is specific to a single interconnect and is assigned to all devices attached to that interconnect. Port allocation classes were originally designed for naming SCSI devices. Their use has been expanded to include additional device types: floppy disks, PCI RAID controller disks, and IDE disks.
The use of port allocation classes is optional. They are designed to solve the device-naming and configuration conflicts that can occur in certain configurations, as described in Section 6.2.3.
To differentiate between the earlier node-based allocation class and the newer port allocation class, the term node allocation class was assigned to the earlier type.
Prior to OpenVMS Version 7.2, all nodes with direct access to the same multipathed device were required to use the same nonzero value for the node allocation class. OpenVMS Version 7.2 introduced the MSCP_SERVE_ALL system parameter, which can be set to serve all disks or to exclude those whose node allocation class differs.
If SCSI devices are connected to multiple hosts and if port allocation classes are not used, then all nodes with direct access to the same multipathed devices must use the same nonzero node allocation class.
Multipathed MSCP controllers also have an allocation class parameter, which is set to match that of the connected nodes. (If the allocation class does not match, the devices attached to the nodes cannot be served.)
6.2.2 Specifying Node Allocation Classes
A node allocation class can be assigned to computers and to HSG or HSV controllers. The node allocation class is a numeric value from 1 to 255 that is assigned by the system manager.
The default node allocation class value is 0. A node allocation class value of 0 is appropriate only when serving a local, single-pathed disk. If a node allocation class of 0 is assigned, served devices are named using the node-name$device-name syntax, that is, the device name prefix reverts to the node name.
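To illustrate the two naming forms, the same served disk unit would be named as follows under the two settings (the node and unit numbers are illustrative, matching the examples used later in this section):

```
JUPITR$DUA17    ! node allocation class 0: prefix reverts to the node name
$1$DUA17        ! node allocation class 1: unique, path-independent name
```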
The following rules apply to specifying node allocation class values:
System managers provide node allocation classes separately for disks and tapes. The node allocation class for disks and the node allocation class for tapes can be different.
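As a sketch, both values can be set with the SYSGEN utility through the ALLOCLASS and TAPE_ALLOCLASS system parameters; the value 1 is illustrative, and the new values take effect only after the cluster is shut down and rebooted (see Section 8.6):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET ALLOCLASS 1          ! node allocation class for disks
SYSGEN> SET TAPE_ALLOCLASS 1     ! node allocation class for tapes
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
```

In practice, these parameter settings are usually also added to SYS$SYSTEM:MODPARAMS.DAT so that AUTOGEN preserves them across subsequent runs.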
The node allocation class names are constructed by bracketing the allocation class value with dollar signs and prefixing it to the device name, in the form $allocation-class$device-name (for example, $1$DUA17).
Caution: Failure to set node allocation class values and device unit numbers correctly can endanger data integrity and cause locking conflicts that suspend normal cluster operations.
Figure 6-5 includes satellite nodes that access devices $1$DUA17 and $1$MUA12 through the JUPITR and NEPTUN computers. In this configuration, the computers JUPITR and NEPTUN require node allocation classes so that the satellite nodes are able to use consistent device names regardless of the access path to the devices.
Note: System management is usually simplified by using the same node allocation class value for all servers and all HSG and HSV subsystems; you can arbitrarily choose a number between 1 and 255. Note, however, that to change a node allocation class value, you must shut down and reboot the entire cluster (described in Section 8.6). If you use a common node allocation class for computers and controllers, ensure that all devices have unique unit numbers.