
HP OpenVMS Cluster Systems


4.5.7 Starting DECnet

If you are using DECnet-Plus, a separate step is not required to start the network. DECnet-Plus starts automatically on the next reboot after the node has been configured using the NET$CONFIGURE.COM procedure.

If you are using DECnet for OpenVMS, enter the following command at the system prompt to start the network:

$ @SYS$MANAGER:STARTNET.COM

To ensure that the network is started each time an OpenVMS Cluster computer boots, add that command line to the appropriate startup command file or files. (Startup command files are discussed in Section 5.5.)
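
For example, a minimal sketch of the corresponding startup-file entry, assuming the site-specific startup procedure SYS$MANAGER:SYSTARTUP_VMS.COM is where you start layered products on your systems:

$! Start DECnet for OpenVMS at boot time
$ @SYS$MANAGER:STARTNET.COM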

4.5.8 What is Cluster Alias?

The cluster alias acts as a single network node identifier for an OpenVMS Cluster system. When enabled, the cluster alias makes all the OpenVMS Cluster nodes appear to be one node from the point of view of the rest of the network.

Computers in the cluster can use the alias for communications with other computers in a DECnet network. For example, networked applications that use the services of an OpenVMS Cluster should use an alias name. Doing so ensures that remote access succeeds as long as at least one OpenVMS Cluster member is available to process the client program's requests. (A sketch of a typical alias definition follows the list below.)

Note the following restrictions on cluster alias support:
  • DECnet for OpenVMS (Phase IV) allows a maximum of 64 OpenVMS Cluster computers to participate in a cluster alias. If your cluster includes more than 64 computers, you must determine which 64 should participate in the alias and then define the alias on those computers.
    At least one of the OpenVMS Cluster nodes that uses the alias node identifier must have level 1 routing enabled.
    • On Integrity servers and Alpha nodes, routing between multiple circuits is not supported. However, routing is supported to allow cluster alias operations. Level 1 routing is supported only for enabling the use of a cluster alias. The DVNETEXT PAK must be used to enable this limited function.
    • On Integrity servers, Alpha, and VAX systems, all cluster nodes sharing the same alias node address must be in the same area.
  • DECnet-Plus allows a maximum of 96 OpenVMS Cluster computers to participate in the cluster alias.
    DECnet-Plus does not require that a cluster member be a routing node, but an adjacent Phase V router is required to use a cluster alias for DECnet-Plus systems.
  • A single cluster alias can include nodes running either DECnet for OpenVMS or DECnet-Plus, but not both.
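
For illustration, defining an alias on a DECnet for OpenVMS (Phase IV) node typically involves NCP commands along the following lines; the alias address 2.1 and name SOLAR are placeholders, and Section 4.5.6 describes the complete procedure:

$ RUN SYS$SYSTEM:NCP
NCP> DEFINE NODE 2.1 NAME SOLAR
NCP> DEFINE EXECUTOR ALIAS NODE SOLAR
NCP> EXIT

After the network is next started, SOLAR identifies the cluster as a whole rather than any single member.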

4.5.9 Enabling Alias Operations

If you have defined a cluster alias and have enabled routing as shown in Section 4.5.6, you can enable alias operations for other computers after the computers are up and running in the cluster. To enable such operations (that is, to allow a computer to accept incoming connect requests directed toward the alias), follow these steps:

  1. Log in as system manager and invoke the SYSMAN utility. For example:

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN>

  2. At the SYSMAN> prompt, enter commands like the following. (The DO commands shown are a representative sequence that enables alias incoming connections and restarts the network; the exact NCP commands depend on your alias configuration.)

    SYSMAN> SET ENVIRONMENT/CLUSTER
    %SYSMAN-I-ENV, current command environment: 
            Clusterwide on local cluster 
            Username SYSTEM  will be used on nonlocal nodes
    SYSMAN> SET PROFILE/PRIVILEGES=(OPER,SYSPRV)
    SYSMAN> DO MCR NCP SET EXECUTOR STATE OFF
    %SYSMAN-I-OUTPUT, command execution on node X...
    SYSMAN> DO MCR NCP DEFINE EXECUTOR ALIAS INCOMING ENABLED
    %SYSMAN-I-OUTPUT, command execution on node X...
    SYSMAN> DO @SYS$MANAGER:STARTNET.COM
    %SYSMAN-I-OUTPUT, command execution on node X...

Note: HP does not recommend enabling alias operations for satellite nodes.

Reference: For more details about DECnet for OpenVMS networking and cluster alias, see the DECnet for OpenVMS Networking Manual and DECnet for OpenVMS Network Management Utilities. For equivalent information about DECnet-Plus, see the DECnet-Plus documentation.

4.5.10 Configuring TCP/IP

For information on how to configure and start TCP/IP, see the HP TCP/IP Services for OpenVMS Installation and Configuration guide and the HP TCP/IP Services for OpenVMS Version 5.7 Release Notes.

Chapter 5
Preparing a Shared Environment

In any OpenVMS Cluster environment, it is best to share resources as much as possible. Resource sharing facilitates workload balancing because work can be distributed across the cluster.

5.1 Shareable Resources

Most, but not all, resources can be shared across nodes in an OpenVMS Cluster. The following table describes resources that can be shared.

Shareable Resources Description
System disks All members of the same architecture (see note 1 below) can share a single system disk, each member can have its own system disk, or members can use a combination of both methods.
Data disks All members can share any data disks. For local disks, access is limited to the local node unless you explicitly set up the disks to be cluster accessible by means of the MSCP server (see the sketch following this table).
Tape drives All members can share tape drives. (Note that this does not imply that all members can have simultaneous access.) For local tape drives, access is limited to the local node unless you explicitly set up the tapes to be cluster accessible by means of the TMSCP server. Only DSA tapes can be served to all OpenVMS Cluster members.
Batch and print queues Users can submit batch jobs to any queue in the OpenVMS Cluster, regardless of the processor on which the job will actually execute. Generic queues can balance the load among the available processors.
Applications Most applications work in an OpenVMS Cluster just as they do on a single system. Application designers can also create applications that run simultaneously on multiple OpenVMS Cluster nodes, which share data in a file.
User authorization files All nodes can use either a common user authorization file (UAF) for the same access on all systems or multiple UAFs to enable node-specific quotas. If a common UAF is used, all user passwords, directories, limits, quotas, and privileges are the same on all systems.

Note 1: Data on system disks can be shared between Integrity servers and Alpha computers. However, Integrity server nodes cannot boot from an Alpha system disk, and Alpha nodes cannot boot from an Integrity server system disk.
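
As the Data disks and Tape drives entries indicate, locally attached devices are served clusterwide only if the MSCP or TMSCP server is loaded. A minimal sketch of the MODPARAMS.DAT entries involved (the values shown are illustrative; run AUTOGEN and reboot for them to take effect):

MSCP_LOAD = 1        ! Load the MSCP server
MSCP_SERVE_ALL = 2   ! Serve locally attached disks to the cluster
TMSCP_LOAD = 1       ! Load the TMSCP server and serve locally attached tapes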

5.1.1 Local Resources

The following table lists resources that are accessible only to the local node.
Nonshareable Resources Description
Memory Each OpenVMS Cluster member maintains its own memory.
User processes When a user process is created on an OpenVMS Cluster member, the process must complete on that computer, using local memory.
Printers A printer that does not accept input through queues is used only by the OpenVMS Cluster member to which it is attached. A printer that accepts input through queues is accessible by any OpenVMS Cluster member.

5.1.2 Sample Configuration

Figure 5-1 shows an OpenVMS Cluster system that shares FC SAN storage between the Integrity servers and Alpha systems. Each architecture has its own system disk.

Figure 5-1 Resource Sharing in Mixed-Architecture Cluster System (Integrity servers and Alpha)

5.1.3 Storage in a Mixed-Architecture Cluster

This section describes the rules pertaining to storage, including system disks, in a mixed-architecture cluster consisting of OpenVMS Integrity servers and OpenVMS Alpha systems.

Figure 5-2 is a simplified version of a mixed-architecture cluster of OpenVMS Integrity servers and OpenVMS Alpha systems with locally attached storage and a shared Storage Area Network (SAN).

Figure 5-2 Resource Sharing in Mixed-Architecture Cluster System (Integrity servers and Alpha)

Integrity server systems in a mixed-architecture OpenVMS Cluster system:

  • Must have an Integrity server system disk, either a local disk or a shared Fibre Channel disk.
  • Can use served Alpha disks and served Alpha tapes.
  • Can use SAN disks and tapes.
  • Can share the same SAN data disk with Alpha systems.
  • Can serve disks and tapes to other cluster members, both Integrity servers and Alpha systems.

Alpha systems in a mixed-architecture OpenVMS Cluster system:

  • Must have an Alpha system disk, which can be shared with other clustered Alpha systems.
  • Can use locally attached tapes and disks.
  • Can serve disks and tapes to both Integrity servers and Alpha systems.
  • Can use data disks served by Integrity server systems.
  • Can use SAN disks and tapes.
  • Can share the same SAN data disk with Integrity server systems.

5.2 Common-Environment and Multiple-Environment Clusters

Depending on your processing needs, you can prepare either an environment in which all environmental files are shared clusterwide or an environment in which some files are shared clusterwide while others are accessible only by certain computers.

The following table describes the characteristics of common- and multiple-environment clusters.

Common environment
Characteristics: The operating environment is identical on all nodes in the OpenVMS Cluster. The environment is set up so that:
  • All nodes run the same programs, applications, and utilities.
  • All users have the same type of user accounts, and the same logical names are defined.
  • All users can have common access to storage devices and queues. (Note that access is subject to how access control list [ACL] protection is set up for each user.)
  • All users can log in to any node in the configuration and work in the same environment as all other users.
Advantages: Easier to manage, because you use a common version of each system file. (A sketch of a typical common-environment setup follows this table.)

Multiple environment
Characteristics: The operating environment can vary from node to node. An individual processor or a subset of processors is set up to:
  • Provide multiple access according to the type of tasks users perform and the resources they use.
  • Share a set of resources that are not available on other nodes.
  • Perform specialized functions using restricted resources while other processors perform general timesharing work.
  • Allow users to work in environments that are specific to the node where they are logged in.
Advantages: Effective when you want to share some data among computers but also want certain computers to serve specialized needs.
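
As a sketch of one common-environment technique, all nodes can be pointed at a single authorization environment by defining the appropriate logical names in a shared startup file; the file locations shown are illustrative:

$! In SYS$MANAGER:SYLOGICALS.COM (or a clusterwide equivalent)
$ DEFINE/SYSTEM/EXEC SYSUAF     SYS$COMMON:[SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST SYS$COMMON:[SYSEXE]RIGHTSLIST.DAT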

5.3 Directory Structure on Common System Disks

The installation or upgrade procedure for your operating system generates a common system disk, on which most operating system and optional product files are stored in a system root directory.

5.3.1 Directory Roots

The system disk directory structure is the same on Integrity server and Alpha systems. Whether the system disk is for an Integrity server system or an Alpha system, the entire directory structure (the common root plus each computer's local root) is stored on the same disk. After the installation or upgrade completes, you use the CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM command procedure described in Chapter 8 to create a local root for each new computer to use when booting into the cluster.

In addition to the usual system directories, each local root contains a [SYSn.SYSCOMMON] directory that is a directory alias for [VMS$COMMON], the cluster common root directory in which cluster common files actually reside. When you add a computer to the cluster, the command procedure defines the common root directory alias.
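
You can see this relationship on a running system by translating the SYS$SPECIFIC and SYS$COMMON logical names. The output below is representative only; the device name $1$DGA1 and the root SYS0 are placeholders:

$ SHOW LOGICAL SYS$SPECIFIC
   "SYS$SPECIFIC" = "$1$DGA1:[SYS0.]" (LNM$SYSTEM_TABLE)
$ SHOW LOGICAL SYS$COMMON
   "SYS$COMMON" = "$1$DGA1:[SYS0.SYSCOMMON.]" (LNM$SYSTEM_TABLE)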

5.3.2 Directory Structure Example

Figure 5-3 illustrates the directory structure set up for computers JUPITR and SATURN, which are run from a common system disk. The disk's master file directory (MFD) contains the local roots (SYS0 for JUPITR, SYS1 for SATURN) and the cluster common root directory, [VMS$COMMON].

Figure 5-3 Directory Structure on a Common System Disk

5.3.3 Search Order

The logical name SYS$SYSROOT is defined as a search list that points first to a local root (SYS$SYSDEVICE:[SYS0.SYSEXE]) and then to the common root (SYS$COMMON:[SYSEXE]). Thus, the logical names for the system directories (SYS$SYSTEM, SYS$LIBRARY, SYS$MANAGER, and so forth) point to two directories.
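
For example, on a node booted from root SYS0, the search list typically translates as follows (device and root names vary by configuration):

$ SHOW LOGICAL SYS$SYSROOT
   "SYS$SYSROOT" = "SYS$SYSDEVICE:[SYS0.]" (LNM$SYSTEM_TABLE)
        = "SYS$COMMON:"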

Figure 5-4 shows how directories on a common system disk are searched when the logical name SYS$SYSTEM is used in file specifications.

Figure 5-4 File Search Order on Common System Disk

Important: Keep this search order in mind when you manipulate system files on a common system disk. Computer-specific files must always reside and be updated in the appropriate computer's system subdirectory.


  1. MODPARAMS.DAT must reside in SYS$SPECIFIC:[SYSEXE], which is [SYS0.SYSEXE] on JUPITR and [SYS1.SYSEXE] on SATURN. Thus, to create a new MODPARAMS.DAT file for JUPITR when logged in on JUPITR, enter the following command:

     $ EDIT SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT

    Once the file is created, you can use the following command to modify it when logged in on JUPITR:

     $ EDIT SYS$SYSTEM:MODPARAMS.DAT

    Note that if a MODPARAMS.DAT file does not exist in JUPITR's SYS$SPECIFIC:[SYSEXE] directory when you enter this command, but there is a MODPARAMS.DAT file in the directory SYS$COMMON:[SYSEXE], the command edits the MODPARAMS.DAT file in the common directory. If there is no MODPARAMS.DAT file in either directory, the command creates the file in JUPITR's SYS$SPECIFIC:[SYSEXE] directory.
  2. To modify JUPITR's MODPARAMS.DAT when logged in on any other computer that boots from the same common system disk, enter the following command:

     $ EDIT SYS$SYSDEVICE:[SYS0.SYSEXE]MODPARAMS.DAT

  3. To modify records in the cluster common system authorization file in a cluster with a single, cluster-common system disk, enter the following commands on any computer:

     $ SET DEFAULT SYS$COMMON:[SYSEXE]
     $ RUN SYS$SYSTEM:AUTHORIZE

  4. To modify records in a computer-specific system authorization file when logged in to another computer that boots from the same cluster common system disk, you must set your default directory to the specific computer. For example, if you have set up a computer-specific system authorization file (SYSUAF.DAT) for computer JUPITR, you must set your default directory to JUPITR's computer-specific [SYSEXE] directory before invoking AUTHORIZE, as follows:

     $ SET DEFAULT SYS$SYSDEVICE:[SYS0.SYSEXE]
     $ RUN SYS$SYSTEM:AUTHORIZE

5.4 Clusterwide Logical Names

Clusterwide logical names, introduced in OpenVMS Version 7.2, extend the convenience and ease-of-use features of shareable logical names to OpenVMS Cluster systems. Clusterwide logical names are available on OpenVMS Integrity server and OpenVMS Alpha systems, in both single-architecture and mixed-architecture OpenVMS Clusters.

Existing applications can take advantage of clusterwide logical names without any changes to the application code. Only a minor modification to the logical name tables referenced by the application (directly or indirectly) is required.

New logical names are local by default. Clusterwide is an attribute of a logical name table. In order for a new logical name to be clusterwide, it must be created in a clusterwide logical name table.
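
For example, a name created in the default clusterwide table LNM$SYSCLUSTER (see Section 5.4.1) is visible on every member, whereas an ordinary system logical name is not. The logical name and directory below are placeholders:

$ DEFINE/TABLE=LNM$SYSCLUSTER CORPORATE_REPORTS DKA100:[REPORTS]
$ SHOW LOGICAL/TABLE=LNM$SYSCLUSTER_TABLE CORPORATE_REPORTS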

Some of the most important features of clusterwide logical names are:

  • When a new node joins the cluster, it automatically receives the current set of clusterwide logical names.
  • When a clusterwide logical name or name table is created, modified, or deleted, the change is automatically propagated to every other node in the cluster running OpenVMS Version 7.2 or later. Modifications include security profile changes to a clusterwide table.
  • Translations are done locally so there is minimal performance degradation for clusterwide name translations.
  • Because LNM$CLUSTER_TABLE and LNM$SYSCLUSTER_TABLE exist on all systems running OpenVMS Version 7.2 or later, the programs and command procedures that use clusterwide logical names can be developed, tested, and run on nonclustered systems.

5.4.1 Default Clusterwide Logical Name Tables

To support clusterwide logical names, the operating system creates two clusterwide logical name tables and their logical names at system startup, as shown in Table 5-1. These logical name tables and logical names are in addition to the ones supplied for the process, job, group, and system logical name tables. The names of the clusterwide logical name tables are contained in the system logical name directory, LNM$SYSTEM_DIRECTORY.

Table 5-1 Default Clusterwide Logical Name Tables and Logical Names
Name Purpose
LNM$SYSCLUSTER_TABLE The default table for clusterwide system logical names. It is empty when shipped. This table is provided for system managers who want to use clusterwide logical names to customize their environments. The names in this table are available to anyone translating a logical name using SHOW LOGICAL/SYSTEM, or specifying a table name of LNM$SYSTEM, LNM$DCL_LOGICAL (DCL's default table search list), or LNM$FILE_DEV (the system and RMS default).
LNM$SYSCLUSTER The logical name for LNM$SYSCLUSTER_TABLE. It is provided for convenience in referencing LNM$SYSCLUSTER_TABLE. It is consistent in format with LNM$SYSTEM_TABLE and its logical name, LNM$SYSTEM.
LNM$CLUSTER_TABLE The parent table for all clusterwide logical name tables, including LNM$SYSCLUSTER_TABLE. When you create a new table using LNM$CLUSTER_TABLE as the parent table, the new table will be available clusterwide.
LNM$CLUSTER The logical name for LNM$CLUSTER_TABLE. It is provided for convenience in referencing LNM$CLUSTER_TABLE.

5.4.2 Translation Order

The definition of LNM$SYSTEM has been expanded to include LNM$SYSCLUSTER. When a system logical name is translated, the search order is LNM$SYSTEM_TABLE followed by LNM$SYSCLUSTER_TABLE. Because the definitions for the system default table names, LNM$FILE_DEV and LNM$DCL_LOGICAL, include LNM$SYSTEM, translations using those default tables include definitions in LNM$SYSCLUSTER.

The current precedence order for resolving logical names is preserved. Clusterwide logical names that are translated against LNM$FILE_DEV are resolved last, after system logical names. The precedence order, from first to last, is process --> job --> group --> system --> cluster, as shown in Figure 5-5.
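
A quick way to observe this order: if the same name is defined both clusterwide and in your process table, translation returns the process value first. The names, device, and directory below are placeholders:

$ DEFINE/TABLE=LNM$SYSCLUSTER REPORT_DIR DKA100:[CLUSTER.REPORTS]
$ DEFINE REPORT_DIR SYS$LOGIN:
$ WRITE SYS$OUTPUT F$TRNLNM("REPORT_DIR")
SYS$LOGIN: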

Figure 5-5 Translation Order Specified by LNM$FILE_DEV

5.4.3 Creating Clusterwide Logical Name Tables

You might want to create additional clusterwide logical name tables for the following purposes:

  • For a multiprocess clusterwide application to use
  • For members of a UIC group to share

To create a clusterwide logical name table, you must have create (C) access to the parent table and write (W) access to LNM$SYSTEM_DIRECTORY, or the SYSPRV (system) privilege.

A shareable logical name table has UIC-based protection. Each class of user (system (S), owner (O), group (G), and world (W)) can be granted four types of access: read (R), write (W), create (C), or delete (D).

You can create additional clusterwide logical name tables in the same way that you create additional process, job, and group logical name tables, using the CREATE/NAME_TABLE command or the $CRELNT system service. When creating a clusterwide logical name table, you must specify the /PARENT_TABLE qualifier and give it the name of an existing clusterwide table. Any existing clusterwide table used as the parent table makes the new table clusterwide.

The following example shows how to create a clusterwide logical name table:

$ CREATE/NAME_TABLE/PARENT_TABLE=LNM$CLUSTER_TABLE -
_$ new-clusterwide-logical-name-table
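
Once the table exists, any name you define in it is propagated clusterwide in the same way. A minimal follow-on sketch, with placeholder name and directory:

$ DEFINE/TABLE=new-clusterwide-logical-name-table PAYROLL_DATA DKA300:[PAYROLL]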
