HP OpenVMS Systems Documentation
HP OpenVMS Cluster Systems
10.7.2 Sharing Dump Files
Another option for saving dump-file space is to share a single dump file among multiple computers. While this technique makes it possible to analyze isolated computer failures, dumps will be lost if multiple computers fail at the same time or if a second computer fails before you can analyze the first failure. Because boot server failures have a greater impact on cluster operation than failures of other computers do, you should configure dump files on boot servers to help ensure speedy analysis of problems.
Dump files cannot be shared between architectures. However, you can share one dump file among multiple Alpha computers, another among multiple Integrity server computers, and another among multiple VAX computers. Follow these steps for each operating system:
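For example, a shared dump file can be created in the cluster common directory with the SYSGEN utility. The file size shown here is purely illustrative; choose a size large enough for a dump from the computer with the largest memory configuration that will share the file:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> CREATE SYS$COMMON:[SYSEXE]SYSDUMP.DMP/SIZE=250000
    SYSGEN> EXIT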
10.8 Maintaining the Integrity of OpenVMS Cluster Membership
Because multiple LAN and mixed-interconnect clusters can coexist on a single extended LAN, the operating system provides mechanisms to ensure the integrity of individual clusters and to prevent access to a cluster by an unauthorized computer.
The following mechanisms are designed to ensure the integrity of the cluster:
The purpose of the cluster group number and password is to prevent accidental access to the cluster by an unauthorized computer. Under normal conditions, the system manager specifies the cluster group number and password either during installation or when you run CLUSTER_CONFIG.COM (see Example 8-13) to convert a standalone computer to run in an OpenVMS Cluster system.
OpenVMS Cluster systems use these mechanisms to protect the integrity of the cluster in order to prevent problems that could otherwise occur under circumstances like the following:
The cluster authorization file, SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT, contains the cluster group number and (in scrambled form) the cluster password. The CLUSTER_AUTHORIZE.DAT file is accessible only to users with the SYSPRV privilege.
Under normal conditions, you need not alter records in the CLUSTER_AUTHORIZE.DAT file interactively. However, if you suspect a security breach, you may want to change the cluster password. In that case, you use the SYSMAN utility to make the change.
Example 10-2 illustrates the use of the SYSMAN utility to change the cluster password.
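A session of the kind shown in Example 10-2 follows this general pattern (the password shown is illustrative; note that a change to the cluster password does not take effect until the cluster members are rebooted):

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> SET ENVIRONMENT/CLUSTER
    SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/PASSWORD=newpassword
    SYSMAN> EXIT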
10.9 Adjusting Packet Size for LAN or IP Configurations
You can adjust the maximum packet size for LAN configurations with the NISCS_MAX_PKTSZ system parameter.
Starting with OpenVMS Version 7.3, the operating system (PEdriver) automatically detects the maximum packet size of all the virtual circuits to which the system is connected. If the maximum packet size of the system's interconnects is smaller than the default packet-size setting, PEdriver automatically reduces the default packet size.
Starting with OpenVMS Version 8.4, OpenVMS can use HP TCP/IP Services for cluster communications using the UDP protocol. NISCS_MAX_PKTSZ affects only the LAN channel payload size. To adjust the IP channel payload size, use the NISCS_UDP_PKTSZ parameter. For more information about the NISCS_UDP_PKTSZ parameter, see HELP.
To obtain this parameter's current, default, minimum, and maximum values, issue the following command:
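One way to display these values is with the SYSGEN utility, whose SHOW command lists the current, default, minimum, and maximum values for a parameter:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> SHOW NISCS_MAX_PKTSZ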
You can use the NISCS_MAX_PKTSZ parameter to reduce packet size, which in turn can reduce memory consumption. However, reducing packet size can also increase CPU utilization for block data transfers, because more packets will be required to transfer a given amount of data. Lock message packets are smaller than the minimum value, so the NISCS_MAX_PKTSZ setting will not affect locking performance.
You can also use NISCS_MAX_PKTSZ to force use of a common packet size on all LAN paths by bounding the packet size to that of the LAN path with the smallest packet size. Using a common packet size can avoid VC closure due to packet size reduction when failing down to a slower, smaller packet size network.
If a memory-constrained system, such as a workstation, has adapters to a network path with large-size packets, such as FDDI or Gigabit Ethernet with jumbo packets, then you may want to conserve memory by reducing the value of the NISCS_MAX_PKTSZ parameter.
This parameter specifies the upper limit on the size, in bytes, of the user data area in the largest packet sent by NISCA on any IP network.
NISCS_UDP_PKTSZ allows the system manager to change the packet size used for cluster communications over IP on network communication paths.
PEdriver uses NISCS_UDP_PKTSZ to compute the maximum amount of data to transmit in any packet.
Currently, the maximum payload over an IP channel is defined by one of the following three parameters; the smallest of the three values is in effect.
10.9.4 Editing Parameter Files
If you decide to change the value of the NISCS_MAX_PKTSZ or NISCS_UDP_PKTSZ parameter, edit the SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT file to permit AUTOGEN to factor the changed packet size into its calculations.
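For example, you might add a line such as the following to MODPARAMS.DAT (the value shown is illustrative) and then run AUTOGEN so that the change takes effect at the next reboot:

    NISCS_MAX_PKTSZ = 1498

    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT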
On Alpha systems, process quota default values in SYSUAF.DAT are often higher than the SYSUAF.DAT defaults on VAX systems. How, then, do you choose values for processes that could run on Alpha systems or on VAX systems in an OpenVMS Cluster? Understanding how a process is assigned quotas when the process is created in a dual-architecture OpenVMS Cluster configuration will help you manage this task.
The quotas to be used by a new process are determined by the OpenVMS LOGINOUT software. LOGINOUT works the same on OpenVMS Alpha and OpenVMS VAX systems. When a user logs in and a process is started, LOGINOUT uses the larger of:
Example: LOGINOUT compares the value of the account's ASTLM process limit (as defined in the common SYSUAF.DAT) with the value of the PQL_MASTLM system parameter on the host Alpha system or on the host VAX system in the OpenVMS Cluster.
The letter M in PQL_M means minimum. The PQL_Mquota system parameters set a minimum value for the quotas. In the Current and Default columns of the following edited SYSMAN display, note how the current value of each PQL_Mquota parameter exceeds its system-defined default value in most cases.
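A display of this kind can be produced with the SYSMAN utility; the following command sequence shows the PQL parameters on the local system:

    $ RUN SYS$SYSTEM:SYSMAN
    SYSMAN> PARAMETERS USE CURRENT
    SYSMAN> PARAMETERS SHOW /PQL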
In this display, the values for many PQL_Mquota parameters have increased from the defaults to their current values. Typically, this happens over time when AUTOGEN feedback is run periodically on your system. The PQL_Mquota values can also change, of course, when you modify the values in MODPARAMS.DAT or in SYSMAN. If you plan to use a common SYSUAF.DAT in an OpenVMS Cluster, with both Integrity servers and Alpha computers, remember the dynamic nature of the PQL_Mquota parameters.
The following table summarizes common SYSUAF.DAT scenarios and probable results on Integrity servers and Alpha computers in an OpenVMS Cluster system.
You might decide to experiment with the higher process-quota values that usually are associated with an OpenVMS Alpha system's SYSUAF.DAT as you determine values for a common SYSUAF.DAT in an OpenVMS Cluster environment. The higher Alpha-level process quotas might be appropriate for processes created on host Integrity server nodes in the OpenVMS Cluster if the Integrity server systems have large available memory resources.
You can determine the values that are appropriate for processes on your Integrity server and Alpha systems by experimentation and modification over time. Factors in your decisions about appropriate limit and quota values for each process will include the following:
10.11 Restoring Cluster Quorum
During the life of an OpenVMS Cluster system, computers join and leave the cluster. For example, you may need to add more computers to the cluster to extend the cluster's processing capabilities, or a computer may shut down because of a hardware or fatal software error. The connection management software coordinates these cluster transitions and controls cluster operation.
When a computer shuts down, the remaining computers, with the help of the connection manager, reconfigure the cluster, excluding the computer that shut down. The cluster can survive the failure of the computer and continue process operations as long as the cluster votes total is greater than the cluster quorum value. If the cluster votes total falls below the cluster quorum value, the cluster suspends the execution of all processes.
For process execution to resume, the cluster votes total must be restored to a value greater than or equal to the cluster quorum value. Often, the required votes are added as computers join or rejoin the cluster. However, waiting for a computer to join the cluster and increase the votes total is not always a simple or convenient remedy. An alternative solution, for example, might be to shut down and reboot all the computers with a reduced quorum value.
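When the cluster still has quorum but you have permanently removed voting members, one way to lower the quorum value without rebooting is the DCL command SET CLUSTER/EXPECTED_VOTES. If you omit the value, the system calculates an appropriate one from the current configuration; the value shown here is illustrative:

    $ SET CLUSTER/EXPECTED_VOTES=3

Note that this command cannot help a cluster that has already lost quorum and suspended process execution, because DCL commands cannot be executed on a hung member.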
After the failure of a computer, you may want to run the Show Cluster utility and examine values for the VOTES, EXPECTED_VOTES, CL_VOTES, and CL_QUORUM fields. (See the HP OpenVMS System Management Utilities Reference Manual for a complete description of these fields.) The VOTES and EXPECTED_VOTES fields show the settings for each cluster member; the CL_VOTES and CL_QUORUM fields show the cluster votes total and the current cluster quorum value.
To examine these values, enter the following commands:
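One possible command sequence follows; the ADD command names the fields to include in the continuous display:

    $ SHOW CLUSTER/CONTINUOUS
    Command > ADD VOTES,EXPECTED_VOTES,CL_VOTES,CL_QUORUM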
Note: If you want to enter SHOW CLUSTER commands interactively, you must specify the /CONTINUOUS qualifier as part of the SHOW CLUSTER command string. If you do not specify this qualifier, SHOW CLUSTER displays a single snapshot of cluster status information and returns you to the DCL command level.
If the display from the Show Cluster utility shows the CL_VOTES value equal to the CL_QUORUM value, the cluster cannot survive the failure of any remaining voting member. If one of these computers shuts down, all process activity in the cluster stops.