HP OpenVMS Systems Documentation


OpenVMS Version 7.3
New Features and Documentation Overview


4.4 Dedicated CPU Lock Manager (Alpha)

The Dedicated CPU Lock Manager is a new feature that improves performance on large SMP systems that have heavy lock manager activity. The feature dedicates a CPU to performing lock manager operations.

A dedicated CPU provides the following advantages for overall system performance:

  • Reduces the amount of MP_SYNCH time
  • Provides good CPU cache utilization

4.4.1 Implementing the Dedicated CPU Lock Manager

For the Dedicated CPU Lock Manager to be effective, systems must have a high CPU count and a high amount of MP_SYNCH time due to the lock manager. Use the MONITOR utility and the MONITOR MODES command to see the amount of MP_SYNCH. If your system has more than five CPUs and MP_SYNCH is higher than 200%, your system may be able to take advantage of the Dedicated CPU Lock Manager. You can also use the spinlock trace feature in the System Dump Analyzer (SDA) to help determine whether the lock manager is contributing to the high amount of MP_SYNCH time.
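The relevant figures can be checked at the DCL prompt; this is a sketch, and qualifiers beyond the class name are optional:

```
$ ! Show time spent in each processor mode, including MP synchronization
$ MONITOR MODES/ALL
```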

The Dedicated CPU Lock Manager is implemented by the LCKMGR_SERVER process, which runs at priority 63. When the Dedicated CPU Lock Manager is turned on, this process runs in a compute-bound loop, looking for lock manager work to perform. Because the process polls for work, it is always computable; with a priority of 63, it never gives up the CPU and thus consumes a whole CPU.

If the Dedicated CPU Lock Manager is running when a program calls either the $ENQ or $DEQ system services, a lock manager request is placed on a work queue for the Dedicated CPU Lock Manager. While a process waits for a lock request to be processed, the process spins in kernel mode at IPL 2. After the dedicated CPU processes the request, the status for the system service is returned to the process.

The Dedicated CPU Lock Manager is dynamic and can be turned off if it provides no perceived benefit. When the Dedicated CPU Lock Manager is turned off, the LCKMGR_SERVER process is placed in a HIB (hibernate) state. Once started, the process cannot be deleted.

4.4.2 Enabling the Dedicated CPU Lock Manager

To use the Dedicated CPU Lock Manager, set the LCKMGR_MODE system parameter. Note the following about the LCKMGR_MODE system parameter:

  • Zero (0) indicates the Dedicated CPU Lock Manager is off (the default).
  • A number greater than zero (0) indicates the number of CPUs that should be active before the Dedicated CPU Lock Manager is turned on.

Setting LCKMGR_MODE to a number greater than zero (0) triggers the creation of a detached process called LCKMGR_SERVER. The process is created, and it starts running if the number of active CPUs equals the number set by the LCKMGR_MODE system parameter.
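Assuming LCKMGR_MODE can be changed on the active system (the feature is described as dynamic), the parameter can be set with SYSGEN; the threshold value 8 below is illustrative:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_MODE 8
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
```

To make the setting permanent, add the same value to MODPARAMS.DAT and run AUTOGEN.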

In addition, if the number of active CPUs should ever be reduced below the required threshold, by either a STOP/CPU command or CPU reassignment in a Galaxy configuration, the Dedicated CPU Lock Manager automatically turns off within one second, and the LCKMGR_SERVER process goes into a hibernate state. If the required number of CPUs is restored, the LCKMGR_SERVER process resumes operation.

4.4.3 Using the Dedicated CPU Lock Manager With Affinity

The LCKMGR_SERVER process uses the affinity mechanism to bind itself to the lowest CPU ID other than the primary. You can change this by specifying another CPU ID with the LCKMGR_CPUID system parameter. The Dedicated CPU Lock Manager then attempts to use this CPU. If this CPU is not available, it reverts to the lowest CPU other than the primary.

The following shows how to dynamically change the CPU used by the LCKMGR_SERVER process:
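A representative SYSGEN session follows; the CPU ID value 2 is illustrative:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_CPUID 2
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
```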


This change applies to the currently running system. After a reboot, the process reverts to the lowest CPU other than the primary. To permanently change the CPU used by the LCKMGR_SERVER process, set LCKMGR_CPUID in your MODPARAMS.DAT file.

To verify the CPU dedicated to the lock manager, use the SHOW SYSTEM command, as follows:

OpenVMS V7.3 on node JYGAL  24-OCT-2000 10:10:11.31  Uptime  3 20:16:56
  Pid    Process Name    State  Pri      I/O       CPU       Page flts  Pages
4CE0021C LCKMGR_SERVER   CUR  2  63        9   3 20:15:47.78        70     84

Note that the State field shows the process is currently running on CPU 2.

Compaq highly recommends that a process not be given hard affinity to the CPU used by the Dedicated CPU Lock Manager. With hard affinity, when such a process becomes computable, it cannot obtain any CPU time, because the LCKMGR_SERVER process is running at the highest possible real-time priority of 63. However, the LCKMGR_SERVER process detects once per second whether any computable processes have been set by the affinity mechanism to the dedicated lock manager CPU. If so, the LCKMGR_SERVER process switches to a different CPU for one second to allow the waiting process to run.

4.4.4 Using the Dedicated CPU Lock Manager with Fast Path Devices

OpenVMS Version 7.3 also introduces Fast Path for SCSI and Fibre Channel Controllers along with the existing support of CIPCA adapters. The Dedicated CPU Lock Manager supports both the LCKMGR_SERVER process and Fast Path devices on the same CPU. However, this may not produce optimal performance.

By default, the LCKMGR_SERVER process runs on the first available nonprimary CPU. Compaq recommends that the CPU used by the LCKMGR_SERVER process not have any Fast Path devices. This can be accomplished in either of the following ways:

  • You can eliminate the first available nonprimary CPU as an available Fast Path CPU. To do so, clear the bit associated with the CPU ID from the IO_PREFER_CPUS system parameter.
    For example, suppose your system has eight CPUs with CPU IDs zero through seven and four SCSI adapters that will use Fast Path. Clearing bit 1 from IO_PREFER_CPUS would result in the four SCSI devices being bound to CPUs 2, 3, 4, and 5. CPU 1, the default CPU the lock manager will use, would then have no Fast Path devices.
  • You can set the LCKMGR_CPUID system parameter to tell the LCKMGR_SERVER process to use a CPU other than the default. For the above example, setting this system parameter to 7 would result in the LCKMGR_SERVER process running on CPU 7. The Fast Path devices would by default be bound to CPUs 1, 2, 3, and 4.
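Either approach from the example above can be recorded in MODPARAMS.DAT; the values here are illustrative for the eight-CPU configuration described:

```
! Option 1: keep Fast Path devices off CPU 1, the lock manager's default CPU
! (253 = binary 11111101, an eight-CPU mask with bit 1 cleared)
IO_PREFER_CPUS = 253
! Option 2 (alternative): move the lock manager to CPU 7 instead
! LCKMGR_CPUID = 7
```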

4.4.5 Using the Dedicated CPU Lock Manager on the AlphaServer GS Series Systems

The new AlphaServer GS Series Systems (GS80, GS160, and the GS320) have NUMA memory characteristics. When using the Dedicated CPU Lock Manager on one of these systems, the best performance is obtained by utilizing a CPU and memory from within a single Quad Building Block (QBB).

For OpenVMS Version 7.3, the Dedicated CPU Lock Manager does not yet have the ability to decide from where QBB memory should be allocated. However, there is a method to preallocate lock manager memory from the low QBB. This can be done with the LOCKIDTBL system parameter. This system parameter indicates the initial size of the Lock ID Table, along with the initial amount of memory to preallocate for lock manager data structures.

To preallocate the proper amount of memory, this system parameter should be set to the highest number of locks plus resources on the system. The command MONITOR LOCK can provide this information. If MONITOR indicates the system has 100,000 locks and 50,000 resources, then setting LOCKIDTBL to the sum of these two values will ensure that enough memory is initially allocated. Adding in some additional overhead may also be beneficial. Setting LOCKIDTBL to 200,000 thus might be appropriate.

If necessary, use the LCKMGR_CPUID system parameter to ensure that the LCKMGR_SERVER runs on a CPU in the low QBB.
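Combining the two recommendations, the MODPARAMS.DAT entries for the example above might read (values illustrative):

```
! Preallocate lock manager memory: 100,000 locks + 50,000 resources,
! rounded up generously for overhead
LOCKIDTBL = 200000
! Keep the LCKMGR_SERVER process on a CPU in the low QBB (CPU ID illustrative)
LCKMGR_CPUID = 1
```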

4.5 OpenVMS Enterprise Directory for e-Business (Alpha)1

OpenVMS Enterprise Directory for e-Business is a massively scalable directory service, providing both X.500 and LDAPv3 services on OpenVMS Alpha with no separate license fee. OpenVMS Enterprise Directory for e-Business provides the following:

  • Proven deployment: a large percentage of the Fortune 500 already use Compaq X.500 Directory Service (the forerunner of OpenVMS Enterprise Directory for e-Business)
  • World's first 64-bit directory service
  • Seamlessly combines the scalability and distribution features of X.500 with the popularity and interoperability offered by LDAPv3
  • Inherent replication/shadowing features may be exploited to guarantee 100% up-time
  • Systems distributed around the world can be managed from a single point
  • Ability to store all types of authentication and security certificates across the enterprise accessible from any location
  • Highly configurable schema
  • In combination with AlphaServer technology and an in-memory database, delivers market-leading performance and low initiation time

For more detailed information, refer to the Compaq OpenVMS e-Business Infrastructure CD-ROM package which is included in the OpenVMS Version 7.3 CD-ROM kit.


1 On OpenVMS VAX a similar service, but without LDAP support and with more limited performance, is still available with Compaq X.500 Directory Service Version 3.1.

4.6 Extended File Cache (Alpha)

The Extended File Cache (XFC) is a new virtual block data cache provided with OpenVMS Alpha Version 7.3 as a replacement for the Virtual I/O Cache.

Similar to the Virtual I/O Cache, the XFC is a clusterwide, file system data cache. Both file system data caches are compatible and coexist in an OpenVMS Cluster.

The XFC improves I/O performance with the following features that are not available with the Virtual I/O Cache:

  • Read-ahead caching
  • Automatic resizing of the cache
  • Larger maximum cache size
  • No limit on the number of closed files that can be cached
  • Control over the maximum size of I/O that can be cached
  • Control over whether cache memory is static or dynamic

For more information, refer to the chapter on Managing Data Caches in the OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems.

4.7 /ARB_SUPPORT Qualifier Added to INSTALL Utility (Alpha)

Beginning with OpenVMS Alpha Version 7.3, you can use the /ARB_SUPPORT qualifier with the ADD, CREATE, and REPLACE commands in the INSTALL utility. The /ARB_SUPPORT qualifier provides Access Rights Block (ARB) support to products that have not yet been updated to use the per-thread security Persona Security Block (PSB) data structure.

This new qualifier is included in the INSTALL utility documentation in the OpenVMS System Management Utilities Reference Manual.

4.8 MONITOR Utility New Features

The MONITOR utility has two new class names, RLOCK and TIMER, which you can use as follows:

  • MONITOR RLOCK: displays the dynamic lock remastering statistics of a node
  • MONITOR TIMER: displays Timer Queue Entry (TQE) statistics
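For example, to sample either class at the DCL prompt (the interval value is illustrative):

```
$ MONITOR RLOCK/INTERVAL=5   ! dynamic lock remastering statistics
$ MONITOR TIMER              ! Timer Queue Entry (TQE) statistics
```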

These enhancements are discussed in more detail in the MONITOR section of the OpenVMS System Management Utilities Reference Manual and in the appendix that discusses MONITOR record formats in that manual.

Also in the MONITOR utility, the display screens of MONITOR CLUSTER, PROCESSES/TOPCPU, and SYSTEM now have new and higher scale values. Refer to the OpenVMS System Management Utilities Reference Manual: M--Z for more information.

4.9 OpenVMS Cluster Systems

The following OpenVMS Cluster features are discussed in this section:

  • Clusterwide intrusion detection
  • Fast Path for SCSI and Fibre Channel (Alpha)
  • Floppy disks served in an OpenVMS Cluster system (Alpha)
  • New Fibre Channel support (Alpha)
  • Switched LAN as a cluster interconnect
  • Warranted and migration support

4.9.1 Clusterwide Intrusion Detection

OpenVMS Version 7.3 includes clusterwide intrusion detection, which extends protection against attacks of all types throughout the cluster. Intrusion data and information from each system are integrated to protect the cluster as a whole. Member systems running versions of OpenVMS prior to Version 7.3 and member systems that disable this feature are protected individually and do not participate in the clusterwide sharing of intrusion information.

You can modify the SECURITY_POLICY system parameter on the member systems in your cluster to maintain either a local or a clusterwide intrusion database of unauthorized attempts and the state of any intrusion events.

If bit 7 in SECURITY_POLICY is cleared, all cluster members are made aware if a system is under attack or has any intrusion events recorded. Events recorded on one system can cause another system in the cluster to take restrictive action. (For example, the person attempting to log in is monitored more closely and limited to a certain number of login retries within a limited period of time. Once a person exceeds either the retry or time limitation, he or she cannot log in.) The default for bit 7 in SECURITY_POLICY is clear.
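Bit 7 has the value 128, so a quick check from DCL looks like the following sketch; it assumes SECURITY_POLICY is available as an F$GETSYI item name:

```
$ ! Bit 7 (value 128) clear means intrusion data is shared clusterwide
$ policy = F$GETSYI("SECURITY_POLICY")
$ IF (policy .AND. 128) .EQ. 0 THEN WRITE SYS$OUTPUT "Clusterwide intrusion database"
```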

For more information on the system services $DELETE_INTRUSION, $SCAN_INTRUSION, and $SHOW_INTRUSION, refer to the OpenVMS System Services Reference Manual.

For more information on the DCL commands DELETE/INTRUSION_RECORD and SHOW INTRUSION, refer to the OpenVMS DCL Dictionary.

For more information on clusterwide intrusion detection, refer to the OpenVMS Guide to System Security.

4.9.2 Fast Path for SCSI and Fibre Channel (Alpha)

Fast Path for SCSI and Fibre Channel (FC) is a new feature with OpenVMS Version 7.3. This feature improves the performance of Symmetric Multi-Processing (SMP) machines that use certain SCSI ports or FC.

In previous versions of OpenVMS, SCSI and FC I/O completion was processed solely by the primary CPU. When Fast Path is enabled, the I/O completion processing can occur on all the processors in the SMP system. This substantially increases the potential I/O throughput on an SMP system, and helps to prevent the primary CPU from becoming saturated.

See Section 4.12.2 for information about the SYSGEN parameter, FAST_PATH_PORTS, that has been introduced to control Fast Path for SCSI and FC.

4.9.3 Floppy Disks Served in an OpenVMS Cluster System (Alpha)

Until this release, MSCP could not serve floppy disks. Beginning with OpenVMS Version 7.3, serving floppy disks in an OpenVMS Cluster system is supported, enabled by MSCP.

For floppy disks to be served in an OpenVMS Cluster system, floppy disk names must conform to the naming conventions for port allocation class names. For more information about device naming with port allocation classes, refer to the OpenVMS Cluster Systems manual.

OpenVMS VAX clients can access floppy disks served from OpenVMS Alpha Version 7.3 MSCP servers, but OpenVMS VAX systems cannot serve floppy disks. Client systems can be any version that supports port allocation classes.

4.9.4 New Fibre Channel Support (Alpha)


Note: This section has been corrected to show that the MDR is supported on OpenVMS Alpha Version 7.3, not on OpenVMS Alpha Version 7.2-1, as stated in the printed version of this manual.

Support for new Fibre Channel hardware, larger configurations, Fibre Channel Fast Path, and larger I/O operations is included in OpenVMS Version 7.3. The benefits include:

  • Support for a broader range of configurations: the lower-cost HSG60 controller supports two SCSI buses (the HSG80 supports six), and multiple DSGGB 16-port Fibre Channel switches enable very large configurations
  • Backup operations to tape, enabled by the new Modular Data Router (MDR), using existing SCSI tape subsystems
  • Distances up to 100 kilometers between systems, enabling more configuration choices for multiple-site OpenVMS Cluster systems
  • Better performance for certain types of I/O due to Fibre Channel Fast Path and support for larger I/O requests

The following new Fibre Channel hardware has been qualified on OpenVMS Version 7.3 and on OpenVMS Version 7.2-1 (except for MDR):

  • KGPSA-CA host adapter
  • DSGGB-AA switch (8 ports) and DSGGB-AB switch (16 ports)
  • HSG60 storage controller (MA6000 storage subsystem)
  • Compaq Modular Data Router (MDR) (OpenVMS Version 7.3)

OpenVMS now supports Fibre Channel fabrics. A Fibre Channel fabric is multiple Fibre Channel switches connected together. (A Fibre Channel fabric is also known as cascaded switches.)

Configurations that use Fibre Channel fabrics can be extremely large. Distances up to 100 kilometers are supported in a multisite OpenVMS Cluster system. OpenVMS supports the Fibre Channel SAN configurations described in the Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide, available at the following Compaq web site:


Enabling Fast Path for Fibre Channel can substantially increase the I/O throughput on an SMP system. For more information about this new feature, see Section 4.9.2.

Prior to OpenVMS Alpha Version 7.3, I/O requests larger than 127 blocks were segmented by the Fibre Channel driver into multiple I/O requests. Segmented I/O operations generally have lower performance than one large I/O. In OpenVMS Version 7.3, I/O requests up to and including 256 blocks are done without segmenting.

For more information about Fibre Channel usage in OpenVMS Cluster configurations, refer to the Guidelines for OpenVMS Cluster Configurations.

New Fibre Channel Tape Support (Alpha)

Fibre Channel tape functionality refers to the support of SCSI tapes and SCSI tape libraries in an OpenVMS Cluster system with shared Fibre Channel storage. The SCSI tapes and libraries are connected to the Fibre Channel by a Fibre-to-SCSI bridge known as the Modular Data Router (MDR).

For configuration information, refer to the Guidelines for OpenVMS Cluster Configurations.

4.9.5 LANs as Cluster Interconnects

An OpenVMS Cluster system can use several LAN interconnects for node-to-node communication, including Ethernet, Fast Ethernet, Gigabit Ethernet, ATM, and FDDI.

PEDRIVER, the cluster port driver, provides cluster communications over LANs using the NISCA protocol. Originally designed for broadcast media, PEDRIVER has been redesigned to exploit all the advantages offered by switched LANs, including full duplex transmission and more complex network topologies.

Users of LANs for their node-to-node cluster communication will derive the following benefits from the redesigned PEDRIVER:

  • Removal of restrictions for using Fast Ethernet, Gigabit Ethernet, and ATM as cluster interconnects
  • Improved performance due to better path selection, multipath load distribution, and support of full duplex communication
  • Greater scalability
  • Ability to monitor, manage, and display information needed to diagnose problems with cluster use of LAN adapters and paths

SCA Control Program

The SCA Control Program (SCACP) utility is designed to monitor and manage cluster communications. (SCA is the abbreviation of Systems Communications Architecture, which defines the communications mechanisms that enable nodes in an OpenVMS Cluster system to communicate.)

In OpenVMS Version 7.3, you can use SCACP to manage SCA use of LAN paths. In the future, SCACP might be used to monitor and manage SCA communications over other OpenVMS Cluster interconnects.

This utility is described in more detail in a new chapter in the OpenVMS System Management Utilities Reference Manual: M--Z.

New Error Message About Packet Loss

Prior to OpenVMS Version 7.3, an SCS virtual circuit closure was the first indication that a LAN path had become unusable. In OpenVMS Version 7.3, whenever the last usable LAN path is losing packets at an excessive rate, PEDRIVER displays the following console message:

%PEA0, Excessive packet losses on LAN Path from local-device-name -
 _  to device-name on REMOTE NODE node-name

This message is displayed after PEDRIVER performs an excessively high rate of packet retransmissions on the LAN path consisting of the local device, the intervening network, and the device on the remote node. The message indicates that the LAN path has degraded and is approaching, or has reached, the point where reliable communications with the remote node are no longer possible. It is likely that the virtual circuit to the remote node will close if the losses continue. Furthermore, continued operation with high LAN packet losses can result in a significant loss in performance because of the communication delays resulting from the packet loss detection timeouts and packet retransmission.

The corrective steps to take are:

  1. Check the local and remote LAN device error counts to see if a problem exists on the devices. Issue the following commands on each node:

    $ SHOW DEVICE local-device-name
    $ MC SCACP
    SCACP> SHOW LAN device-name
    $ MC LANCP
    LANCP> SHOW DEVICE device-name/COUNT
  2. If device error counts on the local devices are within normal bounds, contact your network administrators to request that they diagnose the LAN path between the devices.
    If necessary, contact your Compaq support representative for assistance in diagnosing your LAN path problems.

For additional PEDRIVER troubleshooting information, see Appendix F of the OpenVMS Cluster Systems manual.

4.9.6 Warranted and Migration Support

Compaq provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that Compaq has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support is a superset of the Rolling Upgrade support provided in earlier releases of OpenVMS and is available for mixes that are not warranted. Migration support means that Compaq has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by Compaq. However, in exceptional cases, Compaq may request that you move to a warranted configuration as part of answering the problem.

Compaq supports only two versions of OpenVMS running in a cluster at the same time, regardless of architecture. Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments.

Table 4-2 shows the level of support provided for all possible version pairings.

Table 4-2 OpenVMS Cluster Warranted and Migration Support
                             Alpha/VAX V7.3   Alpha V7.2-xxx/VAX V7.2   Alpha/VAX V7.1
  Alpha/VAX V7.3             WARRANTED        Migration                 Migration
  Alpha V7.2-xxx/VAX V7.2    Migration        WARRANTED                 Migration
  Alpha/VAX V7.1             Migration        Migration                 WARRANTED

In a mixed-version cluster with OpenVMS Version 7.3, you must install remedial kits on earlier versions of OpenVMS. For OpenVMS Version 7.3, two new features, XFC and Volume Shadowing minicopy, cannot be run on any node in a mixed-version cluster unless all nodes running earlier versions of OpenVMS have installed the required remedial kit or upgrade. Remedial kits are available now for XFC. An upgrade for systems running OpenVMS Version 7.2-xx that supports minicopy will be made available soon after the release of OpenVMS Version 7.3.

For a complete list of required remedial kits, refer to the OpenVMS Version 7.3 Release Notes.
