HP OpenVMS Systems Documentation


HP OpenVMS Version 8.4 Release Notes


Chapter 4
System Management Release Notes

This chapter contains information that applies to system maintenance and management, performance management, and networking.

For information about new features included in this version of the software, see the HP OpenVMS Version 8.4 New Features and Documentation Overview.

4.1 SYS$TIMEZONE_RULE Logical Replaces Hyphen (-) with Caret (^)


Starting with Version 8.2, the SYS$TIMEZONE_RULE logical name is modified to replace the "-" character with the "^" character. This change is made in the TDF (time differential factor) handling to support DTSS. DTSS cannot handle the commonly used UNIX "GMT-X" timezone rules and does not support timezone rule strings that are identical to the timezone name.

For example, the "GMT-1" timezone rule generates a SYS$TIMEZONE_RULE string of "GMT-1". DTSS did not function properly because the rule file name "GMT-1" matched the rule string "GMT-1".

The CRTL and DTSS components are also modified to support this change.

For example, the Timezone logical before this change:

"SYS$TIMEZONE_RULE" = "CET-1CEST-2,M3.5.0/02,M10.4.0/03" 

Timezone logical after this change:

"SYS$TIMEZONE_RULE" = "CET^1CEST^2,M3.5.0/02,M10.4.0/03" 
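You can verify the current setting on a running system with a command such as the following (the value shown continues this example's CET rule; your output will reflect your local timezone):

$ SHOW LOGICAL SYS$TIMEZONE_RULE 
   "SYS$TIMEZONE_RULE" = "CET^1CEST^2,M3.5.0/02,M10.4.0/03" (LNM$SYSTEM_TABLE) 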

4.2 Licenses with Virtual Option


Licenses with the "Virtual" option will load on OpenVMS Cluster members running OpenVMS versions earlier than Version 8.4. This load does not affect the functioning of the guest systems, but HP recommends that you use /INCLUDE or /EXCLUDE lists to prevent the loads.
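For example, the following sketch restricts the load of a virtual-option license to its intended guest nodes (the product name OPENVMS-I64-BOE and the node names are hypothetical; substitute your own):

$ LICENSE MODIFY OPENVMS-I64-BOE /INCLUDE=(GUEST1,GUEST2) 
$ LICENSE UNLOAD OPENVMS-I64-BOE 
$ LICENSE LOAD OPENVMS-I64-BOE 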

For information about licensing OpenVMS guests on Integrity VM, see the HP OpenVMS License Management Utility Manual.

4.3 iSCSI Demo Kit not Supported


The iSCSI demo kit is no longer supported; HP recommends that you do not use it on OpenVMS Version 8.4.

4.4 OpenVMS as a Guest Operating System on Integrity VM

OpenVMS Version 8.4 now supports HP Virtualization and can be installed as a guest operating system on HP Integrity Virtual Machines (Integrity VM). For more information about product-specific limitations, see the respective product documentation.

This section describes known problems and restrictions in the OpenVMS guest on Integrity VM.

4.4.1 Shutdown Behavior Changes


When you execute the SYS$SYSTEM:SHUTDOWN.COM command procedure without specifying a reboot, the system always uses the "POWER_OFF" option. If the guest node is in a cluster, quorum is adjusted using the "REMOVE_NODE" option along with the "POWER_OFF" option.

A known consequence of using this option is that the virtual machine is shut down and must be restarted with the MP command "pc -on" at the virtual console or, alternatively, with the following command on the host:

# hpvmstart -P <<OpenVMS guest name>> 

4.4.2 OpenVMS Guest Does not Support Attached I/O Devices


The OpenVMS guest does not support attached devices such as CD/DVD burners, media changers, and tape devices. If you want to use tape devices, connect them to a physical system that is in a cluster with the OpenVMS guest and serve them using TMSCP.
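A sketch of serving tape devices using TMSCP from the physical cluster member (assuming that member uses tape allocation class 1): add the following lines to SYS$SYSTEM:MODPARAMS.DAT on that member, and then run AUTOGEN:

TMSCP_LOAD = 1 
TAPE_ALLOCLASS = 1 

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT 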

4.4.3 Networking or Storage Interface Support


The OpenVMS guest supports the Accelerated Virtual I/O (AVIO) interface only.

Integrity VM commands enable you to configure VIO devices for a guest, and doing so might not produce any apparent errors during startup. However, VIO devices are not part of the supported configuration of a guest running the OpenVMS operating system.

4.4.4 Known Limitation on HP-UX Guests and OpenVMS Guests Sharing the Same Virtual Switch


If you configure an HP-UX guest and an OpenVMS guest with the same virtual switch, network communication between these guests fails. This problem will be fixed in a future release of OpenVMS.

The workaround for this problem is to configure the HP-UX guest and OpenVMS guest with different virtual switches.

4.4.5 Known Issue on OpenVMS Guest When vNICs are not Configured


If the vNICs (virtual network interface cards) on an OpenVMS guest are not configured and TCP/IP is started after DECnet, the system crashes. HP recommends that you use the OpenVMS guest with at least one vNIC configured.

Without a vNIC, DECnet and TCP/IP can work individually on the OpenVMS guest.

4.5 HP Availability Manager Release Notes


This section describes the known issue with HP Availability Manager Version 3.1.

  • On OpenVMS Alpha and OpenVMS Integrity servers, no events are posted in the Data Analyzer event window for managed nodes on which the Data Collector is stopped by executing the SYS$STARTUP:AMDS$STARTUP STOP command and then started again by executing the SYS$STARTUP:AMDS$STARTUP START command. The workaround is as follows:
    1. Restart the Data Collector by entering the following command:


    2. Restart the Availability Manager Server by entering the following command:


    3. Restart the Availability Manager Analyzer by entering the following command:

  • The Availability Manager Analyzer reports "Path Lost" (PATHLST) events for all remote nodes and stops displaying data after some elapsed time.
    The workaround for this problem is to set the LAN_FLAGS SYSGEN parameter to 16, which restores normal behavior. The command is as follows:
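A sketch of the SYSGEN sequence (LAN_FLAGS is a dynamic parameter, so the change can be applied to the active system):

$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> USE ACTIVE 
SYSGEN> SET LAN_FLAGS 16 
SYSGEN> WRITE ACTIVE 
SYSGEN> EXIT 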


4.6 Provisioning OpenVMS Using HP SIM

The following release notes pertain to Provisioning OpenVMS Using HP SIM, Version 4.0.

4.6.1 Provisioning OpenVMS Guest Limitation


Provisioning is not supported with OpenVMS as a guest operating system on Integrity VM.

4.6.2 System Firmware


The system firmware version of the BL860c and BL870c servers must be at 4.21 or later. The system firmware version of the rx3600 and rx6600 servers must be at 4.11 or later.

4.6.3 Provisioning Multiple Servers


  • HP SIM provisioning using InfoServer can provision up to eight servers simultaneously.
  • HP SIM provisioning using vMedia can provision only one server at a time.

4.6.4 Provisioning From HP SIM Central Management Server


OpenVMS can be provisioned from an HP SIM Central Management Server (CMS), an HP ProLiant server running Microsoft Windows.

4.6.5 InfoServer Name Length


The InfoServer name must be less than 12 characters long for provisioning to work. This is a temporary restriction.

4.6.6 OpenVMS InfoServer and the Integrity servers on the Same LAN


The OpenVMS InfoServer and the Integrity servers must be on the same local area network (LAN) to provision the server blade.

4.6.7 EFI Firmware


The EFI firmware for the BladeSystem must be version 5.0 or later.

4.6.8 Management Processor


The Management Processor must be running the Advanced iLO2 firmware.

4.6.9 Known Issues With Configuring OpenVMS TCP/IP Using Provisioning


The TCP/IP server components BIND, LPD, LBROKER, and SMTP, if selected to be enabled on the target server, do not start up when OpenVMS TCP/IP is configured through Provisioning.

The workaround for this problem is to configure and restart these services manually after configuring TCP/IP with Provisioning.

4.6.10 OpenVMS TCP/IP Provisioning Restrictions


The following restrictions apply when configuring OpenVMS TCP/IP using Provisioning:

  • Configuration of TCP/IP is supported with IPv4 addresses only; IPv6 addresses are currently not supported.
  • Configuration of an alias or secondary IP address is not supported.
  • Configuration of the DHCP server component on a target server is not supported.
  • Provisioning allows you to configure only one network interface on each target server.
  • Configuration of optional components in HP TCP/IP Services for OpenVMS is not supported.
  • Provisioning does not support setting up logical LAN devices and LAN failover configurations.

4.6.11 AutoBoot Timeout Configuration


When using Provisioning to deploy OpenVMS, the AutoBoot Timeout value for each target server needs to be set to at least 5 seconds. This parameter can be configured through the EFI Boot Manager menu (Boot Configuration -> AutoBoot Configuration -> Set AutoBoot Timeout).

4.7 OpenVMS Management using Insight Software


For more information about the Insight software, see the following website:


4.8 Performance Enhancements


The following performance enhancements have been made to the OpenVMS Version 8.4 release.

4.8.1 Enhancements to Write Bitmaps


Write Bitmaps (WBM) is a feature used by OpenVMS Volume Shadowing during minimerge and minicopy operations. Information about which blocks on a disk are written is transmitted to other nodes within the cluster. The following updates have been made in this release.

WBM_MSG_INT Parameter Updates


The WBM_MSG_INT parameter indicates the time by which a SetBit message can be delayed when it is in buffered mode. If the SetBit buffer does not fill with SetBit messages within this time interval, the message is sent. The parameter is in milliseconds; however, the conversion factor used for this timer was off by a factor of 10. Previously, a WBM_MSG_INT value of 10 resulted in a 100 millisecond delay when in buffered mode. This problem is corrected so that a value of 10 now results in a 10 millisecond delay.

WBM_MSG_UPPER and WBM_MSG_LOWER Parameter Updates


WBM_MSG_UPPER is the threshold used to determine whether a switch to buffered message mode should occur when operating in single message mode. If WBM_MSG_UPPER or more SetBit operations are done in a 100 millisecond window, the messaging mode is switched to buffered mode. The default value is 80.

WBM_MSG_LOWER is the threshold used to determine whether a switch to single message mode should occur when operating in buffered message mode. If WBM_MSG_LOWER or fewer SetBit operations are done in a 100 millisecond window, the messaging mode is switched to single mode. The default value is 20.

Asynchronous SetBit Messages
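The current values of these parameters can be examined with SYSGEN, for example:

$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> SHOW WBM_MSG_UPPER 
SYSGEN> SHOW WBM_MSG_LOWER 
SYSGEN> SHOW WBM_MSG_INT 
SYSGEN> EXIT 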


There can be multiple master bitmap nodes for a shadowset. Previously, SetBit messages were sent to the multiple master bitmap nodes synchronously: only when the response for the SetBit message was received from the first remote master bitmap node was the message sent to the next master bitmap node. When all of the remote master bitmap nodes had been processed, the I/O resumed.

SetBit messages are now sent to all the master bitmap nodes asynchronously, and the I/O operation is resumed when the responses from all the master bitmap nodes are received. This reduces the time for which the write bitmap code stalls the I/O operation.

Reduced SetBit Messages for Sequential I/O


Sequential writes to a disk result in SetBit messages that set sequential bits in the remote bitmap. The WBM code now recognizes when a number of prior bits in the bitmap have already been set and, in this scenario, sets additional bits so that fewer SetBit messages are required if sequential writes continue. Assuming the sequential I/O continues, the number of SetBit messages is reduced by about a factor of 10, improving the I/O rate for sequential writes.

4.8.2 Exception Handling Performance Improvements (Integrity servers Only)


Some performance improvements have been made to exception handling for OpenVMS Integrity server systems. The change reduces the overhead of exception handling in some, but not all, cases.

OpenVMS Version 8.4 caches the decoded unwind data. The cache is used by the user-callable calling standard routines during exception handling. These calling standard routines are also used in the RTLs to implement programming language constructs such as the try/throw/catch constructs in C++ and the setjmp/longjmp constructs in C.

If unexpected errors occur, the cache can be disabled temporarily using the KTK_D3 system parameter. Its default value of zero enables the cache; a value of one disables it. The special parameter KTK_D3 may have been used by HP-supplied debug/test images. If you had such test images on your system, make sure that the parameter is reset to its default value of zero.

4.8.3 Image Activation (Integrity servers Only)


During image activation and over the life of the image, paging I/O brings pages of the image into memory. On Integrity server systems, an I-cache flush must be performed on such a page in case it contains code that is executed. Previously, the I-cache flush occurred on many pages that were never executed. The I-cache flush is now performed only when an instruction is first executed on a page. This avoids the flush on pages that are never executed and provides an overall system performance benefit.

4.8.4 Global Section Creation and Deletion


Performance improvements have been made to areas of the operating system that create and delete various types of global sections. The benefits of the changes will be seen on large SMP systems as a reduction in MP Synch.

4.8.5 Dedicated CPU Lock Manager


The Dedicated CPU Lock Manager is a feature used on systems with 16 or more CPUs and very high locking rates. Improvements have been made to the Dedicated CPU Lock Manager that result in an increase in the rate at which locking operations can be performed.

4.8.6 Ctrl/T Alignment Faults


A Ctrl/T operation at a terminal resulted in a number of alignment faults. These have been corrected for OpenVMS Version 8.4.

4.9 Error and Warning Messages from ACPI During Boot


The following message might be displayed by VMS during boot on cell-based machines (for example, rx8640 or rx7640):

ACPI Error (utmutex-0430): Mutex [1] is not acquired, cannot release [20071219] 

The following message might be displayed by VMS during boot on certain systems that have power management enabled (for example, an rx2660 with the latest processors):

ACPI Warning (nseval-0250): Excess arguments - method [_OST] needs 3, found 7 [20080701] 

These messages can be ignored. They will be fixed in a future release.

4.10 Large Device Name Support for Accounting Utility


The Accounting utility is modified to handle long device names. It can now display device names of seven characters or more, for example, terminal (TNA) devices with unit numbers greater than 9999, MBA devices with unit numbers greater than 999, and other large device names such as TNA10000: and MBA1000:.

Previously, the utility displayed arbitrary characters if a device name exceeded seven characters. A new accounting record version (version 4) is used to write new records into the ACCOUNTNG.DAT file, and the utility can read and display these new records.

4.11 PAGED_LAL_SIZE New System Parameter

PAGED_LAL_SIZE sets the maximum size, in bytes, of packets allocated from the paged dynamic pool lookaside lists.

4.11.1 Paged Pool Lookaside Lists


Paged dynamic pool now allows the use of lookaside lists to increase system performance in some cases. This feature is controlled by the SYSGEN parameter PAGED_LAL_SIZE and is off (0) by default.

If the variable paged pool freelist becomes fragmented, you might benefit from enabling these lookaside lists. The SYSGEN parameter PAGED_LAL_SIZE sets the maximum size, in bytes, of packets allocated from the lookaside lists. Packets larger than this size are still allocated from the variable paged pool freelist. A modest value, such as 512 bytes, might help systems performing intensive logical name creation and deletion operations.
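A sketch of enabling the feature on a running system (PAGED_LAL_SIZE is dynamic; to make the setting permanent across an AUTOGEN, also add it to MODPARAMS.DAT):

$ RUN SYS$SYSTEM:SYSGEN 
SYSGEN> USE ACTIVE 
SYSGEN> SET PAGED_LAL_SIZE 512 
SYSGEN> WRITE ACTIVE 
SYSGEN> EXIT 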

Because the parameter is dynamic, it can be enabled, adjusted, or disabled as needed. If it is enabled and then lowered, there might be some packets on the paged pool lookaside lists that are no longer actively used. These show up as "Over-limit Lookaside Blocks" in the DCL and SDA SHOW MEMORY/POOL/FULL displays. Such packets were used before but are now larger than the new PAGED_LAL_SIZE. They will be used again if the SYSGEN parameter is increased to include them, or if there is a paged pool shortage and the packets are reclaimed from the lookaside lists.

To help prevent a runaway condition in which packets on a lookaside list consume most or all of paged pool, the paged pool lookaside lists are not used for packets in the last quarter of paged dynamic pool. If there is a paged pool memory shortage, packets on the lookaside lists are reclaimed as well.

If disabled (at the default value of 0), paged pool behaves as it did in previous versions of OpenVMS, allocating and deallocating packets from the paged pool variable freelist.

4.12 2 TiB Disk Volume Support Restrictions


OpenVMS Version 8.4 supports disk volumes up to 2 TiB in size with the following restrictions:

  • OpenVMS versions prior to Version 8.4 do not support volumes larger than 1 TiB. To prevent accidental mounts on earlier versions of OpenVMS, the latest patches for MOUNT explicitly disallow mounting volumes larger than 1 TiB on such systems.
  • The F$GETDVI() lexical function items MAXBLOCK, FREEBLOCKS, EXPSIZE, and VOLSIZE return information that depends on the target disk size. On OpenVMS Version 8.4, if the target disk size exceeds 1 TiB, these F$GETDVI() items can return apparently negative numbers because DCL performs 32-bit signed integer arithmetic and comparisons. Command procedures that use F$GETDVI() with these item codes may need to be modified to work with volumes larger than 1 TiB.
    For more information about handling numeric values outside the range of DCL integer representation, see the HP OpenVMS DCL Dictionary.
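The following sketch illustrates the overflow (the device name DKA100: is hypothetical; on a volume larger than 1 TiB, the 32-bit signed result can display as a negative number):

$ maxblk = F$GETDVI("DKA100:","MAXBLOCK") 
$ SHOW SYMBOL maxblk 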

4.13 Configuring SAS Tape Drives


SAS tape drives can be named and configured using the same commands that are used to configure Fibre Channel tape drives. For more information, see Section 7.5, "Fibre Channel Tape Support," in the Guidelines for OpenVMS Cluster Configurations.

4.14 External SAS Disk Device Naming


External SAS drives served by non-Smart Array controllers can be configured as $3$DGA<UDID>, where UDID is the unique device identifier for the LUN. Fibre Channel disk device names use an allocation class value of 1, whereas external SAS disk device names use an allocation class value of 3 to differentiate a SAS device from a Fibre Channel device.

4.15 External Authentication

This section contains release notes pertaining to external authentication. External authentication is an optional feature introduced in OpenVMS Version 7.1 that enables OpenVMS systems to authenticate designated users with their external user IDs and passwords. For information about using external authentication, see the HP OpenVMS Guide to System Security.


A special note for external authentication users.

If you are using the SYS$ACM-enabled LOGINOUT.EXE and SETP0.EXE (SET PASSWORD) images that support external authentication, an upgrade to OpenVMS Version 8.4 will restore the SYS$ACM-enabled images.

For information about installing the ACMELOGIN kit, see SYS$HELP:ACME_DEV_README.TXT.

4.15.1 External Authentication and Password Policy


If you are using external authentication to authenticate users against a source other than SYSUAF.DAT and are using the password policy module for customized password processing, you must restart the ACME server after the password policy shareable image is installed and the LOAD_PWD_POLICY system parameter is enabled.

Use the following command to restart the ACME Server:
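A sketch of the restart command, following the SET SERVER ACME syntax used elsewhere in these release notes (verify the exact qualifiers against your installed kit):

$ SET SERVER ACME/RESTART 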


4.15.2 Integrity servers External Authentication Support


The Advanced Server for OpenVMS V7.3A ECO4 (and later) product kit includes the standalone external authentication software for Integrity servers in an OpenVMS cluster.

If you want to enable NT LAN Manager external authentication on OpenVMS Cluster member nodes running Integrity servers, copy the Integrity servers standalone external authentication images from an Alpha system on which the Advanced Server is installed to the Integrity servers member node, and complete the setup as described in the Advanced Server kit release notes.

4.15.3 SET PASSWORD Behavior Within a DECterm Terminal Session


A DECterm terminal session does not have access to the external user name used for login and must prompt for one during SET PASSWORD operations. The external user name defaults to the process's OpenVMS user name. If the default is not appropriate (that is, if the external user name and mapped OpenVMS user name are different), you must enter the correct external user name.

The following example shows a SET PASSWORD operation initiated by a user with the external user name JOHN_DOE. The mapped OpenVMS user name is JOHNDOE and is the default used by the SET PASSWORD operation. In this case, the default is incorrect and the actual external user name was specified by the user.

$ set password 
External user name not known; Specify one (Y/N)[Y]? Y 
External user name [JOHNDOE]: JOHN_DOE 
Old password: 
New password: 
%SET-I-SNDEXTAUTH, Sending password request to external authenticator 
%SET-I-TRYPWDSYNCH, Attempting password synchronization 

4.15.4 No Password Expiration Notification on Workstations


In the LAN Manager domain, a user cannot log in once a password expires.

PC users receive notification of impending external user password expiration and can change passwords before they expire. However, when a user logs in from an OpenVMS workstation using external authentication, the login process cannot determine whether the external password is about to expire. Therefore, sites that enforce password expiration and whose users do not primarily use PCs can choose not to use external authentication for workstation users.

4.15.5 Restriction in ACME_SERVER Process (Integrity servers only)

The SET SERVER ACME/CONFIG=THREAD_MAX command is ignored on Integrity servers for this release because only one worker thread is active.


Do not increase the number of threads on Integrity servers. Doing so might lead to an ACME_SERVER process crash and login failures.

4.16 Itanium Primary Bootstrap (IPB) Fails to Find the Valid Dump Devices


Connecting a bridged device, such as the AD221 HP PCIe combo card, to a PCI bus where dump devices (DOSD) are configured on another, already connected HBA might cause the PCI bus numbers of the dump devices to change, making it difficult for IPB to find the valid dump devices.


After connecting a new I/O card, validate the boot and dump options, and then refresh the DUMP_DEV and boot device lists.

4.17 SHUTDOWN.COM Changes


SHUTDOWN.COM is modified to execute a pre-queue system shutdown procedure, SYSHUTDWN_0010.COM, if it is present. The template contains three sample routines that can help force the queue system to shut down and restart or fail over faster.

4.18 OpenVMS Cluster Systems

The release notes in this section pertain to OpenVMS Cluster systems.

4.18.1 Cluster over IP (IP Cluster Interconnect)

HP OpenVMS Version 8.4 is enhanced with the Cluster over IP feature. This feature provides the ability to form clusters beyond a single LAN or VLAN segment using the industry-standard Internet Protocol. It also provides improved disaster-tolerant capability to OpenVMS clusters.

This section describes the known problems and restrictions in Cluster over IP.

Software Requirements


Cluster over IP is available only on OpenVMS Version 8.4 for Alpha and Integrity servers. Cluster over IP also requires HP TCP/IP Services for OpenVMS Version 5.7.

Integrity servers Satellite Node and Boot Server in the Same LAN


An Integrity servers satellite node must be in the same LAN as its boot server, and LAN cluster communication between the satellite node and the boot server is required, for the satellite node to initialize Cluster over IP during satellite boot and join the cluster successfully.

Alpha Satellite Node Requires LAN Channels With Disk Server


Alpha satellite boot fails in an IP-only environment. That is, while booting an Alpha satellite, if all the nodes, including the boot servers, use only IP channels for cluster communication, the satellite boot fails with the following messages:

%VMScluster-W-PROTOCOL_TIMEOUT, NISCA protocol timeout 
%VMScluster-I-REINIT_WAIT, Waiting for access to the system disk server 

IPv6 Support


Cluster over IP does not support IPv6 addresses for cluster communication.

Dynamic Host Configuration Protocol (DHCP) or Secondary Address Support


Cluster over IP requires that the addresses used for cluster communication be static, primary addresses on the interface. Furthermore, the IP address and interface used for cluster communication must not be used in a failSAFE IP configuration.

Multiple IP Interface Configuration


If you configure multiple IP interfaces with the same default gateway, loss of communication on any interface can disrupt cluster communication and result in CLUEXIT bugchecks.

ifconfig Command Usage


If the interface used for cluster communication is reactivated with ifconfig, cluster communication to other nodes is lost and nodes exit the cluster with CLUEXIT bugchecks.

Multiple Gateway Configuration


The Cluster over IP configuration information is stored in configuration files that are loaded early during boot. This configuration information includes the default route or gateway used by TCP/IP. Currently, only one default route can be entered in the configuration file and used during node bootup.

Block Transfer XMIT Chaining


The PEdriver emulates each IP interface used for cluster communication similarly to a LAN interface (BUS). An IP bus has the XChain_Disabled status, as shown in the following example. This means that block transfer packets transmitted through TCP/IP are copied from the PEdriver to the TCP/IP buffers.

$ mc scacp show ip 
NODEG PEA0 Device Summary 16-FEB-2009 12:29:15.92: 
         Device  Errors +                     Mgt    Buffer  MgtMax   Line     Total      Current 
 Device   Type    Events   Status         Priority   Size    BufSiz   Speed  Pkts(S+R)  IP Address 
 ------   ----    ------   ------         --------   -----   ------   -----  ---------  ---------- 
  IE0                184   Run Online            0    1394        0     N/A    1419711 
                           XChain_Disabled 

LANCP for Downline Load


On Alpha, Cluster over IP requires LANCP instead of DECnet for downline load, because the changes related to configuring and enabling Cluster over IP are available only with CLUSTER_CONFIG_LAN.COM. This restriction will be fixed in a future release.

Duplex Mismatch


A duplex mode mismatch, or a change in duplex mode from half to full on the host, can result in CLUEXIT bugchecks when IP is used for cluster communication. It is recommended that you check for duplex mismatch issues to avoid CLUEXITs.

Shared System Disk Upgrade


In a shared system disk configuration, during an upgrade from an earlier version of OpenVMS to Version 8.4, Cluster over IP can be enabled for the node on which the upgrade is performed. However, on the other nodes, after the upgrade, execute the CLUSTER_CONFIG_LAN.COM command procedure to enable Cluster over IP.

For example, consider systems PIPER and MARLIN with roots SYS0 and SYS1, respectively, on a shared system disk. If the upgrade is performed on node PIPER, PIPER can be enabled with Cluster over IP. To enable Cluster over IP on MARLIN, execute the CLUSTER_CONFIG_LAN.COM command procedure.
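For example, to enable Cluster over IP on MARLIN after the upgrade, invoke the procedure from MARLIN and follow the menu prompts to change the node's IP interconnect characteristics:

$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM 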

This restriction will be removed in a future release.

Enhanced CLUSTER_CONFIG_LAN Command Procedure


The CLUSTER_CONFIG_LAN.COM command procedure is enhanced to configure Cluster over IP. This command procedure provides the ability to enable Cluster over IP and use IP for cluster communication.

The following message is displayed when a standalone node is added to a cluster using the command procedure:

 "IA64 node, using LAN for cluster communications.  PEDRIVER will be loaded. 
  No other cluster interconnects are supported for IA64 nodes." 

Note that despite the message printed by the configuration procedure on an Integrity servers node, either LAN or IP or both can be used for cluster communication. LAN is enabled by default when the node's characteristic is changed to a cluster member. IP can be optionally enabled using the CLUSTER_CONFIG_LAN.COM command procedure. PEdriver is loaded for both LAN and IP communications.

The CLUSTER_CONFIG_LAN command procedure message will be fixed in a future release.

4.18.2 OpenVMS Cluster Support for Integrity VM


OpenVMS for Integrity servers Version 8.4 is supported as a guest operating system on Integrity VM. The OpenVMS guest can be configured in a cluster.

Cluster Interconnect for OpenVMS Guest


The OpenVMS guest can use LAN, Cluster over IP (IPCI), or both to communicate with other nodes in the cluster.

MSCP Support for Clusters in an Integrity VM Environment


MSCP is used to provide shared storage capability in a cluster consisting of OpenVMS guest systems.

Online Migration Support


Online migration of OpenVMS guests that are part of a cluster is not supported.

4.18.3 Mixed Platform Support


  • A supported production cluster containing Integrity servers cannot include a VAX system. VAX systems can be included in such clusters for development and migration purposes, with the understanding that any problems arising from the presence of the VAX systems will require removal of either the VAX or the Integrity servers. See the OpenVMS Cluster Software SPD for more information.
  • Currently, only two architectures are allowed for supported production environments in an OpenVMS Cluster system. For a list of supported cluster configurations, see the HP OpenVMS Version 8.2 Upgrade and Installation Manual.

4.18.4 Satellite Systems using Port Allocation Class


Integrity servers satellite systems that use port allocation classes (device naming) require an additional step to operate correctly in this release. On the satellite boot server node, edit the following file:

device:[SYSn.SYSEXE]SYS$MEMORYDISK.DAT 

device is the disk that contains the satellite's root.
n is the root of the satellite system.

Add the following line to the file:

SYS$SYSTEM:SYS$DEVICES.DAT 

You can ignore the "Do Not Edit" comment at the top of the file in this case. The list of files in SYS$MEMORYDISK.DAT is not order-dependent. This problem is expected to be resolved in a future release.

4.19 Mixed-version Cluster Compatibility of a Six-member Shadowset


OpenVMS Version 8.4 supports the "Extended Membership" volume shadowing feature. This feature allows shadowsets to have more than three, and up to six, members. The feature is enabled when a fourth member is added to the shadowset. The following are important points for a mixed-version OpenVMS cluster:

  • To use the "Extended Membership" shadowing feature, all the systems that mount the shadowset must be running OpenVMS Version 8.4.
  • If you attempt to mount a shadowset using the "Extended Membership" shadowing feature on an OpenVMS Version 8.4 system, the mount fails if the shadowset is already mounted on systems in the cluster running earlier versions of OpenVMS.
  • If you attempt to mount a shadowset on a system running an earlier version of OpenVMS that is not capable of the "Extended Membership" shadowing feature, the mount fails if the shadowset is already mounted on an OpenVMS Version 8.4 system in the cluster using the feature.
  • After the shadowset is enabled to use the "Extended Membership" shadowing feature, the characteristic is maintained even if the membership is reduced to fewer than four members. The characteristic is retained until the shadowset is dismounted clusterwide.
  • This shadowing feature is not supported on OpenVMS VAX. If a shadowset is mounted on OpenVMS Alpha or OpenVMS Integrity servers without enabling this feature, the shadowset also mounts on OpenVMS VAX systems. The virtual unit characteristic voting ensures compatibility.

4.20 Backward Compatibility of a Six-member Shadowset


A new area of the disk's Storage Control Block (SCB) stores the extended membership arrays required to support the "Extended Membership" shadowing feature. Therefore, an attempt to mount a six-member shadowset on an earlier version of OpenVMS works only if the members are specified on the command line (that is, a maximum of three members) or if the members are in the index 0, 1, or 2 (old) slots.

In earlier versions of OpenVMS, the $ MOUNT/INCLUDE qualifier, which is used for reconstructing the shadowset, can find only the existing membership list and not the new membership area in the SCB. Hence, it does not mount any members recorded in the new extended membership area.
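The slot layout described above can be sketched as follows. This is an illustrative model only, assuming six member slots of which indices 0 through 2 form the classic SCB membership list; the device names and the helper function are hypothetical:

```python
# Hypothetical six-slot membership layout: indices 0-2 are the classic
# SCB membership list; indices 3-5 live in the new extended-membership area.
members = ["$1$DGA101", "$1$DGA102", "$1$DGA103",   # old slots
           "$1$DGA104", None, "$1$DGA106"]          # extended area

def visible_to_pre_v84(slots):
    """A pre-V8.4 MOUNT reads only the classic three slots."""
    return [m for m in slots[:3] if m is not None]
```

Under this model, an earlier OpenVMS version mounting the example shadowset would see only the first three members and ignore anything stored in the extended area.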

4.21 WBEM Services and WBEM Providers for OpenVMS

This section describes the known problems and restrictions in WBEM.

4.21.1 WBEM Services for OpenVMS Based on OpenPegasus 2.9

WBEM Services for OpenVMS Version 2.9 is based on the OpenPegasus 2.9 code stream of The Open Group's Pegasus open source project.

4.21.2 WBEM Providers Support for OpenVMS Guest


Because the guest is a virtual machine, the WBEM Providers running on an OpenVMS guest do not support WBEM instance data and event indications for CPU, memory, enclosure, chassis, fan, power supply, and management processor. These are supported by the WBEM providers running on the underlying VM Host operating system.

4.21.3 Restart cimserver.exe to Unload Providers on OpenVMS

After entering the cimprovider -r command, stop and restart the cimserver to complete the process of replacing a provider. (OpenVMS does not support unloading a dynamically loaded image.)

4.21.4 Use Quotes Around Command Line Options

Ensure that you use quotes around a command line option to preserve its case. For example,
$ cimmofl "-E" "--xml"
$ cimmof "-E" "--xml"

4.22 Monitor Utility Changes

The Monitor utility (MONITOR) has undergone several changes since OpenVMS Version 7.3-2. Most of these changes are related to providing improved formatting of the recording file and including additional class data. These changes have introduced some compatibility issues between data collected by one version of MONITOR that is subsequently processed by another version. This section discusses these issues.

4.22.1 Guest Operating System on Integrity VM


OpenVMS Integrity servers Version 8.4 supports running as a guest operating system on Integrity VM. When OpenVMS is running as a guest on an Integrity VM system, the Monitor utility indicates the amount of CPU time used by the guest, as well as the amount of CPU time allocated to the guest by Integrity VM.

The MONITOR MODES and MONITOR SYSTEM /ALL commands provide this information. When the system is running as a guest, these commands display "In use by Host" instead of "Compatibility Mode". Interpret this field as the amount of CPU time that was unavailable to the current guest because it was being used by other guests or by Integrity VM itself. The display is scaled based on the number of vCPUs (virtual CPUs) configured for the guest, irrespective of the actual number of physical CPUs in the host.
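The vCPU-based scaling can be sketched as follows. The helper function is purely illustrative and not part of the Monitor utility; it assumes mode time and measurement interval in the same units:

```python
def mode_percent(mode_time, interval, vcpus):
    """Express a processor-mode time as a percentage of the guest's
    vCPU-time for the interval, not of the host's physical CPU-time."""
    return 100.0 * mode_time / (interval * vcpus)
```

For instance, 1 second of "In use by Host" time over a 2-second interval shows as 50% on a one-vCPU guest but only 25% on a two-vCPU guest.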

                            OpenVMS Monitor Utility 
            +-----+         TIME IN PROCESSOR MODES 
            | CUR |              on node VMSG7 
            +-----+          5-FEB-2009 12:35:39.74 
                                     0         25        50        75       100 
                                     + - - - - + - - - - + - - - - + - - - - + 
 Interrupt State                     | 
                                     |         |         |         |         | 
 MP Synchronization                  | 
                                     |         |         |         |         | 
 Kernel Mode                         | 
                                     |         |         |         |         | 
 Executive Mode                      | 
                                     |         |         |         |         | 
 Supervisor Mode                     | 
                                     |         |         |         |         | 
 User Mode                        99 |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦ 
                                     |         |         |         |         | 
 In use By Host                    1 |                      
                                     |         |         |         |         | 
 Idle Time                           | 
                                     + - - - - + - - - - + - - - - + - - - - + 
                            OpenVMS Monitor Utility 
                               SYSTEM STATISTICS 
                                 on node VMSG9 
                            5-FEB-2009 12:36:44.88 
                                       CUR        AVE        MIN        MAX 
    Interrupt State                   0.00       0.12       0.00       0.33 
    MP Synchronization                0.00       0.00       0.00       0.00 
    Kernel Mode                       0.00       0.06       0.00       0.50 
    Executive Mode                    0.00       0.00       0.00       0.00 
    Supervisor Mode                   0.00       0.00       0.00       0.00 
    User Mode                        98.33      98.03      96.50      98.50 
    In use By Host                    1.66       1.77       1.33       3.33 
    Idle Time                         0.00       0.00       0.00       0.00 
    Process Count                    25.00      24.72      24.00      25.00 
    Page Fault Rate                   0.00      10.96       0.00      47.50 
    Page Read I/O Rate                0.00       0.96       0.00       3.16 
    Free List Size                46851.00   46945.54   46850.00   47105.00 
    Modified List Size              317.00     316.90     316.00     317.00 
    Direct I/O Rate                   0.00       1.37       0.00       5.50 
    Buffered I/O Rate                 1.00       2.68       0.66       9.83 


The data that is displayed when the MONITOR MODES and MONITOR SYSTEM /ALL commands are executed on a guest is the time that the guest spends on the virtual CPUs.

4.22.2 Version-to-Version Compatibility of MONITOR Data

Because the body of data MONITOR collects can change at each release, it is not always possible to view the MONITOR data collected in one version on a different version.

The level of compatibility between releases depends on whether you examine recorded binary data from a file (that is, playback) or live data from another cluster node. In general, playing back recorded data provides more compatibility than monitoring live remote data.

4.22.3 Playing Back Data from a Recording File

Each file of recorded MONITOR binary data is identified by a MONITOR recording file-structure level ID. You can see this ID by entering the DCL command DUMP /HEADER /PAGE on the file. The following table lists some recent MONITOR versions and their associated structure level IDs:

Operating System Version                           MONITOR Recording File Structure ID
OpenVMS Version 7.3-2 with remedial kit [1]        MON31050
OpenVMS Versions 8.2, 8.2-1 with remedial kit [1]  MON01060
OpenVMS Versions 8.3, 8.3-1H1, 8.4                 MON01060

[1] These remedial kits are proposed kits that might be issued for the sole purpose of providing improved compatibility.

Usually, to be able to play back a single MONITOR recording file, the last two digits of the structure level ID must match those of the running MONITOR version. For example, if you are running OpenVMS Version 7.3-2, you can play back a file from Version 7.3-2 but not one from Version 8.2.

However, MONITOR Versions 8.2 and higher are specially coded to read recording files with structure level IDs ending in "50." In addition, a utility in SYS$EXAMPLES, called MONITOR_CONVERT.C, converts a MONxx060 file to a MON31050 file. This allows the resulting file to be read by versions prior to Version 8.2. For instructions to build and run the program, see MONITOR_CONVERT.C.
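The playback rule above can be sketched as a small check, assuming structure level IDs of the form shown in the table (ending in "50" or "60"). The function is illustrative only, not part of MONITOR:

```python
def can_play_back(running_id, file_id):
    """Sketch of the playback rule: the last two digits of the recording
    file's structure level ID must match those of the running MONITOR,
    except that V8.2+ MONITOR (a "60" ID) also reads older "50" files."""
    if file_id[-2:] == running_id[-2:]:
        return True
    # Special case: Versions 8.2 and higher read "50" recording files.
    return running_id.endswith("60") and file_id.endswith("50")
```

So a Version 8.4 MONITOR (MON01060) can play back both MON01060 and MON31050 files, while a Version 7.3-2 MONITOR (MON31050) cannot read a MON01060 file without first converting it with MONITOR_CONVERT.C.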

Even though you can play back a file, certain MONITOR data classes within the file might not be available. This can happen if you are using an older MONITOR version to play back a file created by a newer MONITOR version.

When you produce a multifile summary from several recording files, all eight characters of the structure level ID from all the files must match.

4.23 System Parameters


This release also contains the new GH_RES_CODE_S2 parameter, which specifies the size, in pages, of the resident image code granularity hint region in 64-bit S2 space.

Only images linked with the /SEGMENT=CODE=P2 qualifier can have code placed in this region. For more information, see the HP OpenVMS Linker Utility Manual and the INSTALL utility in the HP OpenVMS System Manager's Manual.

GH_RES_CODE_S2 has the AUTOGEN and FEEDBACK attributes.

4.24 SYS$LDDRIVER Restriction


SYS$LDDRIVER.EXE is a freeware pseudo device driver that allows the OpenVMS operating system to create virtual disks. For OpenVMS Version 7.3-1 and succeeding versions, this driver was placed in SYS$COMMON:[SYS$LDR] to support the creation of the source virtual disk for mastering a CD or DVD using CDRECORD or COPY/RECORDABLE_MEDIA. This is the only supported use of this freeware driver. All other uses of this driver continue to be subject to the following documented freeware usage restrictions:

The OpenVMS Freeware is provided "as is" without a warranty. HP imposes no restrictions on its distribution or redistribution. HP does not provide support services for this software, does not fix the software, and does not guarantee that it works correctly.

4.25 CPU_POWER_MGMT Default Value Changed


The default value for the SYSGEN parameter CPU_POWER_MGMT has been restored to 1 (that is, on). An improved idle power-saving algorithm reduces interrupt latency while CPU_POWER_MGMT is on.

4.26 Booting A Satellite System with Reserved Memory


To use the SYSMAN reserved memory feature on an Integrity server satellite system, the file SYS$SYSTEM:VMS$RESERVED_MEMORY.DATA must allow world READ+EXECUTE access. Failure to set this access protection results in the warning when booting the satellite:


After running SYSMAN to add memory reservations to a satellite, execute SYS$MANAGER:CLUSTER_CONFIG_LAN.COM to set the correct protection on the VMS$RESERVED_MEMORY.DATA file. To set the protection, from the cluster configuration procedure "Main Menu" select:

3. CHANGE a cluster member's characteristics. 

From the "CHANGE Menu" select the following:

13. Reset an IA64 satellite node's boot environment file protections. 
    What is the satellite name (leave blank to use a specific device and root)? 

Enter the satellite name or satellite boot device and root for the system where you added the memory reservation. SYSMAN will be fixed in a later release to eliminate this condition.

4.27 SCACP Error Counter Reports Retransmit Errors


If the PEA0: device on the system shows a number of errors, these might be retransmits rather than actual errors. To verify, use the SCACP utility to confirm whether there are a large number of retransmits on the PEA0 channels, and use the LANCP utility to identify whether any actual device errors exist on the LAN devices that the PEdriver uses. If there are retransmits but no device errors, the PEA0: device errors are likely retransmits and not actual errors.

4.28 Virtual Connect

The following section pertains to Virtual Connect.

4.28.1 Failover and RECNXINTERVAL


The system parameter RECNXINTERVAL might need to be increased above the default of 20 seconds to allow time for Virtual Connect Manager failovers. This is especially true in larger clusters.

4.29 INITIALIZE/ERASE=INIT Before Using Media


HP recommends that you issue the DCL command INITIALIZE/ERASE=INIT on storage media prior to using them for the first time. This eliminates any stale data that was left from previous use by another operating system or diagnostics.

An indication of such stale data is three question marks (???) in the console command output, as shown in the following example:

Shell> ls fs1:\
Directory of: fs1:\
 00/00/07 19:16p     1,788,984,016 ??? 
 00/00/80 12:00a           0 ??? 
     2 File(s)  1,788,984,016 bytes 
     0 Dir(s) 

The problem will be corrected in a future release.

4.30 Performance Data Collector for OpenVMS (TDC)


TDC Version 2.3-20 is included in the OpenVMS Version 8.4 installation. TDC Version 2.3-20 is not qualified in MultiNet and TCPware environments.

4.31 Recovering From System Hangs or Crashes (Integrity servers Only)


If your system hangs and you want to force a crash, press Ctrl/P from the console. The method of forcing a crash dump varies depending on whether XDELTA is loaded.

If XDELTA is loaded, pressing Ctrl/P causes the system to enter XDELTA. The system displays the instruction pointer and the current instruction. You can force a crash from XDELTA by entering ;C, as shown in the following example:

Console Brk at 8068AD40 
8068AD40!       add      r16 = r24, r16 ;;  (New IPL = 3) 

If XDELTA is not loaded, pressing Ctrl/P a second time causes the system to prompt "Crash? (Y/N)". Entering Y causes the system to crash. Entering any other character has no effect on the system.

4.32 DECdtm/XA with Oracle 8i and 9i (Alpha Only)


If you use DECdtm/XA to coordinate transactions with the Oracle 8i/9i XA Compliant Resource Manager (RM), do not use the dynamic registration XA switch (xaoswd). The version of the Oracle shareable library that supports dynamic registration does not work. Always use the static registration XA switch (xaosw) to bind the Oracle RM to the DECdtm/XA Veneer.

The DECdtm/XA V2.1 Gateway now has clusterwide transaction recovery support. Transactions from applications that use a clusterwide DECdtm Gateway Domain Log can now be recovered from any single-node failure. Gateway servers running on the remaining cluster nodes can initiate the transaction recovery process on behalf of the failed node.

4.33 Device Unit Number Increased


In the past, OpenVMS would never create more than 10,000 cloned device units, and unit numbers would wrap after 9999. This had become a limitation for some devices, such as mailboxes or TCP/IP sockets.

Starting with OpenVMS Version 7.3-2, OpenVMS will create up to 32,767 devices if the DEV$V_NNM bit is clear in UCB$L_DEVCHAR2 and if bit 2 is clear in the DEVICE_NAMING system parameter. This does not require any device driver change.

However, programs and command procedures that are coded to assume a maximum device number of 9999 may need to be modified.
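The unit-number limit described above can be sketched as follows. The function and its boolean parameters are illustrative only; they model the documented conditions (DEV$V_NNM clear in UCB$L_DEVCHAR2, and bit 2 clear in the DEVICE_NAMING system parameter) rather than any actual OpenVMS data structure access:

```python
def max_unit_number(dev_nnm_clear, device_naming_bit2_clear):
    """Sketch of the documented rule: starting with OpenVMS V7.3-2,
    unit numbers may reach 32767 when both bits are clear; otherwise
    the historical limit of 9999 applies."""
    if dev_nnm_clear and device_naming_bit2_clear:
        return 32767
    return 9999
```

A command procedure that formats unit numbers into four characters, for example, would need updating once five-digit units such as MBA12345: become possible.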

4.34 EDIT/FDL: Fixing Recommended Bucket Size


Prior to OpenVMS Version 7.3, when running EDIT/FDL, the calculated bucket sizes were always rounded up to the closest disk-cluster boundary, with a maximum bucket size of 63. This could cause problems when the disk-cluster size was large, but the "natural" bucket size for the file was small, because the bucket size was rounded up to a much larger value than required. Larger bucket sizes increase record and bucket lock contention, and can seriously impact performance.

OpenVMS Version 7.3 or higher modifies the algorithms for calculating the recommended bucket size to suggest a more reasonable size when the disk cluster is large.
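The pre-V7.3 behavior described above can be sketched as follows; the function is an illustrative model of the documented rounding, not the actual EDIT/FDL code:

```python
def old_bucket_size(natural, cluster):
    """Pre-V7.3 behavior as described: round the natural bucket size up
    to the next disk-cluster boundary, capped at the 63-block maximum."""
    rounded = -(-natural // cluster) * cluster  # ceiling to a cluster multiple
    return min(rounded, 63)
```

This shows the problem: a file whose natural bucket size is 3 blocks gets a 16-block bucket on a disk with a 16-block cluster size, and the 63-block cap on a disk with a 64-block cluster size, inflating lock contention for no benefit.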

4.35 Using EFI$CP Utility not Recommended


The OpenVMS EFI$CP utility is presently considered undocumented and unsupported. HP recommends against using this utility. Certain privileged operations within this utility could render OpenVMS Integrity servers unbootable.

4.36 Error Log Viewer (ELV) Utility: TRANSLATE/PAGE Command


If a message is signaled while you are viewing a report using the /PAGE qualifier with the TRANSLATE command, the display might become corrupted. The workaround for this problem is to refresh the display using Ctrl/W.

If you press Ctrl/Z immediately after a message is signaled, the program abruptly terminates. The workaround for this problem is to scroll past the signaled message before pressing Ctrl/Z.
