HP OpenVMS Systems Documentation
HP OpenVMS Version 8.4 Release Notes
"SYS$TIMEZONE_RULE" = "CET-1CEST-2,M3.5.0/02,M10.4.0/03"
Timezone logical after this change:
"SYS$TIMEZONE_RULE" = "CET^1CEST^2,M3.5.0/02,M10.4.0/03"
Licenses with the "Virtual" option will load on OpenVMS cluster members running pre-V8.4 OpenVMS versions. This load will not affect the functioning on the guest systems, but it is recommended that /INCLUDE or /EXCLUDE lists must be used to prevent the load(s).
For information about licensing OpenVMS guests on Integrity VM, see the
HP OpenVMS License Management Utility Manual.
4.3 iSCSI Demo Kit not Supported
The iSCSI demo kit is no longer supported on OpenVMS Version 8.4. HP
recommends that you do not use the iSCSI demo kit on OpenVMS Version 8.4.
4.4 OpenVMS as a Guest Operating System on Integrity VM
OpenVMS Version 8.4 now supports HP Virtualization and can be installed as a guest operating system on HP Integrity Virtual Machines (Integrity VM). For more information about product-specific limitations, see the respective product documentation.
This section describes known problems and restrictions in the OpenVMS
guest on Integrity VM.
4.4.1 Shutdown Behavior Changes
When you execute the SYS$SYSTEM:SHUTDOWN.COM command procedure without specifying a reboot, the system always uses the "POWER_OFF" option. If the guest node is in a cluster, quorum is adjusted using the "REMOVE_NODE" option along with the "POWER_OFF" option.
A known consequence of using this option is that the virtual machine is shut down and must be restarted with the MP command "pc -on" in the virtual console, or alternatively by entering the following command on the host:
# hpvmstart -P <<OpenVMS guest name>>
4.4.2 Attached Devices Not Supported
The OpenVMS guest does not support attached devices such as CD/DVD
burners, media changers, and tape devices. If you want to use tape
devices, you can connect them to a physical system that is in a cluster
with the OpenVMS guest and have TMSCP serve the tape devices to the guest.
4.4.3 Networking or Storage Interface Support
The OpenVMS guest supports only the Accelerated Virtual I/O (AVIO) interface.
Integrity VM commands enable you to configure VIO devices to a guest,
which might not produce any apparent errors during startup. However,
VIO devices are not part of the supported configuration of a guest
running the OpenVMS operating system.
4.4.4 Known Limitation on HP-UX Guests and OpenVMS Guests Sharing the Same Virtual Switch
If you configure an HP-UX guest and an OpenVMS guest with the same virtual switch, the network communication between these guests will fail. This problem will be fixed in a future release of OpenVMS.
The workaround for this problem is to configure the HP-UX guest and
OpenVMS guest with different virtual switches.
4.4.5 Known Issue on OpenVMS Guest When vNICs are not Configured
If the vNICs (Virtual Network Interface Cards) on an OpenVMS guest are not configured and TCP/IP is started after the DECnet startup, the system crashes. HP recommends that you use the OpenVMS guest with at least one vNIC configured.
Without a vNIC, DECnet and TCP/IP can work individually on the OpenVMS guest.
4.5 HP Availability Manager Release Notes
This section describes the known issue with HP Availability Manager Version 3.1.
$ @SYS$STARTUP:AMDS$STARTUP RESTART
$ MC SYSGEN SET LAN_FLAGS 16
4.6 Provisioning OpenVMS Using HP SIM
The following release notes pertain to Provisioning OpenVMS Using HP
SIM, Version 4.0.
4.6.1 Provisioning OpenVMS Guest Limitation
Provisioning is not supported with OpenVMS as a guest operating system
on Integrity VM.
4.6.2 System Firmware
The system firmware version of the BL860c and BL870c servers must be at
4.21 or later. The system firmware version of the rx3600 and rx6600
servers must be at 4.11 or later.
4.6.3 Provisioning Multiple Servers
OpenVMS can be provisioned from an HP SIM Central Management Station,
an HP ProLiant server running Microsoft Windows.
4.6.5 InfoServer Name Length
The InfoServer name must be less than 12 characters long for
provisioning to work. This is a temporary restriction.
4.6.6 OpenVMS InfoServer and the Integrity servers on the Same LAN
The OpenVMS InfoServer and the Integrity servers must be on the same
local area network (LAN) to provision the server blade.
4.6.7 EFI Firmware
The EFI firmware for the BladeSystem must be version 5.0 or later.
4.6.8 Management Processor
The Management Processor must be running the Advanced iLO2 firmware.
4.6.9 Known Issues With Configuring OpenVMS TCP/IP Using Provisioning
If selected to be enabled on the target server, the TCP/IP server components BIND, LPD, LBROKER, and SMTP do not start up when OpenVMS TCP/IP is configured through Provisioning.
The workaround for this problem is to configure and restart these
services manually after configuring TCP/IP with Provisioning.
4.6.10 OpenVMS TCP/IP Provisioning Restrictions
The following are the known restrictions while configuring OpenVMS TCP/IP using Provisioning:
When using Provisioning to deploy OpenVMS, the AutoBoot Timeout value
for each target server must be set to at least 5 seconds. This
parameter can be configured through the EFI Boot Manager menu
(Boot Configuration -> AutoBoot Configuration -> Set AutoBoot Timeout).
4.7 OpenVMS Management using Insight Software
For more information about the Insight software, see the following website:
4.8 Performance Enhancements
The following performance enhancements have been made in the OpenVMS
Version 8.4 release.
4.8.1 Enhancements to Write Bitmaps
Write bitmaps (WBM) are used by OpenVMS Volume Shadowing during
minimerge and minicopy operations. Information about which blocks on a
disk are written is transmitted to the other nodes within the cluster.
The following updates have been made in this release.
4.8.1.1 WBM_MSG_INT Parameter Updates
The WBM_MSG_INT parameter indicates the time by which a SetBit message
can be delayed when it is in buffered mode. If the SetBit buffer does
not fill with SetBit messages by this time interval, then the message
is sent. The parameter is in milliseconds; however, the conversion
factor used for this timer was off by a factor of 10. Earlier, a
WBM_MSG_INT value of 10 resulted in a 100-millisecond delay in buffered
mode. This problem is corrected so that a value of 10 now results in
only a 10-millisecond delay.
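The corrected timer conversion can be pictured with a small sketch (Python; illustrative only, not OpenVMS source — the function name and the way the factor is applied are assumptions for the example):

```python
# Toy model of the WBM_MSG_INT delay calculation (illustrative only).
# The parameter value is in milliseconds; the conversion factor turns
# it into the actual delay applied in buffered mode.

def setbit_delay_ms(wbm_msg_int, conversion_factor):
    """Delay, in ms, before a buffered SetBit message is sent."""
    return wbm_msg_int * conversion_factor

# Before the fix the factor was off by 10: a value of 10 meant 100 ms.
delay_before_fix = setbit_delay_ms(10, 10)   # 100 ms
delay_after_fix = setbit_delay_ms(10, 1)     # 10 ms
```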
4.8.1.2 WBM_MSG_UPPER and WBM_MSG_LOWER Parameter Updates
WBM_MSG_UPPER is the threshold used to determine whether a switch to the buffered message mode should occur when operating in the single message mode. If WBM_MSG_UPPER or more SetBit operations are done in a 100-millisecond window, the messaging mode is switched to buffered mode. The default value is 80.
WBM_MSG_LOWER is the threshold used to determine whether a switch to the single message mode should occur when operating in the buffered message mode. If WBM_MSG_LOWER or fewer SetBit operations are done in a 100-millisecond window, the messaging mode is switched to single mode. The default value is 20.
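The switching between the two messaging modes can be sketched as follows (a toy Python model, not OpenVMS source; the function and mode names are illustrative):

```python
# Toy model of the single/buffered SetBit messaging mode switch.
# SetBit operations are counted in 100-millisecond windows.

WBM_MSG_UPPER = 80   # default: switch to buffered mode at this rate
WBM_MSG_LOWER = 20   # default: switch back to single mode at this rate

def next_mode(current_mode, setbits_in_window):
    """Return the messaging mode after one 100 ms window."""
    if current_mode == "single" and setbits_in_window >= WBM_MSG_UPPER:
        return "buffered"
    if current_mode == "buffered" and setbits_in_window <= WBM_MSG_LOWER:
        return "single"
    return current_mode
```

The gap between the two thresholds provides hysteresis, so a SetBit rate hovering near one threshold does not cause constant mode flipping.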
4.8.1.3 Asynchronous SetBit Messages
There can be multiple master bitmap nodes for a shadow set. Previously, SetBit messages were sent to the multiple master bitmap nodes synchronously: only when the response to the SetBit message was received from the first remote master bitmap node was the message sent to the next master bitmap node. When all of the remote master bitmap nodes had responded, the I/O was resumed.
SetBit messages are now sent to all the master bitmap nodes
asynchronously. The I/O operation is resumed when the responses from
all the master bitmap nodes are received. This reduces the stall time
of the I/O operation in the write bitmap code.
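The effect on I/O stall time can be illustrated with a toy calculation (Python; illustrative only — the round-trip times are invented):

```python
# Toy comparison of the I/O stall while SetBit messages are exchanged
# with several remote master bitmap nodes (illustrative only).

def stall_synchronous(round_trips_ms):
    # One node at a time: each response is awaited before the next send.
    return sum(round_trips_ms)

def stall_asynchronous(round_trips_ms):
    # All messages in flight at once: wait only for the slowest response.
    return max(round_trips_ms)

rtts = [2.0, 3.0, 2.5]   # hypothetical round trips to three nodes, in ms
```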
4.8.1.4 Reduced SetBit Messages for Sequential I/O
If sequential writes occur to a disk, they result in SetBit messages
that set sequential bits in the remote bitmap. The WBM code now
recognizes when a number of prior bits in the bitmap have already been
set. In this scenario, the WBM code sets additional bits ahead so that,
if the sequential writes continue, fewer SetBit messages are required.
Assuming the sequential I/O continues, the number of SetBit messages is
reduced by about a factor of 10, which improves the I/O rate for
sequential writes.
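The idea can be sketched as follows (a toy Python model, not the WBM implementation; the lookahead count is an assumption chosen to match the roughly tenfold reduction described above):

```python
# Toy model of SetBit message reduction for sequential writes.
# When a write finds its bit unset, one message sets a run of bits
# ahead of it, so subsequent sequential writes need no message.

def count_setbit_messages(blocks, lookahead):
    bits_set = set()
    messages = 0
    for block in blocks:
        if block not in bits_set:
            messages += 1
            bits_set.update(range(block, block + lookahead))
    return messages

# 100 sequential writes: one message per write without lookahead,
# about one message per ten writes with a lookahead of 10.
```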
4.8.2 Exception Handling Performance Improvements (Integrity servers Only)
Some performance improvements have been made to exception handling for OpenVMS Integrity server systems. The change reduces the overhead of exception handling in some, but not all, cases.
OpenVMS Version 8.4 caches the decoded unwind data. The cache is used in the user-callable calling standard routines during exception handling. These calling standard routines are also used in the RTLs to implement programming language constructs such as try/throw/catch in C++ and setjmp/longjmp in the C programming language.
In case of unexpected errors, the cache can be disabled temporarily
using the VMS system parameter KTK_D3. Its default value of zero
enables the cache; a value of one disables it. The special parameter
KTK_D3 may have been used by HP-supplied debug/test images. If you had
such test images on your system, make sure that the parameter is reset
to its default value of zero.
4.8.3 Image Activation (Integrity servers Only)
During image activation and over the life of the image, paging I/O
brings pages of the image into memory. On Integrity server systems, an
I-cache flush must be performed on these pages in case a page contains
code that is executed. Previously, this resulted in the I-cache flush
occurring on many pages that would never be executed. To avoid this,
the I-cache flush is now performed only when an instruction is first
executed on the page. This avoids the I-cache flush on pages that are
never executed and provides an overall system performance benefit.
4.8.4 Global Section Creation and Deletion
Performance improvements have been made to areas of the operating
system that create and delete various types of global sections. The
benefits of the changes will be seen on large SMP systems as a
reduction in MP Synch.
4.8.5 Dedicated CPU Lock Manager
The Dedicated CPU Lock Manager is a feature used on systems with 16 or
more CPUs and very high locking rates. Improvements have been made to
the Dedicated CPU Lock Manager that result in an increase in the rate
at which locking operations can be performed.
4.8.6 Ctrl/T Alignment Faults
A Ctrl/T operation at a terminal resulted in a number of alignment
faults. These have been corrected for OpenVMS Version 8.4.
4.9 Error and Warning Messages from ACPI During Boot
The following message might be displayed by VMS during boot on cell-based machines (for example, rx8640 or rx7640):
ACPI Error (utmutex-0430): Mutex  is not acquired, cannot release 
The following message might be displayed by VMS during boot on certain systems that have power management enabled (for example, an rx2660 with the latest processors):
ACPI Warning (nseval-0250): Excess arguments - method [_OST] needs 3, found 7 
These messages can be ignored. They will be fixed in a future release.
4.10 Large Device Name Support for Accounting Utility
The Accounting utility is modified to handle long device names. It can now display device names having seven characters or more, for example, a terminal (TNA) with a unit number greater than 9999, an MBA device with a unit number greater than 999, and other large device names such as TNA10000: and MBA1000:.
Earlier, the utility displayed arbitrary characters if a device name
exceeded seven characters. A new accounting record version (version 4)
is used to write new records into the ACCOUNTNG.DAT file, and the
utility can read and display these new records.
4.11 PAGED_LAL_SIZE New System Parameter
PAGED_LAL_SIZE sets the maximum size, in bytes, for which the paged
dynamic pool lookaside lists are used.
4.11.1 Paged Pool Lookaside Lists
Paged dynamic pool now allows the use of lookaside lists to increase system performance in some cases. It is controlled by the SYSGEN parameter PAGED_LAL_SIZE and is off (0) by default.
If the variable paged pool freelist becomes fragmented, you might benefit by enabling the use of these lookaside lists. The SYSGEN parameter PAGED_LAL_SIZE sets the maximum size, in bytes, to use these lookaside lists. Packets larger than this size will still be allocated from the variable paged pool freelist. A modest value, 512 bytes, might help systems performing intensive logical name creation and deletion operations.
Because the parameter is dynamic, it can be enabled, adjusted, or disabled as needed. If it is enabled and then lowered, there might be some packets on the paged pool lookaside lists that are no longer actively in use. These show up as "Over-limit Lookaside Blocks" in the DCL and SDA SHOW MEMORY/POOL/FULL displays. These packets were used before but are now larger than the new PAGED_LAL_SIZE. They will be used again if the SYSGEN parameter is increased to include them, or if there is a paged pool shortage and the packets are reclaimed from the lookaside lists.
To help prevent a runaway condition in which packets on a lookaside list start to consume most or all of paged pool, the paged pool lookaside lists are not used for packets in the last quarter of paged dynamic pool. If there is a paged pool memory shortage, packets on the lookaside lists are reclaimed as well.
If disabled, at the default value of 0, paged pool behaves as it did in
previous versions of OpenVMS, allocating and deallocating packets from
the paged pool variable freelist.
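The allocation fast path can be sketched as follows (a toy Python model of a size-keyed lookaside cache; it is not the pool implementation, and the class and method names are invented for the example):

```python
# Toy model of paged-pool lookaside lists in front of a variable
# freelist. Freed packets no larger than PAGED_LAL_SIZE are kept on a
# per-size lookaside list; a later allocation of that size pops the
# list instead of searching the variable freelist.

class PagedPoolModel:
    def __init__(self, paged_lal_size=512):
        self.paged_lal_size = paged_lal_size   # 0 disables the lists
        self.lookaside = {}                    # size -> free packets

    def deallocate(self, packet, size):
        if 0 < size <= self.paged_lal_size:
            self.lookaside.setdefault(size, []).append(packet)
        # larger packets would go back to the variable freelist

    def allocate(self, size):
        if 0 < size <= self.paged_lal_size and self.lookaside.get(size):
            return self.lookaside[size].pop()   # fast path
        return object()    # stand-in for a variable-freelist carve
```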
4.12 2 TiB Disk Volume Support Restrictions
OpenVMS Version 8.4 supports disk volumes up to 2 TiB in size with the following restrictions:
4.13 SAS Tape Drive Support
SAS tape drives can be named and configured using the same commands
that are used to configure Fibre Channel tape drives. For more
information, see Section 7.5, "Fibre Channel Tape Support," in the
Guidelines for OpenVMS Cluster Configurations.
4.14 External SAS Disk Device Naming
External SAS drives that are served by non-Smart Array controllers
can be configured as $3$DGA<UDID>, where UDID is the unique device ID
for the LUN. Fibre Channel disk device names use an allocation class
value of 1, whereas external SAS disk device names use an allocation
class value of 3 to differentiate a SAS device from a Fibre Channel
device.
4.15 External Authentication
This section contains release notes pertaining to external authentication. External authentication is an optional feature introduced in OpenVMS Version 7.1 that enables OpenVMS systems to authenticate designated users with their external user IDs and passwords. For information about using external authentication, see the HP OpenVMS Guide to System Security.
A special note for external authentication users.
If you are using the SYS$ACM-enabled LOGINOUT.EXE and SETP0.EXE (SET PASSWORD) images that support external authentication, note that an upgrade to OpenVMS Version 8.4 replaces these images with the standard images. To continue using external authentication, reinstall the SYS$ACM-enabled images from the ACMELOGIN kit.
For information on installing the ACMELOGIN kit, see SYS$HELP:ACME_DEV_README.TXT.
If you are using external authentication to authenticate users against a source other than SYSUAF.DAT and are using the password policy for customized password processing, you must restart the ACME server after the password policy shareable image is installed and the LOAD_PWD_POLICY system parameter is enabled.
Use the following command to restart the ACME Server:
$ SET SERVER ACME_SERVER /RESTART
The Advanced Server for OpenVMS V7.3A ECO4 (and later) product kit includes the standalone external authentication software for Integrity servers in an OpenVMS cluster.
If you want to enable NT LAN Manager external authentication on OpenVMS
Cluster member nodes running Integrity servers, copy the Integrity
servers standalone external authentication images from an Alpha system
on which the Advanced Server is installed to the Integrity servers
member node, and complete the setup as described in the Advanced Server
kit release notes.
4.15.3 SET PASSWORD Behavior Within a DECterm Terminal Session
A DECterm terminal session does not have access to the external user name used for login and must prompt for one during SET PASSWORD operations. The external user name defaults to the process's OpenVMS user name. If the default is not appropriate (that is, if the external user name and mapped OpenVMS user name are different), you must enter the correct external user name.
The following example shows a SET PASSWORD operation initiated by a user with the external user name JOHN_DOE. The mapped OpenVMS user name is JOHNDOE and is the default used by the SET PASSWORD operation. In this case, the default is incorrect and the actual external user name was specified by the user.
$ set password
External user name not known; Specify one (Y/N)[Y]? Y
External user name [JOHNDOE]: JOHN_DOE
Old password:
New password:
Verification:
%SET-I-SNDEXTAUTH, Sending password request to external authenticator
%SET-I-TRYPWDSYNCH, Attempting password synchronization
$
In the LAN Manager domain, a user cannot log in once a password expires.
PC users receive notification of impending external user password
expiration and can change passwords before they expire. However, when a
user logs in from an OpenVMS workstation using external authentication,
the login process cannot determine whether the external password is
about to expire. Therefore, sites that enforce password expiration and
whose users do not primarily use PCs can choose not to use external
authentication for workstation users.
4.15.5 Restriction in ACME_SERVER Process (Integrity servers only)
The SET SERVER ACME/CONFIG=THREAD_MAX command is ignored on Integrity servers for this release because only one worker thread is active.
Do not increase the number of threads on Integrity servers. Increasing the number of threads on Integrity servers might lead to an ACME_SERVER process crash and login failures.
Connecting a bridged device, such as the AD221 HP PCIe combo card, on the PCI bus where dump devices (DOSD) are configured on another HBA that is already connected might cause the PCI bus numbering of the dump devices to change, making it difficult to find the valid dump devices.
After connecting a new I/O card, validate the boot/dump option. Then,
refresh the DUMP_DEV and boot device list.
4.17 SHUTDOWN.COM Changes
SHUTDOWN.COM is modified to execute a pre-queue system shutdown
procedure, SYSHUTDWN_0010.COM, if it is present. The template contains
three sample routines that can help force the queue system to shut down
and restart or fail over faster.
4.18 OpenVMS Cluster Systems
The release notes in this section pertain to OpenVMS Cluster systems.
4.18.1 Cluster over IP (IP Cluster Interconnect)
HP OpenVMS Version 8.4 is enhanced with the Cluster over IP feature. This feature provides the ability to form clusters beyond a single LAN or VLAN segment using industry standard Internet protocol. It also provides improved disaster tolerant capability to OpenVMS clusters.
This section describes the known problems and restrictions in Cluster over IP.
4.18.1.1 Software Requirements
Cluster over IP is available only on OpenVMS Version 8.4 for Alpha and
Integrity servers. Cluster over IP also requires HP TCP/IP Services for
OpenVMS Version 5.7.
4.18.1.2 Integrity servers Satellite Node and Bootserver in the Same LAN
An Integrity server satellite node must be in the same LAN as its boot server for the satellite node to initialize cluster over IP successfully and to join the cluster successfully.
It is also necessary to have LAN cluster communication between the
Integrity servers satellite node and the boot server for the satellite
node to be able to initialize cluster over IP during the satellite
boot.
4.18.1.3 Alpha Satellite Node Requires LAN Channels With Disk Server
Alpha satellite boot fails in an IP only environment. That is, while booting an Alpha satellite, if all the nodes, including the boot servers, are using only IP channels for cluster communication, the satellite boot fails with the following message:
%VMScluster-W-PROTOCOL_TIMEOUT, NISCA protocol timeout
%VMScluster-I-REINIT_WAIT, Waiting for access to the system disk server
4.18.1.4 IPv6 Support
Cluster over IP does not support the IPv6 type address for cluster
communication.
4.18.1.5 Dynamic Host Configuration Protocol (DHCP) or Secondary Address Support
Cluster over IP requires that the addresses used for cluster
communication be static, primary addresses on the interface.
Furthermore, the IP address and interface used for cluster
communication must not be used for a Failsafe configuration.
4.18.1.6 Multiple IP Interface Configuration
If you configure multiple IP interfaces with the same default gateway,
loss of communication on any interface may disrupt cluster
communication, resulting in CLUEXITs.
4.18.1.7 ifconfig Command Usage
If the interface used for cluster communication is reactivated by
ifconfig, cluster communication with the other nodes is lost, and nodes
may exit the cluster with CLUEXIT.
4.18.1.8 Multiple Gateway Configuration
The Cluster over IP configuration information is stored in
configuration files that are loaded early during boot. This
configuration information also includes the default route or gateway
used by TCP/IP. Currently, only one default route can be entered in the
configuration file and used during node bootup.
4.18.1.9 Block Transfer XMIT Chaining
The PEdriver emulates each IP interface used for cluster communication similar to a LAN interface (BUS). An IP bus has the XChain_Disabled characteristic, as shown in the following example. This means that block transfer packets transmitted through TCP/IP are copied from the PEdriver to the TCP/IP buffers.
$ mc scacp show ip

NODEG PEA0 Device Summary  16-FEB-2009 12:29:15.92:
                Errors+           Mgt      Buffer MgtMax  Line     Total      Current
Device  Type    Events   Status  Priority  Size   BufSiz  Speed    Pkts(S+R)  IP Address
------  ----    ------   ------  --------  -----  ------  -----    ---------  ----------
IE0      184    Run Online    0    1394      0    N/A     1419711  126.96.36.199  XChain_Disabled
4.18.1.10 LANCP Requirement for Downline Load on Alpha
Cluster over IP requires LANCP, instead of DECnet, for downline load on
Alpha, because the changes related to configuring and enabling cluster
over IP are available only with CLUSTER_CONFIG_LAN.COM.
This restriction will be fixed in a future release.
4.18.1.11 Duplex Mismatch
A duplex mode mismatch, or a change in duplex mode from half to full on
the host, can result in CLUEXIT when IP is used for cluster
communication. It is recommended that you check for duplex mismatch
issues to avoid CLUEXITs.
4.18.1.12 Shared System Disk Upgrade
In a shared system disk configuration, during an upgrade from an earlier version of OpenVMS to Version 8.4, Cluster over IP can be enabled for the node on which the upgrade is being performed. However, on the other nodes, after the upgrade, execute the CLUSTER_CONFIG_LAN command procedure to enable Cluster over IP.
For example, consider systems PIPER and MARLIN with roots SYS0 and SYS1, respectively, on a shared system disk. If the upgrade is performed on node PIPER, Cluster over IP can be enabled on PIPER. To enable Cluster over IP on MARLIN, execute the CLUSTER_CONFIG_LAN command procedure.
This restriction will be removed in a future release.
4.18.1.13 Enhanced CLUSTER_CONFIG_LAN Command Procedure
CLUSTER_CONFIG_LAN command procedure is enhanced to configure Cluster over IP. This command procedure provides the ability to enable Cluster over IP and use IP for cluster communication.
The following message is displayed when a standalone node is added to a cluster using the command procedure:
"IA64 node, using LAN for cluster communications. PEDRIVER will be loaded. No other cluster interconnects are supported for IA64 nodes.".
Note that despite the message printed by the configuration procedure on an Integrity servers node, either LAN or IP or both can be used for cluster communication. LAN is enabled by default when the node's characteristic is changed to a cluster member. IP can be optionally enabled using the CLUSTER_CONFIG_LAN command procedure. PEdriver is loaded for both LAN and IP communications.
The CLUSTER_CONFIG_LAN command procedure message will be fixed in a future release.
4.18.2 OpenVMS Cluster Support for Integrity VM
OpenVMS for Integrity servers Version 8.4 is supported as a guest
operating system on Integrity VM. The OpenVMS guest can be configured
in a cluster.
4.18.2.1 Cluster Interconnect for OpenVMS Guest
The OpenVMS guest can use LAN, Cluster over IP (IPCI), or both to
communicate with other nodes in the cluster.
4.18.2.2 MSCP Support for Clusters in Integrity VM Environment
MSCP is used to provide shared storage capability in a cluster
consisting of OpenVMS guest systems.
4.18.2.3 Online Migration Support
Online migration of an OpenVMS guest that is part of a cluster is not
supported.
4.18.3 Mixed Platform Support
Integrity server satellite systems that use device naming (also known as port allocation classes) require an additional step to operate correctly in this release. On the satellite boot server node, edit the satellite's SYS$MEMORYDISK.DAT file on disk device in root SYSn, where:
device is the disk that contains the satellite's root.
n is the root of the satellite system.
Add the following line to the file:
You can ignore the "Do Not Edit" comment at the top of the file in this
case. The list of files in SYS$MEMORYDISK.DAT is not order-dependent.
This problem is expected to be resolved for the final release.
4.19 Mixed-version Cluster Compatibility of a Six-member Shadowset
OpenVMS Version 8.4 supports the "Extended Membership" volume shadowing feature. This feature allows shadowsets to have more than three and up to six-members. This feature is enabled when a fourth member is added to the shadowset. Following are some of the important points in a mixed-version OpenVMS cluster:
A new area of the Storage Control Block (SCB) of the disk stores the extended membership arrays required to support the "Extended Membership" shadowing feature. Therefore, an attempt to mount a six-member shadow set on earlier versions of OpenVMS works only if the members are specified on the command line (that is, a maximum of three members) or if the members are in the index 0, 1, or 2 (old) slots.
In earlier versions of OpenVMS, the $MOUNT/INCLUDE qualifier, which is
used for reconstructing the shadow set, can find only the existing
membership list and not the new membership area in the SCB. Hence, it
does not mount any members from the new extended membership area in the
SCB.
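The visibility rule can be pictured with a small sketch (Python; illustrative only — the slot layout and device names are hypothetical):

```python
# Toy model of which shadow set members a pre-V8.4 MOUNT can discover.
# Older versions read only the original three SCB membership slots
# (index 0, 1, 2); members in the extended area are invisible to them.

def members_visible_to_old_mount(members_by_slot):
    return [member for slot, member in sorted(members_by_slot.items())
            if slot <= 2 and member is not None]

six_member_set = {0: "$4$DGA1", 1: "$4$DGA2", 2: "$4$DGA3",
                  3: "$4$DGA4", 4: "$4$DGA5", 5: "$4$DGA6"}
```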
4.21 WBEM Services and WBEM Providers for OpenVMS
This section describes the known problems and restrictions in WBEM.
4.21.1 WBEM Services for OpenVMS Based on OpenPegasus 2.9
WBEM Services for OpenVMS Version 2.9 is based on the OpenPegasus 2.9
code stream of The Open Group's Pegasus open source project.
4.21.2 WBEM Providers Support for OpenVMS Guest
Because the guest is a virtual machine, the WBEM Providers running on
the OpenVMS guest do not support WBEM instance data and event
indications for CPU, memory, enclosure, chassis, fan, power supply, and
management processor. These are supported by the WBEM Providers running
on the underlying VM Host operating system.
4.21.3 Restart cimserver.exe to Unload Providers on OpenVMS
After entering the cimprovider -r command, stop and restart
the cimserver to complete the process of replacing a provider. (OpenVMS
does not support unloading a dynamically loaded image.)
4.21.4 Use Quotes Around Command Line Options
Ensure that you use quotes around a command line option to preserve its
case. For example,
$ cimmofl "-E" "--xml"
$ cimmof -E -xml
4.22 Monitor Utility Changes
The Monitor utility (MONITOR) has undergone several changes since
OpenVMS Version 7.3-2. Most of these changes are related to providing
improved formatting of the recording file and including additional
class data. These changes have introduced some compatibility issues
between data collected by one version of MONITOR that is subsequently
processed by another version. This section discusses these issues.
4.22.1 Guest Operating System on Integrity VM
OpenVMS Integrity servers Version 8.4 is supported as a guest operating system on Integrity VM. When OpenVMS is running as a guest on an Integrity VM system, the Monitor utility indicates the amount of CPU time used by the guest. The Monitor utility also indicates the amount of CPU time allocated to the guest by Integrity VM.
The MONITOR MODES and MONITOR SYSTEM /ALL commands provide this information. When the system is running as a guest, the above commands display "In use by Host" instead of "Compatibility Mode". This field is to be interpreted as the amount of CPU time that was unavailable to the current guest and that is being used by the other guests or Integrity VM. The display is scaled based on the number of vCPUs (Virtual CPUs) configured for the guest irrespective of the actual number of physical CPUs in the host.
$ MONITOR MODES

                         OpenVMS Monitor Utility
                        TIME IN PROCESSOR MODES            +-----+
                             on node VMSG7                 | CUR |
                        5-FEB-2009 12:35:39.74             +-----+

                         0         25        50        75       100
                         + - - - - + - - - - + - - - - + - - - - +
 Interrupt State         |         |         |         |         |
 MP Synchronization      |         |         |         |         |
 Kernel Mode             |         |         |         |         |
 Executive Mode          |         |         |         |         |
 Supervisor Mode         |         |         |         |         |
 User Mode            99 |¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦¦|
 In use By Host        1 |         |         |         |         |
 Idle Time               |         |         |         |         |
                         + - - - - + - - - - + - - - - + - - - - +

$ MONITOR SYSTEM/ALL

                         OpenVMS Monitor Utility
                     SYSTEM STATISTICS on node VMSG9
                        5-FEB-2009 12:36:44.88

                            CUR        AVE        MIN        MAX
    Interrupt State         0.00       0.12       0.00       0.33
    MP Synchronization      0.00       0.00       0.00       0.00
    Kernel Mode             0.00       0.06       0.00       0.50
    Executive Mode          0.00       0.00       0.00       0.00
    Supervisor Mode         0.00       0.00       0.00       0.00
    User Mode              98.33      98.03      96.50      98.50
    In use By Host          1.66       1.77       1.33       3.33
    Idle Time               0.00       0.00       0.00       0.00
    Process Count          25.00      24.72      24.00      25.00
    Page Fault Rate         0.00      10.96       0.00      47.50
    Page Read I/O Rate      0.00       0.96       0.00       3.16
    Free List Size      46851.00   46945.54   46850.00   47105.00
    Modified List Size    317.00     316.90     316.00     317.00
    Direct I/O Rate         0.00       1.37       0.00       5.50
    Buffered I/O Rate       1.00       2.68       0.66       9.83
The data that is displayed when MONITOR MODES and MONITOR SYSTEM /ALL commands are executed on a guest is the time that the guest spends on the virtual CPUs.
Because the body of data MONITOR collects can change at each release, it is not always possible to view the MONITOR data collected in one version on a different version.
The level of compatibility between releases depends on whether you
examine recorded binary data from a file (that is, playback) or live
data from another cluster node. In general, playing back recorded data
provides more compatibility than monitoring live remote data.
4.22.3 Playing Back Data from a Recording File
Each file of recorded MONITOR binary data is identified by a MONITOR recording file-structure level ID. You can see this ID by entering the DCL command DUMP /HEADER /PAGE on the file. The following table lists some recent MONITOR versions and their associated structure level IDs:
Operating System Version                           MONITOR Recording File Structure ID
OpenVMS Version 7.3-2 with remedial kit 1          MON31050
OpenVMS Versions 8.2, 8.2-1 with remedial kit 1    MON01060
OpenVMS Versions 8.3, 8.3-1H1, 8.4                 MON01060
Usually, to be able to play back a single MONITOR recording file, the last two digits of the structure level ID must match those of the running MONITOR version. For example, if you are running OpenVMS Version 7.3-2, you can play back a file from Version 7.3-2 but not one from Version 8.2.
However, MONITOR Versions 8.2 and higher are specially coded to read recording files with structure level IDs ending in "50." In addition, a utility in SYS$EXAMPLES, called MONITOR_CONVERT.C, converts a MONxx060 file to a MON31050 file. This allows the resulting file to be read by versions prior to Version 8.2. For instructions to build and run the program, see MONITOR_CONVERT.C.
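The compatibility rule can be sketched as follows (a toy Python check, not the MONITOR implementation; the function name is invented):

```python
# Toy check of MONITOR playback compatibility. The last two digits of
# the recording file's structure level ID must match those of the
# running MONITOR; V8.2 and later can additionally read "...50" files.

def can_play_back(file_id, running_id, running_is_v82_or_later=False):
    if file_id[-2:] == running_id[-2:]:
        return True
    return running_is_v82_or_later and file_id.endswith("50")

# A V7.3-2 MONITOR (MON31050) cannot play back a MON01060 file, while
# a V8.4 MONITOR (MON01060) can play back both IDs.
```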
Even though you can play back a file, certain MONITOR data classes within the file might not be available. This can happen if you are using an older MONITOR version to play back a file created by a newer MONITOR version.
When you produce a multifile summary from several recording files, all
eight characters of the structure level ID from all the files must match.
4.23 System Parameters
This release also contains the new GH_RES_CODE_S2 parameter, which specifies the size, in pages, of the 64-bit S2-space resident image code granularity hint region.
Only images linked with the /SEGMENT=CODE=P2 qualifier can have code placed in this region. For more information, see the HP OpenVMS Linker Utility Manual and the INSTALL utility in the HP OpenVMS System Manager's Manual.
GH_RES_CODE_S2 has the AUTOGEN and FEEDBACK attributes.
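As an illustrative sketch (the image name is hypothetical, and the copy to SYS$SYSTEM is assumed), an image whose code is to be placed in this region would be linked with the qualifier mentioned above and then installed resident:

```
$ LINK /SEGMENT=CODE=P2 MYIMAGE.OBJ          ! place code in the S2 region
$ INSTALL ADD SYS$SYSTEM:MYIMAGE.EXE /RESIDENT
```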
4.24 SYS$LDDRIVER Restriction
SYS$LDDRIVER.EXE is a freeware pseudo device driver that allows the OpenVMS operating system to create virtual disks. For OpenVMS Version 7.3-1 and later versions, this driver was placed in SYS$COMMON:[SYS$LDR] to support the creation of the source virtual disk for mastering a CD or DVD using CDRECORD or COPY/RECORDABLE_MEDIA. This is the only supported use of this freeware driver. All other uses of this driver continue to be subject to the following documented freeware usage restrictions:
The OpenVMS Freeware is provided as is without a warranty. HP imposes
no restrictions on its distribution or redistribution. HP does not
support services for this software, fix the software, or guarantee that
it works correctly.
4.25 CPU_POWER_MGMT Default Value Changed
The default value for the SYSGEN parameter CPU_POWER_MGMT has been
restored to 1 (that is, on). An improved idle power-saving algorithm
reduces interrupt latency while CPU_POWER_MGMT is on.
4.26 Booting A Satellite System with Reserved Memory
To use the SYSMAN reserved memory feature on an Integrity server satellite system, the file SYS$SYSTEM:VMS$RESERVED_MEMORY.DATA must allow world READ+EXECUTE access. Failure to set this protection results in the following warning when booting the satellite:
%VMS_LOADER-W-Warning: Unable to load file SYS$SYSTEM:VMS$RESERVED_MEMORY.DATA
After running SYSMAN to add memory reservations to a satellite, execute SYS$MANAGER:CLUSTER_CONFIG_LAN.COM to set the correct protection on the VMS$RESERVED_MEMORY.DATA file. To set the protection, from the cluster configuration procedure "Main Menu" select:
3. CHANGE a cluster member's characteristics.
From the "CHANGE Menu" select the following:
13. Reset an IA64 satellite node's boot environment file protections.

You are then prompted:

What is the satellite name (leave blank to use a specific device and root)?
Enter the satellite name or satellite boot device and root for the
system where you added the memory reservation. SYSMAN will be fixed in
a later release to eliminate this condition.
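Alternatively, the required world access can be granted directly with DCL, assuming no other boot environment attributes need resetting (a sketch; run this against the satellite's system root):

```
$ SET SECURITY /PROTECTION=(WORLD:RE) SYS$SYSTEM:VMS$RESERVED_MEMORY.DATA
$ SHOW SECURITY SYS$SYSTEM:VMS$RESERVED_MEMORY.DATA   ! verify world R+E access
```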
4.27 SCACP Error Counter Reports Retransmit Errors
If the PEA0: device on the system shows a number of errors, these errors might be retransmits and not actual errors. To verify, use the SCACP utility to confirm whether there are a number of retransmits on the PEA0 channels, and use the LANCP utility to identify whether any actual device errors exist on the LAN devices that PEdriver uses. If there are retransmits and no device errors, then the PEA0: device errors are likely retransmits and not actual errors.
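For example, the relevant counters can be examined as follows (exact SHOW qualifiers may vary by OpenVMS version):

```
$ MCR SCACP
SCACP> SHOW CHANNEL /COUNTERS      ! look for retransmit counts on PEA0 channels
SCACP> EXIT
$ MCR LANCP
LANCP> SHOW DEVICE /COUNTERS       ! look for actual errors on the LAN devices
LANCP> EXIT
```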
4.28 RECNXINTERVAL Adjustment for Virtual Connect Manager Failovers

RECNXINTERVAL might need to be increased above the default of 20 to
allow time for Virtual Connect Manager failovers. This is especially
true in larger clusters.
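RECNXINTERVAL is a dynamic system parameter, so it can be raised on a running system; the value 60 below is only an illustrative assumption:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET RECNXINTERVAL 60
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
```

To preserve the change across reboots, also add the setting to MODPARAMS.DAT and run AUTOGEN.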
4.29 INITIALIZE/ERASE=INIT Before Using Media
HP recommends that you issue the DCL command INITIALIZE/ERASE=INIT on storage media prior to using them for the first time. This eliminates any stale data that was left from previous use by another operating system or diagnostics.
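For example (the device name and volume label shown are hypothetical):

```
$ INITIALIZE /ERASE=INIT DKA100: USERDISK
```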
An indication of such stale data is three question marks (???) in the console command output, as shown in the following example:
Shell> ls fs1:\
Directory of: fs1:\

  00/00/07  19:16p     1,788,984,016  ???
  00/00/80  12:00a                 0  ???
          2 File(s)    1,788,984,016 bytes
          0 Dir(s)
The problem will be corrected in a future release.
4.30 Performance Data Collector for OpenVMS (TDC)
TDC Version 2.3-20 is included in the OpenVMS Version 8.4 installation.
TDC Version 2.3-20 is not qualified in the MultiNet and TCPWare environments.
4.31 Recovering From System Hangs or Crashes (Integrity servers Only)
If your system hangs and you want to force a crash, press Ctrl/P from the console. The method of forcing a crash dump varies depending on whether XDELTA is loaded.
If XDELTA is loaded, pressing Ctrl/P causes the system to enter XDELTA. The system displays the instruction pointer and the current instruction. You can force a crash from XDELTA by entering ;C , as shown in the following example:
$
Console Brk at 8068AD40

8068AD40!       add        r16 = r24, r16 ;;   (New IPL = 3)

;C
If XDELTA is not loaded, pressing Ctrl/P a second time
causes the system to prompt "Crash? (Y/N)". Entering Y causes
the system to crash. Entering any other character has no effect on the system.
4.32 DECdtm/XA with Oracle 8i and 9i (Alpha Only)
If you use DECdtm/XA to coordinate transactions with the Oracle 8i/9i XA Compliant Resource Manager (RM), do not use the dynamic registration XA switch (xaoswd). The version of the Oracle shareable library that supports dynamic registration does not work. Always use the static registration XA switch (xaosw) to bind the Oracle RM to the DECdtm/XA Veneer.
The DECdtm/XA V2.1 Gateway now has clusterwide transaction recovery
support. Transactions from applications that use a clusterwide DECdtm
Gateway Domain Log can now be recovered from any single-node failure.
Gateway servers running on the remaining cluster nodes can initiate the
transaction recovery process on behalf of the failed node.
4.33 Device Unit Number Increased
In the past, OpenVMS would never create more than 10,000 cloned device units, and unit numbers would wrap after 9999. This had become a limitation for some devices, such as mailboxes or TCP/IP sockets.
Starting with OpenVMS Version 7.3-2, OpenVMS will create up to 32,767 devices if the DEV$V_NNM bit is clear in UCB$L_DEVCHAR2 and if bit 2 is clear in the DEVICE_NAMING system parameter. This does not require any device driver change.
However, programs and command procedures that are coded to assume a
maximum device number of 9999 may need to be modified.
4.34 EDIT/FDL: Fixing Recommended Bucket Size
Prior to OpenVMS Version 7.3, when running EDIT/FDL, the calculated bucket sizes were always rounded up to the closest disk-cluster boundary, with a maximum bucket size of 63. This could cause problems when the disk-cluster size was large, but the "natural" bucket size for the file was small, because the bucket size was rounded up to a much larger value than required. Larger bucket sizes increase record and bucket lock contention, and can seriously impact performance.
OpenVMS Version 7.3 or higher modifies the algorithms for calculating
the recommended bucket size to suggest a more reasonable size when the
disk cluster is large.
4.35 Using EFI$CP Utility not Recommended
The OpenVMS EFI$CP utility is presently considered undocumented and
unsupported. HP recommends against using this utility. Certain
privileged operations within this utility could render OpenVMS
Integrity servers unbootable.
4.36 Error Log Viewer (ELV) Utility: TRANSLATE/PAGE Command
If a message is signaled while you are viewing a report using the /PAGE qualifier with the TRANSLATE command, the display might become corrupted. The workaround for this problem is to refresh the display using Ctrl/W.