HP OpenVMS Systems Documentation


HP OpenVMS Alpha Version 7.3--2 Release Notes


4.13.3 SCSI Tape Drives: MEDOFL Errors After Tape Dismount


Occasionally, a "%SYSTEM-F-MEDOFL, medium is offline" error can occur on the first command executed on a SCSI tape after the tape has been dismounted using a DISMOUNT/NOUNLOAD command. For example, this can happen when you attempt to initialize or mount the tape immediately after dismounting it. The error is returned because the tape is still rewinding as part of the dismount operation.

If the tape unit is a member of a multipath set, a path switch might occur (instead of a MEDOFL error) as part of multipath recovery. These MEDOFL errors and path switches are more likely to occur on certain models of SCSI tape drives such as the LTO-2 HP Ultrium 460.

If a path switch occurs on the first command after a tape DISMOUNT command, this indicates that the tape has recovered and no user action is required. If a MEDOFL error occurs, retry the failed command after the tape finishes rewinding. The SCSI tape driver will be modified in a future remedial kit to eliminate the need for such manual retries.
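Until the remedial kit is available, the retry can be scripted. The following DCL sketch (the device name MKA500 and label TAPE01 are hypothetical) dismounts a tape without unloading it and retries INITIALIZE until the rewind completes:

    $ DISMOUNT/NOUNLOAD MKA500:           ! Tape may still be rewinding
    $RETRY:
    $ SET NOON                            ! Handle errors in this procedure
    $ INITIALIZE MKA500: TAPE01
    $ STATUS_SAVE = $STATUS
    $ SET ON
    $ IF STATUS_SAVE THEN GOTO DONE       ! Success; the rewind had completed
    $ WAIT 00:00:10                       ! Allow the rewind time to finish
    $ GOTO RETRY
    $DONE:

A production procedure would bound the number of retries rather than loop indefinitely.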

4.13.4 CLUSTER_CONFIG.COM and Limits on Root Directory Names


This note updates Table 8-3 (Data Requested by CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM) in the OpenVMS Cluster Systems manual.

The documentation specifies a limit on the number of hexadecimal digits you can use for computers with direct access to the system disk. The limit is correct for VAX computers but not for Alpha computers.

The command procedure prompts for the following information:

Computer's root directory name on cluster system disk:

The documentation currently states:

Press Return to accept the procedure-supplied default, or specify a name in the form SYSx:

  • For computers with direct access to the system disk, x is a hexadecimal digit in the range of 1 through 9 or A through D (for example, SYS1 or SYSA)
  • For satellites, x must be in the range of 10 through FFFF

The limit on the range of hexadecimal values with direct access to the system disk is correct for VAX computers. For Alpha computers with direct access to the system disk, the valid range of hexadecimal values is much larger. It includes both the VAX range of 1 through 9 or A through D, and also the range 10 through FFFF. Note that SYSE and SYSF are reserved for system use.

The OpenVMS Cluster Systems manual will include this information in its next revision.

4.13.5 Booting Satellites Over FDDI in a Mixed-Version Cluster


Changes to OpenVMS Version 7.3 (or higher) may affect satellite booting over FDDI for satellites running versions of OpenVMS earlier than Version 7.3. The problem can occur when the system parameter NISCS_LAN_OVRHD is set to a value less than 6 (the default is 18), and the system parameter NISCS_MAX_PKTSZ is set for maximum size FDDI packets (4468). NISCS_LAN_OVRHD decreases the maximum packet size used for LAN communications to accommodate devices such as the DESNC (an Ethernet encryption device). For OpenVMS Version 7.3 or higher, NISCS_LAN_OVRHD is not used, so the maximum packet size is not reduced.

The problem is that the buffer used by the FDDI boot driver is 12 bytes too small. As a result, during the FDDI boot driver portion of the satellite boot, 12 bytes of incorrect data (often zeros) are typically interspersed throughout the images loaded during SYSBOOT. This generally results in an obscure failure or halt very early in the life of the system (measured in seconds).

The solution is to obtain a Boot Driver patch kit that corrects the problem and to install the patch on the satellite system root. Alternatively, on the systems serving the system disk to the satellite, ensure that the value of the system parameter NISCS_MAX_PKTSZ is at least 12 bytes less than the maximum FDDI packet size.
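The second workaround can be made permanent through MODPARAMS.DAT and AUTOGEN on each system serving the disk. Assuming the maximum FDDI packet size of 4468, an entry such as the following keeps NISCS_MAX_PKTSZ at least 12 bytes below that limit:

    ! In SYS$SYSTEM:MODPARAMS.DAT on each serving system:
    NISCS_MAX_PKTSZ = 4456        ! 4468 (maximum FDDI packet) - 12

    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT  ! Apply the new value and reboot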

The following systems are affected:

  • Alpha satellite using an FDDI adapter that is booting from an OpenVMS Version 7.3 or higher Alpha or VAX system whose NISCS_MAX_PKTSZ value is greater than 4456.
  • Alpha satellite using an FDDI adapter that is booting from a pre-OpenVMS Version 7.3 system, which is serving a system disk via FDDI, and the value of NISCS_MAX_PKTSZ minus NISCS_LAN_OVRHD is greater than 4456. The served system disk may be running OpenVMS Version 7.3 or higher, or an earlier version. The problem is more likely to occur if the system disk is Version 7.3 or higher, because NISCS_LAN_OVRHD is most likely set to 18 for prior versions.

4.13.6 PEdriver Error Message Change


During the final build of OpenVMS Version 7.3-2, it was discovered that a last-minute code change introduced a bug in the error message displayed when PEdriver is closing a virtual circuit. Prior to Version 7.3-2, the error message displayed the remote node name. For example:

%PEA0, Software is Closing Virtual Circuit - REMOTE NODE LARRY

The Version 7.3-2 message displays PEdriver's internally assigned number for the remote port instead of the remote node name. For example:

%PEA0, Software is Closing Virtual Circuit - REMOTE PORT 219

Unfortunately, there is no easy way to determine the mapping between remote port numbers and the name of the node associated with that numeric value.

This problem will be fixed in the next release.

4.13.7 PEdriver Channels with Priority of -128 Not Used


Starting with OpenVMS Version 7.3-2, a PEdriver channel whose priority is -128 will not be used for cluster communications. Therefore, you can disable cluster communications for a particular channel by using SCACP or the Availability Manager to set the channel's priority to -128.

A channel's priority is the sum of the management priorities assigned to the local LAN device and the channel itself. Therefore, you can assign any combination of channel and LAN device management priority values to achieve a total of -128.
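For example, assuming a remote node named LARRY (hypothetical), the channel to that node might be disabled with SCACP as follows; the exact SET CHANNEL qualifiers can vary by OpenVMS version:

    $ MC SCACP
    SCACP> SET CHANNEL LARRY /PRIORITY=-128  ! A total priority of -128
                                             ! disables the channel for
                                             ! cluster communications
    SCACP> SHOW CHANNEL LARRY                ! Verify the setting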

4.13.8 Cluster Performance Reduced with CI-LAN Circuit Switching


In rare cases, in an OpenVMS Cluster configuration with both CI and multiple FDDI, 100 Mb/s or Gb/s Ethernet-based circuits, you might observe that SCS connections are moving between CI and LAN circuits at intervals of approximately 1 minute. This frequent circuit switching can result in reduced cluster performance and may trigger mount verification of shadow set members.

PEdriver can detect and respond to LAN congestion that persists for a few seconds. When it detects a significant delay increase or packet losses on a LAN path, the PEdriver removes the path from use. When it detects that the path has improved, it begins using it again.

Under marginal conditions, the additional load on a LAN path resulting from its use for cluster traffic may cause its delay or packet losses to increase beyond acceptable limits. When the cluster load is removed, the path might appear to be sufficiently improved so that it will again come into use.

If a marginal LAN path's contribution to the LAN circuit's load class increases the circuit's load class above the CI's load class value of 140 when the marginal path is included (and, conversely, decreases the LAN circuit's load class below 140 when the path is excluded), SCS connections will move between CI and LAN circuits.

You can observe connections moving between LAN and CI circuits by using SHOW CLUSTER with the CONNECTION and CIRCUITS classes added.


If excessively frequent connection moves are observed, you can use one of the following workarounds:

  • You can use SCACP or the Availability Manager to assign a higher priority to the circuit or port you want to be used, thus overriding automatic connection assignment and movement.
    Examples of SCACP commands are:

    $ MC SCACP
    SCACP> SET PORT PNA0 /PRIORITY=2    ! This will cause circuits from local
                                        ! CI port PNA0 to be chosen over
                                        ! lower priority circuits.
    SCACP> SET PORT PEA0 /PRIORITY=2    ! This will cause LAN circuits to be
                                        ! chosen over lower priority circuits.
  • You can use the SCACP SHOW CHANNEL commands to determine which channels are being switched into or out of use. Then you can use SCACP to explicitly exclude a specific channel by assigning it a lower priority value than the desired channels.

    Note that CHANNEL and LAN device priority values of max and max-1 are considered equivalent; that is, both are treated as if they had the maximum priority value. A difference of 2 or more in priority values is necessary to exclude a channel or LAN device from use.
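A sketch of that procedure, with a hypothetical remote node LARRY, might look as follows:

    $ MC SCACP
    SCACP> SHOW CHANNEL                     ! Identify channels switching
                                            ! into and out of use
    SCACP> SET CHANNEL LARRY /PRIORITY=-2   ! A difference of 2 or more
                                            ! excludes this channel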

4.13.9 Gigabit Ethernet Switch Restriction in an OpenVMS Cluster System

Permanent Restriction

Attempts to add a Gigabit Ethernet node to an OpenVMS Cluster system over a Gigabit Ethernet switch will fail if the switch does not support autonegotiation. The DEGPA enables autonegotiation by default, but not all Gigabit Ethernet switches support autonegotiation.

In addition, the messages that are displayed may be misleading. If the node is being added using CLUSTER_CONFIG.COM and the option to install a local page and swap disk is selected, the problem may look like a disk-serving problem. The node running CLUSTER_CONFIG.COM displays the message "waiting for node-name to boot," while the booting node displays "waiting to tune system." The list of available disks is never displayed because of a missing network path. The network path is missing because of the autonegotiation mismatch between the DEGPA and the switch.

To avoid this problem, disable autonegotiation on the new node's DEGPA, as follows:

  • Perform a conversational boot when first booting the node into the cluster.
  • Set the new node's system parameter LAN_FLAGS to a value of 32 to disable autonegotiation on the DEGPA.
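On an Alpha console, the two steps above are typically combined in a single conversational boot; the boot device DKA0 below is hypothetical:

    >>> BOOT -FLAGS 0,1 DKA0       ! Conversational boot stops in SYSBOOT
    SYSBOOT> SET LAN_FLAGS 32      ! Disable autonegotiation on the DEGPA
    SYSBOOT> CONTINUE              ! Resume booting into the cluster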

4.13.10 Multipath Tape Failover Restriction


While the INITIALIZE command is in progress on a device in a Fibre Channel multipath tape set, multipath failover to another member of the set is not supported. If the current path fails while a multipath tape device is being initialized, retry the INITIALIZE command after the tape device fails over to a functioning path.

This restriction will be removed in a future release.

4.13.11 No Automatic Failover for SCSI Multipath Medium Changers


Automatic path switching is not implemented in OpenVMS Alpha Version 7.3-1 or higher for SCSI medium changers (tape robots) attached to Fibre Channel using a Fibre-to-SCSI tape bridge. Multiple paths can be configured for such devices, but the only way to switch from one path to another is to use manual path switching with the SET DEVICE/SWITCH command.
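Manual switching is done with the SET DEVICE/SWITCH command, naming the desired path. The device and path names below are hypothetical:

    $ SHOW DEVICE/FULL $2$GGA4:    ! List the configured paths
    $ SET DEVICE $2$GGA4: /SWITCH /PATH=PGA0.5000-1FE1-0011-B15D
    $ SHOW DEVICE/FULL $2$GGA4:    ! Confirm that the current path changed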

This restriction will be removed in a future release.

4.14 OpenVMS Galaxy

OpenVMS provides Galaxy support on AlphaServer ES47, ES80, and GS1280 systems. Galaxy support on these systems requires Version 6.6 firmware and may require additional Version 7.3-2 patch kits. The firmware can be obtained from the following website:


Eventually, the Version 6.6 firmware will also be available on CD-ROM.

The following sections contain release notes pertaining to OpenVMS Galaxy systems. Also see related notes in Section 6.5.

4.14.1 OpenVMS Graphical Configuration Manager

The OpenVMS Graphical Configuration Manager (GCM) is not supported for AlphaServer ES47/ES80/GS1280 Galaxy configurations at this time. However, the Graphical Configuration Utility (GCU) is supported. This restriction will be removed in the future.

4.14.2 Smart Array 5300 Restrictions

The Smart Array 5300 (KZPDC) Backplane Raid Controller is currently supported only as a data device in ES47/ES80/GS1280 Galaxy configurations. Boot and crash dump capability are not supported at this time on these controllers. The goal is to provide support with corrected firmware or corrected OpenVMS software.

For information about configuring a Galaxy on an AlphaServer ES47/ES80/GS1280 system, see the HP OpenVMS Alpha Partitioning and Galaxy Guide.

4.14.3 Firmware and Patch Kit Requirements

Hard partition support, which requires a firmware update and a patch kit, has been qualified and is now available on the AlphaServer ES47/ES80/GS1280 systems. The HP OpenVMS Alpha Partitioning and Galaxy Guide provides more information about the firmware and patch kit requirements and describes how to configure hard partitions on these systems.


The former limitation that restricted hard partitions to system building block boundaries has been removed. Hard partitions on subsystem building block boundaries are now supported, as described in the HP OpenVMS Alpha Partitioning and Galaxy Guide; note the constraints on hard partitions in subsystem building blocks described there. Hard partitions on ES47/ES80/GS1280 systems can support up to 64 processors.

4.14.4 Shared-Memory Global Section Creation Can Return Incorrect Status


Calls to SYS$CRMPSC_GDZRO_64 with the flag SEC$M_SHMGS can fail with status SS$_INFMEM instead of status SS$_INSF_SHM_REG.

The most likely explanation for this error is that the Galaxy shared-memory code has run out of internal SHM_REG data structures. To correct this condition, increase the value of the SYSGEN parameter GLX_SHM_REG and reboot all Galaxy instances with this larger parameter value.

Note that each SHM_REG data structure consumes only a small amount of memory. Therefore, you can safely increase this parameter to a relatively high number (for example, double the number of expected shared-memory regions) to avoid changing this parameter in small increments and having to reboot the entire Galaxy more than once.
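For example, the parameter can be raised with SYSGEN on each instance before the Galaxy-wide reboot; the value 128 below is illustrative only:

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT            ! Start from the current parameter file
    SYSGEN> SET GLX_SHM_REG 128    ! Illustrative value
    SYSGEN> WRITE CURRENT          ! Takes effect on the next reboot
    SYSGEN> EXIT

On AUTOGEN-managed systems, add the equivalent entry to MODPARAMS.DAT so the value survives future AUTOGEN runs.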

In a mixed-version cluster, driver kits VMS73_DRIVER-V0300 or later and VMS722_DRIVER-V0300 or later should be installed to avoid Galaxy shared-memory interconnect errors.

4.14.5 Galaxy on ES40: Uncompressed Dump Limitation

Permanent Restriction

On AlphaServer ES40 Galaxy systems, you cannot write a raw (uncompressed) dump from instance 1 if instance 1's memory starts at or above 4 GB (physical). Instead, you must write a compressed dump.

4.14.6 Galaxy on ES40: Turning Off Fast Path


When you implement Galaxy on an AlphaServer ES40 system, you must turn off Fast Path on instance 1. Do this by setting the SYSGEN parameter FAST_PATH to 0 on that instance.
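One way to make the setting permanent is a MODPARAMS.DAT entry on instance 1, followed by an AUTOGEN run:

    ! In SYS$SYSTEM:MODPARAMS.DAT on instance 1:
    FAST_PATH = 0                  ! Required for Galaxy on ES40 instance 1

    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT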

If you do not turn off Fast Path on instance 1, I/O on instance 1 will hang when instance 0 is rebooted. This hang will continue until the PCI bus is reset and instance 1 rebooted. If there is shared SCSI or Fibre Channel, I/O will hang on the sharing nodes and all paths to those devices will be disabled.

4.15 OpenVMS Management Station


Version 3.2B is the recommended version of OpenVMS Management Station for OpenVMS Alpha Version 7.3-2. However, OpenVMS Management Station is backward compatible with OpenVMS Version 6.2 and higher.

The OpenVMS Alpha Version 7.3-2 installation includes OpenVMS Management Station Version 3.2B, which is also available on the web.

4.16 OpenVMS Registry Can Corrupt Version 2 Format Database


If you create eight or more volatile subkeys in a key tree and then reboot a standalone system or a cluster, the OpenVMS Registry server can corrupt a Version 2 format Registry database when the server starts up after the reboot.

To avoid this problem, do one of the following:

  • Do not use volatile keys.
  • Use a Version 1 format database.

Note that Advanced Server for OpenVMS and COM for OpenVMS do not create volatile keys.

4.17 RMS Journaling

The following release notes pertain to RMS Journaling for OpenVMS.

For more information about RMS Journaling, refer to the RMS Journaling for OpenVMS Manual. You can access this manual on the OpenVMS Documentation CD-ROM (in the archived manuals directory).

4.17.1 Recovery Unit Journaling Incompatible with Kernel Threads


Because DECdtm Services is not supported in a multiple kernel threads environment and RMS recovery unit journaling relies on DECdtm Services, RMS recovery unit journaling is not supported in a process with multiple kernel threads enabled.

4.17.2 Modified Journal File Creation


Prior to Version 7.2, recovery unit (RU) journals were created temporarily in the [SYSJNL] directory on the same volume as the file that was being journaled. The file name for the recovery unit journal had the form RMS$process_id (where process_id is the hexadecimal representation of the process ID) and a file type of RMS$JOURNAL.

The following changes have been introduced to RU journal file creation in OpenVMS Version 7.2:

  • The files are created in node-specific subdirectories of the [SYSJNL] directory.
  • The file name for the recovery unit journal has been shortened to the form: YYYYYYYY, where YYYYYYYY is the hexadecimal representation of the process ID in reverse order.

These changes reduce the directory overhead associated with journal file creation and deletion.

The following example shows both the previous and current versions of journal file creation:

Previous versions: [SYSJNL]RMS$214003BC.RMS$JOURNAL;1
Current version: [SYSJNL.NODE1]CB300412.;1

If RMS does not find either the [SYSJNL] directory or the node-specific directory, RMS creates them automatically.

4.17.3 Remote Access of Recovery Unit Journaled Files in an OSI Environment


OSI nodes that host recovery unit journaled files that are to be accessed remotely from other nodes in the network must define SYS$NODE to be a Phase IV-style node name. The node name specified by SYS$NODE must be known to any remote node attempting to access the recovery unit journaled files on the host node. It must also be sufficiently unique for the remote node to use this node name to establish a DECnet connection to the host node. This restriction applies only to recovery unit journaled files accessed across the network in an OSI or mixed OSI and non-OSI environment.
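For example, the host node can define the logical system-wide; the Phase IV-style node name LARRY below is hypothetical:

    $ DEFINE/SYSTEM/EXECUTIVE_MODE SYS$NODE "LARRY::"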

4.17.4 After-Image (AI) Journaling


You can use after-image (AI) journaling to recover a data file that becomes unusable or inaccessible. AI recovery uses the AI journal file to roll forward a backup copy of the data file to produce a new copy of the data file at the point of failure.

In the case of either a process deletion or a system failure, an update can be written to the AI journal file but fail to reach the data file. If only AI journaling is in use, the data file and journal are not automatically made consistent. If additional updates are made to the data file and recorded in the AI journal, a subsequent roll-forward operation could produce an inconsistent data file.

If you use Recovery Unit (RU) journaling with AI journaling, the automatic transaction recovery restores consistency between the AI journal and the data file.

Under some circumstances, an application that uses only AI journaling can take proactive measures to guard against data inconsistencies after process deletions or system failures. For example, a manual roll forward of AI-journaled files ensures consistency after a system failure involving either an unshared AI application (single accessor) or a shared AI application executing on a standalone system.

However, in a shared AI application, there may be nothing to prevent further operations from being executed against a data file that is out of synchronization with the AI journal file after a process deletion or system failure in a cluster. Under these circumstances, consistency among the data files and the AI journal file can be provided by using a combination of AI and RU journaling.

4.17.5 VFC Format Sequential Files

VAX V5.0
Alpha V1.0

You cannot update variable-length with fixed-length control (VFC) sequential files when using before-image or recovery unit journaling. The VFC sequential file format is indicated by the symbolic value FAB$C_VFC in the FAB$B_RFM field of the FAB.

4.18 Security: Changes to DIRECTORY Command Output


In OpenVMS Version 7.1 and higher, if you execute the DCL command DIRECTORY/SECURITY or DIRECTORY/FULL for files that contain Advanced Server (PATHWORKS) access control entries (ACEs), the hexadecimal representation for each Advanced Server ACE is no longer displayed. Instead, the total number of Advanced Server ACEs encountered for each file is summarized in the message, "Suppressed n PATHWORKS ACEs."

To display the suppressed ACEs, use the SHOW SECURITY command. You must have the SECURITY privilege to display these ACEs. Note that, in actuality, the command displays OpenVMS ACEs, including the %x86 ACE that reveals the Windows NT® security descriptor information. The Windows NT security descriptor information pertains to the Advanced Server.
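For example, assuming a file MYFILE.TXT that carries Advanced Server ACEs:

    $ SET PROCESS/PRIVILEGES=SECURITY   ! SECURITY privilege is required
    $ SHOW SECURITY MYFILE.TXT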
