The Question is:
I have a DS10 machine that I removed from a cluster to check standalone
performance. The system password was not valid, so I tried to change it using
the UAFALTERNATE method and then the "spawn" method from the web-based FAQ. On
both occasions I had a graphics terminal in use, and so neither method worked.
The "spawn" method seemed to freeze the system. I had to power-cycle the
system many times before I realised I needed a serial console. I eventually
changed the password using the UAFALTERNATE method, but confirmed that the
"spawn" method worked too (using the serial console).
When I reset the VAXCLUSTER parameter to 2 (or to 1, its original setting),
the system fails to complete the boot with an error: %SYSINIT-E-ERROR mounting
system device, status=0072832C. This error occurs just after it says "Now a
cluster member" and is followed by a BUGCHECK:
code = 0000036C: PROCGONE, Process not in system
Crash CPU: 00  Primary CPU: 00  Active CPUs: 1
Current Process = sysinit
Current PSB ID = 1
Image Name = sysinit.exe
The Answer is:
The two common errors in this case are:
DIFVOLMNT, different volume already mounted on this device
Facility: MOUNT, Mount Utility
Explanation: Previously, a different volume was mounted on this device
on another node in the cluster. The device may be in mount
verification on the other node. Either the original volume
was removed from the device and replaced with another, or
its volume identification was overwritten.
User Action: Restore the previously mounted volume to the device. If
this is not possible, dismount the device on all nodes that
currently have it mounted. Then retry the mount operation.
VOLALRMNT, another volume of same label already mounted
Facility: MOUNT, Mount Utility
Explanation: This message can occur under either of the following
conditions:
o A request was made to mount a volume that has the same
label as a volume already mounted. Shared, group, and
system volumes that are mounted concurrently must have
unique volume labels.
o A request was made to mount a volume that is already
mounted /GROUP for another group.
User Action: Take one of the following actions, as appropriate:
o Mount the volume as a private volume if it does not have to
be shared.
o Mount the volume as a private volume and change its label
using the DCL command SET VOLUME/LABEL. Then dismount the
volume and mount it as originally intended.
o Wait until the conflicting volume has been dismounted.
o If the volume is already mounted to another group, wait for
the volume to be dismounted from that group.
You can determine the status and ownership of a conflicting
volume by using the DCL command SHOW DEVICES/FULL/MOUNTED.
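The relabeling action above can be sketched in DCL as follows (the device
name DKA100: and the label NEWLABEL are illustrative assumptions, not taken
from the original report):

```
$ ! Mount the volume privately, overriding the label check
$ MOUNT/OVERRIDE=IDENTIFICATION DKA100:
$ ! Give the volume a unique label
$ SET VOLUME/LABEL=NEWLABEL DKA100:
$ ! Dismount, then remount as originally intended
$ DISMOUNT DKA100:
$ MOUNT/SYSTEM DKA100: NEWLABEL
```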
When booting standalone, you are in effect partitioning the cluster,
and will want to ensure correct values for the VAXCLUSTER, NISCS_LOAD_PEA0,
VOTES, and EXPECTED_VOTES system parameters, as cited in the OpenVMS FAQ.
Do ensure you reset the local copies of both VAXCLUSTER and NISCS_LOAD_PEA0
on the host being removed from the cluster, and do also ensure that VOTES
and EXPECTED_VOTES are set correctly on all nodes in the cluster, to avoid
any potential incidence of severe disk data corruption.
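One way to set these parameters on the host being removed is via SYSGEN;
a sketch, assuming the usual values for a non-clustered boot (confirm the
exact values against the OpenVMS FAQ for your configuration):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET VAXCLUSTER 0        ! do not attempt to join a cluster
SYSGEN> SET NISCS_LOAD_PEA0 0   ! do not load the NI cluster driver
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
```

A reboot is then required for the new values to take effect.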
In the typical case, an OpenVMS system will update the volume Storage
Control Block (SCB) with the current mount time whenever it mounts
a disk volume. In the case of a volume that is shared with a cluster,
and particularly one that is mounted locally on a partitioned node, this
SCB update will not be reflected in the SCB of the volume accessible to
any existing cluster members.
When the host system is eventually returned to the cluster, the other
existing cluster members can and often will still have an expectation
around the mount time, and will refuse to allow another volume to be
mounted with a conflicting value, producing the DIFVOLMNT error.
If the volume is entirely local to the standalone host, then the disk
data is probably still consistent as a correctly-configured cluster
could not have written to the disk. However, if the volume remained
available to the other cluster members, then you may well have corrupted
the volume, and will likely have to restore its contents from BACKUP.
To reboot this configuration, you will have to ensure all instances of
this volume have been dismounted from all cluster members.
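For example (the device name DKA100: is an illustrative assumption), you
might verify the volume's status and then clear any lingering mounts from
a cluster member before rebooting:

```
$ ! Check the status and ownership of the volume
$ SHOW DEVICES/FULL/MOUNTED DKA100:
$ ! Dismount it across the entire cluster
$ DISMOUNT/CLUSTER DKA100:
```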