HP OpenVMS Systems Documentation
OpenVMS I/O User's Reference Manual
2.2.3.1 Port Selection and Access Modes
The operational condition of the drive cannot be changed with the port select switches after the drive becomes ready. To change from one mode to another, the drive must be in a nonrotating condition. After the new mode selection has been made, the drive must be restarted.
If a drive is in the neutral state and a disk controller either reads or writes to a drive register, the drive immediately connects a port to the requesting controller. For read operations, the drive remains connected for the duration of the operation. For write operations, the drive remains connected until a release command is issued by the device driver or a 1-second timeout occurs. After the connected port is released from its controller, the drive checks the other port's request flag to determine whether there has been a request on that port. If no request is pending, the drive returns to the neutral state.
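The connect/release behavior described above can be sketched as a small state machine. This is a hypothetical illustration: only the states and events come from the text, while the port names, method names, and return values are invented for the sketch.

```python
class DualPortedDrive:
    """Toy model of the dual-port arbitration described in the text."""

    NEUTRAL = "neutral"

    def __init__(self):
        self.connected_port = self.NEUTRAL      # "A", "B", or neutral
        self.request_flag = {"A": False, "B": False}

    def access_register(self, port):
        """A controller on `port` reads or writes a drive register."""
        if self.connected_port == self.NEUTRAL:
            self.connected_port = port          # drive connects immediately
            return True
        if self.connected_port != port:
            self.request_flag[port] = True      # remember the pending request
            return False
        return True

    def release(self):
        """Release command from the device driver (or the 1-second timeout)."""
        released = self.connected_port
        other = "B" if released == "A" else "A"
        if self.request_flag[other]:            # service the waiting port next
            self.request_flag[other] = False
            self.connected_port = other
        else:
            self.connected_port = self.NEUTRAL  # no request pending
        return self.connected_port


drive = DualPortedDrive()
drive.access_register("A")      # port A connects
drive.access_register("B")      # port B must wait; its request flag is set
print(drive.release())          # the drive switches to port B
```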
The Autoconfigure utility currently may not be able to locate the nonactive port. For example, if a dual-ported disk drive is connected and responding at Port A, the CPU attached to Port B might not be able to find Port B with the Autoconfigure utility. If this problem occurs, execute the AUTOCONFIGURE ALL/LOG command after the system is running.
Do not use SYSGEN to AUTOCONFIGURE or CONFIGURE a dual-ported, non-DSA disk that is already available on the system through use of an MSCP server. Establishing a local connection to the disk when a remote path is already known creates two uncoordinated paths to the same disk. Use of these two paths may corrupt files and data on any volume mounted on the drive.
In a cluster, dual-ported non-DSA disks (MASSBUS or UNIBUS) can be connected between two nodes of the cluster. These disks can also be made available to the rest of the cluster using the MSCP server on either or both of the hosts to which a disk is connected.
If the local path to the disk is not found during the bootstrap, then the MSCP server path from the other host will be the only available access to the drive. The local path will not be found during a boot if any of the following conditions exist:
Use of the disk is still possible through the MSCP server path.
After the configuration of the disk has reached this state, it is important not to add the local path back into the system I/O database. Because the operating system does not provide an automatic method for adding this local path, the only way to add it is to use the System Generation utility (SYSGEN) commands AUTOCONFIGURE or CONFIGURE to configure the device. However, SYSGEN is currently not able to detect the presence of the disk's MSCP path, and it incorrectly builds a second set of data structures to describe the disk. Subsequent events could lead to incompatible and uncoordinated file operations, which might corrupt the volume.
To recover the local path to the disk, it is necessary to reboot the system connected to that local path.
A dual-ported DSA disk can be failed over between the two CPUs that serve it to the cluster under the following conditions: (1) the same disk controller letter and allocation class are specified on both CPUs and (2) both CPUs are running the MSCP server.
However, because a DSA disk can be on line to only one controller at a time, only one of the CPUs can use its local connection to the disk. The second CPU accesses the disk through the MSCP server. If the CPU that is currently serving the disk fails, the other CPU detects the failure and fails the disk over to its local connection. The disk is thereby made available to the cluster once more.
2.2.4 Dual-Porting HSC Disks
By design, HSC disks are cluster accessible. Therefore, if they are dual-ported, they are automatically dual-pathed. CI-connected CPUs can access a dual-pathed HSC disk by way of a path through either HSC-connected device.
For each dual-ported HSC disk, you can control failover to a specific port using the port select buttons on the front of each drive. By pressing either port select button (A or B) on a particular drive, you can cause the device to fail over to the specified port.
With the port select button, you can select alternate ports to balance the disk controller workload between two HSC subsystems. For example, you could set half of your disks to use port A and set the other half to use port B.
The port select buttons also allow you to fail over all the disks to an alternate port manually when you anticipate the shutdown of one of the HSC subsystems.
2.2.5 Dual-Pathed DSSI Disks
In a dual-path configuration of MicroVAX 3300/3400 CPUs or MicroVAX 3800/3900 CPUs using RF-series disks, CPUs have concurrent access to any disk on the DSSI bus. A single disk is accessed through two paths and can be served to all satellites by either CPU.
If either CPU fails, satellites can access their disks through the remaining CPU. Note that failover occurs in the following situations: (1) when the DSSI bus is connected between SII integral adapters on both MicroVAX 3300/3400 CPUs or (2) when the DSSI bus is connected between the KFQSA adapters on pairs of MicroVAX 3300/3400s or pairs of MicroVAX 3800/3900s.
2.2.6 Data Check
Disk drivers support data checks at the following levels:
Offset recovery is performed during a data check, but error code correction (ECC) is not performed (see Section 2.2.9). For example, if a read operation is performed and an ECC correction is applied, the data check fails even though the data in memory is correct. In this case, the driver returns a status code indicating that the operation completed successfully, but the data check could not be performed because of the ECC correction.
Data checks on read operations are extremely rare, and you can either accept the data as is, treat the ECC correction as an error, or accept the data but immediately move it to another area on the disk volume.
A data check operation directed to a TU58 does not compare the data in memory with the data on tape. Instead, either a read check or a write check operation is performed (see Sections 2.4.1 and 2.4.2).
The operating system ensures that when an I/O write operation returns a successful completion status, the data is available on the disk or tape media. Applications that must guarantee the successful completion of a write operation can verify that the data is on the media by specifying the data check function modifier IO$M_DATACHECK. Note that the IO$M_DATACHECK data check function, which compares the data in memory with the data on disk, affects performance because the function incurs the overhead of an additional read operation to the media.
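The cost and benefit of the data check can be illustrated with a toy model. This is not the driver's implementation: the media dictionary, function names, and the use of the condition names SS$_NORMAL and SS$_DATACHECK as plain strings are all stand-ins for the sketch.

```python
def write_block(media, lbn, data, datacheck=False):
    """Write `data` at logical block `lbn`; optionally verify it."""
    media[lbn] = bytes(data)              # the write itself
    if datacheck:
        readback = media[lbn]             # IO$M_DATACHECK's extra read
        if readback != bytes(data):
            return "SS$_DATACHECK"        # readback mismatch: check failed
    return "SS$_NORMAL"


class CorruptingMedia(dict):
    """Contrived media whose reads return garbage, to show a failed check."""
    def __getitem__(self, lbn):
        return b"garbled"


media = {}
print(write_block(media, 0, b"important record", datacheck=True))   # check passes
print(write_block(CorruptingMedia(), 1, b"payload", datacheck=True))  # check fails
```

Without the `datacheck` flag, the second write would also report success; the extra read is what detects that the media did not retain the data.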
If a system failure occurs while a multiple-block write operation is in progress, the operating system does not guarantee the successful completion of the write operation. (OpenVMS does guarantee single-block write operations to DSA drives.) When a failure interrupts a write operation, the data may be left in any one of the following conditions:
To guarantee that a write operation either finishes successfully or (in the event of failure) is redone or rolled back as if it were never started, use additional techniques to ensure data correctness and recovery. For example, using database journaling and recovery techniques allows applications to recover automatically from failures such as the following:
2.2.8 Overlapped Seeks
A seek operation involves moving the disk read/write heads to a specific place on the disk without any transfer of data. All transfer functions, including data checks, are preceded by an implicit seek operation (except when the seek is inhibited by the physical I/O function modifier IO$M_INHSEEK). Seek operations can be overlapped except on RL02, RX01, RX02, and TU58 drives, on the MicroVAX 2000 and VAXstation 2000, or on controllers with floppy disks (for example, the RQDX3) when the disk is executing I/O requests. That is, when one drive performs a seek operation, any number of other drives can also perform seek operations.
During the seek operation, the controller is free to perform transfers on other units. Therefore, seek operations can also overlap data transfer operations. For example, at any one time, seven seeks and one data transfer could be in progress on a single controller.
This overlapping is possible because, unlike I/O transfers, seek operations do not require the controller once they are initiated. Therefore, seeks are initiated before I/O transfers and other functions that require the controller for extended periods.
All DSA controllers perform extensive seek optimization functions as part of their operation; IO$M_INHSEEK has no effect on these controllers.
2.2.9 Error Recovery
The error recovery algorithm uses a combination of these four types of error recovery operations to complete an I/O operation:
2.2.9.1 Skip Sectoring
Skip sectoring is a bad block treatment technique implemented on R80 disk drives (the RB80 and RM80 drives). In each track of 32 sectors, one sector is reserved for bad block replacement. Consequently, an R80 drive has available only 31 sectors per track. The Get Device/Volume Information ($GETDVI) system service returns this value.
You can detect bad blocks when a disk is formatted. Most formatters place these blocks in a bad block file. On an R80 drive, the first bad block encountered on a track is designated as a skip sector. This is accomplished by setting a flag in the sector header on the disk and placing the block in the skip sector file.
When a skip sector is encountered during a data transfer, it is skipped over, and all remaining blocks in the track are shifted by one physical block. For example, if block number 10 is a skip sector, and a transfer request was made beginning at block 8 for four blocks, then blocks 8, 9, 11, and 12 will be transferred. Block 10 will be skipped.
Because skip sectors are implemented at the device driver level, they are not visible to you. The device appears to have 31 contiguous sectors per track. Sector 32 is not directly addressable, although it is accessed if a skip sector is present on the track.
2.2.10 Logical-to-Physical Translation (RX01 and RX02)
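The shift described above can be captured in a few lines. This is a hypothetical helper, not driver code; the real mapping is performed inside the device driver and is invisible to applications.

```python
def physical_sector(logical, skip_sector=None):
    """Map a logical sector on one R80 track (0-30) to a physical sector.

    `skip_sector` is the position flagged as this track's skip sector,
    if any. Every block at or beyond it shifts by one physical block,
    so the reserved 32nd sector absorbs the slip.
    """
    if skip_sector is not None and logical >= skip_sector:
        return logical + 1
    return logical


# The example from the text: block 10 is a skip sector, so a four-block
# transfer starting at block 8 touches physical blocks 8, 9, 11, and 12.
print([physical_sector(b, skip_sector=10) for b in range(8, 12)])   # [8, 9, 11, 12]
```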
Logical-block-to-physical-sector translation on RX01 and RX02 drives adheres to the standard format. For each 512-byte logical block selected, the driver reads or writes four 128-byte physical sectors (or two 256-byte physical sectors if an RX02 is in double-density mode). To minimize rotational latency, the physical sectors are interleaved. Interleaving allows the processor time to complete a sector transfer before the next sector in the block reaches the read/write heads. To allow for track-to-track switch time, the next logical sector that falls on a new track is skewed by six sectors. (There is no interleaving or skewing on read physical block and write physical block I/O operations.) Logical blocks are allocated starting at track 1; track 0 is not used.
The translation procedure, in more precise terms, is as follows:
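The scheme can be sketched as follows. This is a reconstruction under stated assumptions, not the driver's code: it assumes the single-density RX01 geometry (26 sectors of 128 bytes per track), a 2:1 interleave, and the 6-sector track-to-track skew named above; the exact constants on real media may differ.

```python
SECTORS_PER_TRACK = 26   # assumed: RX01 single density, 128-byte sectors
SECTORS_PER_BLOCK = 4    # four 128-byte sectors per 512-byte logical block


def block_to_sectors(lbn):
    """Return the (track, sector) pairs holding one 512-byte logical block.

    Logical sectors are laid out with a 2:1 interleave within a track,
    then each track is skewed by 6 sectors relative to the previous one.
    Track 0 is reserved, and physical sectors are numbered from 1.
    """
    sectors = []
    for n in range(SECTORS_PER_BLOCK):
        lsn = lbn * SECTORS_PER_BLOCK + n
        track, i = divmod(lsn, SECTORS_PER_TRACK)
        interleaved = (2 * i) % SECTORS_PER_TRACK + (i // 13)   # 2:1 interleave
        skewed = (interleaved + 6 * track) % SECTORS_PER_TRACK  # 6-sector skew
        sectors.append((track + 1, skewed + 1))   # track 0 unused; 1-based
    return sectors


print(block_to_sectors(0))   # first block lands on alternating sectors of track 1
```

Because the interleave mapping is a bijection on the 26 sectors of a track and the skew only rotates it, every logical sector still gets a distinct physical sector.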
2.2.11 DIGITAL Storage Architecture (DSA) Devices
Because the operating system supports all DSA disks, it supports all controller-to-host aspects of DSA. Some of these disks, such as the RA60, RA80, and RA81, use the standard drive-to-controller specifications. Other disks, such as the RC25, RD51, RD52, RD53, and RX50, do not. Disk systems that use the standard drive-to-controller specifications employ the same hardware connections and use the HSC50, KDA50, KDB50, and UDA50 interchangeably. Disk systems that do not use the drive-to-controller specifications provide their own internal controller, which conforms to the controller-to-host specifications.
DSA disks differ from MASSBUS and UNIBUS disks in the following ways:
2.2.11.1 Bad Block Replacement and Forced Errors for DSA Disks
Disks that are built according to the DSA specifications appear to be error free. Some number of logical blocks are always capable of recording data. When a disk is formatted, every user-addressable logical block is mapped to a functioning portion of the actual disk surface, which is known as a physical block. The physical block has the true data storage capacity represented by the logical block.
Additional physical blocks are set aside to replace blocks that fail during normal disk operations. These extra physical blocks are called replacement blocks. Whenever a physical block to which a logical block is mapped begins to fail, the associated logical block is remapped (revectored) to one of the replacement blocks. The process that revectors logical blocks is called a bad block replacement operation. Bad block replacement operations use data stored in a special area of the disk called the Replacement and Caching Table (RCT).
When a drive-dependent error threshold is reached, the need for a bad block replacement operation is declared. Depending on the controller involved, the bad block replacement operation is performed either by the controller itself (as is the case with HSCs) or by the host (as is the case with UDAs). In either case, the same steps are performed. After inspecting and altering the RCT, the failing block is read and its contents are stored in a reserved section of the RCT.
The design goal of DSA disks is that this read operation proceeds without error and that the RCT copy of the data is correct (as it was originally written). The failing block is then tested with one or more data patterns. If no errors are encountered in this test, the original data is copied back to the original block and no further action is taken. If the data-pattern test fails, the logical block is revectored to a replacement block. After the block is revectored, the original data is copied back to the revectored logical block. In all these cases, the original data is preserved and the bad block replacement operation occurs without the user being aware that it happened.
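The replacement sequence above, together with the unreadable-block case described in the next paragraph, can be sketched as a simplified model. Everything here is a stand-in: the drive object, its method names, and the returned strings are invented for the sketch; only the order of the steps follows the text.

```python
class FakeDrive:
    """Stand-in for the controller/host operations named in the text."""

    def __init__(self, readable=True, passes_test=False):
        self.readable = readable          # can the failing block be read?
        self.passes_test = passes_test    # does the data-pattern test pass?
        self.rct = {}                     # Replacement and Caching Table
        self.blocks = {}
        self.forced_error = set()

    def read_raw(self, lbn):
        return b"best-attempt data", self.readable

    def rct_save(self, lbn, data):
        self.rct[lbn] = data              # reserved section of the RCT

    def pattern_test(self, lbn):
        return self.passes_test

    def revector(self, lbn):
        pass                              # remap lbn to a replacement block

    def write(self, lbn, data, forced_error=False):
        self.blocks[lbn] = data
        if forced_error:
            self.forced_error.add(lbn)    # later reads report a forced error


def replace_bad_block(drive, lbn):
    data, read_ok = drive.read_raw(lbn)        # save contents into the RCT
    drive.rct_save(lbn, data)
    if read_ok and drive.pattern_test(lbn):
        drive.write(lbn, data)                 # block still healthy: restore in place
        return "restored"
    drive.revector(lbn)                        # map lbn to a replacement block
    if read_ok:
        drive.write(lbn, data)                 # original data preserved
        return "revectored"
    drive.write(lbn, data, forced_error=True)  # best-attempt data, flag set
    return "revectored, forced error"


print(replace_bad_block(FakeDrive(readable=False), 42))
```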
However, if the original data cannot be read from the failing block, a best-attempt copy of the data is stored in the RCT and the bad block replacement operation proceeds. When the time comes to write back the original data, the best-attempt data (stored in the RCT) is written back with the forced error flag set. The forced error flag is a signal that the data read is questionable. Reading a block that contains a forced error flag causes the status SS$_FORCEDERROR to be returned. This status is displayed by the following message:
%SYSTEM-F-FORCEDERROR, forced error flagged in last sector read
Note that most utilities and DCL commands treat the forced error flag as a fatal error and terminate operation when they encounter it. However, the Backup utility (BACKUP) continues to operate in the presence of most errors, including the forced error. BACKUP continues to process the file, and the forced error flag is lost. Thus, data that was formerly marked as questionable may become correct data.
System managers (and other users of BACKUP) should assume that forced errors reported by BACKUP signal possible degradation of the data.
To determine what, if any, blocks on a given disk volume have the forced error flag set, use the ANALYZE /DISK_STRUCTURE /READ_CHECK command, which invokes the Verify utility. The Verify utility reads every logical block allocated to every file on the disk and then reports (but ignores) any forced error blocks encountered.