HP OpenVMS Systems Documentation
Upgrading Privileged-Code Applications on OpenVMS Alpha and OpenVMS I64 Systems
2.1.2 Changes Not Identified by Warning Messages
A few necessary source changes might not be immediately identified by compile-time or link-time warnings.
2.2 I/O Changes
This section describes OpenVMS Alpha Version 7.0 changes to the I/O subsystem that might require source changes to device drivers.
As described in Section A.9, the I/O Request Packet Extension (IRPE) structure now manages a single additional locked-down buffer instead of two. The general approach to deal with this change is to use a chain of additional IRPE structures.
Current users of the IRPE may depend on the fact that a buffer locked for direct I/O could be fully described by the irp$l_svapte, irp$l_boff, and irp$l_bcnt values. For example, it is not uncommon for an IRPE to be used in this fashion:
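Because the original listing is not reproduced here, the following is a minimal sketch of that pre-V7.0 pattern. The structure and routine names are simplified stand-ins for illustration, not the real OpenVMS definitions:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-ins for the IRPE cells discussed in the text; real
 * drivers use the system-supplied structure definitions. */
typedef struct {
    void    *svapte;  /* stand-in for irpe$l_svapte: PTE of first page  */
    uint32_t boff;    /* stand-in for irpe$l_boff: offset in first page */
    uint32_t bcnt;    /* stand-in for irpe$l_bcnt: transfer byte count  */
} IRPE_SKETCH;

/* Pre-V7.0 assumption: these three values alone fully describe a buffer
 * locked for direct I/O.  With the V7.0 DIOBM changes this is no longer
 * true, which is why this pattern breaks. */
static void describe_buffer(IRPE_SKETCH *e, void *first_pte,
                            uint32_t boff, uint32_t bcnt)
{
    e->svapte = first_pte;
    e->boff   = boff;
    e->bcnt   = bcnt;
}
```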
This approach no longer works correctly. As described in Appendix A, the DIOBM structure that is embedded in the IRP is needed as well. Moreover, it may not be sufficient simply to copy the DIOBM from the IRP to the IRPE; in particular, the irp$l_svapte value may need to be modified if the DIOBM is moved.
The general approach to this change is to lock the buffer using the IRPE directly. This approach is shown in some detail in the following example:
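The original example listing is not reproduced here; the following hedged sketch conveys the idea. Each additional buffer gets its own IRPE, the PTEs that map it are copied into that IRPE's embedded DIOBM, and the svapte cell points into the copy vector. All layouts and names are simplified assumptions, not the OpenVMS definitions:

```c
#include <stdint.h>
#include <stddef.h>

#define DIOBM_PTECNT_FIX 9          /* DIOBM$K_PTECNT_FIX in the text */

typedef struct { uint64_t pte_copy[DIOBM_PTECNT_FIX]; } DIOBM_SKETCH;

typedef struct irpe_sketch {
    struct irpe_sketch *next;       /* chain of additional IRPEs        */
    uint64_t *svapte;               /* points into pte_copy once locked */
    uint32_t  boff, bcnt;
    DIOBM_SKETCH diobm;             /* embedded fixed-size DIOBM        */
} IRPE_SKETCH;

/* Lock one buffer using the IRPE directly: copy the PTEs that map it
 * into the embedded DIOBM and point svapte at the copies. */
static int lock_via_irpe(IRPE_SKETCH *e, const uint64_t *ptes, size_t n,
                         uint32_t boff, uint32_t bcnt)
{
    if (n > DIOBM_PTECNT_FIX)
        return 0;                   /* real code would use a secondary DIOBM */
    for (size_t i = 0; i < n; i++)
        e->diobm.pte_copy[i] = ptes[i];
    e->svapte = e->diobm.pte_copy;
    e->boff = boff;
    e->bcnt = bcnt;
    return 1;
}
```

Because the PTE copies live inside the IRPE itself, moving or deallocating the IRPE without adjusting svapte would reintroduce exactly the dangling-pointer problem the text warns about.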
This approach is easily generalized to more buffers and IRPEs. The only thing omitted from this example is the code that allocates and links together the IRPEs. The following example shows the associated error callback routine in its entirety; it can handle an arbitrary number of IRPEs.
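The original callback listing is likewise not reproduced here. A hedged sketch of its shape, using the same simplified stand-in structures (the chain link, the locked test, and the unlock helper are all assumptions): walk the chain and undo whatever was successfully locked.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct irpe_sketch {
    struct irpe_sketch *next;   /* chain of additional IRPEs         */
    uint64_t *svapte;           /* non-NULL once the buffer is locked */
} IRPE_SKETCH;

static int unlocked_count;      /* instrumentation for this sketch only */

/* Hypothetical helper standing in for the real unlock routine. */
static void unlock_one(IRPE_SKETCH *e)
{
    e->svapte = NULL;
    unlocked_count++;
}

/* Error callback: handle an arbitrary number of IRPEs by walking the
 * chain and unlocking every buffer that was successfully locked. */
static void irpe_lock_error(IRPE_SKETCH *first)
{
    for (IRPE_SKETCH *e = first; e != NULL; e = e->next)
        if (e->svapte != NULL)
            unlock_one(e);
}
```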
2.2.2 Impact of MMG_STD$IOLOCK, MMG_STD$UNLOCK Changes
The interface changes to the MMG_STD$IOLOCK and MMG_STD$UNLOCK routines are described in Appendix B. The general approach to these changes is to use the corresponding replacement routines and the new DIOBM structure.
OpenVMS device drivers that perform data transfers using direct I/O functions do so by locking the buffer into memory while still in process context, that is, in a driver FDT routine. The PTE address of the first page that maps the buffer is obtained, and the byte offset within the page to the start of the buffer is computed. These values are saved in the IRP (irp$l_svapte and irp$l_boff). The rest of the driver then uses the values in the irp$l_svapte and irp$l_boff cells and the byte count in irp$l_bcnt to perform the transfer. Eventually, when the transfer has completed and the request returns to process context for I/O postprocessing, the buffer is unlocked using the irp$l_svapte value, not the original process buffer address.
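The byte-offset and page-count arithmetic described above can be sketched as follows. The 8 KB page size and the helper name are illustrative assumptions, not part of the OpenVMS API:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 8192u  /* assumed Alpha system page size for illustration */

/* Compute the byte offset of a buffer within its first page (the value
 * saved in irp$l_boff) and the number of pages, hence PTEs, needed to
 * map the whole transfer. */
static void buffer_span(uint64_t va, uint32_t bcnt,
                        uint32_t *boff, size_t *pte_count)
{
    *boff = (uint32_t)(va & (PAGE_SIZE - 1));
    *pte_count = (*boff + bcnt + PAGE_SIZE - 1) / PAGE_SIZE;
}
```

Note that a 64 KB transfer starting even one byte past a page boundary spans 9 pages, which is consistent with the fixed-size DIOBM holding 9 PTE copies, as described later in this section.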
To support 64-bit addresses on a direct I/O function, you need only ensure that the buffer address is handled properly within the FDT routine.
Almost all device drivers that perform data transfers via a direct I/O function use OpenVMS-supplied FDT support routines to lock the buffer into memory. Because these routines obtain the buffer address either indirectly from the IRP or directly from a parameter that is passed by value, the interfaces for these routines can easily be enhanced to support 64-bit wide addresses.
However, various OpenVMS Alpha memory management infrastructure changes made to support 64-bit addressing have a potentially major impact on the use of the 32-bit irp$l_svapte cell by device drivers prior to OpenVMS Alpha Version 7.0. In general, there are two problems:
In most cases, both of these PTE access problems are solved by copying the PTEs that map the buffer into nonpaged pool and setting irp$l_svapte to point to the copies. This copy is done immediately after the buffer has been successfully locked. A copy of the PTE values is acceptable because device drivers only read the PTE values and are not allowed to modify them. These PTE copies are held in a new nonpaged pool data structure, the Direct I/O Buffer Map (DIOBM) structure. A standard DIOBM structure (also known as a fixed-size primary DIOBM) contains enough room for a vector of 9 (DIOBM$K_PTECNT_FIX) PTE values. This is sufficient for a buffer size up to 64 KB on a system with 8 KB pages. It is expected that most I/O requests are handled by this mechanism and that the overhead to copy a small number of PTEs is acceptable, especially given that these PTEs have been recently accessed to lock the pages.
The standard IRP contains an embedded fixed-size DIOBM structure. When the PTEs that map a buffer fit into the embedded DIOBM, the irp$l_svapte cell is set to point to the start of the PTE copy vector within the embedded DIOBM structure in that IRP.
If the buffer requires more than 9 PTEs, then a separate, variably sized "secondary" DIOBM structure is allocated to hold the PTE copies. If such a secondary DIOBM structure is needed, it is pointed to by the original, or "primary," DIOBM structure. The secondary DIOBM structure is deallocated during I/O postprocessing when the buffer pages are unlocked. In this case, the irp$l_svapte cell is set to point into the PTE vector in the secondary DIOBM structure. The secondary DIOBM requires only 8 bytes of nonpaged pool for each page in the buffer. The allocation of the secondary DIOBM structure is not charged against the process BYTLM quota, but it is controlled by the process direct I/O limit (DIOLM). This is the same approach used for other internal data structures that are required to support the I/O, including the kernel process block, kernel process stack, and the IRP itself.
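The fixed-versus-secondary decision described above can be sketched as follows. The structure layout and allocator are simplified assumptions; the sketch uses malloc where the real code allocates nonpaged pool controlled by DIOLM:

```c
#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>

#define DIOBM_PTECNT_FIX 9               /* fixed PTE copy vector size */

typedef struct diobm_sketch {
    struct diobm_sketch *secondary;      /* primary -> secondary link   */
    size_t   pte_count;
    uint64_t pte_copy[DIOBM_PTECNT_FIX]; /* sized larger when secondary */
} DIOBM_SKETCH;

/* Return a pointer to the PTE copy vector to use for a transfer that
 * needs pte_count PTEs: the embedded fixed-size vector when it fits,
 * otherwise a variably sized secondary DIOBM (8 bytes per PTE copy),
 * linked from the primary and freed at I/O postprocessing. */
static uint64_t *diobm_pte_vector(DIOBM_SKETCH *primary, size_t pte_count)
{
    if (pte_count <= DIOBM_PTECNT_FIX) {
        primary->pte_count = pte_count;
        return primary->pte_copy;        /* irp$l_svapte points here */
    }
    DIOBM_SKETCH *sec = malloc(offsetof(DIOBM_SKETCH, pte_copy)
                               + pte_count * sizeof(uint64_t));
    if (sec == NULL)
        return NULL;
    sec->secondary = NULL;
    sec->pte_count = pte_count;
    primary->secondary = sec;
    return sec->pte_copy;                /* irp$l_svapte points here */
}
```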