HP OpenVMS Systems Documentation
HP OpenVMS Programming Concepts Manual
7.4.10 Interoperation with 16-Byte and 64-Byte Value Blocks
Beginning with OpenVMS Version 8.2 on Alpha and I64 systems, the lock value block has been extended from 16 to 64 bytes. To use this feature, applications must explicitly specify both the LCK$M_XVALBLK flag and the LCK$M_VALBLK flag and provide a 64-byte buffer when reading and writing the value block.
Existing applications that use the 16-byte buffer and the LCK$M_VALBLK flag continue to operate without modifications, even when interacting with applications that use the 64-byte lock value block.
When designing an application that uses the extended lock value block, you may or may not have to take interoperability into account. If your new application uses only new resource names in a completely new resource tree that is never referenced by an old application, by a node running a version of OpenVMS prior to Version 8.2, or by a VAX node, then you need not worry about interoperability.
If this is not the case, your design may need to take into account the possibility that the lock value block will be marked invalid as a result of interoperability. There are three situations in which the extended lock value block can be marked invalid:
The SS$_XVALNOTVALID condition value is a warning message, not an error message; therefore, the $ENQ service grants the requested lock and returns this warning on all subsequent calls to $ENQ until an application writes the value block with the LCK$M_XVALBLK flag set. SS$_XVALNOTVALID is fully described in the description of the $ENQ System Service in the HP OpenVMS System Services Reference Manual: A--GETUAI manual.
If the entire lock value block is invalid, the SS$_VALNOTVALID status is returned; this status overrides SS$_XVALNOTVALID.
When a process no longer needs a lock on a resource, you can dequeue the lock by using the Dequeue Lock Request (SYS$DEQ) system service. Dequeuing a lock removes the specified lock request from whatever queue it is in: Granted, Waiting, or Conversion (see Section 7.2.6). When the last lock on a resource is dequeued, the lock management services delete the name of the resource from their data structures.
The four arguments to the SYS$DEQ macro (lkid, valblk, acmode, and flags) are optional. The lkid argument allows the process to specify a particular lock to be dequeued, using the lock identification returned in the lock status block.
The valblk argument contains the address of a 16-byte lock value block or, if LCK$M_XVALBLK is specified on Alpha or I64 systems, the 64-byte lock value block. If the lock being dequeued is in protected write or exclusive mode, the contents of the lock value block are stored in the lock value block associated with the resource. If the lock being dequeued is in any other mode, the lock value block is not used. The lock value block can be used only if a specific lock is being dequeued; it cannot be used when the LCK$M_DEQALL flag is specified.
Three flags are available: LCK$M_DEQALL (dequeue multiple locks held by the process), LCK$M_CANCEL (cancel an ungranted conversion request), and LCK$M_INVVALBLK (mark the lock value block of the resource as invalid).
The following is an example of dequeuing locks:
User-mode locks are automatically dequeued when the image exits.
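The write-back rule for the valblk argument can be sketched in portable C. The following model is illustrative only: the types, modes, and the deq() function are invented for this example and are not the OpenVMS SYS$DEQ interface; the sketch captures only the rule that the caller's value block is copied to the resource when a protected write or exclusive mode lock is dequeued.

```c
#include <assert.h>
#include <string.h>

/* Illustrative model (not the OpenVMS API). Lock modes, least to most
   restrictive: NL < CR < CW < PR < PW < EX. */
enum lock_mode { LCK_NL, LCK_CR, LCK_CW, LCK_PR, LCK_PW, LCK_EX };

struct resource {
    unsigned char valblk[64];   /* resource's master lock value block */
};

struct lock {
    enum lock_mode mode;
    struct resource *res;
};

/* Dequeue a specific lock: the caller's value block is written back to
   the resource only if the lock held is PW or EX mode; in any other
   mode the value block is ignored. */
void deq(struct lock *lk, const unsigned char *valblk, size_t len)
{
    if (lk->mode == LCK_PW || lk->mode == LCK_EX)
        memcpy(lk->res->valblk, valblk, len);
    lk->mode = LCK_NL;          /* model the lock as released */
}
```

A holder of a protected read lock, by contrast, cannot update the resource's value block on dequeue, which is why only writers propagate new value-block contents.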
The lock management services provide methods for applications to
perform local buffer caching (also called distributed
buffer management). Local buffer caching allows a number of processes
to maintain copies of data (disk blocks, for example) in buffers local
to each process and to be notified when the buffers contain invalid
data because of modifications by another process. In applications where
modifications are infrequent, substantial I/O can be saved by
maintaining local copies of buffers. You can use either the lock value
block or blocking ASTs (or both) to perform buffer caching.
To support local buffer caching using the lock value block, each process maintaining a cache of buffers maintains a null mode lock on a resource that represents the current contents of each buffer. (For this discussion, assume that the buffers contain disk blocks.) The value block associated with each resource is used to contain a disk block "version number." The first time a lock is obtained on a particular disk block, the current version number of that disk block is returned in the lock value block of the process. If the contents of the buffer are cached, this version number is saved along with the buffer. To reuse the contents of the buffer, the null lock must be converted to protected read mode or exclusive mode, depending on whether the buffer is to be read or written. This conversion returns the latest version number of the disk block. The version number of the disk block is compared with the saved version number. If they are equal, the cached copy is valid. If they are not equal, a fresh copy of the disk block must be read from disk.
Whenever a procedure modifies a buffer, it writes the modified buffer
to disk and then increments the version number before converting the
corresponding lock to null mode. In this way, the next process that
attempts to use its local copy of the same buffer finds a version
number mismatch and must read the latest copy from disk rather than use
its cached (now invalid) buffer.
Blocking ASTs support local buffer caching in two ways. One technique
involves deferred buffer writes; the other technique is an alternative
method of local buffer caching without using value blocks.
When local buffer caching is being performed, a modified buffer must be
written to disk before the exclusive mode lock can be released. If a
large number of modifications are expected (particularly over a short
period of time), you can reduce disk I/O by both maintaining the
exclusive mode lock for the entire time that the modifications are
being made and by writing the buffer once. However, this prevents other
processes from using the same disk block during this interval. This
problem can be avoided if the process holding the exclusive mode lock
has a blocking AST. The AST notifies the process if another process
needs to use the same disk block. The holder of the exclusive mode lock
can then write the buffer to disk and convert its lock to null mode
(thereby allowing the other process to access the disk block). However,
if no other process needs the same disk block, the first process can
modify it many times but write it only once.
To perform local buffer caching using blocking ASTs, processes do not
convert their locks to null mode from protected read or exclusive mode
when finished with the buffer. Instead, they receive blocking ASTs
whenever another process attempts to lock the same resource in an
incompatible mode. With this technique, processes are notified that
their cached buffers are invalid as soon as a writer needs the buffer,
rather than the next time the process tries to use the buffer.
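The blocking-AST technique can also be sketched as a portable simulation. Again, this is an invented model, not the OpenVMS AST mechanism: the callback registered in acquire_pr() stands in for the blocking AST routine supplied on a lock request, and request_ex() stands in for another process requesting an incompatible mode.

```c
#include <assert.h>

/* Illustrative model of blocking-AST cache invalidation. Each reader
   holds its lock past the point of use and registers a callback that
   fires when a writer requests an incompatible mode. */
#define MAX_HOLDERS 8

typedef void (*blkast_t)(void *ctx);

struct holder { blkast_t ast; void *ctx; int held; };

struct resource {
    struct holder holders[MAX_HOLDERS];
    int nholders;
};

/* Reader acquires a protected read lock and registers a blocking AST. */
void acquire_pr(struct resource *r, blkast_t ast, void *ctx)
{
    struct holder *h = &r->holders[r->nholders++];
    h->ast = ast;
    h->ctx = ctx;
    h->held = 1;
}

/* Writer requests exclusive mode: every current holder's blocking AST
   is delivered, prompting it to drop its lock and mark its cached
   buffer invalid. */
void request_ex(struct resource *r)
{
    for (int i = 0; i < r->nholders; i++) {
        if (r->holders[i].held) {
            r->holders[i].ast(r->holders[i].ctx);
            r->holders[i].held = 0;
        }
    }
}

/* Example blocking AST: invalidate this process's cached buffer. */
void invalidate(void *ctx) { *(int *)ctx = 0; }
```

The key property the model shows is timing: the cache is invalidated the moment a writer appears, not the next time the reader happens to touch the buffer.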
The choice between using version numbers or blocking ASTs to perform local buffer caching depends on the characteristics of the application. An application that uses version numbers performs more lock conversions, whereas one that uses blocking ASTs delivers more ASTs. Note that these techniques are compatible: some processes can use one technique while other processes use the other at the same time. Generally, blocking ASTs are preferable in a low-contention environment, whereas version numbers are preferable in a high-contention environment. You can even invent combined or adaptive strategies.
In a combined strategy, the application chooses a technique case by case: if a process is expected to reuse the contents of a buffer within a short time, the application uses blocking ASTs; if there is no reason to expect quick reuse, it uses version numbers.
In an adaptive strategy, an application makes evaluations based on the rate of blocking ASTs and conversions. If blocking ASTs arrive frequently, the application changes to using version numbers; if many conversions take place and the same cached copy remains valid, the application changes to using blocking ASTs.
For example, suppose one process continually displays the state of a database, while another occasionally updates it. If version numbers are used, the displaying process must always make sure that its copy of the database is valid (by performing a lock conversion); if blocking ASTs are used, the display process is informed every time the database is updated. On the other hand, if updates occur frequently, the use of version numbers is preferable to continually delivering blocking ASTs.
To share a terminal between a parent process and a subprocess, each process requests a null lock on a shared resource name. Then, each time one of the processes wants to perform terminal I/O, it requests an exclusive lock, performs the I/O, and requests a null lock.
Because the lock manager is effective only between cooperating programs, the program that created the subprocess should not exit until the subprocess has exited. To ensure that the parent does not exit before the subprocess, specify an event flag to be set when the subprocess exits (the completion-efn argument of LIB$SPAWN). Before exiting from the parent program, use SYS$WAITFR to ensure that the event flag has been set. (You can suppress the logout message from the subprocess by using the SYS$DELPRC system service to delete the subprocess instead of allowing the subprocess to exit.)
After the parent process exits, a created process cannot synchronize access to the terminal and should use the SYS$BRKTHRU system service to write to the terminal.
This part describes the use of asynchronous system traps (ASTs), and
the use of condition-handling routines and services.
|System Service||Task Performed|
|SYS$SETAST||Enable or disable reception of AST requests|
The system services that use the AST mechanism accept as an argument the address of an AST service routine, that is, a routine to be given control when the event occurs.
Table 8-2 shows some of the services that use ASTs.
|System Service||Task Performed|
|SYS$ENQ||Enqueue Lock Request|
|SYS$GETDVI||Get Device/Volume Information|
|SYS$GETJPI||Get Job/Process Information|
|SYS$GETSYI||Get Systemwide Information|
|SYS$QIO||Queue I/O Request|
|SYS$SETPRA||Set Power Recovery AST|
|SYS$UPDSEC||Update Section File on Disk|
The following sections describe in more detail how ASTs work and how to use them.
8.2 Declaring and Queuing ASTs
Most ASTs occur as the result of the completion of an asynchronous event that is initiated by a system service (for example, a SYS$QIO or SYS$SETIMR request) when the process requests notification by means of an AST.
The Declare AST (SYS$DCLAST) system service can be called to invoke a subroutine as an AST. With this service, a process can declare an AST only for the same or for a less privileged access mode.
The following sections present programming information about declaring
and using ASTs.
8.2.1 Reentrant Code and ASTs
Compiled code generated by HP compilers is reentrant, and HP compilers normally generate AST-routine local data that is reentrant. Shared static data, shared external data, Fortran COMMON data, and group or system global section data are not inherently reentrant; they usually require explicit synchronization.
Because the queuing mechanism for an AST does not provide for returning a function value or passing more than one argument, you should write an AST routine as a subroutine. This subroutine should use nonvolatile storage that is valid over the life of the AST. To establish nonvolatile storage, you can use the LIB$GET_VM run-time routine. You can also use a high-level language's storage keywords to create permanent nonvolatile storage; in C, for instance, you can use the static or extern storage-class keywords, or allocate heap storage with the malloc() run-time routine.
In some cases, a system service that queues an AST (for example,
SYS$GETJPI) allows you to specify an argument for the AST routine. If
you choose to pass the argument, the AST routine must be written to
accept the argument.
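The single-argument convention, and the need for nonvolatile storage, can be sketched in portable C. This is an invented simulation: the pending_ast queue stands in for the system service's internal bookkeeping, and the argument plays the role of the AST parameter passed to a service such as SYS$GETJPI; none of these names are OpenVMS interfaces.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative model of passing one argument to an AST routine. The
   argument block is heap-allocated so it remains valid after the
   requesting routine returns; call-frame (stack) storage would not. */
typedef void (*ast_routine_t)(void *astprm);

struct pending_ast {
    ast_routine_t routine;   /* AST routine to deliver later */
    void *astprm;            /* the single AST argument */
};

/* "System service": records the routine and its argument for later. */
void queue_request(struct pending_ast *q, ast_routine_t rtn, void *prm)
{
    q->routine = rtn;
    q->astprm = prm;
}

/* AST routine written as a subroutine accepting the single argument. */
void ast_done(void *astprm)
{
    int *result = astprm;
    *result += 1;            /* record completion in nonvolatile storage */
}

/* Requesting routine: allocates the argument on the heap (cf. using
   LIB$GET_VM on OpenVMS), queues the request, and returns. Its call
   frame is gone by the time the AST is delivered. */
int *start_request(struct pending_ast *q)
{
    int *result = malloc(sizeof *result);
    *result = 0;
    queue_request(q, ast_done, result);
    return result;
}
```

Had start_request() passed the address of a local variable instead, the storage would be invalid by the time the AST ran, which is the hazard the next section on call frames describes.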
8.2.1.1 The Call Frame
When a routine is active under OpenVMS, it has available to it temporary storage on a stack, in a construct known as a stack frame, or call frame. Each time a subroutine call is made, another call frame is pushed onto the stack and storage is made available to that subroutine. Each time a subroutine returns to its caller, the subroutine's call frame is pulled off the stack, and the storage is made available for reuse by other subroutines. Call frames are therefore nested. Outer call frames remain active longer, and the outermost call frame, the one associated with the main routine, normally remains available throughout execution.
A notable exception occurs when an exit handler runs: an exit handler effectively has its own call frame, so only static data is available to it. Exit handlers are declared with the SYS$DCLEXH system service.
The use of call frames for storage means that all routine-local data is reentrant; that is, each subroutine has its own storage for the routine-local data.
Storage that is passed to an AST must remain valid over the entire interval during which the AST might be pending. This means you must be familiar with how the compilers allocate routine-local storage using the stack pointer and the frame pointer: such storage is valid only while the stack frame is active. If the routine associated with the stack frame returns before the AST is delivered, an AST that writes to this storage can cause severe application data corruption.