HP OpenVMS Systems Documentation
OpenVMS Programming Concepts Manual
7.2.3 Resource Names
For two resources to be considered the same, these four parts must be identical for each resource.
The name specified by the process represents the resource being locked. Other processes that need to access the resource must refer to it using the same name. The correlation between the name and the resource is a convention agreed upon by the cooperating processes.
The access mode is determined by the caller's access mode unless a less privileged mode is specified in the call to the SYS$ENQ system service. Access modes, their numeric values, and their symbolic names are discussed in the OpenVMS Calling Standard.
The default resource domain is selected by the UIC group number of the process. You can access the system domain by setting the LCK$M_SYSTEM flag when you request a new root lock. Other domains can be accessed by using the optional RSDM_ID parameter to SYS$ENQ. You need the SYSLCK user privilege to request systemwide locks from user or supervisor mode. No additional privilege is required to request systemwide locks from executive or kernel mode.
When a lock request is queued, it can specify the identification of a
parent lock, at which point it becomes a sublock (see Section 7.4.8).
However, the parent lock must be granted, or the lock request is not
accepted. This enables a process to lock a resource at different
degrees of granularity.
The mode of a lock determines whether the resource can be shared with other lock requests. Table 7-2 describes the six lock modes.
7.2.5 Levels of Locking and Compatibility
Locks that allow the process to share a resource are called low-level locks; locks that allow the process almost exclusive access to a resource are called high-level locks. Null and concurrent read mode locks are considered low-level locks; protected write and exclusive mode locks are considered high-level. The lock modes, from lowest- to highest-level access, are:
Null (NL)
Concurrent read (CR)
Concurrent write (CW)
Protected read (PR)
Protected write (PW)
Exclusive (EX)
Note that the concurrent write and protected read modes are considered to be of the same level.
Locks that can be shared with other locks are said to have compatible lock modes. High-level lock modes are less compatible with other lock modes than are low-level lock modes. Table 7-3 shows the compatibility of the lock modes.
Key to Lock Modes:
NL = Null
CR = Concurrent read
CW = Concurrent write
PR = Protected read
PW = Protected write
EX = Exclusive
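The compatibility rules in Table 7-3 can be sketched as a small model. This is an illustrative simulation only, not the actual OpenVMS implementation; the names `COMPAT` and `compatible` are our own.

```python
# Illustrative model of lock-mode compatibility (Table 7-3): a requested
# mode can be granted only if it is compatible with every mode already
# granted on the resource.

NL, CR, CW, PR, PW, EX = range(6)  # null .. exclusive, lowest to highest level

# COMPAT[granted][requested] -> True if the two modes can coexist
COMPAT = [
    # NL    CR     CW     PR     PW     EX     (requested)
    [True,  True,  True,  True,  True,  True ],  # NL granted
    [True,  True,  True,  True,  True,  False],  # CR granted
    [True,  True,  True,  False, False, False],  # CW granted
    [True,  True,  False, True,  False, False],  # PR granted
    [True,  True,  False, False, False, False],  # PW granted
    [True,  False, False, False, False, False],  # EX granted
]

def compatible(requested, granted_modes):
    """True if `requested` can be granted alongside all `granted_modes`."""
    return all(COMPAT[g][requested] for g in granted_modes)

print(compatible(PR, [PR, CR]))  # readers share the resource: True
print(compatible(PW, [PR]))      # a writer conflicts with a reader: False
print(compatible(NL, [EX]))      # null mode is compatible with everything: True
```

Note that the matrix is symmetric, and that the null mode row and column are all true: a null mode lock never blocks, and is never blocked by, any other lock.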
7.2.6 Lock Management Queues
A queue is associated with each of the three states (see Figure 7-2).
Figure 7-2 Three Lock Queues
When you request a new lock, the lock management services first
determine whether the resource is currently known (that is, if any
other processes have locks on that resource). If the resource is new
(that is, if no other locks exist on the resource), the lock management
services create an entry for the new resource and the requested lock.
If the resource is already known, the lock management services
determine whether any other locks are waiting in either the conversion
or the waiting queue. If other locks are waiting in either queue, the
new lock request is queued at the end of the waiting queue. If both the
conversion and waiting queues are empty, the lock management services
determine whether the new lock is compatible with the other granted
locks. If the lock request is compatible, the lock is granted; if it is
not compatible, it is placed in the waiting queue. You can use a flag
bit to direct the lock management services not to queue a lock request
if one cannot be granted immediately.
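The decision flow above can be sketched as follows. This is a toy model, not the lock manager's actual code; the record layout and the function names are hypothetical, and `compatible` stands in for the Table 7-3 check.

```python
# Illustrative sketch of how a new lock request is dispatched:
# unknown resource -> grant; queues non-empty -> end of waiting queue;
# otherwise grant only if compatible with the granted locks.

def enqueue(resource, mode, compatible, noqueue=False):
    """Return 'granted', 'waiting', or 'refused' for a new lock request.

    `resource` is None if the resource is not yet known, else a dict with
    'granted', 'converting', and 'waiting' lists of lock modes.
    `noqueue` models the flag bit that forbids waiting for the lock.
    """
    if resource is None:
        # New resource: create entries for the resource and the lock, grant it.
        return "granted"
    if resource["converting"] or resource["waiting"]:
        # Other requests are already queued; new locks join the waiting queue.
        return "refused" if noqueue else "waiting"
    if compatible(mode, resource["granted"]):
        return "granted"
    # Incompatible with a granted lock: wait, unless the caller said not to.
    return "refused" if noqueue else "waiting"

# Demo: only NL/CR/PR modes coexist in this simplified compatibility rule.
allow_readers = lambda m, granted: all({m, g} <= {"NL", "CR", "PR"} for g in granted)
res = {"granted": ["PR"], "converting": [], "waiting": []}
print(enqueue(res, "PR", allow_readers))                 # granted
print(enqueue(res, "EX", allow_readers))                 # waiting
print(enqueue(res, "EX", allow_readers, noqueue=True))   # refused
```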
Lock conversions allow processes to change the level of locks. For example, a process can maintain a low-level lock on a resource until it needs to limit access to the resource. The process can then request a conversion to a higher-level lock mode.
You specify lock conversions by using a flag bit (see Section 7.4.6) and a lock status block. The lock status block must contain the lock identification of the lock to be converted. If the new lock mode is compatible with the currently granted locks, the conversion request is granted immediately. If the new lock mode is incompatible with the existing locks in the granted queue, the request is placed in the conversion queue. The lock retains its old lock mode and does not receive its new lock mode until the request is granted.
When a lock is dequeued or is converted to a lower-level lock mode, the
lock management services inspect the first conversion request on the
conversion queue. The conversion request is granted if it is compatible
with the locks currently granted. Any compatible conversion requests
immediately following are also granted. If the conversion queue is
empty, the waiting queue is checked. The first lock request on the
waiting queue is granted if it is compatible with the locks currently
granted. Any compatible lock requests immediately following are also
granted.
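The grant scan described above can be sketched as a short model. Again this is illustrative only: the queues are plain lists of modes, and `compatible` stands in for the Table 7-3 check.

```python
# Illustrative sketch of the scan performed when a lock is dequeued or
# converted down: the conversion queue is examined first; each queue is
# scanned from the front, granting requests until the first incompatible one.

def grant_scan(granted, converting, waiting, compatible):
    """Mutate the queues in place; return the list of newly granted modes."""
    newly = []
    while converting and compatible(converting[0], granted):
        mode = converting.pop(0)
        granted.append(mode)
        newly.append(mode)
    if not converting:
        # Conversion queue empty: the waiting queue gets the same treatment.
        while waiting and compatible(waiting[0], granted):
            mode = waiting.pop(0)
            granted.append(mode)
            newly.append(mode)
    return newly

# Demo: "R" locks share with each other; "X" shares with nothing.
shares_read = lambda mode, granted_modes: all(
    mode == "R" and g == "R" for g in granted_modes)
granted, converting, waiting = [], [], ["R", "R", "X", "R"]
print(grant_scan(granted, converting, waiting, shares_read))  # ['R', 'R']
print(waiting)  # ['X', 'R'] -- the scan stops at the first incompatible request
```

Note that the scan stops at the first incompatible request rather than skipping over it, which preserves the first-come, first-served ordering of each queue.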
In Figure 7-3, three processes have queued requests for resources that cannot be accessed until the locks currently held are dequeued (or converted to a lower lock mode).
Figure 7-3 Deadlock
If the lock management services determine that a deadlock exists, the services choose a process to break the deadlock. The chosen process is termed the victim. If the victim has requested a new lock, the lock is not granted; if the victim has requested a lock conversion, the lock is returned to its old lock mode. In either case, the status code SS$_DEADLOCK is placed in the lock status block. Note that granted locks are never revoked; only waiting lock requests can receive the status code SS$_DEADLOCK.
7.2.9 Lock Quotas and Limits
The OpenVMS lock manager was modified for OpenVMS Version 7.1. Some internal restrictions on the number of locks and resources available on the system have been eased, and a method has been added to allow an enqueue limit quota (ENQLM) greater than 32767. No changes were made to the interface, and no programming changes are required for applications to take advantage of these improvements.
While most processes do not require very many locks simultaneously (typically fewer than 100), large-scale database or server applications can easily exceed this threshold.
Specifically, the OpenVMS lock manager includes the following enhancements:
If you set an ENQLM value of 32767 in the SYSUAF, the operating system
treats it as no limit and allows an application to own up to 16,776,959
locks, the architectural maximum of the OpenVMS lock manager. The
following sections describe these features in more detail.
Before the release of OpenVMS Version 7.1, the total number of locks that a single process could own was limited to 32767. Unless the process ran in a privileged mode and used the NOQUOTA flag, attempts to acquire locks beyond this limit resulted in an error (SS$_EXQUOTA). Because applications generally use the lock manager for internal synchronization, this error was usually fatal to the application.
Now, with the release of OpenVMS Version 7.1, an ENQLM value of 32767 in a user's SYSUAF record is treated as if there were no quota limit for that user. The user is allowed to own up to 16,776,959 locks, the architectural maximum of the OpenVMS lock manager.
A SYSUAF ENQLM value of 32767, the maximum the SYSUAF field can hold, is no longer treated as a limit. Instead, when a process that reads its ENQLM from the SYSUAF is created, a value of 32767 is automatically extended to the new maximum. The Create Process (SYS$CREPRC) system service has also been modified to allow large quotas to be passed to the target process. A process initialized from a process with a SYSUAF quota of 32767 can therefore be created with any ENQLM value up to the new maximum.
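The interpretation rule above amounts to a simple sentinel check. The helper below is hypothetical, written only to make the arithmetic concrete; the constant names are ours.

```python
# Illustrative model of the ENQLM rule: a SYSUAF value of 32767 is a
# sentinel meaning "no limit" and is extended to the lock manager's
# architectural maximum of 16,776,959 locks.

ENQLM_NO_LIMIT_SENTINEL = 32767     # largest value the SYSUAF field can hold
LOCK_MANAGER_MAX = 16_776_959       # architectural maximum number of locks

def effective_enqlm(sysuaf_enqlm):
    """Return the quota actually applied to the process."""
    if sysuaf_enqlm == ENQLM_NO_LIMIT_SENTINEL:
        return LOCK_MANAGER_MAX
    return sysuaf_enqlm

print(effective_enqlm(100))    # 100
print(effective_enqlm(32767))  # 16776959
```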
The behavior of the process quota and creation limit (PQL) parameters for the default and minimum ENQLM quotas for detached processes has not changed. The default SYSGEN values for these parameters have been raised.
The previous maximum value for the number of sub-resources or sub-locks in a resource or lock tree (parent/children relationships) was 65535. The internal structures were reorganized from word to longword counters, which can handle sub-resource and sub-lock counts up to the current architectural limits of the lock manager. No programming or interface changes were made. As a result, SS$_EXDEPTH errors no longer occur.
In a mixed-version OpenVMS Cluster, only nodes running OpenVMS Version 7.1 are able to handle these large lock trees. To avoid unpredictable results, large-scale locking applications should be restricted to nodes running OpenVMS Version 7.1, or the entire cluster should be upgraded to OpenVMS Version 7.1.
The resource hash table is an internal OpenVMS lock manager structure used to perform quick lookups on resource names without a lengthy iterative search. Like all such tables, it trades memory consumption for speed of operation. A typical tuning goal is to make the resource hash table size (the RESHASHTBL system parameter) about four times the total number of resources in use on the system. Systems that are memory constrained, or that are not critically dependent on locking speed, can set the table to a smaller size.
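The tuning goal above reduces to simple arithmetic. The helper below is our own illustration; rounding up to a power of two is an assumption made for the example, not a documented requirement of RESHASHTBL.

```python
# Illustrative sizing heuristic: target roughly four times the number of
# resources in use, rounded up to a power of two (our own choice for the
# example, since hash tables are commonly sized that way).

def suggested_reshashtbl(resources_in_use):
    target = 4 * resources_in_use
    size = 1
    while size < target:
        size *= 2
    return size

print(suggested_reshashtbl(2000))  # 8192
```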
Previously, the limit for RESHASHTBL was 65535, based both on the word-sized field used for the parameter and on the algorithm used to develop the hash index. This limit has been removed. The new maximum for RESHASHTBL is 16,777,216 (2^24), which is the current architectural maximum for the total number of resources possible on the system.
No external changes are apparent from this modification. Large memory
systems that use very large resource namespaces can take advantage of
this change to gain a performance advantage in many locking operations.
There is no mixed-version OpenVMS Cluster impact related to this change.
The lock ID table is an internal OpenVMS lock manager structure used to find the relevant data structures for any given lock in the system. On OpenVMS Alpha, the lock ID table is allowed to expand up to the LOCKIDTBL_MAX system parameter limit. When the table is filled with current locks, the lock request is rejected with an error of SS$_NOLOCKID. This usually results in a system failure (LCKMGRERR bugcheck) as soon as a privileged system application receives such an error.
On OpenVMS VAX, however, this behavior was modified to allow for continued expansion of the lock ID table and was not constrained by LOCKIDTBL_MAX, as on OpenVMS Alpha.
With OpenVMS Version 7.1, both Alpha and VAX platforms dynamically increase the lock ID table as usage requires and if sufficient physical memory is available. The default, minimum, and maximum values for the LOCKIDTBL system parameter now allow large single tables for lock IDs. The maximum number of locks is now regulated by the amount of available nonpaged pool instead of by both nonpaged pool and the LOCKIDTBL_MAX system parameter.
The LOCKIDTBL_MAX parameter is now obsolete. In its place, an appropriate maximum value based on the total available memory is calculated at boot time and stored as the value for the parameter. Inputs (via MODPARAMS or SYSGEN) are ignored.
In addition, the lock ID table itself is now located in S2 space for OpenVMS Alpha, to avoid using large amounts of S0 space in large memory systems.
There are no visible changes for these modifications.
You use the SYS$ENQ or SYS$ENQW system service to queue lock requests. SYS$ENQ queues a lock request and returns; SYS$ENQW queues a lock request, waits until the lock is granted, and then returns. When you request new locks, the system service call must specify the lock mode, address of the lock status block, and resource name.
The format for SYS$ENQ and SYS$ENQW is as follows:
SYS$ENQ(W) ([efn] ,lkmode ,lksb ,[flags] ,[resnam] ,[parid] ,[astadr] ,[astprm] ,[blkast] ,[acmode] ,[rsdm_id] ,[nullarg])
In this example, a number of processes access the STRUCTURE_1 data structure. Some processes read the data structure; others write to the structure. Readers must be protected from reading the structure while it is being updated by writers. The reader in the example queues a request for a protected read mode lock. Protected read mode is compatible with itself, so all readers can read the structure at the same time. A writer to the structure uses protected write or exclusive mode locks. Because protected write mode and exclusive mode are not compatible with protected read mode, no writers can write the data structure until the readers have released their locks, and no readers can read the data structure until the writers have released their locks.
The program segment in Example 7-1 requests a null lock for the resource named TERMINAL. After the lock is granted, the program requests that the lock be converted to an exclusive lock. Note that, after SYS$ENQW returns, the program checks the status of the system service and the status returned in the lock status block to ensure that the request completed successfully. (The lock mode symbols are defined in the $LCKDEF module of the system macro library.)
For more complete information on the use of SYS$ENQ, refer to the
OpenVMS System Services Reference Manual.
The previous sections discuss locking techniques and concepts that are
useful to all applications. The following sections discuss specialized
features of the lock manager.
The SYS$ENQ system service returns control to the calling program when the lock request is queued. The status code in R0 indicates whether the request was queued successfully. After the request is queued, the procedure cannot access the resource until the request is granted. A procedure can use three methods to check that a request has been granted:
Specify the number of an event flag to be set when the request is granted, and wait for the event flag.
Specify the address of an AST routine to be executed when the request is granted.
Poll the lock status block for a status code indicating that the request has been granted.
These methods of synchronization are identical to the synchronization techniques used with the SYS$QIO system services (described in Chapter 23).
The $ENQW macro performs synchronization by combining the functions of the SYS$ENQ system service and the Synchronize (SYS$SYNCH) system service. The $ENQW macro has the same arguments as the $ENQ macro. It queues the lock request and then places the program in an event flag wait state (LEF) until the lock request is granted.