HP OpenVMS Systems Documentation
HP OpenVMS Programming Concepts Manual
6.7.3 Synchronization Using Process Priority
In some applications (usually real-time applications), a number of processes perform a series of tasks. In such applications, the sequence in which a process executes can be controlled or synchronized by means of process priority. The basic method of synchronization by priority involves executing the process with the highest priority while preventing the other application processes from executing.
If you use process priority for synchronization, be aware that if the higher-priority process is blocked, either explicitly or implicitly (for example, when doing an I/O), the lower-priority processes can run and corrupt the data being manipulated by the higher-priority process.
Because each processor in a multiprocessor system, when idle, schedules its own work load, it is impossible to prevent all other processes in the system from executing. Moreover, because the scheduler guarantees only that the highest-priority computable process is scheduled at any given time, it is impossible to prevent another process in an application from executing.
Thus, application programs that synchronize by process priority must be modified to use a different serialization method to run correctly in a multiprocessor system.
The operating system provides the following techniques to synchronize multiprocess applications:
The operating system provides basic event synchronization through event flags. Common event flags can be shared among cooperating processes running on a uniprocessor or in an SMP system, though the processes must be in the same user identification code (UIC) group. Thus, if you have developed an application that requires the concurrent execution of several processes, you can use event flags to establish communication among them and to synchronize their activity. A shared, or common, event flag can represent any event that is detectable and agreed upon by the cooperating processes. See Section 6.8 for information about using event flags.
The lock management system services---Enqueue Lock Request (SYS$ENQ), and Dequeue Lock Request (SYS$DEQ)---provide multiprocess synchronization tools that can be requested from all access modes. For details about using lock management system services, see Chapter 7.
Synchronization of access to shared data by a multiprocess application should be designed to support processes that execute concurrently on different members of an SMP system. Applications that share a global section can use MACRO-32 interlocked instructions or the equivalent in other languages to synchronize access to data in the global section. These applications can also use the lock management system services for synchronization.
Most application programs that run on an operating system in a uniprocessor system also run without modification in a multiprocessor system. However, applications that access writable global sections or that rely on process priority for synchronizing tasks should be reexamined and modified according to the information contained in this section.
In addition, some applications may execute more efficiently on a
multiprocessor if they are specifically adapted to a multiprocessing
environment. Application programmers may want to decompose an
application into several processes and coordinate their activities by
means of event flags or a shared region in memory.
A spin lock is a device used by a processor to synchronize access to data that is shared by members of a symmetric multiprocessing (SMP) system. A spin lock enables a set of processors to serialize their access to shared data. The basic form of a spin lock is a bit that indicates the state of a particular set of shared data. When the bit is set, it shows that a processor is accessing the data. The bit is tested and set, or tested and cleared, with an interlocked instruction that is atomic with respect to other threads of execution on the same or other processors.
A processor that needs access to some shared data tests and sets the spin lock associated with that data. To test and set the spin lock, the processor uses an interlocked bit-test-and-set instruction. If the bit is clear, the processor can have access to the data. This is called locking or acquiring the spin lock. If the bit is set, the processor must wait because another processor is already accessing the data.
Essentially, a waiting processor spins in a tight loop; it executes repeated bit test instructions to test the state of the spin lock. The term spin lock derives from this spinning. While a processor is in such a loop, repeatedly testing the state of the spin lock, it is said to be in a state of busy wait. The busy wait ends when the processor accessing the data clears the bit with an interlocked operation to indicate that it is done. When the bit is cleared, the spin lock is said to be unlocked or released.
Spin locks are used by the operating system executive, along with the
interrupt priority level (IPL), to control access to system data
structures in a multiprocessor system.
A writable global section is an area of memory that can be accessed (read and modified) by more than one process. On uniprocessor or SMP systems, access to a single location in a global section with an appropriate read or write instruction is atomic on OpenVMS systems. Therefore, no other synchronization is required.
An appropriate read or write on VAX systems is an instruction that accesses a naturally aligned byte, word, or longword, such as a MOVx instruction, where x is B for a byte, W for a word, or L for a longword. On Alpha systems, an appropriate read or write instruction is a naturally aligned longword or quadword access, for instance, an LDx or STx instruction where x is L for an aligned longword or Q for an aligned quadword.
For a read-modify-write sequence on a multiprocessor system, two or more processes can execute concurrently, one on each processor. As a result, concurrently executing processes can access the same locations simultaneously in a writable global section. If this happens, only partial updates may occur, or data could be corrupted or lost, because the operation is not atomic. Unless proper interlocked instructions are used on VAX systems or load-locked/store-conditional instructions are used on Alpha systems, invalid data may result. You must use interlocked or load-locked/store-conditional instructions, their high-level language equivalents, or other synchronizing techniques, such as locks or event flags.
On a uniprocessor or SMP system, access to multiple locations within a global section with read or write instructions or a read-modify-write sequence is not atomic. On a uniprocessor system, an interrupt can occur that causes process preemption, allowing another process to run and access the data before the first process completes its work. On a multiprocessor system, two processes can access the global section simultaneously on different processors. You must use a synchronization technique such as a spin lock or event flags to avoid these problems.
Check existing programs that use writable global sections to ensure that proper synchronization techniques are in place. Review the program code itself; do not rely on testing alone, because an instance of simultaneous access by more than one process to a location in a writable global section is rare.
If an application must use queue instructions to control access to writable global sections, ensure that it uses interlocked queue instructions.
Event flags are maintained by the operating system for general programming use in coordinating thread execution with asynchronous events. Programs can use event flags to perform a variety of signaling functions. Event flag services clear, set, and read event flags. They also place a thread in a wait state pending the setting of an event flag or flags.
Table 6-2 shows the two usage styles of event flags.
The wait form of system services is a variant of asynchronous services; there is a service request and then a wait for the completion of the request. For reliable operation in most applications, wait-form services must specify a status block. The status block prevents the service from completing prematurely and also provides status information.
Explicit use of event flags follows these general steps:
Implicit use of event flags may involve only step 4, or steps 1, 4, and 5.
Use run-time library routines and system services to accomplish these event flag tasks. Table 6-3 summarizes the event flag routines and services.
Some system services set an event flag to indicate the completion or the occurrence of an event; the calling program can test the flag. Other system services, such as SYS$ENQ(W), SYS$QIO(W), and SYS$SETIMR, use event flags to signal events to the calling process.
Each event flag has a unique number; event flag arguments in system service calls refer to these numbers. For example, if you specify event flag 1 in a call to the SYS$QIO system service, then event flag 1 is set when the I/O operation completes.
To allow manipulation of groups of event flags, the flags are ordered in clusters of 32 numbers corresponding to bits 0 through 31 (<31:0>) in a longword. The clusters are also numbered from 0 to 4. The range of event flag numbers encompasses the flags in all clusters: event flag 0 is the first flag in cluster 0, event flag 32 is the first flag in cluster 1, and so on.
Event flags are divided into five clusters: two for local event flags and two for common event flags. There is also a special local cluster 4 that supports EFN 128.
Table 6-4 summarizes the ranges of event flag numbers and the clusters to which they belong.
The same system services manipulate flags in both local and common event flag clusters. Because the event flag number implies the cluster number, you need not specify the cluster number when you call a system service that refers to an event flag.
When a system service requires an event flag cluster number as an argument, you need only specify the number of any event flag in the cluster. Thus, to read the event flags in cluster 1, you could specify any number in the range 32 through 63.
6.8.3 Using Event Flag Zero (0)
Event flag 0 is the default event flag. Whenever a process requests a system service with an event flag number argument, but does not specify a particular flag, event flag 0 is used. Therefore, event flag 0 is more likely than other event flags to be used incorrectly for multiple concurrent requests.
Code that uses any event flag should be able to tolerate spurious sets; the only real danger is a spurious clear that causes a thread to miss an event. Because any system service that uses an event flag clears the flag, there is a danger that an event which has occurred but has not yet been responded to is masked, which can result in a hang. For
further information, see the SYS$SYNCH system service in HP OpenVMS System Services Reference Manual: GETUTC--Z.
The combination of EFN$C_ENF and a status block should be used with the wait form of system services, or with the SYS$SYNCH system service.
EFN$C_ENF does not need to be initialized, nor does it need to be
reserved or freed. Multiple threads of execution may concurrently use
EFN$C_ENF without interference as long as they use a unique status
block for each concurrent asynchronous service. When EFN$C_ENF is used
with explicit event flag system services, it performs as if always set.
You should use EFN$C_ENF to eliminate the chance for event flag overlap.
Local event flags are automatically available to each program. They are not automatically initialized. However, if an event flag is passed to a system service such as SYS$GETJPI, the service initializes the flag before using it.
When using local event flags, use the event flag routines as follows:
The following Fortran example uses LIB$GET_EF to choose a local event flag and then uses SYS$CLREF to set the event flag to 0 (clear the event flag). (Note that run-time library routines require an event flag number to be passed by reference, and system services require an event flag number to be passed by value.)
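The Fortran example itself is not reproduced in this copy; the following is a sketch of the sequence described, under the calling conventions just noted (LIB$GET_EF receives the flag number by reference; SYS$CLREF takes it by value via %VAL):

```fortran
	INTEGER*4 STATUS, FLAG
C       Choose an arbitrary local event flag; the number is
C       returned in FLAG (passed by reference)
	STATUS = LIB$GET_EF (FLAG)
	IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
C       Clear the event flag (number passed by value)
	STATUS = SYS$CLREF (%VAL(FLAG))
	IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
```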
Example of Event Flag Services
Local event flags are used most commonly with other system services. For example, you can use the Set Timer (SYS$SETIMR) system service to request that an event flag be set either at a specific time of day or after a specific interval of time has passed. If you want to place a process in a wait state for a specified period of time, specify an event flag number for the SYS$SETIMR service and then use the Wait for Single Event Flag (SYS$WAITFR) system service, as shown in the C example that follows:
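The C example itself is not included in this copy; the following is a sketch of the sequence described. It is OpenVMS-specific (it compiles only with the OpenVMS C headers), and the event flag number 4 and the 10-second interval are illustrative choices:

```c
/* Sketch: wait for a fixed interval using SYS$SETIMR and SYS$WAITFR.
   Event flag 4 and the 10-second delta time are illustrative. */
#include <starlet.h>    /* sys$bintim, sys$setimr, sys$waitfr */
#include <descrip.h>
#include <ssdef.h>
#include <stdio.h>

int main(void)
{
    unsigned int status;
    struct _generic_64 daytim;              /* 64-bit delta time */
    $DESCRIPTOR(timbuf, "0 00:00:10.00");   /* 10-second delta time */

    /* Convert the ASCII time string to a 64-bit binary delta time. */
    status = sys$bintim(&timbuf, &daytim);
    if (status != SS$_NORMAL) return status;

    /* Request that event flag 4 be set after the interval elapses... */
    status = sys$setimr(4, &daytim, 0, 0, 0);
    if (status != SS$_NORMAL) return status;

    /* ...then place the process in a wait state until the flag is set. */
    status = sys$waitfr(4);
    if (status != SS$_NORMAL) return status;

    printf("Timer expired; event flag 4 set.\n");
    return SS$_NORMAL;
}
```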
In this example, the daytim argument refers to a
64-bit time value. For details about how to obtain a time value in the
proper format for input to this service, see Chapter 27.
Common event flags are manipulated like local event flags. However, before a process can use event flags in a common event flag cluster, the cluster must be created. The Associate Common Event Flag Cluster (SYS$ASCEFC) system service creates a named common event flag cluster. By calling SYS$ASCEFC, other processes in the same UIC group can establish their association with the cluster so they can access flags in it. Each process that associates with the cluster must use the same name to refer to it; the SYS$ASCEFC system service establishes correspondence between the cluster name and the cluster number that a process assigns to the cluster.
The first program to name a common event flag cluster creates it; all flags in a newly created cluster are clear. Other processes on the same OpenVMS cluster node that have the same UIC group number as the creator of the cluster can reference the cluster by invoking SYS$ASCEFC and specifying the cluster name.
Different processes may associate the same name with different common event flag numbers; as long as the name and UIC group are the same, the processes reference the same cluster.
Common event flags act as a communication mechanism between images executing in different processes in the same group on the same OpenVMS cluster node. Common event flags are often used as a synchronization tool for other, more complicated communication techniques, such as logical names and global sections. For more information about using event flags to synchronize communication between processes, see Chapter 3.
If every cooperating process that is going to use a common event flag cluster has the necessary privilege or quota to create a cluster, the first process to call the SYS$ASCEFC system service creates the cluster.
The following example shows how a process might create a common event flag cluster named COMMON_CLUSTER and assign it a cluster number of 2:
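The example itself is not included in this copy; a C sketch of the call described follows (OpenVMS-specific). Because cluster 2 comprises event flags 64 through 95, passing any flag number in that range, for instance 64, associates the name with cluster 2; the prot and perm arguments are defaulted here:

```c
/* Sketch: create (or associate with) the common event flag cluster
   named COMMON_CLUSTER as cluster number 2. */
#include <starlet.h>    /* sys$ascefc */
#include <descrip.h>
#include <ssdef.h>

int main(void)
{
    unsigned int status;
    $DESCRIPTOR(name, "COMMON_CLUSTER");

    /* efn 64 lies in cluster 2 (flags 64..95), so this names cluster 2. */
    status = sys$ascefc(64, &name, 0, 0);
    return status;
}
```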
Other processes in the same group can now associate with this cluster. Those processes must use the same character string name to refer to the cluster; however, the cluster numbers they assign do not have to be the same.