HP OpenVMS Systems Documentation
Guide to the POSIX Threads Library
2.3.3.1 Cleanup Handlers
A cleanup handler is a routine you provide that is associated with a particular lexical scope within your program and that can be invoked when a thread exits that scope. The cleanup handler's purpose is to restore that portion of the program's state that has been changed within the handler's associated lexical scope. In particular, cleanup handlers allow a thread to react to thread-exit and cancelation requests.
Your program declares a cleanup handler for a thread by calling the pthread_cleanup_push() routine. Your program removes (and optionally invokes) a cleanup handler by calling the pthread_cleanup_pop() routine.
A cleanup handler is invoked when the calling thread exits the handler's associated lexical scope, due to:

- Normal exit from the scope, when the thread calls pthread_cleanup_pop() with a nonzero execute argument
- Thread termination, when the thread calls pthread_exit()
- Thread cancelation, when a cancelation request is delivered to the thread
For each call to pthread_cleanup_push() , your program must contain a corresponding call to pthread_cleanup_pop() . The two calls form a lexical scope within your program. One pair of calls to pthread_cleanup_push() and pthread_cleanup_pop() cannot overlap the scope of another pair; however, pairs of calls can be nested.
Because cleanup handlers are specified by the POSIX standard, they are a portable mechanism. An alternative to using cleanup handlers is to define and/or catch exceptions with the exception package. Chapter 5 describes how to use the exception package. Cleanup handler routines, exception handling clauses, and C++ object destructors are functionally equivalent mechanisms.
2.3.4 Detaching and Destroying a Thread

Detaching a thread means to mark a thread for destruction as soon as it terminates. Destroying a thread means to free, or make available for reuse, the resources associated with that thread.
If a thread has terminated, then detaching that thread causes the Threads Library to destroy it immediately. If a thread is detached before it terminates, then the Threads Library frees the thread's resources after it terminates.
A thread can be detached explicitly or implicitly:

- Explicitly, when your program calls the pthread_detach() routine and specifies the thread's identifier.
- Implicitly, when the thread is created with the detachstate attribute of its thread attributes object set to PTHREAD_CREATE_DETACHED, or when another thread joins with it (the pthread_join() routine detaches the target thread after the join completes).
It is illegal for your program to attempt to join or detach a detached
thread. In general, you cannot perform any operation (for example,
cancelation) on a detached thread. This is because the thread ID might
have become invalid or might have been assigned to a new thread
immediately upon termination of the thread. The thread should not be
detached until no further references to it will be made.
2.3.5 Joining With a Thread

Joining with a thread means to suspend this thread's execution until another thread (the target thread) terminates. In addition, the target thread is detached after it terminates.
Join is one form of thread synchronization. It is often useful when one thread needs to wait for another and possibly retrieve a single return value. (The value may be a pointer, for example to heap storage.) There is nothing special about join, though---similar results, or infinite variations, can be achieved by use of a mutex and condition variable.
A thread joins with another thread by calling the pthread_join() routine and specifying the target thread's identifier. If the target thread has already terminated, then the calling thread does not wait.
By default, the target thread of a join operation is created with the detachstate attribute of its thread attributes object set to PTHREAD_CREATE_JOINABLE. It should not be created with the detachstate attribute set to PTHREAD_CREATE_DETACHED.
Keep in mind these restrictions about joining with a thread:

- At most one thread can join with any given thread. Because the join operation detaches the target thread after it terminates, a second join (or any other operation) on that thread is illegal.
- A thread that attempts to join with itself deadlocks; the pthread_join() routine may instead detect this case and return an error.
- You cannot join with a thread that has been detached.
2.3.6 Scheduling a Thread
Scheduling means to evaluate and change the states of the process' threads. As your multithreaded program runs, the Threads Library detects whether each thread is ready to execute, is waiting for a synchronization object, or has terminated, and so on.
Also, for each thread, the Threads Library regularly checks whether that thread's scheduling priority and scheduling policy, when compared with those of the process' other threads, entail forcing a change in that thread's state. Remember that scheduling priority specifies the "precedence" of a thread in the application. Scheduling policy provides a mechanism to control how the Threads Library interprets that priority as your program runs.
To understand this section, you must be familiar with the concepts presented in these sections:
2.3.6.1 Calculating the Scheduling Priority
A thread's scheduling priority falls within a range of values, depending on its scheduling policy. To specify the minimum or maximum scheduling priority for a thread, use the sched_get_priority_min() or sched_get_priority_max() routines---or use the appropriate nonportable symbol such as PRI_OTHER_MIN or PRI_OTHER_MAX . Priority values are integers, so you can specify a value between the minimum and maximum priority using an appropriate arithmetic expression.
For example, to specify a scheduling priority value that is midway between the minimum and maximum for the SCHED_OTHER scheduling policy, use the following expression (coded appropriately for your programming language):

pri_other_mid = ( PRI_OTHER_MIN + PRI_OTHER_MAX ) / 2

where pri_other_mid represents the priority value you want to set.
Avoid using literal numerical values to specify a scheduling priority
setting, because the range of priorities can change from implementation
to implementation. Values outside the specified range for each
scheduling policy might be invalid.
To demonstrate the results of the different scheduling policies, consider the following example: A program has four threads, A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:

- Thread A: minimum
- Thread B: middle
- Thread C: middle
- Thread D: maximum
On a uniprocessor system, only one thread can run at any given time. The ordering of execution depends upon the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. However, in a symmetric multiprocessor (or SMP) system the execution behavior is completely indeterminate. Although the four threads have differing priorities, a multiprocessor system might execute two or more of these threads simultaneously.
When you design a multithreaded application that uses scheduling priorities, it is critical to remember that scheduling is not a substitute for synchronization. That is, you cannot assume that a higher-priority thread can access shared data without interference from lower-priority threads. For example, if one thread has a FIFO scheduling policy and the highest scheduling priority setting, while another has default scheduling policy and the lowest scheduling priority setting, the Threads Library might allow the two threads to run at the same time. As a corollary, on a four-processor system you also cannot assume that the four highest-priority threads are executing simultaneously at any particular moment. Refer to Section 3.1.3 for more information about using thread scheduling as thread synchronization.
The following figures demonstrate how the Threads Library schedules a set of threads on a uniprocessor based on whether each thread has the FIFO, RR, or throughput setting for its scheduling policy attribute. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing (that is, executing during the flow shown in each figure).
Figure 2-1 shows a flow with FIFO scheduling.
Figure 2-1 Flow with FIFO Scheduling
Thread D executes until it waits or terminates. Next, although thread B and thread C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes.
Figure 2-2 shows a flow with RR scheduling.
Figure 2-2 Flow with RR Scheduling
Thread D executes until it waits or terminates. Next, thread B and thread C are time sliced, because they both have the same priority. Finally, thread A executes.
Figure 2-3 shows a flow with Default scheduling.
Figure 2-3 Flow with Default Scheduling
Threads D, B, C, and A are time sliced, even though thread A has a lower priority than the others. Thread A receives less execution time than thread D, B, or C if any of those threads is ready to execute as often as thread A is. However, the default scheduling policy protects thread A against being indefinitely blocked from executing.
Because low-priority threads eventually run, the default scheduling
policy protects against occurrences of thread starvation and priority
inversion, which are discussed in Section 3.5.2.
2.3.7 Canceling a Thread

Canceling a thread means requesting the termination of a target thread as soon as possible. A thread can request the cancelation of another thread or itself.
Thread cancelation is a three-stage operation:

1. A cancelation request is posted for the target thread. This occurs when some thread (possibly the target thread itself) calls the pthread_cancel() routine.
2. The posted request is delivered to the target thread. Unless the target thread's cancelability state is disabled, this causes the thread to begin terminating.
3. The target thread terminates: its cleanup handlers and its thread-specific data destructor routines are invoked, and the thread is destroyed or, if it is not detached, becomes available for a join operation.
2.3.7.1 Thread Cancelation Implemented Using Exceptions
The Threads Library implements thread cancelation using exceptions. Using the exception package, it is possible for a thread (to which a cancelation request has been delivered) explicitly to catch the thread cancelation exception ( pthread_cancel_e ) defined by the Threads Library and to perform cleanup actions accordingly. After catching this exception, the exception handler code should always reraise the exception, to avoid breaking the "contract" that cancelation leads to thread termination.
2.3.7.2 Thread Return Value After Cancelation

When a thread is terminated due to cancelation, the Threads Library writes the return value PTHREAD_CANCELED into the thread's thread object. This is because cancelation prevents the thread from calling pthread_exit() or returning from its start routine.
2.3.7.3 Controlling Thread Cancelation

Each thread controls whether it can be canceled (that is, whether it receives requests to terminate) and how quickly it terminates after receiving the cancelation request, using two attributes: its cancelability state and its cancelability type.
A thread's cancelability state determines whether it receives a cancelation request. When created, a thread's cancelability state is enabled; a thread can change its cancelability state by calling the pthread_setcancelstate() routine. If the cancelability state is disabled, the thread does not receive cancelation requests; instead, they remain pending.
If the thread's cancelability state is enabled, a thread may be terminated by any delivered cancelation request.
If its cancelability state is disabled, the thread cannot be terminated by any cancelation request. This means that a thread could wait indefinitely if it does not come to a normal conclusion; therefore, exercise care if your software depends on cancelation.
A thread can use the pthread_setcanceltype() routine to change its cancelability type, which determines whether it responds to a cancelation request only at cancelation points (synchronous cancelation) or at any point in its execution (asynchronous cancelation).
Initially, a thread's cancelability type is deferred, which means that the thread receives a cancelation request only at cancelation points---for example, during a call to the pthread_cond_wait() routine. If you set a thread's cancelability type to asynchronous, the thread can receive a cancelation request at any time.
2.3.7.4 Deferred Cancelation Points
A cancelation point is a routine that delivers a posted cancelation request to that request's target thread.
The following routines in the pthread interface are cancelation points:

- pthread_cond_timedwait()
- pthread_cond_wait()
- pthread_delay_np()
- pthread_join()
- pthread_testcancel()
The following routines in the tis interface are cancelation points:

- tis_cond_wait()
- tis_testcancel()
Other routines that are also cancelation points are mentioned in the operating system-specific appendixes of this guide. Refer to the following topics on thread cancelability for system services:
2.3.7.5 Cleanup from Deferred Cancelation
When a cancelation request is delivered to a thread, the thread could be holding some resources, such as locked mutexes or allocated memory. Your program must release these resources before the thread terminates.
The Threads Library provides two equivalent mechanisms that can do the cleanup during cancelation, as follows:

- Use the pthread_cleanup_push() and pthread_cleanup_pop() routines to establish a cleanup handler for the lexical scope that holds the resources.
- Use the exception package to catch the cancelation exception ( pthread_cancel_e ) in a handler that releases the resources and then reraises the exception (see Chapter 5).
2.3.7.6 Cleanup from Asynchronous Cancelation
When an application sets the cancelability type to asynchronous, cancelation may occur at any instant, even within the execution of a single instruction. Because it is impossible to predict exactly when an asynchronous cancelation request will be delivered, it is extremely difficult for a program to recover properly. For this reason, an asynchronous cancelability type should be set only within regions of code that do not need to clean up in any way, such as straight-line code or looping code that is compute-bound and that makes no calls and allocates no resources.
While a thread's cancelability type is asynchronous, it should not call any routine unless that routine is explicitly documented as "safe for asynchronous cancelation." In particular, you can never use asynchronous cancelability type in code that allocates or frees memory, or that locks or unlocks mutexes---because the cleanup code cannot reliably determine the state of the resource.