This section defines interfaces and functionality to support multiple flows of control, called threads, within a process. These interfaces are defined to support the source portability of applications. The key elements defining the scope are:
- defining a sufficient set of functionality to support multiple threads of control within a process
- defining a sufficient set of functionality to support the realtime application domain
- defining sufficient performance constraints and performance-related functions to allow a realtime application to achieve deterministic response from the system
The definition of realtime used in defining the scope of this specification is:
- The ability of the system to provide a required level of service in a bounded response time.
Wherever possible, the requirements of other application environments are included in the interface definition. The Threads interfaces are specifically targeted at supporting tightly coupled multitasking environments including multiprocessors and advanced language constructs.
The specific functional areas covered by Threads, and their scope, include:
- Thread management: the creation, control, and termination of multiple flows of control in the same process under the assumption of a common shared address space.
- Synchronisation primitives optimised for tightly coupled operation of multiple control flows in a common, shared address space.
- Harmonization of the threads interfaces with the existing BASE interfaces.
On XSI-conformant systems, _POSIX_THREADS, _POSIX_THREAD_ATTR_STACKADDR, _POSIX_THREAD_ATTR_STACKSIZE and _POSIX_THREAD_PROCESS_SHARED are always defined. Therefore, the following threads interfaces are always supported:
pthread_atfork() pthread_attr_destroy() pthread_attr_getdetachstate() pthread_attr_getschedparam() pthread_attr_getstackaddr() pthread_attr_getstacksize() pthread_attr_init() pthread_attr_setdetachstate() pthread_attr_setschedparam() pthread_attr_setstackaddr() pthread_attr_setstacksize() pthread_cancel() pthread_cleanup_pop() pthread_cleanup_push() pthread_cond_broadcast() pthread_cond_destroy() pthread_cond_init() pthread_cond_signal() pthread_cond_timedwait() pthread_cond_wait() pthread_condattr_destroy() pthread_condattr_getpshared() pthread_condattr_init() pthread_condattr_setpshared() pthread_create() pthread_detach() pthread_equal() pthread_exit() pthread_getspecific() pthread_join() pthread_key_create() pthread_key_delete() pthread_kill() pthread_mutex_destroy() pthread_mutex_init() pthread_mutex_lock() pthread_mutex_trylock() pthread_mutex_unlock() pthread_mutexattr_destroy() pthread_mutexattr_getpshared() pthread_mutexattr_init() pthread_mutexattr_setpshared() pthread_once() pthread_self() pthread_setcancelstate() pthread_setcanceltype() pthread_setspecific() pthread_sigmask() pthread_testcancel() sigwait()
The following X/Open threads interfaces are also always supported:
pthread_attr_getguardsize() pthread_attr_setguardsize() pthread_getconcurrency() pthread_mutexattr_gettype() pthread_mutexattr_settype() pthread_rwlock_destroy() pthread_rwlock_init() pthread_rwlock_rdlock() pthread_rwlock_tryrdlock() pthread_rwlock_trywrlock() pthread_rwlock_unlock() pthread_rwlock_wrlock() pthread_rwlockattr_destroy() pthread_rwlockattr_getpshared() pthread_rwlockattr_init() pthread_rwlockattr_setpshared() pthread_setconcurrency()
On XSI-conformant systems, _POSIX_THREAD_SAFE_FUNCTIONS is always defined. Therefore, the following interfaces are always supported:
asctime_r() ctime_r() flockfile() ftrylockfile() funlockfile() getc_unlocked() getchar_unlocked() getgrgid_r() getgrnam_r() getpwnam_r() getpwuid_r() gmtime_r() localtime_r() putc_unlocked() putchar_unlocked() rand_r() readdir_r() strtok_r()
The following threads interfaces are only supported on XSI-conformant systems if the Realtime Threads Feature Group is supported:
pthread_attr_getinheritsched() pthread_attr_getschedpolicy() pthread_attr_getscope() pthread_attr_setinheritsched() pthread_attr_setschedpolicy() pthread_attr_setscope() pthread_getschedparam() pthread_mutex_getprioceiling() pthread_mutex_setprioceiling() pthread_mutexattr_getprioceiling() pthread_mutexattr_getprotocol() pthread_mutexattr_setprioceiling() pthread_mutexattr_setprotocol() pthread_setschedparam()
All interfaces defined by this specification will be thread-safe, except that the following interfaces need not be thread-safe:
asctime() ctime() getc_unlocked() getchar_unlocked() getgrgid() getgrnam() getlogin() getopt() getpwnam() getpwuid() gmtime() localtime() putc_unlocked() putchar_unlocked() rand() readdir() strtok() ttyname()
basename() catgets() dbm_clearerr() dbm_close() dbm_delete() dbm_error() dbm_fetch() dbm_firstkey() dbm_nextkey() dbm_open() dbm_store() dirname() drand48() ecvt() encrypt() endgrent() endpwent() endutxent() fcvt() gamma() gcvt() getdate() getenv() getgrent() getpwent() getutxent() getutxid() getutxline() getw() l64a() lgamma() lrand48() mrand48() nl_langinfo() ptsname() putenv() pututxline() setgrent() setkey() setpwent() setutxent() strerror()
The interfaces ctermid() and tmpnam() need not be thread-safe if passed a NULL argument.
The interfaces in the Legacy Feature Group need not be thread-safe.
Implementations will provide internal synchronisation as necessary in order to satisfy this requirement.
Thread Implementation Models
There are various thread implementation models. At one end of the spectrum is the "library-thread model". In such a model, the threads of a process are not visible to the operating system kernel, and the threads are not kernel scheduled entities. The process is the only kernel scheduled entity. The process is scheduled onto the processor by the kernel according to the scheduling attributes of the process. The threads are scheduled onto the single kernel scheduled entity (the process) by the run-time library according to the scheduling attributes of the threads. A problem with this model is that it constrains concurrency. Since there is only one kernel scheduled entity (namely, the process), only one thread per process can execute at a time. If the thread that is executing blocks on I/O, then the whole process blocks.
At the other end of the spectrum is the "kernel-thread model". In this model, all threads are visible to the operating system kernel. Thus, all threads are kernel scheduled entities, and all threads can concurrently execute. The threads are scheduled onto processors by the kernel according to the scheduling attributes of the threads. The drawback to this model is that the creation and management of the threads entails operating system calls, as opposed to subroutine calls, which makes kernel threads heavier weight than library threads.
Hybrids of these two models are common. A hybrid model offers the speed of library threads and the concurrency of kernel threads. In hybrid models, a process has some (relatively small) number of kernel scheduled entities associated with it. It also has a potentially much larger number of library threads associated with it. Some library threads may be bound to kernel scheduled entities, while the other library threads are multiplexed onto the remaining kernel scheduled entities. There are two levels of thread scheduling:
- The run-time library manages the scheduling of (unbound) library threads onto kernel scheduled entities.
- The kernel manages the scheduling of kernel scheduled entities onto processors.
For this reason, a hybrid model is referred to as a "two-level threads scheduling model". In this model, the process can have multiple concurrently executing threads; specifically, it can have as many concurrently executing threads as it has kernel scheduled entities.
A thread that has blocked will not prevent any unblocked thread that is eligible to use the same processing resources from eventually making forward progress in its execution. Eligibility for processing resources is determined by the scheduling policy.
A thread becomes the owner of a mutex, m, when either:
- it returns successfully from pthread_mutex_lock() with m as the mutex argument, or
- it returns successfully from pthread_mutex_trylock() with m as the mutex argument, or
- it returns (successfully or not) from pthread_cond_wait() with m as the mutex argument (except as explicitly indicated otherwise for certain errors), or
- it returns (successfully or not) from pthread_cond_timedwait() with m as the mutex argument (except as explicitly indicated otherwise for certain errors).
The thread remains the owner of m until it either:
- executes pthread_mutex_unlock() with m as the mutex argument, or
- blocks in a call to pthread_cond_wait() with m as the mutex argument, or
- blocks in a call to pthread_cond_timedwait() with m as the mutex argument.
The implementation behaves as if at all times there is at most one owner of any mutex.
A thread that becomes the owner of a mutex is said to have acquired the mutex and the mutex is said to have become locked; when a thread gives up ownership of a mutex it is said to have released the mutex and the mutex is said to have become unlocked.
Thread Scheduling Attributes
In support of the scheduling interface, threads have attributes which are accessed through the pthread_attr_t thread creation attributes object.
The contentionscope attribute defines the scheduling contention scope of the thread to be either PTHREAD_SCOPE_PROCESS or PTHREAD_SCOPE_SYSTEM.
The inheritsched attribute specifies whether a newly created thread is to inherit the scheduling attributes of the creating thread or to have its scheduling values set according to the other scheduling attributes in the pthread_attr_t object.
The schedpolicy attribute defines the scheduling policy for the thread. The schedparam attribute defines the scheduling parameters for the thread. The interaction of threads having different policies within a process is described as part of the definition of those policies.
If the _POSIX_THREAD_PRIORITY_SCHEDULING option is defined, and the schedpolicy attribute specifies one of the priority-based policies defined under this option, the schedparam attribute contains the scheduling priority of the thread. A conforming implementation ensures that the priority value in schedparam is in the range associated with the scheduling policy when the thread attributes object is used to create a thread, or when the scheduling attributes of a thread are dynamically modified. The meaning of the priority value in schedparam is the same as that of priority.
When a process is created, its single thread has a scheduling policy and associated attributes equal to the process's policy and attributes. The default scheduling contention scope value is implementation-dependent. The default values of other scheduling attributes are implementation-dependent.
Thread Scheduling Contention Scope
The scheduling contention scope of a thread defines the set of threads with which the thread must compete for use of the processing resources. The scheduling operation will select at most one thread to execute on each processor at any point in time and the thread's scheduling attributes (for example, priority), whether under process scheduling contention scope or system scheduling contention scope, are the parameters used to determine the scheduling decision.
The scheduling contention scope, in the context of scheduling a mixed-scope environment, affects threads as follows:
- A thread created with PTHREAD_SCOPE_SYSTEM scheduling contention scope contends for resources with all other threads in the same scheduling allocation domain relative to their system scheduling attributes. The system scheduling attributes of a thread created with PTHREAD_SCOPE_SYSTEM scheduling contention scope are the scheduling attributes with which the thread was created. The system scheduling attributes of a thread created with PTHREAD_SCOPE_PROCESS scheduling contention scope are the implementation-dependent mapping into system attribute space of the scheduling attributes with which the thread was created.
- Threads created with PTHREAD_SCOPE_PROCESS scheduling contention scope contend directly with other threads within their process that were created with PTHREAD_SCOPE_PROCESS scheduling contention scope. The contention is resolved based on the threads' scheduling attributes and policies. It is unspecified how such threads are scheduled relative to threads in other processes or threads with PTHREAD_SCOPE_SYSTEM scheduling contention scope.
- Conforming implementations support the PTHREAD_SCOPE_PROCESS scheduling contention scope, the PTHREAD_SCOPE_SYSTEM scheduling contention scope, or both.
Scheduling Allocation Domain
Implementations support scheduling allocation domains containing one or more processors. It should be noted that the presence of multiple processors does not automatically indicate a scheduling allocation domain size greater than one. Conforming implementations on multi-processors may map all or any subset of the CPUs to one or multiple scheduling allocation domains, and could define these scheduling allocation domains on a per-thread, per-process, or per-system basis, depending on the types of applications intended to be supported by the implementation. The scheduling allocation domain is independent of scheduling contention scope, as the scheduling contention scope merely defines the set of threads with which a thread must contend for processor resources, while scheduling allocation domain defines the set of processors for which it contends. The semantics of how this contention is resolved among threads for processors is determined by the scheduling policies of the threads.
The choice of scheduling allocation domain size and the level of application control over scheduling allocation domains is implementation-dependent. Conforming implementations may change the size of scheduling allocation domains and the binding of threads to scheduling allocation domains at any time.
For application threads with scheduling allocation domains of size equal to one, the scheduling rules defined for SCHED_FIFO and SCHED_RR will be used. All threads with system scheduling contention scope, regardless of the processes in which they reside, compete for the processor according to their priorities. Threads with process scheduling contention scope compete only with other threads with process scheduling contention scope within their process.
For application threads with scheduling allocation domains of size greater than one, the rules defined for SCHED_FIFO and SCHED_RR are used in an implementation-dependent manner. Each thread with system scheduling contention scope competes for the processors in its scheduling allocation domain in an implementation-dependent manner according to its priority. Threads with process scheduling contention scope are scheduled relative to other threads within the same scheduling contention scope in the process.
The thread cancellation mechanism allows a thread to terminate the execution of any other thread in the process in a controlled manner. The target thread (that is, the one that is being canceled) is allowed to hold cancellation requests pending in a number of ways and to perform application-specific cleanup processing when the notice of cancellation is acted upon.
Cancellation is controlled by the cancellation control interfaces. Each thread maintains its own cancelability state. Cancellation may only occur at cancellation points or when the thread is asynchronously cancelable.
The thread cancellation mechanism described in this section depends upon programs having set deferred cancelability state, which is specified as the default. Applications must also carefully follow static lexical scoping rules in their execution behaviour. For instance, use of setjmp(), return, goto, and so on, to leave user-defined cancellation scopes without doing the necessary scope pop operation will result in undefined behaviour.
Use of asynchronous cancelability while holding resources which potentially need to be released may result in resource loss. Similarly, cancellation scopes may only be safely manipulated (pushed and popped) when the thread is in the deferred or disabled cancelability states.
The cancelability state of a thread determines the action taken upon receipt of a cancellation request. The thread may control cancellation in a number of ways.
Each thread maintains its own cancelability state, which may be encoded in two bits:
- Cancelability Enable
- When cancelability is PTHREAD_CANCEL_DISABLE, cancellation requests against the target thread are held pending. By default, cancelability is set to PTHREAD_CANCEL_ENABLE.
- Cancelability Type
- When cancelability is enabled and the cancelability type is PTHREAD_CANCEL_ASYNCHRONOUS, new or pending cancellation requests may be acted upon at any time. When cancelability is enabled and the cancelability type is PTHREAD_CANCEL_DEFERRED, cancellation requests are held pending until a cancellation point (see below) is reached. If cancelability is disabled, the setting of the cancelability type has no immediate effect, as all cancellation requests are held pending; however, once cancelability is enabled again the new type will be in effect. The cancelability type is PTHREAD_CANCEL_DEFERRED in all newly created threads, including the thread in which main() was first invoked.
Cancellation points occur when a thread is executing the following functions:
aio_suspend() close() creat() fcntl() fsync() getmsg() getpmsg() lockf() mq_receive() mq_send() msgrcv() msgsnd() msync() nanosleep() open() pause() poll() pread() pthread_cond_timedwait() pthread_cond_wait() pthread_join() pthread_testcancel() putmsg() putpmsg() pwrite() read() readv() select() sem_wait() sigpause() sigsuspend() sigtimedwait() sigwait() sigwaitinfo() sleep() system() tcdrain() usleep() wait() wait3() waitid() waitpid() write() writev()
A cancellation point may also occur when a thread is executing the following functions:
catclose() catgets() catopen() closedir() closelog() ctermid() dbm_close() dbm_delete() dbm_fetch() dbm_nextkey() dbm_open() dbm_store() dlclose() dlopen() endgrent() endpwent() endutxent() fclose() fcntl() fflush() fgetc() fgetpos() fgets() fgetwc() fgetws() fopen() fprintf() fputc() fputs() fputwc() fputws() fread() freopen() fscanf() fseek() fseeko() fsetpos() ftell() ftello() ftw() fwprintf() fwrite() fwscanf() getc() getc_unlocked() getchar() getchar_unlocked() getcwd() getdate() getgrent() getgrgid() getgrgid_r() getgrnam() getgrnam_r() getlogin() getlogin_r() getpwent() getpwnam() getpwnam_r() getpwuid() getpwuid_r() gets() getutxent() getutxid() getutxline() getw() getwc() getwchar() getwd() glob() iconv_close() iconv_open() ioctl() lseek() mkstemp() nftw() opendir() openlog() pclose() perror() popen() printf() putc() putc_unlocked() putchar() putchar_unlocked() puts() pututxline() putw() putwc() putwchar() readdir() readdir_r() remove() rename() rewind() rewinddir() scanf() seekdir() semop() setgrent() setpwent() setutxent() strerror() syslog() tmpfile() tmpnam() ttyname() ttyname_r() ungetc() ungetwc() unlink() vfprintf() vfwprintf() vprintf() vwprintf() wprintf() wscanf()
Note that fcntl() appears in this list for any value of the cmd argument.
An implementation will not introduce cancellation points into any other functions specified in this specification.
The side effects of acting upon a cancellation request while suspended during a call to a function are the same as the side effects that may be seen in a single-threaded program when a call to a function is interrupted by a signal and the given function returns [EINTR]. Any such side effects occur before any cancellation cleanup handlers are called.
Whenever a thread has cancelability enabled and a cancellation request has been made with that thread as the target and the thread calls pthread_testcancel(), then the cancellation request is acted upon before pthread_testcancel() returns. If a thread has cancelability enabled and the thread has an asynchronous cancellation request pending and the thread is suspended at a cancellation point waiting for an event to occur, then the cancellation request will be acted upon. However, if the thread is suspended at a cancellation point and the event that it is waiting for occurs before the cancellation request is acted upon, it is unspecified whether the cancellation request is acted upon or whether the request remains pending and the thread resumes normal execution.
Thread Cancellation Cleanup Handlers
Each thread maintains a list of cancellation cleanup handlers. The programmer uses the functions pthread_cleanup_push() and pthread_cleanup_pop() to place routines on and remove routines from this list.
When a cancellation request is acted upon, the routines in the list are invoked one by one in LIFO sequence; that is, the last routine pushed onto the list (Last In) is the first to be invoked (First Out). The thread invokes each cancellation cleanup handler with cancellation disabled until the last cancellation cleanup handler returns. When the cancellation cleanup handler for a scope is invoked, the storage for that scope remains valid. If the last cancellation cleanup handler returns, thread execution is terminated and a status of PTHREAD_CANCELED is made available to any threads joining with the target. The symbolic constant PTHREAD_CANCELED expands to a constant expression of type (void *) whose value matches no pointer to an object in memory nor the value NULL.
The cancellation cleanup handlers are also invoked when the thread calls pthread_exit().
A side effect of acting upon a cancellation request while in a condition variable wait is that the mutex is reacquired before calling the first cancellation cleanup handler. In addition, the thread is no longer considered to be waiting for the condition and the thread will not have consumed any pending condition signals on the condition.
A cancellation cleanup handler cannot exit via longjmp() or siglongjmp().
Async-Cancel Safety
The pthread_cancel(), pthread_setcancelstate() and pthread_setcanceltype() functions are defined to be async-cancel safe.
No other functions in this specification are required to be async-cancel safe.
Thread Read-Write Locks
Multiple readers, single writer (read-write) locks allow many threads to have simultaneous read-only access to data while allowing only one thread to have write access at any given time. They are typically used to protect data that is read-only more frequently than it is changed.
Read-write locks can be used to synchronise threads in the current process and other processes if they are allocated in memory that is writable and shared among the cooperating processes and have been initialised for this behaviour.