Base Priority
The process priority class and thread priority level are combined to form the base priority of each thread. The following table shows the base priority for each combination of process priority class and thread priority level.
Process priority class | Thread priority level | Base priority
--- | --- | ---
IDLE_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 1
 | THREAD_PRIORITY_LOWEST | 2
 | THREAD_PRIORITY_BELOW_NORMAL | 3
 | THREAD_PRIORITY_NORMAL | 4
 | THREAD_PRIORITY_ABOVE_NORMAL | 5
 | THREAD_PRIORITY_HIGHEST | 6
 | THREAD_PRIORITY_TIME_CRITICAL | 15
BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 1
 | THREAD_PRIORITY_LOWEST | 4
 | THREAD_PRIORITY_BELOW_NORMAL | 5
 | THREAD_PRIORITY_NORMAL | 6
 | THREAD_PRIORITY_ABOVE_NORMAL | 7
 | THREAD_PRIORITY_HIGHEST | 8
 | THREAD_PRIORITY_TIME_CRITICAL | 15
NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 1
 | THREAD_PRIORITY_LOWEST | 6
 | THREAD_PRIORITY_BELOW_NORMAL | 7
 | THREAD_PRIORITY_NORMAL | 8
 | THREAD_PRIORITY_ABOVE_NORMAL | 9
 | THREAD_PRIORITY_HIGHEST | 10
 | THREAD_PRIORITY_TIME_CRITICAL | 15
ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 1
 | THREAD_PRIORITY_LOWEST | 8
 | THREAD_PRIORITY_BELOW_NORMAL | 9
 | THREAD_PRIORITY_NORMAL | 10
 | THREAD_PRIORITY_ABOVE_NORMAL | 11
 | THREAD_PRIORITY_HIGHEST | 12
 | THREAD_PRIORITY_TIME_CRITICAL | 15
HIGH_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 1
 | THREAD_PRIORITY_LOWEST | 11
 | THREAD_PRIORITY_BELOW_NORMAL | 12
 | THREAD_PRIORITY_NORMAL | 13
 | THREAD_PRIORITY_ABOVE_NORMAL | 14
 | THREAD_PRIORITY_HIGHEST | 15
 | THREAD_PRIORITY_TIME_CRITICAL | 15
REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_IDLE | 16
 | THREAD_PRIORITY_LOWEST | 22
 | THREAD_PRIORITY_BELOW_NORMAL | 23
 | THREAD_PRIORITY_NORMAL | 24
 | THREAD_PRIORITY_ABOVE_NORMAL | 25
 | THREAD_PRIORITY_HIGHEST | 26
 | THREAD_PRIORITY_TIME_CRITICAL | 31
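For example, a process priority class and thread priority level from the table can be set with the SetPriorityClass and SetThreadPriority functions. The following is a minimal sketch (error handling reduced to the essentials) that raises the current process to ABOVE_NORMAL_PRIORITY_CLASS and the calling thread to THREAD_PRIORITY_HIGHEST, which the table maps to a base priority of 12.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Raise the priority class of the current process.
    if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS))
    {
        printf("SetPriorityClass failed (%lu)\n", GetLastError());
        return 1;
    }

    // Raise the priority level of the calling thread within that class.
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST))
    {
        printf("SetThreadPriority failed (%lu)\n", GetLastError());
        return 1;
    }

    // Read the values back. Per the table above, ABOVE_NORMAL_PRIORITY_CLASS
    // combined with THREAD_PRIORITY_HIGHEST yields a base priority of 12.
    printf("Priority class: 0x%lx\n", GetPriorityClass(GetCurrentProcess()));
    printf("Thread priority: %d\n", GetThreadPriority(GetCurrentThread()));
    return 0;
}
```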
Context Switches
The scheduler maintains a queue of executable threads for each priority level. These are known as ready threads. When a processor becomes available, the system performs a context switch. The steps in a context switch are:

1. Save the context of the thread that just finished executing.
2. Place the thread that just finished executing at the end of the queue for its priority.
3. Find the highest priority queue that contains ready threads.
4. Remove the thread at the head of the queue, load its context, and execute it.
The following classes of threads are not ready threads:

- Threads created with the CREATE_SUSPENDED flag
- Threads halted during execution with the SuspendThread or SwitchToThread function
- Threads waiting for a synchronization object or input
Until threads that are suspended or blocked become ready to run, the scheduler does not allocate any processor time to them, regardless of their priority. The most common reasons for a context switch are:

- The time slice has elapsed.
- A thread with a higher priority has become ready to run.
- A running thread needs to wait.
When a running thread needs to wait, it relinquishes the remainder of its time slice.
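To illustrate the non-ready states above, the following sketch creates a worker thread with the CREATE_SUSPENDED flag, so it receives no processor time until ResumeThread is called; the worker then blocks in WaitForSingleObject and is again not a ready thread until the event is signaled. The Worker routine and g_event variable are illustrative names, not part of the API.

```c
#include <windows.h>
#include <stdio.h>

static HANDLE g_event;  // illustrative: signaled when the worker may proceed

static DWORD WINAPI Worker(LPVOID param)
{
    (void)param;
    // Blocked here: the thread is not a ready thread and gets no processor
    // time until g_event is signaled, regardless of its priority.
    WaitForSingleObject(g_event, INFINITE);
    printf("worker running\n");
    return 0;
}

int main(void)
{
    g_event = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Created suspended: not a ready thread yet.
    HANDLE worker = CreateThread(NULL, 0, Worker, NULL, CREATE_SUSPENDED, NULL);
    if (worker == NULL || g_event == NULL)
        return 1;

    ResumeThread(worker);  // now ready, but it immediately blocks on the event
    Sleep(100);            // main relinquishes the rest of its time slice
    SetEvent(g_event);     // wait condition satisfied; the worker is ready again

    WaitForSingleObject(worker, INFINITE);
    CloseHandle(worker);
    CloseHandle(g_event);
    return 0;
}
```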
Priority Boosts
Each thread has a dynamic priority. This is the priority the scheduler uses to determine which thread to execute. Initially, a thread's dynamic priority is the same as its base priority. The system can boost and lower the dynamic priority to ensure that the thread is responsive and that no thread is starved for processor time. The system does not boost the priority of threads with a base priority level between 16 and 31; only threads with a base priority between 0 and 15 receive dynamic priority boosts. The system boosts the dynamic priority of a thread to enhance its responsiveness as follows:

- When a process that uses NORMAL_PRIORITY_CLASS is brought to the foreground, the scheduler boosts the priority class of the process associated with the foreground window, so that it is greater than or equal to the priority class of any background processes. The priority class returns to its original setting when the process is no longer in the foreground.
- When a window receives input, such as timer messages, mouse messages, or keyboard input, the scheduler boosts the priority of the thread that owns the window.
- When the wait conditions for a blocked thread are satisfied, the scheduler boosts the priority of the thread. For example, when a wait operation associated with disk or keyboard I/O finishes, the thread receives a priority boost.
You can disable the priority-boosting feature by calling the SetProcessPriorityBoost() or SetThreadPriorityBoost() function. To determine whether this feature has been disabled, call the GetProcessPriorityBoost() or GetThreadPriorityBoost() function. After raising a thread's dynamic priority, the scheduler reduces that priority by one level each time the thread completes a time slice, until the thread drops back to its base priority. A thread's dynamic priority is never less than its base priority.
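A minimal sketch of those calls: it disables dynamic boosts for the calling thread and reads the setting back. Note that the Boolean parameter and output mean "disabled", so TRUE turns the feature off.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Disable dynamic priority boosts for the calling thread.
    // The second parameter is "disable": TRUE turns the feature off.
    if (!SetThreadPriorityBoost(GetCurrentThread(), TRUE))
    {
        printf("SetThreadPriorityBoost failed (%lu)\n", GetLastError());
        return 1;
    }

    BOOL disabled = FALSE;
    if (GetThreadPriorityBoost(GetCurrentThread(), &disabled))
    {
        printf("Priority boost %s for this thread\n",
               disabled ? "disabled" : "enabled");
    }

    // SetProcessPriorityBoost(GetCurrentProcess(), TRUE) works the same way
    // for every thread in the process.
    return 0;
}
```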
Priority Inversion
Priority inversion occurs when two or more threads with different priorities are in contention to be scheduled. Consider a simple case with three threads: thread 1 is a high-priority thread that has just become ready to be scheduled, thread 2 is a low-priority thread that is executing code in a critical section, and thread 3 has medium priority. Thread 1 begins waiting for a shared resource held by thread 2. Thread 3 then receives all the processor time: the high-priority thread (thread 1) is waiting on the low-priority thread (thread 2), and thread 2 cannot leave the critical section because it does not have the highest priority and is not scheduled.
The scheduler solves this problem by randomly boosting the priority of the ready threads (in this case, the low-priority lock holders). The low-priority threads run long enough to exit the critical section, and the high-priority thread can then enter it. If a low-priority thread does not get enough CPU time to exit the critical section the first time, it gets another chance during the next round of scheduling.
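The scenario can be sketched with three threads pinned to a single processor with SetThreadAffinityMask so that they genuinely compete, a CRITICAL_SECTION standing in for the shared resource, and a busy loop standing in for the medium-priority workload. The thread routines, the loop bound, and the Sleep calls are illustrative choices; without the random boost described above, the high-priority thread could wait on the lock indefinitely.

```c
#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION g_lock;  // the shared resource from the example

static DWORD WINAPI LowPriorityThread(LPVOID p)    // "thread 2"
{
    (void)p;
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    EnterCriticalSection(&g_lock);
    // Simulate work inside the critical section. While a medium-priority
    // thread is runnable on the same processor, this thread can be starved
    // here and may not reach LeaveCriticalSection without a priority boost.
    Sleep(50);
    LeaveCriticalSection(&g_lock);
    return 0;
}

static DWORD WINAPI MediumPriorityThread(LPVOID p) // "thread 3"
{
    (void)p;
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
    for (volatile LONG i = 0; i < 200000000; ++i)  // CPU-bound work
        ;
    return 0;
}

static DWORD WINAPI HighPriorityThread(LPVOID p)   // "thread 1"
{
    (void)p;
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    EnterCriticalSection(&g_lock);  // blocks until thread 2 leaves the section
    LeaveCriticalSection(&g_lock);
    printf("high-priority thread acquired the lock\n");
    return 0;
}

int main(void)
{
    InitializeCriticalSection(&g_lock);

    HANDLE threads[3];
    threads[0] = CreateThread(NULL, 0, LowPriorityThread, NULL, 0, NULL);
    Sleep(10);  // let the low-priority thread take the lock first
    threads[1] = CreateThread(NULL, 0, MediumPriorityThread, NULL, 0, NULL);
    threads[2] = CreateThread(NULL, 0, HighPriorityThread, NULL, 0, NULL);

    // Pin all three to one processor so they truly contend for it.
    for (int i = 0; i < 3; ++i)
        SetThreadAffinityMask(threads[i], 1);

    WaitForMultipleObjects(3, threads, TRUE, INFINITE);
    for (int i = 0; i < 3; ++i)
        CloseHandle(threads[i]);
    DeleteCriticalSection(&g_lock);
    return 0;
}
```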