(Latest Revision: March 11, 2018)
Chapter Six -- CPU Scheduling -- Lecture Notes
- 6.0 Objectives
- Introduce CPU scheduling
- describe scheduling algorithms
- learn criteria for evaluating scheduling algorithms
- see examples of scheduling algorithms used by operating systems
- 6.1 Basic Concepts
- Graphs of typical process CPU burst frequency versus
duration tend to be (hyper)exponential - in other words
there are lots of short bursts and very few long ones.
- The job of the short-term scheduler of the OS is to choose
the next job from the ready queue to be allowed to run in the CPU.
Conceptually, the data structure in the ready queue is a process
control block (PCB).
- Sometimes the current process, P, in the CPU is removed from the CPU
by an interrupt or trap but it remains ready to run and goes
immediately back into the ready queue. If, under those
circumstances, the short-term scheduler decides to run a process
other than P next, then we say P has been preempted. A
scheduler capable of preempting is called a preemptive scheduler.
- A system with a preemptive short-term scheduler will be much more
responsive in some situations. This is very desirable. However, it
tends to be a challenge to protect shared data structures from
corruption in such systems.
- In some cases, it's possible for the OS to solve a critical
section problem by masking interrupts and refusing to
relinquish the CPU while executing critical section code.
- The dispatcher is the part of the OS that actually puts a
process in the CPU. After the scheduler selects the next process
to run in the CPU, it calls the dispatcher. The dispatcher
performs the data movement associated with context switching,
switches to user mode, and jumps to the requisite
instruction within the user process. It is very important to
design the code of the dispatcher so that it runs as
quickly as possible. The dispatcher is a performance bottleneck
on many systems. The time the dispatcher uses up is called
the dispatch latency.
- 6.2 Scheduling Criteria
- Goals of short-term scheduling: high CPU utilization, high
throughput, short turnaround time, short waiting time, and short
response time.
- CPU utilization
== (time CPU runs 'useful' code) / (total elapsed time)
- throughput == jobs completed per unit time
- turnaround time == time from submission to completion
- wait time == time spent in the ready queue
- response time == time from submission
to start of first response
- Generally system designers also want the measures of
performance above to have low variance.
- 6.3 Scheduling Algorithms
- FCFS -- easily understood and implemented -- non-preemptive
-- can lead to long average waiting times -- is subject to the
"convoy effect" wherein either the CPU or I/O channel is not in use
because all the I/O-bound processes are stuck 'behind' a CPU-bound process.
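As an illustrative sketch (numbers are the classic textbook example, not from these notes) of how FCFS arrival order drives average waiting time - and of the convoy effect when a long CPU-bound job arrives first:

```python
def fcfs_wait_times(bursts):
    """Return each job's waiting time when jobs run in list order
    (all jobs assumed to arrive at time 0)."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # a job waits for everything ahead of it
        elapsed += b
    return waits

# A long CPU-bound job in front creates a 'convoy':
print(fcfs_wait_times([24, 3, 3]))   # waits [0, 24, 27], average 17
# The same jobs with the short ones first:
print(fcfs_wait_times([3, 3, 24]))   # waits [0, 3, 6], average 3
```

The same three jobs, reordered, cut the average wait from 17 to 3 time units - FCFS never reorders, which is why it can perform badly.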
- SJF -- can't be implemented perfectly at the level of
short-term scheduling -- instead the scheduler employs an estimate
of the length of the next burst -- pure SJF gives minimum average
waiting time -- can be preemptive or non-preemptive -- if preemptive
then when a shorter job enters the queue, the running job must be
replaced with the new job ASAP.
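The usual estimate of the next burst length is an exponential average of the measured lengths of previous bursts: tau_new = alpha * t + (1 - alpha) * tau_old. A minimal sketch (the variable names and sample bursts are mine):

```python
def next_burst_estimate(tau, t, alpha=0.5):
    """Exponential average: blend the last observed burst length t
    with the previous estimate tau. alpha weights recent history."""
    return alpha * t + (1 - alpha) * tau

tau = 10.0                        # initial guess for the first burst
for observed in [6, 4, 6, 4]:     # measured burst lengths, in order
    tau = next_burst_estimate(tau, observed)
print(tau)                        # estimate converges toward recent behavior
```

With alpha = 0.5 the estimate halves the weight of each older burst, so the prediction tracks recent behavior without being thrown off by one outlier.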
- HPJF -- can be preemptive or non-preemptive -- can lead to
starvation (SJF is a form of HPJF) -- aging used in conjunction with
HPJF can avert starvation -- aging is the elevation of the priority
of processes that have not received CPU time -- Unix is interesting
in that "priority" is actually based on "starvation" so low priority
processes don't starve simply because starvation causes the priority
to become higher.
- Starvation: (a.k.a. indefinite postponement)
This phenomenon is similar to what happens if you go to the
emergency room with a sore throat. You will be served if and when
there is no one there with a more urgent need. After all the people
who were there when you arrived are served, you may still have to
wait longer because more people could have arrived in the meantime
who need care more urgently than you. There is no limit to the
number of people who will be served before you are served. You will
probably be served eventually, but even that is not certain.
- RR -- designed for time-sharing -- circular queue and
time-slicing (quantizing) -- preemptive -- response times tend to be
short -- average wait times tend to be high -- there tends to be a
high amount of context-switching overhead -- to avoid excessive
overhead we need the quantum to be large in comparison with the
context-switch time -- to get low turnaround times the quantum
should be larger than the average CPU burst time -- on the other
hand the quantum has to be small enough to produce good response times.
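A small simulation sketch of RR wait times (my own code; assumes all jobs arrive at time 0 and context switches are free):

```python
from collections import deque

def round_robin_avg_wait(bursts, quantum):
    """Average waiting time under round-robin for jobs that all
    arrive at t=0, ignoring context-switch overhead."""
    remaining = list(bursts)
    waits = [0] * len(bursts)
    last_seen = [0] * len(bursts)   # when each job last left the CPU
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        waits[i] += clock - last_seen[i]   # time spent in the ready queue
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        last_seen[i] = clock
        if remaining[i] > 0:
            queue.append(i)                # unfinished job goes to the back
    return sum(waits) / len(waits)

print(round_robin_avg_wait([24, 3, 3], quantum=4))   # about 5.67
```

Rerunning with a tiny quantum shows wait times (and, in a real system, switching overhead) rising - the trade-off described above.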
- Multilevel Queue Scheduling -- You can divide processes into
several different types and have a separate ready-queue for each
type (e.g. foreground processes, background, system, interactive
editing, ...) You can have a different scheduling algorithm for
each queue - tailored to that type of process. You also need to
design an algorithm that decides which queue to service, and when.
This algorithm might implement strict priorities among the queues,
or time-slice among them.
- Multilevel Feedback Queue Scheduling -- Put jobs in queues
based on recent CPU usage. Allow migration from queue to queue.
'Hungrier processes' move up to higher priority queues - this prevents starvation.
- 6.4 Thread Scheduling
- 6.5 Multiple-Processor Scheduling
- Asymmetric Multiprocessing (AMP) is a possibility: All OS
code runs on just one processor & so only one process at a time has
access to system data structures.
- Virtually all modern OS's support Symmetric Multiprocessing
(SMP) - system code can run on any processor, OS code on each
processor schedules that processor.
- SMP can be used in conjunction with either a common ready queue or
separate ready queues for each processor.
- Access to a common ready queue has to be programmed carefully
(Critical section problem).
- On the other hand, load balancing can be problematic if there is a
separate ready queue for each processor.
- Processor Affinity: If a process migrates from one CPU
to another, the old instruction and address caches become invalid,
and it will take time for caches on the new CPU to become 'populated'.
For this reason, OS designers may build the short-term scheduler
to treat processes as having affinity for the CPU on which
they have been executing most recently.
- The idea of soft processor affinity is for the scheduler to give
priority to putting the process on its 'home' CPU, but not to make
doing so an absolute requirement. With
hard processor affinity, there's little or no flexibility
to allow a process to migrate to a different CPU.
- Another factor is architectures with non-uniform memory access
(NUMA) -- e.g. when there are multiple units with integrated CPU
and memory. Here it is advantageous for the scheduler and memory
allocator to cooperate to keep a process running on the CPU
that is 'close' to the memory in which it is resident.
- Push migration is an approach to load balancing - a system
process periodically checks ready queues and moves processes if it finds an imbalance.
- Pull migration - another approach - OS code running on a
processor that has little work to do takes jobs from the ready queues of other, busier processors.
- Note that load balancing tends to work counter to the idea of processor affinity.
- Multicore processors are basically multiprocessors on a single chip.
- A core may implement two or more logical processors by supporting
the compute cycle of one thread during the memory stall cycle(s) of another thread.
- This means that one level of scheduling is done by the hardware of
the cores when they select among the threads assigned to them.
- 6.6 Real-Time CPU Scheduling
- Soft real-time systems guarantee only to give high
priority to certain processes with real-time requirements.
- Hard real-time systems guarantee that certain processes
will execute within their real-time constraints.
- 6.6.1 Minimizing Latency
- In order to assure that deadlines are met, a hard real-time
system must enforce a bound on interrupt latency, the
elapsed time between when an interrupt arrives at a CPU and
when the service routine for the interrupt starts execution.
- Other ways to help assure that deadlines can be met:
- Allow preemption of processes, so that high priority
processes can be dispatched without delay.
- Create means for low-priority processes to
quickly release resources needed by a high-priority process.
- 6.6.2 Priority-Based Scheduling
- The text considers periodic hard real-time processes. These
processes require the CPU at constant intervals (periods).
Besides the period p, two other constants are associated
with a periodic process, the time t required to complete
the task, and the deadline d. We assume the relation
0 <= t <= d <= p.
- One of the jobs of the scheduler in a hard real-time system
is to examine the characteristics of periodic processes, and
determine whether it can guarantee that the deadlines of the
process will always be met.
- If so, the scheduler admits the process.
- If not, the scheduler rejects the process.
- 6.6.3 Rate-Monotonic Scheduling
- Rate-monotonic scheduling involves static priorities
- Periodic processes with shorter periods have priority over
periodic processes with longer periods.
- With this scheduling discipline, a set of periodic processes
have the 'best chance' of all meeting their deadlines. In
other words, if rate-monotonic scheduling does not allow
the processes to always meet their deadlines, then no other
algorithm that assigns static priorities can do so either.
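A standard sufficient schedulability test for rate-monotonic scheduling (the Liu-Layland bound, not stated in the notes above): n periodic processes are guaranteed to meet their deadlines if total CPU utilization sum(t_i / p_i) <= n * (2^(1/n) - 1). A sketch:

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) rate-monotonic schedulability test.
    tasks: list of (t, p) pairs - burst time t, period p."""
    n = len(tasks)
    utilization = sum(t / p for t, p in tasks)
    bound = n * (2 ** (1 / n) - 1)   # ~0.828 for n=2, tends to ~0.693
    return utilization <= bound

# Two tasks using 25/50 + 35/100 = 0.85 of the CPU exceed the 2-task bound:
print(rm_schedulable([(25, 50), (35, 100)]))   # False
# Reducing the first burst to 20 gives utilization 0.75, within the bound:
print(rm_schedulable([(20, 50), (35, 100)]))   # True
```

The test is conservative: a task set that fails it may still happen to meet all deadlines, but a set that passes is guaranteed to.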
- 6.6.4 Earliest-Deadline-First Scheduling
- EDF scheduling is an algorithm that minimizes maximum
lateness by giving priority to the process with
the earliest deadline. Relative priorities of processes
can change based on which deadlines are currently nearest.
- EDF can perform better than Rate-monotonic scheduling, and
it does not require that processes be periodic, or that they
have fixed processing time t.
- All that's required is that the process be able to 'announce'
its deadline to the scheduler when it becomes runnable.
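The EDF selection rule itself is tiny; a sketch with hypothetical processes, assuming each runnable process has announced an absolute deadline:

```python
def edf_pick(ready):
    """ready: list of (name, absolute_deadline) pairs.
    EDF runs the process whose deadline is earliest."""
    return min(ready, key=lambda proc: proc[1])[0]

print(edf_pick([("A", 80), ("B", 50), ("C", 120)]))   # B
```

Because deadlines are compared at each scheduling decision, a newly arrived process with a nearer deadline preempts the current one.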
- 6.6.5 Proportional Share Scheduling
- Proportional share schedulers allocate shares of CPU time to
each member of a group of processes.
- The scheduler 'admits' a client process if it is able
to allocate the number of shares that the client requests.
- 6.6.6 POSIX Real-Time Scheduling
- The POSIX standard provides extensions for real-time computing.
- Basically the scheduling possibilities are forms of preemptive
priority scheduling (the SCHED_FIFO and SCHED_RR policies).
- 6.7 Operating System Examples
- 6.7.1 Example: Linux Scheduling
- The Linux OS started out using a traditional Unix
scheduler, but in more current revisions began using
something called the Completely Fair Scheduler (CFS).
- Standard Linux has two scheduling classes, "default"
and real-time. Each class has its own priority and
scheduling algorithm.
- CFS is the algorithm used for the default class.
- CFS assigns a proportion of CPU time to each task,
according to its nice value, which is a
quantity traditionally used in unix-like systems as a
component of the process priority formula. We may think of
the nice value as a base priority.
- The vruntime is the overall CFS priority
of a process, calculated from its nice value (nv) and
its amount of recent (CPU) runtime (rrt). Lower values of either
nv or rrt correspond to higher CFS priority.
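A much-simplified sketch of the vruntime idea (illustrative only - the real CFS uses a kernel weight table and additional rules, and the weighting formula here is an assumption of mine): each task's virtual runtime grows faster at higher nice values, and the scheduler always runs the task with the smallest vruntime.

```python
# Illustrative only: not the actual Linux CFS weight calculation.
def charge(vruntime, actual_ms, nice):
    """Advance a task's virtual runtime after it runs. Higher nice
    (lower priority) makes vruntime grow faster, so the task is
    scheduled less often."""
    weight = 1.25 ** (-nice)          # stand-in for the kernel's weight table
    return vruntime + actual_ms / weight

def pick_next(tasks):
    """tasks: dict name -> vruntime. Lowest vruntime runs next
    (the leftmost node of the red-black tree)."""
    return min(tasks, key=tasks.get)

tasks = {"editor": 0.0, "batch": 0.0}
# Both run 10 ms; the nice-5 batch job is charged more virtual time:
tasks["editor"] = charge(tasks["editor"], 10, nice=0)
tasks["batch"] = charge(tasks["batch"], 10, nice=5)
print(pick_next(tasks))   # editor
```

The effect matches the notes: low recent runtime or a low nice value both translate into low vruntime, i.e. high CFS priority.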
- The CFS scheduler has a ready queue implemented as a
red-black tree. The scheduler caches the location of the
leftmost node, which represents a task with a minimum
value of vruntime - a maximum priority.
- Linux supports real-time processing by implementing
POSIX Real-Time Scheduling (see section 6.6.6), and
by giving real-time processes higher priority than all other processes.
- 6.7.2 Example: Windows Scheduling
- Windows has preemptive priority scheduling that gives
real-time processes higher priority than normal processes.
- There are 32 levels of priority, each with its own queue.
- The OS changes priorities of normal processes to help
improve response times and mitigate starvation.
- Windows 7 has user mode scheduling - a facility
that allows applications to create and schedule user-level threads.
- 6.7.3 Example: Solaris Scheduling
- Solaris 9 has six scheduling classes: time-sharing,
interactive, real time, system, fair share, and fixed priority.
- The Solaris time-sharing class is the default.
Solaris schedules the time-sharing class
with a multi-level feedback queue. Processes that have used
little CPU time lately get high priority. Lower priority
processes get larger timeslices.
- Solaris schedules the interactive class about the same way
as the time-sharing class, but it gives high priority to
windowing applications.
- In scheduling the time-sharing and interactive classes,
Solaris gives smaller time slices to higher-priority
processes. It also lowers the priority of processes
that use up their time slice, and it boosts priorities
of processes that have recently returned from sleep.
- The scheduling algorithm for the time-sharing and
interactive classes is table-driven.
- Processes in the real-time class have priority higher than
any other class.
- Solaris actually maps all the classes and priorities into a
single spectrum of global priorities, and the scheduler runs
the highest priority process in the ready queue.
- 6.8 Algorithm Evaluation (evaluation of scheduling algorithms)
How do we select CPU scheduling algorithms?
First we must choose the goals we want the scheduling algorithm to
achieve, such as response time of less than one second, or low variance
in wait time.
Next we need ways to evaluate algorithms to see if they will achieve those goals.
- 6.8.1 Evaluate using Deterministic modeling -- calculate performance based on
specific test inputs -- this can be effective when trying to find a
good scheduling technique for a system that tends to run the same
kind of program over and over, often with very similar input from
run to run -- e.g. payroll processing, census calculations, and
weather prediction. This is not the style of most personal computer
users, but it is nevertheless not uncommon in business and scientific computing.
- 6.8.2 Evaluate using Queuing Models
- prepare by gathering system statistics
- distribution of CPU and I/O bursts
- distribution of job arrival-time
- do queuing network analysis -- get figures for things like
average queue lengths and waiting times
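One result from queueing theory often used in this kind of analysis is Little's formula, n = lambda * W (average queue length = arrival rate * average waiting time); the notes don't state it, but it links the measured figures above. A sketch:

```python
def littles_law_n(arrival_rate, avg_wait):
    """Little's formula: average number of processes in the queue
    n = lambda (arrivals per unit time) * W (average wait)."""
    return arrival_rate * avg_wait

# If 7 processes arrive per second and each waits 2 seconds on average,
# the ready queue holds about 14 processes at any moment:
print(littles_law_n(7, 2))   # 14
```

The formula holds for any steady-state queue regardless of the arrival distribution, which is what makes it so useful in this analysis.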
- the mathematics can be difficult, so often researchers make
simplifying assumptions
- naturally the results may be "weaker" because of the inaccuracy
introduced by those assumptions
- 6.8.3 Evaluate using Simulations
- represent major components and activities of the system with
software functions and data structures.
- use a variable to represent a clock and update system state
variables after each "tick"
- measure desired characteristics of the (simulation of the) system
- use random numbers and assumed distributions to produce inputs
to the system, or use traces from actual running systems
- trace tapes have the advantage of making it possible to compare
different algorithms on the exact same inputs.
- random numbers and assumed distributions may not capture enough
information, such as correlations in time between different kinds
of events
- traces can take up a lot of space on secondary storage
- simulations can be expensive to code and require long periods
of time to run
- 6.8.4 Evaluation by Implementation --
implement scheduling algorithms on an actual system
and measure their performance.
- there are high costs of coding and system modification
- inconvenience to users may be considerable
- user behavior may change in response to new scheduling
algorithm, so the benefits of the algorithm may not persist
- it may be useful to allow managers and users to "tune"
the scheduling policy of the running system. Not many of
today's operating systems are tunable.
- Some versions of unix can be tuned - Solaris has a
dispadmin command for changing parameters of scheduling classes.