(
Latest Revision:
Sun Sep 19, 2005
)
Chapter Seven
--
Deadlocks
--
Chapter Notes
- Section 7.1 -- System Model
- Our model system contains processes and resource types.
(Processes can be heavy- or lightweight.)
- Resources can be physical things like CPUs, printers,
disk & tape drives, memory, scanners, network interface
cards, registers, mice, screens, and keyboards. There
are also logical resources such as variables and files.
- We assume that any two elements of the same resource
type are identical from the point of view of any
process. (For example, a process requesting a page of
memory or a register does not "care" which particular
page or register it uses.)
- Processes use resources.
- Before using a resource a process must make a request
to the OS for the resource and receive permission from
the OS to use the resource.
- A process is allowed to make multiple requests and to
request several resources at the same time. However, a
process is not allowed to request more resources than
the total available in the system. For example, a
process is not allowed to ask for two tape drives if
there is only one tape drive.
- If a process requests a resource and the use of the
resource cannot be granted immediately, the process
must wait.
- After using a resource a process must release it by
notifying the OS.
- Deadlock occurs in a group of processes
when each process is waiting for a resource that can
only be acquired when one of the other processes in the
group releases it.
- A deadlock is stable: none of the processes involved can ever
acquire all the resources for which it is waiting, so the group
remains blocked until the system intervenes.
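The circular-wait situation described above can be sketched in a few lines of Python. This is a hypothetical model with invented process and resource names, not an implementation of any real OS mechanism: each process holds one resource and waits for another, and we check whether every process in a group is waiting on a resource held by another member of the group.

```python
# Hypothetical two-process example: P1 holds R1 and wants R2,
# while P2 holds R2 and wants R1.
holds = {"P1": "R1", "P2": "R2"}       # process -> resource it holds
waits_for = {"P1": "R2", "P2": "R1"}   # process -> resource it wants

def is_deadlocked(group):
    """True if every process in the group waits for a resource
    held by another process in the same group."""
    held_by = {r: p for p, r in holds.items()}
    return all(
        waits_for[p] in held_by and held_by[waits_for[p]] in group
        for p in group
    )

print(is_deadlocked({"P1", "P2"}))  # True: each waits on the other
```

Neither process can proceed, and neither will ever release what it holds, which is exactly the "stable" property noted above.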
- Section 7.2 -- Deadlock Characterization
- Section 7.2.1 -- Necessary Conditions
- Mutual Exclusion -- If multiple processes are
permitted to access all resources concurrently then deadlock
can't happen.
- Hold and Wait -- If processes never hold a resource
while waiting for a resource then deadlock can't happen.
- No preemption -- If "enough" preemption of resources
occurs, then deadlock can't happen. (In the extreme,
suppose that whenever a process waits for a resource the
system takes away all the resources it currently holds.
Deadlock would be impossible. )
- Circular Wait -- If cycles in the resource
allocation graph cannot occur then there cannot be any
deadlock.
- Section 7.2.2 -- Resource Allocation Graph (RAG)
- Processes are Nodes -- Circles
- Resource Types are Nodes -- Squares
- Each instance of the resource type is represented by a
dot in the square
- A solid request edge points from a process to a resource
type
- When a request is granted the request edge is
instantaneously transformed into an assignment edge
extending from one of the instances inside the resource to
the process.
- There may be "dashed" claim edges from a process to a
resource type indicating that the process may make a request
for an instance of the resource in the future.
- If there are no cycles in the RAG then there is no
deadlock in the system.
- If there is only one instance of every resource type, then
a cycle in the RAG indicates a deadlock.
- If some of the resource types have more than one instance
then it is possible for a cycle to exist in the RAG when
there is no deadlock.
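The RAG facts above can be illustrated with a small sketch. The representation (a dict of directed edges) and all the names are invented for illustration; with single-instance resource types, a cycle found this way indicates deadlock, per Section 7.2.2.

```python
# Directed edges of a small RAG: P -> R is a request edge,
# R -> P is an assignment edge. Names are invented.
edges = {
    "P1": ["R2"],   # P1 requests R2
    "R2": ["P2"],   # R2 is assigned to P2
    "P2": ["R1"],   # P2 requests R1
    "R1": ["P1"],   # R1 is assigned to P1
}

def has_cycle(graph):
    """Depth-first search for a cycle in a directed graph."""
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(edges))  # True: P1 -> R2 -> P2 -> R1 -> P1
```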
- Section 7.3 -- Methods for Handling Deadlocks
- Deadlock Prevention -- Make rules about how requests and
assignments are done so that one or more of the necessary
conditions for deadlock are missing.
- Deadlock Avoidance -- Place restrictions on assignments only when
the system is about to enter an "unsafe state" from which it could
immediately "go out of control" and become deadlocked.
- Deadlock Detection -- Place no restrictions on the system but
monitor it and break up deadlocks when they occur.
- Ignore the problem -- Whatever happens happens -- you can always
reboot the system -- or is it as simple as that?
- Section 7.4 -- Deadlock Prevention
- Section 7.4.1 -- Mutual Exclusion -- If some resources do not
require mutually exclusive access, we can prevent them from being
involved in any deadlock by making them sharable. The text uses
the example of read-only files. If no process ever has to wait
for a read-only file, then it will never be part of a cycle in a
RAG.
PROBLEM WITH THAT: This idea is unworkable for use with the many
resources that are inherently unsharable.
- Section 7.4.2 -- Hold and Wait
- Make a rule that processes must request all resources at the
very start of execution, when they have no resources yet, or
- Make a rule that a process may request resources only when
it has none. (There may be a problem with this idea if
multiple requests are allowed and resources are assigned
"piecemeal." See this image:
counterExample01.jpg )
- PROBLEMS WITH THAT: Starvation is possible when multiple
requests are made and "piecemeal" assignments are not
allowed. On the other hand, if piecemeal assignments
are allowed then processes can deadlock. Even when
starvation does not occur, lowered device utilization and
reduced system throughput are likely. These methods also
force the programmer to write the code so that processes
will follow the rules.
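The "all at once, or nothing" flavor of hold-and-wait prevention can be sketched with Python locks standing in for resources. This is an illustrative sketch only, with invented names: a process tries to take every resource up front, and if any one is unavailable it releases what it got and backs off, so it never holds while waiting.

```python
import threading

def acquire_all_or_none(locks):
    """Try to take every lock; on any failure, release the
    partial holdings and report False."""
    taken = []
    for lock in locks:
        if lock.acquire(blocking=False):
            taken.append(lock)
        else:
            for t in taken:        # back off: never hold-and-wait
                t.release()
            return False
    return True

# Hypothetical resources: a printer and a tape drive.
printer, tape = threading.Lock(), threading.Lock()
ok = acquire_all_or_none([printer, tape])
print(ok)  # True: both were free, so both were acquired
```

The back-off branch is what breaks the hold-and-wait condition, but it is also the source of the starvation and utilization problems listed above: a process can keep losing the race for one of its resources indefinitely.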
- Section 7.4.3 -- No Preemption
- One method: "if a process is holding some resources and
requests another resource that cannot be immediately
allocated to it (that is, the process must wait), then all
resources currently being held are preempted." (implicitly
released) The process is then set up as requesting both the
old resources and the new ones.
- Another method: Suppose a process X requests some resources
that are held by a process Y. If Y is waiting for a
resource then X takes what it wants from Y and this is added
to the request that Y is waiting for. If Y is not waiting
then X waits. (You can preempt stateless things like
registers and memory without harming Y. However if you take
away something like a printer, you may as well terminate Y.)
- PROBLEMS WITH THAT: Basically the problems here are the same
as with hold and wait. Instead of being programmed to
"follow rules" processes have to be programmed to deal with
the possibility of having resources preempted.
- Section 7.4.4 -- Circular Wait
- Impose a total ordering on the resource types.
- Require processes to make requests in increasing order.
- If a process wants more than one instance of some resource
type, it has to ask for all of them at once.
- PROBLEMS WITH THAT: Lowered device utilization and reduced
system throughput are likely. This method also forces the
programmer to write the code so that processes will follow the
rules.
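The total-ordering rule can be sketched as a simple rank check. The resource names and rank numbers below are invented; the point is only that if every process requests in strictly increasing rank, no circular chain of waits can form.

```python
# Assumed (invented) total ordering on resource types.
RANK = {"tape": 1, "disk": 2, "printer": 3}

def ordered_request(sequence):
    """True if a sequence of requests obeys the
    strictly-increasing-rank rule."""
    ranks = [RANK[r] for r in sequence]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

print(ordered_request(["tape", "disk", "printer"]))  # True
print(ordered_request(["printer", "tape"]))          # False: decreasing
```

A process that wants the tape drive after already holding the printer would have to release the printer first and start over in order, which is where the utilization cost comes from.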
- Section 7.5 -- Deadlock Avoidance
- The idea of deadlock avoidance is for the OS to get some advance
information about the possible needs of each process. The OS
then uses this knowledge to recognize unsafe states from
which the system could slip, out of control, into deadlock.
- Section 7.5.1 -- Safe State
- Basically an unsafe state is one which will turn into a
deadlock if all processes immediately request their
remaining possible needs.
- A state is safe if it is not unsafe: there is some order
P0, P1, P2, ... , Pn in which all the processes can finish
executing even if each maxes out its requests. As each
process exits, it gives up its resources and those become
available to the next process in the sequence.
- The system will grant a request from a process
- if the request is not in excess of the possible needs that
the process declared,
- if the resources requested are currently available (free),
and
- if granting the request will leave the system in a safe
state.
If granting a request would put the system into an unsafe
state, the system makes the process wait for the resource
instead. The resources will be granted sometime later when it
will be safe to do so.
- PROBLEMS WITH THAT: The system is burdened with doing a lot of
checking each time there is a request for a resource. Resource
utilization and throughput can be reduced and starvation can occur.
- Section 7.5.2 -- Resource-Allocation Graph
- We can create an augmented resource allocation graph
(AUGRAG) by adding claim edges representing each request
that each process might make.
- If there is just one instance of each resource type, then
"unsafe" is equivalent to "cycle in the AUGRAG." This is a
conceptually simple way to characterize safety. Cycle
detection algorithms typically require O(N^2) work, where
N is the number of processes in the system. This kind of
method does *not* work where there are multiple instances
of some resource types.
- Section 7.5.3 -- Banker's Algorithm
- The Banker's Algorithm is a deadlock avoidance scheme that
works when there are multiple instances of resource types.
It is generally less efficient than the cycle-detection
scheme.
- Before it does anything else a new process must declare the
maximum number of instances of each resource type that it
may need.
- Various data structures are required. See the "GLOSSARY"
here.
- Section 7.5.3.1 -- Safety Algorithm
- Read the algorithm to check for safety
here.
- Section 7.5.3.2 -- Resource Request Algorithm
- Read the banker's resource-request algorithm
here.
- Section 7.5.3.3 -- An Illustrative Example
- See the textbook example worked out completely in detail
here.
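The safety check at the heart of the Banker's Algorithm can be sketched as follows. The matrices follow the usual conventions (Available, Allocation, and Need = Max - Allocation), and the numbers mirror the common textbook example, but treat this as an illustrative sketch rather than the text's exact presentation.

```python
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if
    the state is unsafe."""
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            # Process i can finish if its remaining need fits in work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Reclaim everything process i holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # [1, 3, 4, 0, 2]
```

The resource-request algorithm then grants a request only by pretending to grant it, running this check on the resulting state, and rolling back (making the process wait) if the check returns None.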
- Section 7.6 -- Deadlock Detection
- Another alternative: have the system periodically run a deadlock
detection algorithm, and run a recovery algorithm whenever a
deadlock is detected.
- Section 7.6.1 -- Single Instance of Each Resource Type
- In this case there is a deadlock if and only if there is a cycle
in the resource allocation graph. Therefore the OS can detect
deadlock by maintaining a RAG that represents the system and
doing cycle checks from time to time. There is a more compact
graphical representation called a wait-for graph that can
be used instead. The algorithm will tend to be more efficient
if run on this graph.
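Collapsing the RAG into a wait-for graph can be sketched like this. All names are invented: each edge P -> Q in the wait-for graph means "P waits for a resource held by Q," and with single-instance resources a cycle through a process means deadlock.

```python
# Single-instance RAG, split into its two edge kinds (names invented).
requests = {"P1": "R2", "P2": "R1"}    # process -> resource requested
assigned = {"R1": "P1", "R2": "P2"}    # resource -> process holding it

# Collapse: P waits for whoever holds the resource P requested.
wait_for = {p: assigned[r] for p, r in requests.items()}

def find_cycle(graph, start):
    """Follow the single outgoing edge from start; return True
    if the walk comes back around to start."""
    seen, node = set(), start
    while node in graph and node not in seen:
        seen.add(node)
        node = graph[node]
    return node == start

print(find_cycle(wait_for, "P1"))  # True: P1 -> P2 -> P1
```

The wait-for graph has only process nodes, so it is roughly half the size of the RAG, which is why detection tends to be cheaper on it.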
- Section 7.6.2 -- Several Instances of a Resource Type
- If there are multiple instances of some resource types then
one can detect deadlock with an algorithm similar to the
safety algorithm. (We can view the safety algorithm as
checking to see if there would be a deadlock if all the
processes were to max out their requests.)
- See the deadlock detection algorithm
here.
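The multi-instance detection idea can be sketched as a variant of the safety check: it works against each process's *current* requests rather than its declared maximum need, and a process holding nothing is trivially able to finish. The numbers below are invented.

```python
def deadlocked(available, allocation, request):
    """Return the set of process indices that cannot possibly
    finish, i.e. the deadlocked set (empty if no deadlock)."""
    work = available[:]
    # A process holding no resources can always finish.
    finished = [all(a == 0 for a in alloc) for alloc in allocation]
    changed = True
    while changed:
        changed = False
        for i, done in enumerate(finished):
            if not done and all(r <= w for r, w in zip(request[i], work)):
                # Assume process i runs to completion and releases all.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                changed = True
    return {i for i, done in enumerate(finished) if not done}

# Two processes hold everything; each requests what the other holds.
print(deadlocked([0], [[1], [1]], [[1], [1]]))  # {0, 1}
```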
- Section 7.6.3 -- Detection-Algorithm Usage
- In the system model we use, the addition of a request edge to the
RAG is always the last step in the creation of a deadlock. If
we run the deadlock detection algorithm every time a process
blocks requesting a resource, then we will detect each deadlock
as soon as it happens. This would require a lot of overhead
processing.
- Since deadlocks are usually quite rare, it may suffice to check
for deadlock only about as often as deadlock occurs. We also
might make use of heuristics, such as low CPU utilization or the
presence of processes in the system that have been waiting for
resources for an unusually long time.
- Section 7.7 -- Recovery from Deadlock
- Section 7.7.1 -- Process Termination
- To break a deadlock we can just abort all deadlocked
processes, but that is expensive in terms of lost work of all
the processes.
- Instead we can select victim processes one at a time and
abort them, stopping when the deadlock is broken. This
brings up the question of what criteria to use to select
victims.
- Kill the lowest priority process?
- Kill the youngest process?
- Kill the process that has most "life" ahead of it?
- Kill the process with the fewest stateful resources?
- Kill the process that needs the most additional
resources?
- Kill the smallest possible number of processes?
Also, this method requires us to check for
deadlock after killing each victim.
- Aborting processes may leave data and/or devices they were
using in an incorrect state. Consider a process that was in
the midst of printing a file or burning a CD.
- Section 7.7.2 -- Resource Preemption
- Instead of killing deadlocked processes we can take resources
from some and give them to others until the deadlock is
broken.
- We have to consider here too the bases for victim selection.
- If we don't kill the victims we will probably need to roll
them back to some point they were at before they acquired the
resources we have taken from them. How do we do this?
- Starvation is a possibility when we abort or roll back processes
and restart them. There is no guarantee that they will not get
into a deadlock again and be killed or rolled back again. We can
give a process higher priority for being "spared" if it has been
rolled back.
- Section 7.8 -- Summary