(Latest Revision: Sat Sep 21 23:42:48 PDT 2002)
Chapter Five
--
Threads
--
Lecture Ideas
- What resources does a process need?
- What individual resources does a thread need? (some per-thread identification,
  including a thread ID, plus its own program counter, register set, and runtime
  stack) Every other resource (code, data, open files, and so forth) must be
  shared with the other threads in the process.
- Discuss (so-called) benefits of multi-threading:
        - Responsiveness: A multi-threaded task may have better
turnaround time (elapsed time from submission to completion of the task),
because a single process can use CPU(s) *and* I/O channel(s)
simultaneously. However, the scheduling policy of the OS can work
against this potential advantage.
        - Resource Sharing: Sharing primary memory facilitates
communication among the threads of a task. The overall system
benefits from the sharing because more memory is left available to
run more programs. (This can be something of a disadvantage to the
individual task, which must then share the CPU(s) with more tasks.)
Sharing of memory-mapping information can make context switching
faster, but only when switching between threads of one task, not
when switching from task to task. Therefore the scheduler has to do
some special handling, e.g. "batching" the threads of one task, to
get the benefit of fast context switching.
- Economy: It is more economical to create a new thread in a
task than to create a new task. It is more economical to switch from
one thread to another within the same task than to switch between
different tasks.
- Utilization of Multiprocessing:
Multithreading allows the process to use more than one CPU
simultaneously (assuming helpful scheduling).
- User-level threads rely on library functions.
- User-level threads are generally faster to create and manage than
kernel-level threads.
- Unlike the case with kernel-level threads, the entire task generally has
to block when a user-level thread does a blocking system call.
- Question: Can it be made feasible to run tasks with relatively large
numbers of I/O-bound user-level threads?
- Discuss multithreading support models -- how user threads are supported by
kernel threads: many-to-one model, one-to-one model, and many-to-many
model.
- How should the fork() and exec() system calls behave when called by a thread
  in a task?
- What are (Unix) signals? What is the relation to interrupts and traps?
If a process is "killed by a signal" after attempting an illegal memory
access what is the chain of events? Is a trap involved? Is the OS
involved?
- When a signal is sent to a multithreaded application, which thread should
  receive the signal and handle it? System designers have to decide how to
  answer this according to the type of signal sent. For example, a signal to
  kill the process should probably go to all the threads, while a "suspend"
  or "continue" signal might be intended for one particular thread.
- Discuss thread pools (5.3.4; p. 138)
- Look at the example of pthread code on page 140?
- Errata: The text seems to slip up when it states "Kernel-level
  threads are the only objects scheduled" (page 141, paragraph 2, line 4)
  but then says "... the kernel can then schedule another LWP ..." (page
  142, paragraph 3, lines 6-7).
- Discuss Solaris 2 Threads
- Discuss Linux "Threads"