May 17, 2018
Chapter One -- Introduction -- Lecture Notes
An OS performs as an intermediary between the user and the hardware,
functioning as a virtual computer that makes the use of the hardware more
convenient and/or efficient.
- 1.0 Chapter Goals
- Describe basic organization of computer systems
- Provide tour of major components of OS's
- Overview the many types of computing environments
- Explore open-source Operating Systems
- 1.1 What Operating Systems Do
- Provide an Interface above the Level of the Bare Machine
- Make the Computer More Convenient to Use
- Make Computer Use More Efficient
- Control and Coordinate Use of the Hardware
- esp. I/O - prevents errors and misuse.
- The OS is a manager and allocator of resources: CPU time, storage,
devices ... There is competition among processes for resources.
- Provide Functions Commonly Needed by Application Programs
- Ease of use is a major goal for a PC operating system.
- Optimizing resource utilization is more important
for operating systems on Workstations and mainframes.
- OS features that help conserve battery life are important on
hand-held computing devices.
- IMPORTANT: In this course, we will think of the OS as the Kernel -
the part of the system that is always on the list of ACTIVE
processes - READY to run anytime it can be allocated time on a CPU.
- 1.2 Computer-System Organization
- The typical system consists of one or more general-purpose CPU's and
special purpose controllers sharing a common bus, including a memory
controller. These units are able to operate in parallel, though
they compete for the memory. The memory controller arbitrates.
A firmware-resident routine runs at boot time to perform
initializations and load & execute the OS.
- Hardware devices can interrupt the CPU - a signal sent along the
system bus.
- A process can interrupt the CPU - by executing a special
instruction - also by making an error.
- The computer is designed in such a way that an interrupt
automatically causes suspension of the currently running process and
transfer of execution to some predetermined section of code -
typically a part of the operating system - an interrupt handler.
- When an interrupt occurs,
everything that will be needed to restart
the currently running process must be saved. The interrupt mechanism
automatically does some of the saving - for example, the address of
the current instruction. The interrupt handler code
may perform other "context-saving" actions.
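The sequence above - hardware saves the interrupted program counter, transfers control through a vector of handlers, and the handler finishes the context save - can be sketched as a toy simulation. Every name here is invented for illustration; this is not a real kernel API.

```python
# Toy simulation of interrupt dispatch. The "hardware" part saves the
# program counter and registers automatically; the handler performs
# any remaining "context-saving" actions.

saved_contexts = []

def timer_handler(context):
    saved_contexts.append(dict(context))   # finish the context save
    return "timer serviced"

def keyboard_handler(context):
    saved_contexts.append(dict(context))
    return "keyboard serviced"

# The interrupt vector: interrupt number -> handler routine.
vector_table = {0: timer_handler, 1: keyboard_handler}

def raise_interrupt(irq, cpu_state):
    # Hardware part: suspend the running process, save its state, and
    # transfer execution to the predetermined handler via the vector.
    context = {"pc": cpu_state["pc"],
               "registers": list(cpu_state["registers"])}
    return vector_table[irq](context)

cpu = {"pc": 0x400A10, "registers": [1, 2, 3]}
print(raise_interrupt(0, cpu))   # timer serviced
```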
- CPU's go through an automatic instruction fetch, decode and execute
cycle.
- The CPU can access only programs and data that are in main memory.
Permanent copies reside in files on secondary storage. The smallest unit
of addressable primary memory is usually a byte.
- Memory hierarchies consist of many levels with differing speeds,
costs, sizes and volatility.
- Typically methods for controlling peripheral devices are quite
diverse and complex.
- Operating systems have "device driver" code - drivers corresponding
to each type of device controller. The job of the device driver is
to receive simple high-level commands from the OS, translate the
commands, and communicate with the device to achieve the desired
effects.
- Typically drivers have to load bit patterns into registers within
the devices in order to "command" the device to perform actions.
- Typically, when a device finishes performing a data transfer it
notifies the OS by sending an interrupt.
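A toy model of a driver "commanding" a controller by loading bit patterns into its registers. The register names and bit layout are entirely invented for illustration; a real controller's layout comes from its hardware specification.

```python
# Invented command codes and a simulated controller with command,
# sector, data, and status registers.
CMD_READ = 0b01
CMD_WRITE = 0b10
STATUS_BUSY = 0b1

class DiskController:
    """Simulated device controller (not real hardware)."""
    def __init__(self):
        self.command_reg = 0
        self.sector_reg = 0
        self.data_reg = b""
        self.status_reg = 0
        self.storage = {}

    def tick(self):
        # The device "runs": it performs whatever its registers command.
        if self.command_reg == CMD_READ:
            self.data_reg = self.storage.get(self.sector_reg, b"\x00")
        elif self.command_reg == CMD_WRITE:
            self.storage[self.sector_reg] = self.data_reg
        self.command_reg = 0
        self.status_reg = 0   # done - a real device would now interrupt

def driver_write(dev, sector, data):
    # The driver's job: translate a simple high-level request into the
    # register pokes this particular controller requires.
    dev.data_reg = data
    dev.sector_reg = sector
    dev.command_reg = CMD_WRITE
    dev.status_reg = STATUS_BUSY
    dev.tick()   # stand-in for the device working asynchronously

def driver_read(dev, sector):
    dev.sector_reg = sector
    dev.command_reg = CMD_READ
    dev.status_reg = STATUS_BUSY
    dev.tick()
    return dev.data_reg

disk = DiskController()
driver_write(disk, 7, b"hello")
print(driver_read(disk, 7))   # b'hello'
```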
- With Direct Memory Access (DMA) a controller can transfer an entire
block of data between device and primary memory, and interrupt the
CPU only once, after the entire block has been transferred.
- 1.3 Computer-System Architecture
- OS's must rise to the challenge of operating various kinds of
multiprocessor systems efficiently: SMP's, multiple cores, blade
servers & clustered systems.
- A Multiprocessor System (tightly-coupled system) has two or more
general-purpose CPU's that share a bus, and often the clock, main
memory and peripherals. Typically a multiprocessor with N CPU's is
cheaper than N single-CPU computers and does almost as much work in
as little time. If designed properly, a multiprocessor will exhibit
"graceful degradation" and "fault tolerance."
- A multiprocessor may be asymmetric, but most are symmetric nowadays.
- There may be multiple CPUs having their own memory controllers - a CPU may
be able to access some parts of primary memory more quickly than others -
non-uniform memory access (NUMA).
- Multiple cores are essentially multiprocessors on a chip. These
tend to have the advantages of quicker communication and lower power
requirements compared with multiprocessors constructed from multiple
chips.
- Clustered Systems are like multiprocessors. They exhibit graceful
degradation but are less tightly-coupled, consisting of computers
closely connected on some type of high-speed medium, often sharing
secondary storage, but not main memory or clock.
- There are asymmetric and symmetric clustered systems.
- The forms of parallel processing and sharing found on
multiprocessors and clustered systems require sophisticated forms of
synchronization and data "locking" to prevent incorrect results from
concurrent access to shared data.
- Clustered systems often make use of storage-area networks (SANs).
SANs provide a way for multiple computers to share multiple secondary
memory devices (typically disk drives).
- 1.4 Operating-System Structure
- Multiprogramming was a development pushed into existence by the need
for better utilization - the need to keep the CPU and I/O channels of
an expensive computer busy, so the computer will do more work per
unit time.
- A "QUANTUM LEAP" in sophistication of the OS was required
-- e.g. context switching, memory management, scheduling, concurrency
management, protection, and security.
- Time-sharing was a "natural" follow-on to multiprogramming that
allowed a return to "interactive" computing. Processes associated
with users are switched very quickly so that every user gets rapid
response to every command.
- Time-sharing tends to make greater demands for swapping and virtual
memory.
- Traditional time-sharing requires implementing a file system -- not
the same as just the ability to write and read disk sectors.
- 1.5 Operating-System Operations
- A modern operating system is interrupt driven.
It is interrupts and traps alone that allow the operating system
to regain access to the CPU.
- Hardware Protection mechanisms: The hardware and OS should prevent
processes from harming the system and each other. The OS must be
protected. We must be concerned about illegal I/O, illegal memory
access, and "hogging" of the CPU. Tools: traps -- dual mode
operation -- memory protection -- base and limit registers --
privileged instructions -- timers.
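Base and limit registers, one of the tools above, can be sketched in a few lines. The values and the trap model here are invented for illustration: an address generated by a user process is legal only if base &lt;= addr &lt; base + limit, and anything else traps to the OS (modeled by an exception).

```python
# Sketch of base-and-limit memory protection (toy values).

def check_address(addr, base, limit):
    # The hardware check applied to every user-generated address.
    return base <= addr < base + limit

def load(memory, addr, base, limit):
    if not check_address(addr, base, limit):
        raise MemoryError(f"trap: illegal address {addr:#x}")
    return memory[addr]

memory = list(range(1024))    # toy physical memory: cell i holds i
BASE, LIMIT = 0x100, 0x080    # this process owns [0x100, 0x180)

print(load(memory, 0x140, BASE, LIMIT))   # 320 - a legal access
```

An access at 0x200 would fall outside [0x100, 0x180) and trap, giving the OS back control.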
- In your mental model of how an OS works, do you see the OS as a part
of the system that is always executing and always monitoring user
processes? Typically "the OS" is NOT REALLY running all the time.
In fact this would make it impossible for user processes ever to
execute on a uniprocessor.
- An OS can employ hardware protection mechanisms to ensure that it
will not lose control of the system, even during periods when the OS
is suspended.
- User processes make requests for services from the OS by making
system calls, typically implemented by executing an instruction that
triggers a trap.
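A toy model of that system-call path, with all names invented for illustration: the trap switches the mode bit to kernel mode (0), the trap handler indexes a table of kernel services by the requested service number, and control returns to user mode (1) with the result.

```python
# Mode-bit convention from the notes: 0 = kernel mode, 1 = user mode.
MODE_KERNEL, MODE_USER = 0, 1
mode_bit = MODE_USER

# Invented kernel service routines.
def sys_getpid():
    return 4242

def sys_time():
    return 1526515200

# The system-call table: service number -> kernel routine.
syscall_table = {0: sys_getpid, 1: sys_time}

def trap(service_number):
    # "Hardware": switch to kernel mode and enter the OS at a fixed point.
    global mode_bit
    mode_bit = MODE_KERNEL
    try:
        result = syscall_table[service_number]()
    finally:
        mode_bit = MODE_USER   # OS restores user mode before returning
    return result

print(trap(0))   # 4242
```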
- Test your understanding: Which is responsible for changing the mode
bit from 0 (kernel mode) to 1 (user mode): the hardware or the
operating system? Which is responsible for changing the mode bit
from 1 to 0: the hardware or the operating system?
- 1.6 Process Management
- A process is a program in execution. An OS has to schedule, create,
delete, suspend, resume, and synchronize processes.
- Processes compete for resources. They have to request them from the
OS and wait until the OS makes them available.
- An OS must provide mechanisms by which processes can communicate
and synchronize with one another.
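Process creation can be seen from user code. Python's subprocess module wraps the underlying process-creation and wait system calls: the parent creates a child process running a program, then waits for it and collects its output and exit status.

```python
import subprocess
import sys

# Create a child process running a tiny Python program, then wait for
# it to terminate and collect its output.
child = subprocess.run([sys.executable, "-c", "print(6 * 7)"],
                       capture_output=True, text=True)
print(child.stdout.strip())   # 42
print(child.returncode)       # 0 - normal termination
```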
- 1.7 Memory Management
- Main memory consists of a sequence of 'words' - each
of which has an address. We use the term 'word' to refer to
whatever is the smallest addressable unit of memory. On many
computers, the word is a byte, but it could be something else - e.g.
a 32- or 64-bit word.
- A computer cannot read directly from secondary memory like it can
from main memory. Because of the way computers are designed, in
order for a CPU to execute an instruction, the instruction has to be
in main memory.
- The OS has to allocate memory to processes and deallocate it. It
has to keep track of which portions of memory belong to which
processes.
- 1.8 Storage Management
- Operating Systems are responsible for implementing file systems on secondary
storage: e.g. creating and deleting files & directories, and
allocating/deallocating storage space on disk.
- It is the operating system that creates, out of collections of
sectors scattered over secondary memory, what we perceive as a
highly organized system of directories and files.
- The OS has to create and delete files and directories, support
primitive operations for manipulating files (e.g. open, seek, read,
write and close), map files onto locations in secondary storage, and
provide a means for backing up files
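The primitive file operations listed above can be exercised through the POSIX-style wrappers in Python's os module, where each call corresponds to one primitive (the file name here is arbitrary).

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)   # open (creating the file)
os.write(fd, b"operating systems")                  # write 17 bytes
os.lseek(fd, 10, os.SEEK_SET)                       # seek to byte offset 10
data = os.read(fd, 7)                               # read the next 7 bytes
os.close(fd)                                        # close

print(data)   # b'systems'
```

Behind these calls, the OS is doing the mapping described above: turning a named file and a byte offset into reads and writes of particular sectors on secondary storage.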
- The OS has to manage free space on disks, allocate storage to files,
and may be responsible for scheduling disk operations.
- Various kinds of caches are important for performance. Hardware-only
caches are outside the control of the OS.
- The copying of data between primary and secondary memory is usually
under the control of the OS.
- The OS has to solve consistency and coherence problems, because
copies of the same data item may exist concurrently at different
levels of the memory hierarchy, and in the caches of more than one
CPU. When one process changes one copy, how will the other copies
be updated so that every process looking at any copy of the data
will see the same thing?
- One of the jobs of the OS is to make all the various I/O devices
appear to have the same familiar types of controls, when in fact
there are many peculiarities in how different devices are designed
to be controlled. Operating Systems provide high-level commands to operate
devices, as well as device drivers that interpret the high-level commands and
deal with the device-specific commands that are required. So, for
example, an application can ask the OS to get the next character
from an input device, and the OS will tell the device driver to get
the next character, and then the device driver will issue all the
peculiar commands to the device that are needed to get the next
character.
- 1.9 Protection and Security
- Although the difference between computer protection and security has
become blurred, our text makes an effort to preserve a clear
distinction between the two.
- An OS must perform "gatekeeper" functions - making sure that users
provide proper credentials before being given access to system
resources. This is "protection". The doorman at an expensive hotel
and the sentry at a military facility are doing protection work.
- Gatekeepers and sentries certainly cannot adequately prevent all
failures, damage or attacks that could hinder the correct operation
of the facility. Besides protection, the designer of an OS must be
concerned with "security" - ensuring that resources are properly
used. It is security work when casino owners watch customers
gamble. The bouncer in a night club and the teacher on yard duty do
security work.
- 1.10 Kernel Data Structures
- Among the data structures employed by operating systems are:
- Trees (general, binary, unbalanced, balanced)
- Hash functions and tables
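As one example from the list above, a chained hash table of the sort a kernel might use to map process IDs to process structures can be sketched as follows. This is a toy illustration, not actual kernel code.

```python
class HashTable:
    """Minimal chained hash table: a hash function reduces each key to
    a bucket index; colliding keys chain within the same bucket."""

    def __init__(self, nbuckets=8):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update existing entry
                return
        bucket.append((key, value))        # collisions chain here

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None

pids = HashTable()
pids.put(1234, "init")
pids.put(1242, "shell")    # 1234 and 1242 collide (same bucket mod 8)
print(pids.get(1242))      # shell
```

The appeal for a kernel is constant expected-time lookup regardless of how many processes exist.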
- 1.11 Computing Environments
- Traditional Computing - at the office, roughly chronologically:
terminals attached to mainframes, networked PCs, file- and
print-servers, laptops, portals providing web access to internal
servers, network terminals, handheld devices sync'd to PCs. At
home slow modem connections evolved into small home networks with
broadband network connections.
- Organization portals now provide web access to internal servers.
Mobile devices are integrated via wireless computer networks
and cellular data networks.
- Mobile devices implement location-based services and
have applications that rely on input from accelerometers.
Mobile devices are very portable and feature-rich,
but have small displays. They also have less computing
speed compared to larger computers, and less primary and secondary
memory.
- A distributed system is basically a set of computers in
communication via a network.
- Network technologies vary. Local-area, wide-area, metropolitan-area,
and personal-area are some of the classifications.
- A network operating system provides network services to the hosts in
a network. Hosts function independently in a network operating
system environment.
- A distributed operating system takes the idea of a network
to a higher level. It is operating system software that
makes a distributed system function as
if it were a single computer.
- Client-Server Computing - client processes make requests for
services offered by server processes, typically file service and
compute service.
- Peer-to-Peer Computing - exploits potential of having both client
and server processes on a node, and schemes to avoid server
bottlenecks through better distribution of clients across the set of
available servers. There may be some use of broadcasting or a
centralized lookup service to locate the peers providing a service.
- Virtualization allows guest operating systems to run within a host
operating system - useful if one wishes to test one or more operating
systems on a single computer, or if one wishes to run programs that
are designed for a certain non-native operating system, for
example, running a Windows program on a Mac.
- Cloud Computing is delivery of computing power, storage, and/or programs
as a network service. There are public and private clouds, and hybrids.
Word processors, spreadsheets, database service, and backup storage
are among the resources one can access.
- Real-Time Embedded Systems typically have primitive OSs and primarily
perform monitoring and management of devices. A real-time system
has to deal with time constraints imposed by the external
environment. (By the way, this involves more than just being quick.
As an example think about a robot whose job is to open a heavy door
for you, and then close it behind you. If you take a run at the
door, you don't want the robot to be too late opening it or too
early closing it!)
- 1.12 Open Source Operating Systems
- Open source software is made available in source form, not just
binary code. This enables programmers to modify it to increase its
usefulness to them. They may find and fix bugs made by programmers
who worked on the code before them, and pass along the improved
software to other users. Users of open-source software tend to form
associations and give each other help and support, as well as make
suggestions for improvement of the software.
- History - open-source software was in vogue in the early days of
computing, became less prevalent with the corporatization of
computing, but has come back into quite a lot of prominence, notably
under the influence of the Free Software Foundation, founded by
Richard Stallman.
- Linux - the original kernel was designed by Linus Torvalds (probably with
considerable influence from Andrew Tanenbaum's Minix). It was
combined with compilers and utilities produced by Stallman's GNU
project. Many distributions now exist. Thousands of programmers
have worked on Linux (open) source code.
- BSD Unix - The flavor of unix developed at UC Berkeley - was tied to
AT&T unix for some time & required a license, but now is free of
AT&T code. There are many descendants, including the Darwin kernel
component code used in Mac OS X.
- Solaris - the OS of Sun Microsystems - was originally based on
Berkeley unix. Sun migrated to an AT&T-based Solaris and
eventually open-sourced most of the code it developed. Some of
Solaris is still closed source.
- Utility - one might say that open-sourcing is a movement that is
helping to increase participation in software development.