Operating System Features and
Structures
Computer systems consist of hardware controlled by an operating system (OS). The kernel is dedicated code that runs in protected memory at the core of the OS. It is a computer program that controls all resources available to the computer it runs on, is always resident in memory, and manages all interactions between processes. Figure 1 shows typical interactions in an OS. At the highest level, the operating system has two major functions: managing hardware and software resources, and providing an interface to applications.
Resource management consists of a set of services or functions inside the kernel. These
services include device, storage, memory, information management, process
control, communications, error detection, and protection and security of the
operating system.
An operating system's application interface is user-accessible, but the services behind it live in the kernel, so applications must request them using system calls. Error detection is an example of a function that can take place in a software application, but some operating systems also handle error detection at the hardware level. File management is both a kernel service and a user-side function.
Figure 1 Operating system theory concept map
Processes and Threads
Figure 2 illustrates processes and threads in a processor. A process is a program in execution. Silberschatz et al. (2014) state, "A process is the unit of work in a modern time-sharing system" (p. 105). A program becomes a process when it is loaded into memory. As a program, it is just a list of instructions, but it becomes more as a process. The instruction section of the process is called the text section. As the process executes, its progress is tracked by a program counter. The process also keeps a copy of the current contents of the processor registers, maintains a stack for local variables, and holds a data section for global variables. It often contains a heap of memory for dynamic allocation and for communication with other processes.
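A minimal sketch of a program becoming a process: the string below is the passive "text" of a tiny program, and it only does work once the OS creates a process for it (here via Python's `subprocess` module, which asks the OS to spawn a new interpreter process).

```python
import subprocess
import sys

# A program is a passive list of instructions; it becomes a process when
# the OS loads it into memory with a text section, stack, data, and heap.
program = "print(sum(range(1, 11)))"   # the 'text' of a tiny program

# Ask the OS to create a new process running this program.
completed = subprocess.run(
    [sys.executable, "-c", program],
    capture_output=True,
    text=True,
)
```

The child process moves through the states described below: it is created (new), scheduled (ready/running), and then terminates, at which point the parent collects its output and exit status.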
There are five process states. A
process is in the new state while loading into memory, then
changes to the ready state. During process execution, it can
be in one of three states. It is running when a processor is
available, ready when waiting for the processor, and waiting when
it needs data from an event or I/O. The final state is terminated when
the process is complete or no longer needed.
Each process is represented by
a process control block (PCB) containing its current state, a
process identifier, the value of its program counter, copies of CPU registers,
CPU scheduling information, a list of open files and allocated I/O devices, and
accounting information to aid in scheduling decisions. The PCB is especially
useful when a process is swapped in or out of the processor. It can also be
extended to allow multithreading.
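The PCB can be sketched as a simple record; the fields below follow the list above, and the `context_switch` helper is a hypothetical illustration of saving one process's context and restoring another's, not any real scheduler's API.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

@dataclass
class PCB:
    pid: int
    state: State = State.NEW
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    cpu_time_used: float = 0.0   # accounting information

def context_switch(out_pcb, in_pcb, pc, regs):
    """Save the running process's context into its PCB, then restore the
    incoming process's saved context."""
    out_pcb.program_counter, out_pcb.registers = pc, dict(regs)
    out_pcb.state = State.READY
    in_pcb.state = State.RUNNING
    return in_pcb.program_counter, in_pcb.registers

# Hypothetical swap: p0 is preempted at address 130, p1 resumes at 300.
p0 = PCB(pid=0, state=State.RUNNING, program_counter=120)
p1 = PCB(pid=1, state=State.READY, program_counter=300)
pc, regs = context_switch(p0, p1, 130, {"ax": 7})
```

Because everything the process needs to resume lives in the PCB, the scheduler can swap processes in and out without the processes being aware of it.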
Motivations for
using multi-threaded processes include taking advantage of the multi-processor
architecture, allowing an application to do several things at once, and
reducing the overhead of creating multiple processes. The benefits of
multithreading include application responsiveness, the ability of threads to
share the process memory space, lower allocation overhead than separate
processes, and scalability across multiple processors.
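A short sketch of threads sharing their process's memory space: each worker writes into the same list, which separate processes could not do without explicit inter-process communication. Each thread writes a distinct slot, so no locking is needed here.

```python
import threading

# All threads of one process share its address space: the workers below
# fill in the same list with no copying or message passing.
squares = [0] * 4

def worker(i):
    squares[i] = i * i   # distinct slots, so the threads never collide

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```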
Multithreading models involve
the relationship between user threads and kernel threads. A many-to-one model
maps multiple user threads to a single kernel thread. This model is rarely used
because it cannot take advantage of multi-core processors, and blocking calls
stop the entire process. The one-to-one model, in which each user thread has a
corresponding kernel thread, avoids these drawbacks and is the most common
model in use; however, kernel threads are resource-expensive, and creating too
many can reduce performance. The many-to-many model addresses the weaknesses of
both by multiplexing many user threads onto a smaller or equal number of kernel
threads.
Every cooperating process has a critical section
of code that changes data other processes can access. If two processes run
their critical-section code at the same time, the shared data can be corrupted.
A solution to this critical-section problem must guarantee three properties:
mutual exclusion, progress, and bounded waiting.
Silberschatz et al. (2014) call such a solution "a protocol that the processes can
use to cooperate" (p. 206). While hardware support exists, programmers
typically protect their applications' critical sections with locks and
semaphores, combined with techniques such as priority inheritance, to prevent
deadlocks and starvation.
Bounded waiting places a limit on the number of times other processes may enter
their critical sections (CSs) after a process has requested entry and before
that request is granted. Brais (2015) puts it succinctly: "the number of times
other processes enter their CSs must be limited." Together with the progress
requirement, bounded waiting guarantees that every waiting process eventually
runs its CS.
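Using Python's `threading.Lock` as the mutual-exclusion mechanism, the sketch below guards a shared counter so that only one thread at a time executes the critical section; without the lock, interleaved read-modify-write updates could be lost.

```python
import threading

counter = 0
lock = threading.Lock()   # one lock guards the shared counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:            # entry section: acquire the lock
            counter += 1      # critical section: update shared data
        # exit section: lock released when the `with` block ends

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock in place, all 40,000 increments survive; mutual exclusion is enforced by the lock, and the OS scheduler's fairness provides the progress and bounded-waiting behavior in practice.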
Figure 2 Process states and multi-threading
Memory Management
An objective of memory management is
to provide direct access to physical memory for the operating system (OS). The
base block of memory holds the operating system instructions. The amount of
memory not consumed by the OS is then allocated using base and limit registers
to define the extent available for processes. Only the OS can load these registers,
effectively protecting the kernel and physical memory from direct access by
user programs (Silberschatz et al., 2014, p.327).
While memory management schemes give each process exclusive access to its own
memory, they also extend the usable area of memory by mapping virtual (logical)
addresses to physical addresses. Figure 3
depicts the memory-management unit (MMU) using a relocation register.
The value in the relocation register binds the logical address to the physical
address. This binding can occur at three different times, depending on how the
program is built: compile time, load time, or run time.
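The relocation-register scheme can be sketched in a few lines; the register values below are invented for illustration. A logical address outside the limit traps to the OS, and a valid one is bound to a physical address by adding the relocation value.

```python
# Illustrative register values, not from any real system.
RELOCATION = 14000   # physical address where the process is loaded
LIMIT = 3000         # size of the process's logical address space

def translate(logical_address):
    """MMU sketch: check the limit register, then add the relocation
    register to bind the logical address to a physical address."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("trap: addressing error")  # the OS handles the trap
    return RELOCATION + logical_address
```

Because the check and the addition happen on every access, user programs can never generate a physical address outside their allocated range, which is what protects the kernel and other processes.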
Several methods of memory allocation
permit programs that are larger than the installed memory on a system to run.
Dynamic loading only loads portions of the program in use, calling the less
frequently used portions only when needed. Dynamic linking operates much the
same way and also allows sharing of common libraries. Swapping and page files
allow processes to be moved out of fast-access memory when other processes
require prompt attention.
Figure 3 Memory management
File Management
Memory is essential to the processor
for getting work done, but an operating system (OS) would soon run out of work
if it could not permanently store the work product. Building on the memory
management concepts above, file management must keep track of external storage
in many forms. A file is a logical unit of related data written to secondary
storage (Silberschatz et al., 2014, p. 478). Secondary storage can be any of a
number of devices designed for fast, reliable access through the I/O subsystem.
The OS determines the particulars of its file system, but secondary storage
devices can generally work with any system. The file structure in use results
from design decisions made when implementing an OS.
Files have predictable patterns that
allow OSs to manipulate them. As the amount of external storage increases, so
does the size of the directory of files. Rather than consume memory space,
directories are stored on the storage device. The directory allows searching,
creating, deleting, listing, and renaming files. As types of directories
increase in complexity, the OS also needs to traverse the file system to
perform backups. Figure 4 outlines five file structures.
The simplest directory is one level;
all files must have unique names, and all users have access. It is easy to
implement but is not scalable, and data isolation is not easy. The two-level
directory allows the separation of files per user but does not allow sharing
files between directories. A tree-structured directory extends the two-level
concept beyond a tree height of two, allowing users to create subdirectories
and organize their files in groups according to function or project.
We still cannot share files. This is
where the acyclic graph directory structure comes in. By allowing users to
share one file through different directory paths, they can collaborate without
copying files back and forth. The general graph directory structure is perhaps
the most useful because it is more flexible by allowing cycles (Geeksforgeeks,
2021). It requires garbage collection routines to remove dangling pointers when
files are deleted, which increases processing overhead.
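The sharing behavior of the acyclic-graph structure can be sketched with dictionaries as directories: two paths hold references to the same file object, so an edit made through one path is visible through the other. The user names and paths are invented for the example.

```python
# Directories are dicts, files are mutable lists; "sharing" a file means two
# directory entries reference the same object rather than holding copies.
report = ["draft v1"]          # one file object

root = {
    "alice": {"projects": {"report.txt": report}},
    "bob":   {"shared":   {"report.txt": report}},   # second link, same file
}

def lookup(directory, path):
    """Walk the path components from the given directory down to the file."""
    node = directory
    for part in path.split("/"):
        node = node[part]
    return node

# An edit through Alice's path is visible through Bob's path.
lookup(root, "alice/projects/report.txt").append("edit by alice")
```

Deleting Bob's entry while Alice's remains is exactly the dangling-reference problem the text mentions: the file object must not be reclaimed until the last link to it is gone.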
Figure 4 Five examples of file structures
As software and hardware
standardization increased, so did the number of different types of I/O devices.
An OS must have device drivers for uniform access to the myriad devices on the
market (Silberschatz et al., 2014, p. 565). We typically think of keyboards and
pointing devices when we think about I/O, but there are many others.
Smartphones have cameras, GPS receivers, accelerometers, and of course, the
ubiquitous multi-touch screen that performs double duty as input and output.
These devices must participate on a common bus to work with different
processing platforms and memory configurations.
Protection and Security
Domain protection principles are
based on the goals of secure, private, and reliable processing. Operating
system protection design only protects data and processes as far as it is
enforceable by built-in mechanisms. A computer system consists of processes and
objects, and a process should have access only to the resources it needs to
complete its work (Silberschatz et al., 2014, section 13.3). Even with these
mechanisms in place, factors such as physical access and the user interface can
lead to accidental or malicious data corruption or theft. An access matrix
limits access to objects to authorized users and processes, and it restricts
cross access to enforce privacy. This object-to-authorized-use relationship
works on the principle of least privilege.
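An access matrix can be sketched as a nested mapping from domains to objects to permitted operations; the domain and object names below are invented. A request is allowed only if the operation appears in the matching cell, so anything not explicitly granted is denied, which is least privilege by default.

```python
# Rows are protection domains, columns are objects, and each cell is the
# set of operations that domain may perform on that object.
access_matrix = {
    "d1": {"file_a": {"read"},          "printer": {"print"}},
    "d2": {"file_a": {"read", "write"}, "file_b":  {"read"}},
}

def allowed(domain, obj, operation):
    """Default-deny check: permit only what the matrix explicitly grants."""
    return operation in access_matrix.get(domain, {}).get(obj, set())
```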
Figure
5 outlines domains represented by concentric circles with increasing privileges
nearer the kernel. The access list shows how a shared workstation might
implement protection. The grey rectangle represents security.
Language-based
protection extends the responsibility of the designers of operating systems to
the programmers of applications. Hardware designs provide various protection
strategies that can be implemented at the language level, and the language
offers additional protection through the data-isolation principles of
object-oriented design (OOD).
Figure 5 Protection and security overview
The concepts shared here are my interpretations of the
content I read, researched, and discussed with my classmates during the past
five weeks. My increased understanding of operating systems and the many types
of hardware they control whets my curiosity to learn more about customizing
operating systems to specific uses. As I continue learning about cyber and data
security, I will use this knowledge to guide experiments with dedicated-purpose
machines to increase my effectiveness as a future cyber warrior.
References
Brais, H. (2015, October 29). What is progress and bounded waiting in critical section? Stack Overflow. https://stackoverflow.com/questions/33143779/what-is-progress-and-bounded-waiting-in-critical-section#33409854

Geeksforgeeks. (2021, February 3). Structures of directory in operating system. https://www.geeksforgeeks.org/structures-of-directory-in-operating-system/

Silberschatz, A., Galvin, P., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). https://redshelf.com/

Singh, T. (2016). A comparative study of disk scheduling algorithms. IJCST. http://www.ijcstjournal.org/volume-4/issue-1/IJCST-V4I1P6.pdf