Pages

Monday, June 14, 2021

 

Operating System Features and Structures

            Computer systems consist of hardware controlled by an operating system (OS). At the top of the OS hierarchy sits the kernel: dedicated code that runs in protected memory, remains resident there, controls all of the resources available to the computer, and manages all interactions between processes. Figure 1 shows typical interactions in an OS. At the highest level, the operating system has two major functions: managing hardware and software resources, and providing an interface to applications.

            Resource management consists of a set of services or functions inside the kernel. These services include device, storage, memory, information management, process control, communications, error detection, and protection and security of the operating system.

            An operating system’s application interface is typically user-accessible, but the services behind it live in the kernel, so applications must request them through system calls. Some functions span both sides of that boundary: error detection can take place in application software, but some operating systems also handle it at the hardware level, and file management is both a kernel service and a user-side function.


Figure 1 Operating system theory concept map

Processes and Threads

            Figure 2 illustrates processes and threads in a processor. A process is a program in execution. Silberschatz et al. (2014) state, "A process is the unit of work in a modern time-sharing system" (p. 105). A program becomes a process when it is loaded into memory. As a program, it is just a list of instructions, but it becomes more as a process. The instruction section of the process is called the text section. As the process executes, a program counter tracks its progress. The process keeps a copy of the current contents of the processor registers, maintains a stack for local variables and a data section for global variables, and often contains a heap of memory for its own use and for communication with other processes.

            There are five process states. A process is in the new state while loading into memory, then changes to the ready state. During execution, it cycles among three states: running when a processor is available, ready when waiting for a processor, and waiting when it needs an event or I/O operation to complete. The final state is terminated, reached when the process is complete or no longer needed.

            Each process is represented by a process control block (PCB) containing its current state, a process identifier, the value of its program counter, copies of the CPU registers, CPU scheduling information, a list of open files and I/O devices, and accounting information to aid in scheduling decisions. The PCB is especially useful when a process is swapped in or out of the processor, and it can be extended to support multithreading.
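To make this concrete, here is a minimal Java sketch of the bookkeeping a PCB might hold. The class and field names are my own illustration, not how any real kernel names them.

```java
import java.util.ArrayList;
import java.util.List;

public class ProcessControlBlock {
    enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    final int pid;                     // process identifier
    State state = State.NEW;           // current process state
    long programCounter = 0;           // address of the next instruction
    long[] registers = new long[16];   // saved copies of the CPU registers
    List<String> openFiles = new ArrayList<>();  // open files and I/O devices
    long cpuTimeUsed = 0;              // accounting information for the scheduler

    ProcessControlBlock(int pid) { this.pid = pid; }

    // On a context switch, the kernel saves the outgoing process's program
    // counter and register contents into its PCB so it can resume later.
    void save(long pc, long[] cpuRegisters) {
        this.programCounter = pc;
        System.arraycopy(cpuRegisters, 0, this.registers, 0, registers.length);
        this.state = State.READY;
    }
}
```

The save method is the part that makes swapping a process out of the processor possible: everything needed to resume is captured in one place.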

            Motivations for using multithreaded processes include taking advantage of multiprocessor architectures, allowing an application to do several things at once, and reducing the overhead of creating multiple processes. The benefits of multithreading are application responsiveness, a process memory space shared among threads, lower allocation overhead than separate processes, and scalability across multiple processors.

            Multithreading models describe the relationship between user threads and kernel threads. A many-to-one model maps multiple user threads to a single kernel thread. It is rarely used because it cannot take advantage of multi-core processors, and a single blocking call stops the entire process. A one-to-one model avoids those problems, but each user thread requires a corresponding kernel thread, and kernel threads are expensive enough that creating too many can reduce performance. Even so, one-to-one is the most common model in use. The many-to-many model addresses the weaknesses of both by multiplexing user threads onto a pool of kernel threads.
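Since a standard JVM typically uses the one-to-one model, mapping each Java thread onto a kernel thread, a short Java example shows the payoff of multithreading: two threads splitting one computation. This is only an illustrative sketch; parallelSum is a name I made up.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadDemo {
    // Sum an array using two threads, each handling half of the data.
    // AtomicInteger keeps the shared total safe to update from both threads.
    static int parallelSum(int[] data) throws InterruptedException {
        AtomicInteger total = new AtomicInteger();
        int mid = data.length / 2;
        Thread left  = new Thread(() -> {
            for (int i = 0; i < mid; i++) total.addAndGet(data[i]);
        });
        Thread right = new Thread(() -> {
            for (int i = mid; i < data.length; i++) total.addAndGet(data[i]);
        });
        left.start();
        right.start();
        left.join();   // wait for both halves to finish
        right.join();
        return total.get();
    }
}
```

On a multi-core machine the two halves can genuinely run at the same time, which is the whole motivation for threads in the first place.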

            Every process has a critical section of code that changes something other processes can access. If two processes run their critical-section code at the same time, data can be corrupted. A solution to this critical-section problem must guarantee mutual exclusion, progress, and bounded waiting. Silberschatz et al. (2014) call such a solution "a protocol that the processes can use to cooperate" (p. 206). While hardware support exists, programmers typically protect their applications' critical sections with locks and semaphores, guarding against hazards such as deadlock and priority inversion.

            Bounded waiting acts like a counter on each process's critical section (CS): once a process has requested entry, there is a limit on how many times other processes may enter their CSs first. Brais (2015) provided an excellent explanation: "the number of times other processes enter their CSs must be limited." Together with the progress requirement, bounded waiting guarantees every waiting process will eventually run its CS.
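In Java, the critical-section problem shows up whenever threads share mutable state, and the synchronized keyword provides the mutual-exclusion lock. This is a small illustrative sketch (the class and method names are mine):

```java
public class Counter {
    private int value = 0;

    // increment() is this object's critical section: a read-modify-write on
    // shared state. 'synchronized' enforces mutual exclusion, so two threads
    // can never run it at the same time on the same Counter.
    public synchronized void increment() { value++; }
    public synchronized int get() { return value; }

    // Hammer the counter from several threads; with the lock in place,
    // no increments are lost to a race condition.
    static int countWith(int threads, int perThread) throws InterruptedException {
        Counter c = new Counter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return c.get();
    }
}
```

Remove the synchronized keywords and the final count will usually come up short, which is exactly the corruption the critical-section protocols are designed to prevent.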


Figure 2 Process states and multi-threading

Memory Management

            An objective of memory management is to provide direct access to physical memory for the operating system (OS). The base block of memory holds the operating system instructions. The amount of memory not consumed by the OS is then allocated using base and limit registers to define the extent available for processes. Only the OS can load these registers, effectively protecting the kernel and physical memory from direct access by user programs (Silberschatz et al., 2014, p.327).

            While memory management schemes give each process exclusive access to its own addresses, they also extend the usable area of memory by mapping virtual (logical) addresses to physical addresses. Figure 3 depicts the memory-management unit (MMU) using a relocation register. The value in the relocation register binds the logical address to the physical address. These bindings can occur at three different times, depending on how the program is built: compile time, load time, or run time.
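A relocation-register MMU can be sketched in a few lines of Java. The translate method below mirrors the hardware's limit check followed by the base addition; the class name and the numbers in the example are illustrative only.

```java
public class Mmu {
    private final long relocation; // base value added to every logical address
    private final long limit;      // size of the process's address space

    Mmu(long relocation, long limit) {
        this.relocation = relocation;
        this.limit = limit;
    }

    // Translate a logical address the way a relocation-register MMU would:
    // verify it falls within the limit, then add the relocation base.
    long translate(long logical) {
        if (logical < 0 || logical >= limit)
            throw new IllegalArgumentException("addressing error: trap to OS");
        return relocation + logical;
    }
}
```

A process that generates logical address 350 with a relocation value of 14000 ends up reading physical address 14350, while any address past the limit traps to the operating system instead of touching another process's memory.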

            Several methods of memory allocation permit programs that are larger than the installed memory on a system to run. Dynamic loading only loads portions of the program in use, calling the less frequently used portions only when needed. Dynamic linking operates much the same way and also allows sharing of common libraries. Swapping and page files allow processes to be moved out of fast-access memory when other processes require prompt attention.


Figure 3 Memory management

File Management

            Memory is essential to the processor for getting work done, but an operating system (OS) would soon run out of work if it could not permanently store the work product. Building on the memory management concepts above, file management must keep track of external storage in many forms. A file is a logical unit of related data written to secondary storage (Silberschatz et al., 2014, p. 478). Secondary storage can be any number of devices designed for fast and reliable access through the I/O subsystem. The OS determines the file system particulars, but secondary storage devices can generally work with any system. The file structure used results from design decisions made when implementing an OS.

            Files have predictable patterns that allow OSs to manipulate them. As the amount of external storage increases, so does the size of the directory of files. Rather than consume memory space, directories are stored on the storage device. The directory allows searching, creating, deleting, listing, and renaming files. As types of directories increase in complexity, the OS also needs to traverse the file system to perform backups. Figure 4 outlines five file structures.

            The simplest directory is one level; all files must have unique names, and all users have access. It is easy to implement but is not scalable, and data isolation is not easy. The two-level directory allows the separation of files per user but does not allow sharing files between directories. A tree-structured directory extends the two-level concept beyond a tree height of two, allowing users to create subdirectories and organize their files in groups according to function or project.

            We still cannot share files. This is where the acyclic graph directory structure comes in. By allowing users to share one file through different directory paths, they can collaborate without copying files back and forth. The general graph directory structure is perhaps the most useful because it is more flexible by allowing cycles (Geeksforgeeks, 2021). It requires garbage collection routines to remove dangling pointers when files are deleted, which increases processing overhead.
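One way a file system might track shared files in an acyclic-graph directory is reference counting: each additional directory path to a file adds a link, and the file's space is reclaimed only when the last link is removed. Here is an illustrative Java sketch of the idea (not how any particular file system implements it):

```java
import java.util.HashMap;
import java.util.Map;

public class SharedFileTable {
    // Maps each file to the number of directory entries (links) pointing at it.
    private final Map<String, Integer> linkCounts = new HashMap<>();

    // A new directory path to the file adds one link.
    void link(String file) { linkCounts.merge(file, 1, Integer::sum); }

    // Removing a path decrements the count; returns true only when the last
    // link is gone and the file's space can actually be reclaimed.
    boolean unlink(String file) {
        int n = linkCounts.getOrDefault(file, 0);
        if (n <= 1) { linkCounts.remove(file); return true; }
        linkCounts.put(file, n - 1);
        return false;
    }

    boolean exists(String file) { return linkCounts.containsKey(file); }
}
```

Counting links this way is what prevents the dangling-pointer problem: deleting one user's directory entry cannot destroy a file another user still references.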


Figure 4 Five examples of file structures

            As software and hardware standardization increased, so did the number of different types of I/O devices. An OS must have device drivers for uniform access to the myriad devices on the market (Silberschatz et al., 2014, p. 565). We typically think of keyboards and pointing devices when we think about I/O, but there are many others. Smartphones have cameras, GPS receivers, accelerometers, and of course, the ubiquitous multi-touch screen that performs double duty as input and output. These devices must participate on a common bus to work with different processing platforms and memory configurations.

Protection and Security

         Domain protection principles are based on the goals of secure, private, and reliable processing. An operating system's protection design guards data and processes only as far as its built-in mechanisms can enforce. A computer system consists of processes and objects, and a process should have access only to the resources it needs to complete its work (Silberschatz et al., 2014, section 13.3). Truly secure systems must also account for physical access and the user interface, either of which can open the door to accidental or malicious data corruption or theft. An access matrix limits each object to authorized access by users and processes, and it restricts cross access to enforce privacy. This object-to-authorized-use relationship works on the principle of least privilege.
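An access matrix maps naturally onto nested maps. The sketch below is my own illustration of the concept: a right is allowed only if it has been explicitly granted, which is the principle of least privilege in miniature.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AccessMatrix {
    // rights[domain][object] = set of permitted operations.
    // Anything not explicitly granted is denied (least privilege).
    private final Map<String, Map<String, Set<String>>> rights = new HashMap<>();

    void grant(String domain, String object, String right) {
        rights.computeIfAbsent(domain, d -> new HashMap<>())
              .computeIfAbsent(object, o -> new HashSet<>())
              .add(right);
    }

    boolean allowed(String domain, String object, String right) {
        return rights.getOrDefault(domain, Map.of())
                     .getOrDefault(object, Set.of())
                     .contains(right);
    }
}
```

Checking access is a simple lookup, and cross access is impossible unless some authority has granted it, mirroring how the matrix restricts users and processes to their own rows.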

        Figure 5 outlines domains represented by concentric circles with increasing privileges nearer the kernel. The access list shows how a shared workstation might implement protection. The grey rectangle represents security.

        Language-based protection extends the responsibility of the designers of operating systems to the programmers of applications. Hardware designs provide various protection strategies for implementing at the language level, and the language offers additional protections using data isolation principles of object-oriented design (OOD).


Figure 5 Protection and security overview 

Conclusion

         The concepts shared here are my interpretations of the content I read, researched, and discussed with my classmates during the past five weeks. My increased understanding of operating systems and the many types of hardware they control whets my curiosity to learn more about customizing operating systems for specific uses. As I continue learning about cyber and data security, I will use this knowledge to guide experiments with dedicated-purpose machines and increase my effectiveness as a future cyber warrior.

References

Brais, H. (2015, October 29). What is progress and bounded waiting in critical section? Stackoverflow. https://stackoverflow.com/questions/33143779/what-is-progress-and-bounded-waiting-in-critical-section#33409854

Geeksforgeeks. (2021, February 3). Structures of directory in operating system. https://www.geeksforgeeks.org/structures-of-directory-in-operating-system/

Silberschatz, A., Galvin, P., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). https://redshelf.com/

Singh, T. (2016). A comparative study of disk scheduling algorithms. IJCST. http://www.ijcstjournal.org/volume-4/issue-1/IJCST-V4I1P6.pdf

Thursday, July 23, 2020

Newbie to Newbie Blog Part Two

In my last post four weeks ago, I shared some advice about how to get up and running for coding with Java. I invited you to review some of the sources I used so you could start practicing. If you didn't have time, I understand; it's a lot of reading and experimenting. I have less hair now but more knowledge about algorithmic design in programming. Once again, I invite you to benefit from my struggles to help you along your way.

As you learn more about the syntax of Java and the concepts of object-oriented programming (OOP), you will appreciate the importance of data structures and how to choose one over another when solving a problem. At its simplest, a data structure can be a list. If you need that list to be an ordered list, it becomes slightly more complex. An even more complex structure would be an array containing arrays. It will be your responsibility as a programmer to choose the best structure for storing and manipulating data.

Every choice you make is a trade-off between time and space complexity. For example, suppose you have a set of data that is accessed frequently by a search algorithm. As the size of the dataset increases, the space it consumes in memory grows, and the time it takes to search it increases. Choosing a sorted list implementation is better for a large data set because a binary search algorithm can locate values much faster. The trade-off is in the resources it takes to keep the list ordered as elements are added, changed, or removed. The larger the dataset and the more often it is searched, the greater the benefit of a sorted list.
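To see the trade-off in code, compare a linear scan of an unsorted list with Java's built-in binary search over a sorted one. The method names here are mine; Arrays.binarySearch is part of the standard library.

```java
import java.util.Arrays;

public class SearchTradeoff {
    // Linear search: O(n) comparisons, but the data can stay unsorted.
    static int linearSearch(int[] data, int key) {
        for (int i = 0; i < data.length; i++)
            if (data[i] == key) return i;
        return -1;
    }

    // Binary search: O(log n) comparisons, but the array must already be
    // sorted -- keeping it sorted is the cost side of the trade-off.
    static int binarySearch(int[] sorted, int key) {
        return Arrays.binarySearch(sorted, key);
    }
}
```

On five elements the difference is invisible, but on five million, the binary search does about 23 comparisons where the linear scan may do millions, which is why the upkeep of a sorted structure pays off for large, frequently searched datasets.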

The best tool for making choices in your design is Big O notation. Learn this first, and learn it well. Here is an excellent resource for learning how to apply it in your design choices. After you cogitate on that, have a look at this and try some of the exercises to get a good handle on these concepts.

Thursday, June 25, 2020

Object-oriented Programming for Newbies

If you are new to object-oriented programming (OOP) like I am, you may find it helpful to read what I just learned through reading a few articles about it. For a programming language to qualify as OOP, it must include functionalities based on four principles.

First, there is encapsulation, which is kind of what it sounds like. We take an object and wrap it up so its inner workings are not visible, allowing us to control access to them.
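A tiny Java example of encapsulation (the class is my own illustration): the field is private, so the only way in is through methods that enforce the rules.

```java
public class BankAccount {
    // The balance is hidden; nothing outside this class can touch it directly.
    private int balanceCents = 0;

    // The only door in enforces the rule that deposits must be positive.
    public void deposit(int cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    public int balance() { return balanceCents; }
}
```

Because the inner workings are wrapped up, no caller can ever set the balance to a nonsense value; that is the access control encapsulation buys you.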

Next, there is data abstraction, which is a fancy way to say, model, or template. If I were to describe to you a garden hose without showing it to you, I would tell you its color, diameter, length, material, and the type of fittings on either end. You would have a general idea of what class of hose I am describing and know if it can do the job you might have for it.
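Here is my garden hose as a Java class, showing abstraction: only the attributes that matter for choosing a hose are modeled, and everything else about a real hose is left out. The class and method names are my own.

```java
public class GardenHose {
    // The abstraction: just the details you need to pick the right hose.
    final String color;
    final double diameterInches;
    final int lengthFeet;

    GardenHose(String color, double diameterInches, int lengthFeet) {
        this.color = color;
        this.diameterInches = diameterInches;
        this.lengthFeet = lengthFeet;
    }

    // A question the model can answer without ever seeing a real hose.
    boolean canReach(int feet) { return lengthFeet >= feet; }
}
```

You can decide whether this hose can do the job from the model alone, which is exactly the point of describing the hose without showing it to you.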

A third principle is inheritance. If my garden hose were a programming object, I could create copies of it, then add additional descriptions that correspond with a similar real-world object, like how long the warranty is, or what temperature range it can handle. I could change the default color inherited to another color.
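As Java code, that inheritance might look like this sketch (the class names are mine): the subclass keeps everything the base class has, adds a warranty and a temperature range, and changes the inherited default color.

```java
class BasicHose {
    String color = "green";       // default color, inherited by subclasses
    int lengthFeet;

    BasicHose(int lengthFeet) { this.lengthFeet = lengthFeet; }
}

// HeavyDutyHose inherits color and lengthFeet from BasicHose, then adds
// descriptions that match a similar real-world object.
class HeavyDutyHose extends BasicHose {
    int warrantyYears;
    int maxTempF;

    HeavyDutyHose(int lengthFeet, int warrantyYears, int maxTempF) {
        super(lengthFeet);
        this.warrantyYears = warrantyYears;
        this.maxTempF = maxTempF;
        this.color = "black";     // change the inherited default color
    }
}
```

Nothing about length had to be rewritten in the subclass; it came along for free, which is the labor-saving promise of inheritance.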

And finally, we have polymorphism. Who comes up with this stuff? It just means an object has one name but can run in different forms. It is very complex, though, and I don’t quite understand it yet. 
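Even without fully understanding polymorphism, we can see the basic idea in a small Java sketch (class names are mine): one method name, describe, whose behavior depends on the object's actual class at run time.

```java
abstract class Hose {
    abstract String describe();   // one name, many forms
}

class SoakerHose extends Hose {
    String describe() { return "soaker hose: gentle seep for garden beds"; }
}

class FireHose extends Hose {
    String describe() { return "fire hose: high-pressure jet"; }
}
```

A variable of type Hose can hold either kind of object, and calling describe() on it runs whichever version matches the real object; that run-time dispatch is the "many forms" in polymorphism.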

I’ll be writing some basic programs in Java in the next few weeks, so I expect to have a better explanation for you soon. For now, I’ll provide links to resources I’ve been using to get up to speed.

Starting with the basics, here’s an excellent tutorial. When you’re ready to learn from the developers of Java, here’s the definitive reference. And here’s an older article, but still worthwhile, relating OOP concepts in laymen terms.

Thanks for reading.

Sunday, August 25, 2019

Internet Safety

Internet Safety is the new Neighborhood Watch. Near the end of the 1960s, crime rates rose in many neighborhoods across the United States. The National Sheriff’s Association created the National Neighborhood Watch Program in 1972 to involve citizens in protecting their neighborhoods (NNW, n.d., para 2). The Neighborhood Watch has since become part of our culture. As the internet grew, so too did its potential for harming groups and individuals. Internet groupings are our modern neighborhoods. How do we keep them safe?

Sunday, August 18, 2019

Network Security

Information and system security are important to individuals because so much personal and financial information resides on information systems that are inherently vulnerable to compromise. Information and system security are important to organizations for the same reason, but also important because the success of developed countries depends on the secure operation of systems that support utilities, finances, and government (Zwass, 2016, Information Systems Security). Countries depend on organizations, and organizations depend on individuals. All depend on information systems to manage and control activities in the modern economy.

Thursday, August 15, 2019

Computers in the Workplace

I work for a natural gas company. When I came to work here, I was surprised to learn that local natural gas providers are transportation companies. We build and maintain infrastructure for transporting gas from our national provider to each of our customers. We bill for the amount of gas used, yet do not sell the gas itself. Technically, we charge for the transportation of natural gas. Computers are integral to the way we do business.

Thursday, August 8, 2019

Traveling Through a Network

A packet travels through a data network much like a physical package travels to a destination. A router uses the address of a packet to determine the best route to the next stop (another router). A shipping company uses a physical package's address to determine the best mode of transport, e.g., truck, ship, or plane. In both cases, there is always more than one path to a destination.