Sections:
Updated: 7/27/2015
Learning objective: Explain the general architecture of an operating system
An operating system is a program that acts as an interface between a user of a computer and the computer's resources. The purpose of an operating system is to provide an environment in which a user may execute programs.
Hardware
The hardware consists of the memory, the CPU and its arithmetic-logic unit, various bulk storage devices, I/O and peripheral devices, and other physical devices.
Kernel
In computing, the kernel is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
Shell
A shell is a piece of software that provides an interface for users to an operating system which provides access to the services of a kernel. The name shell originates from shells being an outer layer of interface between the user and the innards of the operating system (the kernel). [Wikipedia]
Operating system shells generally fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). In either category the primary purpose of the shell is to invoke or "launch" another program; however, shells frequently have additional capabilities such as viewing the contents of directories.
Thinking: Why make the shell separate from the kernel?
Key terms: CLI, GUI, hardware, kernel, shell
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Chapter 1 Introduction into Operating system @ UNESCO (www.netnam.vn/unescocourse/os/11.htm)
Kernel @ Wikipedia
Shell @ Wikipedia
Notes on navigation: Click inside the frame to navigate the embedded Web page. - Click outside the frame to scroll up/down between the embedded Web pages. - Click on the frame title to open that page in a new tab in most browsers. - Click on the "Reload page" link to reload the original page for that frame.
Notes:
Learning objective: Explain the role of the kernel
The bridge between applications and hardware
In computing, the kernel is the central component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. [Wikipedia]
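The facilities above are reached from user programs through system calls. A rough illustration using Python's os module, whose functions are thin wrappers over the underlying system calls (the file name is invented for the demo; the cited calls in the comments are the POSIX ones these wrappers use on Unix-like systems):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "kernel_demo.txt")

pid = os.getpid()  # getpid(2): ask the kernel for our process id

# Create and write a file: each call crosses into the kernel.
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)  # open(2)
os.write(fd, b"written through the kernel\n")                     # write(2)
os.close(fd)                                                      # close(2)

# Reading it back also goes through the kernel (read(2) under the hood).
with open(path, "rb") as f:
    data = f.read()

os.remove(path)  # unlink(2)
```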
Monolithic and micro kernels
Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels will try to achieve these goals by executing all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system. A range of possibilities exists between these two extremes. [Wikipedia]
Kernels are very complex
Kernel development is considered one of the most complex and difficult tasks in programming. Its central position in an operating system implies the necessity for good performance, which defines the kernel as a critical piece of software and makes its correct design and implementation difficult. For various reasons, a kernel might not even be able to use the abstraction mechanisms it provides to other software. Such reasons include memory management concerns (for example, a user-mode function might rely on memory being subject to demand paging, but as the kernel itself provides that facility it cannot use it, because then it might not remain in memory to provide that facility) and lack of reentrancy, thus making its development even more difficult for software engineers. [Wikipedia]
The most popular kernels are NT for Microsoft's Windows operating systems, Linux for operating systems like Android, and the BSD-derived XNU kernel for Apple's OS X. The latter two are Unix variants, so the two most common kernel families are NT and Unix.
Thinking: Should all kernels be open sourced?
Key terms: kernel, process
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Kernel @ Wikipedia
Notes:
Learning objective: Explain the role of the shell
The shell provides the interface to the operating system
A shell is a piece of software that provides an interface for users to an operating system which provides access to the services of a kernel. However, the term is also applied very loosely to applications and may include any software that is "built around" a particular component, such as web browsers and email clients that are "shells" for HTML rendering engines. The name shell originates from shells being an outer layer of interface between the user and the innards of the operating system (the kernel). [Wikipedia]
Operating system shells generally fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). In either category the primary purpose of the shell is to invoke or "launch" another program; however, shells frequently have additional capabilities such as viewing the contents of directories. [Wikipedia]
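The shell's primary purpose of launching another program, plus one of the extra capabilities mentioned above (viewing directory contents), can be sketched in Python. The child "program" here is just another Python interpreter, chosen so the example is self-contained:

```python
import os
import subprocess
import sys

# "Launch" another program the way a shell would: create a child
# process, let it run, and collect its exit status and output.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from a child process')"],
    capture_output=True, text=True,
)

# An additional shell capability: viewing the contents of a directory.
entries = sorted(os.listdir("."))
```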
The relative merits of CLI and GUI based shells are often debated. CLI proponents claim that certain operations can be performed much faster under CLI shells than under GUI shells (such as moving files, for example). However, GUI proponents advocate the comparative usability and simplicity of GUI shells. The best choice is often determined by the way in which a computer will be used. On a server mainly used for data transfers and processing with expert administration, a CLI is likely to be the best choice. On the other hand, a GUI would be more appropriate for a computer to be used for image or video editing and the development of the above data. [Wikipedia]
Graphical user interface (GUI)
A graphical user interface (GUI) (sometimes pronounced "gooey") is a type of user interface that allows people to interact with electronic devices (computers; hand-held devices such as MP3 players, portable media players, or gaming devices; household appliances; and office equipment) using images rather than typed text commands. A GUI offers graphical icons and visual indicators, as opposed to text-based interfaces, typed command labels, or text navigation, to represent the information and actions available to a user. The actions are usually performed through direct manipulation of the graphical elements. [Wikipedia]
Typically, the user interacts with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of the user. A Model-view-controller allows for a flexible structure in which the interface is independent from and indirectly linked to application functionality, so the GUI can be easily customized. This allows the user to select or design a different skin at will, and eases the designer's work to change the interface as the user needs evolve. Nevertheless, good user interface design relates to the user, not the system architecture. [Wikipedia]
Command line interface (CLI)
A command-line interface (CLI) is a mechanism for interacting with a computer operating system or software by typing commands to perform specific tasks. This text-only interface contrasts with the use of a mouse pointer with a graphical user interface (GUI) to click on options, or menus on a text user interface (TUI) to select options. This method of instructing a computer to perform a given task is referred to as "entering" a command: the system waits for the user to conclude the submitting of the text command by pressing the "Enter" key (a descendant of the "carriage return" key of a typewriter keyboard). A command-line interpreter then receives, analyses, and executes the requested command. The command-line interpreter may be run in a text terminal or in a terminal emulator window as a remote shell client such as PuTTY. Upon completion, the command usually returns output to the user in the form of text lines on the CLI. This output may be an answer if the command was a question, or otherwise a summary of the operation. [Wikipedia]
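The receive/analyse/execute cycle described above can be sketched as a toy command interpreter. This is an illustrative sketch, not a real shell; the echo built-in and the run_command helper are invented for the example:

```python
import shlex
import subprocess

def run_command(line):
    """Receive one command line, analyse it, and execute it, shell-style."""
    args = shlex.split(line)          # "analyse": split the line into words
    if not args:
        return ""
    if args[0] == "echo":             # a built-in, handled in-process
        return " ".join(args[1:])
    # Anything else: launch it as an external program and capture its output.
    done = subprocess.run(args, capture_output=True, text=True)
    return done.stdout.strip()

# A real shell would loop forever: read a line, run it, print the result,
# e.g.  for line in sys.stdin: print(run_command(line))
```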
CLIs are often used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users. CLIs are also popular among people with visual disability, since the commands and feedback can be displayed using refreshable Braille displays. [Wikipedia]
Thinking: If you could only have one shell, which one would you choose and why?
Key terms: CLI, GUI, kernel, shell
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Shell @ Wikipedia
Graphical user interface @ Wikipedia
Command-line interface @ Wikipedia
Notes:
Learning objective: Explain what resources OSs manage
An operating system provides the environment within which programs are executed. To construct such an environment, the system is partitioned into small modules with well-defined interfaces. The design of a new operating system is a major task. It is very important that the goals of the system be well defined before the design begins. The type of system desired is the foundation for choices between various algorithms and strategies that will be necessary. A system as large and complex as an operating system can only be created by partitioning it into smaller pieces. Each of these pieces should be a well-defined portion of the system with carefully defined inputs, outputs, and function. Obviously, not all systems have the same structure. However, many modern operating systems share the system components outlined below. [UNESCO]
Process Management
The CPU executes a large number of programs. While its main concern is the execution of user programs, the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process. A time-shared user program is a process. A system task, such as spooling, is also a process. For now, a process may be considered as a job or a time-shared program, but the concept is actually more general.
In general, a process will need certain resources such as CPU time, memory, files, I/O devices, etc., to accomplish its task. These resources are given to the process when it is created. In addition to the various physical and logical resources that a process obtains when it is created, some initialization data (input) may be passed along. For example, a process whose function is to display on the screen of a terminal the status of a file, say F1, will get as an input the name of the file F1 and execute the appropriate program to obtain the desired information.
The operating system is responsible for the following activities in connection with process management: [UNESCO]
- The creation and deletion of both user and system processes
- The suspension and resumption of processes
- The provision of mechanisms for process synchronization
- The provision of mechanisms for deadlock handling
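A rough sketch of the first two activities, creating/deleting and suspending/resuming a process, using Python's subprocess module. The suspension step uses the POSIX job-control signals SIGSTOP/SIGCONT, so this sketch assumes a Unix-like system; the child is just a sleeping Python interpreter invented for the demo:

```python
import signal
import subprocess
import sys

# Create a process (the analogue of the OS creating a user process).
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"]
)
was_running = child.poll() is None          # None means "still running"

# Suspend and then resume the process (POSIX job-control signals).
child.send_signal(signal.SIGSTOP)
child.send_signal(signal.SIGCONT)

# Delete (terminate) the process and reclaim its resources.
child.terminate()
child.wait()
```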
Memory Management
Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads or writes of specific memory addresses. The CPU fetches from and stores in memory.
In order for a program to be executed, it must be mapped to absolute addresses and loaded into memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. Eventually the program terminates, its memory space is declared available, and the next program may be loaded and executed.
In order to improve both the utilization of the CPU and the speed of the computer's response to its users, several processes must be kept in memory. There are many different memory management schemes, and which algorithm is best depends on the particular situation. Selection of a memory management scheme for a specific system depends upon many factors, but especially upon the hardware design of the system. Each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory management: [UNESCO]
- Keep track of which parts of memory are currently being used and by whom
- Decide which processes are to be loaded into memory when memory space becomes available
- Allocate and deallocate memory space as needed
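The bookkeeping described above can be illustrated with a toy first-fit allocator: track the free holes, carve allocations out of the first hole that fits, and return freed space to the hole list. Real kernels are far more sophisticated; the class below is invented purely for the example:

```python
class FirstFitMemory:
    """Toy memory manager: tracks free holes in a fixed-size memory
    and allocates with the first-fit strategy."""

    def __init__(self, size):
        self.holes = [(0, size)]   # (start, length) of each free block
        self.used = {}             # start -> length for live allocations

    def allocate(self, length):
        for i, (start, hole_len) in enumerate(self.holes):
            if hole_len >= length:
                # Carve the allocation out of the front of this hole.
                if hole_len == length:
                    del self.holes[i]
                else:
                    self.holes[i] = (start + length, hole_len - length)
                self.used[start] = length
                return start
        return None                # no hole is big enough

    def free(self, start):
        length = self.used.pop(start)
        self.holes.append((start, length))
        self.holes.sort()          # keep holes ordered (no coalescing here)
```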
Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since the main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage of information, of both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing. Hence the proper management of disk storage is of central importance to a computer system.
There are few alternatives. Magnetic tape systems are generally too slow. In addition, they are limited to sequential access. Thus tapes are more suited for storing infrequently used files, where speed is not a primary concern.
The operating system is responsible for the following activities in connection with disk management: [UNESCO]
- Free space management
- Storage allocation
- Disk scheduling
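Free space management is often done with a bitmap, one bit per disk block, set when the block is free. A toy sketch (the class is invented for the illustration; a real file system keeps this structure on the disk itself):

```python
class BlockBitmap:
    """Toy free-space manager: one boolean per disk block."""

    def __init__(self, nblocks):
        self.free = [True] * nblocks      # True means the block is free

    def allocate_block(self):
        for i, is_free in enumerate(self.free):
            if is_free:
                self.free[i] = False      # mark the block as in use
                return i
        raise RuntimeError("disk full")

    def release_block(self, i):
        self.free[i] = True               # return the block to the pool

    def free_count(self):
        return sum(self.free)
```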
I/O System
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. For example, in Unix, the peculiarities of I/O devices are hidden from the bulk of the operating system itself by the I/O system. Only the device driver knows the peculiarities of a specific device. The I/O system consists of: [UNESCO]
- A buffer caching system
- A general device driver code
- Drivers for specific hardware devices.
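The idea behind the buffer caching system, serving repeated reads of a block from memory instead of going back to the device, can be sketched as follows. The BufferCache class and the simulated "device" are inventions for the example:

```python
class BufferCache:
    """Toy buffer cache: keep recently read disk blocks in memory."""

    def __init__(self, read_block):
        self.read_block = read_block      # function doing the "real" I/O
        self.cache = {}

    def read(self, block_no):
        if block_no not in self.cache:    # miss: go to the device
            self.cache[block_no] = self.read_block(block_no)
        return self.cache[block_no]       # hit: no device access at all

device_reads = []                         # records every real I/O operation

def slow_device_read(block_no):
    device_reads.append(block_no)
    return b"data-%d" % block_no

cache = BufferCache(slow_device_read)
```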
File Management
File management is one of the most visible services of an operating system. Computers can store information in several different physical forms: magnetic tape, disk, and drum are the most common forms. Each of these devices has its own characteristics and physical organization.
For convenient use of the computer system, the operating system provides a uniform logical view of information storage. The operating system abstracts from the physical properties of its storage devices to define a logical storage unit, the file. Files are mapped, by the operating system, onto physical devices.
A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data. Data files may be numeric, alphabetic or alphanumeric. Files may be free-form, such as text files, or may be rigidly formatted. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by its creator and user. It is a very general concept.
The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use. Finally, when multiple users have access to files, it may be desirable to control by whom and in what ways files may be accessed.
The operating system is responsible for the following activities in connection with file management: [UNESCO]
- The creation and deletion of files
- The creation and deletion of directories
- The support of primitives for manipulating files and directories
- The mapping of files onto disk storage
- The backup of files on stable (non-volatile) storage
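The first three activities map directly onto familiar calls. A sketch using Python's standard library in a throwaway temporary directory (the directory and file names are invented for the demo):

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()                    # scratch area for the demo

os.mkdir(os.path.join(root, "docs"))         # create a directory
path = os.path.join(root, "docs", "notes.txt")
with open(path, "w") as f:                   # create a file
    f.write("hello")

# A manipulation primitive: list the directory's contents.
listing = os.listdir(os.path.join(root, "docs"))

os.remove(path)                              # delete the file
os.rmdir(os.path.join(root, "docs"))         # delete the directory
shutil.rmtree(root)                          # clean up the scratch area
```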
Protection System
The various processes in an operating system must be protected from each other's activities. For that purpose, various mechanisms can be used to ensure that the files, memory segments, CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system. For example, memory-addressing hardware ensures that a process can execute only within its own address space. The timer ensures that no process can gain control of the CPU without eventually relinquishing it. Finally, no process is allowed to do its own I/O, to protect the integrity of the various peripheral devices. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system; this mechanism must provide a means of specifying the controls to be imposed, together with some means of enforcement. Protection can improve reliability by detecting latent errors at the interfaces between component subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by a subsystem that is malfunctioning. An unprotected resource cannot defend against use (or misuse) by an unauthorized or incompetent user. [UNESCO]
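One protection mechanism that is visible from user space is the file permission bits: the operating system records who may read, write, or execute each file and enforces those controls on every access. A sketch using POSIX-style modes (assumes a Unix-like system; the temporary file is created just for the demo):

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Ask the OS to impose a control: owner may read, nobody may write.
os.chmod(path, stat.S_IRUSR)

# Read back what the OS recorded for this file.
mode = stat.S_IMODE(os.stat(path).st_mode)
owner_can_write = bool(mode & stat.S_IWUSR)

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # restore, then clean up
os.remove(path)
```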
Command Interpreter System
One of the most important components of an operating system is its command interpreter. The command interpreter is the primary interface between the user and the rest of the system. Many commands are given to the operating system by control statements. When a new job is started in a batch system or when a user logs in to a time-shared system, a program which reads and interprets control statements is automatically executed. This program is variously called (1) the control card interpreter, (2) the command line interpreter, (3) the shell, and so on. Its function is quite simple: get the next command statement, and execute it. The command statements themselves deal with process management, I/O handling, secondary storage management, main memory management, file system access, protection, and networking. [UNESCO]
Thinking: Are all the resources equal with regards to being managed by an OS?
Key terms: I/O system, Storage, file management, memory, process
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Chapter 1 Introduction into Operating system @ UNESCO (www.netnam.vn/unescocourse/os/13.htm)
Notes:
Learning objective: Explain aspects of multi-tasking environments
In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem of sharing the CPU by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch. When context switches occur frequently enough, the illusion of parallelism is achieved. Even on computers with more than one CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there are CPUs. [Wikipedia]
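The scheduling and context-switch idea can be illustrated with a toy round-robin simulation. The round_robin function is invented for the example and only models which task holds the CPU in each time slice, not real execution:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Toy round-robin scheduler. tasks maps name -> remaining CPU time.
    Returns the order in which tasks got the CPU; each entry represents
    one context switch to that task."""
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)              # context switch to this task
        if remaining > quantum:
            # Quantum expired before the task finished: preempt and requeue.
            queue.append((name, remaining - quantum))
        # else: the task finished within its time slice and exits.
    return timeline
```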
Cooperative multitasking (e.g., Windows 3.x and classic Mac OS)
When computer usage evolved from batch mode to interactive mode, multiprogramming was no longer a suitable approach. Each user wanted to see his program running as if it was the only program in the computer. The use of time sharing made this possible, with the qualification that the computer would not seem as fast to any one user as it really would be if it were running only that user's program. Because a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself or cause the whole system to hang. In a server environment, this is a hazard that makes the network brittle and fragile. All software must be evaluated and cleared for use in a test environment before being installed on the main server, or the entire network either slows down or comes to a halt when a program on the server misbehaves. [Wikipedia]
Preemptive multitasking (NT and Linux)
Preemptive multitasking allows the computer system to more reliably guarantee each process a regular "slice" of operating time. It also allows the system to rapidly deal with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. For example, preemptive multitasking was implemented in the earliest version of Unix in 1969, and is standard in Unix and Unix-like operating systems, including Linux, Solaris and BSD with its derivatives. [Wikipedia]
At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution. [Wikipedia]
A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively, and legacy 16-bit Windows 3.x programs are multitasked cooperatively within a single process, although in the NT family it is possible to force a 16-bit application to run as a separate preemptively multitasked process. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer provide support for legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications. [Wikipedia]
Multithreading
As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g. one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data. Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are basically processes that run in the same memory context. Threads are described as lightweight because switching between threads does not involve changing the memory context. [Wikipedia]
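A small demonstration that threads really do share one memory context: four threads update the same dictionary, and the final value reflects all of their work. The lock is there because shared memory requires coordination between the cooperating threads:

```python
import threading

counter = {"value": 0}          # one object, visible to every thread
lock = threading.Lock()

def worker(n):
    for _ in range(n):
        with lock:              # coordinate access to the shared memory
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All four threads updated the SAME dictionary: one address space.
```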
Thinking: How much multitasking is enough?
Key terms: multitasking, multithreading, process
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Multitasking @ Wikipedia
Notes:
Learning objective: Explain the difference between programs and services
Computer program
A computer program is a sequence of instructions written to perform a specified task for a computer. A computer requires programs to function, typically executing the program's instructions in a central processor. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled), enables a programmer to study and develop its algorithms. [Wikipedia]
Process
A process is an executing program, including the current values of the program counter, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, of course, the real CPU switches back and forth from process to process, but to understand the system, it is much easier to think about a collection of processes running in (pseudo) parallel than to try to keep track of how the CPU switches from program to program. This rapid switching back and forth is called multiprogramming, as we saw in the previous section. The difference between a process and a program is subtle, but crucial. An analogy may help make this point clearer. Consider a culinary-minded computer scientist who is baking a birthday cake for his daughter. He has a birthday cake recipe and a kitchen well-stocked with the necessary input: flour, eggs, sugar, and so on. In this analogy, the recipe is the program (i.e., an algorithm expressed in some suitable notation), the computer scientist is the processor (CPU), and the cake ingredients are the input data. The process is the activity consisting of our baker reading the recipe, fetching the ingredients, and baking the cake. [UNESCO]
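The program/process distinction can be made concrete: one program (a single line of source, the "recipe") run twice yields two processes, each with its own process id. The child program here is invented for the demo:

```python
import subprocess
import sys

# One "program": a line of Python source (the recipe).
program = "import os; print(os.getpid())"

# Two "processes": two independent executions of that same program,
# each reporting its own process id.
pids = [
    subprocess.run([sys.executable, "-c", program],
                   capture_output=True, text=True).stdout.strip()
    for _ in range(2)
]
```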
Service
In Unix and other multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user. Typically daemon names end with the letter d: for example, syslogd is the daemon that implements the system logging facility, and sshd is the daemon that services incoming SSH connections. Systems often start daemons at boot time: they often serve the function of responding to network requests, hardware activity, or other programs by performing some task. Daemons can also configure hardware (like udevd on some GNU/Linux systems), run scheduled tasks (like cron), and perform a variety of other tasks. [Wikipedia]
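A rough sketch of running a program detached in the background, daemon-style, using Python's subprocess module. Passing start_new_session=True makes the child call setsid(), detaching it from our terminal session, so this assumes a POSIX system; real daemons do more (double-forking, changing the working directory, and so on), and the sleeping child is invented for the demo:

```python
import subprocess
import sys

# Launch a long-running child in its own session, with no terminal I/O,
# the way a daemon runs: in the background, not under user control.
daemon = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(30)"],
    start_new_session=True,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
running = daemon.poll() is None      # None means it is alive in the background

daemon.terminate()                   # a real daemon would keep running
daemon.wait()
```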
In the Microsoft DOS environment, daemon-like programs were implemented as Terminate and Stay Resident (TSR) software. On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons. They run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. In Windows 2000 and later versions, Windows services are configured and manually started and stopped using the Control Panel or the net start and net stop commands. [Wikipedia]
Thinking: How many services are currently running on your computer?
Key terms: TSR, daemon, process, program, services
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Computer program @ Wikipedia
Chapter 1 Introduction into Operating system @ UNESCO (www.netnam.vn/unescocourse/os/13.htm)
Daemon @ Wikipedia
Notes:
Learning objective: Explain the role of virtual memory
In computing, virtual memory is a memory management technique developed for multitasking kernels. The program thinks it has a large range of contiguous addresses, but in reality the parts it is currently using are scattered around RAM, and the inactive parts are saved in a disk file. The technique virtualizes a computer architecture's various hardware memory devices (such as RAM modules and disk storage drives), allowing a program to be designed as though there were only one hardware memory device: this "virtual" device acts like a RAM module to which the program has, by default, sole access as the basis for a contiguous working memory. [Wikipedia]
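One user-visible form of this virtualization is a memory-mapped file: a disk file mapped into the process's address space and accessed as if it were ordinary RAM, using the same paging machinery that backs virtual memory. A sketch using Python's mmap module (the temporary file stands in for the "backing store"):

```python
import mmap
import os
import tempfile

# Create a small file on disk to act as the backing store.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00" * 4096)               # one page of zeros

# Map it into our address space: the file now looks like plain memory.
with mmap.mmap(fd, 4096) as mem:
    mem[0:5] = b"hello"                    # an ordinary memory write...
    first = bytes(mem[0:5])

with open(path, "rb") as f:                # ...which lands in the disk file
    on_disk = f.read(5)

os.close(fd)
os.remove(path)
```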
Main memory (RAM)
Primary storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. [Wikipedia]
Secondary memory (disk)
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down; it is non-volatile. Per unit, it is typically also two orders of magnitude less expensive than primary storage. Consequently, modern computer systems typically have two orders of magnitude more secondary storage than primary storage, and data are kept there for a longer time. In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. [Wikipedia]
Thrashing
When paging is used, a potential problem called "thrashing" can occur, in which the computer spends a disproportionate amount of its capacity swapping pages to and from a backing store, and therefore performs useful work more slowly. Adding real memory is the simplest response, although improving application design, scheduling, and memory usage can also help. [Wikipedia]
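Thrashing is easy to reproduce in a toy paging simulator. This sketch assumes simple FIFO page replacement and a made-up reference string; shrinking memory by a single frame below the program's working set turns a handful of cold-start faults into a fault on every single reference:

```python
from collections import deque

def count_faults(refs, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest resident page
            frames.append(page)
    return faults

# A program cycling through a 4-page working set, 100 references in total.
refs = [0, 1, 2, 3] * 25

print(count_faults(refs, 4))  # working set fits: 4 faults (cold start only)
print(count_faults(refs, 3))  # one frame too few: all 100 references fault
```

With three frames, FIFO always evicts exactly the page that is about to be needed next, so the simulated machine does nothing but service faults—the essence of thrashing.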
Thinking: Will there always be a need for virtual memory?
Key terms: memory, storage, thrashing, virtual memory
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
Virtual memory @ Wikipedia
Computer data storage @ Wikipedia
Notes:
Learning objective: Describe the events of the booting sequence
Every time you turn on your computer, it's as if it's the first time the computer has ever been turned on, since there are no instructions or data in memory. This also makes multi-booting possible, allowing different operating systems to be loaded and used on the same computer. For example, Macs allow users to run OS X, Windows, and Linux using the Boot Camp boot manager that comes as an OS X utility.
The booting process of DOS begins when the first sector of a diskette or hard disk is loaded into memory. From the time we switch on the computer until booting completes, the following series of events occurs. [UNESCO]
In Windows, when the pointer changes from the hourglass back to the normal pointer, the operating system has been loaded and all the subsystems are synchronized and ready to process user requests.
BIOS
The BIOS of a PC is software built into the machine, and is the first code run by a PC when powered on ('boot firmware'). The primary function of the BIOS is to load and start an operating system. When the PC starts up, the first job for the BIOS is to initialize and identify system devices such as the video display card, keyboard and mouse, hard disk, CD/DVD drive, and other hardware. The BIOS then locates software held on a peripheral device (designated as the 'boot device'), such as a hard disk or a CD, and loads and executes that software, giving it control of the PC. This process is known as booting, or booting up, which is short for bootstrapping. [Wikipedia]
Initialization
After the computer is turned on, program execution starts at memory location F000:FFF0. This is part of the ROM-BIOS and contains a jump command to a BIOS routine that takes over system initialization. The location of this routine may differ from one computer to another; however, the task it performs is identical for nearly all PCs. [UNESCO]
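The address F000:FFF0 is a real-mode x86 segment:offset pair, and a quick calculation shows where it lands in physical memory (the physical address is the segment shifted left by four bits plus the offset):

```python
# Real-mode x86 address: physical = segment * 16 + offset.
# F000:FFF0 is the reset entry point inside the ROM BIOS.
segment = 0xF000
offset = 0xFFF0

physical = (segment << 4) + offset
print(hex(physical))  # 0xffff0 -- 16 bytes below the 1 MiB boundary
```

Those 16 bytes are just enough for the jump instruction into the real initialization routine, which is why the line above describes it as "a jump command" rather than the routine itself.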
System Check
First the BIOS tests individual functions of the processor, its registers, and some instructions. If an error occurs during this test, the system stops without displaying an error message. If the CPU passes the test, the BIOS tests the ROMs by computing checksums. Each chip on the main board goes through tests and initialization. [UNESCO]
Peripheral testing
After determining the function of the main board, the computer tests the peripherals (keyboard, disk drive, etc.). Then the computer initializes the BIOS variables and the interrupt vector table. [UNESCO]
The bootstrap loader
A computer's central processor can only execute program code found in read-only memory (ROM), random-access memory (RAM), or an operator's console. Modern operating systems, application program code, and data are stored on nonvolatile data storage devices, such as hard disk drives, CDs, DVDs, flash memory cards (like an SD card), USB flash drives, and floppy disks. When a computer is first powered on, it does not have an operating system in ROM or RAM. The computer must initially execute a small program stored in ROM, along with the bare minimum of data needed to access the nonvolatile devices from which the operating system programs and data are loaded into RAM. The small program that starts this sequence of loading into RAM is known as a bootstrap loader, bootstrap, or boot loader. This small boot loader program's only job is to load other data and programs, which are then executed from RAM. Often, multiple-stage boot loaders are used, during which several programs of increasing complexity sequentially load one after the other in a process of chain loading. [Wikipedia]
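The chain-loading idea can be sketched in a few lines. The stage names below are made up for illustration; the point is only that each stage knows just enough to locate and start the next, larger stage:

```python
# Toy sketch of multi-stage chain loading (hypothetical stage names).
def rom_bootstrap():
    """Tiny program in ROM: just enough code to read the boot device."""
    return load_and_run("stage1")

def load_and_run(stage):
    chain = {
        "stage1": "stage2",  # boot sector loads a bigger loader
        "stage2": "kernel",  # bigger loader loads the OS kernel
    }
    print(f"running {stage}")
    next_stage = chain.get(stage)
    if next_stage is None:
        return f"{stage} now controls the machine"
    return load_and_run(next_stage)

print(rom_bootstrap())
```

Each handoff trades away the previous stage entirely: once the kernel is running, the boot sector and the intermediate loader are no longer needed and their memory can be reused.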
Thinking: Why not put the entire OS on a ROM chip?
Key terms: BIOS, booting, bootstrap loader, initialization
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
BIOS @ Wikipedia
Booting @ Wikipedia
Comparison of boot loaders @ Wikipedia
Chapter 2 Operating Systems on PC @ UNESCO (www.netnam.vn/unescocourse/os/21.htm)
Notes:
Learning objective: Explain the architecture of the NT kernel
Windows 95 was an important operating system for Microsoft. However, it was a transitional system: it gave Microsoft its first 32-bit operating system, but much of the 95 kernel was still built on older, less stable architecture. The kernel is the core software that connects applications with the hardware, and for Microsoft to enter the emerging server market, it needed something truly new.

To address these concerns, Microsoft built the NT operating system from the ground up, without any 95 kernel code. The Windows 95 shell was fitted to the new OS, which made NT easy to learn and use, since the user interface was the same as Windows 95.

The NT kernel is composed of four rings that surround the hardware. The middle two rings are currently unused and set aside for future development. The inner ring, or kernel mode, controls the functioning of the hardware through a hardware abstraction layer, which was designed to hide differences in hardware and therefore provide a consistent platform on which applications may run. The top ring, or user mode, manages how the user interface interacts with the kernel. Note the multiple application program interfaces, or APIs, it has in order to work with Windows, OS/2, and POSIX for UNIX; Microsoft was late into this market and needed to make its OS work well with other, more established systems.

This New Technology, or NT, would become the kernel foundation for XP, Server 2000, Server 2003, Vista, Windows 7, and beyond. If you want to truly understand Windows, you need to understand the NT kernel.
The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered design that consists of two main components, user mode and kernel mode. It is a preemptive, reentrant operating system, designed to work with uniprocessor and symmetric multiprocessor (SMP) computers. To process input/output (I/O) requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O. Starting with Windows 2000, Microsoft began making 64-bit versions of Windows available; before this, these operating systems existed only in 32-bit versions. [Wikipedia]
User mode
The user mode is made up of subsystems which can pass I/O requests to the appropriate kernel mode drivers via the I/O manager (which exists in kernel mode). Two subsystems make up the user mode layer of Windows NT: the Environment subsystem and the Integral subsystem. The Environment subsystem was designed to run applications written for many different types of operating systems. None of the environment subsystems can access hardware directly; they must request access to memory resources through the Virtual Memory Manager, which runs in kernel mode. Also, applications run at a lower priority than kernel mode processes.
There are three main environment subsystems: the Win32 subsystem, an OS/2 subsystem and a POSIX subsystem.
Kernel mode
Windows NT kernel mode has full access to the hardware and system resources of the computer and runs code in a protected memory area. It controls access to scheduling, thread prioritization, memory management, and the interaction with hardware. Kernel mode stops user mode services and applications from accessing critical areas of the operating system that they should not have access to; user mode processes must ask kernel mode to perform such operations on their behalf. While the x86 architecture supports four privilege levels (numbered 0 to 3), only the two extreme levels are used: user-mode programs run at CPL 3 and the kernel runs at CPL 0, often referred to as "ring 3" and "ring 0", respectively. This design decision was made to achieve code portability to RISC platforms that support only two privilege levels, though it breaks compatibility with OS/2 applications containing I/O privilege segments that attempt to access hardware directly. Kernel mode consists of executive services (itself made up of many modules that perform specific tasks), kernel drivers, a kernel, and a Hardware Abstraction Layer, or HAL. [Wikipedia]
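The "ask the kernel on your behalf" relationship can be modeled in miniature. The service names below are invented for illustration and are not real NT system calls; the sketch only shows the shape of the boundary—user-mode code never touches privileged state directly, it goes through a single validated doorway:

```python
# Toy model of the user/kernel split (hypothetical service names).
# "Kernel mode" is the only place privileged work actually happens.
KERNEL_SERVICES = {
    "read_file": lambda path: f"<contents of {path}>",
    "set_priority": lambda n: f"priority set to {n}",
}

def syscall(name, *args):
    """The single doorway from user mode into kernel mode.

    The kernel validates the request before doing privileged work;
    anything not on the list is refused rather than executed.
    """
    handler = KERNEL_SERVICES.get(name)
    if handler is None:
        raise PermissionError(f"no such kernel service: {name}")
    return handler(*args)

print(syscall("read_file", "/etc/hosts"))
```

A request for an unlisted service (say, `syscall("poke_hardware")`) is rejected, which is the toy analogue of a ring-3 program being unable to execute ring-0 operations itself.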
Hardware abstraction layer
A hardware abstraction layer (HAL) is an abstraction layer, implemented in software, between the physical hardware of a computer and the software that runs on that computer. Its function is to hide differences in hardware from most of the operating system kernel, so that most of the kernel-mode code does not need to be changed to run on systems with different hardware. On a PC, the HAL can basically be considered the driver for the motherboard; it allows instructions from higher-level computer languages to communicate with lower-level components, such as directly with hardware. The Windows NT operating system has a HAL in the kernel space, between the hardware and the kernel, drivers, and executive services. This allows portability of the Windows NT kernel-mode code to a variety of processors with different memory management unit architectures, and to a variety of systems with different I/O bus architectures; most of that code runs without change on those systems when compiled for their instruction sets. For example, the SGI Intel x86-based workstations were not IBM PC compatible, but due to the HAL, Windows NT was able to run on them. [Wikipedia]
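The HAL pattern itself is just programming against an interface. In this sketch the platform classes and the timer-access strings are made up; the point is that the "kernel" function is written once and never changes when the hardware underneath does:

```python
# Toy HAL sketch (hypothetical platforms and timer details).
class HAL:
    """Interface the portable kernel code is written against."""
    def read_timer(self):
        raise NotImplementedError

class PCHal(HAL):
    def read_timer(self):
        return "read PIT channel 0"        # made-up PC-style timer access

class SGIHal(HAL):
    def read_timer(self):
        return "read MMIO timer register"  # made-up non-PC timer access

def kernel_tick(hal):
    """Portable 'kernel' code: identical on every platform."""
    return f"tick via: {hal.read_timer()}"

print(kernel_tick(PCHal()))
print(kernel_tick(SGIHal()))
```

Swapping `PCHal` for `SGIHal` changes what happens at the hardware level without touching `kernel_tick`, which is the same trick that let NT run on non-PC-compatible x86 workstations.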
Blue Screen of Death
In the Windows NT family of operating systems, the blue screen of death (officially known as a Stop error, and referred to as a "bug check" in the Windows Software Development Kit and Driver Development Kit documentation) occurs when the kernel, or a driver running in kernel mode, encounters an error from which it cannot recover. This is usually caused by an illegal operation being performed. The only safe action the operating system can take in this situation is to restart the computer. As a result, data may be lost, as users are not given an opportunity to save data that has not yet been written to the hard drive. [Wikipedia]
Thinking: Why have a HAL on NT?
Key terms: Blue Screen of Death, HAL, Kernel mode, NT, POSIX, User mode, kernel
Resources:
To maximize your learning, please visit these Web sites and review their content
to help reinforce the concepts presented in this section.
Quick links:
NT kernel @ Wikipedia
Protection rings @ Wikipedia
Hardware abstraction layer @ Wikipedia
POSIX @ Wikipedia
Blue Screen of Death @ Wikipedia
Notes: