Friday, 3 February 2017

ELECTROMAGNETIC WAVES

FIRST PERIODICAL NOTES
 

Tuesday, 10 January 2017

OPERATING SYSTEMS



→Functions of OS
→Shell programming
→SYSTEM CALL
→Process State
→PCB(process control block)
→Process Scheduling
→Interprocess Communication
→RACE CONDITION
→FCFS(first come first served)
→SJF(shortest job first)
→SRTF(shortest remaining time first)
→PRIORITY(preemptive)
→PRIORITY(non-preemptive)
→Round Robin
→Multi-level Queue
→Necessary conditions for Deadlock
→Resource allocation graphs
→Deadlock Prevention
→Deadlock Avoidance
→BANKER'S ALGORITHM
→Deadlock Detection
→Wait for Graph
→Multi-threading Models
→Threading Issues
→Producer-Consumer
→Critical Section Conditions
→Peterson's Solution
→Semaphores
→Readers-Writers problem
→Dining-Philosophers problem
→Dining-Philosophers solution using monitors
→Locking Protocols
→Address Binding
→Logical Address space
→Physical Address space
→Swapping
→Memory Allocation
       first fit, best fit, worst fit
→Segmentation
→Paging
→Hierarchical Paging
→Hashed Page Tables
→Inverted Page Tables
→Virtual Address Space
→Demand Paging
→FIFO Page Replacement
→OPTIMAL Page Replacement
→LRU Page Replacement
→THRASHING

OPERATING SYSTEM-INTRO

                                                             Operating-System Structures

2.1 Operating-System Services
User Interfaces - Means by which users can issue commands to the system. Depending on the system these may be a command-line interface ( e.g. sh, csh, ksh, tcsh, etc. ), a GUI interface ( e.g. Windows, X-Windows, KDE, Gnome, etc. ), or batch command systems. The latter are generally older systems using punch cards of job-control language, JCL, but may still be used today for specialty systems designed for a single purpose.
Program Execution - The OS must be able to load a program into RAM, run the program, and terminate the program, either normally or abnormally.
I/O Operations - The OS is responsible for transferring data to and from I/O devices, including keyboards, terminals, printers, and storage devices.
File-System Manipulation - In addition to raw data storage, the OS is also responsible for maintaining directory and subdirectory structures, mapping file names to specific blocks of data storage, and providing tools for navigating and utilizing the file system.
Communications - Inter-process communications, IPC, either between processes running on the same processor, or between processes running on separate processors or separate machines. May be implemented as either shared memory or message passing, ( or some systems may offer both. )
Error Detection - Both hardware and software errors must be detected and handled appropriately, with a minimum of harmful repercussions. Some systems may include complex error avoidance or recovery systems, including backups, RAID drives, and other redundant systems. Debugging and diagnostic tools aid users and administrators in tracing down the cause of problems.
Other systems aid in the efficient operation of the OS itself:
Resource Allocation - E.g. CPU cycles, main memory, storage space, and peripheral devices. Some resources are managed with generic systems and others with very carefully designed and specially tuned systems, customized for a particular resource and operating environment.
Accounting - Keeping track of system activity and resource usage, either for billing purposes or for statistical record keeping that can be used to optimize future performance.
Protection and Security - Preventing harm to the system and to resources, either through wayward internal processes or malicious outsiders. Authentication, ownership, and restricted access are obvious parts of this system. Highly secure systems may log all process activity down to excruciating detail, and security regulations dictate the storage of those records on permanent non-erasable media for extended times in secure ( off-site ) facilities.

2.2 User Operating-System Interface
2.2.1 Command Interpreter
Gets and processes the next user request, and launches the requested programs.
In some systems the CI may be incorporated directly into the kernel.
More commonly the CI is a separate program that launches once the user logs in or otherwise accesses the system.
UNIX, for example, provides the user with a choice of different shells, which may either be configured to launch automatically at login, or which may be changed on the fly. ( Each of these shells uses a different configuration file of initial settings and commands that are executed upon startup. )
Different shells provide different functionality, in terms of certain commands that are implemented directly by the shell without launching any external programs. Most provide at least a rudimentary command interpretation structure for use in shell script programming ( loops, decision constructs, variables, etc. )
An interesting distinction is the processing of wild card file naming and I/O re-direction. On UNIX systems those details are handled by the shell, and the program which is launched sees only a list of filenames generated by the shell from the wild cards. On a DOS system, the wild cards are passed along to the programs, which can interpret the wild cards as the program sees fit.
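A quick way to see this on a UNIX system is a tiny argument-printing program ( the name showargs.c is just an illustration ): run as ./showargs *.txt it prints the already-expanded file names, because the shell performs the wildcard expansion before the program is launched.

/* showargs.c - print the arguments this program actually receives.     */
/* On UNIX, "./showargs *.txt" shows expanded file names, because the   */
/* shell performs the wildcard expansion before launching the program.  */
#include <stdio.h>

int main( int argc, char *argv[] )
{
    for ( int i = 0; i < argc; i++ )
        printf( "argv[%d] = %s\n", i, argv[i] );
    return 0;
}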

2.2.2 Graphical User Interface, GUI
Generally implemented as a desktop metaphor, with file folders, trash cans, and resource icons.
Icons represent some item on the system, and respond accordingly when the icon is activated.
First developed in the early 1970's at Xerox PARC research facility.
In some systems the GUI is just a front end for activating a traditional command line interpreter running in the background. In others the GUI is a true graphical shell in its own right.
Mac has traditionally provided ONLY the GUI interface. With the advent of OSX ( based partially on UNIX ), a command line interface has also become available.
Because mice and keyboards are impractical for small mobile devices, these normally use a touch-screen interface today, which responds to various patterns of swipes or "gestures". When these devices first came out they often had a physical keyboard and/or a trackball of some kind built in, but today a virtual keyboard is more commonly implemented on the touch screen.

2.2.3 Choice of interface
Most modern systems allow individual users to select their desired interface, and to customize its operation, as well as the ability to switch between different interfaces as needed. System administrators generally determine which interface a user starts with when they first log in.
GUI interfaces usually provide an option for a terminal emulator window for entering command-line commands.
Command-line commands can also be entered into shell scripts, which can then be run like any other programs.

2.3 System Calls
System calls provide a means for user or application programs to call upon the services of the operating system.
Generally written in C or C++, although some are written in assembly for optimal performance.
You can use "strace" to see more examples of the large number of system calls invoked by a single simple command. Read the man page for strace, and try some simple examples. ( strace mkdir temp, strace cd temp, strace date > t.t, strace cp t.t t.2, etc. )
Most programmers do not use the low-level system calls directly, but instead use an "Application Programming Interface", API. For example, the read( ) call is available in the API on UNIX-based systems:
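A minimal sketch of the standard POSIX prototype, together with a typical call that echoes input back to output:

#include <unistd.h>

/* ssize_t read( int fd, void *buf, size_t count );                     */
/*   fd    - descriptor of an open file ( 0 is standard input )         */
/*   buf   - address of a buffer to receive the data                    */
/*   count - maximum number of bytes to read                            */
/*   returns the number of bytes read, 0 at end of file, -1 on error    */

int main( void )
{
    char buffer[128];
    ssize_t n = read( 0, buffer, sizeof buffer );   /* read up to 128 bytes from stdin */
    if ( n > 0 )
        write( 1, buffer, (size_t) n );             /* echo them back to stdout        */
    return 0;
}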
Parameters are generally passed to system calls via registers, or less commonly, by values pushed onto the stack. Large blocks of data are generally accessed indirectly, through a memory address passed in a register or on the stack ( see Figure 2.7 in the text ).

2.4.1 Process Control
Process control system calls include end, abort, load, execute, create process, terminate process, get/set process attributes, wait for time or event, signal event, and allocate and free memory.
Processes must be created, launched, monitored, paused, resumed, and eventually stopped.
When one process pauses or stops, another must be launched or resumed.
When processes stop abnormally it may be necessary to provide core dumps and/or other diagnostic or recovery tools.
Compare DOS ( a single-tasking system ) with UNIX ( a multi-tasking system ).
When a process is launched in DOS, the command interpreter first unloads as much of itself as it can to free up memory, then loads the process and transfers control to it. The interpreter does not resume until the process has completed. Because UNIX is a multi-tasking system, the command interpreter remains completely resident when executing a process.
The user can switch back to the command interpreter at any time, and can place the running process in the background even if it was not originally launched as a background process.
In order to do this, the command interpreter first executes a "fork" system call, which creates a second process which is an exact duplicate ( clone ) of the original command interpreter. The original process is known as the parent, and the cloned process is known as the child, with its own unique process ID and parent ID.
The child process then executes an "exec" system call, which replaces its code with that of the desired process.
The parent ( command interpreter ) normally waits for the child to complete before issuing a new command prompt, but in some cases it can also issue a new prompt right away, without waiting for the child process to complete. ( The child is then said to be running "in the background", or "as a background process". )
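A minimal sketch of that fork( ) / exec( ) / wait( ) sequence in C ( error handling kept brief; the program run in the child, ls, is just an example ):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main( void )
{
    pid_t pid = fork( );                    /* clone the command interpreter              */

    if ( pid < 0 ) {                        /* fork failed                                */
        perror( "fork" );
        exit( EXIT_FAILURE );
    } else if ( pid == 0 ) {                /* child: replace its code with a new program */
        execlp( "ls", "ls", "-l", (char *) NULL );
        perror( "execlp" );                 /* reached only if the exec fails             */
        exit( EXIT_FAILURE );
    } else {                                /* parent: wait for the child to complete     */
        int status;
        waitpid( pid, &status, 0 );
        printf( "child %d finished\n", (int) pid );
    }
    return 0;
}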

2.4.2 File Management

File management system calls include create file, delete file, open, close, read, write, reposition, get file attributes, and set file attributes.
These operations may also be supported for directories as well as ordinary files.
( The actual directory structure may be implemented using ordinary files on the file system, or through other means. Further details will be covered in chapters 11 and 12. )
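A small sketch of the corresponding UNIX calls ( the file name t.t matches the strace examples above and is otherwise arbitrary ):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main( void )
{
    char buf[32];

    int fd = open( "t.t", O_CREAT | O_RDWR | O_TRUNC, 0644 );  /* create / open the file */
    if ( fd < 0 ) { perror( "open" ); return 1; }

    write( fd, "hello\n", 6 );                  /* write                                  */
    lseek( fd, 0, SEEK_SET );                   /* reposition to the beginning            */
    ssize_t n = read( fd, buf, sizeof buf );    /* read the data back                     */
    if ( n > 0 )
        write( STDOUT_FILENO, buf, (size_t) n );
    close( fd );                                /* close                                  */
    return 0;
}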
2.4.3 Device Management

Device management system calls include request device, release device, read, write, reposition, get/set device attributes, and logically attach or detach devices.
Devices may be physical ( e.g. disk drives ), or virtual / abstract ( e.g. files, partitions, and RAM disks ).
Some systems represent devices as special files in the file system, so that accessing the "file" calls upon the appropriate device drivers in the OS. See for example the /dev directory on any UNIX system.
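For example, reading a few random bytes from the /dev/urandom device file ( present on Linux and most modern UNIX systems ) uses exactly the same open / read / close calls as an ordinary file:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main( void )
{
    unsigned char byte;
    int fd = open( "/dev/urandom", O_RDONLY );  /* the "file" is really a device          */
    if ( fd < 0 ) { perror( "open" ); return 1; }

    for ( int i = 0; i < 4; i++ ) {
        read( fd, &byte, 1 );                   /* the device driver supplies the data    */
        printf( "%02x ", byte );
    }
    printf( "\n" );
    close( fd );
    return 0;
}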
2.4.4 Information Maintenance

Information maintenance system calls include calls to get/set the time, date, system data, and process, file, or device attributes.
Systems may also provide the ability to dump memory at any time, to single-step programs ( pausing execution after each instruction ), and to trace the operation of programs, all of which can help to debug programs.
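A few of these calls on a UNIX system, as a sketch:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main( void )
{
    time_t now = time( NULL );                  /* get the current time and date          */
    printf( "time : %s", ctime( &now ) );       /* ctime( ) supplies its own newline      */
    printf( "pid  : %d\n", (int) getpid( ) );   /* this process's ID                      */
    printf( "ppid : %d\n", (int) getppid( ) );  /* the parent process's ID                */
    return 0;
}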
2.4.5 Communication

Communication system calls include create/delete communication connection, send/receive messages, transfer status information, and attach/detach remote devices.
The message passing model must support calls to:
Identify a remote process and/or host with which to communicate.
Establish a connection between the two processes.
Open and close the connection as needed.
Transmit messages along the connection.
Wait for incoming messages, in either a blocking or non-blocking state.
Delete the connection when no longer needed.
The shared memory model must support calls to:
Create and access memory that is shared amongst processes ( and threads. )
Provide locking mechanisms restricting simultaneous access.
Free up shared memory and/or dynamically allocate it as needed.
Message passing is simpler and easier, ( particularly for inter-computer communications ), and is generally appropriate for small amounts of data.
Shared memory is faster, and is generally the better approach where large amounts of data are to be shared, ( particularly when most processes are reading the data rather than writing it, or at least when only one or a small number of processes need to change any given data item. )
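A minimal message-passing sketch using a pipe between a parent and child process ( just one of several IPC mechanisms; the message text is arbitrary ):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main( void )
{
    int fd[2];
    char buf[64];

    pipe( fd );                                 /* fd[0] = read end, fd[1] = write end    */

    if ( fork( ) == 0 ) {                       /* child: the receiver                    */
        close( fd[1] );
        ssize_t n = read( fd[0], buf, sizeof buf - 1 );  /* blocks until a message arrives */
        if ( n > 0 ) { buf[n] = '\0'; printf( "child received: %s\n", buf ); }
        close( fd[0] );
        return 0;
    }

    close( fd[0] );                             /* parent: the sender                     */
    const char *msg = "hello from parent";
    write( fd[1], msg, strlen( msg ) );         /* transmit the message                   */
    close( fd[1] );                             /* tear down the connection               */
    wait( NULL );
    return 0;
}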
2.4.6 Protection

Protection provides mechanisms for controlling which users / processes have access to which system resources.
System calls allow the access mechanisms to be adjusted as needed, and for non-privileged users to be granted elevated access permissions under carefully controlled temporary circumstances.
Once only of concern on multi-user systems, protection is now important on all systems, in the age of ubiquitous network connectivity.
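For example, the chmod( ) system call adjusts the access permissions on a file ( the file name and permission bits here are only illustrations ):

#include <stdio.h>
#include <sys/stat.h>

int main( void )
{
    /* owner: read + write; group and others: read only ( octal 0644 ) */
    if ( chmod( "t.t", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH ) < 0 ) {
        perror( "chmod" );
        return 1;
    }
    return 0;
}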
2.5 System Programs

System programs provide OS functionality through separate applications, which are not part of the kernel or command interpreters. They are also known as system utilities or system applications.
Most systems also ship with useful applications such as calculators and simple editors, ( e.g. Notepad ). Some debate arises as to the border between system and non-system applications.
System programs may be divided into these categories:
File management - programs to create, delete, copy, rename, print, list, and generally manipulate files and directories.
Status information - Utilities to check on the date, time, number of users, processes running, data logging, etc. System registries are used to store and recall configuration information for particular applications.
File modification - e.g. text editors and other tools which can change file contents.
Programming-language support - E.g. Compilers, linkers, debuggers, profilers, assemblers, library archive management, interpreters for common languages, and support for make.
Program loading and execution - loaders, dynamic loaders, overlay loaders, etc., as well as interactive debuggers.
Communications - Programs for providing connectivity between processes and users, including mail, web browsers, remote logins, file transfers, and remote command execution.
Background services - System daemons are commonly started when the system is booted, and run for as long as the system is running, handling necessary services. Examples include network daemons, print servers, process schedulers, and system error monitoring services.
Most operating systems today also come complete with a set of application programs to provide additional services, such as copying files or checking the time and date.
Most users' view of the system is determined by their command interpreter and the application programs. Most never make system calls, even through the API, ( with the exception of simple ( file ) I/O in user-written programs. )
2.6 Operating-System Design and Implementation

2.6.1 Design Goals

Requirements define properties which the finished system must have, and are a necessary first step in designing any large complex system.
User requirements are features that users care about and understand, and are written in commonly understood vernacular. They generally do not include any implementation details, and read like the product description one might find on a sales brochure or the outside of a shrink-wrapped box.
System requirements are written for the developers, and include more details about implementation specifics, performance requirements, compatibility constraints, standards compliance, etc. These requirements serve as a "contract" between the customer and the developers, ( and between developers and subcontractors ), and can get quite detailed.
Requirements for operating systems can vary greatly depending on the planned scope and usage of the system. ( Single user / multi-user, specialized system / general purpose, high/low security, performance needs, operating environment, etc. )
2.6.2 Mechanisms and Policies

Policies determine what is to be done. Mechanisms determine how it is to be implemented.
If properly separated and implemented, policy changes can be made easily without re-writing the code, just by adjusting parameters or possibly loading new data / configuration files. One example is the relative priority of background versus foreground tasks, as sketched below.
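A toy sketch of the separation ( everything here, including the sched.conf file name and the quantum values, is hypothetical ): the mechanism is a function that hands out a time slice, while the policy is just a pair of numbers loaded from a configuration file, so changing the scheduling behaviour never requires touching the mechanism's code.

#include <stdio.h>

/* Mechanism: how a time slice is granted.  Unchanged when the policy changes. */
static int time_slice_ms( int foreground, int fg_quantum, int bg_quantum )
{
    return foreground ? fg_quantum : bg_quantum;
}

int main( void )
{
    int fg_quantum = 20, bg_quantum = 10;       /* built-in defaults                      */

    /* Policy: relative treatment of foreground vs. background tasks,         */
    /* read from a ( hypothetical ) configuration file rather than the code.  */
    FILE *cfg = fopen( "sched.conf", "r" );
    if ( cfg ) {
        if ( fscanf( cfg, "%d %d", &fg_quantum, &bg_quantum ) != 2 ) {
            fg_quantum = 20;                    /* keep the defaults on a bad file        */
            bg_quantum = 10;
        }
        fclose( cfg );
    }

    printf( "foreground slice: %d ms\n", time_slice_ms( 1, fg_quantum, bg_quantum ) );
    printf( "background slice: %d ms\n", time_slice_ms( 0, fg_quantum, bg_quantum ) );
    return 0;
}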
2.6.3 Implementation

Traditionally OSes were written in assembly language. This provided direct control over hardware-related issues, but inextricably tied a particular OS to a particular HW platform.
Recent advances in compiler efficiencies mean that most modern OSes are written in C, or more recently, C++. Critical sections of code are still written in assembly language, ( or written in C, compiled to assembly, and then fine-tuned and optimized by hand from there. )
Operating systems may be developed using emulators of the target hardware, particularly if the real hardware is unavailable ( e.g. not built yet ), or not a suitable platform for development, ( e.g. smart phones, game consoles, or other similar devices. )
2.7 Operating-System Structure

For efficient performance and implementation an OS should be partitioned into separate subsystems, each with carefully defined tasks, inputs, outputs, and performance characteristics. These subsystems can then be arranged in various architectural configurations:

2.7.1 Simple Structure

When DOS was originally written its developers had no idea how big and important it would eventually become. It was written by a few programmers in a relatively short amount of time, without the benefit of modern software engineering techniques, and then gradually grew over time to exceed its original expectations. It does not break the system into subsystems, and has no distinction between user and kernel modes, allowing all programs direct access to the underlying hardware. ( Note that user versus kernel mode was not supported by the 8088 chip set anyway, so that really wasn't an option back then. )
The original UNIX OS used a simple layered approach, but almost all the OS was in one big layer, not really breaking the OS down into layered subsystems:
2.7.2 Layered Approach
Another approach is to break the OS into a number of smaller layers, each of which rests on the layer below it, and relies solely on the services provided by the next lower layer.
This approach allows each layer to be developed and debugged independently, with the assumption that all lower layers have already been debugged and are trusted to deliver proper services.
The problem is deciding what order in which to place the layers, as no layer can call upon the services of any higher layer, and so many chicken-and-egg situations may arise.
Layered approaches can also be less efficient, as a request for service from a higher layer has to filter through all lower layers before it reaches the HW, possibly with significant processing at each step.
2.7.3 Microkernels
The basic idea behind microkernels is to remove all non-essential services from the kernel, and implement them as system applications instead, thereby making the kernel as small and efficient as possible.
Most microkernels provide basic process and memory management, and message passing between other services, and not much more.
Security and protection can be enhanced, as most services are performed in user mode, not kernel mode.
System expansion can also be easier, because it only involves adding more system applications, not rebuilding a new kernel.
Mach was the first and most widely known microkernel, and now forms a major component of Mac OSX.
Windows NT was originally microkernel-based, but suffered from performance problems relative to Windows 95. NT 4.0 improved performance by moving more services into the kernel, and now XP is back to being more monolithic.
Another microkernel example is QNX, a real-time OS for embedded systems.
2.7.4 Modules
Modern OS development is object-oriented, with a relatively small core kernel and a set of modules which can be linked in dynamically. See for example the Solaris structure, as shown in Figure 2.13 of the text.
Modules are similar to layers in that each subsystem has clearly defined tasks and interfaces, but any module is free to contact any other module, eliminating the problems of going through multiple intermediary layers, as well as the chicken-and-egg problems.
The kernel is relatively small in this architecture, similar to microkernels, but the kernel does not have to implement message passing since modules are free to contact each other directly.