
Operating System Concepts, Sixth Edition (Silberschatz and Galvin)




You can find your representative at the "Find a Rep?" page. We have created a mailing list consisting of users of our book; if you wish to be on the list, please send a message to avi@bell-labs.com. We would appreciate hearing from you about any textual errors or omissions that you identify. If you would like to suggest improvements or to contribute exercises, we would also be glad to hear from you.

Acknowledgments

This book is derived from the previous editions, the first three of which were coauthored by James Peterson.

We thank the following people who contributed to this edition of the book: Bruce Hillyer reviewed and helped with the rewrite of Chapters 2, 12, 13, and 14; Mike Reiter reviewed and helped with the rewrite of another chapter. Parts of Chapter 14 were derived from a paper by Hillyer and Silberschatz []. Parts of Chapter 17 were derived from a paper by Levy and Silberschatz [].

Chapter 20 was derived from an unpublished manuscript by Stephen Tweedie. Chapter 21 was derived from an unpublished manuscript by Cliff Martin. Mike Shapiro reviewed the Solaris information, and Jim Mauro answered several Solaris-related questions. We also thank the many people who reviewed this edition of the book.

They were both assisted by Susannah Barr, who managed the many details of this project smoothly. Katherine Hepburn was our Marketing Manager. The cover illustrator was Susan Cyr, while the cover designer was Madelyn Lesure. Barbara Heaney was in charge of overseeing the copy-editing, and Katie Habib copyedited the manuscript.

The freelance proofreader was Katrina Avery; the freelance indexer was Rosemary Simpson. Marilyn Turnamian helped generate figures and update the text, Instructor's Manual, and slides. Finally, we would like to add some personal notes.

Avi would like to extend his gratitude to Krystyna Kwiecien, whose devoted care of his mother has given him the peace of mind he needed to focus on the writing of this book; Pete would like to thank Harry Kasparian and his other co-workers, who gave him the freedom to work on this project while doing his "real job"; Greg would like to acknowledge two significant achievements by his children during the period he worked on this text: Tom (age 5) learned to read, and Jay (age 2) learned to talk.

Part One

An operating system is a program that acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.

We trace the development of operating systems from the first hands-on systems, through multiprogrammed and time-shared systems, to PCs and handheld computers.

Understanding the evolution of operating systems gives us an appreciation for what an operating system does and how it does it.

The operating system must ensure the correct operation of the computer system. To prevent user programs from interfering with the proper operation of the system, the hardware must provide appropriate mechanisms. We describe the basic computer architecture that makes it possible to write a correct operating system. The operating system provides certain services to programs and to the users of those programs in order to make their tasks easier.

The services differ from one operating system to another, but we identify and explore some common classes of these services.

Chapter 1: Introduction

An operating system is a program that manages the computer hardware. It also provides a basis for application programs and acts as an intermediary between a user of a computer and the computer hardware.

An amazing aspect of operating systems is how varied they are in accomplishing these tasks. Mainframe operating systems are designed primarily to optimize utilization of hardware. Personal computer (PC) operating systems support complex games, business applications, and everything in between.

Handheld computer operating systems are designed to provide an environment in which a user can easily interface with the computer to execute programs. Thus, some operating systems are designed to be convenient, others to be efficient, and others some combination of the two. To understand what operating systems are, we must first understand how they have developed.

In this chapter, we trace the development of operating systems from the first hands-on systems through multiprogrammed and time-shared systems to PCs and handheld computers. We also discuss operating system variations, such as parallel, real-time, and embedded systems. As we move through the various stages, we see how the components of operating systems evolved as natural solutions to problems in early computer systems.

What Is an Operating System?

An operating system is an important part of almost every computer system. A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users. The hardware (the central processing unit (CPU), the memory, and the input/output (I/O) devices) provides the basic computing resources. The application programs (such as word processors, spreadsheets, compilers, and web browsers) define the ways in which these resources are used to solve the computing problems of the users.

The operating system controls and coordinates the use of the hardware among the various application programs for the various users. The components of a computer system are its hardware, software, and data. The operating system provides the means for the proper use of these resources in the operation of the computer system. An operating system is similar to a government. Like a government, it performs no useful function by itself.

It simply provides an environment within which other programs can do useful work. Operating systems can be explored from two viewpoints: that of the user and that of the system. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work or play that the user is performing.

In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and little paid to resource utilization. Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. Other users sit at workstations, connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers (file, compute, and print servers).

Therefore, their operating system is designed to compromise between individual usability and resource utilization. Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or more often through wireless modems. Due to power and interface limitations, they perform relatively few remote operations.

The operating systems are designed mostly for individual usability, but performance per amount of battery life is important as well. Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have a numeric keypad, and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.


We can view an operating system as a resource allocator. A computer system has many resources, both hardware and software, that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these resources. Facing numerous and possibly conflicting requests for resources, the operating system must decide how to allocate them to specific programs and users so that it can operate the computer system efficiently and fairly. An operating system is a control program.

A control program manages the execution of user programs to prevent errors and improper use of the computer. In general, however, we have no completely adequate definition of an operating system.

Operating systems exist because they are a reasonable way to solve the problem of creating a usable computing system. The fundamental goal of computer systems is to execute user programs and to make solving user problems easier. Toward this goal, computer hardware is constructed. Since bare hardware alone is not particularly easy to use, application programs are developed. The common functions of controlling and allocating resources are then brought together into one piece of software: the operating system. In addition, we have no universally accepted definition of what is part of the operating system.

A simple viewpoint is that it includes everything a vendor ships when you order "the operating system," but the features included vary greatly across systems. Some systems take up less than 1 megabyte of space and lack even a full-screen editor, whereas others require hundreds of megabytes of space and are entirely based on graphical windowing systems.

A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs. This last definition is the one that we generally follow. The matter of what constitutes an operating system is becoming important.

In 1998, the United States Department of Justice filed suit against Microsoft, in essence claiming that Microsoft included too much functionality in its operating systems and thus prevented competition from application vendors.

The primary goal of some operating systems is convenience for the user. Operating systems exist because they are supposed to make it easier to compute with them than without them. This view is particularly clear when you look at operating systems for small PCs. The primary goal of other operating systems is efficient operation of the computer system. This is the case for large, shared, multiuser systems. These systems are expensive, so it is desirable to make them as efficient as possible.

These two goals, convenience and efficiency, are sometimes contradictory. In the past, efficiency was often more important than convenience. Thus, much of operating-system theory concentrates on optimal use of computing resources.

Operating systems have also evolved over time. For example, UNIX started with a keyboard and printer as its interface, limiting how convenient it could be for the user. Over time, hardware changed, and UNIX was ported to new hardware with more user-friendly interfaces; graphical user interfaces were added as well. The design of an operating system is a complex task.

Designers face many tradeoffs in the design and implementation, and many people are involved not only in bringing an operating system to fruition, but also in constantly revising and updating it. How well any given operating system meets its design goals is open to debate, and is subjective to the different users of the operating system.

To see what operating systems are and what they do, let us consider how they have developed over the past 45 years. By tracing that evolution, we can identify the common elements of operating systems, and see how and why these systems have developed as they have. Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems.

Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features. In this section, we trace the growth of mainframe systems from simple batch systems, where the computer runs one (and only one) application, to time-shared systems, which allow for user interaction with the computer system.

The common input devices were card readers and tape drives. The common output devices were line printers, tape drives, and card punches. The user did not interact directly with the computer systems. Rather, the user prepared a job, which consisted of the program, the data, and some control information about the nature of the job (control cards), and submitted it to the computer operator.

The job was usually in the form of punch cards. At some later time (after minutes, hours, or days), the output appeared. The output consisted of the result of the program, as well as a dump of the final memory and register contents for debugging.

The operating system in these early computers was fairly simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory (Figure 1.2). To speed up processing, operators batched together jobs with similar needs and ran them through the computer as a group. Thus, the programmers would leave their programs with the operator. The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch.

The output from each job would be sent back to the appropriate programmer. Even a slow CPU works in the microsecond range, with thousands of instructions executed per second. A fast card reader, on the other hand, might read 1200 cards per minute (20 cards per second).

A fast card reader, on the other hand, might read cards per minute or 20 cards per second. However, CPU speeds increased to an even greater extent, so the problem was not only unresolved,but exacerbated. The introduction of disk technology allowed the operating system to keep all jobson a disk, rather than in a serialcard reader.

With direct access to several jobs, the operating system could perform job scheduling, to use resources and perform tasks efficiently. We discuss a few important aspects of job and CPU scheduling here; we discuss them in detail in Chapter 6. The most important aspect of job scheduling is the ability to multiprogram, which increases CPU utilization by organizing jobs so that the CPU always has one to execute. The idea is as follows: the operating system keeps several jobs in memory simultaneously (Figure 1.3).

This set of jobs is a subset of the jobs kept in the job pool, since the number of jobs that can be kept simultaneously in memory is usually much smaller than the number of jobs that can be in the job pool.

The operating system picks and begins to execute one of the jobs in memory. Eventually, the job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle. In a multiprogramming system, the operating system simply switches to, and executes, another job. When that job needs to wait, the CPU is switched to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back. As long as at least one job needs to execute, the CPU is never idle.
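
As a rough illustration of this idea (not code from the book), the following C sketch simulates a few jobs that alternate between CPU bursts and I/O waits. Whenever the running job blocks, a simple dispatcher picks the next ready job, so the CPU stays busy as long as any job can run. All job parameters are invented for the example.

    #include <stdio.h>

    /* Toy model: each job alternates CPU bursts with I/O waits.      */
    /* All job parameters below are invented purely for illustration. */
    struct job {
        int id;
        int bursts_left;   /* CPU bursts still to execute           */
        int io_ticks_left; /* > 0 means the job is waiting for I/O  */
    };

    int main(void)
    {
        struct job jobs[3] = { { 1, 3, 0 }, { 2, 2, 0 }, { 3, 4, 0 } };
        int n = 3, finished = 0;

        while (finished < n) {
            int ran = 0;

            /* Dispatcher: run the first job that is ready. */
            for (int i = 0; i < n && !ran; i++) {
                struct job *j = &jobs[i];
                if (j->bursts_left == 0 || j->io_ticks_left > 0)
                    continue;                  /* done, or blocked on I/O */

                printf("CPU runs job %d\n", j->id);
                if (--j->bursts_left == 0) {
                    printf("job %d finished\n", j->id);
                    finished++;
                } else {
                    j->io_ticks_left = 2;      /* job now waits for I/O   */
                }
                ran = 1;
            }

            if (!ran)
                printf("CPU idle: every job is waiting for I/O\n");

            /* One tick of simulated I/O progress for waiting jobs. */
            for (int i = 0; i < n; i++)
                if (jobs[i].io_ticks_left > 0)
                    jobs[i].io_ticks_left--;
        }
        return 0;
    }

Running the sketch shows the CPU moving from job to job while earlier jobs wait for their simulated I/O, which is exactly the utilization gain multiprogramming is after.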

This idea is common in other life situations. A lawyer does not work for only one client at a time. While one case is waiting to go to trial or have papers typed, the lawyer can work on another case. If she has enough clients, the lawyer will never be idle for lack of work. Idle lawyers tend to become politicians, so there is a certain social value in keeping lawyers busy.

Multiprogramming is the first instance where the operating system must make decisions for the users. Multiprogrammed operating systems are therefore fairly sophisticated. All the jobs that enter the system are kept in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory.

If several jobs are ready to be brought into memory, and if there is not enough room for all of them, then the system must choose among them. Making this decision is job scheduling, which is discussed in Chapter 6. When the operating system selects a job from the job pool, it loads that job into memory for execution. Having several programs in memory at the same time requires some form of memory management, which is covered in Chapters 9 and 10. In addition, if several jobs are ready to run at the same time, the system must choose among them.

Making this decision is CPU scheduling, which is discussed in Chapter 6. Finally, multiple jobs running concurrently require that their ability to affect one another be limited in all phases of the operating system, including process scheduling, disk storage, and memory management. These considerations are discussed throughout the text. Time sharing (or multitasking) is a logical extension of multiprogramming.

The CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.

An interactive (or hands-on) computer system provides direct communication between the user and the system. The user gives instructions to the operating system or to a program directly, using a keyboard or a mouse, and waits for immediate results. Accordingly, the response time should be short, typically within 1 second or so. A time-shared operating system allows many users to share the computer simultaneously.

Since each action or command in a time-shared system tends to be short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to her use, even though it is being shared among many users.


A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Each user has at least one separate program in memory.

A program loaded into memory and executing is commonly referred to as a process. Input, for example, may be bounded by the user's typing speed; seven characters per second is fast for people, but incredibly slow for computers. Rather than let the CPU sit idle when this interactive input takes place, the operating system will rapidly switch the CPU to the program of some other user.
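
The round-robin technique behind time slicing can be sketched in a few lines of C. This is an illustrative simulation with made-up quantum and burst lengths, not the book's code: each process receives at most one quantum of CPU time before the next process is scheduled, which is what gives every user frequent turns.

    #include <stdio.h>

    #define QUANTUM 2   /* time-slice length, arbitrary units (illustrative) */

    int main(void)
    {
        /* Remaining CPU time needed by three hypothetical processes. */
        int remaining[3] = { 5, 3, 7 };
        int n = 3, done = 0, clock = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;                       /* already finished */

                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                clock += slice;
                remaining[i] -= slice;
                printf("t=%2d  process %d ran for %d unit(s)\n",
                       clock, i, slice);

                if (remaining[i] == 0) {
                    printf("t=%2d  process %d finished\n", clock, i);
                    done++;
                }
            }
        }
        return 0;
    }

With a short enough quantum, every process makes visible progress within a fraction of a second, which is the effect the response-time requirement above demands.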

Time-sharing operating systems are even more complex than multiprogrammed operating systems. In both, several jobs must be kept simultaneously in memory, so the system must have memory management and protection (Chapter 9). To obtain a reasonable response time, jobs may have to be swapped in and out of main memory to the disk that now serves as a backing store for main memory. A common method for achieving this goal is virtual memory, which is a technique that allows the execution of a job that may not be completely in memory (Chapter 10). The main advantage of the virtual-memory scheme is that it enables users to run programs that are larger than actual physical memory. Further, it abstracts main memory into a large, uniform array of storage, separating logical memory as viewed by the user from physical memory.

This arrangement frees programmers from concern over memory-storage limitations. Time-sharing systems must also provide a file system (Chapters 11 and 12). The file system resides on a collection of disks; hence, disk management must be provided (Chapter 14). Also, time-sharing systems provide a mechanism for concurrent execution, which requires sophisticated CPU-scheduling schemes (Chapter 6). To ensure orderly execution, the system must provide mechanisms for job synchronization and communication (Chapter 7), and it may ensure that jobs do not get stuck in a deadlock, forever waiting for one another (Chapter 8).

The idea of time sharing was demonstrated as early as 1960, but since time-shared systems are difficult and expensive to build, they did not become common until the early 1970s. Although some batch processing is still done, most systems today are time sharing. Accordingly, multiprogramming and time sharing are the central themes of modern operating systems, and they are the central themes of this book.

Desktop Systems

Personal computers (PCs) appeared in the 1970s. During their first decade, the CPUs in PCs lacked the features needed to protect an operating system from user programs. PC operating systems therefore were neither multiuser nor multitasking.

However, the goals of these operating systems have changed with time; instead of maximizing CPU and peripheral utilization, the systems opt for maximizing user convenience and responsiveness. The Apple Macintosh operating system has been ported to more advanced hardware, and now includes new features, such as virtual memory and multitasking.

Operating systems for these computers have benefited in several ways from the development of operating systems for mainframes. Microcomputers were immediately able to adopt some of the technology developed for larger operating systems. On the other hand, the hardware costs for microcomputers are sufficiently low that individuals have sole use of the computer, and CPU utilization is no longer a prime concern.

Thus, some of the design decisions made in operating systems for mainframes may not be appropriate for smaller systems. For example, file protection was, at first, not necessary on a personal machine. However, these computers are now often tied into other computers over local-area networks or other Internet connections. When other computers and other users can access the files on a PC, file protection again becomes a necessary feature of the operating system.

The lack of such protection has made it easy for malicious programs to destroy data on systems such as MS-DOS and the Macintosh operating system. These programs may be self-replicating, and may spread rapidly via worm or virus mechanisms and disrupt entire companies or even worldwide networks. Advanced time-sharing features such as protected memory and file permissions are not enough, on their own, to safeguard a system from attack.

Recent security breaches have shown that time and again. These topics are discussed in Chapters 18 and 19.

Multiprocessor Systems

Most systems to date are single-processor systems; that is, they have only one main CPU. However, multiprocessor systems (also known as parallel systems or tightly coupled systems) are growing in importance.

Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices. Multiprocessor systems have three main advantages. The first is increased throughput: by increasing the number of processors, we hope to get more work done in less time. The speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly.

This overhead, plus contention for shared resources, lowers the expected gain from additional processors. Similarly, a group of N programmers working closely together does not result in N times the amount of work being accomplished.
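
One common way to quantify why the speed-up ratio is less than N is Amdahl's law, which the book does not state here but which fits the point: if a fraction s of a job is inherently serial, N processors can speed it up by at most 1 / (s + (1 - s) / N). The short C program below evaluates that bound for a hypothetical 10 percent serial fraction.

    #include <stdio.h>

    /* Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the  */
    /* serial fraction of the work. The value of s is hypothetical.      */
    static double speedup(double s, int n)
    {
        return 1.0 / (s + (1.0 - s) / n);
    }

    int main(void)
    {
        double s = 0.10;   /* assume 10% of the work cannot be parallelized */

        for (int n = 1; n <= 16; n *= 2)
            printf("%2d processors -> speedup %.2f (not %d)\n",
                   n, speedup(s, n), n);
        return 0;
    }

Even before counting contention and coordination overhead, 16 processors yield a speedup of only about 6.4 under this assumed serial fraction.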

The second advantage is economy of scale: multiprocessor systems can save money compared with multiple single-processor systems, because they can share peripherals, mass storage, and power supplies.

If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them, than to have many computers with local disks and many copies of the data.

The third advantage is increased reliability: if functions can be distributed properly among several processors, then the failure of one processor will not halt the system, but only slow it down. If we have ten processors and one fails, then each of the remaining nine processors must pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.

This ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant. Continued operation in the presence of failures requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected. The Tandem system uses both hardware and software duplication to ensure continued operation despite faults.

The system consists of two identical processors, each with its own local memory. The processors are connected by a bus. One processor is the primary and the other is the backup. Two copies are kept of each process: one on the primary machine and one on the backup. At fixed checkpoints in the execution of the system, the state information of each job, including a copy of the memory image, is copied from the primary machine to the backup. If a failure is detected, the backup copy is activated and is restarted from the most recent checkpoint.
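
A drastically simplified, single-machine analogue of this checkpointing scheme can be written in C: at fixed points the "primary" copies its state into a "backup" area, and recovery restores the most recent checkpoint. Everything here (the state structure, the failure point) is invented to illustrate the idea; it is not the Tandem design.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical process state; a real system would copy the entire */
    /* memory image and register contents of the job.                  */
    struct state {
        int  step;
        long total;
    };

    static struct state primary;   /* state on the primary machine     */
    static struct state backup;    /* stands in for the backup machine */

    static void checkpoint(void)
    {
        memcpy(&backup, &primary, sizeof primary);
        printf("checkpoint taken at step %d\n", primary.step);
    }

    static void restore(void)
    {
        memcpy(&primary, &backup, sizeof backup);
        printf("restarted from checkpoint at step %d\n", primary.step);
    }

    int main(void)
    {
        int failed_once = 0;

        for (primary.step = 1; primary.step <= 10; primary.step++) {
            primary.total += primary.step;       /* the "real work"      */

            if (primary.step % 3 == 0)
                checkpoint();                    /* fixed checkpoints    */

            if (primary.step == 8 && !failed_once) {
                failed_once = 1;                 /* simulate one failure */
                printf("failure detected at step %d\n", primary.step);
                restore();                       /* fall back to backup  */
            }
        }
        printf("final total: %ld\n", primary.total);
        return 0;
    }

After the simulated failure, the work done since the last checkpoint is simply redone, which mirrors the restart-from-checkpoint behavior described above.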

This solution is expensive, since it involves considerable hardware duplication. The most common multiple-processor systems now use symmetric multiprocessing (SMP), in which each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. Some systems use asymmetric multiprocessing, in which each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instruction or have predefined tasks.

This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors. SMP means that all processors are peers; no master-slave relationship exists between processors. Each processor concurrently runs a copy of the operating system (Figure 1.4). Such a computer can be configured to employ dozens of processors, all running copies of UNIX. The benefit of this model is that many processes can run simultaneously (N processes can run if there are N CPUs) without causing a significant deterioration of performance.

Also, since the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies. These inefficiencies can be avoided if the processors share certain data structures.

A multiprocessor system of this form will allow processes and resources, such as memory, to be shared dynamically among the various processors, and can lower the variance among the processors.

Such a system must be written carefully, as we shall see in Chapter 7. The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software. Special hardware can differentiate the multiple processors, or the software can be written to allow only one master and multiple slaves. For instance, Sun's operating system SunOS Version 4 provides asymmetric multiprocessing, whereas Version 5 (Solaris 2) is symmetric on the same hardware.
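
The caution about writing such systems carefully can be made concrete with a small POSIX-threads sketch, with threads standing in for processors. Without the mutex, the two threads race on the shared counter; with it, updates to the shared structure are serialized. This is a generic illustration, not an example from the book.

    #include <pthread.h>
    #include <stdio.h>

    /* A shared data structure touched by two "processors" (threads). */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* protect the shared update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Always 2000000 with the lock; without it, usually less. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Compile with the -pthread flag on most systems. Chapter 7 develops the synchronization mechanisms behind this kind of protection in detail.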

As microprocessors become less expensive and more powerful, additional operating-system functions are off-loaded to slave processors or back-ends.

For example, it is fairly easy to add a microprocessor with its own memory to manage a disk system. The microprocessor could receive a sequence of requests from the main CPU and implement its own disk queue and scheduling algorithm. This arrangement relieves the main CPU of the overhead of disk scheduling. PCs contain a microprocessor in the keyboard to convert the keystrokes into codes to be sent to the CPU. In fact, this use of microprocessors has become so common that it is no longer considered multiprocessing.
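
To make the disk-queue idea concrete, here is a small C sketch (not from the book) of the kind of scheduling such a back-end processor might perform: requests accumulate in a queue, and the controller repeatedly services whichever pending request is closest to the current head position, a shortest-seek-time-first policy. The request list and cylinder numbers are invented.

    #include <stdio.h>
    #include <stdlib.h>

    #define NREQ 6

    int main(void)
    {
        /* Pending disk requests (cylinder numbers); values invented. */
        int pending[NREQ] = { 98, 183, 37, 122, 14, 65 };
        int served[NREQ]  = { 0 };          /* 1 once a request is done */
        int head = 53;                      /* current head position    */

        for (int done = 0; done < NREQ; done++) {
            int best = -1, best_dist = 0;

            /* Pick the pending request with the shortest seek distance. */
            for (int i = 0; i < NREQ; i++) {
                if (served[i])
                    continue;
                int dist = abs(pending[i] - head);
                if (best < 0 || dist < best_dist) {
                    best = i;
                    best_dist = dist;
                }
            }

            printf("seek from %3d to %3d (distance %d)\n",
                   head, pending[best], best_dist);
            head = pending[best];
            served[best] = 1;
        }
        return 0;
    }

Disk-scheduling policies like this are covered properly in the mass-storage chapter; the point here is only that the work can live on the controller rather than the main CPU.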

Distributed Systems

Distributed systems depend on networking for their functionality. By being able to communicate, distributed systems are able to share computational tasks, and provide a rich set of features to users. Networks vary by the protocols used, the distances between nodes, and the transport media. Likewise, operating-system support of protocols varies. Some systems support proprietary protocols to suit their needs.

To an operating system, a network protocol simply needs an interface device (a network adapter, for example) with a device driver to manage it, and software to package data in the communications protocol to send it and to unpackage it to receive it.
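
The "package to send, unpackage to receive" step can be illustrated with a toy message format in C. The header layout here is entirely hypothetical; real protocol stacks define their own headers, byte ordering, and checksums.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A made-up wire format: 2-byte type, 2-byte length, then payload. */
    static size_t package(uint8_t *buf, uint16_t type,
                          const char *payload, uint16_t len)
    {
        buf[0] = type >> 8;  buf[1] = type & 0xff;   /* big-endian type   */
        buf[2] = len  >> 8;  buf[3] = len  & 0xff;   /* big-endian length */
        memcpy(buf + 4, payload, len);
        return 4 + (size_t)len;
    }

    static void unpackage(const uint8_t *buf)
    {
        uint16_t type = (uint16_t)((buf[0] << 8) | buf[1]);
        uint16_t len  = (uint16_t)((buf[2] << 8) | buf[3]);
        printf("received type %u, %u byte(s): %.*s\n",
               type, len, (int)len, (const char *)(buf + 4));
    }

    int main(void)
    {
        uint8_t wire[64];
        const char *msg = "hello";

        size_t n = package(wire, 7, msg, (uint16_t)strlen(msg));
        printf("packaged %zu byte(s)\n", n);
        unpackage(wire);        /* the receiving side reverses the work */
        return 0;
    }

The sending and receiving hosts must agree on the format, which is precisely what a protocol is.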

These concepts are discussed throughout the book. Networks are typecast based on the distances between their nodes.

A local-area network (LAN) exists within a room, a floor, or a building. A wide-area network (WAN) usually exists between buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide. These networks could run one protocol or several protocols. The continuing advent of new technologies brings about new forms of networks. For example, BlueTooth devices communicate over a short distance of several feet, in essence creating a small-area network. The media to carry networks are equally varied.

They include copper wires, fiber strands, and wireless transmissions between satellites, microwave dishes, and radios. When computing devices are connected to cellular phones, they create a network. Even very short-range infrared communication can be used for networking.

At a rudimentary level, whenever computers communicate they use or create a network. These networks also vary by their performance and reliability. Terminals connected to centralized systems are now being supplanted by PCs.

Correspondingly, user-interface functionality that used to be handled directly by the centralized systems is increasingly being handled by the PCs. As a result, centralized systems today act as server systems to satisfy requests generated by client systems.

The general structure of a client-server system is depicted in Figure 1.5. Server systems can be broadly categorized as compute servers and file servers. Compute-server systems provide an interface to which clients can send requests to perform an action; in response, they execute the action and send back results to the client. File-server systems provide a file-system interface where clients can create, update, read, and delete files.
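
A minimal compute-server sketch, using POSIX sockets in C under invented conventions (port 5000, a one-line decimal request, the square returned as the response), shows the request/response shape described above. A production server would add error handling, concurrency, and a real protocol.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Toy compute server: reads one decimal number per connection and  */
    /* writes back its square. Port 5000 and the "protocol" are made up.*/
    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);

        if (listener < 0 ||
            bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(listener, 5) < 0) {
            perror("server setup");
            return 1;
        }

        for (;;) {
            char buf[64];
            int client = accept(listener, NULL, NULL);
            ssize_t n = read(client, buf, sizeof buf - 1);

            if (n > 0) {
                buf[n] = '\0';
                long x = strtol(buf, NULL, 10);          /* the request */
                int len = snprintf(buf, sizeof buf, "%ld\n", x * x);
                write(client, buf, (size_t)len);         /* the result  */
            }
            close(client);          /* one request per connection       */
        }
    }

With netcat installed, a command such as printf 7 | nc localhost 5000 should print 49, assuming the port is free; a file server follows the same pattern but exposes create, read, update, and delete operations instead.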

With the beginning of widespread public use of the Internet in the 1990s for electronic mail, ftp, and gopher, many PCs became connected to computer networks.

With the introduction of the Web in the mid-1990s, network connectivity became an essential component of a computer system. Virtually all modern PCs and workstations are capable of running a web browser for accessing hypertext documents on the Web. Several operating systems now include the web browser itself, as well as electronic mail, remote login, and file-transfer clients and servers. In contrast to the tightly coupled systems discussed in Section 1.4, the processors in a distributed system do not share memory or a clock. Instead, each processor has its own local memory.

The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems, or distributed systems. Some operating systems have taken the concept of networks and distributed systems further than the notion of providing network connectivity. A network operating system is an operating system that provides features such as file sharing across the network, and that includes a communication scheme that allows different processes on different computers to exchange messages.

A computer running a network operating system acts autonomously from all other computers on the network, although it is aware of the network and is able to communicate with other networked computers. A distributed operating system is a less autonomous environment: the different operating systems communicate closely enough to provide the illusion that only a single operating system controls the network.

We cover computer networks and distributed systems in Chapters 15 through 17.

Clustered Systems

Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work. Clustered systems differ from parallel systems, however, in that they are composed of two or more individual systems coupled together.

The definition of the term clustered is not concrete; many commercial packages wrestle with what a clustered system is, and why one form is better than another. The generally accepted definition is that clustered computers share storage and are closely linked via LAN networking. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others over the LAN. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The failed machine can remain down, but the users and clients of the application would only see a brief interruption of service.
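
A bare-bones sketch of this node-monitoring idea in C, with all details invented: the monitoring node tracks the time of the last heartbeat it has seen from its partner and declares a failover once that heartbeat is older than a timeout. A real cluster layer would receive heartbeats over the LAN rather than simulate them.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define HEARTBEAT_TIMEOUT 3   /* seconds of silence before failover */

    int main(void)
    {
        time_t last_heartbeat = time(NULL);

        for (int tick = 0; tick < 10; tick++) {
            /* Pretend heartbeats arrive only for the first few ticks. */
            if (tick < 4)
                last_heartbeat = time(NULL);

            if (time(NULL) - last_heartbeat > HEARTBEAT_TIMEOUT) {
                printf("partner declared failed; taking over its work\n");
                break;
            }
            printf("tick %d: partner node looks healthy\n", tick);
            sleep(1);             /* poll once per second */
        }
        return 0;
    }

The same loop, run on both nodes, gives the symmetric mode described below; run on only the standby node, it gives the asymmetric (hot-standby) mode.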

In asymmetric clustering, one machine is in hot standby mode while the other is running the applications. The hot standby host machine does nothing but monitor the active server.

If that server fails, the hot standby host becomes the active server. In symmetric mode, two or more hosts are running applications, and they are monitoring each other. This mode is obviously more efficient, as it uses all of the available hardware. It does require that more than one application be available to run. Other forms of clusters include parallel clusters and clustering over a WAN. Parallel clusters allow multiple hosts to access the same data on the shared storage.

Because most operating systems lack support for this simultaneous data access by multiple hosts, parallel clusters are usually accomplished by special versions of software and special releases of applications. For example, Oracle Parallel Server is a version of Oracle's database that has been designed to run on parallel clusters. Each machine runs Oracle, and a layer of software tracks access to the shared disk.

Each machine has full access to all data in the database. In spite of improvements in distributed computing, most systems do not offer general-purpose distributed file systems. Therefore, most clusters do not allow shared access to data on the disk.

For this, distributed file systems must provide access control and locking to the files to ensure no conflicting operations occur. This type of service is commonly known as a distributed lock manager (DLM).
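
On a single machine, POSIX advisory locking via fcntl() provides the same kind of access control and locking that a distributed lock manager provides across hosts. The sketch below (file name and locked region invented) acquires an exclusive write lock before updating a shared file; a real DLM coordinates such locks over the network.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical shared file that several processes update. */
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct flock lk;
        memset(&lk, 0, sizeof lk);
        lk.l_type   = F_WRLCK;    /* exclusive lock                    */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 0;          /* 0 means "lock the whole file"     */

        if (fcntl(fd, F_SETLKW, &lk) < 0) {  /* wait until lock granted */
            perror("fcntl");
            return 1;
        }

        write(fd, "update\n", 7);            /* the protected operation */

        lk.l_type = F_UNLCK;                 /* release the lock        */
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }

Two copies of this program run one after the other rather than interleaving their updates, which is exactly the conflict avoidance the text describes.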

Work is ongoing for general-purpose distributed file systems, with vendors like Sun Microsystems announcing roadmaps for delivery of a DLM within the operating system. Cluster technology is rapidly changing.

Cluster directions include global clusters, in which the machines could be anywhere in the world (or anywhere a WAN reaches). Such projects are still the subject of research and development. Clustered system use and features should expand greatly as storage-area networks (SANs), described in Chapter 14, become prevalent. SANs allow easy attachment of multiple hosts to multiple storage units.

Current clusters are usually limited to two or four hosts due to the complexity of connecting the hosts to shared storage.

Real-Time Systems

Another form of a special-purpose operating system is the real-time system.

A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer.


The computer must analyze the data and possibly adjust controls to modify the sensor inputs. Systems that control scientific experiments, medical imaging systems, industrial control systems, and certain display systems are real-time systems. Some automobile-engine fuel-injection systems, home-appliance controllers, and weapon systems are also real-time systems.

A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building.

Contrast this requirement to a time-sharing system, where it is desirable (but not mandatory) to respond quickly, or to a batch system, which may have no time constraints at all. Real-time systems come in two flavors: hard and soft. A hard real-time system guarantees that critical tasks be completed on time.

This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it. Such time constraints dictate the facilities that are available in hard real-time systems. Secondary storage of any sort is usually limited or missing, with data instead being stored in short-term memory or in read-only memory (ROM). Contrast this with a typical PC or workstation, which may have several hundred megabytes of memory.

On most systems, an interrupt must transfer control to the appropriate interrupt service routine. The straightforward method for handling this transfer would be to invoke a generic routine to examine the interrupt information; the routine, in turn, would call the interrupt-specific handler.
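
That generic-routine-plus-handler idea is easy to sketch in C: an array of function pointers is indexed by the interrupt number, and a generic dispatcher calls through it. The interrupt numbers and handlers below are invented for illustration; a real kernel wires such a table to the hardware's interrupt vector.

    #include <stdio.h>

    #define NVECTORS 4

    static void timer_handler(void)    { printf("timer tick handled\n"); }
    static void keyboard_handler(void) { printf("keystroke handled\n");  }
    static void disk_handler(void)     { printf("disk I/O complete\n");  }

    /* Interrupt vector: one handler per (hypothetical) interrupt number. */
    static void (*vector[NVECTORS])(void) = {
        timer_handler,      /* 0 */
        keyboard_handler,   /* 1 */
        disk_handler,       /* 2 */
        NULL,               /* 3: unused */
    };

    /* The generic routine: examine the interrupt number, then call the  */
    /* interrupt-specific handler.                                       */
    static void dispatch(int irq)
    {
        if (irq >= 0 && irq < NVECTORS && vector[irq] != NULL)
            vector[irq]();
        else
            printf("spurious interrupt %d ignored\n", irq);
    }

    int main(void)
    {
        dispatch(0);   /* simulate a timer interrupt    */
        dispatch(2);   /* simulate a disk interrupt     */
        dispatch(3);   /* simulate an unexpected vector */
        return 0;
    }

In a hard real-time system, the worst-case path through such a dispatcher is one of the delays that must be bounded.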



All code examples have been rewritten and are now in C.
