
UNIX OVERVIEW

Originally, Unix was meant to be a programmer's workbench rather than a platform for running application software.

The system grew larger as the operating system spread through academic circles and as users added their own tools to the system and shared them with colleagues. Unix was designed to be portable, multi-tasking, and multi-user in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a large number of software tools, small programs that can be strung together through a command line interpreter using pipes, as opposed to using a single monolithic program that includes all of the same functionality. These concepts are collectively known as the Unix philosophy. Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves."

Unix operating systems are widely used in servers, workstations, and mobile devices. The Unix environment and the client-server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system. As a result, Unix became synonymous with open systems.

Under Unix, the operating system consists of many utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space. The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities.

In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file model worked quite well, as most I/O was linear. However, modern systems include networking and other new devices. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O was added, and the set of inter-process communication mechanisms was augmented with Unix domain sockets, shared memory, message queues, and semaphores. Functions such as network protocols were moved out of the kernel.
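As a small, hedged illustration of the non-blocking I/O mentioned above, the following sketch sets the O_NONBLOCK flag on standard input with the POSIX fcntl() call, so that a read() which would otherwise wait returns immediately with EAGAIN. This is a generic POSIX example, not code from any particular historical Unix; the choice of standard input and the 128-byte buffer are arbitrary.

```c
/* Minimal sketch: non-blocking read from standard input (POSIX).
 * If no data is available, read() returns -1 with errno == EAGAIN
 * instead of blocking the process. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Fetch the current file status flags and add O_NONBLOCK. */
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    if (flags == -1 || fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK) == -1) {
        perror("fcntl");
        return 1;
    }

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes\n", n);
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        printf("no input available yet, read() did not block\n");
    else
        perror("read");
    return 0;
}
```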

UNIX HISTORY

The Unix operating system found its beginnings in MULTICS, which stands for Multiplexed Information and Computing Service. The MULTICS project began in the mid-1960s as a joint effort by General Electric, the Massachusetts Institute of Technology, and Bell Laboratories. In 1969 Bell Laboratories pulled out of the project. One of the Bell Laboratories people involved in the project was Ken Thompson. He liked the potential MULTICS had, but felt it was too complex and that the same thing could be done in a simpler way. In 1969 he wrote the first version of Unix, called UNICS, which stood for Uniplexed Information and Computing Service. Although the operating system has changed, the name stuck and was eventually shortened to Unix.

Ken Thompson teamed up with Dennis Ritchie, who wrote the first C compiler. In 1973 they rewrote the Unix kernel in C. The following year a version of Unix known as the Fifth Edition was first licensed to universities. The Seventh Edition, released in 1978, served as a dividing point for two divergent lines of Unix development. These two branches are known as SVR4 (System V) and BSD.

Ken Thompson spent a year's sabbatical at the University of California at Berkeley. While there, he and two graduate students, Bill Joy and Chuck Haley, wrote the first Berkeley version of Unix, which was distributed to students. This resulted in the source code being worked on and developed by many different people. The Berkeley version of Unix is known as BSD, the Berkeley Software Distribution. From BSD came the vi editor, the C shell, virtual memory, Sendmail, and support for TCP/IP. For several years SVR4 was the more conservative, commercial, and well-supported branch. Today SVR4 and BSD look very much alike. Probably the biggest cosmetic difference between them is the way the ps command functions. The Linux operating system was developed as a Unix look-alike and has a user command interface that resembles SVR4.

MODERN UNIX SYSTEMS

As UNIX evolved, the number of different implementations proliferated, each providing some useful features. There was a need to produce a new implementation that unified many of the important innovations, added other modern operating system design features, and produced a more modular architecture. The result is the typical structure of a modern UNIX kernel: a small core of facilities, written in a modular fashion, that provides the functions and services needed by a number of operating system processes, surrounded by outer layers of functions and interfaces that may be implemented in a variety of ways. We now turn to some examples of modern UNIX systems.

System V Release 4 (SVR4)

SVR4, developed jointly by AT&T and Sun Microsystems, combines features from SVR3, 4.3BSD, Microsoft Xenix System V, and SunOS. It was almost a total rewrite of the System V kernel and produced a clean, if complex, implementation. New features in the release include real-time processing support, process scheduling classes, dynamically allocated data structures, virtual memory management, a virtual file system, and a preemptive kernel. SVR4 draws on the efforts of both commercial and academic designers and was developed to provide a uniform platform for commercial UNIX deployment.

UNIX SVR4 Process Management

UNIX System V makes use of a simple but powerful process facility that is highly visible to the user. UNIX follows the model in which most of the operating system executes within the environment of a user process; thus, two modes, user and kernel, are required. UNIX uses two categories of processes: system processes and user processes. System processes run in kernel mode and execute operating system code to perform administrative and housekeeping functions, such as allocation of memory and process swapping. User processes operate in user mode to execute user programs and utilities, and in kernel mode to execute instructions that belong to the kernel. A user process enters kernel mode by issuing a system call, when an exception (fault) is generated, or when an interrupt occurs.
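As a small, hedged illustration of the first of these paths into kernel mode (the system call), the sketch below issues write() directly. The call traps into the kernel, which performs the I/O on behalf of the process and then returns control to user mode; library routines such as printf() ultimately rely on the same mechanism.

```c
/* Minimal sketch: a user process entering kernel mode via a system call.
 * write() traps into the kernel, which carries out the I/O and then
 * returns control to the user-mode program. */
#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";
    /* sizeof msg - 1: do not write the trailing '\0' */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```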

Process States

A total of nine process states are recognized by the UNIX operating system; UNIX employs two Running states to indicate whether the process is executing in user mode or in kernel mode. A distinction is also made between two further states: Ready to Run, in Memory, and Preempted. These are essentially the same state, as indicated by the dotted line joining them in the state diagram; the distinction is made to emphasize the way in which the Preempted state is entered. When a process is running in kernel mode (as a result of a supervisor call, clock interrupt, or I/O interrupt), there will come a time when the kernel has completed its work and is ready to return control to the user program. At this point, the kernel may decide to preempt the current process in favor of one that is ready and of higher priority. In that case, the current process moves to the Preempted state. However, for purposes of dispatching, processes in the Preempted state and those in the Ready to Run, in Memory state form one queue. Preemption can only occur when a process is about to move from kernel mode to user mode; while a process is running in kernel mode, it may not be preempted. This makes UNIX unsuitable for real-time processing.

Two processes are unique in UNIX. Process 0 is a special process that is created when the system boots; in effect, it is predefined as a data structure loaded at boot time. It is the swapper process. In addition, process 0 spawns process 1, referred to as the init process; all other processes in the system have process 1 as an ancestor. When a new interactive user logs onto the system, it is process 1 that creates a user process for that user. Subsequently, the user process can create child processes in a branching tree, so that any particular application can consist of a number of related processes.

Process Description

A process in UNIX is a rather complex set of data structures that provide the operating system with all of the information necessary to manage and dispatch processes. The elements of the process image are organized into three parts: user-level context, register context, and system-level context.

The user-level context contains the basic elements of a user's program and can be generated directly from a compiled object file. The user's program is separated into text and data areas; the text area is read-only and is intended to hold the program's instructions. While the process is executing, the processor uses the user stack area for procedure calls and returns and for parameter passing. The shared memory area is a data area that is shared with other processes. There is only one physical copy of a shared memory area, but, through the use of virtual memory, it appears to each sharing process that the shared memory region is in its own address space. When a process is not running, the processor status information is stored in the register context area.

The system-level context contains the remaining information that the operating system needs to manage the process. It consists of a static part, which is fixed in size and stays with a process throughout its lifetime, and a dynamic part, which varies in size through the life of the process. One element of the static part is the process table entry. This is actually part of the process table maintained by the operating system, with one entry per process. The process table entry contains process control information that is accessible to the kernel at all times; hence, in a virtual memory system, all process table entries are maintained in main memory.
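To make the shared memory area more concrete, here is a hedged sketch using the System V shared memory calls shmget() and shmat(). The key 0x1234 and the 4096-byte size are arbitrary values chosen only for illustration; two related processes attach the same physical region, and each sees it in its own address space.

```c
/* Minimal sketch: one physical shared memory region mapped into the
 * address spaces of two related processes (System V shared memory). */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    char *region = shmat(shmid, NULL, 0);   /* map the region into this process */
    if (region == (void *)-1) { perror("shmat"); return 1; }

    if (fork() == 0) {                      /* child: write into the shared region */
        strcpy(region, "hello from the child");
        return 0;
    }
    wait(NULL);                             /* parent: read what the child wrote */
    printf("parent sees: %s\n", region);

    shmdt(region);                          /* detach and remove the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}
```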

Process Control

Process creation in UNIX is done by means of the kernel system call fork(). When a process issues a fork request, the operating system performs the following functions [BACH86]:

1. It allocates a slot in the process table for the new process.
2. It assigns a unique process ID to the child process.
3. It makes a copy of the process image of the parent, with the exception of any shared memory.
4. It increments counters for any files owned by the parent, to reflect that an additional process now also owns those files.
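From the programmer's side this looks as follows; a minimal sketch, not tied to any particular UNIX variant. fork() returns twice: 0 in the child (which receives a copy of the parent's image) and the new child's process ID in the parent.

```c
/* Minimal sketch: process creation with fork().
 * The child gets a copy of the parent's process image (except shared memory);
 * fork() returns 0 in the child and the new child's PID in the parent. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");                          /* e.g., no free slot in the process table */
        return 1;
    }
    if (pid == 0) {                              /* child branch */
        printf("child:  pid=%d ppid=%d\n", (int)getpid(), (int)getppid());
        return 0;
    }
    printf("parent: pid=%d created child %d\n", (int)getpid(), (int)pid);
    wait(NULL);                                  /* reap the child */
    return 0;
}
```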

PROCESS MANAGEMENT IN WINDOWS 7


HISTORY OF THE WINDOWS OPERATING SYSTEM

It's the 1970s. At work, we rely on typewriters. If we need to copy a document, we likely use a mimeograph or carbon paper. Few have heard of microcomputers, but two young computer enthusiasts, Bill Gates and Paul Allen, see that personal computing is a path to the future. In 1975, Gates and Allen form a partnership called Microsoft. Like most start-ups, Microsoft begins small, but has a huge vision: a computer on every desktop and in every home. During the next years, Microsoft begins to change the ways we work.

1982-1985: Introducing Windows 1.0

Microsoft works on the first version of a new operating system. Interface Manager is the code name and is considered as the final name, but Windows prevails because it best describes the boxes or computing "windows" that are fundamental to the new system. Windows is announced in 1983, but it takes a while to develop. Skeptics call it "vaporware." On November 20, 1985, two years after the initial announcement, Microsoft ships Windows 1.0. Now, rather than typing MS-DOS commands, you just move a mouse to point and click your way through screens, or "windows." Bill Gates says, "It is unique software designed for the serious PC user." There are drop-down menus, scroll bars, icons, and dialog boxes that make programs easier to learn and use. You're able to switch among several programs without having to quit and restart each one. Windows 1.0 ships with several programs, including MS-DOS file management, Paint, Windows Writer, Notepad, Calculator, and a calendar, card file, and clock to help you manage day-to-day activities.

1987-1992: Windows 2.0-2.11 - More windows, more speed

On December 9, 1987, Microsoft releases Windows 2.0 with desktop icons and expanded memory. With improved graphics support, you can now overlap windows, control the screen layout, and use keyboard shortcuts to speed up your work. Some software developers write their first Windows-based programs for this release. Windows 2.0 is designed for the Intel 286 processor. When the Intel 386 processor is released, Windows/386 soon follows to take advantage of its extended memory capabilities. Subsequent Windows releases continue to improve the speed, reliability, and usability of the PC.

In 1988, Microsoft becomes the world's largest PC software company based on sales. Computers are starting to become a part of daily life for some office workers.

1990-1994: Windows 3.0 and Windows NT - Getting the graphics

On May 22, 1990, Microsoft announces Windows 3.0, followed shortly by Windows 3.1 in 1992. Taken together, they sell 10 million copies in their first two years, making this the most widely used Windows operating system yet. The scale of this success causes Microsoft to revise earlier plans. Virtual memory improves visual graphics. In 1990 Windows starts to look like the versions to come. Windows now has significantly better performance, advanced graphics with 16 colors, and improved icons. A new wave of 386 PCs helps drive the popularity of Windows 3.0. With full support for the Intel 386 processor, programs run noticeably faster. Program Manager, File Manager, and Print Manager arrive in Windows 3.0. Windows software is installed with floppy discs bought in large boxes with heavy instruction manuals. The popularity of Windows 3.0 grows with the release of a new Windows software development kit (SDK), which helps software developers focus more on writing programs and less on writing device drivers. Windows is increasingly used at work and home and now includes games like Solitaire, Hearts, and Minesweeper. An advertisement reads: "Now you can use the incredible power of Windows 3.0 to goof off." Windows for Workgroups 3.11 adds peer-to-peer workgroup and domain networking support and, for the first time, PCs become an integral part of the emerging client/server computing evolution.

Windows NT

When Windows NT is released on July 27, 1993, Microsoft meets an important milestone: the completion of a project begun in the late 1980s to build an advanced new operating system from scratch. "Windows NT represents nothing less than a fundamental change in the way that companies can address their business computing requirements," Bill Gates says at its release. Unlike Windows 3.1, however, Windows NT 3.1 is a 32-bit operating system, which makes it a strategic business platform that supports high-end engineering and scientific programs.

1995-2001: Windows 95 - the PC comes of age (and don't forget the Internet)

On August 24, 1995, Microsoft releases Windows 95, selling a record-setting 7 million copies in the first five weeks. It's the most publicized launch Microsoft has ever taken on. Television commercials feature the Rolling Stones singing "Start Me Up" over images of the new Start button. The press release simply begins: this is the era of fax/modems, e-mail, the new online world, and dazzling multimedia games and educational software. Windows 95 has built-in Internet support, dial-up networking, and new Plug and Play capabilities that make it easy to install hardware and software. The 32-bit operating system also offers enhanced multimedia capabilities, more powerful features for mobile computing, and integrated networking.

1998-2000: Windows 98, Windows 2000, Windows Me

Windows 98

Released on June 25, 1998, Windows 98 is the first version of Windows designed specifically for consumers. PCs are common at work and home, and Internet cafes where you can get online are popping up. Windows 98 is described as an operating system that "Works Better, Plays Better." With Windows 98, you can find information more easily on your PC as well as the Internet. Other improvements include the ability to open and close programs more quickly, and support for reading DVD discs and universal serial bus (USB) devices. Another first appearance is the Quick Launch bar, which lets you run programs without having to browse the Start menu or look for them on the desktop.

Windows Me

Designed for home computer use, Windows Me offers numerous music, video, and home networking enhancements and reliability improvements compared to previous versions. First appearances: System Restore, a feature that can roll back your PC software configuration to a date or time before a problem occurred; Movie Maker, which provides users with the tools to digitally edit, save, and share home videos; and Microsoft Windows Media Player 7 technologies, with which you can find, organize, and play digital media.

Windows 2000 Professional

More than just an upgrade to Windows NT Workstation 4.0, Windows 2000 Professional is designed to replace Windows 95, Windows 98, and Windows NT Workstation 4.0 on all business desktops and laptops. Built on top of the proven Windows NT Workstation 4.0 code base, Windows 2000 adds major improvements in reliability, ease of use, Internet compatibility, and support for mobile computing. Among other improvements, Windows 2000 Professional simplifies hardware installation by adding support for a wide variety of new Plug and Play hardware, including advanced networking and wireless products, USB devices, IEEE 1394 devices, and infrared devices.

2001-2005: Windows XP - Stable, usable, and fast

On October 25, 2001, Windows XP is released with a redesigned look and feel that's centered on usability and a unified Help and Support services center. It's available in 25 languages. From the mid-1970s until the release of Windows XP, about 1 billion PCs have been shipped worldwide. For Microsoft, Windows XP will become one of its best-selling products in the coming years. It's both fast and stable. Navigating the Start menu, taskbar, and Control Panel is more intuitive. Awareness of computer viruses and hackers increases, but fears are to a certain extent calmed by the online delivery of security updates. Consumers begin to understand warnings about suspicious attachments and viruses. There's more emphasis on Help and Support.

Windows XP Home Edition offers a clean, simplified visual design that makes frequently used features more accessible. Designed for home use, Windows XP offers such enhancements as the Network Setup Wizard, Windows Media Player, Windows Movie Maker, and enhanced digital photo capabilities. Windows XP Professional brings the solid foundation of Windows 2000 to the PC desktop, enhancing reliability, security, and performance. With a fresh visual design, Windows XP Professional includes features for business and advanced home computing, including remote desktop support, an encrypting file system, system restore, and advanced networking features. Key enhancements for mobile users include wireless 802.1x networking support, Windows Messenger, and Remote Assistance. Windows XP has several editions during these years:

Windows XP 64-bit Edition (2001) is the first Microsoft operating system for 64-bit processors designed for working with large amounts of memory and projects such as movie special effects, 3D animations, engineering, and scientific programs. Windows XP Media Center Edition (2002) is made for home computing and entertainment. You can browse the Internet, watch live television, enjoy digital music and video collections, and watch DVDs.

Windows XP Tablet PC Edition (2002) realizes the vision of pen-based computing. Tablet PCs include a digital pen for handwriting recognition and you can use the mouse or keyboard, too.

2006-2008: Windows Vista - Smart on security

Windows Vista is released in 2006 with the strongest security system yet. User Account Control helps prevent potentially harmful software from making changes to your computer. In Windows Vista Ultimate, BitLocker Drive Encryption provides better data protection for your computer, as laptop sales and security needs increase. Windows Vista also features enhancements to Windows Media Player as more and more people come to see their PCs as central locations for digital media. Here you can watch television, view and send photographs, and edit videos. Design plays a big role in Windows Vista, and features such as the taskbar and the borders around windows get a brand new look. Search gets new emphasis and helps people find files on their PCs faster. Windows Vista introduces new editions that each have a different mix of features. It's available in 35 languages. The redesigned Start button makes its first appearance in Windows Vista.

2009: Windows 7

Windows 7 was built for the wireless world that arose in the late 2000s. By the time it was released, laptops were outselling desktops, and it had become common to connect to public wireless hotspots in coffee shops and private networks in the home. Windows 7 included new ways to work with windows, such as Snap, Peek, and Shake, which both improved functionality and made the interface more fun to use. It also marked the debut of Windows Touch, which let touchscreen users browse the web, flip through photos, and open files and folders.

Single-User Multitasking

Windows (from Windows 2000 onward) is a significant example of what has become the new wave in microcomputer operating systems (other examples are OS/2 and MacOS). Windows was driven by a need to exploit the processing capabilities of today's 32-bit microprocessors, which rival the mainframes and minicomputers of just a few years ago in speed, hardware sophistication, and memory capacity.

One of the most significant features of these new operating systems is that, although they are still intended to support a single interactive user, they are multitasking operating systems. Two main developments have triggered the need for multitasking on personal computers, workstations, and servers.

First, with the increased speed and memory capacity of microprocessors, together with the support for virtual memory, applications have become more complex and interrelated. For example, a user may wish to employ a word processor, a drawing program, and a spreadsheet application simultaneously to produce a document. Without multitasking, if a user wishes to create a drawing and paste it into a word processing document, the following steps are required:

1. Open the drawing program.
2. Create the drawing and save it in a file or on a temporary clipboard.
3. Close the drawing program.
4. Open the word processing program.
5. Insert the drawing in the correct location.

If any changes are desired, the user must close the word processing program, open the drawing program, edit the graphic image, save it, close the drawing program, open the word processing program, and insert the updated image. This becomes tedious very quickly. As the services and capabilities available to users become more powerful and varied, the single-task environment becomes more clumsy and user unfriendly. In a multitasking environment, the user opens each application as needed and leaves it open. Information can be moved around among a number of applications easily. Each application has one or more open windows, and a graphical interface with a pointing device such as a mouse allows the user to navigate quickly in this environment.

A second motivation for multitasking is the growth of client/server computing. With client/server computing, a personal computer or workstation (client) and a host system (server) are used jointly to accomplish a particular application. The two are linked, and each is assigned the part of the job that suits its capabilities. Client/server can be achieved in a local area network of personal computers and servers, or by means of a link between a user system and a large host such as a mainframe. An application may involve one or more personal computers and one or more server devices. To provide the required responsiveness, the operating system needs to support sophisticated real-time communication hardware and the associated communications protocols and data transfer architectures while at the same time supporting ongoing user interaction.

The foregoing remarks apply to the Professional version of Windows. The Server version is also multitasking but may support multiple users. It supports multiple local server connections as well as providing shared services used by multiple users on the network. As an Internet server, Windows may support thousands of simultaneous Web connections.

Architecture

The figure illustrates the overall structure of Windows 2000; later releases of Windows have essentially the same structure at this level of detail. Its modular structure gives Windows considerable flexibility. It is designed to execute on a variety of hardware platforms and supports applications written for a variety of other operating systems. As of this writing, Windows is only implemented on the Intel Pentium/x86 and Itanium hardware platforms.

As with virtually all operating systems, Windows separates application-oriented software from operating system software. The latter, which includes the Executive, the kernel, device drivers, and the hardware abstraction layer, runs in kernel mode. Kernel-mode software has access to system data and to the hardware. The remaining software, running in user mode, has limited access to system data.

Operating System Organization

Windows does not have a pure microkernel architecture but what Microsoft refers to as a modified microkernel architecture. As with a pure microkernel architecture, Windows is highly modular. Each system function is managed by just one component of the operating system. The rest of the operating system and all applications access that function through the responsible component using a standard interface. Key system data can only be accessed through the appropriate function. In principle, any module can be removed, upgraded, or replaced without rewriting the entire system or its standard application program interfaces (APIs). However, unlike a pure microkernel system, Windows is configured so that many of the system functions outside the microkernel run in kernel mode. The reason is performance: the Windows developers found that, using the pure microkernel approach, many non-microkernel functions required several process or thread switches, mode switches, and the use of extra memory buffers.

The kernel-mode components of Windows are the following:

- Executive: Contains the base operating system services, such as memory management, process and thread management, security, I/O, and interprocess communication.
- Kernel: Consists of the most used and most fundamental components of the operating system. The kernel manages thread scheduling, process switching, exception and interrupt handling, and multiprocessor synchronization. Unlike the rest of the Executive and the user level, the kernel's own code does not run in threads. Hence, it is the only part of the operating system that is not preemptible or pageable.
- Hardware abstraction layer (HAL): Maps between generic hardware commands and responses and those unique to a specific platform. It isolates the operating system from platform-specific hardware differences. The HAL makes each machine's system bus, direct memory access (DMA) controller, interrupt controller, system timers, and memory module look the same to the kernel. It also delivers the support needed for symmetric multiprocessing (SMP), explained subsequently.
- Device drivers: Include both file system and hardware device drivers that translate user I/O function calls into specific hardware device I/O requests.
- Windowing and graphics system: Implements the graphical user interface (GUI) functions, such as dealing with windows, user interface controls, and drawing.

The Windows Executive includes modules for specific system functions and provides an API for user-mode software. Following is a brief description of each of the Executive modules:

- I/O manager: Provides a framework through which I/O devices are accessible to applications, and is responsible for dispatching to the appropriate device drivers for further processing. The I/O manager implements all the Windows I/O APIs and enforces security and naming for devices and file systems (using the object manager).
- Cache manager: Improves the performance of file-based I/O by causing recently referenced disk data to reside in main memory for quick access, and by deferring disk writes by holding the updates in memory for a short time before sending them to the disk.
- Object manager: Creates, manages, and deletes Windows Executive objects and abstract data types that are used to represent resources such as processes, threads, and synchronization objects. It enforces uniform rules for retaining, naming, and setting the security of objects. The object manager also creates object handles, which consist of access control information and a pointer to the object. Windows objects are discussed later in this section.
- Plug and play manager: Determines which drivers are required to support a particular device and loads those drivers.
- Power manager: Coordinates power management among various devices and can be configured to reduce power consumption by putting the processor to sleep.
- Security reference monitor: Enforces access-validation and audit-generation rules. The Windows object-oriented model allows for a consistent and uniform view of security, right down to the fundamental entities that make up the Executive. Thus, Windows uses the same routines for access validation and for audit checks for all protected objects, including files, processes, address spaces, and I/O devices.
- Virtual memory manager: Maps virtual addresses in the process's address space to physical pages in the computer's memory.
- Process/thread manager: Creates and deletes objects and tracks process and thread objects.
- Configuration manager: Responsible for implementing and managing the system registry, which is the repository for both systemwide and per-user settings of various parameters.
- Local procedure call (LPC) facility: Enforces a client/server relationship between applications and executive subsystems within a single system, in a manner similar to a remote procedure call (RPC) facility used for distributed processing.

User-Mode Processes

Four basic types of user-mode processes are supported by Windows:

- Special system support processes: Include services not provided as part of the Windows operating system, such as the logon process and the session manager.
- Service processes: Other Windows services, such as the event logger.
- Environment subsystems: Expose the native Windows services to user applications and thus provide an operating system environment, or personality. The supported subsystems are Win32, POSIX, and OS/2. Each environment subsystem includes dynamic link libraries (DLLs) that convert the user application calls to Windows calls.
- User applications: Can be one of five types: Win32, POSIX, OS/2, Windows 3.1, or MS-DOS.

Windows is structured to support applications written for Windows 2000 and later releases, Windows 98, and several other operating systems. Windows provides this support using a single, compact Executive through protected environment subsystems. The protected subsystems are those parts of Windows that interact with the end user. Each subsystem is a separate process, and the Executive protects its address space from that of other subsystems and applications. A protected subsystem provides a graphical or command-line user interface that defines the look and feel of the operating system for a user. In addition, each protected subsystem provides the API for that particular operating environment. This means that applications created for a particular operating environment may run unchanged on Windows, because the operating system interface that they see is the same as that for which they were written. So, for example, OS/2-based applications can run under the Windows operating system without modification.

Furthermore, because the Windows system is itself designed to be platform independent, through the use of the hardware abstraction layer (HAL), it should be relatively easy to port both the protected subsystems and the applications they support from one hardware platform to another. In many cases, a recompile is all that should be required.

The most important subsystem is Win32. Win32 is the API implemented on Windows 2000 and later releases as well as Windows 98. Some of the features of Win32 are not available in Windows 98, but those features implemented on Windows 98 are identical with those of Windows 2000 and later releases.

Comparing Process Management in Windows and UNIX Operating Systems

Multitasking

Windows: Relies more on threads as opposed to processes.
UNIX: Very efficient at creating processes.

Multiple Users

Windows: Creates an initial process for a user, known as the desktop. Allows only one user to be logged in at a time. Provides complete isolation between user processes.
UNIX: Creates a shell process to service user commands. Supports multiple simultaneous users through the command line and GUI. Provides complete isolation between user processes.

Multithreading

Windows: Windows applications are capable of using symmetric multiprocessing (SMP) computers.
UNIX: Most UNIX kernels are multithreaded to make use of symmetric multiprocessing (SMP) computers.

Process Hierarchy

Windows: Windows processes do not have a hierarchical relationship; all processes are treated equally.
UNIX: When a UNIX application creates a new process, the new process becomes a child of the parent process.
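To make the Process Hierarchy entry concrete, here is a hedged sketch using the Win32 CreateProcess() call; the target program "notepad.exe" is an arbitrary illustrative choice. The creating process gets back a handle and an ID, but Windows itself does not enforce a parent/child tree the way fork() does on UNIX.

```c
/* Minimal sketch: creating a process on Windows with CreateProcess().
 * The parent receives a handle and process ID; any parent/child
 * relationship must be maintained by the application itself. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    char cmd[] = "notepad.exe";          /* command line buffer must be writable */

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    printf("created process id %lu\n", pi.dwProcessId);

    WaitForSingleObject(pi.hProcess, INFINITE);  /* optional: wait for it to exit */
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```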

Another difference between UNIX and Windows:

In Unix, a shared object (.so) file contains code to be used by the program, and also the names of functions and data that it expects to find in the program. When the file is joined to the program, all references to those functions and data in the file's code are changed to point to the actual locations in the program where the functions and data are placed in memory. This is basically a link operation.

In Windows, a dynamic-link library (.dll) file has no dangling references. Instead, an access to functions or data goes through a lookup table. So the DLL code does not have to be fixed up at run time to refer to the program's memory; instead, the code already uses the DLL's lookup table, and the lookup table is modified at run time to point to the functions and data.

In Unix, there is only one type of library file (.a), which contains code from several object files (.o). During the link step to create a shared object file (.so), the linker may find that it doesn't know where an identifier is defined. The linker will look for it in the object files in the libraries; if it finds it, it will include all the code from that object file.

In Windows, there are two types of library: a static library and an import library (both use the .lib extension). A static library is like a Unix .a file; it contains code to be included as necessary. An import library is basically used only to reassure the linker that a certain identifier is legal and will be present in the program when the DLL is loaded. The linker uses the information from the import library to build the lookup table for using identifiers that are not included in the DLL. When an application or a DLL is linked, an import library may be generated, which will need to be used for all future DLLs that depend on the symbols in the application or DLL.

Unix possesses a process hierarchy. When a new process is created by a UNIX application, it becomes a child of the process that created it. This hierarchy is very important, and there are system calls for influencing child processes. Windows processes, on the other hand, do not share a hierarchical relationship. Because the creating process receives the process handle and ID of the process it created, it can maintain or simulate a hierarchical relationship if needed. The Windows operating system ordinarily treats all processes as belonging to the same generation.
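The paragraphs above describe what happens at build/link time. A related, hands-on way to see the two dynamic loaders at work is explicit run-time loading; the sketch below resolves cos() from the C runtime library on each platform. The library names ("msvcrt.dll" on Windows, "libm.so.6" on Linux) and the need to link with -ldl on Linux are platform-specific assumptions made only for illustration.

```c
/* Minimal sketch: loading a shared library at run time and resolving a
 * symbol through the loader, on both Windows (.dll) and Unix (.so). */
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
#else
#include <dlfcn.h>      /* on Linux, compile with: cc demo.c -ldl */
#endif

typedef double (*cos_fn)(double);

int main(void)
{
#ifdef _WIN32
    HMODULE lib = LoadLibraryA("msvcrt.dll");            /* DLL exports reached via its lookup table */
    cos_fn f = lib ? (cos_fn)GetProcAddress(lib, "cos") : NULL;
#else
    void *lib = dlopen("libm.so.6", RTLD_NOW);           /* shared object resolved by the dynamic linker */
    cos_fn f = lib ? (cos_fn)dlsym(lib, "cos") : NULL;
#endif
    if (!f) {
        fprintf(stderr, "could not load library or symbol\n");
        return 1;
    }
    printf("cos(0) = %f\n", f(0.0));
    return 0;
}
```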
