
Kernel (computer science)

A kernel connects the application software to the hardware of a computer. In computing, the kernel is the central component of most computer operating systems (OSs). Its responsibilities include managing the system's resources and the communication between hardware and software components. As a basic component of an operating system, a kernel provides the lowest-level abstraction layer for the resources (especially memory, processors and I/O devices) that applications must control to perform their function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls. These tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels try to achieve these goals by executing all the code in the same address space to increase the performance of the system, microkernels run most of their services in user space, aiming to improve maintainability and modularity of the codebase.[1] A range of possibilities exists between these two extremes.

Overview

A typical vision of a computer architecture is a series of abstraction layers: hardware, firmware, assembler, kernel, operating system and applications.

Most operating systems rely on the kernel concept. The existence of a kernel is a natural consequence of designing a computer system as a series of abstraction layers, each relying on the functions of the layers beneath it. The kernel, from this viewpoint, is simply the name given to the lowest level of abstraction that is implemented in software. In order to avoid having a kernel, one would have to design all the software on the system not to use abstraction layers; this would increase the complexity of the design to such a point that only the simplest systems could feasibly be implemented. While it is today mostly called the kernel, the same part of the operating system has also been known in the past as the nucleus or core. (Note, however, that the term core has also been used to refer to the primary memory of a computer system, typically because some early computers used a form of memory called core memory.) In most cases, the boot loader starts executing the kernel in supervisor mode; the kernel then initializes itself and starts the first process. After this, the kernel does not typically execute directly, only in response to external events (e.g. via system calls used by applications to request services from the kernel, or via interrupts used by the hardware to notify the kernel of events). Additionally, the kernel typically provides a loop that is executed whenever no processes are available to run; this is often called the idle process.

Kernel development is considered one of the most complex and difficult tasks in programming. Its central position in an operating system implies the necessity for good performance, which defines the kernel as a critical piece of software and makes its correct design and implementation difficult. For various reasons, a kernel might not even be able to use the abstraction mechanisms it provides to other software. Such reasons include memory management concerns (for example, a user-mode function might rely on memory being subject to demand paging, but as the kernel itself provides that facility it cannot use it, because then it might not remain in memory to provide that facility) and lack of reentrancy, thus making its development even more difficult for software engineers. A kernel will usually provide features for low-level scheduling of processes (dispatching), inter-process communication, process synchronization, context switching, manipulation of process control blocks, interrupt handling, process creation and destruction, and process suspension and resumption (see process states).

Kernel basic responsibilities


The kernel's primary purpose is to manage the computer's resources and allow other programs to run and use these resources. Typically, the resources consist of:

The CPU (frequently called the processor). This is the most central part of a computer system, responsible for running or executing programs. The kernel takes responsibility for deciding at any time which of the many running programs should be allocated to the processor or processors (each of which can usually run only one program at a time).

The computer's memory. Memory is used to store both program instructions and data. Typically, both need to be present in memory in order for a program to execute. Often multiple programs will want access to memory, frequently demanding more memory than the computer has available. The kernel is responsible for deciding which memory each process can use, and for determining what to do when not enough is available.

Any input/output (I/O) devices present in the computer, such as disk drives, printers, displays, etc. The kernel allocates requests from applications to perform I/O to an appropriate device (or subsection of a device, in the case of files on a disk or windows on a display) and provides convenient methods for using the device (typically abstracted to the point where the application does not need to know implementation details of the device).

Kernels also usually provide methods for synchronization and communication between processes (called inter-process communication or IPC). A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other. Finally, a kernel must provide running programs with a method to make requests to access these facilities.
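As a concrete sketch of a kernel-provided IPC facility, the Python fragment below uses `os.pipe`, a thin wrapper over the pipe mechanism that Unix-like kernels expose through system calls: the byte buffer between the two file descriptors lives inside the kernel, which is exactly the mediation this section describes. Python and the message text are illustrative choices here, not something the article prescribes.

```python
import os

# A pipe is one of the simplest IPC facilities a kernel provides:
# a unidirectional byte channel with a read end and a write end.
read_fd, write_fd = os.pipe()

# One side writes a message into the kernel-managed buffer...
os.write(write_fd, b"hello from the other process")
os.close(write_fd)

# ...and the other side reads it back out.
message = os.read(read_fd, 1024)
os.close(read_fd)

print(message.decode())
```

In a real program the two ends would usually be held by different processes (for example across a `fork`), with the kernel shuttling the bytes between their separate address spaces.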

Process management
The main task of a kernel is to allow the execution of applications and support them with features such as hardware abstractions. To run an application, a kernel typically sets up an address space for the application, loads the file containing the application's code into memory (perhaps via demand paging), sets up a stack for the program and branches to a given location inside the program, thus starting its execution.[10] Multi-tasking kernels are able to give the user the illusion that the number of processes being run simultaneously on the computer is higher than the maximum number of processes the computer is physically able to run simultaneously. Typically, the number of processes a system may run simultaneously is equal to the number of CPUs installed (however this may not be the case if the processors support simultaneous multithreading). In a pre-emptive multitasking system, the kernel gives every program a slice of time and switches from process to process so quickly that it appears to the user as if these processes were being executed simultaneously. The kernel uses scheduling algorithms to determine which process runs next and how much time it will be given. The algorithm chosen may allow some processes to have higher priority than others. The kernel generally also provides these processes a way to communicate; this is known as inter-process communication (IPC) and the main approaches are shared memory, message passing and remote procedure calls (see concurrent computing).
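The time-slicing described above can be illustrated with a minimal round-robin simulation. This is a toy model, not a real scheduler: process "work" is reduced to a counter of remaining time units, and pre-emption is modelled by moving an unfinished process to the back of the ready queue.

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate pre-emptive round-robin scheduling.

    processes: dict mapping process name -> remaining work units.
    quantum:   time slice given to each process per turn.
    Returns the order in which processes finish.
    """
    ready = deque(processes.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum                 # run for one time slice
        if remaining > 0:
            ready.append((name, remaining))  # pre-empted: back of the queue
        else:
            finished.append(name)            # process has completed
    return finished

order = round_robin({"A": 3, "B": 1, "C": 2}, quantum=1)
print(order)  # ['B', 'C', 'A']
```

Shorter jobs finish first even though "A" was submitted first, which is the fairness property that makes interactive systems feel responsive. Real schedulers add priorities, I/O blocking and multiprocessor balancing on top of this basic rotation.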

Other systems (particularly on smaller, less powerful computers) may provide co-operative multitasking, where each process is allowed to run uninterrupted until it makes a special request that tells the kernel it may switch to another process. Such requests are known as "yielding", and typically occur in response to requests for inter-process communication, or while waiting for an event to occur. Older versions of Windows and Mac OS both used co-operative multitasking but switched to pre-emptive schemes as the power of the computers to which they were targeted grew. The operating system might also support multiprocessing (SMP or Non-Uniform Memory Access); in that case, different programs and threads may run on different processors. A kernel for such a system must be designed to be re-entrant, meaning that it may safely run two different parts of its code simultaneously. This typically means providing synchronization mechanisms (such as spinlocks) to ensure that no two processors attempt to modify the same data at the same time.
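The hazard that such synchronization prevents can be shown in user space with a small sketch. Here `threading.Lock` stands in for a kernel spinlock (a real spinlock busy-waits on a CPU rather than putting the caller to sleep, but the mutual-exclusion guarantee is the same): without the lock, two concurrent writers could interleave their read-modify-write sequences and lose updates.

```python
import threading

counter = 0
lock = threading.Lock()  # stand-in for a kernel spinlock

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:       # only one thread may modify the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: no updates were lost
```

Inside a kernel the protected data would be a run queue or a page table rather than a counter, and the cost of holding the lock too long is felt by every processor in the machine.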

Memory management
The kernel has full access to the system's memory and must allow processes to access this memory safely as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.[10] On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging. Virtual addressing also allows creation of virtual partitions of memory in two disjoint areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel.
This fundamental partition of memory space has contributed much to the current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.
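The translation and demand-paging steps described above can be sketched as a toy page table. The page size, frame numbers and the "load a faulting page into the next free frame" policy are all illustrative simplifications; a real kernel consults hardware page-table structures and may have to evict a resident page first.

```python
PAGE_SIZE = 4096

# Per-process page table: virtual page number -> physical frame number.
# Pages absent from the table are not resident and cause a page fault.
page_table = {0: 7, 1: 3}

def translate(virtual_address):
    """Translate a virtual address, handling faults on non-resident pages."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in page_table:
        # A real kernel would evict an inactive frame if needed and read
        # the page back from disk; here we just assign the next frame.
        page_table[page] = max(page_table.values()) + 1
    return page_table[page] * PAGE_SIZE + offset

addr = translate(1 * PAGE_SIZE + 42)   # resident page 1 lives in frame 3
faulted = translate(5 * PAGE_SIZE)     # page 5 is loaded on demand
print(hex(addr), hex(faulted))
```

Note how the same virtual page number would map to different frames in different processes' tables, which is what gives each program its private view of memory.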

Device management
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel. A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called Plug and Play). In a Plug and Play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers. As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.
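The detect-then-match step of a Plug and Play scan reduces to a lookup from device identifiers to drivers, which the following sketch models. The identifier format and the driver names are invented for illustration; real kernels match on vendor/device ID tables published by each driver.

```python
# A toy driver registry: the kernel keeps a table mapping device
# identifiers (here "bus:vendor:product" strings) to driver names.
drivers = {
    "pci:8086:100e": "example_network_driver",
    "usb:046d:c077": "example_mouse_driver",
}

def probe(device_id):
    """Return the driver registered for a detected device, if any."""
    return drivers.get(device_id, "no driver found")

# A plug-and-play scan enumerates a bus and probes each device found:
detected = ["pci:8086:100e", "usb:ffff:0000"]
results = {dev: probe(dev) for dev in detected}
print(results)
```

The second device has no matching entry, which is the case where an OS would prompt for a driver or leave the hardware unconfigured.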

System calls
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.

The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:

Using a software-simulated interrupt. This method is available on most hardware, and is therefore very common.

Using a call gate. A call gate is a special address which the kernel has added to a list stored in kernel memory and whose location the processor knows. When the processor detects a call to that location, it instead redirects to the target location without causing an access violation. This requires hardware support, but the hardware for it is quite common.

Using a special system call instruction. This technique requires special hardware support, which common architectures (notably, x86) may lack. System call instructions have been added to recent models of x86 processors, however, and some (but not all) operating systems for PCs make use of them when available.

Using a memory-based queue. An application that makes large numbers of requests but does not need to wait for the result of each may add details of requests to an area of memory that the kernel periodically scans to find requests.
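What all four methods have in common is that control enters the kernel only at a single well-known dispatcher, which indexes a system call table. The sketch below models that dispatch in Python; the call numbers, handler names and return values are invented for illustration, and the `trap` function plays the role of the software interrupt or system call instruction.

```python
# Toy "kernel" side: a system call table mapping call numbers to handlers,
# the same structure a software-interrupt handler indexes into.
def sys_getpid(_args):
    return 1234           # illustrative fixed process ID

def sys_write(args):
    fd, text = args
    return len(text)      # pretend every byte was written

SYSCALL_TABLE = {0: sys_getpid, 1: sys_write}

def trap(call_number, args=None):
    """User-side entry point: the analogue of raising a software interrupt.

    Control transfers to one dispatcher, which validates the call number
    and jumps through the table, never into arbitrary kernel code.
    """
    handler = SYSCALL_TABLE.get(call_number)
    if handler is None:
        return -1         # analogue of "no such system call"
    return handler(args)

pid = trap(0)
written = trap(1, (1, "hello"))
bad = trap(99)
print(pid, written, bad)
```

The validation step is the security-critical part: a user process chooses only the call number and arguments, never the address the kernel jumps to.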

Kernel design decisions


Fault tolerance
An important consideration in the design of a kernel is fault tolerance; specifically, in cases where multiple programs are running on a single computer, it is usually important to prevent a fault in one of the programs from negatively affecting the others. Extended to malicious design rather than a fault, this also applies to security, and is necessary to prevent processes from accessing information without being granted permission. The two main approaches to the protection of sensitive information are assigning privileges to hierarchical protection domains, for example by using a processor's supervisor mode, or distributing privileges differently for each process and resource, for example by using capabilities or access control lists. Hierarchical protection domains are much less flexible, as it is not possible to assign different privileges to processes that are at the same privilege level, and they therefore cannot satisfy Denning's four principles for fault tolerance (particularly the principle of least privilege). Hierarchical protection domains also have a major performance drawback, since interaction between different levels of protection, when a process has to manipulate a data structure both in user mode and supervisor mode, always requires message copying (transmission by value).[11] A kernel based on capabilities, however, is more flexible in assigning privileges, can satisfy Denning's fault tolerance principles, and typically doesn't suffer from the performance issues of copy by value. Both approaches typically require some hardware or firmware support to be operable and efficient. The hardware support for hierarchical protection domains is typically that of "CPU modes". An efficient and simple way to provide hardware support for capabilities is to delegate to the MMU the responsibility of checking access rights for every memory access, a mechanism called capability-based addressing. Most commercial computer architectures lack MMU support for capabilities.
An alternative approach is to simulate capabilities using commonly supported hierarchical domains. In this approach, each protected object must reside in an address space that the application does not have access to, and the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel performs the access for it. The performance cost of address space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly. Approaches where the protection mechanism is not firmware supported but is instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that lacks direct support) are possible, but there are performance implications. Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.
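The simulation described above, where the kernel holds both the objects and the capability lists and performs every access itself, can be sketched as follows. Process names, object names and the data are invented for illustration; the essential point is that user code holds only opaque handles and must cross a system call boundary for every access.

```python
# Kernel-side state: protected objects and, per process, a capability list.
# User code never holds a pointer to an object, only an opaque handle.
objects = {"obj_a": "secret data", "obj_b": "other data"}
capabilities = {
    "process_1": {"obj_a"},          # process_1 may access obj_a only
    "process_2": {"obj_a", "obj_b"},
}

def kernel_read(process, handle):
    """System call: the kernel checks the capability list, then
    performs the access on the process's behalf."""
    if handle not in capabilities.get(process, set()):
        raise PermissionError(f"{process} holds no capability for {handle}")
    return objects[handle]

data = kernel_read("process_2", "obj_b")   # permitted
try:
    kernel_read("process_1", "obj_b")      # denied: no capability held
    denied = False
except PermissionError:
    denied = True
print(data, denied)
```

The performance cost the text mentions corresponds to the fact that every `kernel_read` here would be a full trap and address space switch on real hardware.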

Security
An important kernel design decision is the choice of the abstraction levels at which the security mechanisms and policies should be implemented. One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of the security policy to the compiler and/or the application level are often called language-based security.

Hardware-based protection or language-based protection


Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods. An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement. Advantages of this approach include:

Lack of need for separate address spaces. Switching between address spaces is a slow operation that causes a great deal of overhead, and a lot of optimization work is currently performed in order to prevent unnecessary switches in current operating systems. Switching is completely unnecessary in a language-based protection system, as all code can safely operate in the same address space.

Flexibility. Any protection scheme that can be designed to be expressed via a programming language can be implemented using this method. Changes to the protection scheme (e.g. from a hierarchical system to a capability-based one) do not require new hardware.

Disadvantages include:

Longer application start-up time. Applications must be verified when they are started to ensure they have been compiled by the correct compiler, or may need recompiling either from source code or from bytecode.

Inflexible type systems. On traditional systems, applications frequently perform operations that are not type safe. Such operations cannot be permitted in a language-based protection system, which means that applications may need to be rewritten and may, in some cases, lose performance.

Examples of systems with language-based protection include Microsoft's Singularity.

Process cooperation
Edsger Dijkstra proved that, from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation.[18] However, this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible.
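A tiny instance of that claim: a single binary semaphore, driven only by lock (acquire) and unlock (release) operations, is enough to force one thread to wait for another. The sketch below uses Python's `threading.Semaphore` initialized to 0 so that the first acquire blocks; the thread names and the event list are illustrative scaffolding.

```python
import threading

ready = threading.Semaphore(0)   # binary semaphore, initially "locked"
events = []

def first():
    events.append("first")
    ready.release()              # unlock: signal that "second" may proceed

def second():
    ready.acquire()              # lock: blocks until "first" has signalled
    events.append("second")

t2 = threading.Thread(target=second)
t1 = threading.Thread(target=first)
t2.start()                       # started first, but must wait
t1.start()
t1.join()
t2.join()

print(events)  # ['first', 'second'] regardless of start order
```

Building larger cooperation patterns (bounded buffers, rendezvous) out of nothing but such acquires and releases is exactly what Dijkstra showed possible, and also why the approach is considered error-prone: one missing release deadlocks the system.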

I/O device management


The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes.

Kernel-wide design approaches


Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation. While monolithic kernels execute all of their code in the same address space (kernel space) to increase the performance of the system, microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase. Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel. The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation". In a minimal microkernel just some very basic policies are included, and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (such as memory management, high-level process scheduling, file system management, etc.). A monolithic kernel instead tends to include many policies, thereby restricting the rest of the system to rely on them.

Monolithic kernels
Main article: Monolithic kernel

Diagram of a monolithic kernel.

In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers maintain that monolithic systems are easier to design and implement than other solutions, and are extremely efficient if well written. The main disadvantages of monolithic kernels are the dependencies between system components (a bug in a device driver might crash the entire system) and the fact that large kernels can become very difficult to maintain.

Microkernels
Main article: Microkernel

In the microkernel approach, the kernel itself only provides basic functionality that allows the execution of servers, separate programs that assume former kernel functions, such as device drivers, GUI servers, etc. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system, because they typically generate more overhead than plain function calls. Microkernels generally underperform traditional designs, sometimes dramatically. This is due in large part to the overhead of moving in and out of the kernel, a context switch, to move data between the various applications and servers. By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels, optimized for performance, have addressed these problems.
A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel. It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.
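The client/server structure described above can be sketched with threads and message queues standing in for user-space processes and kernel-mediated IPC. The "filesystem server" and its protocol here are invented for illustration; the point is the shape of the interaction, a message send followed by a blocking receive, rather than a direct function call into the kernel.

```python
import queue
import threading

# In a microkernel, a service like a filesystem runs as a user-space
# server; clients reach it by sending messages, not by calling into
# the kernel.
requests = queue.Queue()

def file_server():
    """Toy server: answers 'read' requests until told to stop."""
    files = {"motd": "welcome"}
    while True:
        op, name, reply_box = requests.get()
        if op == "stop":
            break
        reply_box.put(files.get(name, ""))  # send the reply message back

server = threading.Thread(target=file_server)
server.start()

reply_box = queue.Queue()                   # per-client reply channel
requests.put(("read", "motd", reply_box))   # message send...
content = reply_box.get()                   # ...and blocking receive
requests.put(("stop", None, None))
server.join()

print(content)  # welcome
```

Each send/receive pair here corresponds to the context switches whose cost the text discusses: what a monolithic kernel does with one in-kernel function call, a microkernel does with at least two message hops.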

Monolithic kernels vs microkernels


As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support. To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code. Due to the problems that monolithic kernels pose, they were considered obsolete by the early 1990s. As a result, the design of Linux using a monolithic kernel rather than a microkernel was the topic of a famous flame war between Linus Torvalds and Andrew Tanenbaum. There is merit on both sides of the argument presented in the Tanenbaum-Torvalds debate. Some, including early UNIX developer Ken Thompson, argued that while microkernel designs were more aesthetically appealing, monolithic kernels were easier to implement. However, a bug in a monolithic system usually crashes the entire system, while this doesn't happen in a microkernel with servers running apart from the main thread. Monolithic kernel proponents reason that incorrect code doesn't belong in a kernel, and that microkernels offer little advantage over correct code. Microkernels are often used in embedded robotic or medical computers where crash tolerance is important and most of the OS components reside in their own private, protected memory space. This is impossible with monolithic kernels, even with modern module-loading ones. However, the monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.

Hybrid kernels
Main article: Hybrid kernel

The hybrid kernel approach tries to combine the speed and simpler design of a monolithic kernel with the modularity and execution safety of a microkernel. Hybrid kernels are essentially a compromise between the monolithic kernel approach and the microkernel system. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.

Nanokernels
Main article: Nanokernel
A nanokernel delegates virtually all services, including even the most basic ones like interrupt controllers or the timer, to device drivers to make the kernel memory requirement even smaller than that of a traditional microkernel.

Exokernels
Main article: Exokernel
An exokernel is a type of kernel that does not abstract hardware into theoretical models. Instead it allocates physical hardware resources, such as processor time, memory pages, and disk blocks, to different programs. A program running on an exokernel can link to a library operating system that uses the exokernel to simulate the abstractions of a well-known OS, or it can develop application-specific abstractions for better performance.

History of kernel development


Early operating system kernels
Main article: History of operating systems

Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers operated this way during the 1950s and early 1960s, and were reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels.

Time-sharing operating systems


Main article: Time-sharing
In the decade preceding Unix, computers had grown enormously in power, to the point where computer operators were looking for new ways to get people to use the spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine. The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965. Another ongoing issue was properly handling computing resources: users spent most of their time staring at the screen instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.

Unix
Main article: Unix
Unix represented the culmination of decades of development towards a modern operating system. During the design phase, programmers decided to model every high-level device as a file, because they believed the purpose of computation was data transformation. For instance, printers were represented as a "file" at a known location; when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level, so that both devices and files would be instances of some lower-level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allow users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain. In the Unix model, the operating system consists of two parts: one is the huge collection of utility programs that drive most operations; the other is the kernel that runs the programs. Under Unix, from a programming standpoint the distinction between the two is fairly thin: the kernel is a program running in supervisor mode[7] that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and provides locking and I/O services for these programs; beyond that, the kernel didn't intervene at all in user space. Over the years the computing model changed, and Unix's treatment of everything as a file no longer seemed to be as universally applicable as it was before.
Although a terminal could be treated as a file or a stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem. Even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 4.5 million lines. Thus, the biggest problem with monolithic kernels, or monokernels, was sheer size. The code was so extensive that working on such a large codebase was extremely tedious and time-consuming. Modern Unix derivatives are generally based on module-loading monolithic kernels. Examples of this are Linux distributions like Debian GNU/Linux, Red Hat Linux and Ubuntu Linux, as well as Berkeley Software Distributions such as FreeBSD and NetBSD. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux and/or being compatible with it.
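The pipe paradigm, a file fed through a chain of single-purpose tools, can be mirrored with small composable functions. The sketch below plays the role of a shell pipeline such as filtering a log for a word and then counting the matches; the log lines and tool names are illustrative, not drawn from the article.

```python
# Each "tool" does one small job on a stream of lines, mirroring how a
# shell pipeline chains single-purpose programs.
def grep(lines, word):
    """Keep only the lines containing the given word."""
    return [line for line in lines if word in line]

def count(lines):
    """Count the lines, like wc -l."""
    return len(lines)

log = ["boot ok", "disk error", "net ok", "disk error again"]

# Feed the data through the chain of single-purpose tools:
matches = grep(log, "error")
total = count(matches)
print(total)  # 2
```

Swapping a stage, for example replacing `grep` with a different filter, changes the whole workflow without touching the other tools, which is exactly the flexibility the text attributes to the Unix approach.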

Mac OS
Main article: Mac OS history
Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. For the first few releases, Mac OS (or System Software, as it was called) lacked many essential features, such as multitasking and a hierarchical filesystem. With time, the OS evolved into Mac OS 9 and had many new features added, but the kernel basically stayed the same. In contrast, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, created by combining the 4.3BSD kernel and the Mach kernel.

Amiga
Main article: AmigaOS
The Commodore Amiga was released in 1985, and was among the first (and certainly most successful) home computers to feature a microkernel operating system. The Amiga's kernel, exec.library, was small but capable, providing fast pre-emptive multitasking on hardware similar to that of the co-operatively multitasked Apple Macintosh, and an advanced dynamic linking system that allowed for easy expansion.

Windows
Main article: History of Microsoft Windows
Microsoft Windows was first released in 1985 as an add-on to DOS. Similarly to Mac OS, it also lacked important features at first but eventually acquired them in later releases. This product line would continue until the release of the Windows 9x series and end with Windows Me. At the same time, Microsoft had been developing Windows NT since 1993, an operating system intended for the high-end and business user. This line started with the release of Windows NT 3.1, and the main product line was replaced with the release of the NT-based Windows 2000. The highly successful Windows XP brought these two product lines together, combining the stability of the NT line and the visual appeal of the 9x series. It uses the NT kernel, which is generally considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Manager, but several subsystems run in user mode.

Development of microkernels


Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernels) was created to demonstrate that microkernels are not necessarily slow. Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces. QNX is a real-time operating system with a minimalistic microkernel design that has been developed since 1982, having been far more successful than Mach in achieving the goals of the microkernel paradigm. It is principally used in embedded systems and in situations where software is not allowed to fail, such as the robotic arms on the Space Shuttle and machines that control the grinding of glass to extremely fine tolerances, where a tiny mistake may cost hundreds of thousands of dollars, as in the case of the mirror of the Hubble Space Telescope.
