
Chapter 3: OPERATING SYSTEM SECURITY Protected Objects and Methods of Protection

Protected Objects
In fact, the rise of multiprogramming meant that several aspects of a computing system required protection:

memory
sharable I/O devices, such as disks
serially reusable I/O devices, such as printers and tape drives
sharable programs and subprocedures
networks
sharable data

Security Methods of Operating Systems
The basis of protection is separation: keeping one user's objects separate from other users' objects.
1) Physical separation
o Different processes use different physical objects
o E.g., different printers for different confidentiality levels of output
2) Temporal separation
o Processes having different security reqs executed at different times
3) Logical separation
o Illusion that the OS executes processes only for a single user
4) Cryptographic separation
o Processes conceal their data and computations from other processes
5) Combinations of the above

***IMP***
Strength of security via separation (least to most secure):
o Logical separation
o Temporal separation
o Physical separation
Complexity of implementation of separation (least to most complex):
o Physical separation
o Temporal separation
o Logical separation
o Cryptographic separation
Resource utilization in different kinds of separation:
o Poor: physical separation / temporal separation
o Good: logical separation / cryptographic separation

1) Memory and Address Protection


Most obvious protection: Protect pgm memory from being affected by other pgms

a. Fence
b. Relocation
c. Base/Bounds Registers
d. Tagged Architecture
e. Segmentation
f. Paging

Memory and Address Protection


The most obvious problem of multiprogramming is preventing one program from affecting the data and programs in the memory space of other users. Fortunately, protection can be built into the hardware mechanisms that control efficient use of memory, so solid protection can be provided at essentially no additional cost.

1) Fence
The simplest form of memory protection was introduced in single-user operating systems to prevent a faulty user program from destroying part of the resident portion of the operating system. As its name implies, a fence is a method to confine users to one side of a boundary. In one implementation, the fence was a predefined memory address, enabling the operating system to reside on one side and the user to stay on the other. Another implementation used a hardware register, often called a fence register, containing the address of the end of the operating system.

1) Fence
Confining users to one side of a boundary
E.g., a predefined memory address n between the OS and the user
A user pgm instruction referencing an address below n (the OS's side of the fence) is not allowed to execute

1. Fixed fence (wastes space if unused by the OS, or blocks the OS from growing)

or
2. Variable fence (cf. Fig. 4-2, p. 185)
o Uses a fence register, a h/w register

(Figures: fixed fence; variable fence)

***IMP*** Relocation
Pgms written as if starting at location 0 in memory
Actually, starting at location n determined by the OS
Before a user instruction is executed, each address is relocated by adding the relocation factor n to it
Relocation factor = starting address of the pgm in memory
The fence register (h/w register) plays the role of a relocation register as well
Because adding n to pgm addresses prevents the pgm from accessing addresses below n

2) Base/Bounds Registers
A major advantage of an operating system with fence registers is the ability to relocate; this characteristic is especially important in a multiuser environment. With two or more users, none can know in advance where a program will be loaded for execution. The relocation register solves the problem by providing a base or starting address. All addresses inside a program are offsets from that base address. A variable fence register is generally known as a base register.
Fence registers provide a lower bound (a starting address) but not an upper one. An upper bound can be useful in knowing how much space is allotted and in checking for overflows into "forbidden" areas. To overcome this difficulty, a second register is often added, as shown in the figure. The second register, called a bounds register, is an upper address limit, in the same way that a base or fence register is a lower address limit.

(Figure: base/bounds registers)
This technique protects a program's addresses from modification by another user. When execution changes from one user's program to another's, the operating system must change the contents of the base and bounds registers to reflect the true address space for that user. This change is part of the general preparation, called a context switch, that the operating system must perform when transferring control from one user to another.
With a pair of base/bounds registers, a user is perfectly protected from outside users, or, more correctly, outside users are protected from errors in any other user's program. Erroneous addresses inside a user's address space can still affect that program, because base/bounds checking guarantees only that each address is inside the user's address space. For example, a user error might occur when a subscript is out of range or an undefined variable generates an address reference within the user's space but, unfortunately, inside the executable instructions of the user's program. In this manner, a user can accidentally store data on top of instructions. Such an error can let a user inadvertently destroy a program, but (fortunately) only the user's own program.
We can solve this overwriting problem by using another pair of base/bounds registers, one for the instructions (code) of the program and a second for the data space. Then, only instruction fetches (instructions to be executed) are relocated and checked with the first register pair, and only data accesses (operands of instructions) are relocated and checked with the second register pair. The use of two pairs of base/bounds registers is shown in the figure. Although two pairs of registers do not prevent all program errors, they limit the effect of data-manipulating instructions to the data space. The pairs of registers offer another more important advantage: the ability to split a program into two pieces that can be relocated separately.
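A minimal sketch (not from the text; the class name and register values are invented) of how a base/bounds pair relocates a program-relative address and traps references outside the user's space:

class BaseBoundsMMU:
    """Relocate program-relative addresses and reject out-of-range ones."""

    def __init__(self, base, bounds):
        self.base = base      # starting (relocation) address of the program
        self.bounds = bounds  # first address beyond the program's space

    def translate(self, logical_address):
        physical = self.base + logical_address   # relocation: add the base
        if physical < self.base or physical >= self.bounds:
            raise MemoryError(f"address {logical_address} outside user space")
        return physical


mmu = BaseBoundsMMU(base=40_000, bounds=60_000)
print(mmu.translate(100))      # 40100 -> allowed
try:
    mmu.translate(25_000)      # would fall past the bounds register -> trap
except MemoryError as e:
    print("trap:", e)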

3) Tagged Architecture
***IMP***

Problem with base/bounds registers: high (coarse) granularity of access rights (ARs)
o A module can allow another module to access all or none of its data
o All or none of the data within the limits of the data base/bounds registers
Solution: tagged architecture (gives low, i.e., fine, granularity of access rights)
o Every word of machine memory has one or more tag bits defining access rights to this word (a h/w solution!)

Access bits set by the OS
Tested every time an instruction accesses a location

Benefit of tagged architecture:
o Low (good!) granularity of memory access control, at the memory-word level (see the sketch below)
Problems with tagged architecture:
o Requires special h/w
o Incompatible with code of most OSs
   An OS compatible with it must accommodate tags in each memory word and test each memory word accessed
   Rewriting the OS would be costly
o Higher memory costs (extra bits per word)
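For illustration only (a real tagged machine performs this check in hardware; the tag layout here is invented), a short Python simulation of per-word tags being tested on every access:

R, W, X = 0b001, 0b010, 0b100   # access-right bits carried by each word

memory = [
    {"tag": R,     "value": 42},    # read-only data word
    {"tag": R | W, "value": 0},     # read/write data word
    {"tag": X,     "value": 0x90},  # execute-only word (an instruction)
]

def access(addr, requested):
    word = memory[addr]
    if word["tag"] & requested != requested:   # tag tested on every access
        raise PermissionError(f"access violation at word {addr}")
    return word["value"]

print(access(0, R))              # permitted: word 0 is readable
try:
    access(0, W)                 # word 0 is tagged read-only
except PermissionError as e:
    print("trap:", e)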

4) Segmentation
Benefits addressing and enhances memory protection for free
Gives the effect of an unbounded number of base/bounds registers

Pgm segmentation: program divided into logical pieces (called segments)
E.g., pieces are: code for a single procedure / data of an array / collection of local data values
Consecutive pgm segments can easily be stored in nonconsecutive memory locations

Segmentation offers these security benefits:


Each address reference is checked for protection.
Many different classes of data items can be assigned different levels of protection.
Two or more users can share access to a segment, with potentially different access rights.
A user cannot generate an address or access to an unpermitted segment.
(These checks are illustrated in the sketch below.)
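A small sketch of the checks just listed, assuming a hypothetical segment table with per-segment lengths and access rights:

segment_table = {
    "MAIN":   {"base": 20_000, "length": 1_000, "rights": {"execute"}},
    "DATA_A": {"base": 45_000, "length": 500,   "rights": {"read", "write"}},
}

def resolve(segment, offset, wanted):
    entry = segment_table[segment]         # unknown segment -> KeyError (no access)
    if offset >= entry["length"]:          # every reference is length-checked
        raise MemoryError("offset beyond end of segment")
    if wanted not in entry["rights"]:      # and checked for protection
        raise PermissionError(f"{wanted} not allowed on {segment}")
    return entry["base"] + offset

print(resolve("DATA_A", 10, "read"))       # 45010
try:
    resolve("MAIN", 10, "write")           # code segment is execute-only
except PermissionError as e:
    print("trap:", e)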

Paging
One alternative to segmentation is paging. The program is divided into equal-sized pieces called pages, and memory is divided into equal-sized units called page frames. (For implementation reasons, the page size is usually chosen to be a power of two between 512 and 4096 bytes.) As with segmentation, each address in a paging scheme is a two-part object, this time consisting of <page, offset>.

Control of Access to General Objects


Protecting memory is a specific case of the more general problem of protecting objects. As multiprogramming has developed, the numbers and kinds of objects shared have also increased. Here are some examples of the kinds of objects for which protection is desirable:

memory
a file or data set on an auxiliary storage device
an executing program in memory
a directory of files
a hardware device
a data structure, such as a stack
a table of the operating system
instructions, especially privileged instructions
passwords and the user authentication mechanism
the protection mechanism itself

***IMP*** Objects and subjects accessing them
General objects in an OS that need protection (examples):
o Memory / file or data set on an auxiliary storage device
o Pgm executing in memory / directory of files / hardware device
o Data structure / OS tables / instructions, esp. privileged instructions
o Passwords and authentication mechanism / the protection mechanism itself
Subjects:
o User / administrator / programmer / pgm
o Another object / anything that seeks to use an object

Complementary goals in access control:


1) Check every access
o Access is not granted forever; it can be suspended or revoked
2) Enforce least privilege
o Give a subject access to the smallest number of objects necessary to perform the subject's task
3) Verify acceptable use
o E.g., verify whether the requested kind of access is acceptable
o E.g., R is OK, W/X is not

Control Access to the Object


1. Directory
2. Access Control List
3. Access Control Matrix

1) Directory (per subject)
File directory mechanism to control file access
Unique object owner
Owner controls access rights: assigns/revokes them
Access rights (ARs): read, write, execute (possibly others)
Each user has an access rights directory
Example: (User A owns O1 and O3. User B owns O2, O4, O5)

Advantage:
o Easy to implement
o Just one list (directory) per user

Difficulties:
o All user directories get too big for a large # of shared objects, because each shared object appears in the directory of each user sharing it
o Maintenance difficulties:
   Deletion of a shared object requires deleting its entry from each directory referencing it
   Revocation of access: if owner A revokes access rights for object X from every subject, the OS must search the directories of all subjects to remove their entries for X

2) Access Control List (per object)


There is one such list for each object, and the list shows all subjects who should have access to the object and what their access is. This approach differs from the directory list because there is one access control list per object; a directory is created for each subject. Although this difference seems small, there are some significant advantages.

3) Access Control Matrix (per subject, per object)

We can think of the directory as a listing of objects accessible by a single subject, and the access list as a table identifying subjects that can access a single object. The data in these two representations are equivalent, the distinction being the ease of use in given situations.
As an alternative, we can use an access control matrix, a table in which each row represents a subject, each column represents an object, and each entry is the set of access rights for that subject to that object. An example representation of an access control matrix is shown in the table. In general, the access control matrix is sparse (meaning that most cells are empty): most subjects do not have access rights to most objects. The access control matrix can be represented as a list of triples, having the form <subject, object, rights>. Searching a large number of these triples is inefficient enough that this implementation is seldom used.
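A sketch of the sparse-matrix idea, with invented subjects and objects; the same dictionary doubles as the triple representation, and slicing it by column or by row recovers an access control list or a directory:

acm = {
    ("user_a", "file_1"): {"read", "write"},
    ("user_a", "file_3"): {"read"},
    ("user_b", "file_2"): {"read", "write", "execute"},
}

def allowed(subject, obj, right):
    return right in acm.get((subject, obj), set())   # empty cell => no access

print(allowed("user_a", "file_1", "write"))  # True
print(allowed("user_b", "file_1", "read"))   # False: the cell is empty (sparse matrix)

# Column view = access control list for one object; row view = directory for one subject.
acl_file_1 = {s: r for (s, o), r in acm.items() if o == "file_1"}
dir_user_a = {o: r for (s, o), r in acm.items() if s == "user_a"}
print(acl_file_1, dir_user_a)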

File Protection Mechanisms


Basic forms of protection:
All-none protection
Group protection

1) All-none protection (in early IBM OSs)
Public files (all) or files protected by passwords (none)
Access to public files required knowing their names
Ignorance (not knowing a file's name) was an extra barrier

Problems with this approach:
o Lack of trust for public files in large systems
   Difficult to limit access to trusted users only
o Complexity for password-protected files
   A human response (password) required for each file access
o File names easy to find
   File listings eliminate the ignorance barrier

Group Protection
Because the all-or-nothing approach has so many drawbacks, researchers sought an improved way to protect files. They focused on identifying groups of users who had some common relationship. In a typical Unix+ implementation, the world is divided into three classes: the user, a trusted working group associated with the user, and the rest of the users. For simplicity we can call these classes user, group, and world. Windows NT+ uses groups such as Administrators, Power Users, Users, and Guests. (NT+ administrators can also create other groups.)
All authorized users are separated into groups. A group may consist of several members working on a common project, a department, a class, or a single user. The basis for group membership is the need to share. The group members have some common interest and therefore are assumed to have files to share with the other group members. In this approach, no user belongs to more than one group. (Otherwise, a member belonging to groups A and B could pass along an A file to another B group member.)
When creating a file, a user defines access rights to the file for the user, for other members of the same group, and for all other users in general. Typically, the choices for access rights are a limited set, such as {update, read-execute, read, write-create-delete}. For a particular file, a user might declare read-only access to the general world, read and update access to the group, and all rights to the user. This approach would be suitable for a paper being developed by a group, whereby the different members of the group might modify sections being written within the group. The paper itself should be available for people outside the group to review but not change.
A key advantage of the group protection approach is its ease of implementation. A user is recognized by two identifiers (usually numbers): a user ID and a group ID. These identifiers are stored in the file directory entry for each file and are obtained by the operating system when a user logs in. Therefore, the operating system can easily check whether a proposed access to a file is requested from someone whose group ID matches the group ID for the file to be accessed.

Advantage: ease of implementation
o The OS recognizes a user by user ID and group ID (upon login)
o The file directory stores, for each file, the file owner's user ID and the file owner's group ID

Problems with group protection:
a) A user can't belong to more than one group
   Solution: a single user gets multiple accounts
   E.g., Tom gets accounts Tom1 and Tom2; Tom1 is in Group1, Tom2 in Group2
   Problem: files owned by Tom1 can't be accessed by Tom2 (unless they are public, i.e., available to others)
   Problems: inconvenience, redundancy (e.g., if the admin copies Tom1's files to Tom2's acct)
b) A user might become responsible for file sharing
   E.g., the admin makes files from all groups visible to a user (e.g., by copying them into one of the user's accts and making them the user's private files)
   => The user becomes responsible for manually preventing unauthorized sharing of his files between his different groups
c) Limited file-sharing choices
   Only 3 choices for any file: private, group, public
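A rough sketch of the user/group/world decision, with invented IDs and a simplified rights encoding (real Unix permission bits are more involved):

file_meta = {
    "owner_uid": 1001,
    "owner_gid": 200,
    "rights": {"user": {"read", "write"}, "group": {"read"}, "world": set()},
}

def may_access(uid, gid, wanted, meta):
    # Classify the requester, then look up the rights for that class.
    if uid == meta["owner_uid"]:
        cls = "user"
    elif gid == meta["owner_gid"]:     # group ID obtained by the OS at login
        cls = "group"
    else:
        cls = "world"
    return wanted in meta["rights"][cls]

print(may_access(1001, 200, "write", file_meta))  # owner: True
print(may_access(1002, 200, "write", file_meta))  # group member: False (read-only)
print(may_access(1003, 999, "read",  file_meta))  # world: False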

Individual Permissions

In spite of their drawbacks, the file protection schemes we have described are relatively simple and straightforward. The simplicity of implementing them suggests other easy-to-manage methods that provide finer degrees of security while associating permission with a single file.

Persistent Permission
From other contexts you are familiar with persistent permissions. The usual implementation of such a scheme uses a name (you claim a dinner reservation under the name of Sanders), a token (you show your driver's license or library card), or a secret (you say a secret word or give the club handshake). Similarly, in computing you are allowed access by being on the access list, presenting a token or ticket, or giving a password. User access permissions can be required for any access or only for modifications (write access). All these approaches present obvious difficulties in revocation: Taking someone off one list is easy, but it is more complicated to find all lists authorizing someone and remove him or her. Reclaiming a token or password is even more challenging.

Temporary Acquired Permission


Unix+ operating systems provide an interesting permission scheme based on a three-level user-group-world hierarchy. The Unix designers added a permission called set userid (suid). If this protection is set for a file to be executed, the protection level is that of the file's owner, not the executor. To see how it works, suppose Tom owns a file and allows Ann to execute it with suid. When Ann executes the file, she has the protection rights of Tom, not of herself.
This peculiar-sounding permission has a useful application. It permits a user to establish data files to which access is allowed only through specified procedures. For example, suppose you want to establish a computerized dating service that manipulates a database of people available on particular nights. Sue might be interested in a date for Saturday, but she might have already refused a request from Jeff, saying she had other plans. Sue instructs the service not to reveal to Jeff that she is available. To use the service, Sue, Jeff, and others must be able to read the file and write to it (at least indirectly) to determine who is available or to post their availability. But if Jeff can read the file directly, he would find that Sue has lied. Therefore, your dating service must force Sue and Jeff (and all others) to access this file only through an access program that would screen the data Jeff obtains. But if the file access is limited to read and write by you as its owner, Sue and Jeff will never be able to enter data into it.
The solution is the Unix SUID protection. You create the database file, giving only you access permission. You also write the program that is to access the database, and save it with the SUID protection. Then, when Jeff executes your program, he temporarily acquires your access permission, but only during execution of the program.
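A conceptual sketch only (names and data invented): the SUID idea of separating the caller's identity from the rights the program runs with, so the protected file is reachable only through the screening program:

DATABASE_OWNER = "service"                           # owner of the dating-service file
database = {"Sue": "available Saturday, but not to Jeff"}

def open_database(effective_user):
    # Direct opens by Sue or Jeff are refused; only the owner's rights suffice.
    if effective_user != DATABASE_OWNER:
        raise PermissionError("only the owner may open the database")
    return database

def query(real_user, person):
    # With SUID set, the program runs with the owner's rights (effective user)
    # while remembering who actually invoked it (real user).
    db = open_database(effective_user=DATABASE_OWNER)
    record = db.get(person, "no entry")
    # The access program screens what the caller is allowed to see.
    if real_user == "Jeff" and person == "Sue":
        return "no information"
    return record

print(query("Jeff", "Sue"))    # screened: prints "no information"
print(query("Ann", "Sue"))     # prints Sue's record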

Jeff never has direct access to the file because your program will do the actual file access. When Jeff exits from your program, he regains his own access rights and loses yours. Thus, your program can access the file, but the program must display to Jeff only the data Jeff is allowed to see.
This mechanism is convenient for system functions that general users should be able to perform only in a prescribed way. For example, only the system should be able to modify the file of users' passwords, but individual users should be able to change their own passwords any time they wish. With the SUID feature, a password change program can be owned by the system, which will therefore have full access to the system password table. The program to change passwords also has SUID protection so that when a normal user executes it, the program can modify the password file in a carefully constrained way on behalf of the user.

Per-Object and Per-User Protection
The primary limitation of these protection schemes is the ability to create meaningful groups of related users who should have similar access to related objects. The access control lists or access control matrices described earlier provide very flexible protection. Their disadvantage is for the user who wants to allow access to many users and to many different data sets; such a user must still specify each data set to be accessed by each user. As a new user is added, that user's special access rights must be specified by all appropriate users.

User Authentication
***IMP***

a. Introduction
b. Use of passwords
c. Attacks on passwords
d. Password selection criteria
e. One-time passwords (challenge-response systems)
f. The authentication process
g. Authentication other than passwords
h. Conclusions

An operating system bases much of its protection on knowing who a user of the system is. In real-life situations, people commonly ask for identification from people they do not know: a bank employee may ask for a driver's license before cashing a check, library employees may require some identification before charging out books, and immigration officials ask for passports as proof of identity. In-person identification is usually easier than remote identification. For instance, some universities do not report grades over the telephone because the office workers do not necessarily know the students calling. However, a professor who recognizes the voice of a certain student can release that student's grades. Over time, organizations and systems have developed means of authentication, using documents, voice recognition, fingerprint and retina matching, and other trusted means of identification.
In computing, the choices are more limited and the possibilities less secure. Anyone can attempt to log in to a computing system. Unlike the professor who recognizes a student's voice, the computer cannot recognize electrical signals from one person as being any different from those of anyone else. Thus, most computing authentication systems must be based on some knowledge shared only by the computing system and the user.
Authentication mechanisms use any of three qualities to confirm a user's identity:
1. Something the user knows. Passwords, PINs, passphrases, a secret handshake, and mother's maiden name are examples of what a user may know.
2. Something the user has. Identity badges, physical keys, a driver's license, or a uniform are common examples of things people have that make them recognizable.
3. Something the user is. These authenticators, called biometrics, are based on a physical characteristic of the user, such as a fingerprint, the pattern of a person's voice, or a face (picture). These authentication methods are old (we recognize friends in person by their faces or on a telephone by their voices) but are just starting to be used in computer authentication.

***IMP*** I&A can be based on:


1) What entity knows - passwords
   E.g., simple password, challenge-response authentication
2) What entity is - biometrics
   E.g., fingerprints, retinal characteristics
3) What entity has - access tokens
   E.g., badges, smart cards
4) Where entity is - location
   E.g., in the accounting department
Any combination of the above - hybrid approaches

Passwords as Authenticators
The most common authentication mechanism for user to operating system is a password, a "word" known to computer and user. Although password protection seems to offer a relatively secure system, human practice sometimes degrades its quality.

Use of Passwords
Passwords are mutually agreed-upon code words, assumed to be known only to the user and the system. In some cases a user chooses passwords; in other cases the system assigns them. The length and format of the password also vary from one system to another. Even though they are widely used, passwords suffer from some difficulties of use:

Loss. Depending on how the passwords are implemented, it is possible that no one will be able to replace a lost or forgotten password. The operators or system administrators can certainly intervene and unprotect or assign a particular password, but often they cannot determine what password a user has chosen; if the user loses the password, a new one must be assigned.

Use. Supplying a password for each access to a file can be inconvenient and time consuming.

Disclosure. If a password is disclosed to an unauthorized individual, the file becomes immediately accessible. If the user then changes the password to reprotect the file, all the other legitimate users must be informed of the new password because their old password will fail.

Revocation. To revoke one user's access right to a file, someone must change the password, thereby causing the same problems as disclosure.

Loose-Lipped Systems
So far the process seems secure, but in fact it has some vulnerabilities. To see why, consider the actions of a would-be intruder. Authentication is based on knowing the <name, password> pair. A complete outsider is presumed to know nothing of the system. Suppose the intruder attempts to access a system in the following manner. (In the following examples, the system messages are in uppercase, and the user's responses are in lowercase.)
WELCOME TO THE XYZ COMPUTING SYSTEMS
ENTER USER NAME: adams
INVALID USER NAME - UNKNOWN USER
ENTER USER NAME:

We assumed that the intruder knew nothing of the system, but without having to do much, the intruder found out that adams is not the name of an authorized user. The intruder could try other common names, first names, and likely generic names such as system or operator to build a list of authorized users. An alternative arrangement of the login sequence is shown below.
WELCOME TO THE XYZ COMPUTING SYSTEMS
ENTER USER NAME: adams
ENTER PASSWORD: john
INVALID ACCESS
ENTER USER NAME:

This system notifies a user of a failure only after accepting both the user name and the password. The failure message should not indicate whether it is the user name or password that is unacceptable. In this way, the intruder does not know which failed. These examples also gave a clue as to which computing system is being accessed. The true outsider has no right to know that, and legitimate insiders already know what system they have accessed. In the example below, the user is given no information until the system is assured of the identity of the user.
ENTER USER NAME: adams
ENTER PASSWORD: john
INVALID ACCESS
ENTER USER NAME: adams
ENTER PASSWORD: johnq
WELCOME TO THE XYZ COMPUTING SYSTEMS
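A hedged sketch of this behavior: a login check that reports only a generic failure, so an intruder cannot tell whether the name or the password was wrong (the user table and hash choice are invented for illustration):

import hashlib

users = {"adams": hashlib.sha256(b"johnq").hexdigest()}   # name -> password hash

def login(name, password):
    stored = users.get(name)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    if stored is None or stored != supplied:
        return "INVALID ACCESS"   # same message for unknown user or wrong password
    return "WELCOME TO THE XYZ COMPUTING SYSTEMS"

print(login("adams", "john"))     # INVALID ACCESS
print(login("smith", "john"))     # INVALID ACCESS (no hint that 'smith' is unknown)
print(login("adams", "johnq"))    # WELCOME ...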

***IMP*** Attacks on passwords

Kinds of password attacks:
i. Try all possible pwds (exhaustive, brute-force attack)
ii. Try many probable pwds
iii. Try likely pwds
iv. Search the system list of pwds
v. Find pwds by exploiting indiscreet users (social engg)

Try all possible pwds (exhaustive, brute-force attack)

Try all possible = exhaustive attack / brute-force attack
Approach: try all possible character combinations
Example
Suppose:
o only 26 chars (a-z) allowed in a pwd
o pwd length: up to 8 chars
nr_of_pwds = sum over i = 1..8 of nr_of_i-char_pwds = 26^1 + 26^2 + ... + 26^8 < 26^9, about 5 * 10^12
If the attacker's computer checks 1 pwd per microsecond => 5 * 10^12 microseconds = 5 * 10^6 s, about 2 months to check all possible char combinations for a given pwd (max. exhaustive attack time)
With a uniform distribution (neither good nor bad luck), the expected successful attack time is 1/2 of the max. exhaustive attack time (about 1 month)
Is the attack target worth such an investment by the attacker? It might be, e.g., for a bank acct or credit card nr

In an exhaustive or brute force attack, the attacker tries all possible passwords, usually in some automated fashion. Of course, the number of possible passwords depends on the implementation of the particular computing system. If we were to use a computer to create and try each password at a rate of checking one password per millisecond, it would take on the order of 150 years to test all passwords. But if we can speed up the search to one password per microsecond, the work factor drops to about two months. This amount of time is reasonable if the reward is large. For instance, an intruder may try to break the password on a file of credit card numbers or bank account information.
But the break-in time can be made more tractable in a number of ways. Searching for a single particular password does not necessarily require all passwords to be tried; an intruder needs to try only until the correct password is identified. If the set of all possible passwords were evenly distributed, an intruder would likely need to try only half of the password space: the expected number of searches to find any particular password. However, an intruder can also use to advantage the fact that passwords are not evenly distributed. Because a password has to be remembered, people tend to pick simple passwords. This feature reduces the size of the password space.

Q) Find the required minimum length s of passwords so that the probability P of a successful attack is at most 0.5 over a 365-day guessing attack period (assuming 10^4 guesses per second and a 96-character alphabet).
Solution
We know that: P >= TG / N, where
o P - probability of a successful attack
o T - number of time units [sec] during which guessing occurs
o G - number of guesses per time unit [sec]
o N - number of possible passwords
P >= TG / N => N >= TG / P
Calculations:
N >= TG / P = (365 days * 24 hrs * 60 min * 60 s) * 10^4 / 0.5 = 6.31 * 10^11
Choose the password length s such that at least N passwords are possible, i.e.,
sum over j = 1..s of 96^j >= N = 6.31 * 10^11
(96 one-char words + 96^2 two-char words + ... + 96^s s-char words)
=> s >= 6
i.e., passwords must be at least 6 chars long
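The same calculation can be checked with a few lines of Python, using the assumed numbers above (365-day attack, 10^4 guesses per second, P = 0.5, 96 printable characters):

T = 365 * 24 * 60 * 60      # guessing period in seconds
G = 10_000                  # guesses per second
P = 0.5                     # acceptable probability of a successful guess

N_required = T * G / P      # need at least this many possible passwords
print(f"N >= {N_required:.2e}")          # about 6.31e+11

s, total = 0, 0
while total < N_required:
    s += 1
    total += 96 ** s        # add all s-character passwords over a 96-char alphabet
print("minimum password length:", s)     # 6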

Try many probable pwds


Can reduce the expected successful attack time by checking the most probable char combinations for a pwd first:
o Check short pwds first
o Check common words, etc., first
Example: check short pwds first
People prefer short pwds => check pwds of length <= k first
Assume 1 pwd checked per microsecond (1 pwd per ms in the text, p. 213)
k=3: 26^1 + 26^2 + 26^3 = 18,278 possible pwds => 18,278 microseconds, about 18.3 ms to check all combinations
k=4: ... about 475 ms, about 0.5 s
k=5: ... about 12,356 ms, about 12.4 s

Penetrators searching for passwords realize these very human characteristics and use them to their advantage. Therefore, penetrators try techniques that are likely to lead to rapid success. If people prefer short passwords to long ones, the penetrator will plan to try all passwords but to try them in order by length. There are only 26^1 + 26^2 + 26^3 = 18,278 passwords of length 3 or less. At the assumed rate of one password per millisecond, all of these passwords can be checked in 18.278 seconds, hardly a challenge with a computer. Even expanding the tries to 4 or 5 characters raises the count only to 475 seconds (about 8 minutes) or 12,356 seconds (about 3.5 hours), respectively. This analysis assumes that people choose passwords such as vxlag and msms as often as they pick enter and beer.

Try likely pwds


People are predictable in pwd selection
The attacker can restrict the attack dictionary first to names of: family, pets, celebrities, sports stars, streets, projects, ...
Example: 1979 study of pwds [Morris and Thompson]
o Even single-char pwds!
o 86% of pwds extremely simplistic!
o All could be discovered in a week, even at a 1 msec/pwd checking rate
Study repeated in 1990 [Klein] and 1992 [Spafford] with similarly dismal results!
o Klein: 21% guessed in a week
o Spafford: ~29% of pwds consisted of lowercase a-z only!

12 steps an attacker might try (starting with the most probable guesses; see the sketch after this list):
1) No password
2) Same as user ID
3) User's name or derived from it
4) Common word list plus common names and patterns
   Ex. common patterns: asdfg (consecutive keyboard keys), aaaa
5) Short college dictionary
6) Complete English word list
7) Common non-English language dictionaries
8) Short college dictionary with capitalizations & substitutions
   E.g., PaSsWoRd, pa$$w0rd
   Substitutions include: a -> @, e -> 3, i/l -> 1, o -> 0, s -> $, ...
9) Complete English with capitalization and substitutions
10) Common non-English dictionaries with capitalization and substitutions
11) Brute force, lowercase alphabetic characters
12) Brute force, full character set
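A miniature sketch of steps 4 and 8 above: trying a small candidate list, with capitalizations and substitutions, against a captured hash (the word list, hash function, and target are invented for illustration):

import hashlib

target_hash = hashlib.sha256(b"P@$$w0rd").hexdigest()   # hash the attacker captured

base_words = ["password", "letmein", "qwerty"]
substitutions = str.maketrans({"a": "@", "e": "3", "o": "0", "s": "$"})

def variants(word):
    # Dictionary-style guesses: the word itself, a capitalization, and
    # common character substitutions (a -> @, e -> 3, o -> 0, s -> $).
    yield word
    yield word.capitalize()
    yield word.translate(substitutions)
    yield word.capitalize().translate(substitutions)

for word in base_words:
    for candidate in variants(word):
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            print("guessed:", candidate)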

Search system list of pwds


The system must keep a list of passwords to authenticate logging-in users
An attacker may try to capture the pwd list

Pwd lists:
1) Plaintext system pwd file
2) Encrypted pwd file
   a. Conventional encryption
   b. One-way encryption

Plaintext System Password List


To validate passwords, the system must have a way of comparing entries with actual passwords. Rather than trying to guess a user's password, an attacker may instead target the system password file. Why guess when with one table you can determine all passwords with total accuracy?
On some systems, the password list is a file, organized essentially as a two-column table of user IDs and corresponding passwords. This information is certainly too obvious to leave out in the open. Various security approaches are used to conceal this table from those who should not see it. You might protect the table with strong access controls, limiting access to the operating system. But even this tightening of control is looser than it should be, because not every operating system module needs or deserves access to this table. For example, the operating system scheduler, accounting routines, or storage manager have no need to know the table's contents.
Unfortunately, in some systems, there are n+1 known users: n regular users and the operating system. The operating system is not partitioned, so all its modules have access to all privileged information. This monolithic view of the operating system implies that a user who exploits a flaw in one section of the operating system has access to all the system's deepest secrets. A better approach is to limit table access to the modules that need access: the user authentication module and the parts associated with installing new users, for example.

If the table is stored in plain sight, an intruder can simply dump memory at a convenient time to access it. Careful timing may enable a user to dump the contents of all of memory and, by exhaustive search, find values that look like the password table.
System backups can also be used to obtain the password table. To be able to recover from system errors, system administrators periodically back up the file space onto some auxiliary medium for safe storage. In the unlikely event of a problem, the file system can be reloaded from a backup, with a loss only of changes made since the last backup. Backups often contain only file contents, with no protection mechanism to control file access. (Physical security and access controls to the backups themselves are depended on to provide security for the contents of backup media.) If a regular user can access the backups, even ones from several weeks, months, or years ago, the password tables stored in them may contain entries that are still valid.
Finally, the password file is a copy of a file stored on disk. Anyone with access to the disk or anyone who can overcome file access restrictions can obtain the password file.

***IMP*** Plaintext system pwd file
Protected w/ strong access controls
o Only the OS can access it
o Better: only those OS modules that really need access to the pwd list can access it
o Otherwise any OS penetration is a pwd file penetration
Attacker's ways of getting plaintext pwd files:
o Memory dump and searching for the pwd table
o Getting the pwd table from system backups
   Backups often include no file protection; security of backups relies on physical security and access controls
o Getting the pwd file by attacking the disk

Encrypted Password File


There is an easy way to foil an intruder seeking passwords in plain sight: encrypt them. Frequently, the password list is hidden from view with conventional encryption or one-way ciphers. With conventional encryption, either the entire password table is encrypted or just the password column. When a user's password is received, the stored password is decrypted, and the two are compared. Even with encryption, there is still a slight exposure because for an instant the user's password is available in plaintext in main memory. That is, the password is available to anyone who could obtain access to all of memory.

With one-way encryption, the password file can be stored in plain view. For example, the password table for the Unix operating system can be read by any user unless special access controls have been installed. Because the contents are encrypted, backup copies of the password table are no longer a problem.
There is always the possibility that two people might choose the same password, thus creating two identical entries in the password file. Even though the entries are encrypted, each user will know the plaintext equivalent. For instance, if Bill and Kathy both choose their passwords on April 1, they might choose APRILFOOL as a password. Bill might read the password file and notice that the encrypted version of his password is the same as Kathy's.

***IMP*** Encrypted pwd file
Two approaches: a. conventional encryption / b. one-way encryption
a. Conventional encryption
o Encrypts the entire pwd table OR encrypts only the pwd column of the table
o Pwd comparison procedure:
   When a logging-in principal provides a (cleartext) pwd, the OS decrypts the stored pwd from the pwd table
   The OS compares the principal's (cleartext) pwd with the decrypted pwd
o Exposure 1: the decrypted pwd is for an instant in memory
   An attacker who penetrates memory can get it
o Exposure 2: an attacker finding the encryption key

b. One-way encryption (hashing)
Better solution - no pwd exposure in memory
The pwd is encrypted with a one-way hash function and stored
Pwd comparison procedure:
o When a logging-in principal provides a (cleartext) pwd, the OS hashes the principal's pwd (with the one-way function)
o The hash of the principal's pwd is compared with the pwd hash from the pwd table

Advantages of one-way encryption:
o The pwd file can be stored in plain view
o Backup files are no longer a problem

Problem: if Alice and Bill selected the same pwd (e.g., Kalamazoo) and Bill reads the pwd file (stored in plain view), Bill learns Alice's pwd
Solution: a salt value is used to perturb the hash function
The hashed value and the salt are stored in the pwd table:
   [Alice, saltAlice, E(pwdAlice + saltAlice)] stored for Alice
   [Bill, saltBill, E(pwdBill + saltBill)] stored for Bill
=> hashed Alice's pwd differs from hashed Bill's pwd (even if pwdAlice = pwdBill)
When principal X logs in, the system gets saltX and calculates E(pwdX + saltX)
If the result is the same as the hash stored for X, X is authenticated

Unix+ circumvents this vulnerability by using a password extension, called the salt. The salt is a 12-bit number formed from the system time and the process identifier. Thus, the salt is likely to be unique for each user, and it can be stored in plaintext in the password file. The salt is concatenated to Bill's password (pw) when he chooses it; E(pw+saltB) is stored for Bill, and his salt value is also stored. When Kathy chooses her password, the salt is different because the time or the process number is different. Call this new one saltK. For her, E(pw+saltK) and saltK are stored. When either person tries to log in, the system fetches the appropriate salt from the password table and combines that with the password before performing the encryption. The encrypted versions of (pw+salt) are very different for these two users. When Bill looks down the password list, the encrypted version of his password will not look at all like Kathy's.
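A minimal sketch of the salted one-way scheme, using SHA-256 for illustration (the classic Unix scheme used a DES-based hash and a 12-bit salt; the names and salt length here are illustrative):

import hashlib, os

password_table = {}   # username -> (salt, hash); may be stored in plain view

def enroll(user, password):
    salt = os.urandom(8).hex()    # unique per user, stored in plaintext
    digest = hashlib.sha256((password + salt).encode()).hexdigest()   # E(pwd + salt)
    password_table[user] = (salt, digest)

def verify(user, password):
    salt, stored = password_table[user]
    attempt = hashlib.sha256((password + salt).encode()).hexdigest()
    return attempt == stored      # hashes compared; the stored pwd is never decrypted

enroll("alice", "Kalamazoo")
enroll("bill", "Kalamazoo")       # same password as Alice, but a different salt
print(password_table["alice"][1] == password_table["bill"][1])  # False: hashes differ
print(verify("alice", "Kalamazoo"))                              # True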
