
Access Control Models: From the real world to trusted computing

Lecture Motivation

We have looked at protocols for distributing and establishing keys used for authentication and confidentiality.

But who should you give these keys to? Who should you trust? What are the rules governing when to give out security credentials, and when not to?

In this lecture, we will look at the broad area of secure and trusted systems.

We will focus on access control models. These methods are often used to abstract the requirements for a computer system, but they also hold for general systems where security is a concern (e.g. networks, computers, companies).

Lecture Outline

Some generic discussion about security

Objects that require protection

Insights from the real-world

Access control to memory and generic objects


Discretionary Methods: Directory Lists, Access Control Lists, the Access Control Matrix, and the Take-Grant Model
Failures of DACs: Trojan Horses
Dominance and information flow, multilevel security and lattices
The Bell-LaPadula and Biba Models

What is a trusted system? Trusted Computing Base

System-security vs. Message-security

In the cryptographic formulation of security, we were concerned with the confidentiality, authenticity, integrity, and nonrepudiation of messages being exchanged. This is a message-level view of security.

A system-level view of security has slightly different issues that need to be considered:
Confidentiality: Concealment of information or resources from
those without the right or privilege to observe this information
Integrity: Trustworthiness of data (has an object been changed in
an unauthorized manner?)
Availability: Is the system and its resources available for usage?

Confidentiality in Systems

Much of the motivation behind confidentiality comes from the military's notion of restricting access to information based on clearance levels and need-to-know.

Cryptography supports confidentiality: the scrambling of data makes it incomprehensible. Cryptographic keys control access to the data, but the keys themselves become an object that must be protected.

System-dependent mechanisms can prevent processes from illicitly accessing information.
Example: the owner, group, and public concepts in Unix's r/w/x definition of access control.
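As a concrete illustration, here is a minimal sketch (the file path is a hypothetical example) of inspecting these owner/group/other bits with Python's standard library:

```python
import os
import stat

# Inspect the owner/group/other r/w/x bits of a file (hypothetical path).
st = os.stat("/tmp/example.txt")
mode = st.st_mode

print("owner can read: ", bool(mode & stat.S_IRUSR))
print("owner can write:", bool(mode & stat.S_IWUSR))
print("group can read: ", bool(mode & stat.S_IRGRP))
print("others can read:", bool(mode & stat.S_IROTH))
```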

Resource-hiding: Concealing the configuration of a system (e.g. that it runs a Windows web server) is often a desirable form of confidentiality.

Integrity in Systems

Integrity includes:
Data integrity (is the content unmodified?)
Origin integrity (is the source of the data really what is claimed? aka authentication)

Two classes of integrity mechanisms: Prevention and Detection


Prevention: Seek to block unauthorized attempts to change the data, or
attempts to change data in unauthorized ways
A user should not be able to change data he is not authorized to change
A user with privileges to work with or alter the data should not be
allowed to change data in ways not authorized by the system
The first type is addressed through authentication and access control
The second type is much harder and requires policies

Detection: Seek to report that data's integrity has been violated. Achieved by analyzing the system's events (system logs), or by analyzing data to see if required constraints are violated.

Availability of Systems

Availability is concerned with system reliability.

The security side of the issue: an adversary may try to make a resource or service unavailable.
Implications often take the form: Eve compromises a secondary system and then denies service to the primary system; as a result, all requests to the first system get redirected to the second system.
Hence, when used in concert with other methods, the effects can be very devastating.

Denial of service attacks are an example:


Preventing the server from having the resources needed to perform
its function
Preventing the destination from receiving messages
Denial of service is not necessarily deliberate

Threats

There are several threats that may seek to undermine confidentiality, integrity, and availability:
Disclosure Threats: Causing unauthorized access to information
Deception Threats: Causing acceptance of false data
Disruption Threats: Prevention of correct operation of a service
Usurpation Threats: Unauthorized control of some service

Examples:
Snooping: Unauthorized interception of information (passive Disclosure)
Modification/Alteration: Unauthorized change of information (active
Deception, Disruption, Usurpation)
Masquerading/Spoofing: Impersonation of one entity by another
(Deception and Disruption)
Repudiation of Origin: False denial that an entity sent or created data
(Deception)
Denial of Receipt: A false denial that an entity received some information
or message (Deception)
Delay: A temporary delay in the delivery of a service (Usurpation)
Denial of Service: A long-term inhibition of service (Usurpation)

Overview: Security Policies

Definition: A security policy is a statement of what is allowed and what is not allowed to occur between entities in a system.

Definition: A security mechanism is a method for enforcing a security policy

Policies may be expressed mathematically


Allowed and disallowed states may be specified
Rules may be formulated for which entity is allowed to do which action

These policies may seek to accomplish:


Prevention
Detection
Recovery

This lecture will focus primarily on formal statements of security policies


Specifically, we will focus on policies associated with access control and
information flow

Objects that Need Protection

Modern operating systems follow a multiprogramming model:


Resources on a single computer system (extend this to a generic system)
could be shared and accessed by multiple users
Key technologies: Scheduling, sharing, parallelism
Monitors oversee each process/program's execution

Challenge of the multiprogramming environment: now there are more entities to deal with; it is hard to keep every process/user happy when sharing resources, and even harder if one user is malicious.
Several objects that need protection:
Memory
File or data on an auxiliary storage device
Executing program in memory
Directory of files
Hardware Devices
Data structures and Tables in operating systems
Passwords and user authentication mechanisms
Protection mechanisms

Basic Strategies for Protection

There are a few basic mechanisms at work in the operating system that
provide protection:
Physical separation: processes use different physical objects (different
printers for different levels of users)
Temporal separation: Processes having different security requirements are executed at different times
Logical separation: The operating system constrains a program's accesses so that it can't access objects outside its permitted domain
Cryptographic separation: Processes conceal their data in such a way
that they are unintelligible to outside processes
Share via access limitation: Operating system determines whether a user
can have access to an object
Limit types of use of an object: Operating system determines what
operations a user might perform on an object

When thinking of access to an object, we should consider its granularity: larger objects are easier to control, but sometimes pieces of large objects don't need protection. Maybe break objects into smaller objects (see Landwehr).

Access Control to Memory

Memory access protection is one of the most basic functionalities of a multiprogramming OS. Memory protection is fairly simple because memory access must go through certain points in the hardware:
Fence registers, Base/Bound registers
Tagged architectures: Every word of machine memory has one or
more extra bits to identify access rights to that word (these bits are
set only by privileged OS operations)
Segmentation: Programs and data are broken into segments. The
OS maintains a table of segment names and their true addresses.
The OS may check each request for memory access when it
conducts table lookup.

More general objects may be accessed from a broader variety of entry points, and there may be many levels of privileges: no central authority!
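To make the base/bound-register idea above concrete, here is a minimal sketch (the register values are hypothetical; the real check is performed in hardware on every access):

```python
# Minimal sketch of a base/bounds memory check (hypothetical values).
BASE = 0x4000   # lowest address the process may touch
BOUND = 0x1000  # size of the process's permitted region

def check_access(addr: int) -> bool:
    """Hardware-style check: every access must fall in [BASE, BASE + BOUND)."""
    return BASE <= addr < BASE + BOUND

assert check_access(0x4800)      # inside the region: allowed
assert not check_access(0x9000)  # outside the region: trap to the OS
```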

Insight from Real-world Security Models

Not all information is equally sensitive; some data will have more drastic consequences if leaked than others.
Military sensitivity levels: unclassified, confidential, secret, top secret

Generally, the fewer the people knowing a secret, the easier it is to control dissemination of that information.
Military notion of need-to-know: Classified information should not be
entrusted to an individual unless he has both the clearance level and the
need to know that information
Compartments: Breaking information into specific topical areas
(compartments) and using that as a component in deciding access
Security levels consist of sensitivity levels and the corresponding
compartments
If information is designated to belong to multiple compartments, then the
individual must be cleared for all compartments before he can access the
information.

Real-world Security Models, pg. 2

Documents may be viewed as a collection of sub-objects, some of which are more sensitive than others. Hence, objects may be multilevel in their security context. The classification level of an object or document is usually the classification of the most sensitive information it contains.

Aggregation Problem: Often the combination of two pieces of data creates a new object that is more sensitive than either of the pieces separately.

Sanitization Problem: Documents may have sensitive information removed in an effort to sanitize the document. It is a challenge to determine when enough information has been removed to desensitize a document.

Multilevel Security Models

We want models that represent a range of sensitivities and that separate subjects from the objects they should not have access to.

The military has developed various models for securing information.

We will look at several models for multilevel security:
Object-by-Object Methods: Directory lists, access control lists, the access control matrix, the Take-Grant Model
Lattice model: A generalized model
Bell-LaPadula Model
Biba Model

Access Control to Objects

Some terminology:
Protection system: The component of the system architecture
whose task is to protect and enforce security policies
Object: An entity that is to be protected (e.g. a file, or a process)
Subject: The set of active objects (such as processes and users) that interact with other objects
Rights: The rules and relationships allowed to exist between subjects and objects

Directory-based Access Control (aka Capability List): A list, for each subject, specifying which objects that subject can access (and with what rights)
Access Control List: A list, for each object, specifying which subjects can access it (and how)
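The two views are duals of each other: a capability list is the access control matrix sliced by row, an ACL sliced by column. A minimal sketch (subjects, objects, and rights are hypothetical examples):

```python
# Capability-list view: rights indexed by subject.
capabilities = {
    "alice": {"file_x": {"read", "write"}, "file_y": {"read"}},
    "bob":   {"file_y": {"read"}},
}

# ACL view: the same rights, re-indexed by object.
acls = {}
for subject, objs in capabilities.items():
    for obj, rights in objs.items():
        acls.setdefault(obj, {})[subject] = rights

print(acls["file_y"])  # {'alice': {'read'}, 'bob': {'read'}}
```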

Access Control Matrix

The access control matrix arose in both OS research and database research.

Example:

             | File 1           | File 2    | Process 1                 | Process 2
  Process 1  | Read, Write, Own | Read      | Read, Write, Execute, Own | Write
  Process 2  | Append           | Read, Own | Read                      | Read, Write, Execute, Own
What does it mean for a process to read/write/execute another process? Read is to receive signals from, write is to send signals to, and execute is to run as a subprocess.

Formally, an access control matrix is a table in which each row represents a subject and each column represents an object.
Each entry in the table specifies the set of access rights for that subject to that object
In general, access control matrices are sparse: most subjects do not have access rights to most objects
Every subject is also an object!
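Because the matrix is sparse, it is natural to store only the non-empty entries. A minimal sketch of this representation with a reference-monitor-style check (subjects, objects, and rights are hypothetical examples):

```python
# Sparse access control matrix: only non-empty entries are stored.
acm = {
    ("process_1", "file_1"): {"read", "write", "own"},
    ("process_1", "file_2"): {"read"},
    ("process_2", "file_1"): {"append"},
}

def allowed(subject: str, obj: str, right: str) -> bool:
    """Mediate every access: an absent entry means no rights."""
    return right in acm.get((subject, obj), set())

assert allowed("process_1", "file_1", "write")
assert not allowed("process_2", "file_2", "read")
```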

Access Control Matrix, pg. 2

All accesses to objects by subjects are mediated by an enforcement mechanism that uses the access matrix. This enforcement mechanism is the reference monitor.
Some operations allow for modification of the matrix (e.g. an owner might be allowed to grant permission to another user to read a file)
The owner has complete discretion to change the access rules of an object it owns (discretionary access control)

The access control matrix is a generic way of specifying rules and is not beholden to any specific access rules.
It is therefore very flexible and suitable for a broad variety of scenarios
However, it is difficult to prove assertions about the protection provided by systems following an access control matrix without looking at the specific meanings of subjects, objects, and rules
It is not suitable for specialized requirements, like the military access control model.

Take-Grant Models

Take-Grant Models represent a system using a directed graph.
Nodes in the graph are either subjects or objects.
An arc directed from node A to node B indicates that the subject/object A has some access rights to subject or object B.
Access rights are: read (r), write (w), take (t), grant (g).
Take implies that node A can take node B's access rights to any other node.
Grant implies that node B can be given any access right A possesses.

(Figures: the take and grant operations shown on small three-node graphs with edges labeled r, g and r, w.)

Take-Grant Models, pg. 2

Create Rule: A subject A can create a new graph G1 from an old graph G0 by adding a vertex B and an edge from A to B with rights set X.
Remove Rule: Let A and B be distinct vertices. Suppose there is an edge with rights X. Rights Y may be removed from X to produce X\Y. If X\Y is empty, the edge is deleted.

(Figures: a create operation adding an edge with X = {r,g}, and a delete operation reducing X = {r,g} to X\Y = {r}.)

Take-Grant Models, pg. 3

Since the graph only includes arcs corresponding to non-empty entries in the access control matrix, the model provides a compact representation.

Question of Take-Grant Models: Can an initial protection graph and rules be manipulated to produce a particular access right, e.g. for A to access C with right X?

Example (initially, B has take rights over A and the right X over C):
1. A creates V with {t,g}
2. B takes g to V from A
3. B grants to V the X to C
4. A takes X to C from V
A now holds the right X over C.
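A minimal sketch of these rewriting rules and the four steps above (the node names and the right x are hypothetical; this is not a full take-grant decision procedure):

```python
from collections import defaultdict

rights = defaultdict(set)    # rights[(p, q)] = rights p holds over q
rights[("B", "A")] = {"t"}   # B can take from A
rights[("B", "C")] = {"x"}   # B holds right x over C

def create(a, new, rs):
    """Create rule: a adds a new vertex and gains rights rs over it."""
    rights[(a, new)] |= set(rs)

def take(a, b, c, r):
    """Take rule: a has t over b and b has r over c, so a gains r over c."""
    assert "t" in rights[(a, b)] and r in rights[(b, c)]
    rights[(a, c)].add(r)

def grant(a, b, c, r):
    """Grant rule: a has g over b and r over c, so b gains r over c."""
    assert "g" in rights[(a, b)] and r in rights[(a, c)]
    rights[(b, c)].add(r)

create("A", "V", {"t", "g"})  # 1. A creates V with {t,g}
take("B", "A", "V", "g")      # 2. B takes g-to-V from A
grant("B", "V", "C", "x")     # 3. B grants x-to-C to V
take("A", "V", "C", "x")      # 4. A takes x-to-C from V
assert "x" in rights[("A", "C")]
```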

Problems with Discretionary Access Control

Discretionary access controls are inadequate for enforcing information flow policies: they provide no constraint on copying information from one object to another.

Example: Consider Alice, Bob, and Eve. Alice has a file X that she wants Bob to read, but not Eve.
Alice authorizes Bob via the following access control matrix:

          | File X | File Y
  Alice   | Own    |
  Bob     | Read   | Write
  Eve     |        | Read

Bob can subvert Alice's discretion by copying X into Y: Bob has write privileges for Y, and Eve has read privileges for Y.
This case is a simplistic version of what can be much more pathological: the Trojan Horse.

DAC and Trojan Horses

What if Bob isn't bad? Eve could still read X by convincing Bob to use a program carrying a Trojan Horse (Troy).

Consider the new access control matrix below:
Eve has created Troy and given it to Bob, who has execute privileges
Troy inherits Bob's read privileges to X, and write privileges to a file Y (perhaps public)
Eve has read privileges to file Y
Trojan Horses perform their normal claimed operations, but also participate in subversive activities

             | File X | File Y | Prog. Troy
  Alice      | Own    |        |
  Bob        | Read   |        | Execute
  Eve        |        | Read   | Read, Write, Execute
  Prog. Troy | Read   | Write  |

Solution: Impose Mandatory Access Controls (MAC; yes, another MAC!) that cannot be bypassed.

Dominance and Information Flow

There are two basic ways to look at the notion of security privileges: dominance and information flow.
For all essential purposes they are the same; it's just a matter of semantics.
Let's start with dominance:
Each piece of information is ranked at a particular sensitivity level (e.g. unclassified, confidential, secret, top secret)
The ranks form a hierarchy; information at one level is less sensitive than information at a higher level
Hence, higher-level information dominates lower-level information
Formally, we define a dominance relation ≤ on the set of objects and subjects:

  s ≤ o  if and only if  rank(s) ≤ rank(o) and comp(s) ⊆ comp(o)

We say that o dominates s (or s is dominated by o) if s ≤ o.
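A minimal sketch of this dominance check (the ranks and compartments are hypothetical examples):

```python
# Sensitivity ranks, ordered low to high.
RANKS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(high, low):
    """True if level `high` = (rank, compartments) dominates `low`:
    higher-or-equal rank and a superset of compartments."""
    return RANKS[high[0]] >= RANKS[low[0]] and set(high[1]) >= set(low[1])

subject = ("secret", {"crypto"})
obj = ("confidential", {"crypto"})
assert dominates(subject, obj)   # the subject dominates the object
```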

Dominance and Information Flow, pg. 2

Now let us look at information flow:
Every object is given a security class (or a security label)
Information flowing between objects implies information flowing between the corresponding security classes
We define a can-flow relationship A → B to specify that information is allowed to flow from entities in security class A to entities in security class B
We also define a class-combining operator A ⊕ B = C to specify that objects that contain information from security classes A and B should be labeled with security class C
Implicitly, there is also the notion of cannot-flow: A ↛ B

Lattice Model of Access Security

The dominance or can-flow relationship defines a partial ordering by which we may specify a lattice (with Denning's axioms).
First, the dominance relationship is transitive and antisymmetric:
Transitive: If a ≤ b and b ≤ c, then a ≤ c
Antisymmetric: If a ≤ b and b ≤ a, then a = b

A lattice is a set of elements organized by a partial ordering that satisfies the least upper bound (supremum) and greatest lower bound (infimum) properties.
Supremum: Every pair of elements possesses a least upper bound
Infimum: Every pair of elements possesses a greatest lower bound
In addition to the supremum and infimum between two objects, we need the entire set of security classes to have a supremum and infimum (i.e. a single low point and a single high point)
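On the subset lattice of compartments (compartment names here are hypothetical), join is set union and meet is set intersection. A minimal sketch:

```python
def join(a: frozenset, b: frozenset) -> frozenset:
    """Least upper bound: the smallest class dominating both a and b."""
    return a | b

def meet(a: frozenset, b: frozenset) -> frozenset:
    """Greatest lower bound: the largest class dominated by both a and b."""
    return a & b

x = frozenset({"A", "B"})
y = frozenset({"B", "C"})
assert join(x, y) == frozenset({"A", "B", "C"})  # the high point of this lattice
assert meet(x, y) == frozenset({"B"})
```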

Examples of Information Flow and Lattices

High-Low Policy: Two security classes (high H and low L).

Bounded Isolated Classes: A set of classes A1, ..., An. Between any two security classes, define the composition Aj ⊕ Ak = H. Every class has the low class L as its infimum.

Subset Lattice: Categories A, B, C may be combined to form compartments. The list of all subsets forms a lattice:

            {A,B,C}
     {A,B}   {A,C}   {B,C}
      {A}     {B}     {C}
              {}

Mandatory Access Control (MAC) Models

Mandatory Access Control (MAC): When a system mechanism controls access to an object and an individual user cannot alter that access, the control is mandatory access control.

MAC typically requires a central authority:
E.g. the operating system enforces the control by checking information associated with both the subject and the object to determine whether the subject should access the object

MAC is suitable for military scenarios:


An individual data owner does not decide who has top-secret clearance.
The data owner cannot change the classification of an object from top
secret to a lower level.
On military systems, the reference monitor must enforce that objects
from one security level cannot be copied into objects of another level, or
into a different compartment!

Example MAC model: Bell-LaPadula

Bell-LaPadula Model

The Bell-LaPadula model describes the allowable flows of information in a secure system, and is a formalization of the military security policy.
One motivation: Allow for concurrent computation on data at two different
security levels
One machine should be able to be used for top-secret and confidential
data at the same time
Programs processing top-secret data would be prevented from leaking
top-secret data to confidential data, and confidential users would be
prevented from accessing top-secret data.
The key idea in BLP is to augment DAC with MAC to enforce information
flow policies
In addition to an access control matrix, BLP also includes the military
security levels
Each subject has a clearance, and each object has a classification
Authorization in the DAC is not sufficient; a subject must also be authorized in the MAC

Bell-LaPadula Model, pg. 2

Formally, BLP involves a set of subjects S and a set of objects O. Each subject s and object o has a fixed security class, l(s) and l(o).
Tranquility Principle: Subjects and objects cannot change their security levels once they have been instantiated.

There are two principles that characterize the secure flow of information:

1. Simple-Security Property: A subject s may have read access to an object o if and only if l(o) ≤ l(s).
2. *-Property: A subject s can write to an object o if and only if l(s) ≤ l(o).

Read access implies a flow from object to subject: l(o) → l(s).
Write access implies a flow from subject to object: l(s) → l(o).
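A minimal sketch of these two mandatory checks (the level names are the usual military ones; everything else is a hypothetical example):

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple-security property ("no read up"): allowed iff l(o) <= l(s)."""
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """*-property ("no write down"): allowed iff l(s) <= l(o)."""
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("secret", "confidential")       # read down: OK
assert not can_read("confidential", "secret")   # read up: denied
assert can_write("confidential", "secret")      # write up: OK
assert not can_write("secret", "confidential")  # write down: denied
```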

Bell-LaPadula Model, pg. 3

The *-property is not applied to users:
Humans are trusted not to leak information
Programs are assumed untrustworthy: they could be Trojan Horses

The *-property prohibits a program running at the secret level from writing to unclassified documents.
Sometimes the *-property is modified to require l(s) = l(o) in order to prevent write-up problems.

(Figure: a subject S between the low and high security levels, reading down from objects O1 and O2 and writing up to object O3.)

BLP and Trojan Horses

Return to the Trojan Horse problem:
Alice and Bob are secret-level users; Eve is an unclassified user
Alice and Bob can have both secret and unclassified subjects (programs); Eve can only have unclassified subjects
Alice creates secret file X
Simple security prevents Eve from reading X directly
Bob can run either a secret (S-Troy) or an unclassified (U-Troy) Trojan-Horse-carrying program
S-Troy: Bob running S-Troy will create Y, which will be a secret file. Eve's unclassified subjects will not be able to read Y.
U-Troy: Bob running U-Troy won't be able to read X, and so won't be able to copy it into Y.

Thus BLP prevents flow between security classes

One problem remains: Covert Channels, but that's for another lecture

From BLP to Biba

BLP was concerned with confidentiality: keeping data inaccessible to those without proper access privileges.

The Biba model is the integrity counterpart to BLP


Low-integrity information should not be allowed to flow to high-integrity
objects
High-integrity is placed at the top of the lattice and low integrity at the
bottom. Information flows from top to bottom (opposite direction of BLP).

Biba's model principles (writing i(·) for an entity's integrity level):

1. Simple-Integrity Property: Subject s can read object o iff i(s) ≤ i(o)
2. Integrity *-Property: Subject s can write to object o only if i(o) ≤ i(s)

In this sense, Biba is the dual of BLP, and there is very little difference between Biba and BLP: both are concerned with information flow in a lattice of security classes.
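A minimal sketch of the dual checks (the integrity level names are hypothetical examples):

```python
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple-integrity property ("no read down"): allowed iff i(s) <= i(o)."""
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Integrity *-property ("no write up"): allowed iff i(o) <= i(s)."""
    return INTEGRITY[object_level] <= INTEGRITY[subject_level]

assert can_read("low", "high")       # reading higher-integrity data: OK
assert not can_write("low", "high")  # contaminating higher-integrity data: denied
```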

Trusted (Operating) System Design

Operating systems control the interaction between subjects and objects, and mechanisms to enforce this control should be planned for at the design phase of the system.
Some design principles:


Least Privilege: Each user and program should operate with the
fewest privileges possible (minimizes damage from inadvertent or
malicious misuse)
Open Design: The protection mechanisms should be publicly
known so as to provide public scrutiny
Multiple Levels of Protection: Access to objects should depend on
more than one condition (e.g. password and token)
Minimize Shared Resources: Shared resources provide (covert)
means for information flow.

Trusted (Operating) System Design, pg. 2

Unlike a typical OS, a trusted OS protects each object with an access control mechanism:
Users must pass through an access control layer to use the OS
Another access control layer separates the OS from the program libraries it uses

A trusted OS includes:
User identification and authentication
MAC and DAC
Object reuse protection: When subjects finish using objects, the resources
may be released for use by other subjects. Must be careful! Sanitize the
object!
Audit mechanisms: Maintain a log of events that have transpired. Efficient
use of audit resources is a major problem!
Intrusion detection: Detection mechanisms that allow for the identification
of security violations or infiltrations

Trusted Computing Base (TCB): Everything in the trusted operating system that enforces a security policy.
