
On Information Security Paradigms

By Vicente Aceituno Canal, vaceituno@telefonica.net


An International Article Brought to You from Spain

Introduction
It is very difficult to define security, and there are many reasons why.

Information systems are very complex; they have structural and dynamic aspects. Generally speaking, information systems are structured as information repositories and interfaces, connected by channels (physical and logical). Interfaces connect information systems to one another, allow interaction with users, and facilitate input/output of information. Repositories hold information temporarily or permanently. Information systems are dynamic, producing results and exchanging messages through channels.

Information systems process data, but data is not information. The same information can be rendered as binary data using different formats and different ratios of data to information. The importance of a single bit of data depends on how much information it represents.

Security is not a presence, but an absence. Only when there haven't been any incidents can we say that we have been safe.

Security depends on the context. An unprotected computer connected directly to the Internet in 1990 wasn't as safe as it would be connected to a company's network in 2005, or totally isolated. We can be safe when there are no threats, even if we don't protect ourselves.

Security costs money. We must consider the cost of protection, as there is a clear limit on how much we can spend protecting an information system; that limit depends both on how much the system is worth to us and on the available budget.

Finally, security depends on our expectations. The higher the expectations, the more difficult they will be to meet. A writer who stores everything he has written in his life in a computer and someone who has just bought a computer will have totally different expectations. The writer's expectations will be harder to meet: he might expect his hard drive to last forever, so a crash can mean catastrophe, while the hard drive of the recently bought computer can be replaced with little hassle.

A good security definition should assist in the processes related to protecting an information system, for example:
1. Find what threats are relevant to me.
2. Weigh the threats and measure the risk.
3. Select security measures we can afford that reduce the risk to an acceptable level at the lowest cost.

(A short sketch of this three-step process appears at the end of this section.)

Unfortunately, current definitions are not up to this task, and worse still, they are not helpful for advancing information security knowledge. Ideally, a security definition should comply with the scientific method, as it is the best tool for the advancement of empiric knowledge. Scientific theories are considered successful if they:
- Survive every falsification experiment tried.
- Explain an ample spectrum of phenomena, becoming widely usable.
- Facilitate the advance of knowledge.
- Have predictive power.
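To make the three-step process above concrete, here is a minimal sketch in Python of how threats might be weighed and controls selected under a budget. Every name, figure, and the annualized-risk formula are illustrative assumptions of mine, not data or methods from this article:

```python
# Illustrative sketch of the three-step process: find relevant threats,
# weigh them as annualized risk, and pick affordable, cost-effective
# controls. All figures and names are made-up examples, not statistics.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    yearly_frequency: float   # step 1/2: how often it occurs per year
    loss_per_incident: float  # expected loss (euros) each time it occurs

    @property
    def annual_risk(self) -> float:
        return self.yearly_frequency * self.loss_per_incident

@dataclass
class Control:
    name: str
    yearly_cost: float
    risk_reduction: float  # fraction of the threat's annual risk removed
    mitigates: str         # name of the threat it applies to

threats = [
    Threat("hard drive crash", yearly_frequency=0.3, loss_per_incident=2000),
    Threat("malware infection", yearly_frequency=2.0, loss_per_incident=400),
]

controls = [
    Control("nightly backups", yearly_cost=150, risk_reduction=0.9,
            mitigates="hard drive crash"),
    Control("antivirus", yearly_cost=300, risk_reduction=0.7,
            mitigates="malware infection"),
    Control("hot standby server", yearly_cost=5000, risk_reduction=0.95,
            mitigates="hard drive crash"),
]

budget = 1000.0
risk_by_threat = {t.name: t.annual_risk for t in threats}

# Step 3: keep only controls that remove more risk than they cost,
# then select greedily by net benefit until the budget runs out.
worthwhile = [c for c in controls
              if c.risk_reduction * risk_by_threat[c.mitigates] > c.yearly_cost]
worthwhile.sort(key=lambda c: c.risk_reduction * risk_by_threat[c.mitigates]
                              - c.yearly_cost, reverse=True)

selected = []
for c in worthwhile:
    if c.yearly_cost <= budget:
        selected.append(c.name)
        budget -= c.yearly_cost

print("Selected controls:", selected)
```

The greedy selection is deliberately naive; the point is only that the three steps become computable once threats, frequencies, and costs are quantified.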

Four Approaches to Defining Security


Let's have a look at the four main approaches to defining security:
1. The set of security measures.
2. To keep a state.
3. To stay in control.
4. CIA and derivatives.

The first approach is easy to debunk. If security were the set of security measures, a bicycle with a lock would be just as safe in the countryside of England as in Mogadishu, but it is not. It is interesting that Bruce Schneier has been so often misquoted. "Security is a process, not a product" doesn't mean that security is impossible to achieve, a point of view favored by those who think that being secure is the same as being invulnerable. Reading the quote in context, what he means is that security is not something you can buy; it's not a product. Security is NOT the set of security measures we use to protect something.

The second approach states that security is a state of invulnerability, or the state that results from protection. Examples of proponents of this approach are:

Gene Spafford: "The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards - and even then I have my doubts."

RFC 2828 Internet Security Glossary: "Measures taken to protect a system. The condition of a system that results from the establishment and maintenance of measures to protect the system. The condition of system resources being free from unauthorized access and from unauthorized or accidental change, destruction, or loss."

The approach that equates security with invulnerability is purely academic and can't be applied to real systems, because it neglects to consider that security costs money. Invulnerability leads to protection from highly unlikely threats, at a high cost. It is related to very uncommon expectations, and it focuses on attacks, neglecting protection from errors and accidents.


The third approach, "stay in control," is akin to keeping Confidentiality, defined as the ability to grant access to authorized users and deny access to unauthorized users, and can be considered a subset of the CIA paradigm. This approach states that security is to stay in control, or protecting information from attacks. Examples of proponents of this approach are:

William R. Cheswick: "Broadly speaking, security is keeping anyone from doing things you do not want them to do to, with, or from your computers or any peripherals."

INFOSEC Glossary 2000: "Protection of information systems against access to or modification of information, whether in storage, processing or transit, and against the denial of service to authorized users, including those measures necessary to detect, document, and counter such threats."

Common Criteria for Information Technology Security Evaluation, Part 1: "Security is concerned with the protection of assets from threats, [...] in the domain of security greater attention is given to those threats that are related to malicious or other human activities."

Some access mechanisms used to achieve Confidentiality are often taken as part of security definitions (see the sketch after Figure 1):
- Identification: the ability to identify a user of an information system at the moment he is granted credentials to that system.
- Authentication: the ability to validate the credentials presented to an information system at the moment the system is used.
- Authorization: the ability to control what services can be used and what information can be accessed by an authenticated user.
- Audit: the ability to know what services have been used by an authorized user and what information has been accessed, created, modified or erased, including details such as when, where from, etc.
- Non-repudiation: the ability to assert the authorship of a message or information authored by a second party, preventing the author from denying his own authorship.

This has led to different mixes of CIA and these security mechanisms (ACIDA, CAIN, etc.). As these definitions mix the definition of security with the protection mechanisms used to achieve it, I won't bother debunking them any further.

CIA is the fourth approach to defining security, and the most popular: keeping Confidentiality, Integrity and Availability, defined as:
- Confidentiality, already defined, sometimes mistaken for secrecy.
- Integrity: the ability to guarantee that some information or message hasn't been manipulated.
- Availability: the ability to access information or use services at any moment we demand, with appropriate performance.

Examples of proponents of this approach are:

ISO 17799: "Preservation of confidentiality, integrity and availability of information. Confidentiality: ensuring that information is accessible only to those authorized to have access. Integrity: safeguarding the accuracy and completeness of information and processing methods. Availability: ensuring that authorized users have access to information and associated assets when required."

INFOSEC Glossary 2000: "Measures and controls that ensure confidentiality, integrity, and availability of information system assets including hardware, software, firmware, and information being processed, stored, and communicated."

Figure 1: Table of equivalence showing CIA and an Operational Definition of Security

Operational Definition (one of many possible) | CIA Equivalent
Comply with existing legal regulations. | No
Control the access to secrets. | Confidentiality
Control the access to private information. | Confidentiality
Control the access to copyrighted information. | Confidentiality
Identify the authors of information or messages. | Confidentiality
Record the use of services. | Confidentiality
Hold the users responsible for their use of services and acceptance of contracts and agreements. | No
Control the physical ownership of information and information systems. | Confidentiality
Control the existence of information and services. | Integrity
Control the destruction of information and services. | No
Control the availability of information and services. | Availability
Control the reliability and performance of services. | Availability
Control the precision of information. | No
Reflect the real time and date in all their records. | No
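As a concrete illustration of the five access mechanisms listed above, here is a minimal sketch in Python. The class name, method signatures, and in-memory storage are illustrative assumptions of mine, not an API from any standard cited here:

```python
# Illustrative sketch of identification, authentication, authorization,
# audit, and non-repudiation. Names and storage are made up.

import hashlib
import hmac
import time

class AccessControl:
    def __init__(self):
        self._credentials = {}   # user -> password hash
        self._permissions = {}   # user -> set of allowed services
        self.audit_log = []      # who did what, and when

    def identify(self, user: str, password: str, services: set) -> None:
        """Identification: grant credentials to a known user."""
        self._credentials[user] = hashlib.sha256(password.encode()).hexdigest()
        self._permissions[user] = services

    def authenticate(self, user: str, password: str) -> bool:
        """Authentication: validate the credentials presented."""
        digest = hashlib.sha256(password.encode()).hexdigest()
        return hmac.compare_digest(self._credentials.get(user, ""), digest)

    def authorize(self, user: str, service: str) -> bool:
        """Authorization: control what an authenticated user may use."""
        allowed = service in self._permissions.get(user, set())
        # Audit: record every access decision with a timestamp.
        self.audit_log.append((time.time(), user, service, allowed))
        return allowed

# Non-repudiation would normally rely on digital signatures, so that an
# author cannot later deny a message; a shared-secret MAC is shown here
# only as a stand-in, since with a MAC both parties hold the key.
def sign(secret: bytes, message: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()
```

A real deployment would add salted password hashing and persistent storage; the sketch only shows how the five mechanisms relate to one another.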

The CIA Paradigm and the Operational Definition


This popular paradigm classifies incidents and threats by effects, not causes, and is therefore not falsifiable. Professionals who don't question the CIA paradigm classify the loss of synchronization as an integrity problem (time information has been changed), while it's clear that only stateful information, like a file or a database, can have the property of integrity. It is impossible to think of an experiment that shows an incident or a threat not to belong to one of the confidentiality, integrity or availability categories; therefore the CIA paradigm is unscientific.

There are several examples of incidents that are not well treated using CIA, but appear to fit within the paradigm:
- Uncontrolled permanence of information can lead to Confidentiality loss.
- Copy of information in violation of authorship rights can lead to Confidentiality loss, as someone who is not authorized is getting access.
- Copy of information in violation of privacy rights can lead to Confidentiality loss, as someone who is not authorized is getting access.

Now, what are these CIA classifications good for? It's very clear that to prevent confidentiality incidents, our controls will be very different depending on whether we want to limit access, prevent breaches of authorship rights, or guarantee information erasure.


So, why are we classifying at all, if the classification doesn't help with something as simple as selecting a security measure? Some other examples of incidents that don't fit CIA are operator errors and fraud. To neutralize a threat, a control that regulates the causes of the threat will normally be needed; therefore, for control selection, it would be far more useful to classify by causes than by effects, which is exactly what CIA doesn't do.

CIA doesn't consider the context at all. This is why small and medium-sized organizations are intimidated by the exigency of Confidentiality, Integrity and Availability, and give up on devoting enough resources to security. Only big organizations aim for Confidentiality, Integrity and Availability.

CIA doesn't consider our expectations about our information systems. You can't demand confidentiality of public information. You can't demand integrity of low-durability information; it is too easy to reproduce. And you can't demand availability of low-priority services.

Many practitioners who use the CIA definition have a stance of "we want to prevent attacks from succeeding." In other words, for them being safe is equivalent to being invulnerable. The definition of an incident under this light is totally independent of the context and considers attacks only, neglecting accidents and errors as incidents. Disaster recovery plans show that the need to protect a company from catastrophes is well known, yet many accidents are treated as a reliability issue rather than a security issue, because accidents are not considered a security problem.

So, if no current information security definition or paradigm is satisfactory, what can replace it? An interesting alternative is the use of an operational definition. For example, a meter is defined operationally as the distance travelled by a beam of light in a certain span of time. An example of the need for operational definitions is the collapse of the West Gate Bridge in Melbourne, Australia in 1970, which killed 35 construction workers. The subsequent enquiry found that the failure arose because engineers had specified the supply of a quantity of "flat" steel plate. The word "flat" in this context lacked an operational definition, so there was no test for accepting or rejecting a particular shipment or for controlling quality.

Before detailing the operational definition, some words about probability. Probability has predictive power under the following considerations:
- As long as the systems and the environmental conditions don't change, the future is similar to the past.
- You can apply probability to a set of phenomena, not to an individual phenomenon.
- A sufficiently large set of historic cases must be available for significant probability calculations.

Probability is often misunderstood. If you toss a coin nine times and get nine tails, the probability of getting tails the tenth time is still one half, not lower as intuition suggests. Quite the opposite: the more tails we get, the higher should be our confidence that the next toss will come up tails too, because a long run of tails is evidence that the coin may be biased (a short simulation of this appears at the end of this section).

An operational definition for information security is: "The absence of threats that can affect our expectations about information systems equivalently protected in equivalent environments." This operational definition is not only falsifiable; it is expectations-dependent and deals cleanly with the definitional difficulties of context. It is helpful to determine what threats are relevant, to weigh the threats, to measure the risk, and to select security measures.
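Here is the promised simulation of the coin example. This is a minimal sketch assuming a standard Beta-Bernoulli update with a uniform prior, which is my illustration rather than anything prescribed by the article:

```python
# Sketch of the coin example: a coin known to be fair gives 0.5 on every
# toss, yet when the bias is unknown, each observed tails should raise
# our estimate that the coin favors tails. Beta(1, 1) prior assumed.

def posterior_mean_tails(tails_seen: int, heads_seen: int) -> float:
    """Expected probability of tails under a uniform Beta(1, 1) prior."""
    return (1 + tails_seen) / (2 + tails_seen + heads_seen)

# For a coin known to be fair, nine tails in a row change nothing:
print("Fair coin, 10th toss:", 0.5)

# But if the coin's bias is unknown, each observed tails shifts our estimate:
for n in range(10):
    print(f"after {n} tails, 0 heads -> P(tails) ~= {posterior_mean_tails(n, 0):.2f}")
```

Run as-is, the estimate climbs from 0.50 with no data to about 0.91 after nine straight tails, which is exactly the shift in confidence described above.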

The following definitions of incident and threat follow from the operational definition:

Incident: any failure to meet our expectations about an information system. This definition makes our expectations the pivotal point of what should be protected.

Threat: any historical cause of at least one incident. This implies that the probability is not zero, and brings in the context. The threats relevant to an information system will be the causes of historic incidents in information systems protected equivalently in equivalent environments.

Insecurity can be measured by the cost of historic incidents over a span of time for every information system equivalently protected in an equivalent environment (a sketch of this measurement closes this section).

Many companies have these general expectations about their information systems and the way they are used:
1. Comply with existing legal regulations.
2. Control the access to secrets and to information or services protected by law, like private information and copyrights.
3. Identify the authors of information or messages and record their use of services.
4. Hold the users responsible for their use of services and acceptance of contracts and agreements.
5. Control the physical ownership of information and information systems.
6. Control the existence and destruction of information and services.
7. Control the availability of information and services.
8. Control the reliability and performance of services.
9. Control the precision of information.
10. Reflect the real time and date in all their records.

Every organization will have a different set of expectations, which leads to different sets of incidents to protect from and different sets of threats to worry about, depending on the environment. The more specific the expectations, the easier it becomes to determine the threats and the security measures.

To determine how relevant the threats are, it is necessary to gather historical data on incidents in equivalent systems in equivalent environments. Unfortunately, whereas the insurance industry has been doing this for years, information security practitioners lack this statistical information. It is possible to know the likelihood and cause of having a car accident, but there is not enough data to know how likely you are to suffer an information security incident, nor its cause. Quantitative risk measurement without proper historical data is useless. Some practitioners even mix guessed figures with complex formulae, which is equivalent to mixing magic and physics.

Even if there is no accurate data about risk, it is possible to follow a risk assessment process similar to OCTAVE to identify the expectations about the information systems and the significant threats that can prevent those expectations from being met. With the operational definition, every identified threat can be controlled using suitable security measures. If quantitative risk information is available, the most cost-efficient security measures can be selected.

The operational definition of an incident helps to focus on whatever is relevant to our context: if there is no expectation of secrecy, no matter what is revealed, there is no incident. The operational definition of a threat helps to focus on threats that are both relevant and likely: it doesn't make much sense to consider meteors a threat if no information system has ever been destroyed by a meteor.
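As a sketch of how insecurity could be measured from incident records, here is an illustrative Python example. The record layout and the grouping fields are assumptions of mine, not an established format:

```python
# Illustrative sketch: measuring insecurity as the cost of historic
# incidents over a span of time, grouped by systems that are
# "equivalently protected in equivalent environments".

from collections import defaultdict

# (year, protection_profile, environment, cause, cost_in_euros)
incidents = [
    (2003, "baseline", "office LAN", "operator error", 700),
    (2004, "baseline", "office LAN", "hard drive crash", 1200),
    (2004, "hardened", "internet-facing", "worm", 300),
    (2005, "baseline", "office LAN", "operator error", 500),
]

def insecurity(records, profile, environment, years):
    """Average yearly incident cost for one protection/environment class."""
    total = sum(cost for (year, p, e, cause, cost) in records
                if p == profile and e == environment)
    return total / years

# Threats, under the operational definition, are the historic causes:
threat_cost = defaultdict(float)
for year, p, e, cause, cost in incidents:
    if p == "baseline" and e == "office LAN":
        threat_cost[cause] += cost

print("Insecurity (euros/year):",
      insecurity(incidents, "baseline", "office LAN", years=3))
print("Threats ranked by historic cost:",
      sorted(threat_cost.items(), key=lambda kv: kv[1], reverse=True))
```

The grouping by protection profile and environment mirrors the "equivalently protected in equivalent environments" condition of the operational definition.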


Measuring insecurity by the cost of incidents helps to gauge how much to invest in information security. If our expenses protecting information systems for the last five years were 10,000 euros a year, and our losses were 500 euros a year, it probably doesn't make sense to raise the budget to 20,000 euros, but to 10,500 at most. Of course this is a gross estimate, but it gives us an idea of what could be achieved if statistics on the cost of incidents and their causes were available.

Resources
- http://www.ee.oulu.fi/research/ouspg/sage/glossary/index.html#h-ref4
- Google define: (integrity, confidentiality, etc.)
- RFC 2828, Internet Security Glossary: http://www.faqs.org/rfcs/rfc2828.html
- Common Criteria for IT Security Evaluation: http://www.commoncriteriaportal.org/
- Pseudoscience: http://www.chem1.com/acad/sci/pseudosci.html

Conclusion
The operational definition is richer than the other paradigms. It addresses expectations, context and cost, and makes it far easier to determine what security measures to take to protect the expectations placed on an information system. The adoption of a falsifiable definition should enable some progress in information security theory, which has been stagnant for many years.

Vicente Aceituno Canal has 12 years' experience in IT and security consulting. He leads the F.I.S.T information security conferences in Spain (www.fistconference.org), authored ISM3, the Information Security Management Maturity Model (www.isecom.org/ism3), published his first book, Information Security (ISBN: 84-933336-7-0), last year, and maintains a Web site on personal computer security (www.seguridaddelainformacion.com).
