

Unit 2: Computer Security

Structure:
2.1 Hardening (Computing)
2.1.1 Operating System
2.1.2 Application Code
2.2 File System Security
2.2.1 File
2.2.2 File Structure
2.2.3 File Type
2.2.4 File Access Mechanisms
2.2.5 Operating System Variations for File System Security
2.3 Local Security Policy in Windows
2.3.1 Opening Local Security Policy Console in Windows
2.3.2 Defining a Password Policy in Windows
2.3.3 Defining an Account Lockout Policy
2.3.4 Defining an Audit Policy
2.3.5 Setting Basic Security Options
2.3.6 Applying Changed Settings in Local Security Policy
2.4 Default Accounts
2.4.1 Removing Unnecessary Default User Accounts
2.5 Network Activity
2.6 Malicious Code
2.6.1 Malicious Code in Java
2.6.2 Malicious Code Threatens Enterprise Security
2.6.3 How to Avoid Malicious Code?
2.6.4 Test for Malicious Code with Veracode
2.7 Firewall
2.7.1 Introduction
2.7.2 Firewall Logic
2.7.3 Firewall Rules
2.7.4 Types of Firewall
2.7.5 Understanding Packet-filtering Firewalls
2.7.6 Understanding Application/Proxy Firewalls
2.7.7 Understanding Reverse Proxy Firewalls
2.8 Fault Tolerant System
2.8.1 Faults
2.8.2 Approaches to Faults
2.8.3 Achieving Fault Tolerance
2.8.4 Levels of Availability
2.8.5 Active Replication
2.8.6 Primary Backup (Active Standby) Approach


2.8.7 Agreement in Faulty Systems


2.8.8 Examples of Fault Tolerance
2.9 Backup
2.9.1 Storage: The Base of a Backup System
2.9.2 Managing the Data Repository
2.9.3 Selection and Extraction of Data
2.9.4 Files
2.9.5 File Systems
2.9.6 Live Data
2.9.7 Limitations
2.10 Uninterruptible Power Supply (UPS)
2.10.1 Common Power Problems
2.10.2 Technologies
2.10.3 Online/Double-conversion UPS
2.10.4 Other Designs
2.11 Summary
2.12 Check Your Progress
2.13 Questions and Exercises
2.14 Key Terms
2.15 Check Your Progress: Answers
2.16 Case Study
2.17 Further Readings

Objectives
After studying this unit, you should be able to understand:
Ɣ Local security policies
Ɣ Firewall
Ɣ File System Security
Ɣ Backup and UPS
Ɣ Default Accounts
Ɣ A case study based on this unit
Computer Security is the protection of computing systems and the data that they
store or access.

Why is Computer Security Important?


Computer Security allows an organization to carry out its mission by:
Ɣ Enabling people to carry out their jobs, education, and research
Ɣ Supporting critical business processes
Ɣ Protecting personal and sensitive information
Good Security Standards follow the “90/10” Rule:
Ɣ 10% of security safeguards are technical.
Ɣ 90% of security safeguards rely on the computer user (“YOU”) to adhere to
good computing practices.

Example: The lock on the door is the 10%. Remembering to lock it, checking to see
that the door is closed, ensuring others do not prop the door open, keeping control of the
keys, and so on is the 90%. You need both parts for effective security.

2.1 Hardening (Computing)


In computing, hardening is usually the process of securing a system by reducing its
surface of vulnerability. A system has a larger vulnerability surface the more functions it
fulfills; in principle, a single-function system is more secure than a multipurpose one.
Reducing available vectors of attack typically includes removing unnecessary software,
unnecessary usernames or logins, and disabling or removing unnecessary services; in
short, making a user's computer more secure. Hardening also ensures that the latest
patches to operating systems, web browsers and other vulnerable applications are
automatically applied. It may also include disabling file sharing and establishing login
passwords.
Bullet-proof network operating systems do not exist, but there are some
common-sense steps that IT managers can take to make the NOS a less attractive target
for mischief-makers (and worse); a small audit sketch follows the checklist below.
Ɣ Identify and remove unused applications and services. The fewer components
intruders can get their hands on, the better off your networks will be.
Ɣ Implement and enforce strong password policies. Remove or disable all
unnecessary accounts. This includes immediately removing accounts when
workers leave the company.
Ɣ Limit the number of administrator accounts available, and make sure users and
IT staff have only the privileges they need to do their jobs.
Ɣ Set account lockout policies to discourage password cracking.
Ɣ Remove unused file shares.
Ɣ Keep an eye out for new security patches and hot fixes.
Ɣ Log all user account and administrative task transactions. This is an extremely
important step for forensics if your network OS does get hacked.
Ɣ Beware of “social engineering” tactics. Make sure that no one gives out
important security information such as administrator passwords without getting
approval from managers.
Ɣ Keep a secure backup solution handy to restore all systems in case of emergency.
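As a minimal illustration of the first two checklist items, the following Python sketch
compares the accounts and services found on a system against an approved baseline and
reports anything unexpected. The baseline names and the placeholder input lists are
hypothetical; a real audit would query the operating system directly.

APPROVED_ACCOUNTS = {"administrator", "backup_svc", "jsmith"}   # assumed baseline
APPROVED_SERVICES = {"dns", "dhcp", "sshd"}                     # assumed baseline

def audit(accounts, services):
    # Report anything present on the system but not on the approved lists.
    findings = []
    for account in accounts:
        if account.lower() not in APPROVED_ACCOUNTS:
            findings.append("Unexpected account: " + account)
    for service in services:
        if service.lower() not in APPROVED_SERVICES:
            findings.append("Unexpected service: " + service)
    return findings

# Placeholder data standing in for real OS queries.
for finding in audit(["Administrator", "jsmith", "olduser"], ["sshd", "telnetd"]):
    print(finding)

Run against real inventory data, a report line such as "Unexpected account: olduser" is
the cue to disable or remove that account, per the checklist above.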

2.1.1 Operating System


An operating system (OS) is a collection of software that manages computer
hardware resources and provides common services for computer programs. The
operating system is a vital component of the system software in a computer system.
In technical terms, it is software that manages hardware. An operating system
controls the allocation of resources and services such as memory, processors, devices
and information.

Definition
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.

[Figure: layered view of a computer system. Users (User 1, User 2, …, User n) interact
with system software and application software; all software runs on the operating system,
which manages the hardware (CPU, RAM, I/O).]

Following are some of the important functions of an operating system:


Ɣ Memory Management
Ɣ Processor Management
Ɣ Device Management
Ɣ File Management
Ɣ Security
Ɣ Control over System Performance
Ɣ Job Accounting
Ɣ Error Detecting Aids
Ɣ Coordination between Other Software and Users

Memory Management
Memory management refers to management of Primary Memory or Main Memory.
Main memory is a large array of words or bytes where each word or byte has its own
address.
Main memory provides a fast storage that can be accessed directly by the CPU. So,
for a program to be executed, it must be in the main memory. Operating System does the
following activities for memory management.
Ɣ Keeps track of primary memory, i.e., which parts of it are in use, by whom, and
which parts are not in use.
Ɣ In multiprogramming, OS decides which process will get memory when and
how much.
Ɣ Allocates the memory when the process requests it to do so.
Ɣ De-allocates the memory when the process no longer needs it or has been
terminated.

Processor Management
In multiprogramming environment, OS decides which process gets the processor
when and in how much time. This function is called process scheduling. Operating
System does the following activities for processor management.
Ɣ Keeps track of the processor and the status of each process. The program
responsible for this task is known as the traffic controller.
Ɣ Allocates the processor (CPU) to a process.
Ɣ De-allocates processor when processor is no longer required.

Device Management
OS manages device communication via their respective drivers. Operating System
does the following activities for device management.
Ɣ Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
Ɣ Decides which process gets the device when and for how much time.
Ɣ Allocates devices in an efficient way.
Ɣ De-allocates devices.

File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Operating System does the
following activities for file management.
Ɣ Keeps track of information, location, usage, status, etc. These collective
facilities are often known as the file system.
Ɣ Decides who gets the resources.
Ɣ Allocates the resources.
Ɣ De-allocates the resources.

Other Important Activities


Following are some of the important activities that an operating system performs.
Ɣ Security: By means of password and similar other techniques, preventing
unauthorized access to programs and data.
Ɣ Control over system performance: Recording delays between request for a
service and response from the system.
Ɣ Job accounting: Keeping track of time and resources used by various jobs
and users.
Ɣ Error detecting aids: Production of dumps, traces, error messages and other
debugging and error detecting aids.
Ɣ Coordination between other software and users: Coordination and
assignment of compilers, interpreters, assemblers and other software to the
various users of the computer systems.

2.1.2 Application Code


Application code is code written specifically for an application, created in a language
such as Java. (The same code could equally have been written in C# or C++.) However,
the term "application code" is mainly used here to describe something which has been
inserted into something else, possibly without being recognized as such. In security
terminology, we examine the possibility of application code being inserted into an image


or an audio file. Macros in documents can also contain application code. Although
we may hope that, for the most part, application code has been written without a malicious
purpose, we cannot rule out this usage, and therefore we must constantly protect our
applications from malicious code.

2.2 File System Security

2.2.1 File
A file is a named collection of related information that is recorded on secondary
storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a
sequence of bits, bytes, lines or records whose meaning is defined by the file's creator
and user.

2.2.2 File Structure


File structure is the organization of a file according to a required format that the
operating system can understand.
Ɣ A file has a certain defined structure according to its type.
Ɣ A text file is a sequence of characters organized into lines.
Ɣ A source file is a sequence of procedures and functions.
Ɣ An object file is a sequence of bytes organized into blocks that are understandable
by the machine.
Ɣ When an operating system defines different file structures, it must also contain
the code to support these file structures. Unix and MS-DOS support a minimal
number of file structures.

2.2.3 File Type


File type refers to the ability of the operating system to distinguish different types of
file such as text files, source files, binary files etc. Many operating systems support many
types of files. Operating systems like MS-DOS and Unix have the following types of files:
1. Ordinary files
Ɣ These are the files that contain user information.
Ɣ These may have text, databases or executable programs.
Ɣ The user can apply various operations on such files like add, modify, delete
or even remove the entire file.
2. Directory files
Ɣ These files contain list of file names and other information related to these
files.
3. Special files
Ɣ These files are also known as device files.
Ɣ These files represent physical devices like disks, terminals, printers,
networks, tape drives, etc.
These files are of two types:
(i) Character special files – data is handled character by character as in
case of terminals or printers.
(ii) Block special files – data is handled in blocks as in the case of disks
and tapes.


2.2.4 File Access Mechanisms


File access mechanism refers to the manner in which the records of a file may be
accessed. There are several ways to access files:
Ɣ Sequential access
Ɣ Direct/Random access
Ɣ Indexed sequential access
Most file systems have methods to assign permissions or access rights to specific
users and groups of users. These systems control the ability of the users to view or make
changes to the contents of the file system.
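As a concrete illustration of such permissions, the following Python sketch (standard
library only) reads a file's traditional Unix permission bits and then tightens them so that
only the owner can read and write the file; the path used is a placeholder.

import os
import stat

path = "/tmp/example.txt"                      # placeholder path
open(path, "w").close()                        # create an empty file to inspect

print(stat.filemode(os.stat(path).st_mode))    # e.g. "-rw-r--r--"

# Restrict access to the owner: read/write for the owner, nothing for
# group and others (the equivalent of "chmod 600").
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
print(stat.filemode(os.stat(path).st_mode))    # "-rw-------"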

2.2.5 Operating System Variations for File System Security


Unix-like and otherwise POSIX-compliant systems, including Linux-based systems
and all Mac OS X versions, have a simple system for managing individual file
permissions, which in this article are called “traditional Unix permissions”. Most of these
systems also support some kind of access control lists, either proprietary (old HP-UX
ACLs, for example), or POSIX.1e ACLs, based on an early POSIX draft that was
abandoned, or NFSv4 ACLs, which are part of the NFSv4 standard.
Microsoft and IBM DOS variants (including MS-DOS, PC DOS, Windows 95,
Windows 98, Windows 98 SE, and Windows Me) do not have permissions, only file
attributes. There is a read-only attribute (R), which can be set or unset on a file by any
user or program, and which therefore does not prevent anyone from changing or deleting the file.
There is no permission in these systems which would prevent a user from reading a file.
Other MS-DOS/PC DOS-compatible operating systems such as DR DOS 3.31 and
higher, Palm DOS, Novell DOS, and Open DOS, Flex OS, 4680 OS, 4690 OS,
Concurrent DOS, Multiuser DOS, Datapac System Manager and IMS REAL/32 support
read/write/execute/delete file/directory access permissions on FAT volumes. With the
exception of Flex OS, 4680 OS, and 4690 OS, all these operating systems also support
individual file/directory passwords. All operating systems except for DR DOS, Palm DOS,
Novell DOS and Open DOS also support three independent file/directory ownership
classes world/group/owner, whereas the single-user operating systems DR DOS 6.0 and
higher, Palm DOS, Novell DOS and Open DOS only support them with an optional
multi-user security module (SECURITY.BIN) loaded.
OpenVMS (a.k.a. VMS), as well as Microsoft Windows NT and its derivatives
(including Windows 2000 and Windows XP), use access control lists (ACLs) to
administer a more complex and varied set of permissions. OpenVMS also uses a
permission scheme similar to that of Unix, but more complex. There are four categories
(System, Owner, Group, and World) and four types of access permissions (Read, Write,
Execute and Delete). The categories are not mutually disjoint: World includes Group
which in turn includes Owner. The System category independently includes system users
(similar to super users in Unix).
Classic Mac Operating Systems are similar to DOS variants and DOS-based
Windows: they do not support permissions, but only a “Protected” file attribute.
The AmigaOS file system, AmigaDOS, supports a relatively advanced permissions
system for a single-user OS. In AmigaOS 1.x, files had Archive, Read, Write, Execute
and Delete (collectively known as ARWED) permissions/flags. In AmigaOS 2.x and
higher, additional Hold, Script, and Pure permissions/flags were added.
Mac OS X versions 10.3 (“Panther”) and prior use POSIX-compliant permissions.
Mac OS X, beginning with version 10.4 ("Tiger"), also supports the use of NFSv4 ACLs.
They still support “traditional Unix permissions” as used in previous versions of Mac OS


X, and the Apple Mac OS X Server version 10.4+ File Services Administration Manual
recommends using only traditional Unix permissions if possible. It also still supports the
Mac OS Classic’s “Protected” attribute.
Solaris ACL support depends on the file system being used; the older UFS file system
supports POSIX.1e ACLs, while ZFS supports only NFSv4 ACLs.
Linux supports POSIX.1e ACLs. There is experimental support for NFSv4 ACLs on the
ext3 file system.
FreeBSD supports POSIX.1e ACLs on UFS, and NFSv4 ACLs on UFS and ZFS.
IBM z/OS implements file security via RACF (Resource Access Control Facility).

2.3 Local Security Policy in Windows


Local Security Policy allows enforcing many system, user and security-related
settings, such as password policy, audit policy and user rights. Event Viewer can then be
used to check log events. By default, most settings in Windows are fine, but some still
need adjustment.
Sadly, Microsoft decided not to add the Local Security Policy console into home
versions of Windows. So, this article can be skipped by users of Windows Starter, Home,
Home Basic and Home Premium editions.
In Windows 8 and 8.1, most user account settings affect Local accounts only, not
Microsoft accounts. See the User management in Windows article for the differences
between these sign-in options.
Please change only the settings listed in this article; other settings could very well
make your computer inoperable or inaccessible to other computers in your home
network if you do not know what you are doing.
If you do decide to dig a little deeper, read a setting's description on the Explain tab
thoroughly before changing it!

2.3.1 Opening Local Security Policy Console in Windows


In all non-home versions of Windows, open the Run dialog using the keyboard shortcut
Windows Key + R. Type secpol.msc and click OK. Windows Vista and 7 users can also
type this into the Start menu Search box and press Enter.
Please note that in Windows 8 and 8.1, Local Security Policy is available in Start
screen search results only if you have enabled the display of Administrative Tools.
Touch screen users can swipe in from the right edge of the screen, tap Search, type
part of "administrative tools" into the Search box and tap the result. Then open Local
Security Policy from the window.

Windows Vista will open a User Account Control dialog. Click Continue to open the
console.

There is also a much more detailed configuration console available – the Local Group
Policy Editor. To access it, open the Run dialog using the keyboard shortcut Windows Key + R,
type gpedit.msc and click OK.
The settings described below are in the Computer Configuration, Windows Settings,
Security Settings section of the Local Group Policy console window.
Again, please do not change any settings not described below unless you really
know what you are doing. Always read the setting explanation thoroughly!


2.3.2 Defining a Password Policy in Windows


If you want to make sure that you and the other users of your computer have secure
passwords and that passwords are changed after a defined number of days, you need to
set up a password policy.
Expand Account Policies and click Password Policy on the left side of the Local Security
Settings window. Double-click Enforce password history on the right side of the window.

This setting defines how many previously used passwords Windows remembers for
each user, to prevent frequent reuse of passwords. Usually, 3-5 is enough. Click OK to
close the dialog.


Now change other settings of Password Policy by double-clicking on them (settings
not listed below are fine by default):
Ɣ Maximum password age – default is "42". This specifies how long a user can
use the same password for his/her Windows account.
You can set the number higher if you want to ("120" is a reasonable choice), but
keep in mind that you should change your password at least once a year, so do
not enter more than "365" here.
Ɣ Minimum password age – default is “0”, meaning that users can change their
passwords whenever they like. If you set this to “1”, it means that a password
must be in effect for at least 1 day (24 hours) before a user can change it again.
Ɣ Minimum password length – set to at least “8”, but “12” is recommended.
Ɣ Password must meet complexity requirements – set to "Enabled". This means
that a password must contain characters from at least three of four categories:
uppercase letters, lowercase letters, numbers and special characters
(punctuation marks, for example).
This is a very important step in keeping user accounts secure in
Windows. Please read the Creating Strong Passwords tutorial on how to
make and remember strong passwords. (A sketch of such a complexity
check follows this list.)
Ɣ Store passwords using reversible encryption – always leave this set to "Disabled".
If you enable this policy, all users' passwords become easy to crack.
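The sketch below expresses the complexity rule described above in Python: a minimum
length plus characters from at least three of the four categories. It is a simplified
illustration, not Microsoft's exact algorithm (the real check also rejects passwords
containing the user's account name).

import string

def meets_complexity(password, min_length=12):
    # Simplified Windows-style rule: minimum length, plus characters
    # from at least three of the four categories below.
    if len(password) < min_length:
        return False
    categories = [
        any(c.isupper() for c in password),
        any(c.islower() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(categories) >= 3

print(meets_complexity("CorrectHorse99!"))   # True: long, four categories
print(meets_complexity("alllowercase"))      # False: only one category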


The next time a user changes his/her password, it must be in accordance with the
Password Policy. If it is not, the error message "The password you typed does not meet
the password policy requirements" will be displayed:

The user must then enter a password that satisfies the Password Policy requirements.
Current passwords are not affected by the policy; the requirements are checked only
when a password is changed. The only setting that does apply immediately is maximum
password age – current passwords will have to be changed after the specified number of
days. You can read instructions on creating and remembering strong passwords in the
Passwords article.

2.3.3 Defining an Account Lockout Policy


A strong password is good, but when a malicious program (or someone at your
keyboard) is trying to break your password, the attempts must be stopped quickly. By
default, anyone or anything can enter any password any number of times without being
stopped by Windows. Such behavior is called a brute-force attack, and you can stop it by
creating an Account Lockout Policy: when a user enters a wrong password several
times, the account is locked out for a specified period of time. The user cannot log on
during this time, and every login attempt during the lockout period extends the period.
Expand Account Lockout Policy on the left and double-click Account lockout
threshold:

Specify the number of times a user can enter a wrong password before Windows
locks the user account. I recommend using “5” for this. Click OK.


Next, Windows offers default values for the Account lockout duration and Reset
account lockout counter after settings. These specify how long a user account stays
locked after a wrong password has been entered too many times (during that time, the
user cannot log on to the computer), and after which period of time the count of wrong
passwords entered is set back to zero.
The defaults are fine; click OK.
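To make the interaction of the three lockout settings concrete, here is a toy Python model
of the policy. The threshold and time values are illustrative, and real enforcement is of
course done by Windows itself.

import time

class LockoutPolicy:
    # Toy model of the three settings above. Illustrative values:
    # 5 wrong attempts lock the account for 30 minutes, and the failure
    # counter resets after 30 minutes without a wrong attempt.
    def __init__(self, threshold=5, duration=1800, reset_after=1800):
        self.threshold = threshold
        self.duration = duration
        self.reset_after = reset_after
        self.failures = 0
        self.first_failure = None
        self.locked_until = 0.0

    def record_failure(self, now=None):
        now = time.time() if now is None else now
        if now < self.locked_until:
            self.locked_until = now + self.duration   # attempts extend the lockout
            return "locked"
        if self.first_failure is not None and now - self.first_failure > self.reset_after:
            self.failures, self.first_failure = 0, None   # reset window expired
        self.failures += 1
        if self.first_failure is None:
            self.first_failure = now
        if self.failures >= self.threshold:
            self.locked_until = now + self.duration
            return "locked"
        return "allowed"

policy = LockoutPolicy()
for second in range(6):
    print(second + 1, policy.record_failure(now=float(second)))
# Attempts 1-4 are allowed; attempts 5 and 6 report "locked".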


2.3.4 Defining an Audit Policy


The next article, Event Viewer, tells how to track successful and failed logons,
password change attempts and policy changes. Before this can be done, an Audit Policy
must be in place.
Expand Local Policies on the left side and click Audit Policy. Double-click the first
item, Audit account logon events.
If you have a home network (e.g., multiple devices using HomeGroup, or folders/
printers shared over the local network), check both Success and Failure boxes and then click
OK. This means that both successful and failed logons/sign-ins from another computer or
device on the same network are recorded in the Security log of Event Viewer. Please note
that this has nothing to do with Internet connections.
If you do not have a home network, you can safely leave these boxes unticked.

Adjust other Audit Policy settings as described below. “No auditing” means clearing
both Success and Failure check boxes.
Ɣ Audit account management – This stores events related to creating,
changing and deleting user accounts. Tick both Success and Failure.
Ɣ Audit directory service access – Leave this at No auditing. This is related to
Active Directory domain servers only.

Ɣ Audit logon events – Always turn on both Success and Failure. This records
all logon attempts on your computer.
Ɣ Audit object access – In most cases, leave this at No auditing. If enabled, it
records events related to users accessing folders, printers, Registry entries,
etc. that have non-default access rights defined.
Ɣ Audit policy change – Enable both Success and Failure. This starts storing
events that deal with changing this policy and adding or removing user rights
(for example, adding a Standard/Limited user to the Administrators group).
Ɣ Audit privilege use – You can leave this at No auditing on most home
computers. If you do need to record failed attempts to access a folder, file,
printer, etc., enable Failure only.
Ɣ Audit process tracking – This should be left at No auditing. If enabled, all
program launches and exits, plus service and scheduled job creations and
some technical details are recorded in Event Viewer. This would mean a lot of
unneeded entries for home users.
Ɣ Audit system events – Tick both Success and Failure here. This records
events related to Windows starting up or shutting down, clearing event logs,
etc. – helpful stuff that might be useful while troubleshooting.

2.3.5 Setting Basic Security Options

Expand Security Options (under Local Policies)


Verify that Accounts: Limit local account use of blank passwords to console logon
only is set to Enabled. This prevents users with no password set from logging in over the
network or Remote Desktop. This setting is extremely important in Windows XP, where
the Administrator account is enabled and has a blank password. In Windows Vista and
later, the Administrator account is disabled by default.

Scroll down to Network access options and make sure that Network access:
Allow anonymous SID/name translation is set to Disabled.


Then set both Network access: Do not allow anonymous enumeration of SAM
accounts and Network access: Do not allow anonymous enumeration of SAM accounts
and shares to Enabled. These settings make sure that only authenticated users get
access to shared resources (printers, folders, etc.) over local networks.
Changing these settings spawns a warning dialog “You are about to change this
setting to a value that may affect compatibility with clients, services and applications”.
This can be safely ignored.

In the Network security options, make sure that Network security: Do not store
LAN Manager hash value on next password change is set to Enabled. This prevents
storing the weak LAN Manager (LM) hash of the account password, an easy target for
hackers and malware.

Next, set Network security: LAN Manager authentication level to Send
NTLMv2 response only. Refuse LM if you are using a home network. This prevents the
use of older, easy-to-crack authentication methods while accessing shared resources.
Please remember that you must set the same authentication level for all Windows
computers on your network, otherwise file and printer sharing will not work!
The setting is not very important for devices that are not connected to a local
network, or not sharing files or printers.


2.3.6 Applying Changed Settings in Local Security Policy


Other settings in Local Security Policy are good by default, so we just need to
enforce the policies we changed (again, do not mess with settings not described here;
they can easily make your computer inoperable and ruin your day or even your week!).
To do that, right-click Security Settings at the top of the left pane and click
Reload.

You can now close the Local Security Settings window and read on to find out about
tracking system and security events using Event Viewer.
In case you used the Local Group Policy console instead, open the Run dialog (Windows
Key + R), type gpupdate /force and click OK. You can also restart your computer for the
same effect.

2.4 Default Accounts

2.4.1 Removing Unnecessary Default User Accounts


During installation of the operating system, a number of default user and group IDs
are created. Depending on the applications you are running on your system and where
your system is located in the network, some of these user and group IDs can become
security weaknesses, vulnerable to exploitation. If these users and group IDs are not
needed, you can remove them to minimize security risks associated with them.
The following table lists the most common default user IDs that you might be able to
remove:
Table 2.1: Common default user IDs that you might be able to remove

User ID Description

uucp, nuucp Owner of hidden files used by uucp protocol. The uucp user account is used for
the Unix-to-Unix Copy Program, which is a group of commands, programs, and
files, present on most AIX® systems, that allows the user to communicate with
another AIX system over a dedicated line or a telephone line.

lpd Owner of files used by printing subsystem

guest Allows access to users who do not have access to accounts

The following table lists common group IDs that might not be needed:

Table 2.2: Common group IDs that might not be needed

Group ID Description

uucp Group to which uucp and nuucp users belong

printq Group to which lpd user belongs

Analyze your system to determine which IDs are indeed not needed. There might
also be additional user and group IDs beyond these that you do not need. Before your
system goes into production, perform a thorough evaluation of the available IDs.
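On a Unix-like system, a first pass at this evaluation can be automated. The short Python
sketch below scans /etc/passwd for the candidate IDs from Table 2.1; it only reports what
it finds, and every hit should be reviewed before anything is removed.

# Candidate default accounts from Table 2.1.
CANDIDATES = {"uucp", "nuucp", "lpd", "guest"}

with open("/etc/passwd") as passwd:
    for line in passwd:
        username = line.split(":", 1)[0]
        if username in CANDIDATES:
            print("Default account present:", username)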

2.5 Network Activity


A computer network is a system in which multiple computers are connected to each
other to share information and resources.


Computer Network Activity


Ɣ Share resources from one computer to another.
Ɣ Create files and store them in one computer, access those files from the other
computer(s) connected over the network.
Ɣ Connect a printer, scanner, or a fax machine to one computer within the
network and let other computers of the network use the machines available
over network.
Following is the list of hardware required to set up a computer network.
Ɣ Network Cables
Ɣ Distributors
Ɣ Routers
Ɣ Internal Network Cards
Ɣ External Network Cards

Network Cables
Network cables are used to connect computers. The most commonly used cable is
Category 5 cable with RJ-45 connectors.

Distributors
A computer can be connected to another one via a serial port, but if we need to
connect many computers to produce a network, this serial connection will not work. The
solution is to use a central device to which other computers, printers, scanners, etc. can be
connected; this device then manages or distributes network traffic.


Router
A router is a type of device which acts as the central point among the computers and
other devices that are part of a network. A router is equipped with sockets called ports, and
computers and other devices are connected to the router using network cables. Nowadays,
routers also come in wireless models, with which computers can connect without any
physical cable.

Network Card
A network card is a necessary component of a computer; without one, a computer
cannot be connected to a network. It is also known as a network adapter or Network
Interface Card (NIC). Most branded computers have a network card pre-installed. Network
cards are of two types: internal and external.

Internal Network Cards


The motherboard has a slot into which an internal network card is inserted. Internal
network cards are of two types: the first type uses a Peripheral Component
Interconnect (PCI) connection, while the second type uses Industry Standard Architecture
(ISA). A network cable is required to provide network access.


External Network Cards


External network cards come in two flavours: wireless and USB-based. A wireless
network card needs to be inserted into the motherboard, but no network cable is required
to connect to the network.

Universal Serial Bus (USB)


USB network cards are easy to use and connect via a USB port. Computers
automatically detect a USB card and install the drivers required to support it.

2.6 Malicious Code


Viruses and worms are related classes of malicious code; as a result, they are often
confused. Both share the primary objective of replication. However, they are distinctly
different with respect to the techniques they use and their host system requirements. This
distinction is due to the disjoint sets of host systems they attack. Viruses have been
almost exclusively restricted to personal computers, while worms have attacked only
multi-user systems.

A careful examination of the histories of viruses and worms can highlight the
differences and similarities between these classes of malicious code. The characteristics
shown by these histories can be used to explain the differences between the
environments in which they are found. Viruses and worms have very different functional
requirements; currently no class of systems simultaneously meets the needs of both.
A review of the development of personal computers and multi-tasking workstations
will show that the gap in functionality between these classes of systems is narrowing
rapidly. In the future, a single system may meet all of the requirements necessary to
support both worms and viruses. This implies that worms and viruses may begin to
appear in new classes of systems. A knowledge of the histories of viruses and worms
may make it possible to predict how malicious code will cause problems in the future.

2.6.1 Malicious Code in Java

Types of Malicious Code


Ɣ Viruses: Pieces of code that attach to host programs and propagate when an
infected program executes
Ɣ Worms: Particular to networked computers, carry out pre-programmed attacks
to jump across the network
Ɣ Trojan Horses: Hide malicious intent inside a host program that appears to do
something useful
Ɣ Attack scripts: Programs written by experts to exploit security weaknesses,
usually across the network
Ɣ Java attack applets: Programs embedded in Web pages that gain foothold
through a browser
Ɣ ActiveX controls: Program components that allow malicious code fragments to
control applications or the OS
Anti-virus software is software that protects your computer from all kinds of
malicious programs that enter your computer without your consent. Such malicious
programs are intrusive, hostile, and annoying. The different types of malicious
programs include:
Ɣ Computer virus: This hostile program acts similarly to the viruses that
infect humans. As soon as it enters your computer, it sits quietly until it finds
executable software that helps it to spread. It actively transmits itself to the
computers that are connected on the network and can destroy your important
software.
Ɣ Worm: This hostile program is quite similar to a virus, but it does not need an
executable host to spread. It spreads automatically and can destroy important
software.


Ɣ Trojan horse: This hostile program conceals itself under other useful
programs and invites the user to run it. As soon as it executes, it starts deleting
the user’s files and starts installing other malicious software.
Ɣ Spyware: This harmful program is quite different from viruses and trojans but
is equally harmful. It does not spread like a virus, but it keeps showing annoying
pop-ups to lure you into installing a paid version that treacherously claims to protect
your computer. It secretly collects private information such as credit
card details, social security numbers, and usernames and passwords from
your computer and sends them to remote computers.
Ɣ Adware: This software is somewhat similar to spyware, but its main purpose is
advertising. It may not be considered malicious software because it
comes onto your computer with your consent. However, it may hit you with a
barrage of advertisements, from pop-ups to banner ads.
Ɣ Grayware: This term is used broadly for all computer programs that
are annoying but not necessarily totally destructive. It includes programs such
as adware, joke programs, and dialers.
Anti-virus software needs to be updated with the latest virus definitions to keep
protecting your computer against the latest malicious programs. Most anti-virus software
can be set to update automatically.
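At its simplest, signature-based detection compares file fingerprints against a database of
known-bad hashes. The Python sketch below shows the idea; the signature value is a
placeholder, not a real malware signature, and real anti-virus products add heuristics and
behavioral analysis on top of signature matching.

import hashlib
from pathlib import Path

# Digests of known-bad files. The value below is a placeholder.
SIGNATURES = {"0" * 64}

def scan(directory):
    # Hash every regular file under `directory` and flag any match.
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in SIGNATURES:
                print("Match found:", path)

scan(".")   # scan the current directory tree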

2.6.2 Malicious Code Threatens Enterprise Security


Malicious code can give a user remote access to a computer. This is known as an
application backdoor. Backdoors may be created with malicious intent, to gain access to
confidential company or customer information. But they can also be created by a
programmer who wants quick access to an application for troubleshooting purposes.
They can even be created inadvertently through programming errors. Regardless of their
origin, all backdoors and malicious code can become a security threat if they are found
and exploited by hackers or unauthorized users. As applications today tend to be built
more and more often with reusable components from a variety of sources with varying
levels of security, malicious code can pose a significant operational risk to the enterprise.
That’s why so many enterprises today are turning to Veracode to secure their
applications.

2.6.3 How to Avoid Malicious Code?


One way to avoid malicious code in your applications is to add static analysis (also
called “white-box” testing) to your software development lifecycle to review your code for
the presence of malicious code. Veracode’s static code analysis looks at applications in a
non-runtime environment. This method of security testing has distinct advantages in that
it can evaluate both web and non-web applications and, through advanced modelling,
can detect malicious code in the software’s inputs and outputs that cannot be seen
through other testing methodologies.

2.6.4 Test for Malicious Code with Veracode


Veracode has the ability to scan applications for malicious code threats, including
time bombs, hardcoded cryptographic constants and credentials, deliberate
information and data leakage, rootkits and anti-debugging techniques. These targeted
malicious code threats are hidden in software and mask their presence to evade
detection by traditional security technologies. Veracode’s detection capabilities provide
comprehensive support for combating backdoors and malicious code.


2.7 Firewall

2.7.1 Introduction
Firewalls are computer security systems that protect your office/home PCs or your
network from intruders, hackers and malicious code. Firewalls protect you from offensive
software that may come to reside on your systems and from prying hackers. In a day and
age when online security concerns are a top priority for computer users, firewalls
provide you with the necessary safety and protection.
Firewalls are a must-have for any kind of computer that goes online. They
protect you from all kinds of abuse and unauthorized access, such as Trojans that allow
remote logins or backdoors to take control of your computer, viruses, and attackers
using your resources to launch DoS attacks.
Firewalls are worth installing. Be it a basic standalone system, a home network or an
office network, all face varying levels of risk, and firewalls do a good job of mitigating
these risks. Tune the firewall for your requirements and security levels and you have one
less reason to worry.

What Exactly are Firewalls?


Firewalls are software programs or hardware devices that filter the traffic that flows
into your PC or your network through an internet connection. They sift through the data
flow and block that which they deem (based on how and for what you have tuned the
firewall) harmful to your network or computer system.
When connected to the internet, even a standalone PC or a network of
interconnected computers makes an easy target for malicious software and unscrupulous
hackers. A firewall can offer the security that makes you less vulnerable and also protects
your data from being compromised or your computers from being taken hostage.

How Do They Work?


Firewalls are set up at every connection to the Internet, thereby subjecting all data
flow to careful monitoring. Firewalls can also be tuned to follow “rules”. These rules are
simply security rules that can be set up by you or by the network administrators to
allow traffic to their web servers, FTP servers and Telnet servers, thereby giving the
computer owners/administrators immense control over the traffic that flows in and out of
their systems or networks.
Rules decide who can connect to the internet, what kind of connections can be
made, and which or what kind of files can be transmitted in or out. Basically, all traffic in
and out can be watched and controlled, thus giving the firewall installer a high level of
security and protection.

2.7.2 Firewall Logic


Firewalls use three types of filtering mechanisms:
Ɣ Packet filtering or packet purity: Data flow consists of packets of information
and firewalls analyze these packets to sniff out offensive or unwanted packets
depending on what you have defined as unwanted packets.
Ɣ Proxy: In this case, the firewall assumes the role of the recipient and in turn
sends the information on to the node that requested it, and vice versa.
Ɣ Inspection: In this case, instead of sifting through all of the information in the
packets, the firewall marks key features in all outgoing requests and checks for
the same matching characteristics in the inflow to decide whether relevant
information is coming through.

2.7.3 Firewall Rules


Firewall rules can be customized as per your needs, requirements and security
threat levels. You can create or disable firewall filter rules based on conditions such as
the following (a rule-matching sketch follows the list):
Ɣ IP Addresses: Blocking off a certain IP address or a range of IP addresses,
which you think are predatory.
Ɣ Domain names: You can allow only certain specific domain names to access
your systems/servers, or allow access only to some specified types of domain
names or domain name extensions like .edu or .mil.
Ɣ Protocols: A firewall can decide which of the systems can allow or have
access to common protocols like IP, SMTP, FTP, UDP, ICMP, Telnet or
SNMP.
Ɣ Ports: Blocking or disabling ports of servers that are connected to the internet
will help maintain the kind of data flow you want and also close down possible
entry points for hackers or malicious software.
Ɣ Keywords: Firewalls can also sift through the data flow for matching
keywords or phrases to block offensive or unwanted data from flowing in.
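The following Python sketch shows how such conditions can combine into an ordered
rule set, using the common convention that the first matching rule wins and anything
unmatched is denied. The addresses, ports and actions are illustrative.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str             # "allow" or "deny"
    network: str            # source network, e.g. "203.0.113.0/24"
    protocol: str           # "tcp", "udp", or "any"
    port: Optional[int]     # destination port; None means any port

RULES = [
    Rule("deny",  "203.0.113.0/24", "any", None),   # block a predatory range
    Rule("allow", "0.0.0.0/0",      "tcp", 80),     # permit web traffic
    Rule("allow", "0.0.0.0/0",      "tcp", 443),
]

def decide(src_ip, protocol, port):
    # First matching rule wins; anything unmatched is denied by default.
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule.network)
                and rule.protocol in ("any", protocol)
                and rule.port in (None, port)):
            return rule.action
    return "deny"

print(decide("203.0.113.7", "tcp", 80))    # deny  (blocked range)
print(decide("198.51.100.9", "tcp", 443))  # allow
print(decide("198.51.100.9", "udp", 53))   # deny  (no matching rule)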

2.7.4 Types of Firewall


Ɣ Software firewalls: New-generation operating systems come with built-in
firewalls, or you can buy firewall software for the computer that accesses the
internet or acts as the gateway to your home network.
Ɣ Hardware firewalls: Hardware firewalls are usually routers with a built-in
Ethernet card and hub. Your computer or computers on your network connect
to this router and access the web.
Some of the firewall products that you may want to check out are:
Ɣ McAfee Internet Security
Ɣ Microsoft Windows Firewall
Ɣ Norton Personal Firewall
Ɣ Trend Micro PC-cillin
Ɣ ZoneAlarm Security Suite
By definition, a firewall is a single device used to enforce security policies within a
network or between networks by controlling traffic flows.
The Firewall Services Module (FWSM) is a very capable device that can be used to
enforce those security policies. The FWSM was developed as a module or blade that
resides in either a Catalyst 6500 series chassis or a 7600 series router chassis. The
“tight” integration with a chassis offers increased flexibility, especially with network
virtualization and the incredible throughput that is not only available today but will
increase significantly with the introduction of the 4.x code train.
The look and feel of the FWSM is similar to that of the PIX and ASA. These products
are all part of the same family, originating with the PIX and the “finesse” operating system.
If you have had any experience with either the PIX or ASA, you will find comfort in not
having to learn another user interface.
Having a good understanding of the capabilities offered by the different types of
firewalls will help you in placing the appropriate type of firewall to best meet your security
needs.


2.7.5 Understanding Packet-filtering Firewalls


Packet-filtering firewalls validate packets based on protocol, source and/or
destination IP addresses, source and/or destination port numbers, time range,
Differentiated Services Code Point (DSCP), type of service (ToS), and various other
parameters within the IP header. Packet filtering is generally accomplished using Access
Control Lists (ACLs) on routers or switches and is normally very fast, especially when
performed in an Application-Specific Integrated Circuit (ASIC). As traffic enters or exits an
interface, ACLs are used to match selected criteria and either permit or deny individual
packets.

Advantages
The primary advantage of packet-filtering firewalls is that they are located in just
about every device on the network. Routers, switches, wireless access points, Virtual
Private Network (VPN) concentrators, and so on may all have the capability of being a
packet-filtering firewall.
Routers from the very smallest home office to the largest service provider devices
inherently have the capability to control the flow of packets through the use of ACLs.
Switches may use Routed Access Control Lists (RACLs), which provide the
capability to control traffic flow on a “routed” (Layer 3) interface; Port Access Control Lists
(PACL), which are assigned to a “switched” (Layer 2) interface; and VLAN Access
Control Lists (VACLs), which have the capability to control “switched” and/or “routed”
packets on a VLAN.
Other networking devices may also have the power to enforce traffic flow through
the use of ACLs. Consult the appropriate device documentation for details.
Packet-filtering firewalls are most likely a part of your existing network. These
devices may not be the most feature rich, but when you need to quickly implement a
security policy to mitigate an attack, protect against infected devices, and so on, this may
be the quickest solution to deploy.

Caveats
The challenge with packet-filtering firewalls is that ACLs are static, and packet
filtering has no visibility into the data portion of the IP packet.
Tip: Packet-filtering firewalls do not have visibility into the payload.

Because packet-filtering firewalls match only individual packets, this enables an
individual with malicious intent, also known as a “hacker,” “cracker,” or “script kiddie,” to
easily circumvent your security (at least this device) by crafting packets, misrepresenting
traffic using well-known port numbers, or tunneling traffic unsuspectingly within traffic
allowed by the ACL rules. Developers of peer-to-peer sharing applications quickly
learned that using TCP port 80 (www) would allow them unobstructed access through the
firewall.
Notes: The terms used to describe someone with malicious intent may not be the same in all circles.
Ɣ A cracker refers to someone who “cracks” or breaks into a network or computer, but can also
define someone who “cracks” or circumvents software protection methods, such as keys.
Generally, it is not a term of endearment.
Ɣ A hacker describes someone skilled in programming and who has an in-depth understanding of
computers and/or operating systems. This individual can use his or her knowledge for good
(white-hat hacker) or evil (black-hat hacker). Also, it describes my golf game.
Ɣ A script kiddie is someone who uses the code, methods, or programs created by a hacker for
malicious intent.

Figure 2.1 shows an example of a packet-filtering firewall: a router using a traditional
ACL, in this case access list 100. Because the ACL matches traffic destined for port 80,
any flows destined to port 80, no matter what kind, will be allowed to pass through the
router.

Figure 2.1: Packet-filtering Firewall

Given the issues with packet filtering and the fact that such firewalls are easy to circumvent,
you may be tempted to dismiss using them entirely. This would be a huge mistake! Taking a holistic
approach and using multiple devices to provide defense in depth is a much better
strategy. An excellent use of packet filtering is on the border of your network, preventing
spoofed traffic and private IP addresses (RFC 1918) from entering or exiting your
network. In-depth ACL configuration is beyond the scope of this book, but a good
reference is RFC 2827.

2.7.6 Understanding Application/Proxy Firewalls


The following section uses the Open System Interconnection (OSI) model in the
description of application/proxy firewalls and warrants a brief review. The OSI model
describes how information is transmitted from an application on one computer to an
application on another. Each layer performs a specific task on the information and
passes it to the next layer. This model helps explain where functions take place.
The seven layers of the OSI model are as follows:


Ɣ Layer 7 is the application layer: It is the user interface to your computer (the
programs), for example, word processor, e-mail application, telnet, and so on.
Ɣ Layer 6 is the presentation layer: It acts as the translator between systems,
converting application layer information to a common format understandable by
different systems. This layer handles encryption and standards such as Motion
Picture Experts Group (MPEG) and Tagged Image File Format (TIFF).
Ɣ Layer 5 is the session layer: It manages the connections or service requests
between computers.
Ɣ Layer 4 is the transport layer: It prepares data for delivery to the network.
Transmission Control Protocol is a function of Layer 4, providing reliable
communication and ordering of data. User Datagram Protocol is also a role of
Layer 4, but it does not provide reliable delivery of data.
Ɣ Layer 3 is the network layer: It is where IP addressing and routing happen.
Data at this layer is considered a “packet.”
Ɣ Layer 2 is the data-link layer: It handles the reliable sending of information.
Media Access Control is a component of Layer 2. Data at this layer would be
referred to as a “frame.”
Ɣ Layer 1 is the physical layer: It is composed of the objects that you can see
and some that you cannot, such as electrical characteristics.

Tip: Use the following mnemonic to remember the OSI model: All People Seem to Need Data
Processing.

Application firewalls, as indicated by the name, work at Layer 7, the application
layer of the OSI model. These devices act on behalf of a client (aka proxy) for requested
services. For example, open a web browser and then open a web page at
http://www.cisco.com. The request is sent to the proxy firewall, and then the proxy
firewall, acting on your behalf, opens a web connection to http://www.cisco.com. That
information is then transmitted to your web browser for your viewing pleasure.

Advantages
Because application/proxy firewalls act on behalf of a client, they provide an
additional “buffer” from port scans, application attacks, and so on. For example, if an
attacker found a vulnerability in an application, the attacker would have to compromise
the application/proxy firewall before attacking devices behind the firewall. The
application/proxy firewall can also be patched quickly in the event that a vulnerability is
discovered. The same may not hold true for patching all the internal devices.

Caveats
A computer acting on your behalf at the application layer has a couple of caveats.
First, that device needs to know how to handle your specific application. Web-based
applications are very common, but if you have an application that’s unique, your proxy
firewall may not be able to support it without making some significant modifications.
Second, application firewalls are generally much slower than packet-filtering or
packet-inspection firewalls because they have to run applications, maintain state for both
the client and server, and also perform inspection of traffic.
Figure 2.2 shows an application/proxy firewall and how a session is established
through it to a web server on the outside.

Figure 2.2: Application/Proxy Firewall

The step-by-step process, as shown in the figure, is as follows:


Step 1 The client attempts to connect to the web server located on the outside. For example,
a user enters http://www.cisco.com in a web browser.
Step 2 The proxy server receives the request and forwards that request to the appropriate
web server (http://www.cisco.com).
Step 3 The web server receives the request and responds back to the proxy server with the
requested information.
Step 4 The proxy server receives the information and forwards it to the originating client.

Note: For simplicity’s sake, Domain Name Service (DNS), Address Resolution Protocol (ARP),
and Layer 2/3 information is not discussed in this example. This also assumes that the client web
application has been configured with the appropriate proxy information.
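A proxy can be reduced to a surprisingly small program. The Python sketch below is a
minimal, single-request relay that follows the four steps above: it accepts one client
connection, forwards the raw request to the server, and relays the response back. The
addresses are illustrative, and a real proxy would add concurrency, caching, filtering and
error handling.

import socket

LISTEN = ("127.0.0.1", 8888)       # where the proxy listens (illustrative)
TARGET = ("www.example.com", 80)   # upstream web server (illustrative)

with socket.create_server(LISTEN) as proxy:
    client, _ = proxy.accept()                 # Step 1: client request arrives
    request = client.recv(65535)
    with socket.create_connection(TARGET) as upstream:
        upstream.sendall(request)              # Step 2: forward to the server
        while True:
            chunk = upstream.recv(65535)       # Step 3: server responds
            if not chunk:
                break
            client.sendall(chunk)              # Step 4: relay to the client
    client.close()

While it runs, a browser or web client configured to use 127.0.0.1:8888 as its proxy will
have exactly one request relayed through it.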

Application/proxy firewalls can be very effective devices to control traffic flow and
protect clients from malicious software (malware) and outside attacks. These firewalls
must also run applications similar to the clients, which can also make them vulnerable to
application attacks.

2.7.7 Understanding Reverse Proxy Firewalls


Reverse-proxy firewalls function in the same way as proxy firewalls, with the
exception that they are used to protect the servers and not the clients. Clients connecting
to a web server may unknowingly be sent to a proxy server, which services the request
on their behalf. The proxy server may also be able to load-balance requests across
multiple servers, consequently spreading the workload.

Advantages
To be really effective, reverse proxies must understand how the application behaves.
For example, suppose you have a web application that requires input of a mailing address,
specifically the ZIP code. The application firewall needs to be intelligent enough to deny
input that could cause the server on the far end any potential issues, such as a
buffer overflow.

Note: A buffer overflow occurs when the limits of a given allocated space of memory are exceeded.
This results in adjacent memory space being overwritten. If the memory space is overwritten with
malicious code, that code can potentially be executed, compromising the device.

If a cracker were to input letters or a long string of characters into the ZIP code field,
this could cause the application to crash. As we all know, well-written applications
“shouldn’t” allow this type of behavior, but “carbon-based” mistakes do happen, and
having defense in depth helps minimize the human element. Having the proxy keenly
aware of the application and what’s allowed is a very tedious process. When any
changes are made to the application, the proxy must also change. Most organizations
deploying reverse-proxy firewalls don’t usually couple their proxy and applications so
tightly to get the most advantage from them, but they should.
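In Python, the ZIP code check described above might look like the following sketch:
reject oversized input outright, then accept only values matching the expected format.
The length limit and pattern are illustrative.

import re

def validate_zip(value, max_len=10):
    # Reject anything that is not a plausible US ZIP code before it
    # reaches the application server.
    if len(value) > max_len:                   # blocks oversized input
        return False
    return re.fullmatch(r"\d{5}(-\d{4})?", value) is not None

print(validate_zip("90210"))        # True
print(validate_zip("90210-1234"))   # True
print(validate_zip("A" * 5000))     # False: oversized/garbage input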
Another advantage of a reverse-proxy firewall is Secure Sockets Layer (SSL)
termination. There are two significant benefits: SSL processing, which is very
processor-intensive, does not burden the application server, and because decryption is done
on a separate device, the plain-text traffic can be inspected. Many reverse-proxy firewalls perform SSL
termination with an additional hardware module, consequently reducing the burden on
the main processors. Figure 2.3 shows an example of a client on the outside (Internet, for
example) requesting information from a web server.

Figure 2.3: Reverse-Proxy Firewall

The step-by-step process, as shown in the figure, is as follows:


Step 1 The client opens a web browser and enters the URL that directs them to the associated proxy web server, requesting information.
Steps 2 and 3 The proxy server can have multiple locations from which to glean information. In this example, it requests graphics from Application Server 1 and real-time data from Application Server 2.
Steps 4 and 5 The proxy server prepares the content received from Application Servers 1 and 2 for distribution to the requesting client.
Step 6 The proxy server responds to the client with the requested information.

As you can see by the previous example, the function of a reverse-proxy server is
very beneficial in distributing the processing function over multiple devices and by
providing an additional layer of security between the client requesting information and the
devices that contain the “real” data.
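A minimal sketch of the aggregation performed in steps 2 through 5 follows, in Python. The internal application-server URLs are hypothetical placeholders; a real reverse proxy would add load balancing, caching, and error handling.

    from urllib.request import urlopen

    # Hypothetical back-end servers, as in Figure 2.3.
    APP_SERVERS = {
        "graphics":  "http://app1.internal/graphics",   # Application Server 1
        "real_time": "http://app2.internal/data",       # Application Server 2
    }

    def handle_client_request() -> bytes:
        parts = []
        for name, url in APP_SERVERS.items():   # steps 2 and 3: glean content
            with urlopen(url) as resp:
                parts.append(resp.read())
        # Steps 4 and 5: prepare the combined content; step 6: the caller
        # sends it back to the requesting client.
        return b"".join(parts)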

2.8 Fault Tolerant System


If we look at the words fault and tolerance, we can define a fault as a malfunction or deviation from expected behavior, and tolerance as the capacity for enduring or putting up with something. Putting the words together, fault tolerance refers to a system’s ability to deal with malfunctions.

2.8.1 Faults
A fault in a system is some deviation from the expected behavior of the system: a
malfunction. Faults may be due to a variety of factors, including hardware failure,
software bugs, operator (user) error, and network problems.

Faults can be classified into one of three categories:
1. Transient faults: These occur once and then disappear. For example, a
network message doesn’t reach its destination but does when the message is
retransmitted.
2. Intermittent faults: Intermittent faults are characterized by a fault occurring,
then vanishing again, then reoccurring, then vanishing. These can be the most
annoying of component faults. A loose connection is an example of this kind of
fault.
3. Permanent faults: This type of failure is persistent. It continues to exist until
the faulty component is repaired or replaced. Examples of this fault are disk
head crashes, software bugs, and burnt-out power supplies.
Any of these faults may be either a fail-silent failure (also known as a fail-stop) or
a Byzantine failure. A fail-silent fault is one where the faulty unit stops functioning and
produces no bad output. More precisely, it either produces no output or produces output
that clearly indicates that the component has failed. A Byzantine fault is one where the
faulty unit continues to run but produces incorrect results. Dealing with Byzantine faults is
obviously more troublesome.
When we discuss fault tolerance, the familiar terms synchronous and asynchronous
take on different meanings. By a synchronous system, we refer to one that responds to
a message within a known, finite amount of time. An asynchronous system does not.
Communicating via a serial port is an example of a synchronous system. Communicating
via IP packets is an example of an asynchronous system.

2.8.2 Approaches to Faults


We can try to design systems that minimize the presence of faults. Fault avoidance
is a process where we go through design and validation steps to ensure that the system
avoids being faulty in the first place. This can include formal validation, code inspection,
testing, and using robust hardware.
Fault removal is an ex post facto approach where faults were encountered in the
system and we managed to remove those faults. This could have been done through
testing, debugging, and verification as well as replacing failed components with better
ones, adding heat sinks to fix thermal dissipation problems, etc.
Fault tolerance is the realization that we will always have faults (or the potential for
faults) in our system and that we have to design the system in such a way that it will be
tolerant of those faults. That is, the system should compensate for the faults and continue
to function.

2.8.3 Achieving Fault Tolerance


The general approach to building fault tolerant systems is redundancy. Redundancy
may be applied at several levels.
Information redundancy seeks to provide fault tolerance through replicating or
coding the data. For example, a Hamming code can provide extra bits in data to recover
a certain ratio of failed bits. Sample uses of information redundancy are parity memory,
ECC (Error Correcting Codes) memory, and ECC codes on data blocks.
Time redundancy achieves fault tolerance by performing an operation several
times. Timeouts and retransmissions in reliable point-to-point and group communication
are examples of time redundancy. This form of redundancy is useful in the presence of
transient or intermittent faults. It is of no use with permanent faults. An example is
TCP/IP’s retransmission of packets.


Physical redundancy deals with devices, not data. We add extra equipment to
enable the system to tolerate the loss of some failed components. RAID disks and
backup name servers are examples of physical redundancy.
When addressing physical redundancy, we can differentiate redundancy from replication. With replication, we have several units operating concurrently and a voting (quorum) system to select the outcome. With redundancy, only one unit is functioning while the redundant units stand by, ready to take over if the functioning unit ceases to work.

2.8.4 Levels of Availability


In designing a fault-tolerant system, we must realize that 100% fault tolerance can
never be achieved. Moreover, the closer we try to get to 100%, the more costly our
system will be.
To design a practical system, one must consider the degree of replication needed.
This will be obtained from a statistical analysis for probable acceptable behavior. Factors
that enter into this analysis are the average worst-case performance in a system without
faults and the average worst-case performance in a system with faults.
Availability refers to the amount of time that a system is functioning (“available”). It is
typically expressed as a percentage that refers to the fraction of time that the system is
available to users. A system that is available 99.999% of the time (referred to as “five
nines”) will, on average, experience at most 5.26 minutes of downtime per year. This
includes planned (hardware and software upgrades) and unplanned (network outages,
hardware failures, fires, power outages, earthquakes) downtime.
Five nines is the classic standard of availability for telephony. Achieving it entails
intensive software testing, redundant processors, backup generators, and
earthquake-resilient installation. If all that ever happens to your system is that you lose
power for a day once a year, then your reliability is at 99.7%. You can compute an
availability percentage by dividing the minutes of uptime by the minutes in a year (or
hours of uptime by hours in a year, or anything similar). For example, if a system is
expected to be down for three hours a year on average, then the uptime percentage is
1 – (180 minutes/525600 minutes) = 99.97%. The following table shows some availability
levels, their common terms, and the corresponding annual downtime.
Class                  Availability    Annual Downtime
Continuous             100%            0
Fault Tolerant         99.999%         5 minutes
Fault Resilient        99.99%          53 minutes
High Availability      99.9%           8.8 hours
Normal Availability    99-99.5%        44-87 hours
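This downtime arithmetic is easy to automate. The short Python sketch below reproduces both directions of the computation and matches the “five nines” and three-hour examples above.

    MINUTES_PER_YEAR = 365 * 24 * 60            # 525,600

    def annual_downtime_minutes(availability_percent: float) -> float:
        # Downtime is simply the unavailable fraction of a year.
        return (1 - availability_percent / 100) * MINUTES_PER_YEAR

    def availability_from_downtime(downtime_minutes: float) -> float:
        # 1 - (downtime / minutes in a year), expressed as a percentage.
        return (1 - downtime_minutes / MINUTES_PER_YEAR) * 100

    print(annual_downtime_minutes(99.999))      # ~5.26 minutes ("five nines")
    print(availability_from_downtime(180))      # ~99.97% (three hours down)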

How Much Redundancy?


A system is said to be k-fault tolerant if it can withstand k faults. If the components fail silently, then it is sufficient to have k + 1 components to achieve k-fault tolerance: k components can fail and one will still be working. Note that this count applies per component type; any component that is not replicated remains a single point of failure. For example, three power supplies will be 2-fault tolerant: two power supplies can fail and the system will still function.
If the components exhibit Byzantine faults, then a minimum of 2k + 1 components
are needed to achieve k fault tolerance. This provides a sufficient number of working
components that will allow the good data to out-vote the bad data that is produced by the
Byzantine faults. In the worst case, k components will fail (generating false results) but
k + 1 components will remain working properly, providing a majority vote that is correct.
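Both rules reduce to a one-line helper, sketched here in Python for illustration.

    def replicas_needed(k: int, byzantine: bool) -> int:
        # Fail-silent: k + 1 suffice (k can fail, one still works).
        # Byzantine: 2k + 1 are needed so good outputs out-vote the bad.
        return 2 * k + 1 if byzantine else k + 1

    print(replicas_needed(2, byzantine=False))  # 3 power supplies: 2-fault tolerant
    print(replicas_needed(1, byzantine=True))   # 3 replicas tolerate 1 Byzantine fault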


2.8.5 Active Replication


Active replication is a technique for achieving fault tolerance through physical
redundancy. A common instantiation of this is triple modular redundancy (TMR). This
design handles 2-fault tolerance with fail-silent faults or 1-fault tolerance with Byzantine
faults.

Figure 2.4: No Redundancy


Under this system, we provide threefold replication of a component to detect and correct a single component failure. For example, consider a system where the output of A goes to the input of B and the output of B goes to C (Figure 2.4). Any single component failure will cause the entire system to fail.

Figure 2.5: Triple Modular Redundancy (TMR)


In a TMR design, we replicate each component three ways and place voters after
each stage to pick the majority outcome of the stage (Figure 2.5). The voter is
responsible for picking the majority winner of the three inputs. A single Byzantine fault will
be overruled by two good votes. The voters themselves are replicated because they too
can malfunction.
In a software implementation, a client can replicate (or multicast) requests to each
server. If requests are processed in order, all non-faulty servers will yield the same
replies. The requests must arrive reliably and in the same order on all servers. This
requires the use of an atomic multicast.
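The voter's job, selecting the majority among the replicated outputs, can be sketched in a few lines of Python; a hardware voter performs the same comparison in logic.

    from collections import Counter

    def vote(outputs):
        # Return the value produced by a strict majority of the replicas.
        value, count = Counter(outputs).most_common(1)[0]
        if count * 2 <= len(outputs):
            raise RuntimeError("no majority: too many faulty replicas")
        return value

    print(vote([42, 42, 99]))   # 42: the single Byzantine value is overruled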

2.8.6 Primary Backup (Active Standby) Approach


With a primary backup approach, one server (the primary) does all the work. When
the server fails, the backup takes over.
To find out whether a primary failed, a backup may periodically ping the primary with
“are you alive” messages. If it fails to get an acknowledgement, then the backup may
assume that the primary failed and it will take over the functions of the primary. If the
system is asynchronous, there are no upper bounds on a timeout value for the pings.
This is a problem. Redundant networks can help ensure that a working communication
channel exists. Another possible solution is to use a hardware mechanism to forcibly stop
the primary.
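A minimal sketch of the “are you alive” loop follows, in Python. The primary's address and the two-second timeout are assumptions for the example; as just noted, no timeout value is provably correct in a truly asynchronous system.

    import socket
    import time

    PRIMARY = ("primary.internal", 7000)   # hypothetical primary address

    def primary_alive(timeout_s: float = 2.0) -> bool:
        try:
            with socket.create_connection(PRIMARY, timeout=timeout_s):
                return True                # got a response: primary is up
        except OSError:
            return False                   # no acknowledgement in time

    while primary_alive():
        time.sleep(5)                      # poll every few seconds
    print("primary presumed failed: backup takes over")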
This system is relatively easy to design, since requests do not have to be multicast to a group of machines and there are no decisions to be made on who takes over. An important point to note is that once the backup machine takes over, another backup is needed immediately. Backup servers work poorly with Byzantine faults, since the backup may not be able to detect that the primary has actually failed.
Recovery from a primary failure may be time-consuming and/or complex depending
on the needs for continuous operation and application recovery. Application failover is
referred to by temperature grades. The easiest form of failover is known as cold failover.

Cold failover entails application restart on the backup machine. When a backup machine
takes over, it starts all the applications that were previously running on the primary
system. Of course, any work that the primary may have done is now lost. With warm
failover, applications periodically write checkpoint files onto stable storage that is shared
with the backup system. When the backup system takes over, it reads the checkpoint
files to bring the applications to the state of the last checkpoint. Finally, with hot failover,
applications on the backup run in lockstep synchrony with applications on the primary,
taking the same inputs as on the primary. When the backup takes over, it is in the exact
state that the primary was in when it failed. However, if the failure was caused by
software, then there is a good chance that the backup died from the same bug since it
received the same inputs.

2.8.7 Agreement in Faulty Systems


Distributed processes often have to agree on something. For example, they may
have to elect a coordinator, commit a transaction, divide tasks, coordinate a critical
section, etc. What happens when the processes and/or the communication lines are
imperfect?

Two Army Problem


We’ll first examine the case of good processors but faulty communication lines. This
is known as the two army problem and can be summarized as follows:
Two divisions of an army, A and B, coordinate an attack on enemy army, C. A and B
are physically separated and use a messenger to communicate. A sends a messenger to
B with a message of “let’s attack at dawn”. B receives the message and agrees, sending
back the messenger with an “OK” message. The messenger arrives at A, but A realizes
that B did not know whether the messenger made it back safely. If B is not convinced that
A received the acknowledgement, then it will not be confident that the attack should take
place since the army will not win on its own. A may choose to send the messenger back
to B with a message of “A received the OK”, but A will then be unsure as to whether
B received this message. This is also known as the multiple acknowledgment problem. It
demonstrates that even with non-faulty processors, ultimate agreement between two
processes is not possible with unreliable communication. The best we can do is hope
that it usually works.

Byzantine Generals Problem


The other case to consider is that of reliable communication lines but faulty processors. This is known as the Byzantine Generals Problem. In this problem, there are n army generals who head different divisions. Communication is reliable (radio or telephone), but m of the generals are traitors (faulty) and are trying to prevent the others from reaching agreement by feeding them incorrect information. The question is: can the loyal generals still reach agreement? Specifically, each general knows the size of his division. At the end of the algorithm, can each general know the troop strength of every other loyal division?
Lamport demonstrated a solution that works for certain cases. His answer to this problem is that any solution to the problem of overcoming m traitors requires a minimum of 3m + 1 participants (2m + 1 loyal generals). This means that more than two-thirds of the generals must be loyal. Moreover, it was demonstrated that no protocol can overcome m faults with fewer than m + 1 rounds of message exchanges and O(mn²) messages. Clearly, this is a rather costly solution. While the Byzantine model may be applicable to certain types of special-purpose hardware, it will rarely be useful in general-purpose distributed computing environments. Solutions to the Byzantine Generals Problem are not obvious, intuitive, or simple, and they are not presented in these notes; Lamport’s original paper describes the algorithms in detail.

2.8.8 Examples of Fault Tolerance

ECC (Error Correction Code) Memory


ECC memory contains extra logic that implements Hamming codes to detect and
correct bit errors that are caused by fabrication errors, electrical disturbances, or neutron
and cosmic ray radiation. A simple form of error detection is a parity code: one extra bit
indicates whether the word has an odd or even number of bits. A single bit error will
cause the parity to report an incorrect value and hence indicate an error. Most
implementations of ECC memory use a Hamming code that detects two bit errors and
corrects any single bit error per 64-bit word. In the general case, Hamming codes can be
created for any number of bits.
This is an example of information redundancy. Information coding is used to provide
fault tolerance for the data in memory (and, yes, the coding requires additional memory).
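The parity case is small enough to demonstrate directly. The Python fragment below detects a single-bit error using one even-parity bit; a full Hamming code extends the same idea with several parity bits positioned so that an error can also be located and corrected.

    def parity_bit(word: int) -> int:
        # Even parity: the extra bit makes the total count of 1 bits even.
        return bin(word).count("1") % 2

    stored_word = 0b10110010
    stored_parity = parity_bit(stored_word)

    corrupted = stored_word ^ 0b00000100           # one bit flips in memory
    print(parity_bit(corrupted) != stored_parity)  # True: error detected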

Machine Failover via DNS SRV Records


The fault tolerance goal here is to allow a client to connect to one functioning
machine that represents a given hostname. Some machines may be inaccessible
because they are out of service or the network connection to them is not functioning.
Instead of using DNS (Domain Name System) to resolve a hostname to one IP
address, the client will use DNS to look up SRV records for that name. The SRV record is
a somewhat generic record in DNS that allows one to specify information on available
services for a host. Each SRV record contains a priority, weight, port, and target
hostname. The priority field is used to prioritize the list of servers. An additional weight
value can then be used to balance the choice of several servers of equal priority. Once a
server is selected, DNS is used to look up the address of the target hostname field. If that
server doesn’t respond, then the client tries another machine in the list.
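A client-side sketch of this lookup, assuming the third-party dnspython package and a hypothetical SIP service name, might look like this:

    import dns.resolver   # third-party package: pip install dnspython

    # Lower priority value is preferred; weight breaks ties among equals.
    answers = dns.resolver.resolve("_sip._udp.example.com", "SRV")
    for rec in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        host, port = str(rec.target).rstrip("."), rec.port
        print(f"candidate server: {host}:{port}")
        # A real client would try to connect here and fall through to the
        # next record if this server does not respond.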
This approach is commonly used in voice over IP (VoIP) systems to pick a SIP
server (or SIP proxy) among several available SIP servers for a specific hostname. DNS
MX records (mail servers for a hostname) take the same approach: use DNS to look up
the list of mail servers for a host and then connect to one that works.
This is an example of physical redundancy.

RAID 1 (disk mirroring)


RAID stands for Redundant Array of Independent Disks (it used to stand for
Redundant Array of Inexpensive Disks but that did not make for a good business model
selling expensive RAID systems). RAID supports different configurations known as levels.
RAID 0, for example, refers to disk striping, where data is spread out across two disks.
For example, disk 0 holds blocks 0, 2, 4, 6, ... and disk 1 holds blocks 1, 3, 5, 7, ... This
level offers no fault tolerance and is designed to provide higher performance: two blocks
can often be written or read concurrently.
RAID 1 is a disk mirroring configuration. All data that is written to one disk is also
written to a second disk. A block of data can be read from either disk. If one disk goes out
of service, the remaining disk will have all the data.
RAID 1 is an example of an active-active approach to physical redundancy. As
opposed to the primary server (active-passive) approach, both systems are in use at all
times.


RAID 4/RAID 5 (Disk Parity)


RAID 4 and RAID 5 use block-level striping together with parity to provide 1-fault
tolerance. A stripe refers to a set of blocks that is spread out across a set of n disks, with
one block per disk. A parity block is written to disk n + 1. The parity is the exclusive-or of
the set of blocks in each stripe. If one disk fails, its contents are recovered by computing
an exclusive-or of all the blocks in that stripe set together with the parity block.
RAID 5 is the same scheme, except that the parity blocks are distributed among all the disks so that writing parity does not become a bottleneck. For example, in a four-disk configuration, the parity block may be on disk 0 for the first stripe, disk 1 for the second stripe, disk 2 for the third stripe, disk 3 for the fourth stripe, disk 0 for the fifth stripe, etc.
RAID 4 and RAID 5 are examples of information redundancy. As with ECC, we need
additional physical hardware but we are achieving the fault tolerance through information
coding, not by having a standby disk that contains a replica of the data.
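The exclusive-or recovery is easy to demonstrate. The Python sketch below builds the parity block for a three-disk stripe and then reconstructs a “failed” disk from the survivors; the block contents are arbitrary sample bytes.

    from functools import reduce

    # One stripe across three data disks (RAID 4 style); parity on a fourth.
    blocks = [b"\x10\x20", b"\x0f\x0f", b"\xa0\x05"]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    # Disk 1 fails: XOR the surviving blocks with the parity to recover it.
    survivors = [blocks[0], blocks[2], parity]
    recovered = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
    print(recovered == blocks[1])   # True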

TCP Retransmission
TCP (Transmission Control Protocol) is the reliable, virtual-circuit transport-layer protocol provided on top of the unreliable network-layer Internet Protocol (IP). TCP relies on the sender receiving an acknowledgement from the receiver whenever a packet is received. If the sender does not receive that acknowledgement within a certain amount of time, it assumes that the packet was lost and retransmits it. In Windows, the retransmission timer is initialized to three seconds, and the default maximum number of retransmissions is five. The retransmission time is adjusted based on the usual delay of a specific connection. TCP is an example of time redundancy.
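The same time-redundancy idea can be sketched at the application level as a stop-and-wait sender over UDP. The address and fixed timer are illustrative simplifications (real TCP adapts its timer and runs inside the kernel), though the three-second timeout and five retries mirror the Windows defaults mentioned above.

    import socket

    def send_reliably(payload: bytes, addr=("127.0.0.1", 9999),
                      timeout_s=3.0, max_retries=5) -> bool:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout_s)
        for attempt in range(max_retries + 1):
            sock.sendto(payload, addr)       # (re)transmit the packet
            try:
                sock.recvfrom(64)            # wait for the acknowledgement
                return True
            except socket.timeout:
                continue                     # assume loss; try again
        return False                         # give up after max retries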

2.9 Backup
In information technology, a backup, or the process of backing up, refers to the
copying and archiving of computer data so it may be used to restore the original after a
data loss event. The verb form is to back up in two words, whereas the noun is backup.
Backups have two distinct purposes. The primary purpose is to recover data after its
loss, be it by data deletion or corruption. Data loss can be a common experience of
computer users. A 2008 survey found that 66% of respondents had lost files on their
home PC. The secondary purpose of backups is to recover data from an earlier time,
according to a user-defined data retention policy, typically configured within a backup
application for how long copies of data are required. Though backups popularly represent a simple form of disaster recovery and should be part of a disaster recovery plan, backups by themselves should not be considered disaster recovery. One reason for this is that not all backup systems or backup applications are able to reconstitute a computer system or other complex configuration, such as a computer cluster, active directory servers, or a database server, by restoring only data from a backup.
Since a backup system contains at least one copy of all data worth saving, the data
storage requirements can be significant. Organizing this storage space and managing
the backup process can be a complicated undertaking. A data repository model can be
used to provide structure to the storage. Nowadays, there are many different types of
data storage devices that are useful for making backups. There are also many different
ways in which these devices can be arranged to provide geographic redundancy, data
security, and portability.
Before data are sent to their storage locations, they are selected, extracted, and
manipulated. Many different techniques have been developed to optimize the backup
procedure. These include optimizations for dealing with open files and live data sources

as well as compression, encryption, and de-duplication, among others. Every backup
scheme should include dry runs that validate the reliability of the data being backed up. It
is important to recognize the limitations and human factors involved in any backup
scheme.

2.9.1 Storage: The Base of a Backup System

Data Repository Models


Any backup strategy starts with a concept of a data repository. The backup data
needs to be stored, and probably should be organized to a degree. The organization
could be as simple as a sheet of paper with a list of all backup media (CDs, etc.) and the
dates they were produced. A more sophisticated setup could include a computerized
index, catalog, or relational database. Different approaches have different advantages.
Part of the model is the backup rotation scheme.
Ɣ Unstructured: An unstructured repository may simply be a stack of CD-Rs or DVD-Rs with minimal information about what was backed up and when. This is the easiest to implement, but probably the least likely to achieve a high level of recoverability, as it lacks automation.
Ɣ Full only/SYSTEM IMAGING: A repository of this type contains complete
system images taken at one or more specific points in time. This technology is
frequently used by computer technicians to record known good configurations.
Imaging is generally more useful for deploying a standard configuration to many
systems rather than as a tool for making ongoing backups of diverse systems.
Ɣ Incremental: An incremental style repository aims to make it more feasible to
store backups from more points in time by organizing the data into increments of
change between points in time. This eliminates the need to store duplicate
copies of unchanged data: with full backups, a lot of the data will be unchanged
from what has been backed up previously. Typically, a full backup (of all files) is
made on one occasion (or at infrequent intervals) and serves as the reference
point for an incremental backup set. After that, a number of incremental backups
are made after successive time periods. Restoring the whole system to the date
of the last incremental backup would require starting from the last full backup
taken before the data loss, and then applying in turn each of the incremental
backups since then. Additionally, some backup systems can reorganize the
repository to synthesize full backups from a series of incrementals.
Ɣ Differential: Each differential backup saves the data that has changed since
the last full backup. It has the advantage that only a maximum of two data sets
are needed to restore the data. One disadvantage, compared to the
incremental backup method, is that as time from the last full backup (and thus
the accumulated changes in data) increases, so does the time to perform the
differential backup. Restoring an entire system would require starting from the
most recent full backup and then applying just the last differential backup since
the last full backup.
Note: Vendors have standardized on the meaning of the terms “incremental
backup” and “differential backup”. However, there have been cases where
conflicting definitions of these terms have been used. The most relevant
characteristic of an incremental backup is which reference point it uses to
check for changes. By standard definition, a differential backup copies files that
have been created or changed since the last full backup, regardless of whether
any other differential backups have been made since then, whereas an
incremental backup copies files that have been created or changed since the

most recent backup of any type (full or incremental). Other variations of incremental backup include multi-level incrementals and incremental backups that compare parts of files instead of just the whole file. (A minimal sketch of the incremental/differential selection logic follows this list.)
Ɣ Reverse delta: A reverse delta type repository stores a recent “mirror” of the
source data and a series of differences between the mirror in its current state
and its previous states. A reverse delta backup will start with a normal full
backup. After the full backup is performed, the system will periodically
synchronize the full backup with the live copy, while storing the data necessary
to reconstruct older versions. This can either be done using hard links, or using
binary diffs. This system works particularly well for large, slowly changing, data
sets. Examples of programs that use this method are rdiff-backup and Time
Machine.
Ɣ Continuous data protection: Instead of scheduling periodic backups, the
system immediately logs every change on the host system. This is generally
done by saving byte or block-level differences rather than file-level differences.
It differs from simple disk mirroring in that it enables a roll-back of the log and
thus restoration of an old image of the data.
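As promised in the note above, here is a minimal Python sketch of the selection logic that separates the two schemes; it assumes file modification times are a reliable change indicator, which real backup software supplements with archive bits or catalogs.

    import os

    def files_to_back_up(paths, last_full_time, last_any_backup_time, mode):
        # Differential: copy what changed since the last FULL backup.
        # Incremental: copy what changed since the most recent backup
        # of ANY type (full or incremental).
        reference = last_full_time if mode == "differential" else last_any_backup_time
        return [p for p in paths if os.path.getmtime(p) > reference]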

Storage Media
Regardless of the repository model that is used, the data has to be stored on some
data storage medium.
Ɣ Magnetic tape: Magnetic tape has long been the most commonly used
medium for bulk data storage, backup, archiving, and interchange. Tape has
typically an order of magnitude better capacity/price ratio when compared to
hard disk, but recently the ratios for tape and hard disk have become a lot
closer. There are many formats, many of which are proprietary or specific to
certain markets like mainframes or a particular brand of personal computer.
Tape is a sequential access medium, so even though access times may be
poor, the rate of continuously writing or reading data can actually be very fast.
Some new tape drives are even faster than modern hard disks.
Ɣ Hard disk: The capacity/price ratio of hard disk has been rapidly improving for
many years. This is making it more competitive with magnetic tape as a bulk
storage medium. The main advantages of hard disk storage are low access
times, availability, capacity and ease of use. External disks can be connected
via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer distance
technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup
systems, such as Virtual Tape Libraries, support data deduplication which can
dramatically reduce the amount of disk storage capacity consumed by daily
and weekly backup data. The main disadvantages of hard disk backups are
that they are easily damaged, especially while being transported (e.g., for
off-site backups), and that their stability over periods of years is a relative
unknown.
Ɣ Optical storage: Recordable CDs, DVDs, and Blu-ray Discs are commonly
used with personal computers and generally have low media unit costs.
However, the capacities and speeds of these and other optical discs are
typically an order of magnitude lower than hard disk or tape. Many optical disk
formats are WORM type, which makes them useful for archival purposes since
the data cannot be changed. The use of an auto-changer or jukebox can make
optical discs a feasible option for larger-scale backup systems. Some optical
storage systems allow for cataloged data backups without human contact with
the discs, allowing for longer data integrity.

Ɣ Solid state storage: Also known as flash memory, thumb drives, USB flash
drives, Compact Flash, Smart Media, Memory Stick, Secure Digital cards, etc.,
these devices are relatively expensive for their low capacity in comparison to
hard disk drives, but are very convenient for backing up relatively low data
volumes. A solid state drive does not contain any movable parts unlike its
magnetic drive counterpart, making it less susceptible to physical damage, and
can have huge throughput in the order of 500 Mbit/s to 6 Gbit/s. The capacity
offered from SSDs continues to grow and prices are gradually decreasing as
they become more common.
Ɣ Remote backup service: As broadband Internet access becomes more
widespread, remote backup services are gaining in popularity. Backing up via
the Internet to a remote location can protect against some worst-case
scenarios such as fires, floods, or earthquakes which would destroy any
backups in the immediate vicinity along with everything else. There are,
however, a number of drawbacks to remote backup services. First, Internet
connections are usually slower than local data storage devices. Residential
broadband is especially problematic as routine backups must use an upstream
link that’s usually much slower than the downstream link used only
occasionally to retrieve a file from backup. This tends to limit the use of such
services to relatively small amounts of high value data. Secondly, users must
trust a third party service provider to maintain the privacy and integrity of their
data, although confidentiality can be assured by encrypting the data before
transmission to the backup service with an encryption key known only to the
user. Ultimately, the backup service must itself use one of the above methods.
So, this could be seen as a more complex way of doing traditional backups.
Ɣ Floppy disk: During the 1980s and early 1990s, many personal/home
computer users associated backing up mostly with copying to floppy disks.
However, the data capacity of floppy disks failed to catch up with growing
demands, rendering them effectively obsolete.

2.9.2 Managing the Data Repository


Regardless of the data repository model, or data storage media used for backups, a
balance needs to be struck between accessibility, security and cost. These media
management methods are not mutually exclusive and are frequently combined to meet
the user’s needs. Using online disks for staging data before it is sent to a near-line tape
library is a common example.
Ɣ Online: Online backup storage is typically the most accessible type of data storage; a restore can begin within milliseconds. A good example is an internal hard disk or a disk array (perhaps connected to a SAN). This type of
storage is very convenient and speedy, but is relatively expensive. Online
storage is quite vulnerable to being deleted or overwritten, either by accident,
by intentional malevolent action, or in the wake of a data-deleting virus
payload.
Ɣ Near-line: Near-line storage is typically less accessible and less expensive
than online storage, but still useful for backup data storage. A good example
would be a tape library with restore times ranging from seconds to a few
minutes. A mechanical device is usually used to move media units from
storage into a drive where the data can be read or written. Generally, it has
safety properties similar to online storage.
Ɣ Offline: Offline storage requires some direct human action to provide access to
the storage media, e.g., inserting a tape into a tape drive or plugging in a cable.

Because the data are not accessible via any computer except during limited
periods in which they are written or read back, they are largely immune to a
whole class of online backup failure modes. Access time will vary depending
on whether the media are on-site or off-site.
Ɣ Off-site data protection: To protect against a disaster or other site-specific
problem, many people choose to send backup media to an off-site vault. The
vault can be as simple as a system administrator’s home office or as
sophisticated as a disaster-hardened, temperature-controlled, high-security
bunker with facilities for backup media storage. Importantly, a data replica can
be off-site but also online (e.g., an off-site RAID mirror). Such a replica has
fairly limited value as a backup, and should not be confused with an offline
backup.
Ɣ Backup site or disaster recovery center (DR center): In the event of a disaster, the data on backup media will not by itself be sufficient to recover; computer systems onto which the data can be restored and properly configured networks are necessary too. Some organizations have their own data recovery centers that are equipped for this scenario.
that are equipped for this scenario. Other organizations contract this out to a
third-party recovery center. Because a DR site is itself a huge investment,
backing up is very rarely considered the preferred method of moving data to a
DR site. A more typical way would be remote disk mirroring, which keeps the
DR data as up-to-date as possible.

2.9.3 Selection and Extraction of Data


A successful backup job starts with selecting and extracting coherent units of data.
Most data on modern computer systems is stored in discrete units, known as files. These
files are organized into file systems. Files that are actively being updated can be thought
of as “live” and present a challenge to back up. It is also useful to save metadata that
describes the computer or the file system being backed up.
Deciding what to back up at any given time is a harder process than it seems. By
backing up too much redundant data, the data repository will fill up too quickly. Backing
up an insufficient amount of data can eventually lead to the loss of critical information.

2.9.4 Files
Copying files: With a file-level approach, making copies of files is the simplest and most common way to perform a backup. A means to perform this basic function is included in all backup software and all operating systems.
Partial file copying: Instead of copying whole files, one can limit the backup to only
the blocks or bytes within a file that have changed in a given period of time. This
technique can use substantially less storage space on the backup medium, but requires
a high level of sophistication to reconstruct files in a restore situation. Some
implementations require integration with the source file system.
Deleted files: To prevent the unintentional restoration of files that have been
intentionally deleted, a record of the deletion must be kept.

2.9.5 File Systems


File system dump: Instead of copying files within a file system, a copy of the whole
file system itself in block-level can be made. This is also known as a raw partition
backup and is related to disk imaging. The process usually involves unmounting the file
system and running a program like dd (Unix). Because the disk is read sequentially and
with large buffers, this type of backup can be much faster than reading every file normally,

especially when the file system contains many small files, is highly fragmented, or is
nearly full. But because this method also reads the free disk blocks that contain no useful
data, this method can also be slower than conventional reading, especially when the file
system is nearly empty. Some file systems, such as XFS, provide a “dump” utility that
reads the disk sequentially for high performance while skipping unused sections. The
corresponding restore utility can selectively restore individual files or the entire volume at
the operator’s choice.
Identification of changes: Some file systems have an archive bit for each file that
says it was recently changed. Some backup software looks at the date of the file and
compares it with the last backup to determine whether the file was changed.
Versioning file system: A versioning file system keeps track of all changes to a file
and makes those changes accessible to the user. Generally, this gives access to any
previous version, all the way back to the file’s creation time. An example of this is the
Wayback versioning file system for Linux.

2.9.6 Live Data


If a computer system is in use while it is being backed up, the possibility of files
being open for reading or writing is real. If a file is open, the contents on disk may not
correctly represent what the owner of the file intends. This is especially true for database
files of all kinds. The term fuzzy backup can be used to describe a backup of live data
that looks like it ran correctly, but does not represent the state of the data at any single
point in time. This is because the data being backed up changed in the period of time
between when the backup started and when it finished. For databases in particular, fuzzy
backups are worthless.
Snapshot backup: A snapshot is an instantaneous function of some storage
systems that presents a copy of the file system as if it were frozen at a specific point in
time, often by a copy-on-write mechanism. An effective way to back up live data is to
temporarily quiesce them (e.g., close all files), take a snapshot, and then resume live
operations. At this point, the snapshot can be backed up through normal methods. While
a snapshot is very handy for viewing a file system as it was at a different point in time, it is
hardly an effective backup mechanism by itself.
Open file backup: Many backup software packages feature the ability to handle
open files in backup operations. Some simply check for openness and try again later. File
locking is useful for regulating access to open files.
When attempting to understand the logistics of backing up open files, one must consider that the backup process could take several minutes to back up a large file such as a database. In order to back up a file that is in use, it is vital that the entire backup represent a single-moment snapshot of the file, rather than a simple copy of a read-through. This is a challenge when the file is constantly changing: either the file must be locked to prevent changes, or a method must be implemented to ensure that the original snapshot is preserved long enough to be copied, even while changes continue. Backing up a file while it is being changed, such that the first part of the backup represents data from before the change and later parts represent data from after it, results in a corrupted, unusable file, because most large files contain internal references between their various parts that must remain consistent throughout the file.
Cold database backup: During a cold backup, the database is closed or locked
and not available to users. The data files do not change during the backup process so the
database is in a consistent state when it is returned to normal operation.


Hot database backup: Some database management systems offer a means to generate a backup image of the database while it is online and usable (“hot”). This
usually includes an inconsistent image of the data files plus a log of changes made while
the procedure is running. Upon a restore, the changes in the log files are reapplied to
bring the copy of the database up-to-date (the point in time at which the initial hot backup
ended).

2.9.7 Limitations
An effective backup scheme will take into consideration the limitations of the
situation.
Backup window: The period of time when backups are permitted to run on a
system is called the backup window. This is typically the time when the system sees the
least usage and the backup process will have the least amount of interference with
normal operations. The backup window is usually planned with users’ convenience in
mind. If a backup extends past the defined backup window, a decision is made whether it
is more beneficial to abort the backup or to lengthen the backup window.
Performance impact: All backup schemes have some performance impact on the
system being backed up. For example, for the period of time that a computer system is
being backed up, the hard drive is busy reading files for the purpose of backing up, and
its full bandwidth is no longer available for other tasks. Such impacts should be analyzed.
Costs of hardware, software and labour: All types of storage media have a finite
capacity with a real cost. Matching the correct amount of storage capacity (over time)
with the backup needs is an important part of the design of a backup scheme. Any
backup scheme has some labor requirement, but complicated schemes have
considerably higher labor requirements. The cost of commercial backup software can
also be considerable.
Network bandwidth: Distributed backup systems can be affected by limited network
bandwidth.
Implementation: Meeting the defined objectives in the face of the above limitations
can be a difficult task. The tools and concepts below can make that task more
achievable.
Scheduling: Using a job scheduler can greatly improve the reliability and
consistency of backups by removing part of the human element. Many backup software
packages include this functionality.
Authentication: Over the course of regular operations, the user accounts and/or
system agents that perform the backups need to be authenticated at some level. The
power to copy all data off of or onto a system requires unrestricted access. Using an
authentication mechanism is a good way to prevent the backup scheme from being used
for unauthorized activity.
Chain of trust: Removable storage media are physical items and must only be
handled by trusted individuals. Establishing a chain of trusted individuals (and vendors) is
critical to defining the security of the data.
Measuring the process: To ensure that the backup scheme is working as expected,
key factors should be monitored and historical data maintained.
Backup validation (also known as “backup success validation”): Provides information about the backup and proves compliance to regulatory bodies outside the organization; for example, an insurance company in the USA might be required under HIPAA to demonstrate that its client data meet records retention requirements. Disaster,
data complexity, data value and increasing dependence upon ever-growing volumes of

data all contribute to the anxiety around and dependence upon successful backups to
ensure business continuity. Thus, many organizations rely on third party or “independent”
solutions to test, validate, and optimize their backup operations (backup reporting).
Reporting: In larger configurations, reports are useful for monitoring media usage,
device status, errors, vault coordination and other information about the backup process.
Logging: In addition to the history of computer generated reports, activity and
change logs are useful for monitoring backup system events.
Validation: Many backup programs use checksums or hashes to validate that the
data was accurately copied. These offer several advantages. First, they allow data
integrity to be verified without reference to the original file: if the file as stored on the
backup medium has the same checksum as the saved value, then it is very probably
correct. Second, some backup programs can use checksums to avoid making redundant
copies of files, and thus improve backup speed. This is particularly useful for the
de-duplication process.
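A minimal Python sketch of checksum-based validation follows; the digest would be recorded at backup time and compared later without any reference to the original file.

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):   # 1 MiB chunks
                h.update(chunk)
        return h.hexdigest()

    def backup_is_intact(copy_path: str, saved_digest: str) -> bool:
        # Compare the stored copy's digest against the digest saved at
        # backup time; a match means the copy is very probably correct.
        return sha256_of(copy_path) == saved_digest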
Monitored backup: Backup processes are monitored by a third party monitoring
center, which alerts users to any errors that occur during automated backups. Monitored
backup requires software capable of pinging the monitoring center’s servers in the case
of errors. Some monitoring services also allow collection of historical metadata that can be used for storage resource management purposes, such as projecting data growth and locating redundant primary storage capacity and reclaimable backup capacity.

2.10 Uninterruptible Power Supply (UPS)


An uninterruptible power supply (also uninterruptible power source, UPS, or battery/flywheel backup) is an electrical apparatus that provides emergency power to a load when the input power source, typically mains power, fails. A UPS differs from an auxiliary or emergency power system or standby generator in that it provides near-instantaneous protection from input power interruptions by supplying energy stored in batteries, supercapacitors, or flywheels. The on-battery runtime of most uninterruptible power sources is relatively short (only a few minutes) but sufficient to start a standby power source or properly shut down the protected equipment.
A UPS is typically used to protect hardware such as computers, data centers,
telecommunication equipment or other electrical equipment where an unexpected power
disruption could cause injuries, fatalities, serious business disruption or data loss. UPS
units range in size from units designed to protect a single computer without a video
monitor (around 200 volt-ampere rating) to large units powering entire data centers or
buildings. The world’s largest UPS, the 46-megawatt Battery Electric Storage System
(BESS), in Fairbanks, Alaska, powers the entire city and nearby rural communities during
outages.

2.10.1 Common Power Problems


The primary role of any UPS is to provide short-term power when the input power
source fails. However, most UPS units are also capable in varying degrees of correcting
common utility power problems:
1. Voltage spike or sustained overvoltage
2. Momentary or sustained reduction in input voltage
3. Noise, defined as a high frequency transient or oscillation, usually injected into
the line by nearby equipment
4. Instability of the mains frequency


5. Harmonic distortion: defined as a departure from the ideal sinusoidal waveform expected on the line
UPS units are divided into categories based on which of the above problems they
address, and some manufacturers categorize their products in accordance with the
number of power-related problems they address.

2.10.2 Technologies
The three general categories of modern UPS systems are online, line-interactive or
standby. An online UPS uses a “double conversion” method of accepting AC input,
rectifying to DC for passing through the rechargeable battery (or battery strings), then
inverting back to 120 V/230 V AC for powering the protected equipment. A
line-interactive UPS maintains the inverter in line and redirects the battery’s DC current
path from the normal charging mode to supplying current when power is lost. In a
standby (“offline”) system, the load is powered directly by the input power and the backup
power circuitry is only invoked when the utility power fails. Most UPSs below 1 kVA are of the line-interactive or standby varieties, which are usually less expensive.
For large power units, Dynamic Uninterruptible Power Supplies (DUPS) are
sometimes used. A synchronous motor/alternator is connected on the mains via a choke.
Energy is stored in a flywheel. When the mains power fails, an eddy-current regulation
maintains the power on the load as long as the flywheel’s energy is not exhausted. DUPS
are sometimes combined or integrated with a diesel generator that is turned on after a
brief delay, forming a diesel rotary uninterruptible power supply (DRUPS).
A fuel cell UPS has been developed in recent years using hydrogen and a fuel cell
as a power source, potentially providing long-run times in a small space.

Offline/Standby

[Block diagram: Offline/standby UPS. On normal AC power, the load is fed directly from the line while the charger keeps the battery topped up; on over/undervoltage or loss of power, the inverter feeds the load from the battery. Typical protection time: 0-20 minutes. Capacity expansion: usually not available.]

The offline/standby UPS (SPS) offers only the most basic features, providing surge
protection and battery backup. The protected equipment is normally connected directly to
incoming utility power. When the incoming voltage falls below or rises above a
predetermined level, the SPS turns on its internal DC-AC inverter circuitry, which is
powered from an internal storage battery. The UPS then mechanically switches the
connected equipment on to its DC-AC inverter output. The switchover time can be as
long as 25 milliseconds depending on the amount of time it takes the standby UPS to
detect the lost utility voltage. The UPS will be designed to power certain equipment, such
as a personal computer, without any objectionable dip or brownout to that device.

Line-interactive

[Block diagram: Line-interactive UPS. The inverter remains in line in all modes; an autotransformer corrects small overvoltages and undervoltages without drawing on the battery, and the battery carries the load on large over/undervoltage or loss of power. Typical protection time: 5-30 minutes. Capacity expansion: several hours.]

The line-interactive UPS is similar in operation to a standby UPS, but with the
addition of a multi-tap variable-voltage autotransformer. This is a special type of
transformer that can add or subtract powered coils of wire, thereby increasing or
decreasing the magnetic field and the output voltage of the transformer. This is also
known as a Buck-boost transformer.
This type of UPS is able to tolerate continuous undervoltage brownouts and
overvoltage surges without consuming the limited reserve battery power. It instead
compensates by automatically selecting different power taps on the autotransformer.
Depending on the design, changing the autotransformer tap can cause a very brief output
power disruption, which may cause UPSs equipped with a power-loss alarm to “chirp” for
a moment.
This has become popular even in the cheapest UPSs because it takes advantage of
components already included. The main 50/60 Hz transformer used to convert between
line voltage and battery voltage needs to provide two slightly different turns ratios: one to
convert the battery output voltage (typically a multiple of 12 V) to line voltage, and a
second one to convert the line voltage to a slightly higher battery charging voltage (such
as a multiple of 14 V). The difference between the two voltages is because charging a
battery requires a delta voltage (up to 13-14 V for charging a 12 V battery). Furthermore,
it is easier to do the switching on the line-voltage side of the transformer because of the
lower currents on that side.


To gain the buck/boost feature, all that is required is two separate switches so that
the AC input can be connected to one of the two primary taps, while the load is connected
to the other, thus using the main transformer’s primary windings as an autotransformer.
The battery can still be charged while “bucking” an overvoltage, but while “boosting” an
undervoltage, the transformer output is too low to charge the batteries.
Autotransformers can be engineered to cover a wide range of varying input voltages, but this requires more taps and increases the complexity and expense of the UPS. It is common for the autotransformer to cover a range only from about 90 V to 140 V for 120 V power, and then switch to battery if the voltage goes much higher or lower than that range.
In low-voltage conditions, the UPS will use more current than normal. So, it may
need a higher current circuit than a normal device. For example, to power a 1000-W
device at 120 V, the UPS will draw 8.33 A. If a brownout occurs and the voltage drops to
100 V, the UPS will draw 10 A to compensate. This also works in reverse, so that in an
overvoltage condition, the UPS will need less current.

2.10.3 Online/Double-conversion UPS


In an online UPS, the batteries are always connected to the inverter, so that no
power transfer switches are necessary. When power loss occurs, the rectifier simply
drops out of the circuit and the batteries keep the power steady and unchanged. When
power is restored, the rectifier resumes carrying most of the load and begins charging the
batteries, though the charging current may be limited to prevent the high-power rectifier
from overheating the batteries and boiling off the electrolyte. The main advantage of an
online UPS is its ability to provide an “electrical firewall” between the incoming utility
power and sensitive electronic equipment.
The online UPS is ideal for environments where electrical isolation is necessary or
for equipment that is very sensitive to power fluctuations. Although once previously
reserved for very large installations of 10 kW or more, advances in technology have now
permitted it to be available as a common consumer device, supplying 500 W or less. The
initial cost of the online UPS may be higher, but its total cost of ownership is generally
lower due to longer battery life. The online UPS may be necessary when the power
environment is “noisy”, when utility power sags, outages and other anomalies are
frequent, when protection of sensitive IT equipment loads is required, or when operation
from an extended-run backup generator is necessary.
The basic technology of the online UPS is the same as in a standby or line-interactive
UPS. However, it typically costs much more, due to it having a much greater current
AC-to-DC battery-charger/rectifier, and with the rectifier and inverter designed to run
continuously with improved cooling systems. It is called a double-conversion UPS due to
the rectifier directly driving the inverter, even when powered from normal AC current.

2.10.4 Other Designs

Hybrid Topology/Double Conversion on Demand


These hybrid Rotary UPS designs do not have official designations, although one
name used by UTL is “double conversion on demand”. This style of UPS is targeted
towards high-efficiency applications while still maintaining the features and protection
level offered by double conversion.
A hybrid (double conversion on demand) UPS operates as an offline/standby UPS
when power conditions are within a certain preset window. This allows the UPS to
achieve very high efficiency ratings. When the power conditions fluctuate outside of the

pre-defined windows, the UPS switches to online/double-conversion operation. In
double-conversion mode, the UPS can adjust for voltage variations without having to use
battery power, can filter out line noise and control frequency. Examples of this
hybrid/double conversion on demand UPS design are the HP R8000, HP R12000, HP
RP12000/3 and the Eaton Blade UPS.

Ferro-resonant
Ferro-resonant units operate in the same way as a standby UPS unit; however, they
are online in the sense that a ferro-resonant transformer is used to filter the output.
This transformer is designed to hold energy long enough to cover the time between
switching from line power to battery power, effectively eliminating the transfer time.
Many ferro-resonant UPSs are 82-88% efficient (AC/DC-AC) and offer excellent
isolation.
The transformer has three windings, one for ordinary mains power, the second for
rectified battery power, and the third for output AC power to the load.
This once was the dominant type of UPS, and it is limited to around the 150 kVA range.
These units are still used mainly in some industrial settings (oil and gas, petrochemical,
chemical, utility, and heavy industry markets) due to their robust nature. Many
ferro-resonant UPSs utilizing controlled ferro technology may not interact well with
power-factor-correcting equipment.

DC Power
A UPS designed for powering DC equipment is very similar to an online UPS,
except that it does not need an output inverter. Also, if the UPS’s battery voltage is
matched with the voltage the device needs, the device’s power supply will not be needed
either. Since one or more power conversion steps are eliminated, this increases
efficiency and runtime.
Many systems used in telecommunications use an extra-low-voltage “common
battery” 48 V DC supply, because it is subject to less restrictive safety regulations, such
as not requiring installation in conduit and junction boxes. DC has typically been the dominant power source
for telecommunications, and AC has typically been the dominant source for computers
and servers.
There has been much experimentation with 48 V DC power for computer servers, in
the hope of reducing the likelihood of failure and the cost of equipment. However, to
supply the same amount of power, the current would be higher than an equivalent 115 V
or 230 V circuit; greater current requires larger conductors, or more energy lost as heat.
A laptop computer is a classic example of a PC with a DC UPS built in.
High-voltage DC (380 V) is finding use in some data center applications; it allows
for smaller power conductors, but is subject to the more complex electrical code rules for
safe containment of high voltages.

Rotary
A rotary UPS uses the inertia of a high-mass spinning flywheel (flywheel energy
storage) to provide short-term ride-through in the event of power loss. The flywheel also
acts as a buffer against power spikes and sags, since such short-term power events are
not able to appreciably affect the rotational speed of the high-mass flywheel. It is also one
of the oldest designs, predating vacuum tubes and integrated circuits.
It can be considered to be online since it spins continuously under normal conditions.
However, unlike a battery-based UPS, flywheel-based UPS systems typically provide
10 to 20 seconds of protection before the flywheel has slowed and power output stops. It
is traditionally used in conjunction with standby diesel generators, providing backup
power only for the brief period of time the engine needs to start running and stabilize its
output.
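The ride-through time follows from the flywheel’s kinetic energy, E = ½Iω²; only the
energy between full speed and the minimum usable speed is available. A
back-of-envelope sketch with assumed, illustrative numbers:

    import math

    # E = 0.5 * I * w^2; usable energy is the difference between full speed
    # and the lowest speed at which output can still be regulated.
    # All figures below are assumptions for illustration, not product data.
    inertia = 10.0                 # kg*m^2, moment of inertia (assumed)
    w_full = 2 * math.pi * 120     # rad/s at 7,200 rpm (assumed)
    w_min = 2 * math.pi * 80       # rad/s, minimum usable speed (assumed)
    load_watts = 100_000           # 100 kW protected load (assumed)

    usable_joules = 0.5 * inertia * (w_full ** 2 - w_min ** 2)
    print(round(usable_joules / load_watts, 1), "seconds of ride-through")
    # about 15.8 s, consistent with the 10-20 second figure above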
The rotary UPS is generally reserved for applications needing more than 10,000 W
of protection, to justify the expense and benefit from the advantages rotary UPS systems
bring. A larger flywheel or multiple flywheels operating in parallel will increase the reserve
running time or capacity.
Because the flywheels are a mechanical power source, it is not necessary to use an
electric motor or generator as an intermediary between it and a diesel engine designed to
provide emergency power. By using a transmission gearbox, the rotational inertia of the
flywheel can be used to directly start up a diesel engine, and once running, the diesel
engine can be used to directly spin the flywheel. Multiple flywheels can likewise be
connected in parallel through mechanical countershafts, without the need for separate
motors and generators for each flywheel.
They are normally designed to provide very high current output compared to a
purely electronic UPS, and are better able to provide inrush current for inductive loads
such as motor startup or compressor loads, as well as medical MRI and cath lab
equipment. Rotary units are also able to tolerate short-circuit conditions up to 17 times
larger than an electronic UPS can, permitting one device to blow a fuse and fail while
other devices continue to be powered from the rotary UPS.
Its service life is usually far longer than that of a purely electronic UPS, up to 30 years or
more, but rotary units do require periodic downtime for mechanical maintenance, such as
ball bearing replacement. In larger systems, redundancy ensures the availability of
processes during this maintenance. Battery-based designs do not require downtime if the
batteries can be hot-swapped, which is usually the case for larger units.
Newer rotary units use technologies such as magnetic bearings and air-evacuated
enclosures to increase standby efficiency and reduce maintenance to very low levels.
Typically, the high-mass flywheel is used in conjunction with a motor-generator
system. These units can be configured as:
1. A motor driving a mechanically connected generator,
2. A combined synchronous motor and generator wound in alternating slots of a
single rotor and stator,
3. A hybrid rotary UPS, designed similar to an online UPS, except that it uses the
flywheel in place of batteries. The rectifier drives a motor to spin the flywheel,
while a generator uses the flywheel to power the inverter.
In Case No. 3, the motor generator can be synchronous/synchronous or induction/
synchronous. The motor side of the unit in Case Nos. 2 and 3 can be driven directly by an
AC power source (typically when in inverter bypass), a 6-step double-conversion motor
drive, or a 6-pulse inverter. Case No. 1 uses an integrated flywheel as a short-term
energy source instead of batteries to allow time for external, electrically coupled gensets
to start and be brought online. Case Nos. 2 and 3 can use batteries or a free-standing
electrically coupled flywheel as the short-term energy source.

Form Factors

UPS systems come in several different forms and sizes. However, the two most
common forms are tower and rack-mount.

Tower Model
Tower models stand upright on the ground or on a desk/shelf, and are typically used
in network workstations or desktop computer applications.

Rack-mount Model
Rack-mount models can be mounted in standard 19" rack enclosures and can
require anywhere from 1U to 12U of rack space. They are typically used in server and
networking applications.

Applications: N + 1
In large business environments where reliability is of great importance, a single
huge UPS can also be a single point of failure that can disrupt many other systems. To
provide greater reliability, multiple smaller UPS modules and batteries can be integrated
to provide redundant power protection equivalent to one very large UPS. “N + 1”
means that if the load can be supplied by N modules, the installation will contain N + 1
modules. In this way, failure of one module will not impact system operation.
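A small sketch of the sizing arithmetic, with assumed load and module ratings:

    import math

    # N + 1 sizing: N modules carry the load; one extra tolerates a failure.
    load_kw = 180.0      # total protected load (assumed)
    module_kw = 50.0     # rating of each UPS module (assumed)

    n = math.ceil(load_kw / module_kw)       # modules required for the load
    print(n + 1, "modules installed:", n, "needed + 1 redundant")  # 5 modules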

Multiple Redundancy
Many computer servers offer the option of redundant power supplies, so that in the
event of one power supply failing, one or more other power supplies are able to power
the load. This is a critical point – each power supply must be able to power the entire
server by itself.
Redundancy is further enhanced by plugging each power supply into a different
circuit (i.e., to a different circuit breaker).
Redundant protection can be extended further yet by connecting each power supply
to its own UPS. This provides double protection from both a power supply failure and a
UPS failure, so that continued operation is assured. This configuration is also referred to
as 1 + 1 or 2N redundancy. If the budget does not allow for two identical UPS units, it is
common practice to plug one power supply into mains power and the other into the
UPS.

Outdoor Use
When a UPS system is placed outdoors, it should have some specific features that
guarantee that it can tolerate weather with no effect on performance. Factors such as
temperature, humidity, rain, and snow among others should be considered by the
manufacturer when designing an outdoor UPS system. Operating temperature ranges for
outdoor UPS systems could be around −40°C to +55°C.


Outdoor UPS systems can be pole, ground (pedestal), or host mounted. The outdoor
environment could mean extreme cold, in which case the outdoor UPS system should
include a battery heater mat, or extreme heat, in which case it should include a fan
system or an air conditioning system.

2.11 Summary
Computer security covers all the processes and mechanisms by which digital
equipment, information and services are protected from unintended or unauthorized
access, change or destruction. It is the process of applying security measures to ensure
the confidentiality, integrity, and availability of data both in transit and at rest. It includes
controlling physical access to the hardware, as well as protecting against harm that may
come via network access, data and code injection, and malpractice by operators,
whether intentional, accidental, or due to their being tricked into deviating from secure
procedures.
Firewalls are computer security systems that protect your office/home PCs or your
network from intruders, hackers, and malicious code. Firewalls protect you from offensive
software that may come to reside on your systems and from prying hackers. In an age
when online security is a top priority for computer users, firewalls provide the necessary
safety and protection.

2.12 Check Your Progress


I. Fill in the Blanks
1. Computer __________ provide guidelines for the morally acceptable use of
computers.
2. __________ are standards of moral conduct.
3. An individual who gathers and sells personal data about other individuals is
known as a data __________ or information reseller.
4. In a __________ identity, the electronic profile of one person is switched with
another.
5. __________ programs record virtually everything you do on your computer.
6. In the United States, under the __________ of Information Act, you are entitled to
look at any personal records held by government agencies.
7. Your Web browser creates a __________ file that includes the location of sites
visited by your computer system.
8. __________ are specialized programs that are deposited on your hard disk.
9. __________ cookies monitor your activities at a single website you visit.
10. Ad network or __________ cookies monitor your activities across all sites you
visit.
11. __________ is used to describe a range of programs that are designed to
secretly record and report on an individual’s activities on the Internet.
II. True or False
1. Ethical issues are the same as legal issues.
2. The issue of accuracy of data is concerned with the responsibility of those who
collect data to ensure it is correct.
3. There is a large market for data collected about individuals.
4. Information collected about you and your buying habits can only be released
with your permission.
5. Your employer is legally obligated to give you written notice that your e-mail is
monitored.

6. Once an information reseller has placed spyware on your computer, it is nearly
impossible to remove it.
7. Ad network cookies are a type of spyware.
8. The illusion of anonymity is related to the idea that many people believe their
privacy is protected on the Web as long as they are selective about disclosing
their name and other personal information.
9. Traditional cookies monitor your activities at a single site.
10. Adware cookies monitor your activities at a single site.
11. Spyware refers to programs that are designed to secretly record and report an
individual’s activities on the Web.
III. Multiple Choice Questions
1. __________ are used in denial of service attacks, typically against targeted
websites.
(a) Worm
(b) Zombie
(c) Virus
(d) Trojan horse
2. Select the correct order for the different phases of virus execution.
(i) Propagation phase (ii) Dormant phase (iii) Execution phase (iv) Triggering phase
(a) (i), (ii), (iii) and (iv)
(b) (i), (iii), (ii) and (iv)
(c) (ii), (i), (iv) and (iii)
(d) (ii), (iii), (iv) and (i)
3. A __________ attaches itself to executable files and replicates, when the
infected program is executed, by finding other executable files to infect.
(a) Stealth Virus
(b) Polymorphic Virus
(c) Parasitic Virus
(d) Macro Virus
4. __________ is a form of virus explicitly designed to hide itself from detection by
antivirus software.
(a) Stealth Virus
(b) Polymorphic Virus
(c) Parasitic Virus
(d) Macro Virus
5. A __________ creates copies during replication that are functionally equivalent
but have distinctly different bit patterns.
(a) Boot Sector Virus
(b) Polymorphic Virus
(c) Parasitic Virus
(d) Macro Virus
6. A portion of the Polymorphic Virus, generally called a __________, creates a
random encryption key to encrypt the remainder of the virus.
(a) Mutual engine
(b) Mutation engine
(c) Multiple engine
(d) Polymorphic engine


7. State whether the following statements are true:
(i) A macro virus is platform independent.
(ii) Macro viruses infect documents, not executable portions of code.
(a) Only (i)
(b) Only (ii)
(c) Both (i) and (ii)
(d) None of the (i) and (ii)
8. The type(s) of auto-executing macros in Microsoft Word is/are __________.
(a) Auto execute
(b) Auto macro
(c) Command macro
(d) All of the above
9. In __________, the virus places an identical copy of itself into other programs
or into certain system areas on the disk.
(a) Dormant phase
(b) Propagation phase
(c) Triggering phase
(d) Execution phase
10. A __________ is a program that secretly takes over another Internet-attached
computer and then uses that computer to launch attacks.
(a) Worm
(b) Zombie
(c) Virus
(d) Trap doors

2.13 Questions and Exercises


1. What is computer security?
2. Discuss the role of UPS.
3. What is the network activity concept?
4. Discuss the key mechanism of firewall.
5. What is file system security?
6. What is hardening?
7. Discuss the concept of local security policies.

2.14 Key Terms


Ɣ Abuse of Privilege: When a user performs an action that they should not have,
according to organizational policy or law.
Ɣ Access Control Lists: Rules for packet filters (typically routers) that define
which packets to pass and which to block.
Ɣ Access Router: A router that connects your network to the external Internet.
Typically, this is your first line of defense against attackers from the outside
Internet. By enabling access control lists on this router, you’ll be able to provide
a level of protection for all of the hosts “behind” that router, effectively making
that network a DMZ instead of an unprotected external LAN.

Ɣ Application Layer Firewall: A firewall system in which service is provided by
processes that maintain complete TCP connection state and sequencing.
Application layer firewalls often readdress traffic so that outgoing traffic
appears to have originated from the firewall, rather than the internal host.
Ɣ Authentication: The process of determining the identity of a user that is
attempting to access a system.
Ɣ Authentication Token: A portable device used for authenticating a user.
Authentication tokens operate by challenge/response, time-based code sequences,
or other techniques. This may include paper-based lists of one-time passwords.
Ɣ Authorization: The process of determining what types of activities are
permitted. Usually, authorization is in the context of authentication: once you
have authenticated a user, they may be authorized different types of access or
activity.
Ɣ Bastion Host: A system that has been hardened to resist attack, and which is
installed on a network in such a way that it is expected to potentially come
under attack. Bastion hosts are often components of firewalls, or may be
“outside” web servers or public access systems. Generally, a bastion host is
running some form of general purpose operating system (e.g., Unix, VMS, NT,
etc.) rather than a ROM-based or firmware operating system.
Ɣ Challenge/Response: An authentication technique whereby a server sends
an unpredictable challenge to the user, who computes a response using some
form of authentication token.

2.15 Check Your Progress: Answers


I. Fill in the Blanks
1. ethics
2. Ethics
3. gatherer
4. mistaken
5. Snoopware
6. freedom
7. history
8. Cookies
9. Traditional
10. adware
11. Spyware
II. True or False
1. False
2. True
3. True
4. False
5. False
6. False
7. True
8. True
9. True
10. False
11. True


III. Multiple Choice Questions


1. (b) Zombie
2. (c) (ii), (i), (iv) and (iii)
3. (c) Parasitic Virus
4. (a) Stealth Virus
5. (b) Polymorphic Virus
6. (b) Mutation engine
7. (c) Both (i) and (ii)
8. (d) All of the above
9. (b) Propagation phase
10. (b) Zombie

2.16 Case Study

Security Case Studies: Company A


Company A is a major supplier of material and services to the industrial sector. Its
business model relies on electronic transactions with key customers and suppliers.
Company A uses Microsoft BizTalk Server to manage transactions and
communications between internal and external environments.

Potential Threats and Security Concerns


Company A wants to make sure that it processes only messages from authenticated
sources. Some of the documents BizTalk Server processes can contain sensitive
information such as financial and personnel data. Company A verifies each incoming
message by using custom cryptographic APIs. It has also built its physical architecture to
handle its security needs.
Company A uses file transfer protocol (FTP) for some of its message traffic.
Although FTP is inherently not secure, Company A accepts the associated risks because
it has many firewalls to help secure other outward-facing applications. Because
Company A receives some of its incoming data through HTTPS, it is concerned about
denial of service (DoS) attacks from external sources. If a DoS attack does occur, the
company has mechanisms to alert the appropriate people immediately.

Security Architecture
The following figure shows the security architecture that Company A uses. Notice
that it has segmented its environment with firewalls to help protect its front-end
application and content servers, its back-end database and business logic servers, and
its outgoing message infrastructure.


Figure 2.6: Company A security architecture


Company A has two main methods to send and receive information to and from
BizTalk Server. The first method uses FTP. Company A supports electronic data
interchange (EDI) transactions by using a third party translation service provider to
communicate with its suppliers and partners. This third party translation service provider
handles incoming and outgoing orders that BizTalk Server must process in an EDI format.
The second method that Company A uses is HTTPS. Company A also works with a
third party service provider that serves as a hub for its industry and facilitates the
purchase and sale of the products that Company A sells and consumes.

Secure Digital Certificates


Company A implements its own secure digital certificates. It manages only a few
certificates. Because it uses a third party service provider, it is less concerned about digital
certificates. Company A realizes that digital certificates are a greater concern for the service
provider, because the service provider interacts with many different institutions.

Security Case Studies: Company B


Company B is a software company. Its business model relies on electronic
transactions with key customers and suppliers. Company B uses a BizTalk Server
implementation for its transactions.
Company B uses BizTalk Server to manage transactions and communications
between internal and external applications. Company B communicates with
approximately 85 internal applications and 2300 trading partners. It currently processes
approximately 2.5 million documents per month, and estimates that it will process
6 million documents per month by the end of 2007.

Amity Directorate of Distance and Online Education


Computer Security 121

Potential Threats and Security Concerns


Company B wants to make sure that it receives and processes only messages from
authenticated sources. Company B also wants to make sure that it can receive and
retrieve documents from outside its corporate network as safely as possible. The firewall
that separates Company B’s corporate network from the Internet lets through only traffic
on ports 80 and 443. The firewall rejects all other traffic.
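Expressed as a packet-admission predicate, this is a sketch of the policy only, not
Company B’s actual firewall configuration:

    # Border policy: admit TCP traffic to ports 80 and 443, reject the rest.
    ALLOWED_TCP_PORTS = {80, 443}

    def admit(protocol, dest_port):
        return protocol == "tcp" and dest_port in ALLOWED_TCP_PORTS

    assert admit("tcp", 443)       # HTTPS passes
    assert admit("tcp", 80)        # HTTP passes
    assert not admit("tcp", 21)    # FTP is rejected at this firewall
    assert not admit("udp", 53)    # all non-TCP traffic is rejected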

Security Architecture
The following figure shows the architecture that Company B uses. Company B uses
BizTalk Server as a message broker to communicate between internal applications and
to process, send, and receive correctly formatted messages to and from its suppliers and
customers. Company B has to process internal and external documents in different
formats. This includes flat files and XML documents.
Company B uses a single firewall to separate its corporate computers from the
Internet. As an added layer of security, Company B incorporates Internet Protocol
security (IPsec) communication between all its corporate servers and workstations that
reside within the corporate network. Company B uses IPsec to encrypt all
communications within its internal domain.
Company B uses a file share server to receive flat files. This file share server
resides outside its corporate network and domain. A firewall separates the file share
server from the corporate network. Company B’s external partners post their flat file
documents on this file share server, and they communicate with the file share server
through an encrypted Point-to-Point Tunneling Protocol (PPTP) pipeline. Company B
protects access to the file share server by partner passwords that expire every 30 days.

Figure 2.7: Company B security architecture

Company B has created a custom file movement application that retrieves the flat
file documents from the file share server and sends them to BizTalk Server for additional
processing. The internal applications for Company B also use the custom file movement
application to pass flat files to BizTalk Server. BizTalk Server transforms these
documents and sends them to Company B’s trading partners.

Before BizTalk Server transforms the partner data to the internal application formats,
it validates that it has an entry for the sender, receiver, and document type. If BizTalk
Server receives a message for which it does not have an entry for the sender, receiver,
or document type, it rejects the message, and Company B’s operations team reviews it.
The internal applications send messages in a variety of formats, including EDIFACT,
flat file, XML, and ANSI X12.
Company B also receives documents through HTTPS from internal and external
sources. External partners post their documents to a Web server outside the corporate
network. A firewall separates this Web server from the corporate network. The custom
file movement application also retrieves the documents posted through HTTPS.
Company B uses a third party product to encrypt and sign messages to its trading
partners. As an additional piece of security, Company B performs a nightly audit on all
the servers to make sure they have the correct security settings. Company B logs all
exceptions for review.

Threat Model Analysis


A threat model analysis (TMA) is an analysis that helps determine the security risks
posed to a product, application, network, or environment, and how attacks can show up.
The goal is to determine which threats require mitigation and how to mitigate them.
This section provides high-level information about the TMA process. For more
information, see Chapter 4 of Writing Secure Code, Second edition, by Michael Howard
and David LeBlanc.
Some of the benefits of a TMA are:
Ɣ Provides a better understanding of your application
Ɣ Helps you find bugs
Ɣ Can help new team members understand the application in detail
Ɣ Contains important information for other teams that build on your application
Ɣ Useful for testers
The high-level steps to perform a TMA are:
Ɣ Step 1. Collect background information
Ɣ Step 2. Create and analyze the threat model
Ɣ Step 3. Review threats
Ɣ Step 4. Identify mitigation techniques and technologies
Ɣ Step 5. Document Security Model and deployment considerations
Ɣ Step 6. Implement and test mitigations
Ɣ Step 7. Keep the Threat Model in sync with design
Step 1. Collect Background Information
To prepare for a successful TMA, you have to collect some background information.
It is useful to analyze your target environment (an application, program, or the whole
infrastructure) as follows:
Ɣ Identify use-case scenarios. For each use-case scenario for your target
environment, identify how you expect your company to use the target
environment, and any limitations or restrictions on the target environment. This
information helps define the scope of the threat model discussion, and provides
pointers to assets (anything of value to your company, such as data and
computers) and entry points.
Ɣ Create a data flow diagram (DFD) for each scenario. Make sure that you go
deep enough to understand your threats.

Ɣ Determine the boundaries and scope of the target environment.
Ɣ Understand the boundaries between trusted and untrusted components.
Ɣ Understand the configuration and administration model for each component.
Ɣ Create a list of external dependencies.
Ɣ Create a list of assumptions about other components on which each
component depends. This helps validate cross-component assumptions, action
items, and follow-up items with other teams.
Step 2. Create and Analyze the Threat Model
After you collect the background information, you should have a threat model
meeting or meetings. Make sure that at least one member of each development
discipline (for example, program managers, developers, and testers) is at the meeting.
Make sure that you remind the attendees that the goal of the meeting is to find threats,
not to fix them. During the threat model meeting, do the following:
Ɣ Examine the DFD for each use case. For each use case, identify:
— Entry points
— Trust boundaries
— Flow of data from entry point to final resting location (and back).
Ɣ Note the assets involved.
Ɣ Discuss each DFD, and look for threats in the following categories for all
entries in the DFD: spoofing identity, tampering with data, repudiation,
information disclosure, denial of service and elevation of privileges.
Ɣ Use the Checklist: Architecture and Design Review to make sure that all threat
categories are covered. For more information about the Architecture and
Design Review, see “Security Architecture and Design Review Index” at
http://go.microsoft.com/fwlink/?LinkId=62590.
Ɣ Create a list of the identified threats. We recommend that this list include the
following: title, brief description (including threat trees), asset(s), impact(s),
risk, mitigation techniques, mitigation status, and a bug number.
Note
You can add risk mitigation techniques and mitigation status as you review the threats. Do not
spend too much time in these areas during the threat model meeting.
Step 3. Review Threats
After you have identified the threats to your environment, you must rank the risk of
each threat and determine how you want to respond to each threat. You can do this with
additional team meetings or through e-mail. You can use the following effect categories
to calculate risk exposure: Damage potential, Reproducibility, Exploitability, Affected
users, and Discoverability.
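These five categories are often abbreviated DREAD. One common way to turn them into
a comparable number, sketched below with illustrative scores, is to rate each category
from 1 to 10 and average them:

    # Risk exposure as the mean of the five DREAD category scores (1-10).
    def risk_exposure(damage, reproducibility, exploitability,
                      affected_users, discoverability):
        return (damage + reproducibility + exploitability +
                affected_users + discoverability) / 5

    # Hypothetical threat: an unauthenticated sender spoofing a partner feed.
    print(risk_exposure(damage=8, reproducibility=6, exploitability=5,
                        affected_users=7, discoverability=4))  # 6.0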
After you have a list of the threats to your target environment prioritized by risk, you
must determine how you will respond to each threat. Your response can be to do nothing
(rarely a good choice), warn users about the potential problem, remove the problem, or
fix the problem.
Step 4. Identify Mitigation Techniques and Technologies
After you identify which threats you will fix, you must determine the available
mitigation techniques for each threat, and the most appropriate technology to reduce the
effect of each threat.
For example, depending on the details of your target environment, you can reduce
the effect of data-tampering threats by using authorization techniques. You then have to
determine the appropriate authorization technology to use (for example, discretionary
access control lists (DACLs), permissions, or IP restrictions).
Important
When you evaluate mitigation techniques and technologies to use, you must consider what
makes business sense for your company, and any policies your company has that might affect
the mitigation technique to choose.
After you complete the TMA, do the following:
Ɣ Document the security model and deployment considerations
Ɣ Implement and test mitigations
Ɣ Keep the threat model synchronized with design
Step 5. Document Security Model and Deployment Considerations
It is valuable to document what you discover during the TMA and how you decide to
reduce the effect of the threats to your target environment. This documentation can be
useful to quality assurance (QA), test, support, and operations personnel. Include
information about other applications that interact or interface with your target environment,
and the firewall and topology recommendations and requirements.
Step 6. Implement and Test Mitigations
When your team is ready to fix threats identified during the TMA, make sure you use
development and deployment checklists to follow secure code and deployment
standards that will help minimize the introduction of new threats.
After you implement a mitigation, test it to verify that it provides the level of
protection that you want for the threat.
Step 7. Keep the Threat Model in Sync with Design
As you add new applications, servers, and other elements to your environment,
make sure that you revisit the findings of the threat model analysis and do TMAs for any
new functionality.

2.17 Further Readings


1. Stallings, Cryptography and Network Security: Principles and Practice, 5/e
(Prentice Hall, 2010). The network security components of its 4th edition, plus
an extra chapter on SNMP, are also packaged as Stallings’ Network Security
Essentials: Applications and Standards, 3/e (Prentice Hall, 2007).
2. Kaufman, Perlman and Speciner, Network Security: Private Communications
in a Public World, 2/e (Prentice Hall, 2003).
3. Menezes, van Oorschot and Vanstone, Handbook of Applied Cryptography
(CRC Press, 1996; 2001 with corrections), free online for personal use.
4. Stallings and Brown, Computer Security: Principles and Practice, 3/e (2014,
Prentice Hall).
5. Boyle and Panko, Corporate Computer Security, 3rd Edition (2013, Prentice
Hall). See also: Panko, Corporate Computer and Network Security, 2/e (2009,
Prentice Hall).
6. Gollmann, Computer Security, 3/e (2011, Wiley).
7. Smith, Elementary Information Security (2011, Jones & Bartlett Learning).
8. Stamp, Information Security: Principles and Practice, 2/e (2011, Wiley).
9. Goodrich and Tamassia, Introduction to Computer Security (2010,
Addison-Wesley).

10. Saltzer and Kaashoek, Principles of Computer System Design (2009, Morgan
Kaufmann).
11. Smith and Marchesini, The Craft of System Security (2007, Addison-Wesley).
12. Pfleeger and Pfleeger, Security in Computing, 4/e (2007, Prentice Hall).
13. Bishop, Computer Security: Art and Science (2002, Addison-Wesley).
14. Adams and Lloyd, Understanding Public-Key Infrastructure, 2nd Edition
(Macmillan Technical, 2002).
15. Housley and Polk, Planning for PKI: Best Practices Guide for Deploying Public
Key Infrastructures (Wiley, 2001).

Miscellaneous Resources
1. IEEE Security and Privacy magazine tables of contents (since Jan. 2003).
2. Review of 10 cryptography books (plus background introduction), Susan
Landau. Bull. Amer. Math. Soc., 41 (2004), pp. 357-367. Copyright 2004, AMS.
3. (classic security paper) J.H. Saltzer and M.D. Schroeder. The Protection of
Information in Computer Systems. Web version. Proc. IEEE 63(9):1278-1308
(Sept.1975). DOI: 10.1109/PROC.1975.9939.
4. DoD Orange Book (1985) and other seminal papers in Computer Security
(thanks to: UC Davis/Matt Bishop).
5. Educational comic strips teaching about password guessing attacks (thanks to
Leah Zhang at Carleton).

Amity Directorate of Distance and Online Education

Potrebbero piacerti anche