Notes
Structure:
2.1 Hardening (Computing)
2.1.1 Operating System
2.1.2 Application Code
2.2 File System Security
2.2.1 File
2.2.2 File Structure
2.2.3 File Type
2.2.4 File Access Mechanisms
2.2.5 Operating System Variations for File System Security
2.3 Local Security Policy in Windows
2.3.1 Opening Local Security Policy Console in Windows
2.3.2 Defining a Password Policy in Windows
2.3.3 Defining an Account Lockout Policy
2.3.4 Defining an Audit Policy
2.3.5 Setting Basic Security Options
2.3.6 Applying Changed Settings in Local Security Policy
2.4 Default Accounts
2.4.1 Removing Unnecessary Default User Accounts
2.5 Network Activity
2.6 Malicious Code
2.6.1 Malicious Code in Java
2.6.2 Malicious Code Threatens Enterprise Security
2.6.3 How to Avoid Malicious Code?
2.6.4 Test for Malicious Code with Vera Code
2.7 Firewall
2.7.1 Introduction
2.7.2 Firewall Logic
2.7.3 Firewall Rules
2.7.4 Types of Firewall
2.7.5 Understanding Packet-filtering Firewalls
2.7.6 Understanding Application/Proxy Firewalls
2.7.7 Understanding Reverse Proxy Firewalls
2.8 Fault Tolerant System
2.8.1 Faults
2.8.2 Approaches to Faults
2.8.3 Achieving Fault Tolerance
2.8.4 Levels of Availability
2.8.5 Active Replication
2.8.6 Primary Backup (Active Standby) Approach
Objectives
After studying this unit, you should be able to understand:
● Local security policies
● Firewall
● File System Security
● Backup and UPS
● Default Account
● A case study based on this unit
Computer Security is the protection of computing systems and the data that they
store or access.
Definition
An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.
[Figure: The operating system as an interface between users, system and application software, and the computer hardware]
Memory Management
Memory management refers to management of Primary Memory or Main Memory.
Main memory is a large array of words or bytes where each word or byte has its own
address.
Main memory provides a fast storage that can be accessed directly by the CPU. So,
for a program to be executed, it must be in the main memory. Operating System does the
following activities for memory management.
● Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are not in use.
● In multiprogramming, the OS decides which process will get memory, when, and how much.
● Allocates the memory when the process requests it to do so.
● De-allocates the memory when the process no longer needs it or has been terminated.
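As an illustration of this bookkeeping, here is a toy allocator in Python that tracks which portion of a fixed memory is in use and by whom. The class and its naive bump-allocation scheme are invented for this sketch, not an actual OS interface:

```python
class ToyMemoryManager:
    """Toy model of OS memory bookkeeping: tracks which bytes are in use."""

    def __init__(self, size):
        self.size = size
        self.allocations = {}   # pid -> (start, length)
        self.next_free = 0      # naive bump allocation, no compaction

    def allocate(self, pid, length):
        # Allocate memory when a process requests it.
        if self.next_free + length > self.size:
            raise MemoryError("out of memory")
        self.allocations[pid] = (self.next_free, length)
        self.next_free += length
        return self.allocations[pid]

    def deallocate(self, pid):
        # De-allocate when the process terminates or no longer needs it.
        self.allocations.pop(pid, None)

    def in_use(self):
        # "Which parts of memory are in use, and by whom."
        return sum(length for _, length in self.allocations.values())

mm = ToyMemoryManager(1024)
mm.allocate("proc_a", 256)
mm.allocate("proc_b", 128)
mm.deallocate("proc_a")
print(mm.in_use())  # 128
```

A real allocator must also reuse freed regions and handle fragmentation, which this sketch deliberately omits.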
Device Management
OS manages device communication via their respective drivers. Operating System
does the following activities for device management.
● Keeps track of all devices. The program responsible for this task is known as the I/O controller.
● Decides which process gets the device, when, and for how much time.
● Allocates the device in an efficient way.
● De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories. Operating System does the
following activities for file management.
● Keeps track of information, location, uses, status, etc. The collective facilities are often known as the file system.
● Decides who gets the resources.
● Allocates the resources.
● De-allocates the resources.
or an audio file. Also, macros in documents can contain application code. Although we may hope that, for the most part, application code has been written without a malicious purpose, we cannot rule out this usage, and therefore we must constantly protect our applications from malicious code.
2.2.1 File
A file is a named collection of related information that is recorded on secondary
storage such as magnetic disks, magnetic tapes and optical disks. In general, a file is a
sequence of bits, bytes, lines or records whose meaning is defined by the file's creator
and user.
X, and the Apple Mac OS X Server version 10.4+ File Services Administration Manual
recommends using only traditional Unix permissions if possible. It also still supports the
Mac OS Classic’s “Protected” attribute.
Solaris ACL support depends on the file system being used; the older UFS file system supports POSIX.1e ACLs, while ZFS supports only NFSv4 ACLs.
Linux supports POSIX.1e ACLs. There is experimental support for NFSv4 ACLs for
ext3 file system.
FreeBSD supports POSIX.1e ACLs on UFS, and NFSv4 ACLs on UFS and ZFS.
IBM z/OS implements file security via RACF (Resource Access Control Facility).
There is also a much more detailed configuration console available – the Local Group Policy Editor. To access it, open the Run dialog using the keyboard shortcut Windows Key + R, type gpedit.msc and click OK.
The settings described below are in the Computer Configuration, Windows Settings and Security Settings section of the Local Group Policy console window.
Again, please do not change any settings not described below unless you really
know what you are doing. Always read the setting explanation thoroughly!
This setting defines how many previously used passwords Windows remembers for each user to prevent frequent reuse of passwords. Usually, 3-5 is enough. Click OK to close the dialog.
The next time a user changes his/her password, it must be in accordance with
Password Policy. If not, error message “The password you typed does not meet the
password policy requirements” will be displayed:
User must then enter a password that satisfies the Password Policy requirements.
The current passwords are not affected by the policy; requirements are checked only when a password is changed. The only setting that does apply immediately is maximum password age – current passwords will have to be changed after the specified number of days. You can read instructions on creating and remembering strong passwords in the Passwords article.
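The checks a password policy performs can be sketched as follows. The length threshold, complexity rule and history size below are illustrative defaults for the sketch, not Windows's actual implementation:

```python
import re

def check_password(new_password, history, min_length=8, history_size=5):
    """Toy check mirroring a Password Policy: length, complexity, reuse."""
    if len(new_password) < min_length:
        return False, "password too short"
    # Complexity: require at least one letter and one digit (illustrative rule).
    if not (re.search(r"[A-Za-z]", new_password) and re.search(r"\d", new_password)):
        return False, "password does not meet complexity requirements"
    # Enforce password history: reject the most recently used passwords.
    if new_password in history[-history_size:]:
        return False, "password was used recently"
    return True, "ok"

history = ["Winter2020a", "Spring21aa"]
print(check_password("abc", history))          # (False, 'password too short')
print(check_password("Spring21aa", history))   # (False, 'password was used recently')
print(check_password("Summer22xy1", history))  # (True, 'ok')
```

When a check fails, a real system would show the user an error message analogous to "The password you typed does not meet the password policy requirements."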
Specify the number of times a user can enter a wrong password before Windows
locks the user account. I recommend using “5” for this. Click OK.
Next, Windows offers default values for the Account lockout duration and Reset account lockout counter after settings. These specify for how long a user account stays locked after a wrong password is entered too many times (during that time, the user cannot log on to the computer) and after which period of time the count of wrong passwords entered is set back to zero.
The defaults are fine; click OK.
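The interplay of the lockout threshold, lockout duration and reset window can be modelled in a few lines. The class and its default values are invented for illustration, not taken from Windows internals:

```python
import time

class LockoutPolicy:
    """Toy account-lockout model: threshold, lockout duration, reset window."""

    def __init__(self, threshold=5, lockout_duration=1800, reset_after=1800):
        self.threshold = threshold
        self.lockout_duration = lockout_duration  # seconds the account stays locked
        self.reset_after = reset_after            # seconds until bad-password count resets
        self.failures = {}                        # user -> (count, first_failure_time)
        self.locked_until = {}                    # user -> unlock timestamp

    def record_failure(self, user, now=None):
        now = now if now is not None else time.time()
        count, first = self.failures.get(user, (0, now))
        # Reset the counter if the reset window has elapsed.
        if now - first > self.reset_after:
            count, first = 0, now
        count += 1
        self.failures[user] = (count, first)
        if count >= self.threshold:
            self.locked_until[user] = now + self.lockout_duration

    def is_locked(self, user, now=None):
        now = now if now is not None else time.time()
        return now < self.locked_until.get(user, 0)

policy = LockoutPolicy(threshold=5)
for _ in range(5):
    policy.record_failure("alice", now=1000.0)
print(policy.is_locked("alice", now=1000.0))   # True
print(policy.is_locked("alice", now=4600.0))   # False (lockout expired)
```

Note how the account unlocks automatically once the lockout duration passes, mirroring the Account lockout duration setting.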
Adjust other Audit Policy settings as described below. “No auditing” means clearing
both Success and Failure check boxes.
● Audit account management – This stores events related to creating, changing and deleting user accounts. Tick both Success and Failure.
● Audit directory service access – Leave this at No auditing. This is related to Active Directory domain servers only.
Scroll down to the Network access options and make sure that Network access: Allow anonymous SID/name translation is set to Disabled.
Then set both Network access: Do not allow anonymous enumeration of SAM accounts and Network access: Do not allow anonymous enumeration of SAM accounts and shares to Enabled. These settings make sure that only authenticated users get access to shared resources (printers, folders, etc.) over local networks.
Changing these settings spawns a warning dialog “You are about to change this
setting to a value that may affect compatibility with clients, services and applications”.
This can be safely ignored.
Next, set the Network security: LAN Manager authentication level to Send NTLMv2 response only. Refuse LM if you are using a home network. This prevents the use of older, easy-to-crack authentication methods while accessing shared resources.
Please remember that you must set the same authentication level for all Windows
computers on your network, otherwise file and printer sharing will not work!
The setting is not very important for devices that are not connected to a local
network, or not sharing files or printers.
User ID: uucp, nuucp
Description: Owner of hidden files used by the uucp protocol. The uucp user account is used for the Unix-to-Unix Copy Program, a group of commands, programs, and files present on most AIX® systems that allows the user to communicate with another AIX system over a dedicated line or a telephone line.
The following table lists common group IDs that might not be needed:
Group ID Description
Analyze your system to determine which IDs are indeed not needed. There might
also be additional user and group IDs that you might not need. Before your system goes
into production, perform a thorough evaluation of available IDs.
Network Cables
Network cables are used to connect computers. The most commonly used cable is Category 5 cable with RJ-45 connectors.
Distributors
A computer can be connected to another one via a serial port but if we need to
connect many computers to produce a network, this serial connection will not work. The
solution is to use a central body to which other computers, printers, scanners, etc. can be
connected and then this body will manage or distribute network traffic.
Router
A router is a type of device which acts as the central point among computers and other devices that are part of a network. A router is equipped with sockets called ports, and computers and other devices are connected to a router using network cables. Nowadays, routers also come in wireless models, with which computers can be connected without any physical cable.
Network Card
Network card is a necessary component of a computer without which a computer
cannot be connected over a network. It is also known as network adapter or Network
Interface Card (NIC). Most branded computers have network card pre-installed. Network
cards are of two types: Internal and External Network Cards.
● Trojan horse: This hostile program conceals itself under other useful programs and invites the user to run it. As soon as it executes, it starts deleting the user's files and installing other malicious software.
● Spyware: This harmful program is quite different from viruses and trojans but is equally harmful. It does not spread like a virus, but it keeps showing annoying pop-ups that lure you into installing its paid version, which treacherously claims to protect your computer. This software secretly collects private information such as credit card numbers, social security numbers, and user names and passwords from your computer and sends them to remote computers.
● Adware: This software is somewhat similar to spyware, but its main purpose is advertisement. It may not be considered malicious software because it comes on your computer with your consent. However, it may hit you with a barrage of advertisements, from pop-ups to banner ads.
● Grayware: This term is used broadly for all computer programs that are annoying but not necessarily totally destructive. It includes programs such as adware, joke programs, and dialers.
The anti-virus software needs to be updated with the latest virus definitions to keep protecting your computer against the latest malicious programs. Most anti-virus software can be set to update automatically.
2.7.1 Introduction
Firewalls are computer security systems that protect your office/home PCs or your network from intruders, hackers and malicious code. Firewalls protect you from offensive software that may come to reside on your systems and from prying hackers. In a day and age when online security concerns are the top priority of computer users, firewalls provide you with the necessary safety and protection.
Firewalls are a must-have for any kind of computer usage that goes online. They protect you from all kinds of abuse and unauthorized access, such as Trojans that allow your computer to be taken over through remote logins or backdoors, viruses, or the misuse of your resources to launch DoS attacks.
Firewalls are worth installing. Be it a basic standalone system, a home network or an office network, all face varying levels of risks, and firewalls do a good job of mitigating these risks. Tune the firewall to your requirements and security levels and you have one less reason to worry.
check for the same matching characteristics in the inflow to decide whether the information coming through is relevant.
Advantages
The primary advantage of packet-filtering firewalls is that they are located in just
about every device on the network. Routers, switches, wireless access points, Virtual
Private Network (VPN) concentrators, and so on may all have the capability of being a
packet-filtering firewall.
Routers from the very smallest home office to the largest service provider devices
inherently have the capability to control the flow of packets through the use of ACLs.
Switches may use Routed Access Control Lists (RACLs), which provide the
capability to control traffic flow on a “routed” (Layer 3) interface; Port Access Control Lists
(PACL), which are assigned to a “switched” (Layer 2) interface; and VLAN Access
Control Lists (VACLs), which have the capability to control “switched” and/or “routed”
packets on a VLAN.
Other networking devices may also have the power to enforce traffic flow through
the use of ACLs. Consult the appropriate device documentation for details.
Packet-filtering firewalls are most likely a part of your existing network. These
devices may not be the most feature rich, but when you need to quickly implement a
security policy to mitigate an attack, protect against infected devices, and so on, this may
be the quickest solution to deploy.
Caveats
The challenge with packet-filtering firewalls is that ACLs are static, and packet
filtering has no visibility into the data portion of the IP packet.
Tip: Packet-filtering firewalls do not have visibility into the payload.
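A minimal sketch of what such header-only filtering looks like, assuming a first-match rule list over the classic 5-tuple. The rule set and addresses are invented for illustration; real ACL syntax is device-specific:

```python
from ipaddress import ip_address, ip_network

# Each rule matches on header fields only; first match wins.
RULES = [
    # (action, src network, dst network, protocol, dst port or None)
    ("deny",   "10.0.0.0/8", "0.0.0.0/0",  "tcp", None),  # block RFC 1918 source (anti-spoofing)
    ("permit", "0.0.0.0/0",  "192.0.2.10", "tcp", 80),    # allow web traffic to one server
    ("deny",   "0.0.0.0/0",  "0.0.0.0/0",  None,  None),  # deny everything else
]

def filter_packet(src, dst, proto, dport):
    """Return the action of the first matching rule (stateless, header-only)."""
    for action, rule_src, rule_dst, rule_proto, rule_port in RULES:
        if ip_address(src) not in ip_network(rule_src):
            continue
        if ip_address(dst) not in ip_network(rule_dst):
            continue
        if rule_proto is not None and rule_proto != proto:
            continue
        if rule_port is not None and rule_port != dport:
            continue
        return action
    return "deny"

print(filter_packet("10.1.2.3", "192.0.2.10", "tcp", 80))     # deny (spoofed private source)
print(filter_packet("203.0.113.5", "192.0.2.10", "tcp", 80))  # permit
```

Notice that nothing in the function ever looks at the payload, which is exactly the limitation the tip above describes.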
Given the issues with packet filtering and the fact that they’re easy to circumvent,
you may dismiss using them entirely. This would be a huge mistake! Taking a holistic
approach and using multiple devices to provide defense in depth is a much better
strategy. An excellent use of packet filtering is on the border of your network, preventing
spoofed traffic and private IP addresses (RFC 1918) from entering or exiting your
network. In-depth ACL configuration is beyond the scope of this book, but a good
reference is RFC 2827.
● Layer 7 is the application layer: It is the user interface to your computer (the programs), for example, word processor, e-mail application, telnet, and so on.
● Layer 6 is the presentation layer: It acts as the translator between systems, converting application layer information to a common format understandable by different systems. This layer handles encryption and standards such as Motion Picture Experts Group (MPEG) and Tagged Image File Format (TIFF).
● Layer 5 is the session layer: It manages the connections or service requests between computers.
● Layer 4 is the transport layer: It prepares data for delivery to the network. Transmission Control Protocol is a function of Layer 4, providing reliable communication and ordering of data. User Datagram Protocol is also a role of Layer 4, but it does not provide reliable delivery of data.
● Layer 3 is the network layer: It is where IP addressing and routing happen. Data at this layer is considered a "packet."
● Layer 2 is the data-link layer: It handles the reliable sending of information. Media Access Control is a component of Layer 2. Data at this layer would be referred to as a "frame."
● Layer 1 is the physical layer: It is composed of the objects that you can see and some that you cannot, such as electrical characteristics.
Tip: Use the following mnemonic to remember the OSI model: All People Seem to Need Data
Processing.
Advantages
Because application/proxy firewalls act on behalf of a client, they provide an
additional “buffer” from port scans, application attacks, and so on. For example, if an
attacker found a vulnerability in an application, the attacker would have to compromise
the application/proxy firewall before attacking devices behind the firewall. The
application/proxy firewall can also be patched quickly in the event that a vulnerability is
discovered. The same may not hold true for patching all the internal devices.
Caveats
A computer acting on your behalf at the application layer has a couple of caveats.
First, that device needs to know how to handle your specific application. Web-based
applications are very common, but if you have an application that’s unique, your proxy
firewall may not be able to support it without making some significant modifications.
Second, application firewalls are generally much slower than packet-filtering or
packet-inspection firewalls because they have to run applications, maintain state for both
the client and server, and also perform inspection of traffic.
Figure 2.2 shows an application/proxy firewall and how a session is established
through it to a web server on the outside.
Note: For simplicity’s sake, Domain Name Service (DNS), Address Resolution Protocol (ARP),
and Layer 2/3 information is not discussed in this example. This also assumes that the client web
application has been configured with the appropriate proxy information.
Advantages
To be really effective, reverse proxies must understand how the application behaves.
For example, suppose you have a web application that requires input of a mailing address,
specifically the area code. The application firewall needs to be intelligent enough to deny
information that could cause the server on the far end any potential issues, such as a
buffer overflow.
Note: A buffer overflow occurs when the limits of a given allocated space of memory are exceeded. This results in adjacent memory space being overwritten. If the memory space is overwritten with malicious code, that code can potentially be executed, compromising the device.
If a cracker were to input letters or a long string of characters into the ZIP code field,
this could cause the application to crash. As we all know, well-written applications
“shouldn’t” allow this type of behavior, but “carbon-based” mistakes do happen, and
having defense in depth helps minimize the human element. Having the proxy keenly
aware of the application and what’s allowed is a very tedious process. When any
changes are made to the application, the proxy must also change. Most organizations
deploying reverse-proxy firewalls don’t usually couple their proxy and applications so
tightly to get the most advantage from them, but they should.
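As a sketch of the kind of application-aware check described above, the following function rejects input that could never be a valid ZIP code, including absurdly long strings. The length cap and the regular expression are illustrative assumptions, not a standard:

```python
import re

MAX_FIELD_LENGTH = 64  # illustrative cap to stop long-string overflow attempts

def validate_zip_field(value):
    """Reject input that could never be a valid US ZIP code.

    A reverse proxy that is aware of the application can apply a check
    like this before the request ever reaches the backend server.
    """
    if len(value) > MAX_FIELD_LENGTH:
        return False  # absurdly long string: possible overflow attempt
    # 5 digits, optionally "-" and 4 more digits (ZIP+4).
    return bool(re.fullmatch(r"\d{5}(-\d{4})?", value))

print(validate_zip_field("90210"))       # True
print(validate_zip_field("90210-1234"))  # True
print(validate_zip_field("abcde"))       # False
print(validate_zip_field("A" * 10000))   # False
```

The tight coupling the text mentions is visible here: if the application ever accepts international postal codes, this proxy rule must change with it.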
Another advantage of a reverse-proxy firewall is for Secure Sockets Layer (SSL)
termination. Two significant benefits are that SSL does not burden the application server,
because it is very processor-intensive, and when decryption is done on a separate device,
the plain-text traffic can be inspected. Many reverse-proxy firewalls perform SSL
termination with an additional hardware module, consequently reducing the burden on
the main processors. Figure 2.3 shows an example of a client on the outside (Internet, for
example) requesting information from a web server.
Steps 2 and 3: The proxy server can have multiple locations from which to glean information. In this example, it requests graphics from Application Server 1 and real-time data from Application Server 2.
Steps 4 and 5: The proxy server prepares the content received from Application Servers 1 and 2 for distribution to the requesting client.
Step 6: The proxy server responds to the client with the requested information.
As you can see by the previous example, the function of a reverse-proxy server is
very beneficial in distributing the processing function over multiple devices and by
providing an additional layer of security between the client requesting information and the
devices that contain the “real” data.
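The flow just described can be sketched as a toy handler. The backend functions and the request path are hypothetical stand-ins for Application Servers 1 and 2:

```python
# Hypothetical backend lookups standing in for Application Servers 1 and 2.
def fetch_graphics():
    return {"logo": "<binary data>"}

def fetch_realtime_data():
    return {"price": 42.5}

def reverse_proxy_handle(request_path):
    """Toy reverse proxy: gathers content from several backends,
    assembles it, and returns a single response to the client."""
    if request_path != "/dashboard":
        return {"status": 404}
    content = {}
    content.update(fetch_graphics())        # from Application Server 1
    content.update(fetch_realtime_data())   # from Application Server 2
    return {"status": 200, "body": content}

response = reverse_proxy_handle("/dashboard")
print(response["status"])        # 200
print(sorted(response["body"]))  # ['logo', 'price']
```

The client only ever sees the proxy; the two backend servers that hold the "real" data are never exposed directly.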
2.8.1 Faults
A fault in a system is some deviation from the expected behavior of the system: a
malfunction. Faults may be due to a variety of factors, including hardware failure,
software bugs, operator (user) error, and network problems.
Physical redundancy deals with devices, not data. We add extra equipment to
enable the system to tolerate the loss of some failed components. RAID disks and
backup name servers are examples of physical redundancy.
When addressing physical redundancy, we can differentiate redundancy from
replication. With replication, we have several units operating concurrently and a voting
(quorum) system to select the outcome. With redundancy, only one unit is functioning
while the redundant units are standing by to fill in case the unit ceases to work.
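The voting (quorum) idea behind replication can be shown in a few lines: a simple majority vote over the replies of concurrently running replicas, with the replica values invented for illustration:

```python
from collections import Counter

def vote(replies):
    """Quorum selection over concurrently running replicas: the value
    returned by the majority wins, masking a minority of faulty units."""
    counts = Counter(replies)
    value, n = counts.most_common(1)[0]
    if n > len(replies) // 2:
        return value
    raise RuntimeError("no majority among replicas")

# Three replicas compute the same result; one is faulty.
print(vote([42, 42, 17]))  # 42
```

With redundancy, by contrast, no vote takes place: only one unit runs, and a standby unit is activated when it fails.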
Cold failover entails application restart on the backup machine. When a backup machine
takes over, it starts all the applications that were previously running on the primary
system. Of course, any work that the primary may have done is now lost. With warm
failover, applications periodically write checkpoint files onto stable storage that is shared
with the backup system. When the backup system takes over, it reads the checkpoint
files to bring the applications to the state of the last checkpoint. Finally, with hot failover,
applications on the backup run in lockstep synchrony with applications on the primary,
taking the same inputs as on the primary. When the backup takes over, it is in the exact
state that the primary was in when it failed. However, if the failure was caused by
software, then there is a good chance that the backup died from the same bug since it
received the same inputs.
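Warm failover's checkpointing can be sketched as follows. The file name, the JSON state format and the atomic-write trick are illustrative choices, not a prescribed mechanism:

```python
import json, os, tempfile

# Warm failover: the primary periodically checkpoints its state to storage
# shared with the backup; on takeover, the backup resumes from the last
# checkpoint rather than restarting from scratch (cold failover).
CHECKPOINT = os.path.join(tempfile.gettempdir(), "app.checkpoint")

def primary_checkpoint(state):
    # Write atomically so the backup never reads a half-written file.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def backup_takeover():
    # Any work done after the last checkpoint is lost, but the backup
    # starts from the checkpointed state instead of from zero.
    with open(CHECKPOINT) as f:
        return json.load(f)

primary_checkpoint({"processed_records": 1500})
print(backup_takeover())  # {'processed_records': 1500}
```

Hot failover avoids even this checkpoint gap by running the backup in lockstep, at the cost of also replaying the input that may have triggered a software fault.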
TCP Retransmission
TCP (Transmission Control Protocol) is the reliable virtual-circuit transport layer protocol provided on top of the unreliable network layer Internet Protocol (IP). TCP relies on the sender receiving an acknowledgement from the receiver whenever a packet is received. If the sender does not receive that acknowledgement within a certain amount of time, it assumes that the packet was lost and retransmits it. In Windows, the retransmission timer is initialized to three seconds. The default maximum number of retransmissions is five. The retransmission time is adjusted based on the usual delay of a specific connection. TCP is an example of time redundancy.
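The retransmission scheme can be sketched with the Windows-like defaults mentioned above. The exponential back-off shown here is a simplification; real TCP derives its timer from measured round-trip times:

```python
def send_reliably(send, initial_timeout=3.0, max_retransmissions=5):
    """Time redundancy in the style of TCP: retransmit on timeout.

    `send` is a callable that returns True when an acknowledgement
    arrives within the given timeout. Defaults mirror the Windows
    values quoted in the text: 3 s initial timer, 5 retransmissions.
    """
    timeout = initial_timeout
    for attempt in range(1 + max_retransmissions):
        if send(timeout):
            return attempt  # number of retransmissions that were needed
        timeout *= 2        # back off before trying again
    raise TimeoutError("no acknowledgement after retransmissions")

# Simulate a link that drops the first two transmissions.
outcomes = iter([False, False, True])
print(send_reliably(lambda t: next(outcomes)))  # 2
```

The same data is sent again over time rather than over extra hardware, which is why this counts as time redundancy.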
2.9 Backup
In information technology, a backup, or the process of backing up, refers to the
copying and archiving of computer data so it may be used to restore the original after a
data loss event. The verb form is to back up in two words, whereas the noun is backup.
Backups have two distinct purposes. The primary purpose is to recover data after its
loss, be it by data deletion or corruption. Data loss can be a common experience of
computer users. A 2008 survey found that 66% of respondents had lost files on their
home PC. The secondary purpose of backups is to recover data from an earlier time,
according to a user-defined data retention policy, typically configured within a backup
application for how long copies of data are required. Though backups popularly represent
a simple form of disaster recovery, and should be part of a disaster recovery plan, by
themselves, backups should not alone be considered disaster recovery. One reason for
this is that not all backup systems or backup applications are able to reconstitute a
computer system or other complex configurations such as a computer cluster, active
directory servers, or a database server, by restoring only data from a backup.
Since a backup system contains at least one copy of all data worth saving, the data
storage requirements can be significant. Organizing this storage space and managing
the backup process can be a complicated undertaking. A data repository model can be
used to provide structure to the storage. Nowadays, there are many different types of
data storage devices that are useful for making backups. There are also many different
ways in which these devices can be arranged to provide geographic redundancy, data
security, and portability.
Before data are sent to their storage locations, they are selected, extracted, and
manipulated. Many different techniques have been developed to optimize the backup
procedure. These include optimizations for dealing with open files and live data sources.
Storage Media
Regardless of the repository model that is used, the data has to be stored on some
data storage medium.
● Magnetic tape: Magnetic tape has long been the most commonly used
medium for bulk data storage, backup, archiving, and interchange. Tape has
typically an order of magnitude better capacity/price ratio when compared to
hard disk, but recently the ratios for tape and hard disk have become a lot
closer. There are many formats, many of which are proprietary or specific to
certain markets like mainframes or a particular brand of personal computer.
Tape is a sequential access medium, so even though access times may be
poor, the rate of continuously writing or reading data can actually be very fast.
Some new tape drives are even faster than modern hard disks.
● Hard disk: The capacity/price ratio of hard disk has been rapidly improving for
many years. This is making it more competitive with magnetic tape as a bulk
storage medium. The main advantages of hard disk storage are low access
times, availability, capacity and ease of use. External disks can be connected
via local interfaces like SCSI, USB, FireWire, or eSATA, or via longer distance
technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based backup
systems, such as Virtual Tape Libraries, support data deduplication which can
dramatically reduce the amount of disk storage capacity consumed by daily
and weekly backup data. The main disadvantages of hard disk backups are
that they are easily damaged, especially while being transported (e.g., for
off-site backups), and that their stability over periods of years is a relative
unknown.
● Optical storage: Recordable CDs, DVDs, and Blu-ray Discs are commonly
used with personal computers and generally have low media unit costs.
However, the capacities and speeds of these and other optical discs are
typically an order of magnitude lower than hard disk or tape. Many optical disk
formats are WORM type, which makes them useful for archival purposes since
the data cannot be changed. The use of an auto-changer or jukebox can make
optical discs a feasible option for larger-scale backup systems. Some optical
storage systems allow for cataloged data backups without human contact with
the discs, allowing for longer data integrity.
Because the data are not accessible via any computer except during limited
periods in which they are written or read back, they are largely immune to a
whole class of online backup failure modes. Access time will vary depending
on whether the media are on-site or off-site.
● Off-site data protection: To protect against a disaster or other site-specific
problem, many people choose to send backup media to an off-site vault. The
vault can be as simple as a system administrator’s home office or as
sophisticated as a disaster-hardened, temperature-controlled, high-security
bunker with facilities for backup media storage. Importantly, a data replica can
be off-site but also online (e.g., an off-site RAID mirror). Such a replica has
fairly limited value as a backup, and should not be confused with an offline
backup.
● Backup site or disaster recovery center (DR center): In the event of a
disaster, the data on backup media will not be sufficient to recover. Computer
systems onto which the data can be restored and properly configured networks
are necessary too. Some organizations have their own data recovery centers
that are equipped for this scenario. Other organizations contract this out to a
third-party recovery center. Because a DR site is itself a huge investment,
backing up is very rarely considered the preferred method of moving data to a
DR site. A more typical way would be remote disk mirroring, which keeps the
DR data as up-to-date as possible.
2.9.4 Files
Copying files: With a file-level approach, making copies of files is the simplest and
most common way to perform a backup. A means to perform this basic function is
included in all backup software and all operating systems.
Partial file copying: Instead of copying whole files, one can limit the backup to only
the blocks or bytes within a file that have changed in a given period of time. This
technique can use substantially less storage space on the backup medium, but requires
a high level of sophistication to reconstruct files in a restore situation. Some
implementations require integration with the source file system.
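Partial file copying can be sketched by hashing fixed-size blocks and copying only the blocks whose hashes changed. The block size and choice of hash are illustrative; real implementations also handle insertions that shift block boundaries:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def block_hashes(data):
    """Hash each fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_data, new_data):
    """Return indices of blocks that differ and must be copied to backup."""
    old, new = block_hashes(old_data), block_hashes(new_data)
    return [i for i in range(len(new)) if i >= len(old) or old[i] != new[i]]

old = b"A" * 8192 + b"B" * 4096   # three blocks
new = b"A" * 8192 + b"C" * 4096   # only the last block changed
print(changed_blocks(old, new))   # [2]
```

Only block 2 needs to be stored, which is the storage saving the text describes; reconstructing the file at restore time requires reassembling the unchanged and changed blocks in order.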
Deleted files: To prevent the unintentional restoration of files that have been
intentionally deleted, a record of the deletion must be kept.
2.9.7 Limitations
An effective backup scheme will take into consideration the limitations of the
situation.
Backup window: The period of time when backups are permitted to run on a
system is called the backup window. This is typically the time when the system sees the
least usage and the backup process will have the least amount of interference with
normal operations. The backup window is usually planned with users’ convenience in
mind. If a backup extends past the defined backup window, a decision is made whether it
is more beneficial to abort the backup or to lengthen the backup window.
Performance impact: All backup schemes have some performance impact on the
system being backed up. For example, for the period of time that a computer system is
being backed up, the hard drive is busy reading files for the purpose of backing up, and
its full bandwidth is no longer available for other tasks. Such impacts should be analyzed.
Costs of hardware, software and labour: All types of storage media have a finite
capacity with a real cost. Matching the correct amount of storage capacity (over time)
with the backup needs is an important part of the design of a backup scheme. Any
backup scheme has some labor requirement, but complicated schemes have
considerably higher labor requirements. The cost of commercial backup software can
also be considerable.
Network bandwidth: Distributed backup systems can be affected by limited network
bandwidth.
Implementation: Meeting the defined objectives in the face of the above limitations
can be a difficult task. The tools and concepts below can make that task more
achievable.
Scheduling: Using a job scheduler can greatly improve the reliability and
consistency of backups by removing part of the human element. Many backup software
packages include this functionality.
Authentication: Over the course of regular operations, the user accounts and/or
system agents that perform the backups need to be authenticated at some level. The
power to copy all data off of or onto a system requires unrestricted access. Using an
authentication mechanism is a good way to prevent the backup scheme from being used
for unauthorized activity.
Chain of trust: Removable storage media are physical items and must only be
handled by trusted individuals. Establishing a chain of trusted individuals (and vendors) is
critical to defining the security of the data.
Measuring the process: To ensure that the backup scheme is working as expected,
key factors should be monitored and historical data maintained.
Backup validation (Also known as “backup success validation”): Provides
information about the backup, and proves compliance to regulatory bodies outside the
organization; for example, an insurance company in the USA might be required under
HIPAA to demonstrate that its client data meet records retention requirements. Disaster,
data complexity, data value and increasing dependence upon ever-growing volumes of
2.10.2 Technologies
The three general categories of modern UPS systems are online, line-interactive or
standby. An online UPS uses a “double conversion” method of accepting AC input,
rectifying to DC for passing through the rechargeable battery (or battery strings), then
inverting back to 120 V/230 V AC for powering the protected equipment. A
line-interactive UPS maintains the inverter in line and redirects the battery’s DC current
path from the normal charging mode to supplying current when power is lost. In a
standby (“offline”) system, the load is powered directly by the input power and the backup
power circuitry is only invoked when the utility power fails. Most UPSs below 1 kVA are of
the line-interactive or standby variety, which are usually less expensive.
For large power units, Dynamic Uninterruptible Power Supplies (DUPS) are
sometimes used. A synchronous motor/alternator is connected to the mains via a choke.
Energy is stored in a flywheel. When the mains power fails, an eddy-current regulation
maintains the power on the load as long as the flywheel’s energy is not exhausted. DUPS
are sometimes combined or integrated with a diesel generator that is turned on after a
brief delay, forming a diesel rotary uninterruptible power supply (DRUPS).
A fuel cell UPS has been developed in recent years using hydrogen and a fuel cell
as a power source, potentially providing long-run times in a small space.
Offline/Standby
[Figure: offline/standby UPS block diagram. The load is fed directly from normal AC power; a battery charger, battery, and inverter sit on a standby path that takes over when mains power fails.]
Line-interactive
[Figure: line-interactive UPS block diagram. The battery charger, battery, and inverter are kept in line with the AC path.]
The line-interactive UPS is similar in operation to a standby UPS, but with the
addition of a multi-tap variable-voltage autotransformer. This is a special type of
transformer that can add or subtract powered coils of wire, thereby increasing or
decreasing the magnetic field and the output voltage of the transformer. This is also
known as a Buck-boost transformer.
This type of UPS is able to tolerate continuous undervoltage brownouts and
overvoltage surges without consuming the limited reserve battery power. It instead
compensates by automatically selecting different power taps on the autotransformer.
Depending on the design, changing the autotransformer tap can cause a very brief output
power disruption, which may cause UPSs equipped with a power-loss alarm to “chirp” for
a moment.
This has become popular even in the cheapest UPSs because it takes advantage of
components already included. The main 50/60 Hz transformer used to convert between
line voltage and battery voltage needs to provide two slightly different turns ratios: one to
convert the battery output voltage (typically a multiple of 12 V) to line voltage, and a
second one to convert the line voltage to a slightly higher battery charging voltage (such
as a multiple of 14 V). The difference between the two voltages is because charging a
battery requires a delta voltage (up to 13-14 V for charging a 12 V battery). Furthermore,
it is easier to do the switching on the line-voltage side of the transformer because of the
lower currents on that side.
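The two turns ratios just described can be checked with a quick calculation. This is a sketch using the example figures from the text (120 V line, 12 V battery, roughly 14 V charging voltage):

```python
# Example figures from the text: 120 V line, 12 V battery output,
# about 14 V battery charging voltage (charging needs a delta voltage).
line_v = 120.0
battery_out_v = 12.0
battery_charge_v = 14.0

# Turns ratio that steps the battery voltage up to line voltage when inverting:
inverting_ratio = line_v / battery_out_v        # 10.0
# Slightly smaller ratio that steps line voltage down to the higher
# charging voltage:
charging_ratio = line_v / battery_charge_v      # about 8.57

print(inverting_ratio, round(charging_ratio, 2))
```

The small gap between the two ratios is exactly the "slightly different turns ratios" the main transformer must provide.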
To gain the buck/boost feature, all that is required is two separate switches so that
the AC input can be connected to one of the two primary taps, while the load is connected
to the other, thus using the main transformer’s primary windings as an autotransformer.
The battery can still be charged while “bucking” an overvoltage, but while “boosting” an
undervoltage, the transformer output is too low to charge the batteries.
Autotransformers can be engineered to cover a wide range of varying input voltages,
but this requires more taps and increases the complexity and expense of the UPS. It is
common for the autotransformer to cover a range only from about 90 V to 140 V for
120 V power, and then switch to battery if the voltage goes much higher or lower than
that range.
In low-voltage conditions, the UPS will use more current than normal. So, it may
need a higher current circuit than a normal device. For example, to power a 1000-W
device at 120 V, the UPS will draw 8.33 A. If a brownout occurs and the voltage drops to
100 V, the UPS will draw 10 A to compensate. This also works in reverse, so that in an
overvoltage condition, the UPS will need less current.
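The current figures above follow directly from I = P/V. A minimal sketch, assuming a unity power factor:

```python
def line_current(power_w, line_v):
    """I = P / V for a given line voltage (assumes unity power factor)."""
    return power_w / line_v

# The 1000 W example from the text:
print(round(line_current(1000, 120), 2))  # 8.33 A at the nominal 120 V
print(round(line_current(1000, 100), 2))  # 10.0 A during a 100 V brownout
```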
Ferro-resonant
Ferro-resonant units operate in the same way as a standby UPS unit; however, they
are effectively online, with a ferro-resonant transformer used to filter the output.
This transformer is designed to hold energy long enough to cover the time between
switching from line power to battery power, effectively eliminating the transfer time.
Many ferro-resonant UPSs are 82-88% efficient (AC/DC-AC) and offer excellent
isolation.
The transformer has three windings, one for ordinary mains power, the second for
rectified battery power, and the third for output AC power to the load.
This once was the dominant type of UPS and is limited to around the 150 kVA range.
These units are still mainly used in some industrial settings (oil and gas, petrochemical,
chemical, utility, and heavy industry markets) due to the robust nature of the UPS. Many
ferro-resonant UPSs utilizing controlled ferro technology may not interact with power-
factor-correcting equipment.
DC Power
A UPS designed for powering DC equipment is very similar to an online UPS,
except that it does not need an output inverter. Also, if the UPS’s battery voltage is
matched with the voltage the device needs, the device’s power supply will not be needed
either. Since one or more power conversion steps are eliminated, this increases
efficiency and runtime.
Many systems used in telecommunications use an extra-low-voltage “common
battery” 48 V DC power supply, because it falls under less restrictive safety regulations,
such as not requiring installation in conduit and junction boxes. DC has typically been the dominant power source
for telecommunications, and AC has typically been the dominant source for computers
and servers.
There has been much experimentation with 48 V DC power for computer servers, in
the hope of reducing the likelihood of failure and the cost of equipment. However, to
supply the same amount of power, the current would be higher than an equivalent 115 V
or 230 V circuit; greater current requires larger conductors, or more energy lost as heat.
A laptop computer is a classic example of a PC with a DC UPS built in.
High voltage DC (380 V) is finding use in some data center applications, and allows
for small power conductors, but is subject to the more complex electrical code rules for
safe containment of high voltages.
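The conductor-sizing trade-off follows from I = P/V at each distribution voltage. A sketch comparing the voltages mentioned above, again assuming unity power factor:

```python
def current_a(power_w, volts):
    """I = P / V for a given distribution voltage (unity power factor)."""
    return power_w / volts

# One 1 kW load at the voltages discussed above: lower voltage means
# higher current, hence larger conductors or more heat loss.
for v in (48, 115, 230, 380):
    print(f"{v:>3} V -> {current_a(1000, v):5.2f} A")
```

This is why 48 V DC distribution needs much heavier conductors than 115 V or 230 V AC, while 380 V DC allows small ones.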
Rotary
A rotary UPS uses the inertia of a high-mass spinning flywheel (flywheel energy
storage) to provide short-term ride-through in the event of power loss. The flywheel also
acts as a buffer against power spikes and sags, since such short-term power events are
not able to appreciably affect the rotational speed of the high-mass flywheel. It is also one
of the oldest designs, predating vacuum tubes and integrated circuits.
It can be considered to be online since it spins continuously under normal conditions.
However, unlike a battery-based UPS, flywheel-based UPS systems typically provide
10 to 20 seconds of protection before the flywheel has slowed and power output stops.
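The 10-to-20-second figure can be estimated from the flywheel's usable kinetic energy, ΔE = ½I(ω₁² − ω₂²). A rough sketch with hypothetical numbers (30 kg·m² rotor, 4800 RPM full speed, 2400 RPM minimum usable speed, 200 kW load):

```python
import math

def ride_through_seconds(inertia_kg_m2, rpm_full, rpm_min, load_w):
    """Usable flywheel energy between full speed and the minimum speed
    the converter can still draw power from: dE = 1/2 * I * (w1^2 - w2^2)."""
    w1 = rpm_full * 2 * math.pi / 60   # angular speed in rad/s
    w2 = rpm_min * 2 * math.pi / 60
    usable_j = 0.5 * inertia_kg_m2 * (w1 ** 2 - w2 ** 2)
    return usable_j / load_w

# Hypothetical unit: 30 kg*m^2 rotor spinning down from 4800 to 2400 RPM
# while carrying a 200 kW load gives a ride-through of roughly 14 s.
print(round(ride_through_seconds(30.0, 4800, 2400, 200_000), 1))
```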
Form Factors
UPS systems come in several different forms and sizes. However, the two most
common forms are tower and rack-mount.
Tower Model
Tower models stand upright on the ground or on a desk/shelf, and are typically used
in network workstations or desktop computer applications.
Rack-mount Model
Rack-mount models can be mounted in standard 19" rack enclosures and can
require anywhere from 1U to 12U (rack space). They are typically used in server and
networking applications.
Applications
N + 1
In large business environments where reliability is of great importance, a single
huge UPS can also be a single point of failure that can disrupt many other systems. To
provide greater reliability, multiple smaller UPS modules and batteries can be integrated
together to provide redundant power protection equivalent to one very large UPS. “N + 1”
means that if the load can be supplied by N modules, the installation will contain N + 1
modules. In this way, failure of one module will not impact system operation.
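The N + 1 sizing rule above can be sketched as a short calculation (the 80 kVA load and 20 kVA module size are hypothetical examples):

```python
import math

def n_plus_1_modules(load_kva, module_kva):
    """N modules sized to carry the full load, plus one redundant module."""
    n = math.ceil(load_kva / module_kva)
    return n + 1

# Hypothetical 80 kVA load served by 20 kVA modules: N = 4, so install 5.
print(n_plus_1_modules(80, 20))
```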
Multiple Redundancy
Many computer servers offer the option of redundant power supplies, so that in the
event of one power supply failing, one or more other power supplies are able to power
the load. This is a critical point – each power supply must be able to power the entire
server by itself.
Redundancy is further enhanced by plugging each power supply into a different
circuit (i.e., to a different circuit breaker).
Redundant protection can be extended further yet by connecting each power supply
to its own UPS. This provides double protection from both a power supply failure and a
UPS failure, so that continued operation is assured. This configuration is also referred to
as 1 + 1 or 2N redundancy. If the budget does not allow for two identical UPS units, then
it is common practice to plug one power supply into main power and the other into the
UPS.
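Assuming independent failures, the benefit of this 1 + 1 arrangement can be quantified: if each power path is available a fraction a of the time, the pair is available 1 − (1 − a)ⁿ of the time. A sketch with a hypothetical 99%-available path:

```python
def combined_availability(unit_availability, n_units=2):
    """Availability of n redundant units, any one of which can carry the
    load, assuming independent failures: 1 - P(all units down)."""
    return 1 - (1 - unit_availability) ** n_units

# A hypothetical 99%-available path, doubled (1 + 1 / 2N), yields ~99.99%.
print(combined_availability(0.99))
```

The independence assumption is why plugging each supply into a different circuit matters: shared circuits create correlated failures that this formula does not capture.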
Outdoor Use
When a UPS system is placed outdoors, it should have some specific features that
guarantee that it can tolerate weather with no effect on performance. Factors such as
temperature, humidity, rain, and snow among others should be considered by the
manufacturer when designing an outdoor UPS system. Operating temperature ranges for
outdoor UPS systems could be around −40°C to +55°C.
Outdoor UPS systems can be pole, ground (pedestal), or host mounted. An outdoor
environment could mean extreme cold, in which case the outdoor UPS system should
include a battery heater mat, or extreme heat, in which case it should include a fan
system or an air-conditioning system.
2.11 Summary
Computer security covers all the processes and mechanisms by which digital
equipment, information and services are protected from unintended or unauthorized
access, change or destruction and the process of applying security measures to ensure
confidentiality, integrity, and availability of data both in transit and at rest. It includes
controlling physical access to the hardware, as well as protecting against harm that may
come via network access or data and code injection, and against malpractice by
operators, whether intentional, accidental, or the result of being tricked into deviating
from secure procedures.
Firewalls are computer security systems that protect your office/home PCs or your
network from intruders, hackers and malicious code. Firewalls protect you from offensive
software that may come to reside on your systems or from prying hackers. In an age
when online security concerns are a top priority for computer users, firewalls provide
the necessary safety and protection.
Security Architecture
The following figure shows the security architecture that Company A uses. Notice
that it has segmented its environment with firewalls to help protect its front-end
application and content servers, its back-end database and business logic servers, and
its outgoing message infrastructure.
Security Architecture
The following figure shows the architecture that Company B uses. Company B uses
BizTalk Server as a message broker to communicate between internal applications and
to process, send, and receive correctly formatted messages to and from its suppliers and
customers. Company B has to process internal and external documents in different
formats. This includes flat files and XML documents.
Company B uses a single firewall to separate its corporate computers from the
Internet. As an added layer of security, Company B incorporates Internet Protocol
security (IPsec) communication between all its corporate servers and workstations that
reside within the corporate network. Company B uses IPsec to encrypt all
communications within its internal domain.
Company B uses a file share server to receive flat files. This file share server
resides outside its corporate network and domain. A firewall separates the file share
server from the corporate network. Company B’s external partners post their flat file
documents on this file share server, and they communicate with the file share server
through an encrypted Point-to-Point Tunneling Protocol (PPTP) pipeline. Company B
protects access to the file share server by partner passwords that expire every 30 days.
Company B has created a custom file movement application that retrieves the flat
file documents from the file share server and sends them to BizTalk Server for additional
processing. The internal applications for Company B also use the custom file movement
application to pass flat files to BizTalk Server. BizTalk Server transforms these
documents and sends them to Company B’s trading partners.