
AN APPLICATION OF SIGNAL AND IMAGE PROCESSING

ABSTRACT
Intrusion detection is an application of signal and image processing. It is used to detect or sense, analyse, and respond to collected packet data. Data are sensed through sensors connected both inside and outside of the firewall. The data are divided into small packets and sent to the host by means of protocol layers. The data are filtered to remove unwanted traffic, and a response is made to the filtered data. Intrusion detection helps in preserving wildlife sanctuaries and in military applications. Sensors can detect the presence of wild animals and give a detailed description of where they are living, which helps in saving the lives of wild animals. Intrusion detection is also used in military applications: when sensors are fixed both inside and outside the perimeter, they detect the position of the terrorist, which helps the army in battle. There are many applications in which intrusion detection plays a major role in preserving our nation.

INTRODUCTION

An intrusion is an active sequence of related events that deliberately tries to cause harm, such as rendering a system unusable, accessing unauthorized information, or manipulating such information. To record information about both successful and unsuccessful attempts, security professionals place devices that examine network traffic, called sensors. These sensors are kept both in front of the firewall (the unprotected area) and behind the firewall (the protected area), and activity is evaluated by comparing the information recorded by the two.

Intrusion detection systems fall into one of three categories: Host Based Intrusion Detection Systems (HIDS), Network Based Intrusion Detection Systems (NIDS), and hybrids of the two.

[Figure: classification of IDS into NIDS, HIDS, and hybrid systems.]

A Host Intrusion Detection System requires software that resides on the system and can scan all host resources for activity; some just scan syslog and event logs for activity. A HIDS logs any activity it discovers to a secure database and checks whether the events match any malicious event record listed in the knowledge base.

A Network Intrusion Detection System is usually inline on the network, and it analyzes network packets looking for attacks. A NIDS receives all packets on a particular network segment, including switched networks. It carefully reconstructs the streams of traffic to analyze them for patterns of malicious behavior. Most NIDS are equipped with facilities to log their activities and report or alarm on questionable events.

A Hybrid Intrusion Detection System combines a HIDS, which monitors events occurring on the host system, with a NIDS, which monitors network traffic.
An Intrusion Detection System (IDS) can be defined as the tools, methods, and resources that help identify, assess, and report unauthorized activity. Intrusion detection is typically one part of an overall protection system that is installed around a system or device. IDSs work at the network layer of the OSI model, and sensors are placed at choke points on the network. They analyze packets to find specific patterns in the network traffic; if they find such a pattern, an alert is logged and a response can be based on the data recorded.

The basic process of an IDS is that a NIDS or HIDS passively collects data, then preprocesses and classifies them. Statistical analysis can be done to determine whether the information falls outside normal activity; if so, it is matched against a knowledge base. If a match is found, an alert is sent.

The goal of an Intrusion Detection System is to improve an information system's security: it is an organization of the consistent parts of data and their interrelationships to identify any anomalous activity of interest. This goal can be further broken down as follows:
1. Create records of relevant activity for follow-up.
2. Support criminal prosecution of intrusion attacks.
3. Act as a deterrent to malicious attack.

The intrusion analysis process can be broken down into four phases, as follows:
1. Preprocessing
2. Analysis
3. Response
4. Refinement
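The four phases can be pictured as a simple processing loop. The sketch below, in Python, is only illustrative: collect(), preprocess(), lookup(), and respond() stand in for whatever a real product provides, and the refinement phase happens offline.

    # Illustrative four-phase loop; all names are placeholders, not a real API.
    def ids_loop(sensor, knowledge_base):
        for raw in sensor.collect():               # passive data collection
            record = preprocess(raw)               # canonical format + classification
            match = knowledge_base.lookup(record)  # analysis against known patterns
            if match is not None:
                respond(match)                     # log or alert, after the fact
        # refinement: rules are re-tuned offline to cut false positives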
Preprocessing

Preprocessing is a key function once data are collected from an IDS or IPS sensor. In this step, the data are organized in some fashion for classification. This stage helps determine the format the data are put into, which would be a canonical format or a structured database. Once the data are formatted, they are further classified; this classification depends on the analysis schema being used. If rule-based detection is used, the classification will involve rules and pattern descriptors. If anomaly detection is used, there will be statistical profiles based on difference algorithms, in which user behavior is baselined over time and any behavior that falls outside that classification is flagged as an anomaly.

On completion of the classification process, the data are concatenated and put into a defined detection template by replacing variables with values. These detection templates populate the knowledge base, which is stored in the core analysis engine. Examples include:
1. Detection of the NetBus backdoor.
2. Detection of unexpected privilege escalation.
3. Detection of modification of system log files.
4. Detection of the SubSeven backdoor.
5. Detection of an Oracle grant attempt.
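As a rough illustration of what a detection template might look like, the following sketch encodes one of the examples above as a pattern descriptor whose variables have been filled in with concrete values. The field names are invented for illustration and do not come from any particular product.

    # Hypothetical detection template; NetBus historically listened on TCP 12345.
    NETBUS_TEMPLATE = {
        "name": "Detection of the NetBus backdoor",
        "protocol": "tcp",
        "dst_port": 12345,
        "payload_pattern": b"NetBus",
    }
    KNOWLEDGE_BASE = [NETBUS_TEMPLATE]   # templates populate the knowledge base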
Analysis

Once preprocessing is completed, the analysis stage begins. Each data record is compared with the knowledge base; the record will either be logged as an intrusion event or be dropped, and the next data record is analyzed.

Response

In intrusion detection systems we get the information passively, after the fact, so an alert also arrives after the fact. The response can be set to be performed automatically, or it can be carried out manually after someone has analyzed the situation.

Refinement

This is the stage where fine tuning is done, based on previous usage and detected intrusions. Refinement helps in reducing false-positive levels and yields a more accurate security tool. Tools such as CTR (Cisco Threat Response) help with the refinement stage by making sure that an alert is valid, checking whether you are actually vulnerable to the attack or not. (Rule-based detection is also known as signature detection, pattern matching, or misuse detection.)
INTRUSION DETECTION ARCHITECTURE

The roles performed by, and relationships among, machines, devices, applications, and processes, including the conventions used for communicating between them, define an architecture. The intrusion detection architecture is a designed structure into which every element fits. An effective architecture is one in which each machine, device, component, and process performs its role in an effective and coordinated manner, resulting in efficient information processing and output. The different types of tiered architectures are as follows:
1. Single-tiered architecture
2. Multi-tiered architecture

Single-Tiered Architecture

A single-tiered architecture, the most basic of the architectures discussed here, is one in which the components of an IDS collect and process data themselves, rather than passing the output they collect to another set of components. An example is a HIDS tool that takes the output of system logs and compares it to known patterns of attack.

Multi-Tiered Architecture

A multi-tiered architecture involves multiple components that pass information to each other. Such an IDS mainly consists of three parts:
1. Sensors
2. Analyzers or agents
3. Manager

Sensors perform data collection. For example, network sensors are often programs that capture data from network interfaces; sensors may also collect data from system logs and other sources, such as personal firewalls and TCP wrappers. Sensors pass information to agents, which monitor intrusive activity on their individual hosts. Each sensor and agent is configured to run in the particular operating environment in which it is placed. Agents are specialized to perform one, and only one, function: for example, one agent might examine nothing but TCP traffic, while another examines only FTP connections.

When an agent has determined that an attack has occurred or is about to occur, it sends information to the manager component, which can perform a variety of functions, including:
1. Collecting and displaying alerts on a console.
2. Triggering a pager or calling a cellular phone number.
3. Storing the information regarding the incident in the database.
4. Sending information to the host that stops it from executing certain instructions in memory.
5. Sending commands to the firewall.

A central collection point allows for greater ease in analyzing logs, because all the log information is available at one location. Additionally, writing log data to a different system (the one on which the manager resides) from the one that produced it is advisable: if attackers destroy the log data on the original system, the data are still available on the central server. Some of the advantages of the multi-tiered architecture include greater efficiency and depth of analysis; some of the downsides include increased cost and complexity.

The architecture thus consists of three components, discussed in turn below:
1. Sensors
2. Agents or Analyzers
3. Manager Component
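The hand-off between these components can be sketched as follows. This is a minimal single-process illustration using queues, with looks_intrusive() standing in for the agent's actual analysis logic.

    import queue

    sensor_out = queue.Queue()   # sensor deposits captured events here
    manager_in = queue.Queue()   # manager consumes flagged events from here

    def tcp_agent():
        # A specialized agent: examines nothing but TCP events.
        while True:
            event = sensor_out.get()
            if event.get("protocol") == "tcp" and looks_intrusive(event):
                manager_in.put(event)   # manager alerts, logs, pages, etc.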
Sensors

Sensors are critical in intrusion-detection architectures; they are the beginning point of intrusion detection and prevention because they supply the initial data about potentially malicious activity. A deeper look at sensor functionality, deployment, and security will provide insight into exactly what sensors are and how they work.

Sensor Functions

Considering all the possible intrusion-detection components within a particular architecture, sensors are usually (but not always) the lowest-end components. In other words, sensors typically do not have very sophisticated functionality. They are usually designed only to obtain certain data and pass them on. There are two basic types of sensors: network-based and host-based sensors.

Network-Based Sensors

Network-based sensors, the more frequently deployed of the two types, are programs or network devices (such as physical appliances) that capture data in packets traversing a local Ethernet or token ring network or a network switching point. One of the greatest advantages of network-based sensors is the sheer number of hosts for which they can provide data. In an extreme case, one sensor might be used to monitor all traffic coming into and out of a network. If the network has a thousand hosts, the sensor can, in theory, gather data about misuse and anomalies in all thousand hosts. The cost-effectiveness of this approach is huge (although critics justifiably point out that a single sensor is also likely to miss a considerable amount of data that may be critical to an intrusion-detection effort, if the sensor does not happen to be recording traffic on the particular network route over which packets containing the data are sent). Additionally, if configured properly, sensors do not burden the network with much additional traffic, especially if two network interfaces - one for monitoring and the other for management - are used. A monitoring interface has no TCP/IP stack whatsoever, nor does it have any linkage to any IP address, both of which make it an almost entirely transparent entity on the network.

The programs that intrusion-detection tools most frequently use as sensors are tcpdump and libpcap. To reiterate, tcpdump captures data from packets and prints the headers of packets that match a particular filter (or Boolean) expression. Packet parameters that are particularly useful in intrusion detection and prevention are time, source and destination addresses, source and destination ports, TCP flags, the initial sequence number from the source IP for the initial connection, the ending sequence number, the number of bytes, and the window size. tcpdump is an application, whereas libpcap is a library called by an application. libpcap is designed to gather packet data from the kernel of the operating system and then move it to one or more applications - in this particular case, to intrusion-detection applications. For example, an Ethernet card may obtain packet data from a network. The underlying operating system over which libpcap runs will process each packet in many ways, starting with determining what kind of packet it is by removing the Ethernet header to get to the next layer up the stack. In all likelihood, the next layer will be the IP layer; if so, the IP header must be removed to determine the protocol at the next layer of the stack (in the case of IP, hexadecimal values of 1, 6, or 11 in the protocol field of the IP header indicate that the transport protocol is ICMP, TCP, or UDP (User Datagram Protocol), respectively). If the packet is a TCP packet, the TCP header is also removed and the contents of the packet are then passed on to the next layer up, the application layer. libpcap provides intrusion-detection applications with this data (payload) so that these applications can analyze the content to look for attack signatures, names of hacking tools, and so forth. libpcap is advantageous not only in that it provides a standard interface to these applications, but also because, like tcpdump, it is public domain software.

Host-Based Sensors

Host-based sensors, like network-based sensors, could possibly also receive packet data captured by network interfaces and then send the data somewhere. Instead of being set to promiscuous mode, the network interface on each host would have to be set to capture only data sent to that particular host. However, doing so would not make much sense, given the amount of processing of data that would have to occur on each host. Instead, most host-based sensors are programs that produce log data, such as Unix daemons or the Event Logger in Windows NT, 2000, XP, and Windows Server 2003. The output of these programs is sent (often through a utility such as scp, secure copy, which runs as a cron job, or through the Windows Task Scheduler) to an analysis program that runs either on the same host or on a central host. The program might look for events indicating that someone has obtained root privileges on a Unix system without entering the su (substitute user) command and the root password - a possible indication that an attacker has exploited a vulnerability to gain root privileges.
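A minimal host-based sensor along these lines might simply tail an authentication log and report failed su attempts. The sketch below assumes a Linux-style /var/log/auth.log and the "FAILED SU" message format used by common su implementations; both vary by system.

    import re, time

    PATTERN = re.compile(r"FAILED SU")    # message format varies by platform

    def watch(path="/var/log/auth.log"):
        with open(path) as log:
            log.seek(0, 2)                # start at the end of the file
            while True:
                line = log.readline()
                if not line:
                    time.sleep(1)         # wait for new log entries
                elif PATTERN.search(line):
                    print("possible privilege escalation:", line.strip())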
Sensor Deployment Considerations

Many sensors require that the host be running one or more network interfaces in promiscuous mode. On many current Unix systems, entering the command ifconfig <interface> will produce standard output that displays the IP address, the MAC address, the netmask, and other important parameters, including "PROMISC" if the interface is in promiscuous mode. Note that if there is only one network interface, it is not necessary to enter the name of the interface in question.
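On Linux, the same check can be scripted; the sketch below parses the flags printed by the ip link command, and on systems without it the ifconfig output described above can be searched for "PROMISC" in the same way.

    import subprocess

    def is_promiscuous(interface="eth0"):
        # "ip link show" prints a flag list such as <BROADCAST,PROMISC,UP,...>
        out = subprocess.run(["ip", "link", "show", interface],
                             capture_output=True, text=True).stdout
        return "PROMISC" in out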
Sensors can be placed outside of exterior firewalls, inside them, or both. Sensors outside exterior firewalls record information about Internet attacks. Web servers, FTP servers, external DNS servers, and mail servers are often placed outside of the firewall, making them much more likely to be attacked than other hosts. Placing these systems within an organization's internal network potentially makes them lesser targets, because being within the internal network at least affords some protection (such as one or more filtering barriers provided by firewalls and screening routers). At the same time, however, having these servers within the internal network will increase the traffic load for the internal network and will also expose the internal network more if any of these servers become compromised. Given that servers placed outside of the internal network are more vulnerable to attack, it is a good idea to place at least one network-based sensor in one or more demilitarized zones (DMZs).

Installing host-based sensors provides better precision of analysis, because all data gleaned by each sensor are for a particular host, and because the data indicate what traffic that host actually received (and also possibly how that host reacted to the input). In contrast, sensors placed at intermediate points on the network will record data about traffic that may or may not have actually reached the host. Additionally, host-based sensors are far more likely to provide information about insider attacks, especially if the attacker has had physical access to the target host.

Although network-based sensors (especially those deployed at external gateways to networks) provide a wide range of data, effectively covering many hosts, network-based sensors have a number of limitations.

One major concern is throughput rate. A sensor may receive so much input that it simply cannot keep up. Many types of sensors (such as those based on bpf) have difficulty handling throughput much greater than 350-400 Mbps, and a few have trouble with even lower input rates. Although a sensor may react by dropping excess packets, the sensor may also crash, yielding no data whatsoever until it is restarted. Alternatively, an overloaded sensor may cause excessive resource utilization on the machine on which it runs.

Additionally, in switched networks, network-based sensors cannot capture packet data simply by putting an interface in promiscuous mode - switched networks present significant hurdles to capturing packets. Obtaining packet data in switched networks thus requires that one of a number of special solutions be used. One such method is deploying a special kind of port known as a spanning port between a switch or similar device and a host used to monitor network traffic. Another is to place a hub between two switches, or between a switch and a router, or to simply tap the network traffic using a vampire or other type of tap.

Encrypted network traffic presents an even further level of complication. The most frequently used solution is placing a sensor at an endpoint where the traffic is in cleartext.

One possible solution for bandwidth problems in sensors is to install filters that limit the types of packets that the sensor receives. In our experience, of all the transport protocols (TCP, UDP, and ICMP) that can be captured and analyzed, TCP is the most important to examine because of its association with attack activity. In other words, given a choice between analyzing TCP, UDP, or ICMP traffic, TCP would often be the best single choice for intrusion-detection and intrusion-prevention purposes. A filter can be configured to limit input for one or more sensors to TCP traffic only. This solution is, of course, not optimal from an intrusion-detection perspective because it misses other potentially important data. But if sensors are becoming overwhelmed with traffic, this is a viable strategy.

A variation on this strategy is to install a filter that accepts only TCP traffic on a few sensors, to install a filter that accepts only UDP traffic on others, and to install still another filter that accepts only ICMP packets on yet other sensors. Alternatively, sensors can be removed from points in the network with very high throughput - removed from the external gateway and moved to gateways for internal subnets, for example (see figure 5). Doing this helps overcome throughput limitations in sensors, but it also diminishes the value of sensors in terms of their breadth of intrusion-detection data gathering.

Still another possibility is to modify sensors to sample input according to a probabilistic model if they become overloaded with packets. The rationale for doing so is that although many packets may be missed, at least a representative set of packets can be analyzed, yielding a realistic view of what is occurring on the network and serving as a basis for stopping attacks that are found at gateways and in individual hosts.

Host-based sensors can be placed at only one point - on a host - so the point within the network where this type of sensor is deployed is not nearly as much of an issue. As always, the benefits should outweigh the costs. The costs of deploying host-based sensors generally include greater financial cost (because of the narrower scope of host-based as opposed to network-based sensors), greater utilization of system resources on each system on which they are deployed, and the consequences of being blind to what is happening on a host if the sensor is disabled by an unauthorized party (especially if that host is a sensitive or valuable system). Although network-based sensors are generally used in DMZs, deploying a host-based sensor on a particularly critical public web server within a DMZ, for example, would be reasonable.

A hybrid approach - deploying network-based sensors both at external gateways and at gateways to subnets or within virtual local area networks (VLANs), and using host-based sensors where most needed - is in many cases the best approach to deploying sensors (see figure 6). This kind of sensor deployment ensures that packet data for traffic going into and out of the network, as well as at least some of the internal traffic, will be captured. If a sensor at an external gateway becomes overwhelmed with data, data capture within the network itself can still occur. Furthermore, although the network-based sensors at external gateways are unlikely to glean information about insider attacks, the internal network-based sensors are much more likely to do so.

Finally, if host-based sensors fail, there will at least be some redundancy - network-based sensors (especially the internally deployed ones) can provide some information about attacks directed at individual systems.
Agents (or Analyzers)

Agents are relatively new in intrusion detection, having been developed in the mid-1990s. Their primary function is to analyze input provided by sensors. Although many definitions exist, we'll define an agent as a group of processes that run independently and that are programmed to analyze system behavior or network events, or both, to detect anomalous events and violations of an organization's security policy. Each agent should ideally be a bare-bones implementation of a specialized function. Some agents may, for example, examine network traffic and host-based events rather generically, such as checking whether normal TCP connections have occurred, their start and stop times, and the amount of data transmitted, or whether certain services have crashed. Having agents that examine UDP and ICMP traffic is also desirable, but the UDP and ICMP protocols are stateless and connectionless. Other agents might look at specific aspects of application layer protocols such as FTP, TFTP, HTTP, and SMTP, as well as authentication sessions, to determine whether data in packets or system behavior is consistent with known attack patterns. Still others may do nothing more than monitor the performance of systems.

Our definition of an agent states that agents run independently. This means that if one agent crashes or is impaired in some manner, the others will continue to run normally (although they may not be provided with as much data as before). It also means that agents can be added to or deleted from the IDS or IPS as needed. In fact, in a small intrusion-detection or intrusion-prevention effort, perhaps only a few of the two dozen or so available agents may be deployed; in a much larger effort, perhaps all of them will be.

Although each agent runs independently on the particular host on which it resides, agents often cooperate with each other. Each agent may receive and analyze only one part of the data regarding a particular system, network, or device; agents normally share the information they have obtained with each other by using a particular communication protocol over the network, however. When an agent detects an anomaly or policy violation (such as a brute force attempt to su to root, or a massive flood of packets over the network), in most cases the agent will immediately notify the other agents of what it has found. This new information, combined with the information another agent already has, may cause that agent to report that an attack on another host has also occurred.

Agents sometimes generate false alarms, too, thereby misleading other agents, at least to some degree. The problem of false alarms is one of the proverbial vultures hovering over the entire intrusion-detection arena, and cooperating but false-alarm-generating agents can compound this problem. However, a good IDS or IPS will allow the data that agents generate to be inspected on a management console, allowing humans to spot false alarms and to intervene by weeding them out.

Agent Deployment Considerations

Decisions about the deployment of agents are generally easier to make than decisions concerning where to deploy sensors. Each agent can, and should, be configured for the operating environment in which it runs. In host-based intrusion detection, each agent generally monitors one host, although, as mentioned before, sometimes sensors on multiple hosts send data to one or more central agents. Choosing the particular hosts to monitor is thus the major dilemma in deciding on the placement of host-based agents. Most organizations that use host-based intrusion detection select "crown jewel" hosts, such as servers that are part of billing and financial transaction systems, more than any other. A few organizations also choose a few widely dispersed hosts throughout the network to supplement network-based intrusion detection. In network-based intrusion detection, agents are generally placed in two locations:

1. Where they are most efficient. Efficiency is related to the particular part of a network where connections to sensors and other components are placed: the more locally coresident the sensors and agents are, the better the efficiency.

2. Where they will be sufficiently secure. Placing agents in secure zones within networks, or at least behind one or more firewalls, is essential.
The Advantages and Disadvantages of Agents

The use of agents in intrusion detection and prevention has proven to be one of the greatest breakthroughs. Advantages include:

1. Adaptability. Having a number of small agents means that any of them can potentially be modified to meet the needs of the moment; agents can even be programmed to be self-learning, enabling them to deal with novel threats.

2. Efficiency. The simplicity of most agent implementations makes them more efficient than if each agent were to support many functions and embody a great deal of code.

3. Resilience. Agents can and do maintain state information even if they fail or their data source fails.

4. Independence. Agents are implemented to run independently, so if you lose one or two, the others will not be affected.

5. Scalability. Agents can readily be adapted to both large- and small-scale intrusion-detection and intrusion-prevention deployments.

6. Mobility. Some agents (believe it or not) may actually move from one system to another; agents might even migrate around networks to monitor network traffic for anomalies and policy violations.

There are some drawbacks to using agents, too:

1. Resource allocation. Agents cause system overhead in terms of memory consumption and CPU allocation.

2. False alarms. False alarms from agents can cause a variety of problems.

3. Time, effort, and resources needed. Agents need to be modified according to an organization's requirements, they must be tuned to minimize false alarms, and they must be able to run in the environment in which they are deployed - this requires time, effort, and financial and other resources.

4. Potential for subversion. A compromised agent is generally a far greater problem than a compromised sensor.

At a bare minimum, an agent needs to incorporate three functions or components:
1. A communications interface to communicate with other components of the IDS.
2. A listener that waits in the background for data from sensors and messages from other agents, and then receives them.
3. A sender that transmits data and messages to other components, such as other agents and the manager component, using established means of communication, such as network protocols.

Agents can also provide a variety of additional functions. Agents can, for example, perform correlation analyses on input received from a wide range of sensors. In some agent implementations, the agents themselves generate alerts and alarms. In still other implementations, agents access large databases to launch queries to obtain more information about specific source and destination IP addresses associated with certain types of attacks, times at which known attacks have occurred, frequencies of scans and other types of malicious activity, and so forth. From this kind of additional information, agents can perform functions such as tracking the specific phases of attacks and estimating the threat that each attack constitutes.

Although the types of additional functions that agents can perform may sound impressive, "beefing up" agents to do more than simple analysis is not necessarily advantageous. These additional functions can instead be performed by the manager component (to be discussed shortly), leaving agents free to do what they do best. Simplicity - in computer science jargon, the KISS ("keep it simple, stupid") principle - should be the overwhelming consideration with agents, provided, of course, that each agent implementation embodies the required functionality. Additionally, if resource utilization is already a problem with simple agents, think of the amount of resources multifunctional agents would use!
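A bare-bones agent built around the three required components listed above might look like the following sketch; the addresses, ports, and analyze() hook are invented for illustration.

    import socket

    LISTEN_ADDR = ("0.0.0.0", 9002)       # illustrative agent port
    MANAGER_ADDR = ("10.0.0.1", 9000)     # illustrative manager address

    def agent_loop():
        # Listener: wait in the background for data from sensors and agents.
        with socket.create_server(LISTEN_ADDR) as srv:
            while True:
                conn, _ = srv.accept()
                data = conn.recv(65535)
                if analyze(data):         # placeholder for the agent's one function
                    # Sender: pass findings on to the manager component.
                    with socket.create_connection(MANAGER_ADDR) as mgr:
                        mgr.sendall(data)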
Manager Component

The final component in a multi-tiered architecture is the manager (sometimes also known as the server) component. The fundamental purpose of this component is to provide an executive or master control capability for an IDS or IPS.

Manager Functions

We've seen that sensors are normally fairly low-level components and that agents are usually more sophisticated components that, at a minimum, analyze the data they receive from sensors and possibly from each other. Although sensors and agents are capable of functioning without a master control component, having such a component is extremely advantageous in helping all components work in a coordinated manner. Additionally, the manager component can perform other valuable functions, which we'll explore next.

Data Management

IDSs can gather massive amounts of data. One way to deal with this amount of data is to compress it (to help conserve disk space), archive it, and then periodically purge it. This strategy is in many cases flawed, however, because having online rather than archived data on storage media is often necessary to perform ongoing analyses. For example, you might notice suspicious activity from a particular internal host and wonder whether there has been similar activity over the last few months. Going to a central repository of data is preferable to having to find the media on which old data reside and restoring the data to one or more systems.

Having sufficient disk space for management purposes is, of course, a major consideration. One good solution is RAID (Redundant Array of Inexpensive Disks), which writes data to multiple disks and provides redundancy in case any disk fails. Another option is optical media, such as WORM drives (although performance is an issue).

Ideally, the manager component of an IDS or IPS will also organize the stored data. A relational database, such as an Oracle or Sybase database, is well suited for this purpose. Once a database is designed and implemented, new data can be added on the fly, and queries can be made against database entries.

Alerting

Another important function that the manager component can perform is generating alerts whenever events that constitute high levels of threat occur (such as a compromise of a Windows domain controller or a Network Information Service (NIS) master server, or of a critical network device, such as a router). Agents are designed to provide detection capability, but agents are normally not involved in alerting because it is more efficient to do so from a central host. Agents instead usually send information to a central server that sends alerts whenever predefined criteria are met. This requires that the server not only contain the addresses of operators who need to be notified, but also have an alerting mechanism.

Normally, alerts are sent either via e-mail or via the Unix syslog facility. If sent via e-mail, the message content should be encrypted using PGP (Pretty Good Privacy) or some other form of message encryption. Attackers who discover the content of messages concerning detected intrusions or shunned IP addresses can adjust their strategies (such as using a different source IP address if the one they have been using is now blocked), thereby increasing their efficiency. The syslog facility's main advantage is flexibility - syslog can send messages about nearly anything to just about anybody if desired. Encrypting syslog content is a much bigger challenge than encrypting e-mail message content, however. Fortunately, a project called syslog-ng is expected to provide encryption solutions for syslog-related traffic. Additionally, the syslog server should ideally keep an archive of the alerts that have been issued, in case someone needs to inspect the contents of previous alerts.
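A manager's alerting mechanism can be as simple as the sketch below, which reports through the two channels just mentioned. The addresses are placeholders, and, as noted above, the message body should be encrypted (for example with PGP) before a real deployment mails it.

    import smtplib, syslog
    from email.message import EmailMessage

    def send_alert(summary):
        syslog.syslog(syslog.LOG_ALERT, summary)   # Unix syslog facility
        msg = EmailMessage()
        msg["Subject"] = "IDS alert"
        msg["From"] = "ids@example.org"            # placeholder addresses
        msg["To"] = "oncall@example.org"
        msg.set_content(summary)                   # encrypt this in practice
        with smtplib.SMTP("localhost") as server:
            server.send_message(msg)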
Event Correlation

Another extremely important function of the manager component is correlating events that have occurred to determine whether they have a common source, whether they were part of a series of related attacks, and so forth. Still another function that the manager component may perform is high-level analysis of the events that the intrusion-detection or intrusion-prevention tool discovers. The manager component may, for example, track the progression of each attack from stage to stage, starting with the preparatory (doorknob-rattling) stage. Additionally, this component can analyze the threat that each event constitutes, sending notification to the alert-generation function whenever a threat reaches a certain specified value. Sometimes high-level analysis is performed by a neural network or expert system that looks for patterns in large amounts of data.

Monitoring Other Components

We've seen previously how important it is to have a monitoring component that checks the health of sensors and agents. The manager is the ideal component in which to place this function because (once again) this function is most efficient if it is centralized. The manager can, for instance, send packets to each sensor and agent to determine whether each is responsive to input on the network. Better yet, the manager can initiate connections to each sensor and agent to determine whether each is up and running. If the manager component determines that any other component has failed, it can notify its alerting facility to generate an alert. In host-based intrusion detection, the manager can monitor each host to ensure that logging or auditing is functioning correctly. The manager component can also track utilization of system and network resources, generating an alert if any system or any part of the network is overwhelmed.
Policy Generation and Distribution

Another function that is often embedded in the manager component is policy generation and distribution. In the context of the manager component, policy refers to settings that affect how the various components of an intrusion-detection or intrusion-prevention system function. A policy could be set, for example, to activate all agents or to move an agent from one machine to another.

Security Management and Enforcement

Security management and enforcement is one of the most critical functions that can be built into the manager component.

Management Console

Providing an interface for users through a management console is yet another function of the manager component. This function, like most of the others covered in this section, makes a huge difference in the value of an IDS to an organization. The management console should display critical information - alerts, the status of each component, data in individual packets, audit log data, and so forth - and should also allow operators to control every part of an IDS. For example, if a sensor appears to be sending corrupted data, an operator should be able to quickly shut that sensor down using the management console.

Manager Deployment Considerations

One of the most important deployment considerations for the manager component is ensuring that it runs on extremely high-end hardware (with a large amount of physical memory and a fast processor) and on a proven and reliable operating system platform (such as Solaris or Red Hat Linux). Continuous availability of the manager component is essential - any downtime generally renders an IDS totally worthless. Using RAID and deploying redundant servers in case one fails are additional measures that can help assure continuous availability.
INFORMATION FLOW IN IDS

Raw Packet Capture

IDS internal information flow starts with raw packet capture. This involves not only capturing packets, but also passing the data to the next component of the system. Promiscuous mode means a NIC picks up every packet at the point at which it interfaces with network media. To be in non-promiscuous mode means a NIC picks up only packets bound for its particular MAC address, ignoring the others. Non-promiscuous mode is appropriate for host-based intrusion detection and prevention, but not for network-based intrusion detection and prevention. A network-based intrusion detection system normally has two NICs: one for raw packet capture and a second to allow the host on which the system runs to have network connectivity for remote administration.

The IDS must save the raw packets that are captured so they can be processed and analyzed at some later point. In most cases, the packets are held in memory long enough for initial processing activities to occur and, soon afterwards, are either written to a file or a data structure (to make room in memory for subsequent input) or discarded. IDSs typically experience all kinds of problems, but one of the most common is packet loss. A frequent variation of this problem is that the NIC used to capture packets receives packets much faster than the CPU of the host on which the IDS runs is capable of despooling them. A good solution is simply to deploy higher-end hardware.

Another problem is that the IDS itself cannot keep up with the throughput rate. Throughput is a much bigger problem than most IDS vendors publicly acknowledge - some of the best-selling products have rather dismal input processing rates. One solution is to filter out some of the input that would normally be sent to the IDS.
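On Linux, raw packet capture of the kind described above can be sketched with an AF_PACKET socket, which delivers whole Ethernet frames to user space (root privileges are required; 0x0003 is ETH_P_ALL, meaning every protocol):

    import socket

    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.ntohs(0x0003))      # every protocol, every frame
    frame, _ = sock.recvfrom(65535)                 # one raw Ethernet frame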
Filtering

Filtering means limiting the packets that are captured according to a certain logic, based on characteristics such as type of packet, IP source address range, and others. Especially in very high-speed networks, the rate of incoming packets can be overwhelming and can necessitate limiting the types of packets captured. Alternatively, an organization might be interested in only certain types of incoming traffic, perhaps (as often occurs) only TCP traffic because, historically, more security attacks have been TCP-based than anything else.

Filtering raw packet data can be done in several ways. The NIC itself may be able to filter incoming packets. Although early NICs (such as the 3COM 3C501 card) did not have filtering capabilities, modern and more sophisticated NICs do. The driver for the network card may be able to take bpf rules and apply them to the card; the filtering rules are specified in the configuration of the driver itself. This kind of filtering is not likely to be as sophisticated as the bpf rules themselves, however.

Another method of filtering raw packet data is using packet filters to choose and record only certain packets, depending on the way the filters are configured. tcpdump, for example, offers packet filtering via the bpf interpreter. You can configure a filter that limits the particular types of packets that will be processed further. The bpf interpreter receives all the packets, but it decides which of them to send on to applications. In most operating systems filtering is done in kernel space, but in others (such as Solaris) it is done in user space (which is less efficient, because packet data must be pushed all the way up the OSI stack to the application layer before they can be filtered). Operating systems with the bpf interpreter in the kernel are thus often the best candidates for IDS and IPS host platforms, although Solaris has an equivalent capability in the form of its streams mechanism.
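As a concrete example of bpf-style filtering, the sketch below simply drives tcpdump with a filter expression that admits only TCP traffic - the single-protocol strategy discussed earlier; the interface name is illustrative.

    import subprocess

    # Capture 100 TCP packets on eth0 without name resolution.
    subprocess.run(["tcpdump", "-i", "eth0", "-n", "-c", "100", "tcp"])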
Packet Decoding

Packets are subsequently sent to a series of decoder routines that define the packet structure for the layer-two (datalink) data (Ethernet, Token Ring, or IEEE 802.11) collected through promiscuous monitoring. The packets are then further decoded to determine whether the packet is an IPv4 packet (which is the case when the first nibble of the IP header is 4), whether the header carries no options (which is the case when the second nibble, the header length, is 5), or whether it is an IPv6 packet (where the version nibble is 6), as well as the source and destination IP addresses, the TCP and UDP source and destination ports, and so forth.

Some IDSs, such as Snort, go even further in packet decoding in that they allow checksum tests to determine whether the packet header contents coincide with the checksum value in the header itself. Checksum verification can be done for any one, any combination, or all of the IP, TCP, UDP, and ICMP protocols. The downside of performing this kind of verification is that today's routers frequently perform checksum tests and drop packets that do not pass, so performing yet another checksum test within an IDS or IPS takes its toll on performance and is, in all likelihood, unnecessary.
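The nibble and field tests described above are easy to express directly. The following sketch decodes an Ethernet frame just far enough to recover the IP version, header length, transport protocol, and addresses; it assumes IPv4 over Ethernet and does no error handling.

    import socket, struct

    def decode(frame):
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x0800:            # not an IPv4 payload
            return None
        ip = frame[14:]                    # strip the 14-byte Ethernet header
        version = ip[0] >> 4               # first nibble: 4 for IPv4
        ihl = ip[0] & 0x0F                 # second nibble: 5 means no options
        proto = ip[9]                      # 1 = ICMP, 6 = TCP, 17 = UDP
        src = socket.inet_ntoa(ip[12:16])
        dst = socket.inet_ntoa(ip[16:20])
        return version, ihl, proto, src, dst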
Storage

Once each packet is decoded, it is often stored, either by saving its data to a file or by assimilating it into a data structure while, at the same time, the data are cleared from memory. Storing data to a file (such as a binary spool file) is rather simple and intuitive because "what you see is what you get." New data can simply be appended to an existing file, or a new file can be opened and then written to.

But writing intrusion detection data to a file also has some significant disadvantages. For one thing, it is cumbersome to sort through the great amount of data within one or more files to find particular strings of interest or to perform data correlation. Additionally, the amount of data likely to be written to a hard drive or other storage device presents a disk space management challenge. An alternative is to set up data structures, one for each protocol analyzed, and overlay these structures on the packet data by creating and linking pointers to them.

Taking this latter approach is initially more complicated, but it makes accessing and analyzing the data much easier. Still another alternative is to write to a hash table to condense the amount of data substantially. You could, for example, take a source IP address, determine how many different ports that address has connected to, along with any other information that might be relevant to detecting attacks, and then hash the data. The hash data can serve as a shorthand for events that detection routines can later access and process.
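The hash-table idea can be reduced to a few lines: key the table by source address and keep only the condensed facts (here, the set of destination ports contacted) rather than whole packets. The threshold is illustrative.

    from collections import defaultdict

    ports_by_source = defaultdict(set)

    def record(src_ip, dst_port):
        ports_by_source[src_ip].add(dst_port)
        if len(ports_by_source[src_ip]) > 100:    # illustrative scan threshold
            print("possible port scan from", src_ip)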
Fragment Reassembly

Decoding "makes sense" out of packets, but this, in and of itself, does not solve all the problems that need to be solved for an IDS to process packets properly. Packet fragmentation poses yet another problem for IDSs. A reasonable percentage of network traffic consists of packet fragments with which firewalls, routers, switches, and IDSs must deal. Hostile fragmentation - packet fragmentation used to attack other systems or to evade detection mechanisms - can take several forms:

One packet fragment can overlap another in such a manner that, when the fragments are reassembled, subsequent fragments overwrite parts of the first one instead of being reassembled in their "natural" sequential order. Overlapping fragments are often indications of attempted denial-of-service (DoS) attacks or of IDS or firewall evasion attempts (if none of these devices knows how to deal with packets of this nature, they will be unable to process them further).

Packets may also be improperly sized. In one variation of this condition, the fragments are excessively large - greater than 65,535 bytes - and thus likely to trigger abnormal conditions, such as excessive CPU consumption, in the hosts that receive them. Excessively large packets usually represent attempts to produce DoS; an example is the "ping of death" attack, in which many oversized packets are sent to victim hosts, causing them to crash. Alternatively, the packet fragments could be excessively short, such as less than 64 bytes. In what is often called a tiny fragment attack, the attacker fabricates and then sends packets broken into tiny pieces. If the fragments are sufficiently small, part of the header information is displaced into multiple fragments, leaving incomplete headers. Network devices and IDSs may not be able to process these headers. In the case of firewalls and screening routers, the fragments could be passed through to their destination even though, had the packet not been fragmented, it might not have been allowed through. Or, having to reassemble so many small packets could consume a huge amount of memory, causing DoS.

Stream Reassembly

Stream reassembly means taking the data from each TCP stream and, if necessary, reordering it (primarily on the basis of packet sequence numbers), so it is the same as when it was sent by the transmitting host and as received by the destination host. This requires determining when each stream starts and stops, which is not difficult given that TCP communications between any two hosts begin with a SYN packet and end with either a RST (reset) or a FIN/ACK packet.
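Reordering by sequence number can be sketched as follows; a real implementation must also handle sequence wraparound, retransmissions, and overlapping segments.

    def reassemble(segments):
        # segments: list of (sequence_number, payload) for one TCP stream
        data = bytearray()
        for _, payload in sorted(segments):
            data.extend(payload)
        return bytes(data)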
Stream reassembly is especially important when data arrive at the IDS in a different order from the original one. This is a critical step in getting data ready to be analyzed, because IDS recognition mechanisms cannot work properly if the data taken in by the IDS are scrambled. Stream reassembly also facilitates detection of out-of-sequence scanning methods. Figure 7 shows how the various types of information processing covered so far are related to each other in a host that does not support bpf: the NIC collects packets and sends them to drivers that interface with the kernel; the kernel decodes, filters, and reassembles fragmented packets, and then reassembles streams; the output is passed on to applications.
PROS AND CONS OF IDS

The pros of IDS are as follows:
1. Detects external hackers and network-based attacks.
2. Offers centralized management for correlation of distributed attacks.
3. Provides the system administrator with the ability to quantify attacks.
4. Provides an additional layer of protection.
5. Provides defense in depth.

The cons of IDS are as follows:
1. Generates false positives and negatives.
2. Requires full-time monitoring.
3. Is expensive.
4. Requires highly skilled staff.
THE FUTURE OF INTRUSION DETECTION

Intrusion detection fits in with a layered defense approach, and intrusion detection technology is still growing and improving. Two things are certain: intrusion detection is still a long way from being mature, and massive changes are in store. There are several areas within intrusion detection in which substantial and beneficial progress is likely to occur. These areas include the following:
1. The continued reduction in reliance on signatures in intrusion detection
2. The growth of intrusion prevention
3. Advances in data correlation and alert correlation methods
4. Advances in source determination
5. Inclusion of integrated forensics functionality in IDSs
6. Greater use of honeypots

Lower Reliance on Signature-Based Intrusion Detection

The signature approach to intrusion detection, which traces back to the early 1990s, represents a major advance over the previous statistical-based approaches of the 1980s. Signatures are not only a relatively straightforward and intuitive approach to intrusion detection, they are also efficient - often a set of only a few hundred signatures can result in reasonably high detection rates. Signature-based IDSs have proven popular and useful, so much so that you can count on some of these tools being available for a long time. Signature-based intrusion detection is beset with numerous limitations, however, including the following:
1. Because attacks have to occur before their signatures can be identified, signatures cannot be used in discovering new attacks. The "white hat" community is thus always one step behind the "black hat" community when it comes to new attack signatures.
2. Many signatures in IDSs are badly outdated. One can always "weed out" obsolete signatures, but doing so requires a reasonable amount of unnecessary effort; good IDS vendors do not include such signatures in their products' signature sets in the first place.
3. Some attacks do not have single distinguishing signatures, but rather a wide range of possible variations. Each variation could conceivably be incorporated into a signature set, but doing so inflates the number of signatures, potentially hurting IDS performance. Additionally, keeping up with each possible variation is, for all practical purposes, an impossible task.
4. Signatures are almost useless in network-based IDSs when network traffic is encrypted.
5. The black hat community is becoming increasingly better at evading signature-based IDSs.
Intrusion Prevention

Intrusion prevention is another area that will grow dramatically in the future; it is currently in its infancy. Anyone who thinks that IPSs and IDSs are diametrically opposed, or that IPSs will eventually supplant IDSs, is badly mistaken, however. An IDS is like a burglar alarm: something that provides information about past and ongoing activity, facilitating risk and threat assessment as well as investigations of suspicious and possibly wrongful activity. IPSs are designed to be defensive measures that stop, or at least limit, the negative consequences of attacks on systems and networks, not to yield the wealth of information that IDSs typically deliver.

One of the major new offshoots of intrusion prevention is called "active defense." Active defense means analyzing the condition of systems and networks and doing whatever is appropriate to deal with whatever is wrong. According to Dave Dittrich of the University of Washington, there are four levels of active defense:
1. Local data collection, analysis, and blocking
2. Remote collection of external data
3. Remote collection of internal data
4. Remote data alteration, attack suppression, and "interdiction"

One of the most important (and controversial) facets of the active defense approach to intrusion prevention is determining the appropriate response. The notion of appropriate response includes a consideration called "proportionality of response," which ensures that the response is proportional to the threat. In the case of a host that is flooding a network with fragmented packets, blocking traffic sent from that host is almost certainly the most appropriate response. If several dozen hosts known to be operated by an ISP repeatedly attack an organization's network, blocking all the traffic from the range of IP addresses owned by that ISP might be the most appropriate response. Some advocates of the active defense approach even believe that if a remote host is repeatedly attacking an organization's network, counterattacking that host - perhaps by flooding it with fragmented packets, thereby causing it to crash - is the appropriate course of action. Although intrusion prevention appears promising, it is (as mentioned) very much in its infancy. Attack stave-off rates for intrusion prevention systems are nowhere near as high as they need to be to pose a major deterrent to attacks. Additionally, false alarms can easily cause what effectively amounts to DoS within individual systems.

Intrusion prevention systems of the future are likely to be able to prevent a wider range of attacks, not only at the level of the individual host, but also within organizations' networks and possibly even within the Internet itself. The last possibility is particularly intriguing. Perhaps some organization such as the U.S. government's federal incident response team, FedCIRC, will continuously monitor all traffic bound for U.S. government sites and selectively stop malicious packets long before they reach the gateways of the sites for which they are destined.

Data and Alert Correlation

Data correlation is becoming increasingly important. IDSs, firewalls, personal firewalls, and TCP wrappers are each capable of generating large amounts of data; collectively, they are capable of overwhelming intrusion detection analysts with data. Data aggregation helps ensure that data are available in a single location; data correlation enables analysts to recognize patterns in these data. Although current data correlation methods are for the most part not very sophisticated, future data correlation is likely to become much better. How will data correlation algorithms need to change? Waltz and Llinas (Multisensor Data Fusion, Boston: Artech House, 1990) have developed criteria for systems designed to fuse data, saying that these systems must be able to do the following:
1. Distinguish parameters of interest from noise
2. Distinguish among different objects in space and time
3. Adequately track and capture each desired type of event and data
4. Sample the data and events of interest with sufficient frequency
5. Provide accurate measurements
6. Ensure that each variable that is measured adequately represents the desired types of categories
7. Provide access to both raw and correlated data
8. Preserve the temporal characteristics of data and events

It is unlikely that all systems designed to fuse data will meet every one of these requirements; the more of these requirements a system meets, however, the more useful in data fusion and correlation it is likely to be. Currently, one of the greatest barriers to automated data fusion has been the lack of a common format for data from intrusion detection systems. Although common formats have been proposed, little agreement has resulted. Agreement upon a single data format would thus constitute a giant step forward.
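A first-cut correlation pass over aggregated alert records might look like the sketch below, which groups alerts by source address and flags sources reported by more than one device within a time window; the record layout and window are invented for illustration.

    from collections import defaultdict

    def correlate(alerts, window=3600):
        # alerts: iterable of (timestamp, device, source_ip, event) tuples
        by_source = defaultdict(list)
        for ts, device, src, event in alerts:
            by_source[src].append((ts, device, event))
        for src, hits in by_source.items():
            hits.sort()
            devices = {d for _, d, _ in hits}
            if len(devices) > 1 and hits[-1][0] - hits[0][0] <= window:
                yield src, sorted(devices)   # one source, several reporters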
Source Determination

Source determination means determining the origin of network traffic. Given how easy it is to spoof IP addresses, any source IP address in conventional IP packets must be viewed with suspicion. Tools that fabricate packets, inserting any desired IP address into the IP headers, are freely available on the Internet. Many countermeasures, most notably strong authentication methods (such as the use of smart cards) and digital signatures, can remove doubt concerning the identity of individuals who initiate transactions, but they are not designed to identify the source IP addresses from which transactions originate. IPsec, the secure IP protocol, effectively removes any doubt concerning the validity of IP source addresses, but IPsec has, unfortunately, not grown in popularity in proportion to its many merits.

Integrated Forensics Capabilities

Forensics means using special procedures that preserve evidence for legal purposes. When people think of forensics, they normally envision investigators archiving the contents of hard drives to a machine that runs forensics software, making hard copies of audit logs, and labeling and bagging peripherals such as keyboards and mice. Many people fail to realize that IDSs are potentially one of the best sources of forensics data, especially if the IDSs capture and store keystrokes. A few IDS vendors are starting to build forensics capabilities into their products - capabilities that enable those who use the systems to make copies of IDS output, create a hash value of the output (to ensure its integrity), search it for special keywords or graphic content, and so on.
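Creating a hash value of IDS output, as mentioned above, takes only a few lines; SHA-256 is used here as one reasonable choice, and the file name is a placeholder.

    import hashlib

    def fingerprint(path="ids-output.log"):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()   # record this alongside the evidence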

Use of Honeypots in Intrusion Detection

A honeypot is a decoy server that looks and acts like a normal server, but that does not run or support normal server functions. The main purpose of deploying honeypots is to observe the behavior of attackers in a safe environment, one in which there is (at least in theory) no threat to normal, operational systems. Having proven especially useful as a reconnaissance tool that yields information concerning what kinds of attacks are occurring and how often, honeypots have gained a great deal of acceptance within the information security arena.

CONCLUSION

The intrusion detection and intrusion prevention arenas are extremely dynamic, with new findings, functions, and models being created all the time. A considerable amount of research on data visualization methods for intrusion detection data is also currently being conducted. At some point, the major breakthroughs from this research will be incorporated into the IDSs of the future, resulting in output that is much more useful in identifying threat magnitudes, patterns of elements within incidents, and so forth. Watch for the many changes that are currently occurring, or are about to occur, with great anticipation.