Study Guide
April, 2008
Version 1.0.13
RCSP Study Guide
Table of Contents
Preface ..................................................................................................................................................................................................................... 3
Certification Overview ............................................................................................................................................................................................ 3
Benefits of Certification......................................................................................................................................................................................... 3
Exam Information.................................................................................................................................................................................................. 3
Certification Checklist ........................................................................................................................................................................................... 4
Recommended Resources for Study.................................................................................................................................................................... 4
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE .............................................................................................................. 6
I. General Knowledge ............................................................................................................................................................................................. 6
Optimizations Performed by RiOS........................................................................................................................................................................ 6
TCP/IP ................................................................................................................................................................................................................ 10
Common Ports.................................................................................................................................................................................................... 10
RiOS Auto-discovery Process ............................................................................................................................................................................ 11
Connection Pooling............................................................................................................................................................................................. 12
In-path Rules ...................................................................................................................................................................................................... 12
Peering Rules ..................................................................................................................................................................................................... 13
Steelhead Appliance Models and Capabilities ................................................................................................................................................... 15
II. Deployment ....................................................................................................................................................................................................... 16
In-path................................................................................................................................................................................................................. 17
Virtual In-path ..................................................................................................................................................................................................... 18
PBR..................................................................................................................................................................................................................... 18
WCCP Deployments........................................................................................................................................................................................... 19
Advanced WCCP Configuration ......................................................................................................................................................................... 20
Server-Side Out-of-Path Deployments ............................................................................................................................................................... 22
Asymmetric Route Detection .............................................................................................................................................................................. 24
Connection Forwarding....................................................................................................................................................................................... 25
Simplified Routing............................................................................................................................................................................................... 26
Datastore Synchronization.................................................................................................................................................................................. 26
Authentication and Authorization........................................................................................................................................................................ 27
Central Management Console (CMC) ................................................................................................................................................................ 28
III. Features ............................................................................................................................................................................................................ 31
Feature Licensing ............................................................................................................................................................................................... 31
HighSpeed TCP (HSTCP) .................................................................................................................................................................................. 31
Quality of Service................................................................................................................................................................................................ 33
PFS (Proxy File Service) Deployments .............................................................................................................................................................. 36
NetFlow............................................................................................................................................................................................................... 41
IPSec .................................................................................................................................................................................................................. 43
Operation on VLAN Tagged Links...................................................................................................................................................................... 43
IV. Troubleshooting .............................................................................................................................................................................................. 45
Common Deployment Issues.............................................................................................................................................................................. 45
Reporting and Monitoring ................................................................................................................................................................................... 47
Troubleshooting Best Practices.......................................................................................................................................................................... 50
V. Exam Questions ............................................................................................................................................................................................... 52
Types of Questions............................................................................................................................................................................................. 52
Sample Questions .............................................................................................................................................................................................. 52
VI. Appendix .......................................................................................................................................................................................................... 56
Acronyms and Abbreviations .............................................................................................................................................................................. 56
Preface
This Riverbed Certification Study Guide is aimed at anyone who wants to become certified in
the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed
Certified Solutions Professional (RCSP) program is designed to validate the skills required of
technical professionals who work in the implementation of the Riverbed Steelhead products.
This study guide provides a combination of theory and practical experience needed for a general
understanding of the subject matter. It also provides sample questions that will help in the
evaluation of personal progress and provide familiarity with the types of questions that will be
encountered in the exam.
This publication does not replace practical experience, nor is it designed to be a stand-alone
guide for any subject. Instead, it is an effective tool that, when combined with education
activities and experience, can be a very useful preparation guide for the exam.
Certification Overview
The Riverbed Certified Solutions Professional certificate is granted to individuals who
demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP
will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment
& Management course in addition to having hands-on experience in performing deployment,
troubleshooting, and maintenance of RiOS products in small, medium, and large organizations.
While there are no set requirements prior to taking the exam, candidates who have taken a
Riverbed training class and have at least six months of hands-on experience with RiOS products
have a significantly higher chance of receiving the accreditation. We would like to emphasize
that solely taking the class will not adequately prepare you for the exam.
To obtain the RCSP certification, you are required to pass a computerized exam available at any
Pearson VUE testing center worldwide.
Benefits of Certification
1. Establishes your credibility as a knowledgeable and capable individual in regard to
Riverbed's products and services.
2. Helps improve your career advancement potential.
3. Qualifies you for discounts and benefits for Riverbed sponsored events and training.
4. Entitles you to use the Riverbed certification logo on your business card.
Exam Information
Exam Specifications
• Exam Number: 199-01
• Exam Name: Riverbed Certified Solutions Professional
• Version of RiOS: Up to RiOS version 3.x
• Number of Questions: 65
• Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total)
• Exam Provider: Pearson VUE
• Exam Language: English only. Riverbed allows a 30-minute time extension for English
exams taken in non-English speaking countries for candidates who request it. English-speaking
countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland,
South Africa, and the United States. A form must be completed by the candidate and
submitted to Pearson VUE.
• Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or
ADA accommodations; includes time extensions and/or a reader)
• Offered Locations: Worldwide (4,200 testing locations)
• Pre-requisites: None (although taking a Riverbed training class is highly recommended)
• Available to: Everyone (partners, customers, employees, etc)
• Passing Score: 700 out of 1000 (70%)
• Certification Expires: Every 2 years (recertification required; no grace period)
• Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.
• Cost: $150.00 (USD)
• Number of Attempts Allowed: Unlimited (though statistics are kept)
Certification Checklist
As the RCSP exam is geared towards individuals who have both the theoretical knowledge and
hands on experience with the RiOS product suite, ensuring proficiency in both areas is crucial
towards passing the exam. For individuals starting out with the process, we recommend the
following steps to guide you along the way:
1. Building Theoretical Knowledge
The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting
the RiOS product suite is to take a Riverbed sanctioned training class. To ensure the greatest
possibility of passing the exam, it is recommended that you review the RCSP Study Guide
and ensure your familiarity with all topics listed, prior to any examination attempts.
2. Gaining Hands-on Experience
While the theoretical knowledge will get you halfway there, it's the hands-on knowledge that
can get you over the top and allow you to pass the exam. Since all deployments are different,
providing an exact amount of experience required is difficult. Generally, we recommend that
resellers and partners perform at least five deployments in a variety of technologies prior to
attempting the exam. For customers, and alternatively for resellers and partners, starting from
the design and deployment phase and having at least six months of experience in a
production environment would be beneficial.
3. Taking the Exam
The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing
center. To register for any Riverbed Certification exam, please visit
http://www.pearsonvue.com/riverbed.
Recommended Resources for Study
Riverbed Training Courses
Information on Riverbed Training can be found at: http://www.riverbed.com/support/training/.
• Steelhead Appliance Deployment and Management
Publications
Recommended Reading (In No Particular Order)
• This study guide
• Riverbed documentation
o Steelhead Management Console User's Guide
© 2007-2008 Riverbed Technology, Inc. All rights reserved.
WFS (Windows File System), the binary representation of the file is basically the same and thus
references can be sent for that file.
Lempel-Ziv (LZ) Compression
SDR and compression are two different features and can be switched on and off separately.
However, LZ compression is the primary form of data reduction for cold transfers.
The Lempel-Ziv compression methods are among the most popular algorithms for lossless
data compression. Compression is turned on by default. In-path rules can be used to define which
optimization features will be used for which set of traffic flowing through the Steelhead
appliance.
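To make the lossless property concrete, here is a small Python sketch using the standard-library zlib module (DEFLATE, which combines LZ77 with Huffman coding). This is purely illustrative of LZ-family behavior on repetitive data; it is not the compression implementation used inside RiOS.

```python
import zlib

# LZ-family compressors exploit repetition: a highly repetitive payload
# shrinks dramatically, and decompression restores it byte-for-byte.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 100

compressed = zlib.compress(payload)     # DEFLATE = LZ77 + Huffman coding
restored = zlib.decompress(compressed)  # lossless: exact round trip

print(len(payload), len(compressed))
assert restored == payload
```

On a "cold" transfer, where SDR has no prior references to send, this kind of generic compression is what provides the data reduction.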
TCP Optimizations & Virtual Window Expansion (VWE)
As Steelhead appliances are designed to optimize data transfers across wide area networks, they
make extensive use of standards-based enhancements to the TCP protocol that may not be
present in the TCP stack of many desktop and server operating systems. This includes improved
transport capability for high bandwidth delay product networks via the use of HighSpeed TCP,
TCP Vegas for lower bandwidth links, partial acknowledgements, and other more obscure but
throughput enhancing and latency reducing features.
VWE allows Steelhead appliances to repack TCP payloads with references that represent
arbitrary amounts of data. This is possible because unlike other compression products, Steelhead
appliances operate at the Application Layer and terminate TCP, which gives them more
flexibility in the way they optimize WAN traffic.
Essentially, the amount of data carried within a TCP window is increased from its normal
size to an arbitrarily large amount. Because of this expanded payload, a given application that relies on TCP performance
(for example, HTTP or FTP) takes fewer trips across the WAN to accomplish the same task. For
example, consider a client-to-server connection that may have a 64KB TCP window. In the event
that there is 256KB of data to transfer, it would take several TCP windows to accomplish this in
a network with high latency. With SDR however, that 256KB of data can be potentially reduced
to fit inside a single TCP window, removing the need to wait for acknowledgements to be sent
prior to sending the next window, and thus speed the transfer.
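The 64KB/256KB example above can be sketched as back-of-the-envelope arithmetic. The 100 ms round-trip time is an assumed value for illustration, and the model is deliberately naive (real TCP overlaps transmission with acknowledgements):

```python
# Naive round-trip math for the window example in the text.
window = 64 * 1024   # advertised TCP window, bytes
data = 256 * 1024    # bytes to transfer
rtt = 0.1            # assumed 100 ms WAN round-trip time

round_trips = data // window        # 4 window-sized bursts
naive_time = round_trips * rtt      # ~0.4 s, dominated by latency

# If SDR reduces the 256 KB of data to fit inside one window,
# a single burst (one RTT) suffices.
optimized_time = 1 * rtt
print(round_trips, naive_time, optimized_time)
```

The point is that on high-latency links the transfer time is governed by the number of window-sized round trips, not by raw bandwidth, which is why shrinking the data to fit one window speeds the transfer.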
Transaction Prediction (TP)
Latency optimization is delivered through TP. TP leverages an intimate understanding of
protocol semantics to reduce the chattiness that would normally occur over the WAN. By acting
on foreknowledge of specific protocol request-response mechanisms, Steelhead appliances
streamline the delivery of data that would normally be delivered in small increments through
large numbers of interactions between the client and server over the WAN. As transactions are
executed between the client and server, the Steelhead appliances intercept each transaction,
compare it to the database of past transactions, and make decisions about the probability of
future events.
Based on this model, if a Steelhead appliance determines there is a high likelihood of a future
transaction occurring, it performs that transaction, rather than waiting for the response from the
server to propagate back to the client and then back to the server. Dramatic performance
improvements result from the time saved by not waiting for each serial transaction to arrive prior
to making the next request. Instead, the transactions are pipelined one right after the other.
Of course, transactions are executed by Steelhead appliances ahead of the client only when it is
safe to do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the
underlying protocols that tells them when such execution is safe. Fortunately, a wide range of common
applications have very predictable behaviors and, consequently, TP can enhance WAN
performance significantly. When combined with SDR, TP improves overall WAN performance
up to 100 times.
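The "compare it to the database of past transactions" idea can be sketched as a toy predictor that remembers which request tends to follow which and prefetches the likely successor instead of waiting a full WAN round trip. This is only an illustration of the concept; the actual RiOS model is far richer and protocol-aware, and the request names here are hypothetical.

```python
from collections import defaultdict

class TransactionPredictor:
    """Toy sketch: count request successions, predict the most
    frequent follower of a given request."""
    def __init__(self):
        self.follows = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, request):
        # Record that `request` followed the previous request.
        if self.prev is not None:
            self.follows[self.prev][request] += 1
        self.prev = request

    def predict(self, request):
        # Most frequently observed successor, or None if unknown.
        candidates = self.follows.get(request)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

p = TransactionPredictor()
for r in ["open", "read", "read", "close", "open", "read"]:
    p.observe(r)
print(p.predict("open"))
```

A real implementation would only act on a prediction when the protocol semantics guarantee it is safe, exactly as the text above stresses.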
Common Internet File System (CIFS) Optimization
CIFS is a proposed standard protocol that lets programs make requests for files and services on
remote computers on the Internet. CIFS uses the client/server programming model. A client
program makes a request of a server program (usually in another computer) for access to a file or
to pass a message to a program that runs in the server computer. The server takes the requested
action and returns a response. CIFS is a public or open variation of the Server Message Block
Protocol developed and used by Microsoft.
In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you only disable
CIFS optimization to troubleshoot the system.
Overlapping Opens
Due to the way certain applications handle the opening of files, file locks are not properly
granted to the application in such a way that would allow a Steelhead appliance to optimize
access to that file using TP. To prevent any compromise to data integrity, the Steelhead
appliance only optimizes data to which exclusive access is available (in other words, when locks
are granted). When an oplock is not available, the Steelhead appliance does not perform
application-level latency optimizations but still performs SDR and compression on the data as
well as TCP optimizations. The CIFS overlapping opens feature remedies this problem by having
the server-side Steelhead handle file locking operations on behalf of the requesting application. If
you disable this feature, the Steelhead appliance will still increase WAN performance, but not as
effectively.
Enabling this feature on applications that perform multiple opens of the same file to complete an
operation will result in a performance improvement (for example, CAD applications):
• Optimize the Following Extensions. Specify a list of extensions you want to optimize using
overlapping opens. The default values are: doc, pdf, ppt, sldasm, slddrw, slddwg, sldprt, txt,
vsd, xls.
• Do Not Optimize the Following Extensions. Specify a list of extensions you do not want to
optimize using overlapping opens. The default values are: ldb, mdb.
NOTE: If a remote user opens a file that is optimized using the overlapping opens feature and
a second user opens the same file, the second user might receive an error if the connection does
not pass through a v3.x Steelhead appliance or does not pass through a Steelhead appliance at
all (for example, certain applications that are sent over the LAN). If this occurs, you should
disable overlapping opens for those applications.
Messaging Application Programming Interface (MAPI) Optimization
MAPI optimization is enabled by default. Only uncheck this box if you want to disable MAPI
optimization. Typically, you disable MAPI optimization to troubleshoot problems with the
system. For example, if you are experiencing problems with Microsoft Outlook clients
connecting to Exchange, you can disable MAPI latency acceleration (while continuing to
optimize with SDR for MAPI).
• Read ahead on attachments
• Read ahead on large emails
• Write behind on attachments
server, the settings are applied to all volumes on that server unless you override settings for
specific volumes.
• Read-ahead and read caching (checks freshness with modify date)
• Write-behind
• Metadata prefetching and caching
• Convert multiple requests into one larger request
• Special symbolic link handling
TCP/IP
General Operation
Steelhead appliances are typically placed on two ends of the WAN as close to the client and
server as possible (no additional WAN links between the end node and the Steelhead appliance).
By placing Steelhead appliances in the network, the TCP session between client and server can
be intercepted, giving the appliances a level of control over the session. TCP sessions must be
intercepted in order to be optimized, so the Steelhead appliances must see all
traffic from source to destination and back. For any given optimized session, there are three
distinct sessions. There is a TCP connection between the client and the client-side Steelhead
appliance, between the server and the server-side Steelhead appliance, and finally a connection
between the two Steelhead appliances.
Common Ports
Ports Used by RiOS
Port Type
7744 Datastore Sync port
7800 In-path port
7801 NAT port
7810 Out-of-path port
7820 Failover port
7830 MAPI Exchange 2003 port
7840 NSPI port
7850 Connection forwarding (neighbor) port
Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
Port Type
513 Remote Login
514 Shell
1494, 2598 Citrix
3389 MS WBT, TS/Remote Desktop
5631 PC Anywhere
5900 - 5903 VNC
6000 X11
Secure Ports Commonly Passed through by Default on Steelhead Appliances (Partial List)
Port Type
22/TCP ssh
49/TCP tacacs
443/TCP https
465/TCP smtps
563/TCP nntps
585/TCP imap4-ssl
614/TCP sshell
636/TCP ldaps
989/TCP ftps-data
990/TCP ftps
992/TCP telnets
993/TCP imaps
995/TCP pop3s
1701/TCP l2tp
1723/TCP pptp
3713/TCP tftp over tls
sending a TCP SYN/ACK back. After auto-discovery has taken place, the Steelhead appliances
continue to set up the TCP inner session and the TCP outer sessions.
[Figure: auto-discovery packet sequence between the Client, the client-side Steelhead (CSH), the server-side Steelhead (SSH), and the Server]
1. Client → Server: IP(C)→IP(S):SYN (intercepted by the client-side Steelhead appliance)
2. CSH → Server: IP(C)→IP(S):SYN+Probe
3. SSH → CSH: IP(SSH)→IP(CSH):SYN/ACK (the server-side Steelhead appliance is listening on port 7800)
4. CSH → SSH: IP(CSH)→IP(SSH):ACK, followed by setup information for the inner session
5. SSH → Server: IP(C)→IP(S):SYN; Server → SSH: IP(S)→IP(C):SYN/ACK; the connect result is returned to the CSH; SSH → Server: IP(C)→IP(S):ACK
6. CSH → Client: IP(S)→IP(C):SYN/ACK; Client → CSH: IP(C)→IP(S):ACK; the connect result is cached until failure
TCP Option
The TCP option used for auto-discovery is 0x4C which is 76 in decimal format. The client-side
Steelhead appliance attaches a 10 byte option to the TCP header; the server-side Steelhead
appliance attaches a 14 byte option in return. Note that this is only done in the initial discovery
process and not during connection setup between the Steelhead appliances and the outer TCP
sessions.
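The kind/length framing of the probe option follows standard TCP option layout (1-byte kind, 1-byte total length, then data), so the 10-byte and 14-byte figures imply 8 and 12 bytes of option data respectively. The sketch below only illustrates that framing; the contents of the probe body are Riverbed-internal and are represented here by placeholder zero bytes.

```python
import struct

PROBE_KIND = 0x4C  # 76 decimal: TCP option kind used for auto-discovery

def build_option(body: bytes) -> bytes:
    """Frame a TCP option: 1-byte kind, 1-byte total length, then body.
    Total length includes the kind and length bytes themselves."""
    return struct.pack("BB", PROBE_KIND, 2 + len(body)) + body

# Client-side probe is 10 bytes total (8 bytes of body, contents
# placeholder here); server-side reply is 14 bytes (12 bytes of body).
client_probe = build_option(b"\x00" * 8)
server_probe = build_option(b"\x00" * 12)
print(len(client_probe), len(server_probe))
```

Parsing works the same way in reverse: read the kind byte, then the length byte, then consume length minus two bytes of option data.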
Connection Pooling
General Operation
By default, all auto-discovered Steelhead appliance peers will have a default connection pool of
20. The connection pool is a user configurable value which can be configured for each Steelhead
appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner
session between the Steelhead appliances across the high latency WAN. By pre-creating these
sessions between peer Steelhead appliances, when a new connection request is made by a client,
the client-side Steelhead appliance can simply use the connections in the pool. Once a
connection is pulled from the pool, a new connection is created to take its place so as to maintain
the specified number of connections.
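The take-one, replace-one behavior described above can be sketched as follows. This is a toy model of the pooling idea only; the class name, the default size of 20 (taken from the text), and the string "connections" are illustrative, not RiOS internals.

```python
from collections import deque

class ConnectionPool:
    """Toy sketch: keep N pre-built inner connections to a peer;
    whenever one is taken, immediately create a replacement so the
    pool stays at its configured size."""
    def __init__(self, peer, size=20, connect=None):
        self.peer = peer
        self.size = size
        # `connect` stands in for the expensive WAN TCP handshake.
        self.connect = connect or (lambda p: f"conn-to-{p}")
        self.pool = deque(self.connect(peer) for _ in range(size))

    def take(self):
        conn = self.pool.popleft()                 # hand out a pre-built connection
        self.pool.append(self.connect(self.peer))  # refill to maintain the size
        return conn

pool = ConnectionPool("steelhead-b", size=20)
c = pool.take()
print(c, len(pool.pool))
```

Because the handshake happened ahead of time, a new client connection pays no extra WAN round trip to establish its inner session.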
In-path Rules
General Operation
In-path rules allow a client-side Steelhead appliance to determine what action to perform when
intercepting a new client connection (the first TCP SYN packet for a connection). The action
taken depends on the type of in-path rule selected and is outlined in detail below. It is important
to note that the rules are matched based on source/destination IP information, destination port,
and/or VLAN, and are processed from the first rule in the list to the last (top down). The rules
processing stops when the first rule matching the parameters specified is reached, at which point
the action selected by the rule is taken. In version 3.x, Steelhead appliances have three
passthrough rules by default, and a fourth implicit rule to auto-discover remote Steelhead
appliances; traffic that does not match the first three rules is auto-discovered and optimized.
The three default passthrough rules cover port groupings matching interactive traffic (for
example, Telnet, VNC, RDP), encrypted traffic (for example, ssh, https), and Riverbed internal
ports (for example, 7800, 7810).
Different Types and Their Function
• Passthrough. Passthrough rules identify traffic that is passed through the network
unoptimized. For example, you may define passthrough rules to exclude subnets from
optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode.
(Passthrough might occur because of in-path rules, because the connection was established
before the Steelhead appliance was put in place, or before the Steelhead service was
enabled.)
• Fixed Target. Fixed-target rules specify out-of-path Steelhead appliances near the target
server that you want to optimize. Determine which servers you want the Steelhead appliance
to optimize (and, optionally which ports), and add rules to specify the network of servers,
ports, port labels, and out-of-path Steelhead appliances to use.
• Auto-discovery. Auto-discovery is the process by which the Steelhead appliance
automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-
discovery is applied to all IP addresses and the ports which are not secure, interactive, or
default Riverbed ports. Defining in-path rules modifies this default setting.
• Discard. Packets for the connection that match the rule are dropped silently. The Steelhead
appliance filters out traffic that matches the discard rules. This process is similar to how
routers and firewalls drop disallowed packets; the connection-initiating device has no
knowledge of the fact that its packets were dropped until the connection times out.
• Deny. When packets for connections match the deny rule, the Steelhead appliance actively
tries to reset the connection. With deny rules, the Steelhead appliance actively tries to reset
the TCP connection being attempted. Using an active reset process rather than a silent
discard allows the connection initiator to know that its connection is disallowed.
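The top-down, first-match evaluation described above can be sketched as a simple matcher. The field names (src, dst, dport, vlan) and the rule set below are illustrative simplifications of the in-path rule parameters, not the full RiOS rule syntax, and a `None` field means "match anything".

```python
def first_match(rules, conn):
    """Return the action of the first rule whose specified fields all
    match the connection; evaluation is top-down and stops at the
    first hit. Unmatched traffic falls through to auto-discovery,
    modeling the implicit final rule."""
    for rule in rules:
        if all(rule.get(k) in (None, conn[k])
               for k in ("src", "dst", "dport", "vlan")):
            return rule["action"]
    return "auto-discover"

# Simplified stand-ins for the three default passthrough rules.
rules = [
    {"dport": 23,   "action": "passthrough"},  # interactive (Telnet)
    {"dport": 443,  "action": "passthrough"},  # encrypted (https)
    {"dport": 7800, "action": "passthrough"},  # Riverbed internal
]
conn = {"src": "10.0.0.5", "dst": "10.1.0.9", "dport": 80, "vlan": None}
print(first_match(rules, conn))
```

Rule ordering therefore matters: placing a broad passthrough rule above a narrower auto-discovery rule would shadow the narrower rule entirely.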
Peering Rules
Applicability and Conditions of Use
Configuring peering rules defines what to do when a Steelhead appliance receives an auto-
discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited
to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an
intermediary Steelhead appliance (or server-side) will have no effect in preventing optimization
with a client-side Steelhead appliance if it is using a fixed target rule designating the
intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a
fixed target rule).
[Figure: Client and Steelhead1 connect across WAN 1 to Steelhead2 and Server1, which connect across WAN 2 to Steelhead3 and Server2]
Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be
optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as
Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1
and Steelhead3.
• You do not need to define any rules on Steelhead1 or Steelhead3.
• Add peering rules on Steelhead2 to process connections normally going to Server1 and to
pass through all other connections so that connections to Server2 are not optimized by
Steelhead2.
• A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in
place by default (by default connection to destination port 7800 is included by port label
“RBT-Proto”).
This configuration causes connections going to Server1 to be intercepted by Steelhead2, and
connections going to anywhere else to be intercepted by another Steelhead appliance (for
example, Steelhead3 for Server2).
Overcoming Peering Issues Using Fixed-Target Rules
If you do not enable automatic peering or define peering rules as described in the previous
sections, you must define:
• A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2.
• A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same
site as Steelhead1.
• If you have multiple branches that go through Steelhead2, you must add a fixed-target rule
for each of them on Steelhead1 and Steelhead3.
II. Deployment
Deployment Methods
Physical In-path
In a physical in-path deployment, the Steelhead appliance is physically in the direct path between
clients and servers. The clients and servers continue to see client and server IP addresses and the
Steelhead appliance bridges traffic from its LAN-facing side to its WAN-facing side (and vice
versa). Physical in-path configurations are suitable for any location where the total bandwidth is
within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. It
is generally one of the simplest deployment options and among the easiest to maintain.
Logical In-path
In a logical in-path deployment, the Steelhead appliance is logically in the path between clients
and servers. In a logical in-path deployment, clients and servers continue to see client and server
IP addresses. This deployment differs from a physical in-path deployment in that a packet
redirection mechanism is used to direct packets to Steelhead appliances that are not in the
physical path of the client or server.
Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication
Protocol (WCCP), and Policy Based Routing (PBR).
Server-Side Out-of-Path
A server-side out-of-path deployment is a network configuration in which the Steelhead
appliance is not in the direct or logical path between the client and the server. Instead, the server-
side Steelhead appliance is connected through the primary interface and listens on port 7810 to
connections coming from client-side Steelhead appliances. In an out-of-path deployment, the
Steelhead appliance acts as a proxy. Unlike an in-path deployment, which preserves the client’s
IP address so that the server sees the original client, the server-side out-of-path Steelhead
appliance source-NATs connections to its primary interface address. A server-side out-of-path
configuration is suitable for data center locations when
physical in-path or logical in-path configurations are not possible or when certain forms of NAT
are done between Steelhead appliances. With server-side out-of-path, client IP visibility is no
longer available to the server (due to the NAT) and optimization initiated from the server side is
not possible (since there is no redirection of packets to the Steelhead appliance).
Physical Device Cabling
Steelhead appliances have multiple physical and virtual interfaces. The primary interface is
typically used for management purposes, data store synchronization (if applicable), and for
server-side out-of-path configurations. The primary interface can be assigned an IP address and
connected to a switch. You would use a straight-through cable for this configuration.
The LAN and WAN interfaces are purely L1/L2 and cannot be assigned IP addresses. Instead, a
logical L3 interface is created. This is the “in-path” interface, and it is named on a per-slot,
per-port basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot 0 with just one
LAN and one WAN interface has a logical interface called inpath0_0. A 4-port card in slot 1
gets the in-path interfaces inpath1_0 and inpath1_1, each representing a pair of LAN/WAN
ports.
Inpath1_0 will represent LAN1_0 and WAN1_0. Inpath1_1 will represent LAN1_1 and
WAN1_1.
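A minimal sketch of configuring a logical in-path interface from the CLI (the addresses are hypothetical, and exact syntax varies by RiOS version):

```
HOSTNAME (config) # interface inpath0_0 ip address 10.1.0.5 /24
HOSTNAME (config) # ip in-path-gateway inpath0_0 10.1.0.1
```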
For a physical in-path deployment, when connecting the LAN and WAN interfaces to the
network, treat each of them as if it were a router port: use a crossover cable when connecting to
a router, host, or firewall, and a straight-through cable when connecting to a switch. The
Steelhead appliance supports auto-MDIX (medium dependent interface crossover); however,
using the wrong cables risks breaking the link between the components the Steelhead appliance
is placed between, especially while in bypass, because those components may not support
auto-MDIX.
For a virtual in-path deployment, only the WAN interface needs to be connected. The LAN
interface does not need to be connected and is shut down automatically as soon as the virtual
in-path option is enabled in the Steelhead appliance’s configuration.
For server-side out-of-path deployments only the primary interface needs to be connected.
In-path
In-path Networks
Physical in-path configurations are suitable for locations where the total bandwidth is within the
limits of the installed Steelhead appliance or serial cluster of Steelhead appliances.
The Steelhead appliance can be physically connected to both access ports and trunks. When the
Steelhead appliance is placed on a trunk, the in-path interface must be able to tag its traffic with
the correct VLAN number. The supported trunking protocol is 802.1Q. A tag can be assigned via
the GUI or the CLI. The CLI command is:
HOSTNAME (config) # in-path interface inpathx_x vlan xxx
There are several variations of in-path deployment. Steelhead appliances can be placed in
series for redundancy. Peering rules based on the peer IP address must be applied to both
Steelhead appliances so that they do not peer with each other. When using 4-port cards, and thus
multiple in-path IP addresses, all of the addresses must be listed in the rules to avoid peering.
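A sketch of such a peering rule, which passes through peering attempts from the other appliance in the series (the peer in-path IP is hypothetical; syntax varies by RiOS version):

```
HOSTNAME (config) # in-path peering rule pass peer 10.1.0.6 rulenum start
```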
A serial cluster is a failover design that mitigates the risk of network instability and outages
caused by a single Steelhead appliance failure (typically because, once data reduction stops,
bandwidth demand exceeds the WAN link capacity). When the maximum number
of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new
connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept
the new connections, if it has not reached its maximum number of connections. The in-path
peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not
to intercept connections between themselves.
Appliances in a failover deployment process the peering rules you specify in a spill-over fashion.
A keepalive mechanism between the two Steelhead appliances monitors each other’s status and
sets one appliance to the master state and the other to backup. It is recommended to make the
LAN-side Steelhead appliance the master because of the amount of pass-through traffic between
the Steelhead appliance and the client or server. Optionally, data stores can be synchronized to
ensure warm performance in case of a failure.
If the Steelhead appliances are deployed in parallel with each other, measures must be taken
to prevent asymmetric traffic from being passed through without optimization. This usually
occurs when the network has two or more routing points with traffic spread over the links
simultaneously. Connection forwarding can be used to exchange flow information between
the Steelhead appliances in the parallel deployment, and multiple Steelhead appliances can be
bundled together this way.
Virtual In-path
Introduction to Virtual In-path Deployments
In a virtual in-path deployment, the Steelhead is virtually in the path between clients and servers.
Traffic moves in and out of the same WAN interface. This deployment differs from a physical
in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead
appliances that are not in the physical path of the client or server.
Redirection mechanisms:
• Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you
have multiple Steelhead appliances in your network to manage large bandwidth
requirements.
• PBR. Policy-Based Routing (PBR) enables you to redirect traffic to a Steelhead appliance
that is configured as a virtual in-path device. PBR allows you to define policies to redirect
packets instead of relying on routing protocols. You define policies to redirect traffic to the
Steelhead appliance and policies to avoid loopback.
• WCCP. WCCP (Web Cache Communication Protocol) was originally implemented on Cisco
routers, multi-layer switches, and web caches to redirect HTTP requests to local web caches
(version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of
connection from multiple routers or web caches and different ports.
PBR
Introduction to PBR
PBR is a router configuration that allows you to define policies to route packets instead of
relying on routing protocols. It is enabled on an interface basis and packets coming into a PBR-
enabled interface are checked to see if they match the defined policies. If they do match, the
packets are routed according to the rule defined for the policy. If they do not match, packets are
routed based on the usual routing table. The rules can redirect the packets to a specific IP
address.
To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic is
arriving and disabled on the interfaces corresponding to the Steelhead appliance. The common
best practice is to place the Steelhead appliance on a separate subnet.
One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a
destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a
way of tracking whether the PBR next hop is available. You can enable this tracking feature in a
route map with the following Cisco router command:
set ip next-hop verify-availability
With this command, PBR attempts to verify the availability of the next hop using information
from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR
checks availability in the following manner:
1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see
if the IP address of the next hop appears to be available. If so, it sends an Address Resolution
Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next
hop (the Steelhead appliance).
2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains
answers from the ARP request for the next hop IP address. If the ARP request fails to obtain
an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer
uses the route map to send traffic. This verification provides a failover mechanism.
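The pieces above can be combined into a single hedged Cisco IOS sketch (the interface name, ACL number, and Steelhead in-path IP address are hypothetical examples):

```
access-list 101 permit tcp any any
!
route-map STEELHEAD permit 10
 match ip address 101
 set ip next-hop verify-availability
 set ip next-hop 10.1.0.5
!
interface FastEthernet0/0
 ip policy route-map STEELHEAD
```

TCP traffic arriving on FastEthernet0/0 is redirected to the Steelhead appliance at 10.1.0.5 only while CDP confirms the next hop is alive; otherwise the route map is skipped and normal routing applies.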
In more recent versions of Cisco IOS software, there is a feature called PBR with Multiple
Tracking Options. In addition to the older CDP-based method, it allows methods such as HTTP
and ping to be used to determine whether the PBR next hop is available. Using CDP allows you
to run on older IOS 12.x versions.
WCCP Deployments
Introduction to WCCP
WCCP is a stateful protocol that the router and Steelhead appliance use to redirect traffic to the
Steelhead appliance for optimization. Several functions have to be covered to make it stateful
and scalable: failover, load distribution, and negotiation of connection parameters all have to be
communicated throughout the cluster that the Steelhead appliances and routers form upon
successful negotiation. The protocol has four messages to cover these functions:
• Here I am. Sent by Steelhead appliances to announce themselves.
• I see you. Sent by WCCP enabled routers to respond to announcements.
• Redirect Assign. Sent by the designated Steelhead appliance to determine flow distribution.
• Removal Query. Sent by router to check a Steelhead appliance after missed “here I am”
messages.
When you configure WCCP on a Steelhead appliance:
1. Routers and Steelhead appliances are added to the same service group.
2. Steelhead appliances announce themselves to the routers.
3. Routers respond with their view of the service group.
4. One Steelhead appliance becomes the designated cache engine (CE) and tells the routers how
to redirect traffic among the Steelhead appliances in the service group.
How Steelhead Appliances Communicate With Routers
Steelhead appliances can use one of the following methods to communicate with routers:
• Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If
additional routers are added to the service group, they must be added on each Steelhead
appliance.
• Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional
routers are added, you do not need to add or change configuration settings on the Steelhead
appliances.
Redirection
By default, all TCP traffic is redirected. Optionally, a redirect list can be defined so that only
traffic matching the redirect list is redirected. In a WCCP configuration, a redirect list refers to
an ACL configured on the router to select the traffic to be redirected.
Traffic is redirected using one of the following schemes:
• GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet
with the Steelhead appliance IP address configured as the destination. This scheme is
applicable to any network.
• L2 (Layer-2). Each packet MAC address is rewritten with a Steelhead appliance MAC
address. This scheme is possible only if the Steelhead appliance is connected to a router at
Layer-2.
• Either. The either value uses L2 (Layer-2) first—if Layer-2 is not supported, GRE is used.
This is the default setting.
You can configure your Steelhead appliance to not encapsulate return packets. This allows your
WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send
GRE-return packets, but to actually send L2-return packets. This configuration is optional but
recommended when connected directly at Layer-2. The command to override WCCP packet
return negotiation is wccp l2-return enable. Be sure the network design permits this.
Load Balancing and Failover
WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the
weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to
distribute traffic for a Service Group across the member Steelhead appliances. It is the
responsibility of the Service Group's designated Steelhead appliance to assign each router's
Redirection Hash Table. The designated Steelhead appliance uses a
WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables.
This message is generated following a change in Service Group membership and is sent to the
same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages.
A router flushes its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not
received within 5 × HERE_I_AM_T seconds of a Service Group membership change. The
hash algorithm can use several different input fields to produce an 8-bit output (the bucket
value). The default input fields are the source and destination IP addresses of the redirected
packet; the source and destination TCP ports, or any combination of these fields, can also be
used.
The weight determines the percentage of traffic a Steelhead appliance in a cluster receives; the
hashing algorithm determines which flow is redirected to which Steelhead appliance. The
default weight is based on the Steelhead appliance model number: the weight is heavier for
models that support more connections. You can modify the default weight if desired.
With weights you can also create an active/passive cluster by assigning a weight of 0 to the
passive Steelhead appliance. That appliance only receives traffic when the active Steelhead
appliance fails.
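To illustrate the weight mechanism (an illustrative sketch only, not Riverbed's actual assignment algorithm), the following Python snippet divides the 256 hash buckets among appliances in proportion to their weights; note how a weight of 0 produces a passive appliance that receives no buckets:

```python
def bucket_counts(weights):
    """Split the 256-bucket Redirection Hash Table among appliances
    in proportion to their weights (largest-remainder rounding)."""
    total = sum(weights.values())
    if total == 0:
        return {name: 0 for name in weights}
    exact = {name: 256 * w / total for name, w in weights.items()}
    counts = {name: int(share) for name, share in exact.items()}
    # hand any remaining buckets to the largest fractional remainders
    leftover = 256 - sum(counts.values())
    for name in sorted(exact, key=lambda n: exact[n] - counts[n],
                       reverse=True)[:leftover]:
        counts[name] += 1
    return counts

# equal weights -> equal share of buckets
print(bucket_counts({"sh1": 1, "sh2": 1}))        # {'sh1': 128, 'sh2': 128}
# active/passive cluster: weight 0 receives no buckets
print(bucket_counts({"active": 10, "passive": 0}))  # {'active': 256, 'passive': 0}
```

The real redirection decision then hashes each packet's source/destination addresses (and optionally ports) to one of the 256 buckets, and the bucket's owner intercepts the flow.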
Advanced WCCP Configuration
Using Multicast Groups
If you add multiple routers and Steelhead appliances to a service group, you can configure them
to exchange WCCP protocol messages through a multicast group. Configuring a multicast group
is advantageous because if a new router is added, it does not need to be explicitly added on each
Steelhead appliance.
Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Router
On the router, at the system prompt, enter the following set of commands:
Router> enable
Router# configure terminal
Router(config)# ip wccp 90 group-address 224.0.0.3
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# ip wccp 90 group-listen
Router(config-if)# end
Router# write memory
NOTE: Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Steelhead Appliance
On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:
WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp mcast-ttl 10
WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
WCCP Steelhead (config) # write memory
WCCP Steelhead (config) # exit
Limiting Redirection by TCP Port
By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to
tell the router to redirect only certain TCP source or destination ports. You can specify up to a
maximum of seven ports per service group.
Using Access Lists for Specific Traffic Redirection
If redirection is based on traffic characteristics other than ports, you can use ACLs on the router
to define what traffic is redirected.
ACL considerations:
• ACLs are processed in order, from top to bottom. As soon as a particular packet matches a
statement, it is processed according to that statement and the packet is not evaluated against
subsequent statements. Therefore, the order of your access-list statements is very important.
• If no port information is explicitly defined, all ports are assumed.
• By default all lists include an implied deny all entry at the end, which ensures that traffic that
is not explicitly included is denied. You cannot change or delete this implied entry.
Access Lists: Best Practice
To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL
that routes only TCP traffic to the Steelhead appliance. When a WCCP configured Steelhead
appliance receives UDP, GRE, ICMP, and other non-TCP traffic, it returns the traffic to the
router.
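Following this best practice, a sketch of a router-side redirect list that sends only TCP traffic to the service group (the ACL number and service group ID are examples):

```
access-list 120 permit tcp any any
ip wccp 90 redirect-list 120
```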
Verifying and Troubleshooting WCCP Configuration
Checking the Router Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip wccp
Router#show ip wccp 90 detail
Router#show ip wccp 90 view
Verifying WCCP Configuration on an Interface
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip interface
Look for WCCP status messages near the end of the output.
You can trace WCCP packets and events on the router.
Checking the Access List Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show access-lists <access_list_number>
Tracing WCCP Packets and Events on the Router
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#debug ip wccp events
WCCP events debugging is on
Router#debug ip wccp packets
WCCP packet info debugging is on
Router#term mon
A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session
is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When
enabling out-of-path on the server-side Steelhead appliance, it starts listening on port 7810 for
incoming connections from a client-side Steelhead appliance.
Since the server-side Steelhead appliance is not in the path between the client and the server, it
does not preserve the client’s IP address. The server sees the IP address of the Steelhead
appliance as the source of the connection, so the packets are returned to the Steelhead appliance
instead of to the client. This is necessary to make sure that the bidirectional traffic is seen by the
Steelhead appliance. Also keep in mind that optimization only occurs when the TCP connection
is initiated by the client.
The connection setup sequence between the client, the client-side Steelhead appliance (CSH),
the server-side Steelhead appliance (SSH), and the server is as follows (the SSH listens on port
7810):
1. IP(C) → IP(S): SYN (SEQ1), intercepted by the client-side Steelhead appliance.
2. IP(CSH) → IP(SSH): SYN; IP(SSH) → IP(CSH): SYN/ACK; IP(CSH) → IP(SSH): ACK.
Setup information is exchanged over this inner connection to port 7810.
3. IP(SSH) → IP(S): SYN (SEQ2); IP(S) → IP(SSH): SYN/ACK; IP(SSH) → IP(S): ACK. The
connect result is returned to the client-side Steelhead appliance and cached until failure.
4. IP(S) → IP(C): SYN/ACK; IP(C) → IP(S): ACK.
[Figure: a firewall/VPN connected to the Steelhead appliance’s Primary (PRI) interface, with a
DMZ containing an FTP server and a web server.]
In this hybrid design, a client-side Steelhead appliance (not shown) would use the typical auto-
discovery process to optimize any data going to or coming from the clients shown. If, however,
a remote user would like optimization to the DMZ shown above, the standard auto-discovery
process would not function properly, since the packet flow would prevent the auto-discovery
probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule matching
the destination address of the DMZ and targeted at the Primary (PRI) interface of the Steelhead
appliance ensures that the traffic reaches the Steelhead appliance and, due to the server-side
out-of-path NAT process, returns to the Steelhead appliance for optimization on the return path.
Asymmetric Route Detection
Asymmetric auto-detection enables Steelhead appliances to detect the presence of asymmetry
within the network. Asymmetry is detected by the client-side Steelhead appliances. Once
detected, the Steelhead appliance will pass through asymmetric traffic unoptimized allowing the
TCP connections to continue to work. The first TCP connection for a pair of addresses might be
dropped because during the detection process the Steelhead appliances have no way of knowing
that the connection is asymmetric.
If asymmetric routing is detected, an entry is placed in the asymmetric routing table and any
subsequent connections from that IP address pair will be passed through unoptimized. Further
connections between these hosts are not optimized until that particular asymmetric routing cache
entry times out.
Type: Complete Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server, but
bypass both Steelhead appliances on the return path.
Asymmetric Routing Table entry: bad RST
Log entry: Sep 5 11:16:38 gen-sh102 kernel: [intercept.WARN] asymmetric routing between
10.11.111.19 and 10.11.25.23 detected (bad RST)
Connection Forwarding
In asymmetric networks, a client request traverses a different network path from the server
response. Although the packets traverse different paths, to optimize a connection, packets
traveling in both directions must pass through the same client and server Steelhead appliances.
If you have one path (through Steelhead-2) from the client to the server and a different path
(through Steelhead-3) from the server to the client, you need to enable in-path connection
forwarding and configure the Steelhead appliances to communicate with each other. These
Steelhead appliances are called neighbors and exchange connection information to redirect
packets to each other.
You can configure multiple neighbors for a Steelhead appliance. Neighbors can be placed in the
same physical site or in different sites, but the latency between them should be small because the
packets traveling between them are not optimized.
When a SYN arrives on Steelhead-2, it sends a message to Steelhead-3 on port 7850 telling it
that Steelhead-2 is expecting packets for that connection. Steelhead-3 acknowledges, and once
Steelhead-2 gets the confirmation from Steelhead-3 it continues with the SYN+ out to the WAN.
When the SYN/ACK+ comes back, if it arrives at Steelhead-3, Steelhead-3 encapsulates that
packet and forwards it back to Steelhead-2. Once the connection has been established, there is
no more encapsulation between the two Steelhead appliances for that flow.
If a subsequent packet arrives on Steelhead-3, it performs a destination IP/port rewrite: the
Steelhead appliance simply changes the destination IP of the packet to that of the neighbor
Steelhead appliance. No encapsulation is involved later in the flow.
In WCCP deployments, connection forwarding can also be used to prevent outages when the
cluster changes and the redirection table is reassigned. The default behavior of connection
forwarding is that when a neighbor is lost, the Steelhead appliance that lost the neighbor also
passes through its connections, since it assumes asymmetric routing of traffic. In WCCP
deployments this assumption does not hold, and this behavior should be avoided. The command
in-path neighbor allow-failure overrides the default behavior and allows the
Steelhead appliances to continue optimizing. Understanding the implications of this command
before configuring it in a production environment is recommended.
Commands to enable connection forwarding:
in-path neighbor enable
in-path neighbor ip address x.x.x.x
in-path neighbor allow-failure {optional}
For neighbors with multiple in-path interfaces, only the IP address of the first in-path interface
has to be specified.
Simplified Routing
Simplified routing learns, from each packet the Steelhead appliance receives, the mapping
between an IP address and its next-hop MAC address, and uses these mappings when addressing
traffic. Enabling simplified routing eliminates the need to add static routes when the Steelhead
appliance is in a different subnet from the client and the server.
Without simplified routing, if a Steelhead appliance is installed in a different subnet from the
client or server, you must define one router as the default gateway and optionally define static
routes for the other subnets.
Without static routes or other forms of routing intelligence, packets can end up flowing through
the Steelhead appliance twice, causing packet ricochet. This can potentially lead to broken QoS
models, firewalls blocking packets, and decreased performance. Enabling simplified routing
eliminates these issues.
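A sketch of enabling simplified routing from the CLI (the available keywords, such as dest-only, vary by RiOS version):

```
HOSTNAME (config) # in-path simplified routing dest-only
```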
Datastore Synchronization
In a serial failover scenario the data stores are not synchronized by default. When the master
Steelhead appliance fails, the backup Steelhead appliance takes over, but users will experience
cold performance again. Datastore synchronization can be turned on to exchange data store
content, via either the primary or the auxiliary interface. The synchronization process runs on
port 7744, and the reconnect timer is set to 30 seconds. Datastore synchronization can only
occur between identical Steelhead appliance models, and only in pairs of two.
The commands to enable automatic datastore synchronization are:
HOSTNAME (config) #datastore sync peer-ip "x.x.x.x"
HOSTNAME (config) #datastore sync port "7744"
HOSTNAME (config) #datastore sync reconnect "30"
HOSTNAME (config) #datastore sync master
HOSTNAME (config) #datastore sync enable
If you have not deployed datastore synchronization it is also possible to manually send the data
from one Steelhead appliance to another. The receiving Steelhead appliance will have to start a
listening process on the primary/auxiliary interface. The sending Steelhead appliance will have
to push the data to the IP address of the primary interface.
The commands to start this are:
HOSTNAME (config) # datastore receive port xxxx
HOSTNAME (config) # datastore send addr x.x.x.x port xxxxx
might create many configuration-oriented groups that are related to the profile settings (Base,
CIFS, MAPI); and many reporting groups in addition to the group all, perhaps based again on
protocol support (CIFS, MAPI) or on geographic location (Asia, Europe). It is important to
note that grouping is entirely configurable by the administrator: there is no built-in notion of
groups based on location, business division, or any other parameter beyond those the
administrator designates. Whether a group is used for reporting or for configuration purposes is
entirely the administrator’s decision, and there is no way to enforce this logic on the CMC, as
all groups can be used for reporting, configuration, or both.
The following figure illustrates the relationship between profiles, groups, and appliances.
[Figure: appliances belong to configuration groups (driven by profiles such as Base, CIFS,
MAPI), report groups, and location groups such as Asia and Europe.]
4. Use the CMC to create the profile and group configuration objects you will use to manage
the Steelhead appliances in your system.
5. When you have completed the appliance configuration, display the Appliance Details page
and set the Configuration Ready check box.
6. Set up a DNS server to map the host name riverbedcmc to the IP address for the CMC.
7. Connect the remote Steelhead appliance primary network interface to the network and power
it on.
During startup you are asked if you want to configure using the CMC. Select Yes to confirm.
The next question is which CMC you wish to use. By default the name riverbedcmc is used; if
desired, you can change this to the correct DNS entry for the CMC you want to use.
When the Steelhead appliance contacts the CMC, the CMC sends the configuration to the
remote Steelhead appliance, the appliance is registered with the CMC, and the CMC begins
collecting performance metrics for it. If no group was assigned to the Steelhead appliance
during registration, the Steelhead appliance ends up in the default group all only.
All Steelhead appliances belong to the group all and can be assigned to more groups as desired.
Steelhead Profiles
Two types of profiles exist in the CMC: appliance specific profiles and common profiles.
Appliance specific profiles contain configuration parameters that are unique for a Steelhead
appliance. These are parameters such as hostname, IP address information, port settings and in-
path settings.
Common profiles are profiles that can exist on multiple Steelhead appliances. These profiles
contain information such as optimization settings and in-path peering rules. Common profiles
can be pushed out to all Steelhead appliances registered in the CMC.
III. Features
Feature Licensing
Certain features on Steelhead appliances require a license for operation. In version 3.x, licenses
for all features, including platform-specific licenses, are included with the purchase of a
Steelhead appliance. These licenses are factory installed, but can also be installed by the user
via the CLI or web management console. Version 3.x requires three licenses to be installed for
the base system and the application acceleration for CIFS and MAPI to function: the Scalable
Data Referencing license (base), the Windows File Servers license (CIFS), and the Microsoft
Exchange license (EXCH). Additional licensed features that are automatically included upon
activating the base license, and do not require a separate license key, are the Microsoft SQL
optimization module and the NFS optimization module. Starting in version 3.x, HighSpeed TCP
no longer requires a license and is included as a standard feature on data center-sized Steelhead
appliances (models 5010 and higher). All licensed features, with the exception of the Microsoft
SQL optimization module, are enabled by default.
HighSpeed TCP (HSTCP)
Applicability and Considerations
To better utilize links that have both high bandwidth and high latency, such as GigE WANs,
OCx/STMx, or any other link that can be classified as a large BDP (bandwidth-delay product)
link, consider enabling HSTCP. HSTCP is a feature you can enable on Steelhead appliances to
help reduce the WAN data transfer inefficiencies caused by limitations of regular TCP. Enabling
the HSTCP feature allows for more complete utilization of these “long fat pipes”. HSTCP is an
IETF-defined standard (RFC 3649 and RFC 3742) and has been shown to provide significant
performance improvements in networks with high BDP values.
As a basis for determining the applicability of HSTCP to a given network, the following
formulas and their interpretation are provided below.
For any given TCP Cwnd (congestion window) size and network latency, the maximum
throughput can be calculated by dividing the window size by the latency (64KB/.1s=640KB/s).
End nodes limited to window sizes of 64KB or less (nodes that do not support TCP window
scaling as defined in RFC 1323) will prove inefficient at transferring data across links whose
bandwidth exceeds the Cwnd/RTT limit. HSTCP did not introduce TCP window scaling, but it
typically makes use of it, since a high-BDP link implies that a large TCP window is needed. For
a given transfer, the TCP window size should be no smaller than the BDP, to ensure that the
session uses the full bandwidth of the link. By the same token, a TCP window that exceeds the
BDP may cause the receiving host, or devices in between, to exhaust their resources, and can
severely degrade bandwidth.
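The window/latency arithmetic above can be checked with a short sketch (the OC-12 figures below are illustrative):

```python
def max_throughput(window_bytes, rtt_seconds):
    """Maximum TCP throughput for a given window and RTT: window / RTT."""
    return window_bytes / rtt_seconds

def bdp(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes: the minimum window needed
    to keep the link full."""
    return bandwidth_bps * rtt_seconds / 8

# a 64 KB window over 100 ms of latency caps out at 640 KB/s
print(max_throughput(64_000, 0.1))        # 640000.0 bytes/sec

# illustrative: an OC-12 (~622 Mbit/s) link with 100 ms RTT needs a
# window of roughly 7.8 MB for one session to fill it
print(bdp(622_000_000, 0.1) / 1_000_000)  # 7.775 MB
```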
Additional considerations with HSTCP relate to how the Cwnd changes in size during a
transfer. For most non-HSTCP implementations, after a short period of exponential Cwnd
growth (slow start), the window size grows at a rate of 1 MSS per RTT. Most operating systems
use an MSS of 1460 bytes, meaning that for each successful round trip (ACK received) the
window increases by 1460 bytes. With a small BDP, and thus a small Cwnd, 1460 bytes per
RTT is a moderate growth rate that can peak within a few seconds. With a large BDP, however,
1460 bytes per RTT means a significant amount of time before the Cwnd extends to the full
BDP value. The problem of increasing the Cwnd at the rate prescribed by standard TCP is
further compounded by the fact that a packet-loss event causes TCP to “back off” by halving the
current Cwnd. This reduction is vital in allowing TCP to “play nicely” with other sessions
sharing link bandwidth; however, on high-BDP links, the time to recover from such a loss event
at standard Cwnd growth rates represents a very ineffective use of the available bandwidth.
For example, for a standard TCP connection with 1500-byte packets and a 100ms round-trip
time, achieving a steady-state throughput of 10Gbps would require an average congestion
window of 83,333 segments, and a packet drop rate of at most one congestion event every
5,000,000,000 packets (or equivalently, at most one congestion event every 1 2/3 hours). Clearly
this is not achievable in real-world networks, and it is the basis on which HSTCP was
developed. HSTCP addresses both the rate at which to grow the Cwnd and how to respond
when loss events occur and the Cwnd must be reduced. Further information on how this is
achieved is found in the RFCs referenced above.
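The recovery penalty described above can be made concrete with a small calculation. The sketch below uses illustrative link figures and a 1460-byte MSS, and computes the time for standard TCP to grow the window from half the BDP back to the full BDP at 1 MSS per RTT:

```python
MSS = 1460  # bytes; typical TCP maximum segment size

def recovery_time_sec(bdp_bytes, rtt_sec, mss=MSS):
    """Seconds for standard TCP to grow the Cwnd from BDP/2 back to BDP
    at a rate of one MSS per round trip."""
    segments_to_regain = (bdp_bytes / 2) / mss
    return segments_to_regain * rtt_sec

# Small BDP: a T1 (1.544Mbit/s) at 100ms RTT recovers in under a second.
t1_bdp = 1.544e6 / 8 * 0.1
print(round(recovery_time_sec(t1_bdp, 0.1), 2))    # 0.66 seconds

# Large BDP: an OC-12 (622Mbit/s) at 100ms RTT takes over four minutes.
oc12_bdp = 622e6 / 8 * 0.1
print(round(recovery_time_sec(oc12_bdp, 0.1), 1))  # 266.3 seconds
```

The contrast between the two results is the motivation for HSTCP's more aggressive growth and gentler back-off on high BDP links.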
The following table and graph illustrate how a long fat network (OC-12) is filled.
[Table: test scenario listing bandwidth, RTT latency, and throughput. Graph: WAN utilization
(millions of bits/sec, 0-600) plotted against time (1-675 seconds).]
preference to packets in the queue that have more bandwidth allocated to it. As an example,
consider a case of LLQ where two priority queues are created, one for voice traffic, and one for
video traffic. The voice queue is allocated 10% of the bandwidth, and the video queue, which is
also latency sensitive, is allocated 40% of the bandwidth. Since the router has no ability to
differentiate that the small voice packets should generally be allowed out before the larger video
packets (up to the bandwidth limit), you will experience a case where small voice packets may
get stuck behind several larger video packets despite not fully utilizing their 10% bandwidth
allocation.
HFSC solves these problems by logically separating the latency element of queuing from the
bandwidth element. You can define multiple queues, each with a different priority relative to
the other queues, and be assured that even when more bandwidth is allocated to lower-priority
queues, the higher-priority queues are still serviced preferentially from a latency perspective,
up to the amount of bandwidth specified for each queue. Steelhead appliances implement five
queues, ranging from "Realtime" (the highest latency priority) down to "Low Priority," with
each queue in between having a lower latency priority than the one above it. The strategy
imposed by HFSC lends itself particularly well to "bursty" traffic, as is the case in most
networks.
Enforcing QoS for Active/Passive FTP
Active/Passive FTP Operation
To configure optimization policies for the FTP data channel, define an in-path rule with the
destination port 20 and set its optimization policy. Setting QoS for destination port 20 on the
client-side Steelhead appliance affects passive FTP, while setting the QoS for destination port 20
on the server-side Steelhead appliance affects active FTP.
In the case of an active FTP session, data connections originate on a server sourced on port 20
and destined to a random port specified by the client. As such, specifying a QoS rule on the
server-side Steelhead with a destination port of 20 is appropriate. With passive FTP, however,
data connections initiate from a random port on the client and are destined to a random port on
the server; as such, there is seemingly no simple way to apply a QoS rule based on Layer 4
port information. To solve this problem, the Steelhead allows you to define a client-side QoS
rule with a destination port of 20 to indicate that the rule should apply to passive FTP data
connections. The Steelhead intelligently identifies the actual ports used for the passive FTP
data transfer and applies the QoS logic set forth by the class where the rule has been applied.
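As an illustration of the mechanism (a sketch of the general technique, not Riverbed's implementation), the server's 227 reply on the FTP control channel is what reveals the passive data port to any device watching the session:

```python
import re

def passive_data_endpoint(reply):
    """Extract (ip, port) from a '227 Entering Passive Mode' reply.
    The reply encodes the endpoint as (h1,h2,h3,h4,p1,p2), where the
    data port is p1 * 256 + p2."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if m is None:
        raise ValueError("not a PASV reply")
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

print(passive_data_endpoint("227 Entering Passive Mode (192,0,2,10,19,137)"))
# ('192.0.2.10', 5001)
```

A device that tracks the control channel this way can classify the resulting data connection even though its ports are random.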
Converting between DSCP, IP Precedence, ToS
For the RCSP exam, you are expected to know how to convert various packet marking types.
This is important because the Steelhead appliances only understand DSCP (Differentiated
Services Code Point) values, while other network devices may support a different method of
marking or matching traffic. Various methods of converting to and from DSCP values are
defined by RFC 2474.
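Because the DSCP occupies the six high-order bits of the former ToS byte (and IP Precedence the top three bits), the conversions reduce to bit shifts. A minimal sketch:

```python
def dscp_to_tos(dscp):
    """Full ToS byte for a DSCP value (the two low ECN bits left zero)."""
    return dscp << 2

def tos_to_dscp(tos):
    """DSCP value carried in a ToS byte."""
    return tos >> 2

def dscp_to_precedence(dscp):
    """IP Precedence: the three high-order bits of the DSCP."""
    return dscp >> 3

# Expedited Forwarding (EF) is DSCP 46:
print(dscp_to_tos(46))         # 184 (0xB8)
print(dscp_to_precedence(46))  # 5 (Critical)
# AF22 (class 2, medium drop precedence) is DSCP 20:
print(dscp_to_tos(20))         # 80
```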
Interpreting and Converting Common Router Policies
In addition to converting to and from DSCP values for proper marking and matching between
Steelhead appliances and other network nodes, the RCSP exam requires you to understand how
to convert simple QoS configurations from Cisco and other popular routing platforms.
Generally, some familiarity with QoS configuration on routers, together with an understanding
of how Steelhead appliances implement QoS (see the "Riverbed QoS Implementation" section),
should make converting configurations a simpler task.
Enabling PFS does not reduce the amount of data store allocated for the SDR process performed
by a Steelhead appliance.
Version 2 vs Version 3 Setup
Version 2. Specify the server name and remote path for the share folder on the origin file server.
With Version v2.x, you must have the RCU service running on a Windows server—this can be
the origin file server or a separate server.
Riverbed recommends you upgrade your v2.x shares to 3.x shares so that you do not have to run
the RCU on a server.
Version 3. Specify the login, password, and remote path used to access the share folder on the
origin file server. With Version 3, the RCU runs on the Steelhead appliance—you do not need to
install the RCU service on a Windows server.
Upgrading V2.x PFS Shares
By default, when you configure PFS shares with Steelhead appliance software versions 3.x and
higher, you create v3.x PFS shares. PFS shares configured with Steelhead appliance software
v2.x are v2.x shares. V2.x shares are not upgraded when you upgrade Steelhead appliance
software.
If you have shares created with v2.x software, Riverbed recommends that you upgrade them to
v3.x shares in the Management Console. If you upgrade any v2.x shares, you must upgrade all of
them. Once you have upgraded shares to v3.x, you should only create v3.x shares.
If you do not upgrade your v2.x shares:
• You should not create v3.x shares.
• You must install and start the RCU on the origin server or on a separate Windows host with
write-access to the data PFS uses. The account that starts the RCU must have write
permissions to the folder on the origin file server that contains the data PFS uses.
NOTE: In Steelhead appliance software version 3.x and higher, you do not need to install the
RCU service on the server for synchronization purposes. All RCU functionality has been
moved to the Steelhead appliance.
• You must configure domain, not workgroup, settings. Domain mode supports v2.x PFS
shares but Workgroup mode does not.
Domain and Local Workgroup Settings
If using your Steelhead appliance for PFS, configure either the domain or local workgroup
settings.
Domain Mode
In Domain mode, you configure the PFS Steelhead appliance to join a Windows domain
(typically, your company’s domain). When you configure the Steelhead appliance to join a
Windows domain, you do not have to manage local accounts in the branch office as you do in
Local Workgroup mode.
Domain mode allows a DC to authenticate users accessing its file shares. The DC can be located
at the remote site or over the WAN at the main data center. The Steelhead appliance must be
configured as a Member Server in the Windows 2000 or later ADS domain. Domain users are
allowed to access the PFS shares based on the access permission settings provided for each user.
Data volumes at the data center are configured explicitly on the proxy file server and are served
locally by the Steelhead appliance. As part of the configuration, the data volume and ACLs from
the origin server are copied to the Steelhead appliance. PFS allocates a portion of the Steelhead
appliance data store for users to access as a network file system.
Before you enable Domain mode in PFS, make sure you:
• Configure the Steelhead appliance to use NTP to synchronize the time.
• Configure the DNS server correctly.
• Set the owner of all files and folders in all remote paths to a domain account and not a local
account.
• Create a DNS entry for the Steelhead appliance primary interface when using Domain mode.
NOTE: PFS only supports domain accounts on the origin file server; PFS does not support local
accounts on the origin file server. During an initial copy from the origin file server to the PFS
Steelhead appliance, if PFS encounters a file or folder with permissions for both domain and
local accounts, only the domain account permissions are preserved on the Steelhead appliance.
Local Workgroup Mode
In Local Workgroup mode you define a workgroup and add individual users that will have
access to the PFS shares on the Steelhead appliance.
Use Local Workgroup mode in environments where you do not want the Steelhead appliance to
be a part of a Windows domain. Creating a workgroup eliminates the need to join a Windows
domain and vastly simplifies the PFS configuration process.
NOTE: If you use Local Workgroup mode, you must manage the accounts and permissions for
the branch office on the Steelhead appliance. The local workgroup account permissions might
not match the permissions on the origin file server.
PFS Share Operating Modes
PFS provides Windows file service in the Steelhead appliance at a remote site. When you
configure PFS, you specify an operating mode for each individual file share on the Steelhead
appliance. The proxy file server can export data volumes in Local mode, Broadcast mode, and
Stand-Alone mode. After the Steelhead appliance receives the initial copy of the data and ACLs,
shares can be made available to local clients. In Broadcast and Local mode only, shares on the
Steelhead appliance are periodically synchronized with the origin server at intervals you specify,
or manually if you choose. During the synchronization process the Steelhead appliance optimizes
this traffic across the WAN.
• Broadcast Mode. Use Broadcast mode for environments seeking to broadcast a set of read-
only files to many users at different sites. Broadcast mode quickly transmits a read-only copy
of the files from the origin server to your remote offices. The PFS share on the Steelhead
appliance contains read-only copies of files on the origin server. The PFS share is
synchronized from the origin server according to parameters you specify when you configure
it. However, files deleted on the origin server are not deleted on the Steelhead appliance until
you perform a full synchronization. Additionally, if, on the origin server, you perform
directory moves (for example, move .\dir1\dir2 .\dir3\dir2) regularly, incremental
synchronization will not reflect these directory changes. You must perform a full
synchronization frequently to keep the PFS shares in synchronization with the origin server.
• Local Mode. Use Local mode for environments that need to efficiently and transparently
copy data created at a remote site to a central data center, perhaps where tape archival
resources are available to back up the data. Local mode enables read-write access at remote
offices to update files on the origin file server.
After the PFS share on the Steelhead appliance receives the initial copy from the origin
server, the PFS share copy of the data becomes the master copy. New data generated by
clients is synchronized from the Steelhead appliance copy to the origin server based on
parameters you specify when you configure the share. The folder on the origin server
essentially becomes a back-up folder of the share on the Steelhead appliance. If you use
Local mode, users must not directly write to the corresponding folder on the origin server.
NOTE: In Local mode, the Steelhead appliance copy of the data is the master copy; do not make
changes to the shared files from the origin server while in Local mode. Changes are propagated
from the remote office hosting the share to the origin server.
Riverbed recommends that you do not use Windows file shortcuts if you use PFS.
• Stand-Alone Mode. Use Stand-Alone mode for network environments where it is more
effective to maintain a separate copy of files that are accessed locally by the clients at the
remote site. A Stand-Alone share also provides additional storage space. The PFS share on the
Steelhead appliance is a one-time, working copy of data mapped from the origin server. You
can specify a remote path to a directory on the origin server, creating a copy at the branch
office. Users at the branch office can read from or write to stand-alone shares but there is no
synchronization back to the origin server since a stand-alone share is an initial and one-time
only synchronization.
Lock Files
When you configure a v3.x Local mode share or any v2.x share (except a Stand-Alone share in
which you do not specify a remote path to a directory on the origin server), a text file
(._rbt_share_lock.txt) that keeps track of which Steelhead appliance owns the share is created
on the origin server. Do not remove this file.
If you remove the ._rbt_share_lock.txt file on the origin file server, PFS will not function
properly. (V3.x Broadcast and Stand-Alone shares do not create these files.)
Notes:
• To join a domain, the Windows domain account must have the correct privileges to perform a
join domain operation.
• The PFS share and the origin-server share name cannot contain Unicode characters. The
Management Console does not support Unicode characters.
• If you have shares that were created with RiOS v2.x, the account that starts the RCU must
have write permissions to the folder on the origin file server. Also, the logon user for the
RCU server must be a member of the Administrators group either locally on the file server
or globally in the domain.
• Make sure the users are members of the Administrators group on the remote share server,
either locally on the file server (the local Administrators group) or globally in the domain
(the Domain Administrator group).
• Riverbed recommends that you do not run a mixed system of PFS shares, that is, v2.x shares
and v3.0 shares.
NetFlow
Operation and Implementation
Starting with version 3.x, Steelhead appliances support the export of NetFlow v5 data. NetFlow
can play an important role in an organization’s network by providing detailed accounting
between hosts. This information can then be used for various purposes such as billing,
identifying top talkers, and capacity planning to name a few. It can also assist in troubleshooting
denial-of-service attacks.
It is common to configure NetFlow on the WAN routers in order to monitor the traffic traversing
the WAN. However, when the Steelhead appliances are in place, the WAN routers will only see
the inner Steelhead TCP session traffic and not the real IP addresses/ports of the client and
server. By supporting NetFlow v5 on the Steelhead appliance, this becomes a non-issue
altogether. In fact, it is possible to only have the Steelhead export the NetFlow data instead of the
router without compromising any functionality. By doing so, the router can spend more CPU
cycles on its core functionality: routing and switching of packets.
Similar to configuring NetFlow on the routers, NetFlow statistics are collected on the ingress
interfaces of the Steelhead appliance. Therefore, to see a complete flow or conversation between
the server and client, it is necessary to configure NetFlow on both the client-side Steelhead
appliance as well as the server-side Steelhead appliance. For example, to determine the amount
of CIFS traffic on the LAN between a server and client, configure NetFlow to collect on the
following interfaces:
• Client-side Steelhead LAN interface (this will show pre-optimized traffic going from client
to server).
• Server-side Steelhead LAN interface (this will show pre-optimized traffic going from server
to client).
Similarly, to determine the amount of CIFS traffic on the WAN between a server and client,
configure NetFlow to collect on the following interfaces:
• Client-side Steelhead WAN interface (this will show optimized traffic going from server to
client).
• Server-side Steelhead WAN interface (this will show optimized traffic going from client to
server).
NetFlow Protocol Header and Record Header
NetFlow version 5 supports ordering of NetFlow packets by way of a sequence number
transmitted in each packet. The information a NetFlow packet carries can be seen in the
supported fields of the flow entry, and includes common data such as IP addresses, interfaces,
packet counts, and other details of the data transfer. Flow information is available for both
optimized and passthrough traffic.
NetFlow Version 5 Flow Header:
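The v5 export packet begins with a fixed 24-byte header. The sketch below parses it with Python's struct module, with the field layout following the standard NetFlow v5 format:

```python
import struct

# NetFlow v5 header: version, count, SysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (big-endian).
V5_HEADER = struct.Struct("!HHIIIIBBH")  # 24 bytes

def parse_v5_header(packet):
    """Parse the 24-byte NetFlow v5 header from a raw export packet."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(packet)
    if version != 5:
        raise ValueError("not a NetFlow v5 packet (version=%d)" % version)
    return {
        "count": count,                  # flow records in this packet (1-30)
        "sys_uptime_ms": sys_uptime,     # milliseconds since the exporter booted
        "unix_secs": unix_secs,          # export time, seconds since the epoch
        "flow_sequence": flow_sequence,  # running flow count; gaps imply loss
        "engine_type": engine_type,
        "engine_id": engine_id,
        "sampling_interval": sampling,
    }
```

A collector can watch flow_sequence across successive packets to detect lost exports, which is how the ordering mentioned above is put to use.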
from another appliance via the inner TCP session will be placed on the correct VLAN upon
return. The VLAN Tag ID might be the same value or a different value than the VLAN tag used
on the client. A zero (0) value specifies non-tagged (or native) VLAN.
When considering the use of a Steelhead appliance on a trunk link, routing is often a point of
concern because many networks may be present. While static in-path routes can be used,
simplified routing commonly allows for an easier deployment.
NOTE: When the Steelhead appliance communicates with a client or a server it uses the same
VLAN tag as the client or the server. If the Steelhead appliance cannot determine which VLAN
the client or server is in, it uses its own VLAN until it is able to determine that information.
IV. Troubleshooting
Common Deployment Issues
Speed and Duplex
Symptoms of speed and duplex problems include:
• Access does not speed up.
• Interface counters show errors (counters on a Steelhead appliance sometimes stay low while
they increase on surrounding network gear).
• Alarm/log messages report error counts.
• Packet traces show many retransmissions. In Ethereal, use:
o tcp.analysis.retransmission
o tcp.analysis.fast_retransmission
o tcp.analysis.lost_segment
o tcp.analysis.duplicate_ack
A likely problem is that the router is set to 100Full (fixed) whereas the Steelhead appliance is set
to Auto. In this case, test with a flood ping, ping -f -I {in-path-ip} -s 1400 {clientIP},
or from the server-side Steelhead appliance to the server. Do not perform this test across the
WAN. Change the interface speed/duplex to match.
NOTE: Ideally the WAN and LAN have the same duplex settings, otherwise the devices around
the Steelhead appliance will have a duplex mismatch when in bypass.
SMB (Server Message Block) Signing
SMB signing is a protocol add-on to protect permission distribution. It adds a cryptographic
signature to CIFS packets and authenticates endpoints to prevent man-in-the-middle attacks (or
optimization).
A symptom could be that file access either does not speed up, or does not speed up as much as
expected. You should see a log message about signed connections. Check the logs for
error=SMB_SHUTDOWN_ERR_SEC_SIG_REQUIRED messages.
A likely problem is that either both the server and the client have SMB signing enabled (1.x
only), or the server has SMB signing required and the client has it enabled. In this case, change
the server so that signing is not required:
(if enable:enable) protocol cifs secure-sig-opt enable
Packet Ricochet
If network connections fail on their first attempt but succeed on subsequent attempts, it could be
due to packet ricochet. You should suspect packet ricochet if:
• The Steelhead appliance on one or both sides of a network has an in-path interface that is
different from that of the local host.
• In-path routes are needed but none are defined in your network.
• You experience packet ricochet symptoms. Symptoms of packet ricochet are:
• Connections between the Steelhead appliance and the clients or server are routed through the
WAN interface to a WAN gateway, and then they are routed through a Steelhead appliance
to the next-hop LAN gateway.
• The WAN router drops SYN packets from the Steelhead appliance before it issues an ICMP
redirect. Note that some routers might be unable to send ICMP redirect packets, or might be
configured not to send them. ICMP redirects are on by default on most routers and are sent
whenever the router must send a packet out the same interface it arrived on to route it toward
the destination, and the next hop is on the same subnet as the source IP address. ICMP
redirect information is stored for five minutes on the Steelhead appliance.
Opportunistic Locks (Oplocks)
Windows (CIFS) uses opportunistic locking (oplock) to determine the level of safety the
OS/application has in working with a file.
Types of Oplocks
The following list describes the types of oplock that a client may hold:
• Level II oplock. Informs a client that there are multiple concurrent clients of a file, and none
have yet modified it. It allows the client to perform read operations and file attribute fetches
using cached or read-ahead local information, but all other requests have to be sent to the
server.
• Exclusive oplock. Informs a client that it is the only one to have a file open. It allows the
client to perform all file operations using cached or read-ahead local information until it
closes the file, at which time the server has to be updated with any changes made to the state
of the file (contents and attributes).
• Batch oplock. Informs a client that it is the only one to have a file open. It allows the client
to perform all file operations on cached or read-ahead local information (including open and
close operations).
Losing an oplock may pose a problem for several reasons, including interactions with anti-virus
programs. The oplock controls the consistency of optimizations such as read-ahead. Oplock
levels are reduced when conflicting opens are made to a file. The Steelhead appliance maintains
this safety: to preserve correctness, it reduces optimization when a client has shared rather than
exclusive access to a file.
Asymmetric Routing (AR)
AR occurs when the transmit path differs from the return path for packets. For a Steelhead
appliance to optimize traffic, it must see the flow bi-directionally; traffic can flow
asymmetrically everywhere else in the network (between the Steelhead appliances).
Detecting Asymmetric Routing
A client-side Steelhead appliance detects AR by looking for conditions such as:
• A RST packet from the client with an invalid SYN number while the connection is in the
SYN_SENT state
• A SYN/ACK packet from the server with an invalid ACK number while the connection is in
the SYN_SENT state
• An unusually high number of SYN retransmits from the client
• An ACK packet from the client while the connection is in the SYN_SENT state
Asymmetric Route Passthrough
Asymmetric route passthrough allows connections to be passed through and an entry to be placed
into the AR table. The entry is placed in the table for a default of 24 hours. For SYN
Central Processing Unit (CPU) Utilization - Whether the system has reached the CPU
threshold for any of the CPUs in the Steelhead appliance. If the system has reached the CPU
threshold, check your settings. If your alarm thresholds are correct, reboot the Steelhead
appliance.
NOTE: If more than 100 MB of data is moved through a Steelhead appliance, while performing
PFS synchronization, the CPU utilization might become high and result in a CPU alarm. This
CPU alarm should not be cause for concern.
Data Store - Whether the data store is corrupt. To clear the data store of data, restart the
Steelhead service and click Clear Data Store on Next Restart.
Fan Error - Whether the system has detected a problem with the fans. Fans in 3U systems can
be replaced.
IPMI - Whether the system has encountered an Intelligent Platform Management Interface
(IPMI) error. The system will display a blinking amber LED. To clear the alarm, run the clear
hardware error-log command.
Licensing - Whether your licenses are current.
Link State - Whether the system has detected a link that is down. You are notified via SNMP
traps, email, and alarm status.
Memory Error - Whether the system has encountered a memory error.
Memory Paging - Whether the system has reached the memory paging threshold. If 100 pages
are swapped approximately every two hours the Steelhead appliance is functioning properly. If
thousands of pages are swapped every few minutes, then reboot the Steelhead appliance. If
rebooting does not solve the problem, contact Riverbed Technical Support.
Neighbor Incompatibility - Whether the system has encountered an error in reaching a
Steelhead appliance configured for connection forwarding.
Network Bypass - Whether the system is in bypass mode. If the Steelhead appliance is in bypass
mode, restart the Steelhead service. If restarting the service does not resolve the problem, reboot
the Steelhead appliance. If rebooting does not resolve the problem, shut down and restart the
Steelhead appliance.
NFS V2/V4 Alarm (If NFS enabled and V2/V4 used) - Whether the system has triggered a v2
or v4 NFS alarm.
Optimization Service - Whether the system has detected a software error in the Steelhead
service. The Steelhead service continues to function, but an error message appears in the logs
that you should investigate.
Prepopulation or Proxy File Service Configuration Error - Whether there has been a PFS or
prepopulation operation error. If an operation error is detected, restart the Steelhead service and
PFS.
Prepopulation or Proxy File Service Operation Failed - Whether a synchronization operation
has failed. If an operation failure is detected, attempt the operation again.
Proxy File Service partition Full - Whether the PFS partition is full.
RAID - Whether the system has encountered RAID errors (for example, missing drives, pulled
drives, drive failures, and drive rebuilds). For drive rebuilds, if a drive is removed and then
reinserted, the alarm continues to be triggered until the rebuild is complete.
• -q Quiet output. Print less protocol information so output lines are shorter.
• -r Read packets from file (which was created with the -w option).
• -S Print absolute, rather than relative, TCP sequence numbers.
• -s Snarf snaplen bytes of data from each packet rather than the default of 68. 68 bytes is
adequate for IP, ICMP, TCP and UDP but may truncate protocol information from name
server and NFS packets. Packets truncated because of a limited snapshot are indicated in the
output with “[|proto]”, where proto is the name of the protocol level at which the truncation
has occurred.
• -v (Slightly more) verbose output. For example, the time to live, identification, total length
and options in an IP packet are printed. Also enables additional packet integrity checks such
as verifying the IP and ICMP header checksum.
• -w Write the raw packets to file rather than parsing and printing them out. They can later be
printed with the -r option. Standard output is used if file is -.
• -x Print each packet (minus its link level header) in hex. The smaller of the entire packet or
snaplen bytes will be printed.
• -X When printing hex, print ASCII too. Thus if -x is also set, the packet is printed in
hex/ascii. This option enables you to analyze new protocols.
To delete or upload a tcpdump file from the CLI type:
file tcpdump {delete <filename> | upload <filename> <URL or
scp://username:password@hostname/path/filename>}
In-path rules. Verify that in-path rules are configured correctly. For example, at the system
prompt, enter the CLI command:
show in-path rules
In-path routes. Verify that in-path routes are configured correctly. For example, at the system
prompt, enter the CLI command:
sh ip in-path route <interface-name>
Steelhead service. If necessary, enable the Steelhead service. For example, at the system
prompt, enter the CLI command:
service enable
In-path support. If necessary, enable in-path support. For example, at the system prompt, enter
the CLI command:
in-path enable
In-path client out-of-path support. If necessary, disable in-path client out-of-path support. For
example, at the system prompt, enter the CLI command:
no in-path oop all-port enable
Network (LAN/WAN) Topology
Packet traversal. Physically draw out both sides of the entire network and make sure that
packets traverse the same client-side and server-side Steelhead appliances in both directions
(from the client to the server and from the server to the client). Verify packet traversal by
running a traceroute from the client to the server and from the server to the client.
Bi-directional continuity. Make sure there is bi-directional continuity between the client and the
client-side Steelhead appliance, and the server Steelhead appliance and the network server.
Auto-discovery. If the auto-discovery mechanism is failing, try implementing a fixed-target rule.
You can define fixed-target rules using the Management Console or the CLI.
Auto-discovery can fail due to devices dropping TCP options, which sometimes occurs with
certain satellite links and firewalls. To fix this problem, create fixed-target rules that point to the
remote Steelhead appliance’s in-path interface on port 7800.
LAN/WAN bandwidth and reliability. Check if there are any client and server duplex issues or
VoIP traffic that may be clogging the T1 lines.
Protocol optimization. Are all protocols that you expect to optimize actually optimized in both
directions? If no protocols are optimized, only some of the expected protocols are optimized, or
expected protocols are not optimized in both directions, check:
• That connections have been successfully established.
• That Steelhead appliances on the other side of a connection are turned on.
• For secure or interactive ports that are preventing protocol optimization.
• For any passthrough rules that could be causing some protocols to passthrough Steelhead
appliances unoptimized.
• That the LAN and WAN cables are not inadvertently swapped.
V. Exam Questions
Types of Questions
The RCSP exam includes a variety of question types, including true/false, single-answer multiple
choice, multiple-answer multiple choice, and fill-in-the-blank. The question distribution is
weighted heavily toward multiple choice; fill-in-the-blank questions are used where a command
is believed to be an important part of everyday Steelhead appliance operation. Regardless of the
type of question, selecting the best answer(s) in response to the questions will yield the best
score.
Sample Questions
1. How do you view the full configuration in the CLI? (one answer)
a. SH > show con
b. SH > show configuration
c. SH > show config all
d. SH # show config full
e. SH (config) # show con
2. Under what circumstances will the NetFlow cache entries flush (be sent to the collector)?
(Multiple answers)
a. When inactive flows have remained for 15 seconds.
b. When inactive flows have remained for 30 minutes.
c. When active flows have remained for 30 minutes.
d. When the TCP URG bit is set.
e. When the TCP FIN bit is set.
3. The auto-discovery probe uses which TCP option number?
a. 0x4e (78 decimal)
b. 0x4c (76 decimal)
c. 0x42 (66 decimal)
d. Auto-discovery does not use TCP options
4. In order to achieve optimization using auto-discovery for traffic coming from site C and
destined to site A in the exhibit, which configuration below would be required?
a. In-path fixed target rule on site B Steelhead pointing to Site A Steelhead
b. Peering rule on site B Steelhead passing through probes from site C
c. Peering rule on site B Steelhead passing through probe responses from site A
d. Both A and C
e. Both B and C
5. You are configuring HighSpeed TCP in an environment with an OC-12 (622Mbit/s) and 60
milliseconds of round-trip latency. The WAN router queue length is set to BDP for the link.
Assuming 1500 byte packets, the queue length for this link would be closest to:
a. 3,110 packets
b. 6,220 packets
c. 775 packets
d. 150 packets
e. 10,000 packets
6. Which of the following correctly describes the combination of cable types used in a fail-to-
wire scenario for the interconnected devices shown in the accompanying figure? Assume
Auto-MDIX is not enabled on any device.
[Figure: interface f0 — Cable 1 — Steelhead wan0_0; Steelhead lan0_0 — Cable 2 — interface f0/1]
a. Cable 1: Cross-over, Cable 2: Cross-over
b. Cable 1: Straight-through, Cable 2: Straight-through
c. Cable 1: Cross-over, Cable 2: Straight-through
d. Cable 1: Straight-through, Cable 2: Cross-over
7. In the accompanying figure, on which interfaces would you capture the NetFlow export data
for active FTP data packets when a client performs a GET operation? (Assume you are not
interested in client response packets such as acknowledgements.) (One answer)
a. A and B
b. B and D
c. C and D
d. B and C
e. A and C
[Figure: L3 switch —(A)— SH3 lan/wan —(B)— router f0/s0 — WAN — router s0/f0 —(C)— wan/lan SH4 —(D)— L2 switch; capture points A–D]
8. Which of the following control messages is NOT used by WCCP? (Single answer)
a. HERE_I_AM
b. I_SEE_YOU
c. REDIRECT_ASSIGN
d. REMOVAL_QUERY
e. KEEPALIVE
9. A customer wants to mark the DSCP value for the active FTP data connection as AF22. Which
of the following is true?
a. Specify qos dscp rule at client-side Steelhead with a dest-port of 21 and with a DSCP
value of 22.
b. Specify qos dscp rule at client-side Steelhead with a src-port of 20 and with a DSCP
value of 20.
c. Specify qos dscp rule at server-side Steelhead with a dest-port of 21 and with a DSCP
value of 22
d. Specify qos dscp rule at client-side Steelhead with a dest-port of 20 and with a DSCP
value of 22.
e. Specify qos dscp rule at server-side Steelhead with a dest-port of 20 and with a DSCP
value of 20.
10. Type in the command used to show information regarding the current health (status) of a
Steelhead, the current version, the uptime, and the model number. (fill in the blank)
_______________
Answers
1d, 2ace, 3b, 4b, 5a, 6c, 7e, 8e, 9e, 10 show info
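The arithmetic behind question 5 can be checked directly: the queue length in packets is the bandwidth-delay product of the link, converted to bytes, divided by the packet size. A minimal sketch of this calculation:

```python
def bdp_queue_packets(link_bps: float, rtt_s: float, pkt_bytes: int) -> float:
    """Queue length in packets when the router queue is sized to the
    bandwidth-delay product (BDP) of the link."""
    bdp_bytes = link_bps * rtt_s / 8  # BDP in bits, converted to bytes
    return bdp_bytes / pkt_bytes

# OC-12 (622 Mbit/s), 60 ms round-trip latency, 1500-byte packets
print(round(bdp_queue_packets(622e6, 0.060, 1500)))  # -> 3110, i.e. answer (a)
```

The same function applies to any link: substitute the link rate, round-trip time, and packet size for the scenario at hand.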
VI. Appendix
Acronyms and Abbreviations
Acronym/Abbreviation Definition
AAA Authentication, Authorization, and Accounting
ACL Access Control List
ACS (Cisco) Access Control Server
AD Active Directory
ADS Active Directory Services
AR Asymmetric Routing
ARP Address Resolution Protocol
BDP Bandwidth-Delay Product
BW Bandwidth
CA Certificate Authority
CAD Computer Aided Design
CDP Cisco Discovery Protocol
CHD Computed Historical Data
CIFS Common Internet File System
CLI Command-Line Interface
CMC Central Management Console
CPU Central Processing Unit
CSR Certificate Signing Request
CSV Comma-Separated Value
DC Domain Controller
DER Distinguished Encoding Rules
DHCP Dynamic Host Configuration Protocol
DNS Domain Name Service
DSA Digital Signature Algorithm
DSCP Differentiated Services Code Point
ECC Error-Correcting Code
ESD Electrostatic Discharge
FDDI Fiber Distributed Data Interface
FIFO First in First Out
FSID File System ID
FTP File Transfer Protocol
GB Gigabytes
GMT Greenwich Mean Time
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HFSC Hierarchical Fair Service Curve
HSRP Hot Standby Router Protocol
HSTCP High-Speed Transmission Control Protocol
HTTP Hypertext Transfer Protocol
HTTPS Hypertext Transfer Protocol Secure
ICMP Internet Control Message Protocol
ID Identification number
IGP Interior Gateway Protocol
IOS (Cisco) Internetwork Operating System
IKE Internet Key Exchange
IP Internet Protocol
IPSec Internet Protocol Security Protocol
ISL InterSwitch Link
L2 Layer-2
L4 Layer-4
LAN Local Area Network
LED Light-Emitting Diode
LZ Lempel-Ziv
MAC Media Access Control
MAPI Messaging Application Programming Interface
MEISI Microsoft Exchange Information Store Interface
MIB Management Information Base
MOTD Message of the Day
MS SQL Microsoft Structured Query Language
MSFC Multilayer Switch Feature Card
MTU Maximum Transmission Unit
MX-TCP Max-Speed TCP
NAS Network Attached Storage
NAT Network Address Translation
NFS Network File System
NIS Network Information Services
NSPI Name Service Provider Interface
NTLM Windows NT LAN Manager
NTP Network Time Protocol
OSI Open Systems Interconnection
OSPF Open Shortest Path First
PAP Password Authentication Protocol
PBR Policy-Based Routing
PCI Peripheral Component Interconnect
PEM Privacy Enhanced Mail
PFS Proxy File Service
PKCS12 Public Key Cryptography Standard #12
© 2007-2008 Riverbed Technology, Inc. All rights reserved.
PRTG Paessler Router Traffic Grapher
QoS Quality of Service
RADIUS Remote Authentication Dial-In User Service
RAID Redundant Array of Independent Disks
RCU Riverbed Copy Utility
ROFS Read-Only File System
RSA Rivest-Shamir-Adleman (public-key encryption algorithm)
SA Security Association
SDR Scalable Data Referencing
SFQ Stochastic Fairness Queuing
SH Riverbed Steelhead Appliance
SMB Server Message Block
SMI Structure of Management Information
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SQL Structured Query Language
SSH Secure Shell or server-side Steelhead
SSL Secure Sockets Layer
TA Transaction Acceleration
TACACS+ Terminal Access Controller Access Control System Plus
TCP Transmission Control Protocol
TCP/IP Transmission Control Protocol/Internet Protocol
TP Transaction Prediction
TTL Time to Live
ToS Type of Service
U Unit
UDP User Datagram Protocol
UNC Universal Naming Convention
URL Uniform Resource Locator
UTC Coordinated Universal Time
VGA Video Graphics Array
VLAN Virtual Local Area Network
VoIP Voice over IP
VWE Virtual Window Expansion
WAAS Wide-Area Application Services
WAFS Wide-Area File Services
WAN Wide Area Network
WCCP Web Cache Communication Protocol