
SONAS Best Practices and options for CIFS Scalability

A guide to achieving high levels of CIFS scalability on a SONAS system

June 2013

Contents

Common Internet File System (CIFS) File Serving
Maximum Number of Active Concurrent CIFS Connections
SONAS System Configuration
    Interface Node Configuration
        Processor and Memory configuration
        Networking adapter configuration
    Storage Planning and Configuration
Protocol Specific Configuration to Maximize Concurrent CIFS Connections
    Leases, Locking and Share Modes
        Leases
        Locking
        Share Modes
    Home Directory Exports Using Substitution Variables
    Sharing Files and Directories Among CIFS Clients
        CIFS Share Coherency Options
Other Considerations
    Planning for Fail-over Scenarios and Upgrade
    Scheduling Advanced Functions for Data Management
Tuning and Investigating Performance Concerns
References

Common Internet File System (CIFS) File Serving
The IBM Scale Out Network Attached Storage (SONAS) system provides CIFS file serving to Windows client computers via CIFS shares defined on the SONAS system. A maximum of 1000 CIFS shares may be defined on each SONAS system.

Having a large number of Windows/CIFS clients (thousands) concurrently accessing the CIFS shares in a SONAS system requires planning to ensure that the configuration of the system can support the planned number of active concurrent CIFS connections from the CIFS clients.

This paper describes methods to achieve a highly scalable CIFS environment. It does not guarantee improved response time for any specific connection, as response time varies with a number of factors, most notably the workload driven by each connection.

Maximum number of active concurrent CIFS connections
Each SONAS interface node (including the integrated management node) is capable of handling a large number of active concurrent CIFS connections. The exact number of concurrent CIFS connections per interface node depends on many factors, including the number of processors and the amount of memory installed in the interface node, the I/O workload, the storage configuration, and the advanced functions configured to run while the node is serving a high number of CIFS connections.

Advanced functions include operations such as creating and deleting file set or file system level snapshots, TSM or NDMP backup/restore processing, async replications, Active Cloud Engine (ACE) WAN caching, and others.

For planning purposes, IBM recommends that you plan on no more than 2500 active concurrent CIFS connections and no more than a total of 4000 connections per SONAS interface node. This recommendation is based on traditional Windows home directory workloads and testing performed by IBM on the SONAS 1.4.1 Release.

SONAS software does not impose this maximum CIFS connection limit, because the actual maximum number that is achievable on a given SONAS system may vary based on a number of factors that are described later. However, if the number of concurrent connections goes beyond the recommended maximum, or beyond the limit that your SONAS system is capable of supporting, CIFS clients may experience longer response times, session disconnects, or other symptoms.

If you see these symptoms, IBM recommends that you expand your SONAS system configuration or reduce the number of connections, assuming all the best practices described in this document have already been implemented to the fullest extent.

SONAS System Configuration
Two important factors in determining the maximum number of active CIFS connections per interface node that your SONAS system can support are the configuration of the SONAS interface nodes and the underlying configuration of the storage system(s) on which the file system containing the CIFS share (or shares) resides.

Interface Node Configuration

Processor and Memory configuration
To achieve the maximum possible number of active concurrent CIFS connections per interface node, IBM recommends that the SONAS interface nodes be configured with the maximum number of processors and the maximum amount of memory. For the current SONAS interface node (2851-SI2) this is two 2.66GHz Intel Xeon 6-core processors (the second processor is ordered as Feature Code 0102) and 144GB of memory (five of Feature Code 1003).

Networking adapter configuration
In addition, to achieve the maximum possible number of concurrent CIFS connections per interface node, IBM recommends that the SONAS interface nodes be configured with the maximum number of 10GbE networking adapters. For the current SONAS interface node this is two dual-port 10Gb Universal Converged Networking Adapters (two of Feature Code 1102).

Storage planning and configuration
The storage configuration must be carefully planned based on the overall workload characteristics intended for the SONAS system.

Factors to consider in planning the storage configuration include the expected response time during peak and off-peak hours; workload profiling to understand the data access requirements, both in terms of throughput (MB/second) and I/O operations per second (IOPS); and planning for adequate storage resources, not just for normal network file serving but also for advanced functions. Metadata-intensive workloads must also be considered during storage planning.

The storage configuration details include the number of storage nodes, the number of disk storage subsystems, the number and types of disk drives, the total raw/usable storage capacity, and the projected I/O throughput and IOPS.

The recommended maximum number of active concurrent CIFS connections assumes that the CIFS share (or shares) resides on a file system that has a minimum of twelve (12) file system disks (known as GPFS Network Shared Disks, or NSDs) for metadata and data usage, regardless of the workload characteristics. The required number of file system disks must be determined based on the factors described above.
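
As a quick sanity check, the disks backing your file systems can be listed from the SONAS CLI; counting the NSDs assigned to a given file system shows whether it meets the twelve-disk guideline:

    # List all file system disks (NSDs), including the file system each
    # belongs to, its failure group, and its usage (data, metadata, or both)
    lsdisk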

Generally, in a SONAS environment each file system disk corresponds to a single SCSI logical unit (LUN) on a single RAID-6 8+P+Q array built on a set of ten physical disk drives, such as high performance (10K or 15K RPM) SAS disk drives. In some SONAS configurations this may not be the case, such as SONAS Gateway configurations attached to external disk storage systems like the IBM XIV or IBM DCS3700.

Having a file system reside on fewer NSDs, or on disks mapped to RAID arrays built from slower disk drives, can result in a lower number of active concurrent CIFS connections per interface node.

Contact your IBM sales representative, client representative, or IBM Business Partner for assistance in determining a suitable storage configuration that will support the performance and capacity needs of your network attached storage (NAS) environment.

Protocol specific configuration to maximize concurrent CIFS connections
Leases, Locking and Share Modes
If the CIFS shares are only being accessed by Windows clients using the CIFS protocol (and are not enabled for access via other NAS file protocols, such as NFS, FTP, or HTTPS), then it is highly recommended that you disable inter-protocol leases, locking and share modes to achieve the maximum number of concurrent connections. Disabling these locking modes still ensures data consistency within the CIFS protocol while avoiding the unnecessary overhead incurred to ensure consistency across multiple NAS protocols.

Leases
Leases are enabled by default when a CIFS share is created. When leases are enabled, clients accessing a file over other NAS protocols can break the opportunistic lock of a CIFS client, so the CIFS client is informed when another client begins accessing the same file at the same time using a non-CIFS protocol.

Disabling this feature provides a slight performance increase each time a file is opened, but it increases the risk of data corruption when files are accessed over multiple NAS protocols concurrently without this inter-protocol synchronization. If files are accessed using the CIFS protocol alone and leases are disabled, opportunistic locks are still maintained by the CIFS protocol, using a method that incurs less overhead.

Leases can be disabled for a particular CIFS share by specifying the --cifs "leases=no" option on the mkexport or chexport commands.
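
For example, leases could be turned off on an existing CIFS-only share as follows (the share name homeshare is a placeholder):

    # Disable inter-protocol leases on an existing CIFS-only share
    chexport homeshare --cifs "leases=no"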

Locking
Locking is enabled by default when a CIFS share is created. When locking is enabled, before a byte-range lock is granted to a CIFS client, a check is made to determine whether a byte-range file control lock is already present on the requested portion of the file. Clients that access the same file using another NAS protocol, such as NFS, are able to determine whether a CIFS client has set a lock on that file.

For a share that is only accessed by CIFS clients, it is highly recommended to disable inter-protocol byte-range locking to enhance CIFS file serving performance.

Inter-protocol level locking can be disabled for a particular CIFS share by specifying the --cifs "locking=no" option on the mkexport or chexport commands.
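
As with leases, this is a single chexport call; the defined exports can be listed afterward to confirm the change (homeshare is again a placeholder):

    # Disable inter-protocol byte-range locking on an existing share
    chexport homeshare --cifs "locking=no"
    # List the defined shares/exports to confirm the settings
    lsexport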

Share Modes
The CIFS protocol allows an application to permit simultaneous access to a file by defining share modes when the file is first opened, which can be any combination of SHARE_READ, SHARE_WRITE, and SHARE_DELETE. If no share mode is specified, all simultaneous attempts by another application or client to open the file in a manner that conflicts with the existing open mode are denied, even if the user has the appropriate permissions granted by share and file system access control lists.

The sharemodes option is enabled by default when a CIFS share is created. When enabled, the share modes specified by CIFS clients are respected by other NAS protocols. When disabled, the share modes apply only to access by CIFS clients, and clients using all other NAS protocols are granted or denied access to a file without regard to any share mode defined by a CIFS client.

If the share/export is not being accessed by clients using other network file protocols (such as NFS), then it is highly recommended that the --cifs "sharemodes=no" option be specified on the mkexport or chexport commands.
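
Putting the three options together, a new CIFS-only share could be created with all inter-protocol coordination disabled at creation time. This is a sketch: the share name and path are placeholders, and it assumes the --cifs option accepts a comma-separated list of settings:

    # Create a CIFS-only share with leases, locking and share modes
    # scoped to the CIFS protocol alone
    mkexport cifsonly /ibm/gpfs0/cifsonly --cifs "leases=no,locking=no,sharemodes=no"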

For additional information about these options and other performance and data integrity related options, see the following section of the SONAS Information Center:

Administering->Managing->Managing shares and exports->Creating shares and exports->CIFS and NFS data integrity options

Note: If your environment requires data sharing over multiple protocols and these options cannot be disabled, you may not be able to achieve the maximum active CIFS connections per node. In that case, consider adding interface nodes as well as increased storage bandwidth.

Home directory exports using substitution variables
Having a large number of Windows users all concurrently accessing the same CIFS share can lead to performance bottlenecks, because Windows clients automatically open the root folder of a share when connecting. In a home directory environment, it is recommended that substitution variables be used when creating CIFS exports for home directories. For example, home directory exports can be created using the %U substitution variable, which represents the user name, on the mkexport command (mkexport home /ibm/gpfs0/.../%U --cifs). For additional information about substitution variables, see the following section of the SONAS Information Center:

Administering->Managing->Managing shares and exports->Creating shares and exports->Using substitution variables
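
As an illustration, with a hypothetical home directory tree rooted at /ibm/gpfs0/home, a single export definition can serve every user:

    # %U expands to the connecting user's name, so user "alice" is
    # mapped to /ibm/gpfs0/home/alice (the path is a placeholder)
    mkexport home /ibm/gpfs0/home/%U --cifs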

Sharing files and directories among CIFS clients
If your environment calls for extensive file and directory sharing among a large number of users, such as a large set of department documents, you may experience a slowdown in performance. In this type of environment, it is possible to improve performance based on the specific needs of the environment. Consider the following options to optimize performance:

1. Limit all sharing to a single interface node, or to as few interface nodes as possible. Restricting the CIFS connections that share data to a single interface node reduces internal communication among the interface nodes in your SONAS system.

2. When sharing a directory among a large set of CIFS clients, distribute the workload into subdirectories where possible, to reduce the number of CIFS connections simultaneously accessing the same directory.

3. Utilize the CIFS share coherency options as described in the following section.

CIFS share coherency options
The SONAS system provides an advanced option, called coherency, to further increase the performance of CIFS workloads. This option controls the data consistency guarantees for a CIFS share. While all options described earlier affect cross-protocol interaction, this option applies when a share is being accessed by CIFS clients only.

Changing the default value of yes can improve performance. However, this option must only be considered after all other options have been utilized. Because it affects data integrity, extreme caution must be taken to determine the right setting for your data, and each share must be evaluated individually. Limit the use of this option to as few CIFS shares as possible.

When coherency is relaxed, applications must ensure that files and directories are not modified by multiple processes at the same time, and that file content is not read while another process is still writing the file; alternatively, the application must coordinate all file accesses to avoid conflicts.

The coherency option can be changed for a particular CIFS share by specifying the --cifs "coherency={yes|no|nodirs|norootdir}" option on the mkexport or chexport commands.

norootdir: Setting coherency=norootdir disables synchronization of directory locks for the root directory of the specified share, but keeps lock coherency for all files and directories within and underneath the share root. This option is useful for scenarios where a large set of connections accesses different subdirectories within the same share. The most common scenario for this value is a single share used for the home directories of a large number of users, like /ibm/gpfs0/homeroot, which then contains a subdirectory for each user.

nodirs: Setting coherency=nodirs disables synchronization of directory locks across the cluster nodes, but leaves lock coherency enabled for files. This option is useful if data sharing does not depend on changes to directory attributes, like timestamps, or on a consistent view of the directory contents.

yes: Setting coherency=yes enables cross-node lock coherency for both directories and files. This is the default setting.

no: Setting coherency=no completely disables cross-node lock coherency for both directories and files. It should only be used with applications that guarantee data consistency, and only after all other options to enhance performance have been exhausted.
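
For the home directory scenario described above, the root-directory locks can be relaxed while everything beneath the root stays coherent (homeroot is a placeholder share name):

    # Relax lock coherency only for the share's root directory;
    # the per-user subdirectories keep full cross-node coherency
    chexport homeroot --cifs "coherency=norootdir"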

Other Considerations
Planning for fail-over scenarios and upgrade
When planning for a SONAS system that will have a high number of
active concurrent CIFS connections, sufficient consideration must
be given to the potential performance impact during fail-over
scenarios.

In the event that a SONAS interface node fails, the IP addresses hosted by that interface node are relocated to other SONAS interface nodes, and CIFS client re-connections are redistributed among the remaining SONAS interface nodes. If the failed interface node hosted only one IP in the network group, all connections served by that interface node move to the single interface node taking over that IP, increasing the number of connections served by that node.

The recommended best practice is to assign multiple IPs to each interface node. In the event of an interface node failure, the affected IPs are then redistributed among the available interface nodes, distributing the CIFS connections across all nodes rather than shifting the entire workload from the failed interface node to a single other node and causing workload imbalance.
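
The sketch below shows the general shape of such a configuration. The addresses, interface name, network group and option layout are illustrative assumptions; consult the SONAS CLI reference for the exact mknw and attachnw syntax on your release:

    # Define a client-facing network carrying four IPs for a pair of
    # interface nodes, so each node normally hosts two addresses
    # (subnet and IPs are assumptions)
    mknw 10.0.0.0/24 --add 10.0.0.11,10.0.0.12,10.0.0.13,10.0.0.14
    # Attach the network to the interface nodes' external interface
    # (interface and group names are assumptions)
    attachnw 10.0.0.0/24 ethX0 -g int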

Therefore, when planning a SONAS system that will have a high
number of active concurrent CIFS connections, some buffer (in
terms of maximum active concurrent CIFS connections) needs to be
factored into the overall system configuration to account for the
potential performance implications during these fail-over scenarios.

During the SONAS software upgrade process, IP addresses are frequently relocated and, depending on the SONAS system configuration, multiple interface nodes may be suspended at once to minimize the upgrade time, leaving fewer interface nodes to serve the various protocol clients, including CIFS. Therefore, the maximum number of active CIFS connections cannot be sustained during the SONAS software upgrade process. Plan SONAS software upgrades during off-peak hours, or schedule a maintenance window, to minimize the impact on clients accessing the SONAS system. If it is not possible to find a long enough maintenance window, consult with your IBM representative to discuss the alternative of upgrading at a slower pace, one node at a time.

For information on upgrade planning, refer to the section Planning->Planning for software maintenance in the SONAS Information Center.

Scheduling advanced functions for data management
In most environments, it is typical to have an off-peak window of time, at some point during the day, which can be utilized to perform data management tasks like nightly backup, snapshots and asynchronous replication.

Ensure that there is a period of lower CIFS file serving activity, and that this time window is long enough for the desired advanced functions to complete.

When running advanced functions that require a file system policy scan, such as backup, asynchronous replication, GPFS policy invocations, or Active Cloud Engine (ACE) cache pre-population, schedule them sufficiently far apart that each policy scan has adequate time to complete, so that two policy scans never overlap.

- You can adjust the scheduling of TSM backups using the mktask, lstask and rmtask CLI commands.

- You can adjust the scheduling of async replications using the mkrepltask, lsrepltask and rmrepltask CLI commands.

- You can adjust the scheduling of file movement, migration and deletion policies using the mkpolicytask and rmpolicytask CLI commands.
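
As a sketch of adjusting a schedule (the task name and cron-style options are assumptions; see the mktask entry in the SONAS CLI reference for the exact syntax):

    # Show the data management tasks currently scheduled
    lstask
    # Hypothetical: recreate the TSM backup task to run at 02:00 daily
    # (task name and option names are assumptions)
    rmtask StartBackupTSM
    mktask StartBackupTSM --hour 2 --minute 0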

In a typical system, a gap of a couple of hours between two advanced functions should suffice. However, you should review the logs of each function to ensure that its policy scan completes before the next scheduled advanced function starts. If necessary, make adjustments such as increasing the time gap, adding more interface nodes, adding more disks for metadata, or, for SONAS Gateway configurations, adding Solid State Drives (SSDs) for metadata.

As you plan data management tasks for your SONAS system, you
need to ensure adequate resources are available to complete all
data management tasks. If these tasks do not complete in the
expected time window or impact overall performance of the system
during peak hours, consider adding additional resources (like
dedicated interface nodes for backup or additional storage
resources to enhance storage response time) to eliminate
bottlenecks.

Tuning and investigating performance concerns
If the SONAS system begins to experience performance problems related to the high number of active concurrent CIFS connections on each interface node, the following actions can be taken to improve the maximum number of active concurrent CIFS connections that can be supported by the entire SONAS system:

- You can use the SONAS Performance Center GUI or the lsperfdata CLI command to investigate which physical resources (CPU, memory, networking, disks) in the system are under high utilization, in an effort to identify the physical system resource that may be inhibiting or limiting performance (a brief CLI sketch follows this list).

- Ensure the SONAS interface nodes are configured with the maximum number of processors, memory and networking adapters.

- Add more SONAS interface nodes to your system

- Move certain advanced functions, such as TSM or NDMP
backups and async replications, to periods of time when CIFS
file serving activity will be lower

- Reduce the frequency at which file set and/or file system level snapshots are created and deleted, especially during the periods of highest CIFS user activity.

- Investigate and tune the performance of the underlying disk storage systems containing the file systems on which the CIFS shares reside. This should include the following:

- Ensure that the file system disks belonging to a given storage system are appropriately distributed between the pair of SONAS storage nodes to which that storage system is attached. One half of the file system disks in a given storage system should have one of the storage nodes identified as the primary NSD server, and the other half should have the other SONAS storage node in the pair assigned as the primary NSD server. The lsdisk CLI command with the -v (verbose) option shows the SONAS storage nodes that are the primary and secondary NSD servers for each file system disk.

- If GPFS metadata replication or data replication is being used, ensure that you have assigned the GPFS file system disks to failure groups in a manner that reasonably balances the I/O and data across a given set of file system disks, RAID arrays and disk storage systems. The lsdisk CLI command shows the failure group to which each file system disk is assigned, and the chdisk CLI command can be used to change it.

- If the underlying disk storage systems on which the file system resides are becoming a performance bottleneck, consider adding more physical resources to those disk storage systems. More resources could include more cache memory, more disk drives, more RAID arrays and more GPFS file system disks for the file system(s) containing the CIFS shares.

- If the existing disk storage systems on which the GPFS file system resides have reached their limit in terms of capacity and/or performance, consider adding more disk storage systems and extending the GPFS file system (on which the CIFS shares reside) by adding new file system disks (residing on the new disk storage systems) to it.
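
Tying the disk-related commands in this list together, a minimal sketch (the disk name is a placeholder and the chdisk option name is an assumption; lsperfdata arguments are release-specific and are omitted here):

    # Show each file system disk with its failure group and its
    # primary/secondary NSD servers
    lsdisk -v
    # Hypothetical rebalance: assign a disk to a different failure group
    # (option name is an assumption; see the chdisk CLI reference)
    chdisk gpfs0nsd1 --failuregroup 2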

References
- SONAS Concepts, Architecture, and Planning Guide Redbook, IBM publication number SC24-7963
- SONAS Implementation Guide Redbook, IBM publication number SC24-7962
- SONAS Copy Services Asynchronous Replication Best Practices, Version 1.4
- SONAS Active Cloud Engine (ACE) White Paper
