
SCALABLE ENTERPRISE

Best Practices for Oracle Database 10g Automatic Storage Management on Dell/EMC Storage

A highly available and scalable storage system forms the heart of data centers running Oracle databases. This article reflects the cumulative storage design and tuning recommendations from Dell and Oracle teams on Oracle Database 10g solutions using Dell™ PowerEdge™ servers, Dell/EMC Fibre Channel storage, and Oracle Automatic Storage Management on the Red Hat Linux operating system.

BY PAUL RAD, RAMESH RAJAGOPALAN, TESFAMARIAM MICHAEL, AND JAY KOZAK

Several architectural enhancements have been introduced in Oracle Database 10g that can benefit administrators tasked with deploying and managing an Oracle database solution. The enhanced techniques and methods supported by Oracle Database 10g either automate or greatly simplify the process of configuring, monitoring, and managing an Oracle database, and can assist organizations with implementing the degree of availability and performance that best meets their service-level objectives. Key techniques include:

• Automatic Storage Management (ASM): For optimal layout of database files
• Automatic Workload Repository (AWR): For gathering performance data
• Automatic Database Diagnostics Monitor (ADDM): For analyzing performance issues
• Automatic Workload Management (AWM): For managing and controlling processing resources required for applications
• Virtualization and provisioning: For efficient, on-demand usage of processing resources

ASM addresses one of the major challenges faced by database administrators: the storage management process, which involves creating and tuning a storage layout for database files, identifying hot spots, monitoring I/O distribution among the physical disks, and monitoring overall storage capacity (see Figure 1). As the database grows in size, this often-daunting storage management process needs to be repeated. Moreover, many of the associated administrative tasks require taking the database offline, which can reduce availability to a level that may not be acceptable. ASM can help to improve the storage management process and to reduce total cost of ownership (TCO) by allowing storage to grow as needed without requiring up-front investments that account for future growth.

Figure 1. Managing storage for a database (a continuing cycle: create and tune the storage layout, monitor disk I/O distribution and storage capacity, and identify hot spots)

Before ASM, raw devices and the Oracle Cluster File System (OCFS) were the available storage management options for an Oracle database running on the Linux operating system. However, these methods are subject to the following limitations:

• Data file management on raw devices can be difficult with respect to name space mapping and backup schemes.

• Storage must be allocated for both raw devices and OCFS with future growth in mind—resulting in significant initial costs.
• A storage solution is not scalable if storage allocation is based only on short-term requirements. Expanding the storage for a data file requires creating a larger data file, exporting the data from the existing data file, and importing the data into the larger data file. During these operations, the database is not available.
• Both raw devices and OCFS abstract the organization (RAID level) of the physical disk group. Consequently, an application such as a database server cannot take advantage of physical properties—host bus adapters (HBAs), SCSI adapters, bus speeds, disk spindles, and so forth—to balance I/O or redistribute data as the data file grows.

This article provides an overview of Oracle Real Application Clusters (RAC) 10g running over a storage area network (SAN) that comprises Dell servers and Dell/EMC storage. It also describes how Dell/EMC storage systems and ASM can help improve database performance and availability.

Implementing Oracle RAC 10g on Dell clusters

Dell and Oracle have developed a RAC configuration that is based on a SAN comprising up to eight Dell PowerEdge server nodes, a Dell/EMC CX series Fibre Channel storage enclosure, a Fibre Channel network, a private network, and a public network (see Figure 2). The storage enclosure can be connected to the nodes directly or through Fibre Channel switches. The PowerEdge server nodes run the Red Hat Enterprise Linux AS 3 operating system with Update 2 or higher and Oracle Database 10g Enterprise Edition database software.1

An Oracle RAC 10g cluster requires a private network and a public network. The private network uses two Gigabit2 Ethernet network interface cards (NICs) that are bonded together using the Red Hat Enterprise Linux network bonding feature (see the sidebar "Configuring the private network" for more information). Primary communication among the cluster nodes takes place over the private network. Clients and other application servers access the database over the public network.
Dell/EMC storage systems

The modular Dell/EMC CX series of Fibre Channel storage systems incorporates RAID, multipath I/O, and on-demand storage expansion capabilities to help provide cost-effective, continuous availability for critical business environments through either SAN or direct attach configurations (see Figure 3 for representative entry-level, midrange, and enterprise-class models). A SAN architecture allows administrators to increase storage incrementally and add servers as business demands grow. Key features of Dell/EMC CX series storage systems include the following:

• High availability: Modular, redundant hardware architecture, combined with switches, creates multiple paths to the storage to help provide business continuance.
• Scalability and capacity: Up to 35 TB of storage can be supported using Fibre Channel 2 (FC2) drives. Disks and enclosures can be added as needed, resulting in efficient utilization of storage.
• Manageability: EMC® Navisphere® and VisualSAN® software help provide simple, powerful storage management capabilities, including nondisruptive software upgrades.

Figure 2. Architecture of Oracle RAC 10g cluster with Dell/EMC Fibre Channel storage (Dell PowerEdge servers serve clients over the public network, communicate with each other over a dedicated private network of bonded NICs, and reach Dell/EMC Fibre Channel storage through redundant Fibre Channel switches)

1 To review the latest Dell and Oracle–supported configurations on Dell PowerEdge servers, visit http://www.dell.com/oracle.
2 This term does not connote an actual operating speed of 1 Gbps. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.



Dell/EMC storage system: CX300
   Maximum number of disks: 60 in 1 disk processor enclosure (DPE) and 3 disk array enclosures (DAEs)
   Maximum storage capacity with FC2 drives: 8.8 TB
   Specifications: Entry-level array; bandwidth: 680 MB/sec; 50,000 I/Os per second (IOPS); RAID levels: 0, 1, 3, 5, and 10; 2 GB cache; 2 Gbps DPE2; multipath I/O

Dell/EMC storage system: CX500
   Maximum number of disks: 120 in 1 DPE and 7 DAEs
   Maximum storage capacity with FC2 drives: 17.5 TB
   Specifications: Midrange array; bandwidth: 780 MB/sec; 120,000 IOPS; RAID levels: 0, 1, 3, 5, and 10; 4 GB cache; 2 Gbps DPE2; multipath I/O

Dell/EMC storage system: CX700
   Maximum number of disks: 240 in 16 DAEs
   Maximum storage capacity with FC2 drives: 35 TB
   Specifications: Enterprise-class array; bandwidth: 1520 MB/sec; 200,000 IOPS; RAID levels: 0, 1, 3, 5, and 10; 8 GB cache; storage processor enclosure (SPE) options; multipath I/O

Figure 3. Specifications for Dell/EMC storage system enclosures
CONFIGURING THE PRIVATE NETWORK

To set up network bonding for Broadcom or Intel NICs and to configure the private network, perform the following steps on each cluster node:

1. Log in as root.

2. Add the following line to the /etc/modules.conf file:

   alias bond0 bonding

3. For high availability, edit the /etc/modules.conf file and set the option for link monitoring. The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance:

   options bonding miimon=100

4. In the /etc/sysconfig/network-scripts/ directory, edit the ifcfg-bondn configuration file for bond number n. For example, the configuration file ifcfg-bond0 for the first bond (bond0) would appear as follows:

   DEVICE=bond0
   IPADDR=192.168.0.1
   ONBOOT=yes
   BOOTPROTO=none
   USERCTL=no

5. To use the virtual bonding device (such as bond0), all members of bond0 must be configured so that MASTER=bond0 and SLAVE=yes. For each member of the specified bonding device, edit its respective configuration file ifcfg-ethn in /etc/sysconfig/network-scripts/ as follows:

   DEVICE=ethn
   HWADDR=MAC ADDRESS
   ONBOOT=yes
   TYPE=Ethernet
   USERCTL=no
   MASTER=bond0
   SLAVE=yes
   BOOTPROTO=none

6. Enter the following command:

   service network restart
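After the restart, the state of the bond can be verified through the Linux bonding driver's /proc interface; this check is a small sketch based on the bond0 example above:

   # Show bonding mode, MII link-monitoring status, and slave NICs
   cat /proc/net/bonding/bond0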

RAID technology. The Dell/EMC CX series storage systems use RAID technology, which groups separate inexpensive disks into one logical unit number (LUN) to help improve reliability, performance, or both. This approach spreads data across all disks, which are partitioned into units called stripe elements. Depending on the RAID level, the storage-system hardware can read from and write to multiple disks simultaneously and independently. Because this approach enables several read/write heads to work on the same task at once, RAID can enhance performance. The chunk of data read from or written to each disk at a time makes up the stripe element size. Figure 4 shows an example six-disk RAID-10 configuration in which each primary disk is first striped and then mirrored to another disk.

Figure 4. RAID-10 with six disks (data is striped in 64 KB elements across three primary disks, each mirrored to its own second disk; the three 64 KB elements in a row make up the stripe size)
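To make the striping arithmetic concrete, the shell sketch below computes which primary disk and which stripe a given logical offset falls on in the Figure 4 layout. The 64 KB element size and three primary disks come from the figure; the script itself is purely illustrative:

   # Map a logical byte offset to a disk and stripe in a
   # three-primary-disk RAID-10 layout with 64 KB stripe elements
   ELEMENT=65536            # stripe element size in bytes (64 KB)
   DISKS=3                  # number of primary disks (each mirrored)
   OFFSET=$((300 * 1024))   # example logical offset: 300 KB

   CHUNK=$((OFFSET / ELEMENT))   # stripe element index
   DISK=$((CHUNK % DISKS + 1))   # primary disk number (1-3)
   STRIPE=$((CHUNK / DISKS))     # stripe (row) number
   echo "Offset $OFFSET falls in stripe $STRIPE on disk $DISK"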



Multipath I/O

The Dell/EMC CX series storage systems are designed to support multipath routing between SAN switches.3 In a typical topology, a node has multiple Fibre Channel HBAs, each of which is connected to the same SAN, resulting in multiple paths to the same storage devices. Redundant paths in a SAN help provide failover capability when any component in the data path fails. Multiple paths can also enhance efficiency, allowing administrators to load balance SAN traffic by distributing I/O across all available paths. This approach can enable the SAN to take advantage of the additional bandwidth provided by each physical connection. EMC PowerPath® software is designed to work with Dell/EMC storage systems to help create efficient I/O path management. PowerPath also helps enhance the high-availability capabilities of Dell/EMC storage systems because it is designed to automatically detect and recover from server-to-storage path failures.
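For example, the PowerPath powermt utility can display and manage these paths from the host. The invocation below is a sketch; exact output and supported options can vary by PowerPath release:

   # Display every PowerPath-managed device and the state of its paths
   powermt display dev=all

   # Return repaired paths to service after a failed component is fixed
   powermt restore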
ASM

ASM is an Oracle database–specific file system and volume manager that is built into the Oracle Database 10g kernel.4 ASM is designed to allow Oracle Database 10g to directly manage raw disks and help eliminate the need for a file system and volume manager to manage Oracle files—including data files, log files, archive logs, control files, and Recovery Manager backup sets. Database administrators can manage the creation of storage pools by using new SQL commands, Oracle Enterprise Manager 10g, or Oracle Database Creation Assistant.

ASM virtualizes storage into disk groups, which serve as repositories for Oracle database files. A disk group is a logical grouping of several individual disks in a storage array or several RAID LUNs. ASM enables database administrators to manage a small number of disk groups and automatically manages the placement of data files within those disk groups.

ASM helps provide the following features and functions, which can be critical for an enterprise database environment:

• Dynamic provisioning: Dynamically scales storage capacity without affecting database availability.
• I/O balancing: Automatically balances I/O across all available disks defined in the storage environment. Also helps prevent hot spots and maximize performance by adjusting rapidly to changing data-access patterns.
• High availability: Provides options for fault tolerance and high availability by creating data redundancy through software when hardware RAID is not available. In ASM, software redundancy is called internal redundancy, whereas redundancy created through hardware RAID is known as external redundancy.

Setting up ASM on Dell/EMC storage

Oracle has many file types—such as data, log, temp, archive, undo, system, control, and backup—each characterized by different I/O behavior. In addition, the Oracle I/O workload is application dependent. Understanding the I/O workload for each database file used by an application can be difficult, and a new or unfamiliar application can make this task daunting. This section provides an overview of various ASM techniques that can be used to configure a Dell/EMC CX series storage system for optimal performance with Oracle Database 10g.

3 For more information, see “Building a Highly Scalable and Available Data Environment for Oracle9i RAC” by Zafar Mahmood, Paul Rad, and Robert Nadon in Dell Power Solutions, November 2003.
4 For more information about ASM, see “Enabling a Highly Scalable and Available Storage Environment Using Oracle Automatic Storage Management” by Zafar Mahmood, Joel Borellis, Mahmoud Ahmadian, and
Paul Rad in Dell Power Solutions, June 2004.



Implementing hardware mirroring

Through hardware mirroring, ASM can be configured with external redundancy, which typically performs better than internal redundancy because the server does not need to consume CPU cycles for managing redundancy. Instead, these cycles are available for the database or for other operating system–related operations. The simplest way to help ensure that data is not lost is to implement mirroring at the storage subsystem level. Generally, the most straightforward approach is to implement hardware mirroring at the disk or partition level and then implement striping on top of the mirrored disks. Mirrored data can be lost only when multiple disk failures occur—and the probability that multiple disk drive failures will occur at the same time is relatively small given the highly reliable nature of current disk drives. Also, many systems allow a spare drive to be configured so that repairs can be performed automatically after a RAID drive failure.
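To illustrate the distinction, the SQL*Plus sketch below contrasts external redundancy (the Dell/EMC array mirrors the LUNs in hardware) with internal redundancy (ASM mirrors extents in software across failure groups). The disk group names and device paths are hypothetical:

   -- External redundancy: rely on hardware RAID in the storage array
   CREATE DISKGROUP dgext EXTERNAL REDUNDANCY DISK
   '/dev/raw/diskA', '/dev/raw/diskB';

   -- Internal (normal) redundancy: ASM mirrors data in software
   -- across two failure groups, consuming host CPU cycles
   CREATE DISKGROUP dgint NORMAL REDUNDANCY
   FAILGROUP fg1 DISK '/dev/raw/diskC'
   FAILGROUP fg2 DISK '/dev/raw/diskD';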
Using a hot spare

A required component of an enterprise storage setup, the hot spare is a dedicated replacement disk on which end-user applications cannot store information. Instead, the hot spare is reserved as a global replacement: if any disk in a RAID-5 group, RAID-1 mirrored pair, or RAID-10 group fails, the storage processor (SP) automatically rebuilds the failed disk's structure on the hot spare. When the SP finishes rebuilding the hot spare, the disk group functions as usual, using the hot spare instead of the failed disk. When the failed disk is replaced, the SP copies the data from the hot spare onto the replacement disk.

Establishing a double-striped layout

Striping is a technique available on Dell/EMC storage systems that can enhance performance by spreading the I/O load across multiple disk spindles. Striping all files across all disks can help ensure that the full bandwidth of all disk drives is available for any operation. To optimize disk bandwidth and use all available disks, the data to be accessed must be spread across as many disks as possible so that every disk in the disk farm has roughly equivalent utilization. Any disk that is used more than the other disks could become a performance bottleneck. Striping all files across all disks can help equalize the load across disk drives and help eliminate hot spots. This process is also designed to improve response time by shortening disk queues.



Because ASM offers the capability to implement host-based striping, using ASM with storage-system RAID striping such as RAID-10 can generate an optimal, double-striped (a stripe on a stripe) layout for spreading I/O across multiple disk spindles. Figure 5 shows a sample configuration in which each LUN in the storage array with multiple disk array enclosures (/dev/raw/ASM1, /dev/raw/ASM2, and so on) is configured as RAID-10. When the LUNs are specified to ASM as physical devices, ASM can perform another level of striping among the LUNs to generate a double-striped layout. The sidebar "Using ASM to create a disk group" provides more information about setting up the nine partitions shown in Figure 5.

Figure 5. Using ASM to implement double striping on a Dell/EMC CX700 with multiple disk array enclosures configured as RAID-10 (two cluster nodes run Oracle Database 10g, ASM, and PowerPath; each node's dual HBAs connect through redundant Fibre Channel switches to nine RAID-10 LUNs, /dev/raw/ASM1 through /dev/raw/ASM9, organized into ASM disk groups for data files and flashback recovery)

USING ASM TO CREATE A DISK GROUP

This sidebar describes how to set up shared storage on Dell/EMC CX series storage systems using ASM to create a disk group with nine raw partitions:

1. Log in as root.

2. Enter the following commands to change the names of the raw devices:

   mv /dev/raw/raw1 /dev/raw/ASM1
   mv /dev/raw/raw2 /dev/raw/ASM2
   ...
   mv /dev/raw/raw8 /dev/raw/ASM8
   mv /dev/raw/raw9 /dev/raw/ASM9

3. Enter the following commands to set ownership of the devices to user oracle of group dba:

   chown oracle.dba /dev/raw/ASM1
   chown oracle.dba /dev/raw/ASM2
   ...
   chown oracle.dba /dev/raw/ASM8
   chown oracle.dba /dev/raw/ASM9

4. Edit the /etc/sysconfig/rawdevices file and add the following lines:

   /dev/raw/ASM1 /dev/emcpowerb
   /dev/raw/ASM2 /dev/emcpowerc
   ...
   /dev/raw/ASM8 /dev/emcpoweri
   /dev/raw/ASM9 /dev/emcpowerj

5. Enter the command:

   service rawdevices restart

6. Create the initASM.ora file containing the following line:

   INSTANCE_TYPE = ASM

7. Connect to the Oracle SQL*Plus command-line interface as SYSDBA for SID=ASM.

8. Enter the following commands:

   STARTUP NOMOUNT PFILE=initASM.ora
   CREATE SPFILE FROM PFILE=initASM.ora;
   CREATE DISKGROUP dgroup1 EXTERNAL REDUNDANCY DISK
   '/dev/raw/ASM1','/dev/raw/ASM2','/dev/raw/ASM3',
   '/dev/raw/ASM4','/dev/raw/ASM5','/dev/raw/ASM6',
   '/dev/raw/ASM7','/dev/raw/ASM8','/dev/raw/ASM9';

9. Create the initASMDB.ora file containing the lines:

   INSTANCE_TYPE = RDBMS
   DB_CREATE_FILE_DEST = '+dgroup1'

10. Connect to SQL*Plus as SYSDBA for SID=ASMDB.

11. Enter the following command:

   STARTUP NOMOUNT PFILE=initASMDB.ora
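Once the disk group exists and a database has been created in it, a short test from the database instance can confirm that Oracle-managed files land in the disk group. This is a sketch; the tablespace name is hypothetical, and DB_CREATE_FILE_DEST is assumed to point at the disk group as in step 9:

   -- With DB_CREATE_FILE_DEST set, no file name or path is needed;
   -- ASM creates and names the data file inside the disk group
   CREATE TABLESPACE asmtest;

   -- Verify that the new data file was created in the ASM disk group
   SELECT file_name FROM dba_data_files
   WHERE tablespace_name = 'ASMTEST';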
Simplifying the management of database files

Kept and maintained by the Oracle kernel, ASM is a file system created specifically for Oracle data files on top of raw devices. ASM is designed to distribute I/O on top of ASM disk groups, which comprise one or more raw disks that are mirrored by the storage system. The disk group is used for creating files, whose 1 MB extents can be spread across the available disks. The more spindles that are used within a disk group, the more flexibility the Oracle Database 10g server has to spread the I/O among disks, which can help increase performance and improve data redundancy.

ASM is designed to help simplify file management in an Oracle database. By using this Oracle Database 10g feature, database administrators can easily create a highly scalable storage solution from a set of disks. ASM provides the necessary tool set to free database administrators from the mundane tasks of adding, shifting, and removing disks—helping to improve availability and response time while contributing to reduced TCO.
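For example, growing a disk group is a single online operation, after which ASM rebalances existing data across the new disk automatically. The SQL*Plus sketch below assumes the dgroup1 disk group from the sidebar; the added device path is hypothetical:

   -- Add a LUN to the disk group; ASM rebalances extents online
   ALTER DISKGROUP dgroup1 ADD DISK '/dev/raw/ASM10';

   -- Optionally monitor the progress of the rebalance operation
   SELECT operation, state, est_minutes FROM v$asm_operation;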

Paul Rad is a senior software engineer in the Dell Database and Application Engineering Department of the Dell Product Group. Paul has master's degrees in both Computer Science and Computer Engineering from The University of Texas at San Antonio.

Ramesh Rajagopalan is a lead software engineer in the Dell Database and Application Engineering Department of the Dell Product Group. His current areas of focus include Oracle RAC solutions and performance analysis of Dell cluster solutions. Ramesh has a Bachelor of Engineering in Computer Science from the Indian Institute of Science, Bangalore.

Tesfamariam Michael is a software engineer in the Dell Database and Application Engineering Department of the Dell Product Group. Tesfamariam has an M.S. in Computer Science and a B.S. in Mathematics from Clark Atlanta University, and a B.S. in Electrical Engineering from Georgia Institute of Technology.

Jay Kozak has been with Oracle since 1996 and is currently a technical business development manager in the Server Technologies organization. Recently, Jay has been primarily focused on the joint Oracle and Dell engineering initiatives. Jay graduated from Northern Illinois University with a B.S. in Finance in 1989 and has done post-graduate computer science work at the Illinois Institute of Technology.

FOR MORE INFORMATION

Oracle Automatic Storage Management:
http://otn.oracle.com/obe/obe10gdb/manage/asm/asm.htm

Dell and Oracle–supported configurations:
http://www.dell.com/oracle

Dell/EMC storage:
http://www.dell.com/emc

Reprinted from Dell Power Solutions, October 2004. Copyright © 2004 Dell Inc. All rights reserved.
