A highly available and scalable storage system forms the heart of data centers running Oracle databases. This article reflects the cumulative storage design and tuning recommendations from Dell and Oracle teams on Oracle Database 10g solutions using Dell™ PowerEdge™ servers, Dell/EMC Fibre Channel storage, and Oracle Automatic Storage Management on the Red Hat Linux operating system.
www.dell.com/powersolutions Reprinted from Dell Power Solutions, October 2004. Copyright © 2004 Dell Inc. All rights reserved. POWER SOLUTIONS 91
SCALABLE ENTERPRISE
1 To review the latest Dell and Oracle–supported configurations on Dell PowerEdge servers, visit http://www.dell.com/oracle.
2 This term does not connote an actual operating speed of 1 Gbps. For high-speed transmission, connection to a Gigabit Ethernet server and network infrastructure is required.
To set up network bonding for Broadcom or Intel NICs and to configure the private network, perform the following steps on each cluster node:

1. Log in as root.

2. Add the following line to the /etc/modules.conf file:

   alias bond0 bonding

3. For high availability, edit the /etc/modules.conf file and set the option for link monitoring. The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance:

   options bonding miimon=100

4. In the /etc/sysconfig/network-scripts/ directory, edit the ifcfg-bondn configuration file for bond number n. For example, the configuration file ifcfg-bond0 for the first bond (bond0) would appear as follows:

   DEVICE=bond0
   IPADDR=192.168.0.1
   ONBOOT=yes
   BOOTPROTO=none
   USERCTL=no

5. To use the virtual bonding device (such as bond0), all members of bond0 must be configured so that MASTER=bond0 and SLAVE=yes. For each member of the specified bonding device, edit its respective configuration file ifcfg-ethn in /etc/sysconfig/network-scripts/ as follows:

   DEVICE=ethn
   HWADDR=MAC ADDRESS
   ONBOOT=yes
   TYPE=Ethernet
   USERCTL=no
   MASTER=bond0
   SLAVE=yes
   BOOTPROTO=none

CX300: 60 disks in 1 disk processor enclosure (DPE) and 3 disk array enclosures (DAEs); capacity 8.8 TB
• Entry-level array
• Bandwidth: 680 MB/sec; 50,000 I/Os per second (IOPS)
• RAID levels: 0, 1, 3, 5, and 10
• 2 GB cache
• 2 Gbps DPE²
• Multipath I/O

CX500: 120 disks in 1 DPE and 7 DAEs; capacity 17.5 TB
• Midrange array
• Bandwidth: 780 MB/sec; 120,000 IOPS
• RAID levels: 0, 1, 3, 5, and 10
• 4 GB cache
• 2 Gbps DPE²
• Multipath I/O

CX700: 240 disks in 16 DAEs; capacity 35 TB
• Enterprise-class array
• Bandwidth: 1,520 MB/sec; 200,000 IOPS
• RAID levels: 0, 1, 3, 5, and 10
• 8 GB cache
• Storage processor enclosure (SPE)
• Multipath I/O

Figure 3. Specifications for Dell/EMC storage system enclosures

RAID technology. The Dell/EMC CX series storage systems use RAID technology, which groups separate inexpensive disks into one logical unit number (LUN) to help improve reliability, performance, or both. This approach spreads data across all disks, which are partitioned into units called stripe elements. Depending on the RAID level, the storage-system hardware can read from and write to multiple disks simultaneously and independently. Because this approach enables several read/write heads to work on the same task at once, RAID can enhance performance. The chunk of data read from or written to each disk at a time makes up the stripe element size. Figure 4 shows an example six-disk RAID-10 configuration in which each primary disk is first striped and then mirrored to another disk.

Figure 4. Example six-disk RAID-10 configuration: 64 KB stripe elements are written across the primary disks, and each primary disk (Disk 1, Disk 2, and so on) is mirrored to a corresponding mirror disk
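The stripe-element layout in Figure 4 can be sketched with a little shell arithmetic. This is an illustrative sketch only: the three-primary-disk count and 64 KB element size come from the figure, and the offset is a made-up example.

```shell
# Sketch of the RAID-10 layout in Figure 4: three primary disks striped
# in 64 KB elements, each mirrored (six disks total). Map a byte offset
# within the LUN to the primary disk that holds it.
element_kb=64     # stripe element size
primaries=3       # number of primary disks

offset_kb=320     # example offset into the LUN (illustrative value)

element=$(( offset_kb / element_kb ))   # stripe element index
disk=$(( element % primaries ))         # primary disk holding the element
echo "offset ${offset_kb} KB -> element ${element}, primary disk ${disk}"
```

RAID-10 writes the element to the computed primary disk and then duplicates it on that disk's mirror, so a single disk failure in the pair does not lose the element.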
Multipath I/O
The Dell/EMC CX series storage systems are designed to support multipath routing between SAN switches.3 In a typical topology, a node has multiple Fibre Channel HBAs, each of which is connected to the same SAN, resulting in multiple paths to the same storage devices. Redundant paths in a SAN help provide failover capability when any component in the data path fails. Multiple paths can also enhance efficiency, allowing administrators to load balance SAN traffic by distributing I/O across all available paths. This approach can enable the SAN to take advantage of the additional bandwidth provided by each physical connection. EMC PowerPath® software is designed to work with Dell/EMC storage systems to help create efficient I/O path management. PowerPath also helps enhance the high-availability capabilities of Dell/EMC storage systems because it is designed to automatically detect and recover from server-to-storage path failures.

3 For more information, see "Building a Highly Scalable and Available Data Environment for Oracle9i RAC" by Zafar Mahmood, Paul Rad, and Robert Nadon in Dell Power Solutions, November 2003.

ASM
ASM is an Oracle database–specific file system and volume manager that is built into the Oracle Database 10g kernel.4 ASM is designed to allow Oracle Database 10g to directly manage raw disks and help eliminate the need for a file system and volume manager to manage Oracle files—including data files, log files, archive logs, control files, and recovery manager backup sets. Database administrators can manage the creation of storage pools by using new SQL commands, Oracle Enterprise Manager 10g, or Oracle Database Creation Assistant.

4 For more information about ASM, see "Enabling a Highly Scalable and Available Storage Environment Using Oracle Automatic Storage Management" by Zafar Mahmood, Joel Borellis, Mahmoud Ahmadian, and Paul Rad in Dell Power Solutions, June 2004.

ASM virtualizes storage into disk groups, which serve as repositories for Oracle database files. A disk group is a logical grouping of several individual disks in a storage array or several RAID LUNs. ASM enables database administrators to manage a small number of disk groups and automatically manages the placement of data files within those disk groups.

ASM helps provide the following features and functions, which can be critical for an enterprise database environment:

• Dynamic provisioning: Dynamically scales storage capacity without affecting database availability.
• I/O balancing: Automatically balances I/O across all available disks defined in the storage environment. Also helps prevent hot spots and maximize performance by adjusting rapidly to changing data-access patterns.
• High availability: Provides options for fault tolerance and high availability by creating data redundancy through software when hardware RAID is not available. In ASM, software redundancy is called internal redundancy, whereas redundancy created through hardware RAID is known as external redundancy.

Setting up ASM on Dell/EMC storage
Oracle has many file types—such as data, log, temp, archive, undo, system, control, and backup—each characterized by different I/O behavior. In addition, the Oracle I/O workload is application dependent. Understanding the I/O workload for each database file used by an application can be difficult, and a new or unfamiliar application can make this task daunting. This section provides an overview of various ASM techniques that can be used to configure a Dell/EMC CX series storage system for optimal performance with Oracle Database 10g.

Implementing hardware mirroring
Through hardware mirroring, ASM can be configured with external redundancy, which typically performs better than internal redundancy because the server does not need to consume CPU cycles for managing redundancy. Instead, these cycles are available for the database or for other operating system–related operations. The simplest way to help ensure that data is not lost is to implement mirroring at the storage subsystem level. Generally, the most straightforward approach is to implement hardware mirroring at the disk or partition level and then implement striping on top of the mirrored disks. Mirrored data can be lost only when multiple disk failures occur—and the probability that multiple disk drive failures will occur at the same time is relatively small given the highly reliable nature of current disk drives. Also, many systems allow a spare drive to be configured so that repairs can be performed automatically after a RAID drive failure.

Using a hot spare
A required component of an enterprise storage setup, the hot spare is a dedicated replacement disk on which end-user applications cannot store information. Instead, the hot spare is reserved as a global replacement: if any disk in a RAID-5 group, RAID-1 mirrored pair, or RAID-10 group fails, the storage processor (SP) automatically rebuilds the failed disk's structure on the hot spare. When the SP finishes rebuilding the hot spare, the disk group functions as usual, using the hot spare instead of the failed disk. When the failed disk is replaced, the SP copies the data from the hot spare onto the replacement disk.

Using ASM to create a disk group
This sidebar describes how to set up shared storage on Dell/EMC CX series storage systems using ASM to create a disk group with nine raw partitions:

2. Enter the following commands to change the names of the raw devices:

   mv /dev/raw/raw1 /dev/raw/ASM1
   mv /dev/raw/raw2 /dev/raw/ASM2
   ...
   mv /dev/raw/raw8 /dev/raw/ASM8
   mv /dev/raw/raw9 /dev/raw/ASM9

3. Enter the following commands to set ownership of the devices to user oracle of group dba:

   chown oracle.dba /dev/raw/ASM1
   chown oracle.dba /dev/raw/ASM2
   ...
   chown oracle.dba /dev/raw/ASM8
   chown oracle.dba /dev/raw/ASM9

4. Edit the /etc/sysconfig/rawdevices file and add the following lines:

   /dev/raw/ASM1 /dev/emcpowerb
   /dev/raw/ASM2 /dev/emcpowerc
   ...
   /dev/raw/ASM8 /dev/emcpoweri
   /dev/raw/ASM9 /dev/emcpowerj

5. Restart the raw devices service:

   service rawdevices restart

6. Create the initASM.ora file containing the following line:

   INSTANCE_TYPE = +ASM

7. Connect to the Oracle SQL*Plus command-line interface as SYSDBA for SID=ASM.

8. Enter the following commands:

   STARTUP NOMOUNT PFILE=initASM.ora
   CREATE SPFILE FROM PFILE=initASM.ora
   CREATE DISKGROUP dgroup1 EXTERNAL REDUNDANCY DISK
   '/dev/raw/ASM1','/dev/raw/ASM2','/dev/raw/ASM3',
   '/dev/raw/ASM4','/dev/raw/ASM5','/dev/raw/ASM6',
   '/dev/raw/ASM7','/dev/raw/ASM8','/dev/raw/ASM9'

9. Create the initASMDB.ora file containing the lines:

   INSTANCE_TYPE = RDBMS
   DB_CREATE_FILE_DEST = '+dgasm'

10. Connect to SQL*Plus as SYSDBA for SID=ASMDB.

11. Enter the following command:

   STARTUP NOMOUNT PFILE=initASMDB.ora

Establishing a double-striped layout
Striping is a technique available on Dell/EMC storage systems that can enhance performance by spreading the I/O load across multiple disk spindles. Striping all files across all disks can help ensure that the full bandwidth of all disk drives is available for any operation. To optimize disk bandwidth and use all available disks, the data to be accessed must be spread across as many disks as possible so that every disk in the disk farm has roughly equivalent utilization. Any disk that is used more than the other disks could become a performance bottleneck. Striping all files across all disks also helps equalize the load across disk drives and eliminate hot spots. This process is also designed to improve response time by shortening disk queues.
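The double-striped (stripe-on-a-stripe) placement can be sketched numerically in the same spirit. This is a simplified model, not ASM's actual allocation algorithm: it assumes ASM's 1 MB allocation unit, a simple round-robin of extents across nine LUNs, and three primary disks per RAID-10 LUN with 64 KB stripe elements; the file offset is a made-up example.

```shell
# Sketch of a double-striped layout: ASM stripes a file across LUNs in
# 1 MB extents (assumed round-robin for illustration), and each RAID-10
# LUN stripes those writes across its primary disks in 64 KB elements.
luns=9            # RAID-10 LUNs in the ASM disk group
primaries=3       # primary disks per LUN (assumed)
asm_extent_kb=1024
element_kb=64

offset_kb=5248    # example file offset: 5 MB + 128 KB

extent=$(( offset_kb / asm_extent_kb ))      # ASM extent index
lun=$(( extent % luns ))                     # LUN holding the extent
within_kb=$(( offset_kb % asm_extent_kb ))   # offset inside the extent
disk=$(( (within_kb / element_kb) % primaries ))
echo "offset ${offset_kb} KB -> LUN ${lun}, primary disk ${disk}"
```

The two modulo steps are the two stripe levels: coarse striping across LUNs by ASM, then fine striping across spindles by the array, so sequential I/O touches many disks at both granularities.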
Because ASM offers the capability to implement host-based striping, using ASM with storage-system RAID striping such as RAID-10 can generate an optimal, double-striped (a stripe on a stripe) layout for spreading I/O across multiple disk spindles. Figure 5 shows a sample configuration in which each LUN in the storage array with multiple disk array enclosures (/dev/raw/ASM1, /dev/raw/ASM2, and so on) is configured as RAID-10. When the LUNs are specified to ASM as physical devices, ASM can perform another level of striping among the LUNs to generate a double-striped layout. The sidebar "Using ASM to create a disk group" provides more information about setting up the nine partitions shown in Figure 5.

Figure 5. Using ASM to implement double striping on a Dell/EMC CX700 with multiple disk array enclosures configured as RAID-10. The figure shows two cluster nodes, each running Oracle Database 10g with ASM and PowerPath over dual HBAs and linked by a private NIC, attached through Fibre Channel switches to a Dell/EMC CX700 presenting LUNs /dev/raw/ASM1 through /dev/raw/ASM9: one ASM disk group using nine LUNs for data files and another using nine LUNs for flashback recovery

Simplifying the management of database files
Kept and maintained by the Oracle kernel, ASM is a file system created specifically for Oracle data files on top of raw devices. ASM is designed to distribute I/O on top of ASM disk groups, which comprise one or more raw disks that are mirrored by the storage system. The disk group is used for creating files, and files can be extended across the available disks in 1 MB extents. The more spindles that are used within a disk group, the more flexibility the Oracle Database 10g server has to spread the I/O among disks, which can help increase performance and improve data redundancy.

ASM is designed to help simplify file management in an Oracle database. By using this Oracle Database 10g feature, database administrators can easily create a highly scalable storage solution from a set of disks. ASM provides the necessary tool set to free database administrators from the mundane tasks of adding, shifting, and removing disks—helping to improve availability and response time while contributing to reduced TCO.

Tesfamariam Michael is a software engineer in the Dell Database and Application Engineering Department of the Dell Product Group. Tesfamariam has an M.S. in Computer Science and a B.S. in Mathematics from Clark Atlanta University, and a B.S. in Electrical Engineering from Georgia Institute of Technology.

Jay Kozak has been with Oracle since 1996 and is currently a technical business development manager in the Server Technologies organization. Recently, Jay has been primarily focused on the joint Oracle and Dell engineering initiatives. Jay graduated from Northern Illinois University with a B.S. in Finance in 1989 and has done post-graduate computer science work at the Illinois Institute of Technology.