Rich Long
Oracle Corporation
The Challenge
Today's databases
- large and growing
- acceptable performance
- expandable and scalable
- high availability
- low maintenance
Storage requirements
Outline
Introduction
- get excited about ASM
- complex, demanding, but achievable
- simple, easy, better
Conclusion
- direct I/O
- asynchronous I/O
- striping
- mirroring
- load balancing
- avoids all the worst mistakes
Buffered I/O
Reads
- stat: physical reads
- read from cache
- may require physical read

Writes
- written to cache synchronously (Oracle waits until the data is safely on disk too)
Direct I/O
- bypasses the file system cache
- the file system cache does not contain database blocks (so it is smaller)
- the database cache can be larger
[Diagram: server memory showing the File System Cache, PGA, SGA, and Database Cache; legend distinguishes hot data, recent warm data, older warm data, recent cold data, and o/s data]
Cache Effectiveness
Buffered I/O
- overlap wastes memory
- caches single-use data
- simple LRU policy
- file system cache hits are relatively expensive
- extra physical read and write overheads
- floods the file system cache with Oracle data

Direct I/O
- no overlap
- no single-use data
- segmented LRU policy
- all cached data is found in the database cache
- no physical I/O overheads
- non-Oracle data cached more effectively
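The cache-policy contrast above can be made concrete with a small simulation (a sketch: the workload, cache sizes, and two-segment design are illustrative assumptions, not Oracle's actual cache code). A plain LRU cache lets a stream of single-use scan blocks evict the hot set, while a segmented LRU only promotes blocks that are referenced again, so single-use data never floods the protected segment.

```python
from collections import OrderedDict

class SimpleLRU:
    """Plain LRU: every block touched goes straight to the MRU end."""
    def __init__(self, size):
        self.size, self.cache = size, OrderedDict()
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1
            self.cache[block] = True
            if len(self.cache) > self.size:
                self.cache.popitem(last=False)        # evict the LRU block

class SegmentedLRU:
    """New blocks enter a probation segment; only blocks referenced
    again are promoted, so single-use data never reaches the
    protected segment."""
    def __init__(self, size):
        self.prob_size = size // 2
        self.prot_size = size - self.prob_size
        self.prob, self.prot = OrderedDict(), OrderedDict()
        self.hits = self.misses = 0

    def access(self, block):
        if block in self.prot:                        # hit in protected
            self.hits += 1
            self.prot.move_to_end(block)
        elif block in self.prob:                      # re-reference: promote
            self.hits += 1
            del self.prob[block]
            self.prot[block] = True
            if len(self.prot) > self.prot_size:       # demote LRU protected
                lru, _ = self.prot.popitem(last=False)
                self.prob[lru] = True
        else:                                         # miss: enter probation
            self.misses += 1
            self.prob[block] = True
        if len(self.prob) > self.prob_size:
            self.prob.popitem(last=False)

def workload():
    """Hot blocks re-referenced quickly, interleaved with a long
    stream of single-use scan blocks."""
    scan = iter(range(10_000, 100_000))
    for _ in range(200):
        for hot in range(40):                         # 40-block hot set
            yield hot
            for _ in range(3):
                yield next(scan)                      # single-use blocks
            yield hot                                 # quick re-reference

for cache in (SimpleLRU(100), SegmentedLRU(100)):
    for block in workload():
        cache.access(block)
    rate = cache.hits / (cache.hits + cache.misses)
    print(f"{type(cache).__name__}: hit rate {rate:.0%}")
```

With equal memory, the segmented cache roughly doubles the hit rate on this trace, because the scan blocks die in probation instead of evicting the hot set.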
I/O Efficiency
Buffered I/O
- small writes must wait for a preliminary read
- large reads & writes performed as a series of single-block operations
- tablespace block size must match the file system block size exactly

Direct I/O
- small writes: no need to re-write adjacent data
- large reads & writes passed down the stack without any fragmentation
- may use any tablespace block size without penalty
- set the filesystemio_options parameter
- set file system mount options
- configure using operating system commands

Depends on
- operating system platform
- file system type
Synchronous I/O
- processes wait for I/O completion and results
- a process can only use one disk at a time
- for a series of I/Os to the same disk
  - the hardware cannot service the requests in the optimal order
  - scheduling latencies
Asynchronous I/O
- can perform other tasks while waiting for I/O
- can use many disks at once
- for a batch of I/Os to the same disk
  - the hardware can service the requests in the optimal order
  - no scheduling latencies
- multiple threads perform synchronous I/O
- high CPU cost if intensively used
- only available on some platforms
- must use raw devices or a pseudo device driver product (eg: Veritas Quick I/O, Oracle Disk Manager, etc)
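The "optimal order" benefit of batching is easy to quantify with a toy seek-distance model (the block addresses are made up, and real devices also account for rotation). A device that receives requests one at a time must service them in arrival order; given the whole batch at once, it can sort them into a single elevator sweep.

```python
# Pending I/O requests by block address (illustrative values only).
requests = [8192, 128, 65536, 4096, 131072, 256, 16384]

def seek_distance(order, start=0):
    """Total head movement (in blocks) to service requests in order."""
    total, pos = 0, start
    for block in order:
        total += abs(block - pos)
        pos = block
    return total

arrival = seek_distance(requests)             # one request at a time
elevator = seek_distance(sorted(requests))    # whole batch, one sweep
print(f"arrival order: {arrival} blocks, elevator sweep: {elevator} blocks")
```

The sorted sweep moves the head monotonically once across the disk, while arrival order zig-zags; the gap widens as the batch grows.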
Striping Benefits
Concurrency
- hot spots are spread over multiple disks, which can service concurrent requests in parallel
- large reads & writes use multiple disks in parallel
- full utilization of hardware investment
- important for systems with relatively few large disks
Transfer rate
I/O spread
- most I/Os should be serviced by a single disk
  - caching ensures that disk hot spots are not small
  - 1 MB is a reasonable stripe element size
- large I/Os should be serviced by multiple disks
  - but very fine striping increases rotational latency and reduces concurrency
  - 128 KB is commonly optimal
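The element-size trade-off above can be sketched with a few lines of arithmetic (disk counts and sizes here are illustrative): the stripe maps a logical offset to a disk, so the element size decides whether a given I/O lands on one disk or is fragmented across all of them.

```python
def disks_touched(offset, size, element, ndisks):
    """Which disks service an I/O of `size` bytes at logical `offset`,
    for a stripe of `ndisks` disks with the given element size."""
    first = offset // element                 # first stripe element hit
    last = (offset + size - 1) // element     # last stripe element hit
    return sorted({e % ndisks for e in range(first, last + 1)})

MB, KB = 1 << 20, 1 << 10

# A 128 KB read with a 1 MB stripe element: one disk does the work,
# leaving the other seven free for concurrent requests.
print(disks_touched(offset=3 * MB, size=128 * KB, element=1 * MB, ndisks=8))

# The same read with a 16 KB element is fragmented over all 8 disks.
print(disks_touched(offset=3 * MB, size=128 * KB, element=16 * KB, ndisks=8))

# A large 8 MB scan with 1 MB elements still uses all disks in parallel.
print(disks_touched(offset=0, size=8 * MB, element=1 * MB, ndisks=8))
```

This is why a coarse element suits high-concurrency workloads (each request stays on one disk) while large sequential transfers still spread across the whole stripe.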
Striping Breadth
Comprehensive (SAME)
- all disks in one stripe
- ensures even utilization of all disks
- needs reconfiguration to increase capacity
- without a disk cache, log write performance may be unacceptable
- two or more stripe sets
- one set may be busy while another is idle
- can increase capacity by adding a new set
- can use a separate disk set to isolate log files from I/O interference
Striping How To
Stripe breadth
comprehensive (SAME)
otherwise
Stripe grain
- choose coarse for high concurrency applications
- choose fine for low concurrency applications
Data Protection
Mirroring
- only half the raw disk capacity is usable
- can read from either side of the mirror
- must write to both sides of the mirror

RAID-5
- parity data uses the capacity of one disk
- only one image from which to read
- must read and write both the data and parity
- a crash can leave mirrors inconsistent
- complete resilvering takes too long, so a dirty region log is normally needed

Dirty region log
- enumerates potentially inconsistent regions
- makes resilvering much faster
- but it is a major performance overhead
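A minimal model shows both halves of the trade-off (a sketch with simplified semantics, not any vendor's actual DRL format): every write pays an extra DRL update, but after a crash only the regions still marked dirty need copying, not the whole volume.

```python
class MirroredVolume:
    """Toy mirrored volume with a dirty region log (DRL).  Before a
    write touches either mirror side, its region is marked dirty; a
    region is cleared once both sides match, so after a crash only
    dirty regions need resilvering instead of the whole volume."""
    def __init__(self, nblocks, region_blocks=64):
        self.a = [0] * nblocks                      # mirror side A
        self.b = [0] * nblocks                      # mirror side B
        self.region_blocks = region_blocks
        self.drl = set()                            # potentially inconsistent regions

    def write(self, block, value, crash_between_sides=False):
        self.drl.add(block // self.region_blocks)   # the extra I/O every write pays
        self.a[block] = value
        if crash_between_sides:
            raise RuntimeError("crash")             # side B never written
        self.b[block] = value
        self.drl.discard(block // self.region_blocks)

    def resilver(self):
        """Recover after a crash: copy side A over side B, but only
        for the regions the DRL says might be inconsistent."""
        copied = 0
        for region in self.drl:
            start = region * self.region_blocks
            for blk in range(start, start + self.region_blocks):
                self.b[blk] = self.a[blk]
                copied += 1
        self.drl.clear()
        return copied

vol = MirroredVolume(nblocks=4096)
vol.write(100, value=7)                             # clean write: DRL entry cleared
try:
    vol.write(2000, value=9, crash_between_sides=True)
except RuntimeError:
    pass
print("blocks resilvered:", vol.resilver())         # one 64-block region, not 4096
assert vol.a == vol.b                               # mirrors consistent again
```

The `drl.add` on every write is the "major performance overhead" from the slide; skipping the DRL would make writes cheaper but force a full resilver after any crash.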
- disk capacity is cheap, I/O capacity is expensive
- avoid dirty region logging overheads
- re-establish mirroring quickly after a failure
Data growth
Workload growth
- data growth requires more disk capacity
- placing the new data on the new disks would introduce a hot spot
- monitor I/O patterns and densities
- move files to spread the load out evenly

Difficulties
- workload patterns may vary
- file sizes may differ, thus preventing swapping
- stripe sets may have different I/O characteristics
- choose a small, fixed datafile size
- use multiple such datafiles for each tablespace
- distribute these datafiles evenly over stripe sets
- when a stripe set is added, move datafiles for each tablespace pro-rata from the existing stripe sets into the new one
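The pro-rata move can be sketched in a few lines (set names, file names, and counts are hypothetical; in practice each move is an offline datafile copy). With equal-sized datafiles, balancing is just counting: after adding a set, every set should hold a near-equal share.

```python
def rebalance(placement, new_set):
    """Move datafiles pro-rata from the existing stripe sets onto a
    new, empty one, so every set ends up with a near-equal share.
    `placement` maps stripe set name -> list of datafile names."""
    placement[new_set] = []
    total = sum(len(files) for files in placement.values())
    base, extra = divmod(total, len(placement))
    moves = []
    for i, s in enumerate(sorted(placement)):
        target = base + (1 if i < extra else 0)   # this set's fair share
        while len(placement[s]) > target:         # give up the excess
            f = placement[s].pop()
            placement[new_set].append(f)
            moves.append((f, s, new_set))
    return moves

# One tablespace built from 12 small fixed-size datafiles on 3 stripe
# sets; a 4th set is added and each old set gives up one file.
placement = {
    "set1": ["f1", "f2", "f3", "f4"],
    "set2": ["f5", "f6", "f7", "f8"],
    "set3": ["f9", "f10", "f11", "f12"],
}
moves = rebalance(placement, "set4")
print(len(moves), "datafiles moved:", moves)
print({s: len(files) for s, files in placement.items()})
```

Because the datafiles are small and uniformly sized, only a fraction of the data moves and the tablespace stays evenly striped; with one huge datafile per tablespace, no such pro-rata move is possible.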
Disk Group
ASM Architecture
[Diagram build-up: ASM instances, Oracle DB instances, and disk groups; each Oracle DB instance works with an ASM instance, and disk groups are shared across instances]
ASM Mirroring
3 choices for disk group redundancy
- External: defers to hardware mirroring
- Normal: 2-way mirroring
- High: 3-way mirroring
ASM Mirroring
- mirror at extent level
- mix primary & mirror extents on each disk
ASM Mirroring
No hot spare disk required
- just spare capacity
- failed disk load spread among survivors
- maintains balanced I/O load
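The effect of extent-level mirroring without hot spares can be sketched as follows (a toy model, not ASM's actual placement algorithm; disk names and extent counts are made up). Because every disk holds a mix of primary and mirror extents, a failed disk's copies can be re-created from spare capacity spread across all survivors, keeping the load balanced.

```python
from collections import Counter

def place_extents(nextents, disks):
    """Round-robin: primary on disk i, mirror on disk i+1, so every
    disk carries a mix of primary and mirror extents (there is no
    dedicated mirror disk and no hot spare)."""
    n = len(disks)
    return {e: (disks[e % n], disks[(e + 1) % n]) for e in range(nextents)}

def fail_disk(placement, failed, survivors):
    """Re-create every copy that lived on the failed disk on the
    least-loaded survivor that does not already hold the extent's
    other copy, spreading the failed disk's load evenly."""
    load = Counter(d for pair in placement.values() for d in pair if d != failed)
    for e, (p, m) in placement.items():
        if failed in (p, m):
            keep = m if p == failed else p            # the surviving copy
            new = min((d for d in survivors if d != keep), key=lambda d: load[d])
            load[new] += 1
            placement[e] = (new, keep) if p == failed else (keep, new)
    return placement

disks = ["disk1", "disk2", "disk3", "disk4"]
pl = place_extents(16, disks)
fail_disk(pl, "disk4", ["disk1", "disk2", "disk3"])
load = Counter(d for pair in pl.values() for d in pair)
print(load)        # the failed disk's copies are spread over the survivors
```

No survivor absorbs the whole failed disk; each just needs enough spare capacity for its share, which is the point of the slide.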
Conclusion
- best practice is built into ASM
- ASM is easy
- ASM benefits
  - can have both fine and coarse grain striping
  - does not require dirty region logging
  - does not require hot spares, just spare capacity
  - when new disks are added, ASM does load balancing automatically, without downtime
ASM is Easy
You only need to answer two questions
using BIGFILE tablespaces, you need never name or refer to a datafile again
ASM Benefits
ASM will improve performance
- very few sites follow the current best practices
- no downtime needed for storage changes
- it automates a complex DBA task entirely
Questions & Answers
Next Steps.
Automatic Storage Management Demo in the Oracle DEMOgrounds
Reminder: please complete the OracleWorld online session survey. Thank you.