TURKCELL DBA
Ferhat Şengönül http://ferhatsengonul.wordpress.com http://twitter.com/ferhatsengonul
May 2011
Who am I?
11 years in IT in the finance sector. Has worked with (nearly) all databases, from hierarchical to relational. Found peace in Exadata. Likes APEX (as an amateur). 1 year at Turkcell. http://ferhatsengonul.wordpress.com http://twitter.com/ferhatsengonul
Headlines
Turkcell in numbers
BI domain in numbers
First project: migration to V2 (8 nodes), total uncompressed size 250 TB
Second project: migration to 2 x X2-2 (16 nodes), consolidation of 4 databases (total uncompressed size 600 TB), geographical (continental) change of data center
Future plans
Turkcell in Numbers
Leading telco in Turkey: 34 million subscribers in Turkey as of Feb 28, 2011. Third biggest GSM operator in Europe in terms of subscribers. Turkcell started its operations in February 1994, operates in 8 other countries, and has over 60 million subscribers in total. First and only Turkish company listed on the NYSE. Over 500 Oracle databases, 150+ in production. The DB Machine hosts our biggest database, from the DWH domain.
Turkcell's BI Environment
[Diagram: source DBs feed ETL (Ab Initio, Oracle ODI) into Exadata and other DWH DBs, with reporting via MicroStrategy (MSTR).]
Amount Of Data
3 billion CDRs per day. 600-1000 GB of raw data extracted from 20+ source databases. 5 TB of data processed on the file system; 2-3 TB loaded into databases, all into Exadata. Approximately 600 TB of customer data stored in multiple databases; 600 TB (60 TB compressed) on Exadata.
Reporting Environment
EMC DMX-4 70 TB
HITACHI USP-V 50 TB
ORACLE Exadata V2
                                        OLD SYSTEM                      NEW SYSTEM
Server model                            Sun M9000                       Oracle Exadata V2
CPU type                                SPARC64 VII 2.52 GHz            Xeon E5540 2.53 GHz
Number of CPU threads                   176                             128
Total main memory                       512 GB                          576 GB
Total storage capacity                  120 TB                          30 TB
Storage connection technology           Fibre Channel (32 x 4 Gbit/s)   InfiniBand (8 x 40 Gbit/s)
Storage maximum IO throughput           5 GB/s                          21 GB/s
Total power (server + storage)          57 KVA                          20 KVA
Total form factor (server + storage)    11 racks                        1 rack
Approximate data backup duration        44 hours                        14 hours
Backup tape cartridges per backup       159                             57
Exadata Migration

[Chart: weekly histograms of report run times, in buckets from 0-5 min up to 4+ hours, for the weeks of July 05 through August 15. The weekly average report time dropped from roughly 25-27 minutes before the migration to about 3 minutes after it.]
Performance Needs
Over 50K reports run every month on this DB. The average report run time was reduced from 27 minutes to 3 minutes. Reports completing in under 5 minutes rose from 45% to 90%. Reports running more than 4 hours dropped from 87 to 1.
Project Challenges
Will we fit into 30 TB? (~100 TB compressed on 10g) How do we move that much data in 2 days?
Of the 100 TB of 10g-compressed data, how much can be moved before and after the migration window? How much data must be moved during the window itself? What kind of network infrastructure is needed to support such a transfer rate?
Migration Facts
Insert/append over DB Links
The platform and version change forced us to use insert over database links; none of the other methods, such as transportable tablespaces (TTS) or ASM rebalance, was applicable.
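In practice this approach comes down to a parallel direct-path insert pulling rows through a database link. A minimal sketch, assuming a db link named OLD_DWH to the source system and a placeholder table name (neither is from the project):

```sql
-- Direct-path (APPEND) insert over a database link.
-- OLD_DWH and CDR_FACT are illustrative names, not the real ones.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 16) */ INTO cdr_fact t
SELECT *
FROM   cdr_fact@old_dwh;

-- Data loaded via direct path is visible only after commit.
COMMIT;
```

Direct-path inserts write above the high-water mark and skip conventional undo generation for the table data, which is what makes this viable for bulk migration volumes.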
Migration Facts
Parallel runs continued for a few weeks until we felt completely comfortable on Exadata.
Stability of the system under real load was proven for various load patterns, and backup/restore tests were completed.
Compression in Action
Old system (10gR2) compression: ~2-3x, compressing ~250 TB of raw data down to 100 TB.

Exadata Hybrid Columnar Compression (QUERY HIGH) ratios by sort order, for two sample tables:

               SORT_A   SORT_B   SORT_A_B
Q_HIGH         12.18    15.37    11.64
Q_HIGH         11.29     8.95    11.80
http://ferhatsengonul.wordpress.com/2010/08/09/getting-the-most-from-hybrid-columnar-compression/
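Sorting the data before loading is what drives the ratio differences above: HCC compresses better when similar values are clustered. A sketch of how such a comparison can be run, with hypothetical table and column names:

```sql
-- Create an HCC-compressed copy, sorted on a candidate column
-- (CDR and MSISDN are placeholder names for illustration).
CREATE TABLE cdr_q_high
  COMPRESS FOR QUERY HIGH
  PARALLEL 16
AS SELECT * FROM cdr ORDER BY msisdn;

-- Compare segment sizes to compute the compression ratio.
SELECT segment_name,
       ROUND(bytes / 1024 / 1024 / 1024, 1) AS size_gb
FROM   user_segments
WHERE  segment_name IN ('CDR', 'CDR_Q_HIGH');
```

Repeating the CTAS with different ORDER BY columns (as in the SORT_A / SORT_B / SORT_A_B experiment above) shows which sort order yields the best ratio for a given table.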
Performance Gains
[Table: report run times, Old System vs Exadata, with speedup factors. Recoverable entries: CRC Control Report improved from 0:15:48.73 to 0:05:06.07; the longest report went from 8:02:10.59 to 1:51:33.20 (x4.3); other measured speedups include x163, x4.5, x45.7, x487, x66.3 and x44.9; "Connectcard Aktivasyon" (ConnectCard Activation) is among the listed reports.]
Over 50K reports run every month. The performance improvement is up to 400x for some reports, and 10x on average.
User Feedback
"We had heard before that infrastructure changes would give us performance gains, but this time we were surprised: it was well beyond our expectations. Now we can take faster actions in this competitive environment." (Director, Marketing Department)
"XDDS is fantastic, in a single word. None of the reports takes more than 10 minutes. What used to take 3-4 hours now completes in 3 minutes. It sounds unreal, but it is real." (Power end-user, Finance Department)
"It was a never-ending race to match the business's performance and capacity needs. With the Database Machine V2, we have outperformed our users' expectations and we are prepared for future growth." (Veteran system admin)
"You started to scare me, MSTR." (End-user from the Marketing Department, in a Facebook status update)
Second Project
A monthly 1 TB increase in size meant we needed a second rack. Management was satisfied and bought two racks instead of one. Migration of the data center from Europe to Asia. Consolidation on Exadata.
[Diagram: operational sources feed, via Extract/Feed, into an Exadata X2-2 hosting ODS, DDS, CDRDM, RDS and others, with reporting through SMARTCUBE (MicroStrategy).]

A single DWH environment without a single database:
RDS: 5 TB of data duplication
DDS: 35 TB (+ 18 months of history)
ODS: 5 TB
ZDDS: 5 TB (high-availability solution for the 50 TB DWH)
CDRDM: 15 TB
Other DBs: 25 TB
Cold data: 100 TB
Performance increased 3x even though the ETL server and the database are on different continents, and it runs on only one database node (a server pool containing a single node).
The existing XDDS was migrated in April; the ETL servers and reporting servers were migrated simultaneously.
Migration method
From Sun Solaris to Exadata, again using the insert/append over database link method. We still love our in-house code.
[Table: per-database data volumes; the original row labels were not recoverable. "ARA TOPLAM" is Turkish for "subtotal".]

         XCDRDM   XRDS    ZDDS    NODS    Subtotal   Total
         60 TB    15 TB   15 TB   6 TB    96 TB      131 TB
         13 TB    3 TB    3 TB    2 TB    21 TB      56 TB
         15 TB    5 TB    5 TB    5 TB    30 TB      70 TB
         20 TB    7 TB    7 TB    10 TB   44 TB      94 TB
120 TB (net space) of disk was given back with the first project; another 100 TB (net space) with the second.
GAINS on reporting
[Chart: average report run times of 6.42 min, 7.10 min and 3.28 min.]
Even though we are using only 8 nodes of the X2-2 cluster, we still had a performance increase.
Server Pools
Server pools can still be used to divide the nodes between databases. We do not want to run two different instances on the same node, but we do want to be able to increase or decrease the number of nodes assigned to each system, and we still want the chance to give all 16 nodes to a single database.
Ferhat Şengönül
http://ferhatsengonul.wordpress.com http://twitter.com/ferhatsengonul
www.turkcell.com.tr