Agenda
RAC concepts
Planning for a RAC installation
Pre-installation steps
Installation of 10g R2 Clusterware
Installation of 10g R2 Software
Creation of RAC database
Configuring Services and TAF
Migration of single instance to RAC
Interconnect
Node
Disks
Database vs Instance
A RAC cluster consists of:
One or more instances
One database residing on shared storage
[Diagram: Node 1 and Node 2, each with a local disk and its own instance (Instance 1, Instance 2), joined by the interconnect and attached to shared storage holding the database]
Why RAC?
High Availability: survive node and instance failures
Scalability: add or remove nodes when needed
Pay as you grow: harness the power of multiple low-cost computers
Enable Grid Computing
DBAs have their own vested interests!
Network Requirements
Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect).
The public network adapter must support TCP/IP.
The private interconnect should preferably be a Gigabit Ethernet switch that supports UDP; it is used for Cache Fusion inter-node communication.
The host name and IP addresses associated with the public interface should be registered in DNS and in /etc/hosts.
IP Address Requirements
For each public network interface, an IP address and host name registered in the DNS
One unused virtual IP address and an associated host name registered in the DNS for each node to be used in the cluster
A private IP address and an optional host name for each private interface
The virtual IP addresses are the ones used in the network configuration files
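As an illustration, the /etc/hosts file on each node might contain entries like the following; the host names itlinuxbl53 and itlinuxbl54 are the cluster nodes used later in this document, while the IP addresses and the -vip/-priv aliases are purely hypothetical:

# Public interfaces (registered in DNS)
192.168.1.53    itlinuxbl53
192.168.1.54    itlinuxbl54
# Virtual IPs (unused addresses on the public subnet, registered in DNS)
192.168.1.63    itlinuxbl53-vip
192.168.1.64    itlinuxbl54-vip
# Private interconnect
10.0.0.53       itlinuxbl53-priv
10.0.0.54       itlinuxbl54-priv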
Virtual IP Addresses
VIPs are used to facilitate faster failover in the event of a node failure.
Each node has not only its own statically assigned IP address but also a virtual IP address assigned to it.
The listener on each node listens on the virtual IP, and client connections come in via this virtual IP as well.
Without the VIP, clients would have to wait for a long TCP/IP timeout before getting an error message or TCP reset from a node that has died.
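Once the cluster is up, the VIP (together with the other node applications) can be checked with srvctl; a minimal sketch, assuming the node names used in this document:

$ srvctl status nodeapps -n itlinuxbl53    # reports the status of the VIP, listener, GSD and ONS on this node
$ srvctl status nodeapps -n itlinuxbl54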
Setting up SSH user equivalence for the oracle user:
Run the same commands on each node to generate a key pair and append the public key to the authorized_keys file.
Copy the authorized_keys file from this node to the other nodes, so that finally all nodes have the same authorized_keys file.

On node ITLINUXBL54:
$ ssh-keygen -t dsa                                   # generate a DSA key pair (accept the defaults)
$ cat id_dsa.pub >> authorized_keys                   # append the public key to authorized_keys
$ scp authorized_keys itlinuxbl53:/opt/oracle/.ssh    # copy the file to the other node
$ ssh itlinuxbl54 hostname                            # verify passwordless ssh to each node
$ ssh itlinuxbl53 hostname
ASM Architecture
[Diagram: a RAC database served by clustered servers, with ASM disk groups providing the shared storage]
[root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL2 /dev/sddlmac1
Marking disk "/dev/sddlmac1" as an ASM disk:                [  OK  ]
[root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL3 /dev/sddlmaf1
Marking disk "/dev/sddlmaf1" as an ASM disk:                [  OK  ]
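On the other cluster node the disks typically only need to be scanned, not created again; a short sketch of the relevant oracleasm commands:

[root@itlinuxbl54 init.d]# ./oracleasm scandisks     # scan the system for disks marked for ASM
[root@itlinuxbl54 init.d]# ./oracleasm listdisks     # should now list VOL2 and VOL3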
Prerequisites Validation
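For reference, the 10g R2 media includes the Cluster Verification Utility, which can be used for this validation; a minimal sketch, assuming it is run from the Clusterware staging directory with the node names used in this document:

$ ./runcluvfy.sh stage -pre crsinst -n itlinuxbl53,itlinuxbl54 -verbose    # checks before the Clusterware install
$ ./runcluvfy.sh stage -pre dbinst -n itlinuxbl53,itlinuxbl54 -verbose     # checks before the database software install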
Voting Disk
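Once the Clusterware is installed, the voting disk configuration can be checked from any node; a small sketch:

$ crsctl query css votedisk     # lists the voting disk(s) configured for the cluster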
Configuration Assistants
Configuring ASM
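As an illustration of what is configured here, a disk group can also be created manually from the ASM instance; the disk group name DATA is hypothetical, while VOL2 and VOL3 are the ASMLib volumes created earlier:

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysdba
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:VOL2', 'ORCL:VOL3';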
Services
Logically group consumers who share common attributes, such as workload, a database schema, or some common application functionality
Manage client load balancing
Manage server-side load balancing
Connect-time failover with TAF
Controlled by tnsnames.ora parameters: FAILOVER=ON, FAILOVER_MODE, METHOD (see the tnsnames.ora sketch below)
Managed via DBCA or SRVCTL commands
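A minimal tnsnames.ora sketch for such a service, assuming the service name racdb_blade53 used below and hypothetical VIP host names itlinuxbl53-vip and itlinuxbl54-vip:

RACDB_BLADE53 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = itlinuxbl53-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = itlinuxbl54-vip)(PORT = 1521))
      (LOAD_BALANCE = yes)
      (FAILOVER = on)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb_blade53)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )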
Configuring Services
racdb2:/var/opt/oracle>srvctl config service -d racdb
racdb_blade53 PREF: racdb1 AVAIL: racdb2
racdb_blade54 PREF: racdb2 AVAIL: racdb1

racdb2:/var/opt/oracle>srvctl status service -d racdb -s racdb_blade53
Service racdb_blade53 is running on instance(s) racdb1
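For reference, services like the ones shown above can also be created and started from the command line instead of DBCA; a minimal sketch using the same database, service and instance names (the -P BASIC TAF policy option is an assumption):

srvctl add service -d racdb -s racdb_blade53 -r racdb1 -a racdb2 -P BASIC    # preferred instance racdb1, available racdb2
srvctl start service -d racdb -s racdb_blade53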
SQL> SELECT SEQUENCE#, THREAD#, STATUS FROM V$LOG;

 SEQUENCE#    THREAD# STATUS
---------- ---------- ----------------
         9          1 INACTIVE
        10          1 CURRENT
         4          2 ACTIVE
         5          2 CURRENT
Since the same control files are used by both instances RACDB1 and RACDB2 (the same database, RACDB), the output is the same on both nodes.
run {
  set until logseq 10 thread 1;    # recover up to (but not including) log sequence 10 of thread 1
  set autolocate on;
  allocate channel c1 type disk;
  restore database;
  recover database;
  release channel c1;
}
Create the init.ora for the instance gavin1; it only needs to contain one line, with the SPFILE parameter pointing to the spfile we created on the OCFS file system.
$ cat initgavin1.ora
SPFILE=/ocfs/oradata/gavin/spfilegavin.ora
Note: Do the same on the other node for the instance gavin2
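Not shown in the extract above, but part of the same conversion: the shared spfile also needs the cluster and per-instance parameters before the second instance can start. A minimal sketch, assuming instance names gavin1/gavin2 and a hypothetical undo tablespace UNDOTBS2 for the second instance:

SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET instance_number=1 SCOPE=SPFILE SID='gavin1';
SQL> ALTER SYSTEM SET instance_number=2 SCOPE=SPFILE SID='gavin2';
SQL> ALTER SYSTEM SET thread=1 SCOPE=SPFILE SID='gavin1';
SQL> ALTER SYSTEM SET thread=2 SCOPE=SPFILE SID='gavin2';
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='gavin2';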
Questions & Answers
Contact me: Email: gavin.soorma@emirates.com Phone: + 971507843900