
Oracle RAC 12c (12.1.0.2)
Operational Best Practices
Markus Michalewicz
Director of Product Management
Oracle Real Application Clusters (RAC)
October 1st, 2014
@OracleRACpm
http://www.linkedin.com/in/markusmichalewicz
http://www.slideshare.net/MarkusMichalewicz

Safe Harbor Statement


The following is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied upon
in making purchasing decisions. The development, release, and timing of any features or
functionality described for Oracle's products remains at the sole discretion of Oracle.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

Operational Best Practices


http://www.slideshare.net/MarkusMichalewicz/oracle-rac-12c-collaborate-best-practices-ioug-2014-version

(Overview matrix) Use Case columns: Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments. Area rows: Storage, OS, Network, Cluster, DB. Lifecycle: Installation and Update.
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
2. Operational Best Practices for
   - Generic Clusters
   - Extended Cluster
   - Dedicated Environments
   - Consolidated Environments
3. Appendices A - D
Copyright 2014, Oracle and/or its affiliates. All rights reserved.


New in 12.1.0.2: GIMR – No Choice Anymore

12.1.0.1: the Grid Infrastructure Management Repository (GIMR) was an install-time choice.
12.1.0.2: the GIMR is always installed.

- Single Instance Oracle Database 12c Container Database with one PDB
- The resource is called ora.mgmtdb (a quick status check follows below)
- Future consolidation planned
- Installed on one of the (HUB) nodes
- Managed as a failover database
- Stored in the first ASM disk group created
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

Recommendation: Change in Disk Group Creation

12.1.0.1 disk group creation: start with the GRID disk group.
12.1.0.2 disk group creation: start with the disk group hosting the GIMR.

- The GIMR typically does not require redundancy for its disk group; hence, do not share it with the GRID disk group.
- Clusterware files (Voting Files and OCR) are easy to relocate; see the example in Appendix A (a quick location check follows below).

More information:
How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
Managing the Cluster Health Monitor Repository (Doc ID 1921105.1)
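Before moving anything, it helps to confirm where the Clusterware files and the ASM SPFILE currently reside. A small sketch using the commands shown in Appendix A; asmcmd spget is an additional check for the SPFILE location:

[GRID]> crsctl query css votedisk    # current voting file location(s)
[GRID]> ocrcheck                     # current OCR location(s) and integrity
[GRID]> asmcmd spget                 # current ASM SPFILE location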
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

ACFS News

ACFS is Free of Charge!
- All functionality for non-database files; no exception.
- For database files, all ACFS functionality; the following exceptions apply:
  - Snapshots: require DB EE
  - Replication, Encryption, Security, and Auditing: not available for DB files; the respective DB functionality should be used instead (e.g. Advanced Security Option)

ACFS Support for Exadata Systems (Linux only)
- 12.1.0.2 supports the following database versions on ACFS on Exadata Database Machines:
  - Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
  - Oracle Database 11g (11.2.0.4)
  - Oracle Database 12c (12.1.0.1+)

Test & Dev Database Management made simple with gDBclone
- gDBClone is a simple sample script that leverages Oracle ACFS snapshot functionality to create space-efficient copies for the management of Test and Dev Oracle Databases.

More information: My Oracle Support (MOS) Note 1929629.1 Oracle ACFS Support on Oracle Exadata Database Machine (Linux only)

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

Simplified ACFS Licensing

Oracle ACFS Feature                           Oracle Database Files     Non-Oracle Database Files
ACFS features other than those listed below   FREE                      FREE
Snapshots                                     Oracle DB EE required     FREE
Encryption                                    Not Available             FREE
Security                                      Not Available             FREE
Replication                                   Not Available             FREE
Auditing                                      Not Available             FREE

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

10

ACFS and a Simple and Free of Charge Approach to Managing Test & Dev Oracle Database Environments

The gDBclone sample script takes databases from any source and duplicates them on the Test & Dev cluster using ACFS snapshots to create space-efficient copies.

gDBclone automatically converts databases from any type to any type; quickly test your application on a RAC test database using your single-instance production data.

Look for the gDBClone Database Clone/Snapshot Management Script and WP here:
http://www.oracle.com/technetwork/indexes/samplecode/index.html
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

11

New in 12.1.0.2: Recommendation to use Flex Cluster


12.1.0.1: Go with Standard Cluster.
12.1.0.2: Use Flex Cluster (includes Flex ASM by default).
One exception: if installing for an Extended Oracle RAC Cluster, use Standard Cluster + Flex ASM.
(A quick mode check follows below.)
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

12

Continue to use Leaf Nodes for Applications in 12.1.0.2


DBCA

Despite running Leaf Nodes

[GRID]> olsnodes -s -t
germany      Active   Unpinned
argentina    Active   Unpinned
brazil       Active   Unpinned
italy        Active   Unpinned
spain        Active   Unpinned

More Information in Appendix D


Copyright 2014, Oracle and/or its affiliates. All rights reserved.

13

New Network Flexibility in 12.1.0.2 – Recommendation

Install what's necessary.
Configure what's required (later).

More Information in Appendix B


Copyright 2014, Oracle and/or its affiliates. All rights reserved.

14

Installation Complete – Result

Cluster layout: HUB nodes germany, argentina, brazil (Oracle GI + Oracle RAC); Leaf nodes italy, spain (Oracle GI).

Server OS: OL 6.5 UEK (other kernels are supported).
HUBs: 4 GB+ memory recommended.
- One HUB at a time will host the GIMR database.
- Only HUBs will host (Flex) ASM instances.
- Leafs can have less memory, dependent on the use case.
- The installer enforces the HUB minimum memory requirement.

Databases:
1. rdwh, on all HUBs
2. cons, on argentina and brazil

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

15

Automatic Diagnostic Repository (ADR)

Oracle Grid Infrastructure now supports the Automatic Diagnostic Repository. ADR simplifies log analysis by
- centralizing most logs under a defined folder structure,
- maintaining a history of logs,
- providing its own command line tool to manage diagnostic information.

Directory layout: ADR_base/diag/{asm, rdbms, tnslsnr, clients, crs, (others)}

More Information in Appendix C


Copyright 2014, Oracle and/or its affiliates. All rights reserved.

16

Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
2. Operational Best Practices for
   - Generic Clusters
   - Extended Cluster
   - Dedicated Environments
   - Consolidated Environments
3. Appendices A - D
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

17

Operational Best Practices – Generic Clusters

Areas to cover per use case: Storage, OS, Network, Cluster, DB. Use cases: Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments. Generic Clusters are covered first.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

18

Generic Clusters – Storage

Generic Clusters / Storage: see Appendix A.
Step 1: Create GRID Disk Group – Generic Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
More Information in Appendix A

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

19

Generic Clusters – OS / Memory

Avoid memory pressure (swapping)!
- Use Memory Guard.
- Use Solid State Disks (SSDs) to host swap. More information in My Oracle Support (MOS) note 1671605.1 "Use Solid State Disks to host swap space in order to increase node availability".
- Use HugePages for the SGA (Linux); more information in MOS notes 361323.1 & 401749.1. A configuration sketch follows below.
- Avoid Transparent HugePages (Linux 6); see the alert in MOS note 1557478.1.
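A minimal HugePages / THP sketch for Linux, assuming roughly 20 GB of SGAs on the node; the page count is a placeholder, MOS note 401749.1 provides a script that computes the exact value for your SGAs:

# Check current HugePages allocation and page size (typically 2 MB)
grep Huge /proc/meminfo

# Reserve enough 2 MB pages for all SGAs on the node, e.g. ~20 GB -> 10240 pages (as root)
echo "vm.nr_hugepages = 10240" >> /etc/sysctl.conf
sysctl -p

# Disable Transparent HugePages (Linux 6, see MOS note 1557478.1),
# e.g. by adding transparent_hugepage=never to the kernel boot line and rebooting.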

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

20

Generic Clusters – OS / OraChk and TFA

OraChk (formerly RACcheck, a.k.a. ExaChk)
- RAC Configuration Audit Tool; details in MOS note 1268927.1
- Checks Oracle (Databases):
  - Standalone Database
  - Grid Infrastructure & Oracle RAC
  - Maximum Availability Architecture (MAA) Validation (if configured)
  - Oracle Hardware setup configuration

Trace File Analyzer (TFA)
- More information in MOS note 1513912.1

(Typical invocations of both tools follow below.)
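Both tools ship with Oracle Grid Infrastructure 12.1.0.2; a hedged sketch of typical invocations (options vary by version, the MOS notes above document the current syntax):

# OraChk: run interactively from its installation directory and review the generated HTML report
./orachk

# TFA: check the daemon and collect diagnostics from all nodes for the last hour
tfactl print status
tfactl diagcollect -since 1h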

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

21

Generic Clusters – OS Summary

Generic Clusters so far: Storage = Appendix A; OS = Memory Config + OraChk / TFA.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

22

Generic Clusters – Network

With a 1500 byte MTU, an 8K data block sent across the interconnect is fragmented on Send() and reassembled on Receive().

- Size the interconnect for aggregated throughput; define "normal".
- Use redundancy (HAIPs) for load balancing.
- Use Jumbo Frames wherever possible; ensure the entire infrastructure supports them (see the MTU check below).
- Use different subnets for the interconnect.

More Information in Appendix B
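A quick end-to-end Jumbo Frame check is to send a non-fragmentable packet of (almost) 9000 bytes across the interconnect; a sketch on Linux, where eth2 and the peer address 192.168.7.11 are taken from the example cluster in Appendix B:

# Configured MTU on the private interface
ip link show eth2 | grep mtu

# 8972 = 9000 - 20 byte IP header - 8 byte ICMP header; a failure indicates a device
# in the path that does not support Jumbo Frames
ping -M do -s 8972 -c 3 192.168.7.11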

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

23

Virtual Generic Clusters? Use Ping Targets with 12.1.0.2


Fact: In virtual environments, certain
network components are virtualized.

Consequence: Sometimes, network failures


are not reflected in the guest environment.
Reason: OS commands run in the guest fail to detect
the network failure as the virtual NIC remains up.

Result: corrective actions may not be performed.


Solution: Ping Targets


Copyright 2014, Oracle and/or its affiliates. All rights reserved.

24

(Virtual) Generic Clusters Use Ping Targets on Public


Ping Targets are new in Oracle RAC 12.1.0.2
[GRID]> su
Password:
[GRID]> srvctl modify network -k 1 -pingtarget "<UsefulTargetIP(s)>"
[GRID]> exit
exit
[GRID]> srvctl config network -k 1
Network 1 exists
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets: <UsefulTargetIP(s)>
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

Ping Targets use a probe to a given destination


(IP) in order to determine network availability.
Ping Targets are used in addition to local checks

Ping Targets are used on the public network only


Private networks already use constant heartbeating

Ping Targets should be chosen carefully:


Availability of the ping target is important
More than one target can be defined for redundancy

Ping target failures should be meaningful



Example: Pinging a central switch (probably needs to


be enabled) between clients and the database servers.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

25

Generic Clusters – Network Summary

Generic Clusters so far: Storage = Appendix A; OS = Memory Config + OraChk / TFA; Network = as discussed + Appendix B.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

26

Generic Clusters – Cluster

Generic Clusters: Storage = Appendix A; OS = Memory Config + OraChk / TFA; Network = as discussed + Appendix B; Cluster = Appendix D.

1. Install / maintain HUBs, add Leaf Nodes
2. Adding nodes to the cluster
3. Use Leaf nodes for non-DB use cases
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

27

Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
2. Operational Best Practices for
   - Generic Clusters
   - Extended Cluster
   - Dedicated Environments
   - Consolidated Environments
3. Appendices A - D
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

28

Extended Oracle RAC

From an Oracle perspective, an Extended RAC installation is in use as soon as data is mirrored (using Oracle ASM) between independent storage arrays. (Exadata Storage Cells are excluded from this definition.)
An ER (enhancement request) is open to make "Extended Oracle RAC" a distinguishable configuration.
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

29

Extended Cluster – Storage

Extended Cluster / Storage: see Appendix A (in addition to the Generic Cluster cells filled so far).
Step 1: Create GRID Disk Group – Extended Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
Step 4: srvctl modify asm -count ALL
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

30

Extended Cluster – OS

Extended Cluster / OS: as for Generic Clusters (Memory Config + OraChk / TFA).

More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF)
http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

31

Extended Cluster – Network

Define "normal": the goal in an Extended RAC setup is to hide the distance. Any latency increase might (!) impact application performance.

VLANs are fully supported for Oracle RAC; for more information, see:
http://www.oracle.com/technetwork/database/databasetechnologies/clusterware/overview/interconnect-vlan-06072012-1657506.pdf

Vertical subnet separation is not supported.

More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF)
http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

32

Extended Cluster – Network Summary

Extended Cluster so far: Storage = Appendix A; OS = as for Generic Clusters; Network = as discussed + Appendix B.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

33

Extended Cluster – Cluster Summary

Extended Cluster: Storage = Appendix A; OS = as for Generic Clusters; Network = as discussed + Appendix B; Cluster = as Generic (Appendix D).

The goal in an Extended RAC setup is to hide the distance.


Copyright 2014, Oracle and/or its affiliates. All rights reserved.

34

Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
2. Operational Best Practices for
   - Generic Clusters
   - Extended Cluster
   - Dedicated Environments
   - Consolidated Environments
3. Appendices A - D
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

35

Dedicated Environments – Only a few items to consider

The matrix so far: Generic Clusters and Extended Cluster are covered (Storage = Appendix A; OS = Memory Config + OraChk / TFA; Network = as discussed + Appendix B; Cluster = Appendix D / as Generic). For Dedicated (OLTP / DWH) environments, only a few additional items (Network, DB) follow.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

36

Dedicated Environments Network


[GRID]> srvctl config scan -all
SCAN name: cupscan.cupgnsdom.localdomain, Network: 1
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.1.1.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN name: cupscan2, Network: 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.2.2.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

More information:
- Valid Node Checking For Registration (VNCR) (Doc ID 1600630.1)
- How to Enable VNCR on RAC Database to Register only Local Instances (Doc ID 1914282.1) (a listener.ora sketch follows below)
- Oracle Real Application Clusters – Overview of SCAN:
  http://www.oracle.com/technetwork/database/options/clustering/overview/scan-129069.pdf

(Figure: one SCAN per network: SCAN on Network 1, SCAN on Network 2.)
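A hedged sketch of what VNCR can look like in listener.ora; the parameter names are the documented ones, the listener name and node list are placeholders for this example cluster (see the MOS notes above for SCAN-specific details):

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET
REGISTRATION_INVITED_NODES_LISTENER = (germany, argentina, brazil)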

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

37

Dedicated Environments – Network Summary

Dedicated (OLTP / DWH) so far: Network = Appendix B + as discussed.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

38

Dedicated Environments – Database (DB)

Problem: Patching and Upgrades – Solution: Rapid Home Provisioning
Problem: Memory consumption – Solution: Memory Caps
Problem: Number of Connections – Solution: various, mostly using connection pools

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

39

Dedicated Environments – Database (DB)

[DB]> sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 18 18:57:30 2014

SQL> show parameter pga
NAME                           TYPE        VALUE
------------------------------ ----------- -------
pga_aggregate_limit            big integer 2G
pga_aggregate_target           big integer 211M

SQL> show parameter sga
NAME                           TYPE        VALUE
------------------------------ ----------- -------
lock_sga                       boolean     FALSE
pre_page_sga                   boolean     TRUE
sga_max_size                   big integer 636M
sga_target                     big integer 636M
unified_audit_sga_queue_size   integer     1048576

New in Oracle Database 12c: SGA and PGA aggregated targets can be limited; see the documentation for PGA_AGGREGATE_LIMIT.

1. Do not handle connection storms, prevent them.
2. Limit the number of connections to the database.
3. Use Connection Pools where possible:
   - Oracle Universal Connection Pool (UCP): http://docs.oracle.com/database/121/JJUCP/rac.htm#JJUCP8197
   - If the number of active connections is considerably lower than the number of open connections, consider using Database Resident Connection Pooling: docs.oracle.com/database/121/JJDBC/drcp.htm#JJDBC29023
4. Ensure applications close connections.
5. If you cannot prevent the storm, slow it down.
6. Use listener parameters to mitigate the negative side effects of a connection storm. Most of these parameters can also be used with SCAN. Services can be assigned to one subnet at a time: you control the subnet, you control the service.

A small sketch of two of these knobs follows below.
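A short sketch of two of the knobs listed above, run as a DBA; the values are placeholders, not recommendations:

-- Cap instance-wide PGA usage (new in Oracle Database 12c)
SQL> ALTER SYSTEM SET pga_aggregate_limit = 4G SCOPE=BOTH SID='*';

-- Start Database Resident Connection Pooling (DRCP) for applications with many short-lived connections
SQL> EXECUTE DBMS_CONNECTION_POOL.START_POOL();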

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

40

Dedicated Environments – Database Summary

Dedicated (OLTP / DWH): Network = Appendix B + as discussed; DB = as discussed.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

41

Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
2. Operational Best Practices for
   - Generic Clusters
   - Extended Cluster
   - Dedicated Environments
   - Consolidated Environments
3. Appendices A - D
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

42

Consolidated Environments – No VMs – 2 Main Choices

Database Consolidation:
- Multiple database instances running on a server
- Need to manage memory across instances
- Use Instance Caging and QoS (in a RAC cluster)

Use Oracle Multitenant:
- A limited number of Container DB instances to manage
- Memory allocation on the server is simplified
- Instance Caging may not be needed (QoS still beneficial)

(An Instance Caging sketch follows below.)
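Instance Caging itself is only two parameters per instance; a minimal sketch, assuming a 16-CPU server shared by two caged instances (values are placeholders):

-- In each instance to be caged
SQL> ALTER SYSTEM SET cpu_count = 8 SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH SID='*';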

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

43

Consolidated Environments – Make them Dedicated

Use Oracle Multitenant: it can be operated as a Dedicated Environment, at least from the cluster perspective, if only one Container Database instance per server is used.

More information:
http://www.oracle.com/technetwork/database/focusareas/database-cloud/database-cons-best-practices-1561461.pdf
http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-consolidation-1928888.pdf

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

44

Consolidated Environments – Network Summary

Consolidated Environments: Network = as Dedicated + as discussed.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

45

Consolidated Environments – Database (DB) Summary

Consolidated Environments: Network = as Dedicated + as discussed; DB = as for Dedicated + as discussed.

Specifically for Oracle Multitenant on Oracle RAC, see:
http://www.slideshare.net/MarkusMichalewicz/oraclemultitenant-meets-oracle-rac-ioug-2014-version
Copyright 2014, Oracle and/or its affiliates. All rights reserved.

46

Appendix A
Creating GRID disk group to place the Oracle Clusterware files and the ASM files

Copyright 2014, Oracle and/or its affiliates. All rights reserved. Oracle Confidential Internal/Restricted/Highly Restricted

47

Create GRID Disk Group – Generic Cluster

Use quorum whenever possible.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

48

Create GRID Disk Group – Extended Cluster

- Use logical names illustrating the disk destination.
- Use a quorum for ALL (not only GRID) disk groups used in an Extended Cluster.
- Use a Voting Disk NFS destination (for the quorum).

More information:
http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf

(A disk group creation sketch follows below.)
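A hedged SQL*Plus sketch of such a disk group definition; the disk paths, failgroup names, and the NFS-hosted quorum disk are placeholders that follow the naming advice above:

SQL> CREATE DISKGROUP GRID NORMAL REDUNDANCY
       FAILGROUP sitea DISK '/dev/mapper/sitea_disk1'
       FAILGROUP siteb DISK '/dev/mapper/siteb_disk1'
       QUORUM FAILGROUP quorum DISK '/voting_nfs/vote_disk1'
       ATTRIBUTE 'compatible.asm' = '12.1';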

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

49

Move Clusterware Files

Replace Voting Disk Location
[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                  File Name    Disk group
--  ------   --------------------------------   ----------   ----------
 1. ONLINE   8bec21793ee84fd3bfc6831746bf60b4   (/dev/sde)   [GIMR]
Located 1 voting disk(s).

[GRID]> crsctl replace votedisk +GRID
Successful addition of voting disk 7a205a2588d44f1dbffb10fc91ecd334.
Successful addition of voting disk 8c05b220cfcc4f6fbf5752b6763a18ac.
Successful addition of voting disk 223006a9c28e4fd5bf3b58a465fcb66a.
Successful deletion of voting disk 8bec21793ee84fd3bfc6831746bf60b4.
Successfully replaced voting disk group with +GRID.
CRS-4266: Voting file(s) successfully replaced

[GRID]> crsctl query css votedisk
##  STATE    File Universal Id                  File Name    Disk group
--  ------   --------------------------------   ----------   ----------
 1. ONLINE   7a205a2588d44f1dbffb10fc91ecd334   (/dev/sdd)   [GRID]
 2. ONLINE   8c05b220cfcc4f6fbf5752b6763a18ac   (/dev/sdb)   [GRID]
 3. ONLINE   223006a9c28e4fd5bf3b58a465fcb66a   (/dev/sdc)   [GRID]
Located 3 voting disk(s).

Add OCR Location
[GRID]> whoami
root
[GRID]> ocrconfig -add +GRID
[GRID]> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       2984
         Available space (kbytes) :     406584
         ID                       :  759001629
         Device/File Name         :      +GIMR
                                    Device/File integrity check succeeded
         Device/File Name         :      +GRID
                                    Device/File integrity check succeeded
                                    Device/File not configured
         ...
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

Use ocrconfig -delete +GIMR if you want to replace and maintain a single OCR location.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

50

Move ASM SPFILE – See also MOS note 1638177.1

The default ASM SPFILE location is in the first disk group created (here: GIMR).

[GRID]> srvctl status asm
ASM is running on argentina,brazil,germany

[GRID]> export ORACLE_SID=+ASM1
[GRID]> sqlplus / as sysasm

SQL> show parameter spfile
NAME     TYPE     VALUE
-------- -------- ----------------------------------------------------------
spfile   string   +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

# Change location
SQL> create pfile='/tmp/ASM.pfile' from spfile;
File created.
SQL> create spfile='+GRID' from pfile='/tmp/ASM.pfile';
File created.

# NOTE: show parameter spfile still reports the old location until the instance is restarted:
SQL> show parameter spfile
NAME     TYPE     VALUE
-------- -------- ----------------------------------------------------------
spfile   string   +GIMR/cup-cluster/ASMPARAMETERFILE/registry.253.857666347

Use gpnptool get and filter for ASMPARAMETERFILE to see the updated ASM SPFILE location in the GPnP profile prior to restarting.

Perform a rolling ASM instance restart facilitating Flex ASM:
[GRID]> srvctl stop asm -n germany -f
[GRID]> srvctl status asm -n germany
ASM is not running on germany
[GRID]> srvctl start asm -n germany
[GRID]> srvctl status asm -n germany
ASM is running on germany

[GRID]> crsctl stat res ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on argentina

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

Perform the restart rolling through the cluster; 12c database instances remain running!

51

Move ASM Password File


Default ASM shared password file location
is the same as for the SPFILE (here +GIMR)

Path-checking while moving the file


(online operation)

[GRID]> srvctl config ASM


ASM home: <CRS home>
Password file: +GIMR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[GRID]> srvctl modify asm -pwfile +GRID/orapwASM


[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: +GRID/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

[GRID]> srvctl modify asm -pwfile GRID


[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: GRID
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

Use the correct


ASM path syntax!

[GRID]> srvctl modify asm -pwfile +GRID


PRKO-3270 : The specified password file +GRID does not conform to an ASM path syntax

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

52

Appendix B
Creating public and private (DHCP-based) networks including SCAN and SCAN Listeners

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

53

Add Public Network DHCP


Step 1: Add network

Result
[GRID]> srvctl config network -k 2

[GRID]> oifcfg iflist
eth0  10.1.1.0
eth1  10.2.2.0
eth2  192.168.0.0
eth2  169.254.0.0

[GRID]> oifcfg setif -global "*"/10.2.2.0:public


[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> su
Password:

[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype dhcp


[GRID]> exit
exit

Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

[GRID]> crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State    Server      State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.net2.network
               OFFLINE OFFLINE  argentina   STABLE
               OFFLINE OFFLINE  brazil      STABLE
               OFFLINE OFFLINE  germany     STABLE

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

54

Add Public Network DHCP


Step 2: Add SCAN / SCAN_LISTENER
to the new network (as required)

Result
[GRID]> srvctl config scan -k 2

[GRID]> su
Password:

[GRID]> srvctl update gns -advertise MyScan -address 10.2.2.20


# Need to have a SCAN name. DHCP network requires dynamic VIP resolution via GNS
[GRID]> srvctl modify gns -verify MyScan
The name "MyScan" is advertised through GNS.

SCAN name: MyScan.cupgnsdom.localdomain, Network: 2


Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[GRID]> srvctl config scan_listener -k 2


[GRID]> srvctl add scan -k 2
PRKO-2082 : Missing mandatory option scanname

[GRID]> su
Password:

[GRID]> srvctl add scan -k 2 -scanname MyScan


[GRID]> exit
[GRID]> srvctl add scan_listener -k 2

SCAN Listener LISTENER_SCAN1_NET2 exists. Port: TCP:1521


Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

55

Add Private Network DHCP


oifcfg commands
[GRID]> oifcfg iflist
eth0  10.1.1.0
eth1  10.2.2.0
eth2  192.168.0.0
eth2  169.254.0.0
eth3  172.149.0.0

Result (ifconfig -a on HUB)


BEFORE
eth3

[GRID]> oifcfg getif


eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> oifcfg setif -global "*"/172.149.0.0:cluster_interconnect,asm


[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
* 172.149.0.0 global cluster_interconnect,asm
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE


inet addr:172.149.2.7 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:52 errors:0 dropped:0 overruns:0 frame:0
TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20974 (20.4 KiB) TX bytes:4230 (4.1 KiB)

AFTER
eth3

Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE


inet addr:172.149.2.7 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1161 errors:0 dropped:0 overruns:0 frame:0
TX packets:864 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:720040 (703.1 KiB) TX bytes:500289 (488.5 KiB)

HAIPs will only be used for load balancing once at least the DB / ASM instances, if not the node, are restarted. They are considered for failover immediately.

eth3:1 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE


inet addr:169.254.245.67 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

56

Side note: Leaf Nodes don't host HAIPs!


ifconfig -a on HUB excerpt
eth2

Link encap:Ethernet HWaddr 08:00:27:AD:DC:FD


inet addr:192.168.7.11 Bcast:192.168.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fead:dcfd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9303 errors:0 dropped:0 overruns:0 frame:0
TX packets:6112 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8344479 (7.9 MiB) TX bytes:2400797 (2.2 MiB)

eth2:1 Link encap:Ethernet HWaddr 08:00:27:AD:DC:FD


inet addr:169.254.190.250 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth3

ifconfig -a on Leaf excerpt

Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE


inet addr:172.149.2.5 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4729 errors:0 dropped:0 overruns:0 frame:0
TX packets:5195 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1555796 (1.4 MiB) TX bytes:2128607 (2.0 MiB)

eth3:1 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE


inet addr:169.254.6.142 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

HAIPs on the interconnect


are only used by ASM / DB
instances. Leaf nodes do
not host those, hence, they
do not host HAIPs. CSSD
(the node management
daemon) uses a different
redundancy approach.

eth2

Link encap:Ethernet HWaddr 08:00:27:CC:98:C3


inet addr:192.168.7.15 Bcast:192.168.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fecc:98c3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7218 errors:0 dropped:0 overruns:0 frame:0
TX packets:11354 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2644101 (2.5 MiB) TX bytes:13979129 (13.3 MiB)

eth3

Link encap:Ethernet HWaddr 08:00:27:06:D5:93


inet addr:172.149.2.6 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe06:d593/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6074 errors:0 dropped:0 overruns:0 frame:0
TX packets:5591 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2262521 (2.1 MiB) TX bytes:1680094 (1.6 MiB)

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

57

Add Public Network STATIC


Step 1: Add network

[GRID]> oifcfg iflist
eth0  10.1.1.0
eth1  10.2.2.0
eth2  192.168.0.0
eth2  169.254.128.0
eth3  172.149.0.0
eth3  169.254.0.0

Result
[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:

#Assuming you have NO global public interface defined on subnet 10.2.2.0

[GRID]> oifcfg setif -global "*"/10.2.2.0:public


[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 172.149.0.0 global cluster_interconnect,asm

* 10.2.2.0 global public


PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.

[GRID]> su
Password:

[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype STATIC

[GRID]> crsctl stat res -t


--------------------------------------------------------------------------------
Name           Target  State    Server      State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.net2.network
               OFFLINE OFFLINE  argentina   STABLE
               OFFLINE OFFLINE  brazil      STABLE
               OFFLINE OFFLINE  germany     STABLE

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

58

Add Public Network STATIC


Step 2: Add VIPs
[GRID]> srvctl add vip -node germany -address germany-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node argentina -address argentina-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node brazil -address brazil-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl config vip -n germany
VIP exists: network number 1, hosting node germany
VIP Name: germany-vip
VIP IPv4 Address: 10.1.1.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 2, hosting node germany
VIP Name: germany-vip2
VIP IPv4 Address: 10.2.2.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:

[GRID]> srvctl start vip -n germany -k 2


[GRID]> srvctl start vip -n argentina -k 2
[GRID]> srvctl start vip -n brazil -k 2

Result

[GRID]> srvctl status vip -n germany


VIP germany-vip is enabled
VIP germany-vip is running on node: germany
VIP germany-vip2 is enabled
VIP germany-vip2 is running on node: germany
[GRID]> srvctl status vip -n argentina
VIP argentina-vip is enabled
VIP argentina-vip is running on node: argentina
VIP argentina-vip2 is enabled
VIP argentina-vip2 is running on node: argentina
[GRID]> srvctl status vip -n brazil
VIP brazil-vip is enabled
VIP brazil-vip is running on node: brazil
VIP brazil-vip2 is enabled
VIP brazil-vip2 is running on node: brazil

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

59

Add Public Network STATIC


Step 3: Add SCAN / SCAN_LISTENER
to the new network (as required)

Result

#as root
[GRID]> srvctl add scan -scanname cupscan2 -k 2
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2 -endpoints 1522
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is not running

[GRID]> srvctl status scan_listener -k 2


SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is running on node brazil
[GRID]> srvctl status scan -k 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node brazil

[GRID]> srvctl start scan_listener -k 2

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

60

Appendix C
Automatic Diagnostic Repository (ADR) support for Oracle Grid Infrastructure

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

61

Automatic Diagnostic Repository (ADR) Convenience


The ADR is a file-based repository for
diagnostic data such as traces, dumps, the
alert log, health monitor reports, and more.

ADR_base

diag

asm

rdbms

tnslsnr

clients

ADR helps preventing, detecting,


diagnosing, and resolving problems.

crs

(others)

ADR comes with its own command line tool


(adrci) to get easy access to and manage
diagnostic information for Oracle GI + DB.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

62

Some Management Examples


adrci
[GRID]> adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Sep 18 11:35:31 2014
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/grid"

adrci> show homes


ADR Homes:
diag/rdbms/_mgmtdb/-MGMTDB
diag/tnslsnr/germany/asmnet1lsnr_asm
diag/tnslsnr/germany/listener_scan1
diag/tnslsnr/germany/listener
diag/tnslsnr/germany/mgmtlsnr
diag/asm/+asm/+ASM1
diag/crs/germany/crs
diag/clients/user_grid/host_2998292599_82
diag/clients/user_oracle/host_2998292599_82
diag/clients/user_root/host_2998292599_82

adrci incident management


[GRID]> adrci
ADR base = "/u01/app/grid"

adrci> show incident;

ADR Home = /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB:
*************************************************************************
INCIDENT_ID  PROBLEM_KEY                                           CREATE_TIME
-----------  ----------------------------------------------------  ----------------------------------
12073        ORA 700 [kskvmstatact: excessive swapping observed]   2014-09-08 17:44:56.580000 -07:00
36081        ORA 700 [kskvmstatact: excessive swapping observed]   2014-09-14 20:11:17.388000 -07:00
40881        ORA 700 [kskvmstatact: excessive swapping observed]   2014-09-16 15:30:18.319000 -07:00

adrci> set home diag/rdbms/_mgmtdb/-MGMTDB


adrci> ips create package incident 12073;
Created package 1 based on incident id 12073, correlation level typical

adrci> ips generate package 1 in /tmp


Generated package 1 in file /tmp/ORA700ksk_20140918110411_COM_1.zip, mode complete

[GRID]> ls -lart /tmp


-rw-r--r--. 1 grid oinstall 811806 Sep 18 11:05 ORA700ksk_20140918110411_COM_1.zip
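Two more adrci housekeeping examples that tie into the space discussion on the next slide; the retention values are placeholders (adrci operates per ADR home, so set the home first as shown above):

adrci> set home diag/crs/germany/crs
adrci> show alert -tail 50
adrci> purge -age 43200
(purge -age takes minutes; 43200 = 30 days)
adrci> set control (SHORTP_POLICY = 720, LONGP_POLICY = 8760)
(short- and long-term retention policies, in hours)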

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

63

Space Requirements, Exceptions, and Rules

Binary / Log per Node        Space Requirement
Grid Infra. (GI) Home        ~6.6 GB
RAC DB Home                  ~5.5 GB
TFA Repository               10 GB
GI Daemon Traces             ~2.6 GB
ASM Traces                   ~9 GB
DB Traces                    1.5 GB per DB per month
Listener Traces              60 MB per node per month
Total over 3 months:
  For 2 RAC DBs              ~43 GB
  For 100 RAC DBs            ~483 GB

Flex ASM vs. Standard ASM, Flex Cluster vs. Standard Cluster: does not make a difference for ADR!

(Figure residue: GI daemon and agent log directories such as gnsd, ocssd, ocssdrim, ghc, ghs, gns, APX, havip, helper, agent, exportfs, hanfs, NFS, mgmtdb; some OC4J logs and some GI home logs.)
Copyright 2014, Oracle and/or its affiliates. All rights reserved.


64

Appendix D
Flex Cluster – add nodes as needed

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

65

Recommendation: Install HUB Nodes, Add Leaf Nodes


Initial installation: HUB nodes only

Add Leafs later (addNode)

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

66

Add argentina as a HUB Node – addNode Part 1

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

67

Add argentina as a HUB Node – addNode Part 2

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

68

Add Leaf Nodes – addNode in Short


Note: Leaf nodes do not
require a virtual node
name (VIP). Application
VIPs for non-DB use
cases need to be added
manually later.

Normal, can
be ignored.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

69

Continue to use Leaf Nodes for Applications in 12.1.0.2


Database installer suggestion – consider the use case: useful, if spain is likely to become a HUB at some point in time.

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

70

Continue to use Leaf Nodes for Applications in 12.1.0.2


DBCA

Despite running Leaf Nodes

[GRID]> olsnodes -s -t
germany      Active   Unpinned
argentina    Active   Unpinned
brazil       Active   Unpinned
italy        Active   Unpinned
spain        Active   Unpinned

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

71

Some Examples of Resources running on Leaf Nodes

Leaf Listener (OFFLINE/OFFLINE):
[grid@spain Desktop]$ . grid_profile
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name                Target   State     Server       State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
                    ONLINE   ONLINE    argentina    STABLE
                    ONLINE   ONLINE    brazil       STABLE
                    ONLINE   ONLINE    germany      STABLE
ora.LISTENER.lsnr
                    ONLINE   ONLINE    argentina    STABLE
                    ONLINE   ONLINE    brazil       STABLE
                    ONLINE   ONLINE    germany      STABLE
ora.LISTENER_LEAF.lsnr
                    OFFLINE  OFFLINE   italy        STABLE
                    OFFLINE  OFFLINE   spain        STABLE
ora.net1.network
                    ONLINE   ONLINE    argentina    STABLE
                    ONLINE   ONLINE    brazil       STABLE
                    ONLINE   ONLINE    germany      STABLE

Trace File Analyzer (TFA):
[GRID]> ps -ef | grep grid_1
root 1431 1 0 14:12 ? 00:00:19 /u01/app/12.1.0/grid_1/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/RATFA.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/je5.0.84.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/ojdbc6.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/commons-io-2.2.jar oracle.rat.tfa.TFAMain /u01/app/12.1.0/grid_1/tfa/spain/tfa_home

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

72

Copyright 2014, Oracle and/or its affiliates. All rights reserved.

73
