
export CRS_HOME=/shared_nas/app/software

export PATH=$CRS_HOME/bin:$PATH
cd $CRS_HOME/bin
-bash-3.2$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rachostc
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rachostd
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rachostc
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rachostc
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.eons       ora.eons.type  ONLINE    ONLINE    rachostc
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rachostc
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    rachostc
ora.orcl.db    ora....se.type ONLINE    ONLINE    rachostc
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rachostd
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rachostc
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rachostc
ora....SM1.asm application    OFFLINE   OFFLINE
ora....UC.lsnr application    ONLINE    ONLINE    rachostc
ora....zuc.gsd application    OFFLINE   OFFLINE
ora....zuc.ons application    ONLINE    ONLINE    rachostc
ora....zuc.vip ora....t1.type ONLINE    ONLINE    rachostc
ora....SM2.asm application    OFFLINE   OFFLINE
ora....UD.lsnr application    ONLINE    ONLINE    rachostd
ora....zud.gsd application    OFFLINE   OFFLINE
ora....zud.ons application    ONLINE    ONLINE    rachostd
ora....zud.vip ora....t1.type ONLINE    ONLINE    rachostd
Cluster name check
Print cluster name:
-bash-3.2$ cemutlo -n
rachost-cluster
Print cluster version:
-bash-3.2$ cemutlo -w
2:1:
Managing cluster
Starting
/etc/init.d/init.crs start
crsctl start crs
Stopping
/etc/init.d/init.crs stop
crsctl stop crs
Enable/Disable at boot time
/etc/init.d/init.crs enable
/etc/init.d/init.crs disable
crsctl enable crs
crsctl disable crs
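For example, a restart of Clusterware on a single node could look like the sketch below (run as root); the crsctl check crs step is added here only as a verification and is not output captured in this document:
crsctl stop crs     # stop Clusterware on the local node
crsctl start crs    # start it again
crsctl check crs    # verify that the Clusterware daemons report as healthy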

Managing Database configuration with srvctl


Start all instances
srvctl start database -d <database name> -o <option>
-o can be force/open/mount/nomount
Stop all instances
srvctl stop database -d <database name> -o <option>
-o can be immediate/abort/normal/transactional
Start/Stop a particular instance
srvctl start instance -d <database name> -i <instance>,<instance>
srvctl stop instance -d <database name> -i <instance>,<instance>
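For example, using the orcl database and instance names shown in the srvctl config output below (the -o values are just illustrative choices):
srvctl stop database -d orcl -o immediate
srvctl start database -d orcl -o open
srvctl stop instance -d orcl -i orcl2 -o immediate
srvctl start instance -d orcl -i orcl2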
Display the registered databases
srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /shared_nas/app/oracle01/product/11.2.0/dbhome_1
Oracle user: oracle01
Spfile: /shared_nas/oradata/orcl/spfileorcl.ora
Domain: us.oracle.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2
Disk Groups:
Services:
Status
-bash-3.2$ srvctl status database -d orcl
Instance orcl1 is running on node rachostc
Instance orcl2 is running on node rachostd
-bash-3.2$ srvctl status instance -d orcl -i orcl1
Instance orcl1 is running on node rachostc
-bash-3.2$ srvctl status nodeapps
VIP rachostc-v is enabled
VIP rachostc-v is running on node: rachostc
VIP rachostd-v is enabled
VIP rachostd-v is running on node: rachostd
Network is enabled
Network is running on node: rachostc
Network is running on node: rachostd
GSD is disabled
GSD is not running on node: rachostc
GSD is not running on node: rachostd
ONS is enabled
ONS daemon is running on node: rachostc
ONS daemon is running on node: rachostd
eONS is enabled
eONS daemon is running on node: rachostc
eONS daemon is running on node: rachostd

-bash-3.2$ srvctl status asm
ASM is not running.
-bash-3.2$ srvctl status asm -n rachostc
ASM is not running on rachostc
-bash-3.2$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rachostc,rachostd
-bash-3.2$ srvctl status listener -n rachostc
Listener LISTENER is enabled on node(s): rachostc
Listener LISTENER is running on node(s): rachostc
Starting/Stopping
srvctl start database -d <database>
srvctl start instance -d <database> -i <instance>,<instance>
srvctl start nodeapps -n <node>
srvctl start asm -n <node>
srvctl start service -d <database>
srvctl stop database -d <database>
srvctl stop instance -d <database> -i <instance>,<instance>
srvctl stop nodeapps -n <node>
srvctl stop asm -n <node>
srvctl stop service -d <database>
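As an illustration with the node names from this cluster (the service name app_svc is hypothetical, since no services are listed in the srvctl config output above):
srvctl stop nodeapps -n rachostc
srvctl start nodeapps -n rachostc
srvctl start asm -n rachostd
srvctl stop service -d orcl -s app_svc    # app_svc is a hypothetical service name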

Nodes:
Node names and numbers:
-bash-3.2$ olsnodes -n
rachostc        1
rachostd        2
Local node name:
-bash-3.2$ olsnodes -l
rachostc
Activate logging:
olsnodes -g
Oracle Interfaces:
Display
-bash-3.2$ oifcfg getif
eth0 10.240.112.0 global public
eth1 192.168.122.0 global cluster_interconnect
Delete
-bash-3.2$ oifcfg delif -global <interface name>[/<subnet>]
Set
-bash-3.2$ oifcfg setif -global <interface name>/<subnet>:public
-bash-3.2$ oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
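For instance, re-registering the private interconnect listed by oifcfg getif above (interface and subnet taken from that output):
oifcfg setif -global eth1/192.168.122.0:cluster_interconnect
oifcfg getif    # confirm the registration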

Voting Disk
Adding
crsctl add css votedisk <file>
Deleting
crsctl delete css votedisk <file>
Querying
-bash-3.2$ crsctl query css votedisk
##  STATE    File Universal Id                  File Name                    Disk group
--  -----    -----------------                  ---------                    ----------
 1. ONLINE   454be1f462484f9ebf1909717f583e1d   (/shared_nas/storage/vdsk)   []
Located 1 voting disk(s).
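As a sketch only: with voting disks kept on a shared filesystem as in the query above, an extra copy could be added and removed as shown below. The path /shared_nas/storage/vdsk2 is hypothetical, and voting disks stored in ASM are managed through the disk group rather than by file path:
crsctl add css votedisk /shared_nas/storage/vdsk2      # hypothetical second copy
crsctl query css votedisk                              # verify the new disk is listed
crsctl delete css votedisk /shared_nas/storage/vdsk2   # remove it again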
Node Scripts
Add Node: addnode.sh
Delete Node: deletenode.sh

cluvfy
-bash-3.2$ cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:
    -post hwos     : post-check for hardware and operating system
    -pre  cfs      : pre-check for CFS setup
    -post cfs      : post-check for CFS setup
    -pre  crsinst  : pre-check for CRS installation
    -post crsinst  : post-check for CRS installation
    -pre  hacfg    : pre-check for HA configuration
    -post hacfg    : post-check for HA configuration
    -pre  dbinst   : pre-check for database installation
    -pre  acfscfg  : pre-check for ACFS Configuration.
    -post acfscfg  : post-check for ACFS Configuration.
    -pre  dbcfg    : pre-check for database configuration
    -pre  nodeadd  : pre-check for node addition.
    -post nodeadd  : post-check for node addition.
    -post nodedel  : post-check for node deletion.
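A typical run, as a sketch using the node names from this cluster:
cluvfy stage -post crsinst -n rachostc,rachostd -verbose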

-bash-3.2$ cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:
    nodereach  : checks reachability between nodes
    nodecon    : checks node connectivity
    cfs        : checks CFS integrity
    ssa        : checks shared storage accessibility
    space      : checks space availability
    sys        : checks minimum system requirements
    clu        : checks cluster integrity
    clumgr     : checks cluster manager integrity
    ocr        : checks OCR integrity
    olr        : checks OLR integrity
    ha         : checks HA integrity
    crs        : checks CRS integrity
    nodeapp    : checks node applications existence
    admprv     : checks administrative privileges
    peer       : compares properties with peers
    software   : checks software distribution
    asm        : checks ASM integrity
    acfs       : checks ACFS integrity
    gpnp       : checks GPnP integrity
    gns        : checks GNS integrity
    scan       : checks SCAN configuration
    ohasd      : checks OHASD integrity
    clocksync  : checks Clock Synchronization
    vdisk      : check Voting Disk Udev settings