In the exercise below, I have renamed an ASM diskgroup while the database is down. The rename can also be performed while the database is up and running, by taking the affected tablespaces offline.

I performed this activity on an 11.2.0.2 RAC setup, with a 2-node primary database and a 2-node standby database.

PRIMARY DB => bhuora01 / bhuora02
STANDBY DB => karlora01 / karlora02

Please understand the steps before performing them in a production database.

We are renaming the diskgroup from +TEST_RENAME to +TEST, in both the primary and the standby cluster database.
STEP#1
First we need to identify the datafiles that are located in the diskgroup, in both the primary and the standby database.

On bhuora01 (primary database)

SQL> select name from v$datafile where name like '+TEST%';

NAME
--------------------------------------------------------------------------------
+TEST_RENAME/BHU_b/datafile/test.256.769969469

On karlora01 (standby database)

SQL> select name from v$datafile where name like '+TEST%';

NAME
--------------------------------------------------------------------------------
+TEST_RENAME/BHU_a/datafile/test.256.769969471
STEP#2
Before starting the RENAME of DISK GROUP, we have checked
the status of Data guard broker
--
we can connect in the primary or standby database to check the sync status using
broker.
DGMGRL>
show configuration;
Configuration
- DG_BHU
Protection Mode: MaxAvailability
Databases:
BHU_B - Primary database
BHU_A - Physical standby database
Fast-Start
Failover: DISABLED
Configuration
Status:
SUCCESS
DGMGRL>
exit
STEP#3
Shut down the database in both the primary and the standby environments.

-- Shut down the primary database
bhuora01> srvctl stop database -d BHU_b

-- Checking the status of the database after shutdown
bhuora01> srvctl status database -d BHU_b
Instance BHU_1 is not running on node bhuora01
Instance BHU_2 is not running on node bhuora02

-- Shut down the standby database
karlora01> srvctl stop database -d BHU_a

-- Checking the status of the database after shutdown
karlora01> srvctl status database -d BHU_a
Instance BHU_1 is not running on node karlora01
Instance BHU_2 is not running on node karlora02
karlora01>
STEP#4
-- Using the ASM profile
On the primary database, I am looking into the details of the diskgroup.

bhuora01> asmcmd -p
ASMCMD [+] > lsdg test*
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304      6144     4968                0            4968              0             N  TEST_RENAME/
ASMCMD [+] >

-- To check the status of the diskgroup using the cluster resource
bhuora01> crsctl stat res ora.TEST_RENAME.dg
NAME=ora.TEST_RENAME.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on bhuora01, ONLINE on bhuora02

-- Using the ASM profile
On the standby database, I am looking into the details of the diskgroup.

karlora01> asmcmd -p
ASMCMD [+] > lsdg test*
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      6144     5012                0            5012              0             N  TEST_RENAME/
ASMCMD [+] >

-- To check the status of the diskgroup using the cluster resource
karlora01> crsctl stat res ora.TEST_RENAME.dg
NAME=ora.TEST_RENAME.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on karlora01, ONLINE on karlora02
STEP#5
The operations below are performed only in the primary database environment, and they have to be performed on all cluster nodes.

ALL THE STEPS ARE PERFORMED USING THE ASM PROFILE (no need to log in as the root user).

Node#1 of the primary database

-- Checking the status of the diskgroup before performing the unmount on node#1
bhuora01> asmcmd -p
ASMCMD [+] > lsdg test*
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304      6144     4968                0            4968              0             N  TEST_RENAME/

-- Unmount the diskgroup on node#1 of the primary cluster environment
ASMCMD [+] > umount TEST_RENAME

-- Checking the diskgroup status after it has been unmounted on the same node
ASMCMD [+] > lsdg TEST_RENAME
ASMCMD-08001: diskgroup 'TEST_RENAME' does not exist or is not mounted

Node#2 of the primary database

-- Checking the status of the diskgroup before performing the unmount on node#2
bhuora02> asmcmd -p
ASMCMD [+] > lsdg test*
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304      6144     4968                0            4968              0             N  TEST_RENAME/

-- Unmount the diskgroup on node#2 of the primary cluster environment
ASMCMD [+] > umount TEST_RENAME

-- Checking the diskgroup status after it has been unmounted
ASMCMD [+] > lsdg TEST_RENAME
ASMCMD-08001: diskgroup 'TEST_RENAME' does not exist or is not mounted

Note: make sure the steps above complete successfully. If the diskgroup is being accessed by someone on the server, the unmount will throw an error.

-- Check the status of the diskgroup using the cluster service
bhuora01> crsctl stat res ora.TEST_RENAME.dg
NAME=ora.TEST_RENAME.dg
TYPE=ora.diskgroup.type
TARGET=OFFLINE, OFFLINE
STATE=OFFLINE, OFFLINE
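The unmount has to succeed on every node before the rename can proceed. As a quick sketch, the per-node commands can be generated with a dry-run loop — the node names are the ones used in this walkthrough, and ssh access between nodes is an assumption (above, the umount was simply run locally on each node):

```shell
# Dry run: print the unmount command for each primary cluster node.
# Node names are from this walkthrough; adjust for your own cluster.
for node in bhuora01 bhuora02; do
  echo "ssh ${node} \"asmcmd umount TEST_RENAME\""
done
```

Echoing the commands first makes it easy to review the exact sequence before touching any node.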
STEP#6
Renaming the diskgroup should be performed on only one node of the cluster database environment. I am performing it for the primary database.

THE STEPS ARE PERFORMED USING THE ASM PROFILE (no need to log in as the root user).

I am running the rename command on node#1 of the primary database, renaming the diskgroup from TEST_RENAME to TEST.

bhuora01> renamedg phase=both dgname=test_rename newdgname=TEST verbose=true
Parsing parameters..
Parameters in effect:
 Old DG name       : TEST_RENAME
 New DG name       : TEST
 Phases            :
    Phase 1
    Phase 2
 Discovery str     : (null)
 Clean             : TRUE
 Raw only          : TRUE
renamedg operation: phase=both dgname=test_rename newdgname=TEST verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST with disk number:0 and timestamp (32960944 -1401007104)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST1 with disk number:1 and timestamp (32960944 -1401007104)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST2 with disk number:2 and timestamp (32960944 -1401007104)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST with disk number:0 and timestamp (32960944 -1401007104)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST1 with disk number:1 and timestamp (32960944 -1401007104)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST2 with disk number:2 and timestamp (32960944 -1401007104)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Checking disk number:2
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for ORCL:DG_TEST
Modifying the header
Looking for ORCL:DG_TEST1
Modifying the header
Looking for ORCL:DG_TEST2
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b3a420f00a0
bhuora01>
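In the verbose output above, phase 2 should print one "Modifying the header" line per member disk — three disks in this group, so three modifications. If the output is captured to a log file, a quick grep can confirm no disk was skipped. A minimal sketch, assuming the verbose output was saved to a hypothetical log file:

```shell
# Count header modifications in a saved renamedg log; the count should
# match the number of disks in the group (3 in this walkthrough).
log="renamedg_test.log"   # hypothetical capture of the verbose output
printf '%s\n' \
  "Looking for ORCL:DG_TEST"  "Modifying the header" \
  "Looking for ORCL:DG_TEST1" "Modifying the header" \
  "Looking for ORCL:DG_TEST2" "Modifying the header" > "$log"

modified=$(grep -c "Modifying the header" "$log")
echo "headers modified: ${modified}"   # expect 3, one per disk
rm -f "$log"
```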
STEP#7
Once the rename has completed successfully, we need to mount the diskgroup, under its new name, on all cluster nodes in the primary database environment.

-- Mount the diskgroup with the new name on node#1 of the primary cluster environment
bhuora01> asmcmd mount test

-- Checking the diskgroup status after it has been mounted
bhuora01> asmcmd lsdg test
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304      6144     4968                0            4968              0             N  TEST/
bhuora01>

-- Mount the diskgroup with the new name on node#2 of the primary cluster environment
+ASM2 bhuora02> asmcmd mount test

-- Checking the diskgroup status after it has been mounted
+ASM2 bhuora02> asmcmd lsdg test
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304      6144     4968                0            4968              0             N  TEST/
+ASM2 bhuora02>
STEP#8
When you look in to the cluster resource, we can see a
new resource (renamed diskgroup) has
been automatically added on the cluster environment.
bhuora01> crsctl
stat res ora.TEST.dg
NAME=ora.TEST.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on bhuora01, ONLINE on bhuora02
karlora01>
crsctl stat res
ora.TEST_RENAME.dg
NAME=ora.TEST_RENAME.dg
TYPE=ora.diskgroup.type
TARGET=OFFLINE,
OFFLINE
STATE=OFFLINE,
OFFLINE
STEP#9
We have to perform the same steps in the standby database environment as well, since we are changing the diskgroup there too. Follow the steps from STEP#3 to STEP#8.

For reference, I am pasting the steps we performed in the standby database environment.

UNMOUNT ON ALL THE NODES OF THE CLUSTER

Node#1
karlora01> asmcmd umount TEST_RENAME
karlora01> asmcmd lsdg test_rename
ASMCMD-08001: diskgroup 'test_rename' does not exist or is not mounted

Node#2
karlora02> asmcmd umount TEST_RENAME
karlora02> asmcmd lsdg test_rename
ASMCMD-08001: diskgroup 'test_rename' does not exist or is not mounted

RESOURCE DETAILS AFTER UNMOUNT
ora.TEST_RENAME.dg OFFLINE, OFFLINE

I am running the rename command on node#1 of the standby database, renaming the diskgroup from TEST_RENAME to TEST.

karlora01> renamedg phase=both dgname=test_rename newdgname=TEST verbose=true
Parsing parameters..
Parameters in effect:
 Old DG name       : TEST_RENAME
 New DG name       : TEST
 Phases            :
    Phase 1
    Phase 2
 Discovery str     : (null)
 Clean             : TRUE
 Raw only          : TRUE
renamedg operation: phase=both dgname=test_rename newdgname=TEST verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST with disk number:0 and timestamp (32960971 1603601408)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST1 with disk number:1 and timestamp (32960971 1603601408)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST2 with disk number:2 and timestamp (32960971 1603601408)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST with disk number:0 and timestamp (32960971 1603601408)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST1 with disk number:1 and timestamp (32960971 1603601408)
Identified disk ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so:ORCL:DG_TEST2 with disk number:2 and timestamp (32960971 1603601408)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Checking disk number:2
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for ORCL:DG_TEST
Modifying the header
Looking for ORCL:DG_TEST1
Modifying the header
Looking for ORCL:DG_TEST2
Modifying the header
Completed phase 2
Terminating kgfd context 0x2ac61d6d40a0
karlora01>

-- Mount the diskgroup on node#1 of the standby cluster environment
karlora01> asmcmd mount TEST
karlora01> asmcmd lsdg test
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      6144     5012                0            5012              0             N  TEST/
karlora01>

-- Mount the diskgroup on node#2 of the standby cluster environment
karlora02> asmcmd mount TEST
karlora02> asmcmd lsdg test
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      6144     5012                0            5012              0             N  TEST/

ora.TEST.dg ONLINE on karlora01, ONLINE on karlora02
ora.TEST_RENAME.dg OFFLINE, OFFLINE
STEP#10
Before starting the database, we have to modify the configuration details for the databases in the server control (srvctl).

Below are the configuration details for the primary database in the server control. We have to modify the "Disk Groups" entry in this output, which still shows the TEST_RENAME diskgroup as part of the primary database configuration. If you start the database with srvctl without modifying it, you will get an error message (although you could still start it using sqlplus).

bhuora01> srvctl config database -d BHU_b
Database unique name: BHU_B
Database name: BHU
Oracle home: /oracle/BHU/112_64
Oracle user: oracle
Spfile: +BHU_DATA1/BHU_B/PARAMETERFILE/spfile.261.764676511
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: BHU_B
Database instances: BHU_1,BHU_2
Disk Groups: BHU_DATA1,BHU_ARCH,BHU_RECO,BHU_OLOG,BHU_MLOG,TEST_RENAME
Mount point paths: /oracle/BHU
Services: racbhu
Type: RAC
Database is administrator managed
bhuora01>
We modify the disk group in the list from TEST_RENAME to TEST using server control for the primary database:

bhuora01> srvctl modify database -d BHU_B -a BHU_DATA1,BHU_ARCH,BHU_RECO,BHU_OLOG,BHU_MLOG,TEST

We modify the disk group in the list from TEST_RENAME to TEST using server control for the standby database:

karlora01> srvctl modify database -d BHU_A -a BHU_DATA1,BHU_ARCH,BHU_RECO,BHU_OLOG,BHU_MLOG,TEST
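The srvctl modification above is a straight substitution of one name in the comma-separated diskgroup list. As a sanity check, the new list can be derived from the "Disk Groups" line of the srvctl config output with plain shell string handling — a sketch, using the list and database name shown above:

```shell
# Old diskgroup list, as reported by 'srvctl config database'
dglist="BHU_DATA1,BHU_ARCH,BHU_RECO,BHU_OLOG,BHU_MLOG,TEST_RENAME"

# Swap the old diskgroup name for the new one
newlist=$(printf '%s' "$dglist" | sed 's/TEST_RENAME/TEST/')

# Print the srvctl command we would run (dry run only)
echo "srvctl modify database -d BHU_B -a ${newlist}"
```

Deriving the list this way avoids retyping the other diskgroup names, which is where a typo in the srvctl command would be easy to introduce.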
Let us check the status of the diskgroup list after the modification. We need to check the same in the standby database as well.

bhuora01> srvctl config database -d BHU_b
Database unique name: BHU_B
Database name: BHU
Oracle home: /oracle/BHU/112_64
Oracle user: oracle
Spfile: +BHU_DATA1/BHU_B/PARAMETERFILE/spfile.261.764676511
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: BHU_B
Database instances: BHU_1,BHU_2
Disk Groups: BHU_DATA1,BHU_ARCH,BHU_RECO,BHU_OLOG,BHU_MLOG,TEST
Mount point paths: /oracle/BHU
Services: racbhu
Type: RAC
Database is administrator managed
bhuora01>
STEP#11
Start only one instance of the primary database, to the mount stage. We have to rename the datafile located in the old diskgroup to the new diskgroup.

On bhuora01 (primary database)

bhuora01> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 19 15:39:57 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1.4431E+10 bytes
Fixed Size                  2240272 bytes
Variable Size            3892314352 bytes
Database Buffers         1.0503E+10 bytes
Redo Buffers               34148352 bytes
Database mounted.

SQL> select name from v$datafile where name like '+TEST%';

NAME
--------------------------------------------------------------------------------
+TEST_RENAME/BHU_b/datafile/test.256.769969469

-- Renaming the datafile to the new diskgroup
SQL> alter database rename file '+TEST_RENAME/BHU_b/datafile/test.256.769969469' to '+TEST/BHU_b/datafile/test.256.769969469';

Database altered.

-- After the rename
SQL> select name from v$datafile where name like '+TEST%';

NAME
--------------------------------------------------------------------------------
+TEST/BHU_b/datafile/test.256.769969469

SQL> alter database open;

Database altered.
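The rename statement above only swaps the diskgroup prefix; the rest of the path, including the file and incarnation numbers, stays the same. With more than one datafile in the group, the statements can be generated from the v$datafile paths by substitution. A sketch, using the path from the query above:

```shell
# Build an ALTER DATABASE RENAME FILE statement by swapping the
# diskgroup prefix; everything after the prefix is unchanged.
old_dg="TEST_RENAME"
new_dg="TEST"
file="+TEST_RENAME/BHU_b/datafile/test.256.769969469"

new_file=$(printf '%s' "$file" | sed "s/^+${old_dg}/+${new_dg}/")
echo "alter database rename file '${file}' to '${new_file}';"
```

Anchoring the substitution at the start of the path (`^+`) avoids accidentally rewriting a matching string elsewhere in a longer path.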
STEP#12
Start only one instance of the standby database, to the mount stage. We have to rename the datafile located in the old diskgroup to the new diskgroup.

On karlora01 (standby database)

karlora01> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Mon Dec 19 15:39:57 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 1.4431E+10 bytes
Fixed Size                  2240272 bytes
Variable Size            3892314352 bytes
Database Buffers         1.0503E+10 bytes
Redo Buffers               34148352 bytes
Database mounted.

SQL> select name from v$datafile where name like '+TEST%';

NAME
--------------------------------------------------------------------------------
+TEST_RENAME/BHU_a/datafile/test.256.769969471

Note: we can't rename a datafile in the standby database while "standby_file_management" is in AUTO mode.

SQL> show parameter standby

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_archive_dest                 string
standby_file_management              string      AUTO

SQL> alter system set standby_file_management='MANUAL' scope=both sid='*';

System altered.

SQL> alter database rename file '+TEST_RENAME/BHU_a/datafile/test.256.769969471' to '+TEST/BHU_a/datafile/test.256.769969471';

Database altered.

SQL> alter system set standby_file_management='AUTO' scope=both sid='*';

System altered.
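On the standby, the rename has to be wrapped in a standby_file_management toggle, as shown above: switch to MANUAL, rename, switch back to AUTO. The full three-statement sequence can be generated the same way as on the primary — a sketch built from the values shown above:

```shell
# Generate the standby rename sequence: set standby_file_management to
# MANUAL, rename the file, then restore AUTO.
old="+TEST_RENAME/BHU_a/datafile/test.256.769969471"
new=$(printf '%s' "$old" | sed 's/^+TEST_RENAME/+TEST/')

cat <<SQL
alter system set standby_file_management='MANUAL' scope=both sid='*';
alter database rename file '${old}' to '${new}';
alter system set standby_file_management='AUTO' scope=both sid='*';
SQL
```

Generating the whole sequence at once makes it harder to forget the final switch back to AUTO, which would leave the standby unable to add files automatically.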
STEP#13
Stop the single instance on both sides (primary and standby database), then start the database on all nodes of the primary and the standby.

-- Shut down the single instance in the primary database
bhuora01> srvctl stop database -d BHU_b

-- Starting the entire database
bhuora01> srvctl start database -d BHU_b

-- Checking the status of the database after startup
bhuora01> srvctl status database -d BHU_b
Instance BHU_1 is running on node bhuora01
Instance BHU_2 is running on node bhuora02

-- Shut down the single instance in the standby database, then start the entire database in mount stage or read-only mode
karlora01> srvctl stop database -d BHU_a

-- Starting the entire database
karlora01> srvctl start database -d BHU_a

-- Checking the status of the database after startup
karlora01> srvctl status database -d BHU_a
Instance BHU_1 is running on node karlora01
Instance BHU_2 is running on node karlora02
STEP#14
Now let us check whether the primary and standby databases are in sync.

DGMGRL> show configuration;

Configuration - DG_BHU
  Protection Mode: MaxAvailability
  Databases:
    BHU_B - Primary database
    BHU_A - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>
STEP#15
We removed the old diskgroup resource from the cluster registry.

On the primary database (node#1), set the cluster profile before running the commands.

bhuora01> crsctl delete resource ora.TEST_RENAME.dg
bhuora01> crsctl stat res ora.TEST_RENAME.dg
CRS-2613: Could not find resource 'ora.TEST_RENAME.dg'.

On the standby database (node#1)

karlora01> crsctl delete resource ora.TEST_RENAME.dg
karlora01> crsctl stat res ora.TEST_RENAME.dg
CRS-2613: Could not find resource 'ora.TEST_RENAME.dg'.

Note: if the resource cannot be deleted, and you are sure it is not in use by anyone, you can use the force option.

karlora01> crsctl delete resource ora.TEST_RENAME.dg -f