I was recently asked an interesting question: "How is it possible to start the ASM instance if the spfile itself is stored in an ASM disk group? During ASM instance startup the disk groups themselves are closed, aren't they?"

Beginning with version 11g Release 2, the ASM spfile is stored automatically in the first disk group created during Grid Infrastructure installation:

grid@iudb007:~/ [+ASM5] asmcmd spget
+GRID/ivorato01/asmparameterfile/registry.253.768409647

During startup, the Grid Plug and Play profile delivers the ASM discovery string, i.e. the path pattern used to locate the ASM disks:

grid@iudb007:/u00/app/grid/product/11.2.0.x/gpnp/iudb007/profiles/peer/ [+ASM5] gpnptool getpval -asm_dis
Warning: some command line parameters were defaulted. Resulting command line:
/u00/app/grid/product/11.2.0.x/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-
/dev/mapper/*p1

The discovery string is used to scan the device headers and find those which contain a copy of the ASM spfile (kfdhdb.spfflg=1). In my environment, the ASM disk group GRID, created with NORMAL redundancy, is used exclusively for the ASM spfile, the voting files and the OCR:

grid@iudb007:~/ [+ASM5] asmcmd lsdsk -G GRID
Path
/dev/mapper/grid01p1
/dev/mapper/grid02p1
/dev/mapper/grid03p1

Let's scan the headers of those three devices:

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid01p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                     288 ; 0x0f4: 0x00000120
kfdhdb.spfflg:                       1 ; 0x0f8: 0x00000001

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid02p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                      59 ; 0x0f4: 0x0000003b
kfdhdb.spfflg:                       1 ; 0x0f8: 0x00000001

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                 1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                       0 ; 0x0f8: 0x00000000

In the output above we see that the first two devices, /dev/mapper/grid01p1 and /dev/mapper/grid02p1, each contain a copy of the ASM spfile. On the first device /dev/mapper/grid01p1 the ASM spfile starts at allocation unit (AU) 288, on the second device /dev/mapper/grid02p1 at AU 59.

Considering the allocation unit size (kfdhdb.ausize = 1M), let's dump the ASM spfile from those devices:

grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid01p1 of=spfileASM_Copy1.ora skip=288 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.115467 seconds, 9.1 MB/s

grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid02p1 of=spfileASM_Copy2.ora skip=59 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.029051 seconds, 36.1 MB/s

The output is stripped for readability:

grid@iudb007:~/ [+ASM5] strings spfileASM_Copy1.ora
...
+ASM8.asm_diskgroups='U1010','U1020',...
*.asm_diskstring='/dev/mapper/*p1'
*.asm_power_limit=10
*.db_cache_size=134217728
*.diagnostic_dest='/u00/app/oracle'
*.instance_type='asm'
*.large_pool_size=256M
*.memory_target=0
*.remote_listener='ivorato01:15300'
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1G
*.shared_pool_size=512M

The output from the second file spfileASM_Copy2.ora is, of course, the same.

Conclusion: to read the ASM spfile during ASM instance startup, it is not necessary to open the disk group. All the information needed to access the data is stored in the device headers.
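The manual steps above can be combined into a small script. The following is a minimal sketch (not from the original post): it assumes kfed is in the PATH of the Grid Infrastructure owner, uses this environment's discovery string /dev/mapper/*p1, and the output file names are purely illustrative:

#!/bin/bash
# Sketch: find all devices matching the ASM discovery string that carry a
# copy of the ASM spfile (kfdhdb.spfflg=1) and dump each copy with dd.
n=0
for dev in /dev/mapper/*p1; do
  hdr=$(kfed read "$dev")
  spfflg=$(echo "$hdr" | awk '/kfdhdb.spfflg/ {print $2}')
  if [ "$spfflg" = "1" ]; then
    au=$(echo "$hdr" | awk '/kfdhdb.ausize/ {print $2}')      # AU size in bytes
    spfau=$(echo "$hdr" | awk '/kfdhdb.spfile/ {print $2}')   # spfile start AU
    n=$((n+1))
    echo "$dev: spfile copy found at AU $spfau"
    # As in the manual dd above, this assumes the spfile fits into one AU.
    dd if="$dev" of="spfileASM_Copy${n}.ora" bs="$au" skip="$spfau" count=1 2>/dev/null
  fi
done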
By the way, the same technique is used to access the Clusterware voting files, which are also stored in an ASM disk group. In this case, Clusterware does not need a running ASM instance to access the cluster voting files:

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep vf
kfdhdb.vfstart:                    256 ; 0x0ec: 0x00000100 <- START offset (in AUs) of the voting file
kfdhdb.vfend:                      288 ; 0x0f0: 0x00000120 <- END offset (in AUs) of the voting file
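With kfdhdb.vfstart = 256 and kfdhdb.vfend = 288, the voting file region spans AUs 256 through 287, i.e. 32 AUs of 1 MB each (assuming the end offset is exclusive). It can therefore be dumped the same way as the spfile; a sketch, with an illustrative output file name:

dd if=/dev/mapper/grid03p1 of=votefile_dump bs=1M skip=256 count=32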
Thanks for another good blog.
That's why it is very important in 11gR2 to always have a backup of the GPnP wallet and profile, as well as of the OLR.
Further explanation which complements your post can be found here:
Thanks for the good information.
Excellent post. Good information for better understanding.
Can you please clarify the below?
As you mention, the cluster can read the OCR and voting disk information even from an unmounted disk.
But is it possible to shut down the ASM instance in 11gR2 Grid Infrastructure while the OCR and voting disks are placed in disk groups? Can you please explain the consequences of this?
Thanks in advance.
Sorry for the delay.
> But is it possible to shut down the ASM instance in 11gR2 Grid Infrastructure while the OCR and voting disks are placed in disk groups?
> Can you please explain the consequences of this?
Yes, you can shut down the ASM instance with:
1. "crsctl stop crs" or "crsctl stop cluster -n node"
This way you will shut down not only the ASM instance, but the whole cluster stack and all ASM-dependent cluster resources on the server.
2. shutdown abort (other options will not work, because the cluster itself is also considered a client).
The ASM clients before the shutdown:

DB_Name   Status      Software_Version   Compatible_version   Instance_Name   Disk_Group
+ASM      CONNECTED   11.2.0.x.0         11.2.0.x.0           +ASM1           GRID        <<<---
asmvol    CONNECTED   11.2.0.x.0         11.2.0.x.0           +ASM1           CLUSTERFS
TVD102    CONNECTED   10.2.0.5.0         10.2.0.5.0           TVD1021         U7010
TVD102    CONNECTED   10.2.0.5.0         10.2.0.5.0           TVD1021         U7020
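Such a client listing can be obtained from V$ASM_CLIENT on the ASM instance. A minimal sketch (my own query, not necessarily the one used above; the join to V$ASM_DISKGROUP supplies the disk group name):

sqlplus -s / as sysasm <<'EOF'
-- List ASM clients together with the disk group each one has open.
SET LINESIZE 200 PAGESIZE 100
SELECT c.db_name, c.status, c.software_version, c.compatible_version,
       c.instance_name, g.name AS disk_group
FROM   v$asm_client c
JOIN   v$asm_diskgroup g ON g.group_number = c.group_number;
EOF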
SQL> shutdown abort
ASM instance shutdown
This will lead to a crash of every resource which depends on ASM (TVD1021, ACFS), but the cluster core components will still be up and running:
grid@iudb044:~/ [+ASM1] crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
The output below is stripped for readability:

grid@iudb044:~/ [+ASM1] crsctl status resource -init -t
NAME           TARGET   STATE    SERVER     STATE_DETAILS
ora.asm
      1        OFFLINE  OFFLINE             Instance Shutdown
ora.ctssd
      1        ONLINE   ONLINE   iudb044    OBSERVER
...
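To bring ASM and the dependent resources back up afterwards, the ASM resource can be started again; a sketch using standard crsctl syntax (not from the original thread):

crsctl start resource ora.asm -init

Alternatively, a simple startup via SQL*Plus (connected as SYSASM) will also bring the instance back, after which the cluster can restart the dependent resources.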
Very nice article, thanks. Keep on blogging.
I get the following:
$ gpnptool getpval -asm_dis
Error: Can't open profile 'profile.xml' for read: file not found
Can you please advise what the reason could be? The above happens when the entire RAC stack is up on the node.
By default, gpnptool looks for a profile.xml in the current working directory (that is what the "defaulted" warning refers to). You can either:
1. Change to the $ORACLE_HOME/gpnp/<server>/profiles/peer/ directory and execute the "gpnptool getpval -asm_dis" command, or
2. Use the -p parameter to specify the full path to profile.xml:
grid@iudb007:~/ [+ASM5] gpnptool getpval -asm_dis -p=$ORACLE_HOME/gpnp/iudb007/profiles/peer/profile.xml
Warning: some command line parameters were defaulted. Resulting command line:
/u00/app/grid/product/11.2.0.x/bin/gpnptool.bin getpval -asm_dis -p=/u00/app/grid/product/11.2.0.x/gpnp/iudb007/profiles/peer/profile.xml -o-
Thanks Robert. It worked.
Why do you use partitions on multipath devices?
In this particular customer environment, the LUNs are partitioned (one primary partition per LUN)
for organisational reasons:
- partitioned device: device in use
- non-partitioned device: brand-new device
From a technical point of view, you don't need to partition the LUNs. But if you do, consider the partition alignment, as shown below.
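A quick way to verify the alignment is to print the partition table in sectors; with 512-byte sectors, a start sector that is a multiple of 2048 means the partition begins on a 1 MiB boundary (the device name is illustrative):

parted /dev/mapper/grid01 unit s print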
Beautifully explained !!!