Oracle ASM Spfile Stored in an ASM Disk Group - How it Works?

I was recently asked an interesting question: "How is it possible to start the ASM instance if the spfile itself is stored in an ASM disk group? During ASM instance startup the disk groups themselves are still closed, aren't they?"

Beginning with version 11g Release 2, the ASM spfile is automatically stored in the first disk group created during the Grid Infrastructure installation:

grid@iudb007:~/ [+ASM5] asmcmd spget
+GRID/ivorato01/asmparameterfile/registry.253.768409647
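
The same registered location can be cross-checked from a running ASM instance, for example in a SQL*Plus session against the ASM instance:

SQL> show parameter spfile

which reports the +GRID/ivorato01/asmparameterfile/registry.253.768409647 path shown above.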

During startup, the Grid Plug and Play (GPnP) profile delivers the ASM discovery string, i.e. the path pattern under which the ASM disks can be found:

grid@iudb007:/u00/app/grid/product/11.2.0.3/gpnp/iudb007/profiles/peer/ [+ASM5] gpnptool getpval -asm_dis
Warning: some command line parameters were defaulted. Resulting command line:
         /u00/app/grid/product/11.2.0.3/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-

/dev/mapper/*p1
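
Since the discovery string on this system is an ordinary glob pattern, the candidate devices can also be listed directly from the shell (device names will of course differ per environment):

ls -l /dev/mapper/*p1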

The discovery string is used to scan the device headers and find those that contain a copy of the ASM spfile (kfdhdb.spfflg=1). In my environment, the ASM disk group GRID, created with NORMAL redundancy, is used exclusively for the ASM spfile, the voting files and the OCR:

grid@iudb007:~/ [+ASM5] asmcmd lsdsk -G GRID
Path
/dev/mapper/grid01p1
/dev/mapper/grid02p1
/dev/mapper/grid03p1

Let's scan the headers of those three devices:

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid01p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                      288 ; 0x0f4: 0x00000120
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid02p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       59 ; 0x0f4: 0x0000003b
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
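
Instead of reading each device one by one, the same check can be scripted over all devices matching the discovery string. A minimal sketch, assuming kfed is in the PATH and the devices are readable by the grid user:

for dev in /dev/mapper/*p1; do
    echo "== ${dev}"
    kfed read ${dev} | grep -E 'spf|ausize'
done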

In the output above, we see that the first two devices, /dev/mapper/grid01p1 and /dev/mapper/grid02p1, each contain a copy of the ASM spfile. On the first device /dev/mapper/grid01p1 the ASM spfile starts at offset 288, on the second device /dev/mapper/grid02p1 at offset 59.
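
Note that kfdhdb.spfile is expressed in allocation units, not bytes: with an allocation unit size of 1 MB, the value 288 corresponds to a byte offset of 288 * 1048576 = 301,989,888.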

Considering the allocation unit size (kfdhdb.ausize = 1M), let's dump the ASM spfile from those devices:

grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid01p1 of=spfileASM_Copy1.ora skip=288 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.115467 seconds, 9.1 MB/s

grid@iudb007:~/ [+ASM5] dd if=/dev/mapper/grid02p1 of=spfileASM_Copy2.ora skip=59 bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.029051 seconds, 36.1 MB/s

The output below has been trimmed for readability:

grid@iudb007:~/ [+ASM5] strings spfileASM_Copy1.ora
...
+ASM8.asm_diskgroups='U1010','U1020',...
*.asm_diskstring='/dev/mapper/*p1'
*.asm_power_limit=10
*.db_cache_size=134217728
*.diagnostic_dest='/u00/app/oracle'
*.instance_type='asm'
*.large_pool_size=256M
*.memory_target=0
*.remote_listener='ivorato01:15300'
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1G
*.shared_pool_size=512M

The output from the second file spfileASM_Copy2.ora is of course the same.
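
This can be verified quickly by comparing the printable strings of both dumps, for example with bash process substitution:

diff <(strings spfileASM_Copy1.ora) <(strings spfileASM_Copy2.ora)

No output means both copies contain the same parameters.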

Conclusion:

To read the ASM spfile during ASM instance startup, it is not necessary to open the disk group: all information needed to access the data is stored in the device's header. By the way, the same technique is used to access the Clusterware voting files, which are also stored in an ASM disk group. In this case, Clusterware does not need a running ASM instance to access the cluster voting files:

grid@iudb007:~/ [+ASM5] kfed read /dev/mapper/grid03p1 | grep vf
kfdhdb.vfstart:                     256 ; 0x0ec: 0x00000100 <- START offset of the voting file
kfdhdb.vfend:                       288 ; 0x0f0: 0x00000120 <- END offset of the voting file
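
Analogous to the spfile, the voting file region could be dumped with dd as well; a sketch, assuming that kfdhdb.vfstart and kfdhdb.vfend are, like kfdhdb.spfile, expressed in allocation units (here: 32 AUs of 1 MB starting at AU 256; the output file name is arbitrary):

dd if=/dev/mapper/grid03p1 of=votingfile_dump bs=1M skip=256 count=32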

Comments

  • Hi Robert,

    Thanks for another good blog.

    That's why it is very important for 11gR2 to always have a backup of the GPnP wallet, the profile, as well as the OLR.

    Further explanations which complement your post can be found here:

    aychin.wordpress.com/.../oracle-11gr2-asm-spfile-eng

    regards,

    goran

  • Thanks for the good information.

  • Hi Robert.

    Excellent post. Good information for a better understanding.

    Can you please clarify the below for me?

    As you mention, the cluster can read the OCR & voting disk information even from an unmounted disk group.

    But is it possible to bring down the ASM instance in 11gR2 Grid while the OCR and voting disks are placed in disk groups? Can you please explain the consequences of this?

    Thanks in advance.

    Regards,

    Ramesh Togara.

  • Hi Ramesh,

    sorry for the delay.

    > But Is it possible to down the ASM instance in 11gR2 Grid while ocr and voting disks are placed in the diskgroups ?

    > Can you please explain the consequences for this?

    Yes, you can shut down the ASM instance with:

    1. "crsctl stop crs" or "crsctl stop cluster -n node"

    This way you will shut down not only the ASM instance but also the cluster and all ASM-dependent cluster resources on the server.

    2. "shutdown abort" (other options will not work, because the cluster itself is also considered a client, as lsct shows):

    ASMCMD> lsct
    DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
    +ASM     CONNECTED        11.2.0.3.0          11.2.0.3.0  +ASM1          GRID        <<<---
    asmvol   CONNECTED        11.2.0.3.0          11.2.0.3.0  +ASM1          CLUSTERFS
    TVD102   CONNECTED        10.2.0.5.0          10.2.0.5.0  TVD1021        U7010
    TVD102   CONNECTED        10.2.0.5.0          10.2.0.5.0  TVD1021        U7020

    SQL> shutdown abort
    ASM instance shutdown
    SQL>

    This will lead to a crash of every resource which depends on ASM (TVD1021, ACFS), but the cluster core components will still be up and running:

    grid@iudb044:~/ [+ASM1] crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online

    grid@iudb044:~/ [+ASM1] crsctl status resource -init -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.asm
         1        OFFLINE OFFLINE                               Instance Shutdown
    ora.cluster_interconnect.haip
         1        ONLINE  ONLINE       iudb044
    ora.crf
         1        ONLINE  ONLINE       iudb044
    ora.crsd
         1        ONLINE  ONLINE       iudb044
    ora.cssd
         1        ONLINE  ONLINE       iudb044
    ora.cssdmonitor
         1        ONLINE  ONLINE       iudb044
    ora.ctssd
         1        ONLINE  ONLINE       iudb044                  OBSERVER
    ora.diskmon
         1        OFFLINE OFFLINE
    ora.drivers.acfs
         1        ONLINE  ONLINE       iudb044
    ora.evmd
         1        ONLINE  ONLINE       iudb044
    ora.gipcd
         1        ONLINE  ONLINE       iudb044
    ora.gpnpd
         1        ONLINE  ONLINE       iudb044
    ora.mdnsd
         1        ONLINE  ONLINE       iudb044

    HTH,

    Robert

  • Hi Robert,

    Very nice article, thanks. Keep on blogging.

  • I get the following:

    $ gpnptool getpval -asm_dis

    Error: Can't open profile 'profile.xml' for read: file not found

    $

    Can you please advise as to what could be the reason? The above occurs while the entire RAC stack is up on the node.

    Thanks.

  • 1. Change to the $ORACLE_HOME/gpnp/<server>/profiles/peer/ directory and execute the "gpnptool getpval -asm_dis" command, or

    2. Use the -p parameter to specify the full path to the profile.xml:

    grid@iudb007:~/ [+ASM5] gpnptool getpval -asm_dis -p=$ORACLE_HOME/gpnp/iudb007/profiles/peer/profile.xml

    Warning: some command line parameters were defaulted. Resulting command line:

            /u00/app/grid/product/11.2.0.3/bin/gpnptool.bin getpval -asm_dis -p=/u00/app/grid/product/11.2.0.3/gpnp/iudb007/profiles/peer/profile.xml -o-

    /dev/mapper/*p1

    Cheers,

    Robert

  • thanks Robert. It worked.

    cheers

  • Why did you use partitions on the multipath devices?

  • In this particular customer environment, the LUNs are partitioned (one primary partition per LUN) for organisational reasons:

    - partitioned device: device in use

    - non-partitioned device: brand new device

    From a technical point of view, you don't need to partition the LUNs. But in case you do, consider the partition alignment.

    Cheers,

    Robert

  • Beautifully explained !!!
