UnixPedia : HPUX / LINUX / SOLARIS: 2015

Friday, March 27, 2015

How to Clear FIN_WAIT Connections

For FIN_WAIT_2:
There is an ndd parameter called tcp_fin_wait_2_timeout.

This parameter sets the FIN_WAIT_2 timer on HP-UX 11.x to clean up idle FIN_WAIT_2 connections. A value set on the command line will not survive a reboot, so the change must also be made in /etc/rc.config.d/nddconf.

tcp_fin_wait_2_timeout specifies an interval, in milliseconds, after which the TCP connection will be unconditionally killed. An appropriate reset segment will be sent when the connection is killed.

The default for tcp_fin_wait_2_timeout is 0, which allows the connection to live forever, as long as the far side continues to answer keepalives.

On this system the current value is 0, which allows connections in FIN_WAIT_2 to live forever.

To enable the FIN_WAIT_2 timer, do the following:

1. To get the current value (0 is turned off):
# ndd -get /dev/tcp tcp_fin_wait_2_timeout
0

2. To set the value to 10 minutes:
# ndd -set /dev/tcp tcp_fin_wait_2_timeout 600000

3. Check the setting:
# ndd -get /dev/tcp tcp_fin_wait_2_timeout
600000

Note: (1000 ms per second) * (60 seconds) * (10 minutes) = 600000 ms.
10 minutes is just an example but probably a good selection. Using a setting less than 10 minutes is not recommended by HP and may cause data loss with half-closed TCP connections.

A value set with ndd on the command line will not survive a reboot, so you also need to add the parameter to /etc/rc.config.d/nddconf so that it is set at boot time:

TRANSPORT_NAME[10]=tcp
NDD_NAME[10]=tcp_fin_wait_2_timeout
NDD_VALUE[10]=600000

Use this command to read the nddconf file and apply the ndd settings:
# ndd -c
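
Before and after tuning the timer, it can be useful to see how many connections are actually sitting in FIN_WAIT_2. A quick check with netstat (a sketch; the foreign-address field position assumed here may vary slightly by OS release):

# netstat -an | grep FIN_WAIT_2 | wc -l
# netstat -an | grep FIN_WAIT_2 | awk '{print $5}' | sort | uniq -c | sort -rn | head

The first command gives the total count; the second shows which remote endpoints hold the most FIN_WAIT_2 sockets.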

For TIME_WAIT:

Look at the value with this command:

ndd -get /dev/tcp tcp_time_wait_interval
60000

Set the value with this command:

ndd -set /dev/tcp tcp_time_wait_interval xxxxx

where xxxxx is the desired number of milliseconds. This setting does not persist across a reboot, so it also needs to be added to /etc/rc.config.d/nddconf (see below).

Important: Be careful with the tcp_time_wait_interval setting for the reason described below.

This timer is in place and is part of the TCP Protocol Specification to prevent a particular problem. A TCP connection is made unique by these four numbers:

local IP + local TCP port + remote IP + remote TCP port

If a packet is sent out into the network with these four numbers, and the user then tears down the connection and REUSES the same 4-tuple for a NEW connection, a delayed packet from the old connection arriving off the wire can corrupt the new connection. The TIME_WAIT state exists to prevent old connection identifiers from being reused before all data has been 'flushed' off the network.
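
Before changing the interval, it helps to know how many connections are actually in TIME_WAIT compared with the other states. A quick per-state summary (a sketch, assuming the connection state is the last column of the netstat output):

# netstat -an | grep tcp | awk '{print $NF}' | sort | uniq -c

A large TIME_WAIT count on a busy server that opens and closes many short-lived connections is normal; only tune the interval if ports or memory are actually being exhausted.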

A value set with ndd on the command line will not survive a reboot, so you also need to add the parameter to /etc/rc.config.d/nddconf so that it is set at boot time:

TRANSPORT_NAME[2]=tcp
NDD_NAME[2]=tcp_time_wait_interval
NDD_VALUE[2]=60000

Use this command to read the nddconf file and apply the ndd settings:
# ndd -c


To release connections already hung in FIN_WAIT or TIME_WAIT, we recommend bouncing the server/application/database after confirming a downtime window.



Saturday, February 7, 2015

HPUX : No communication with the OVO management server

The server had an issue with port 383: there was no communication with the OVO management server. Restarting the agent with opcagt -cleanstart resolved it, as shown below.

[root@tcscar4:/.root]#
#-> /opt/OV/bin/opcagt -status
scopeux     Perf Agent data collector                        (3563)   Running
midaemon    Measurement Interface daemon                     (3584)   Running
ttd         ARM registration daemon                          (3467)   Running
perfalarm   Alarm generator                                           Stopped
perfd       real time server                                 (3696)   Running
(ctrl-111) Ovcd is not yet started.
Could not contact Message Agent to query buffering state.
[root@tcscar4:/.root]#
#->
[root@tcscar4:/.root]#
#-> /opt/OV/bin/opcagt -cleanstart
(ctrl-111) Ovcd is not yet started.
[root@tcscar4:/.root]#
#-> whereis Ovcd
Ovcd:
[root@tcscar4:/.root]#
#-> /opt/OV/bin/opcagt -status
scopeux     Perf Agent data collector                        (13060)  Running
midaemon    Measurement Interface daemon                     (12930)  Running
ttd         ARM registration daemon                          (3467)   Running
perfalarm   Alarm generator                                  (13077)  Running
perfd       real time server                                 (12919)  Running
coda        OV Performance Core                 COREXT       (12939)  Running
opcacta     OVO Action Agent                    AGENT,EA     (12957)  Running
opcle       OVO Logfile Encapsulator            AGENT,EA     (12961)  Running
opcmona     OVO Monitor Agent                   AGENT,EA     (12963)  Running
opcmsga     OVO Message Agent                   AGENT,EA     (12948)  Running
opcmsgi     OVO Message Interceptor             AGENT,EA     (12959)  Running
ovbbccb     OV Communication Broker             CORE         (12933)  Running
ovcd        OV Control                          CORE         (12932)  Running
ovconfd     OV Config and Deploy                COREXT       (12934)  Running
Message Agent is not buffering.
[root@tcscar4:/.root]#
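
For reference, when ovcd is down (as in the first status output above), the recovery amounts to restarting the agent. A typical restart sequence looks like the sketch below; the bbcutil check and the <mgmt_server> placeholder are illustrative only, and option behaviour can differ between agent versions:

#-> /opt/OV/bin/opcagt -status              # confirm which components are down
#-> /opt/OV/bin/opcagt -kill                # stop all agent processes, including ovcd
#-> /opt/OV/bin/opcagt -cleanstart          # clear temporary queues and start everything
#-> /opt/OV/bin/bbcutil -ping <mgmt_server> # check port 383 communication to the management server
#-> /opt/OV/bin/opcagt -status              # ovcd, ovbbccb and opcmsga should show Running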

Wednesday, February 4, 2015

How to Rename a Volume Group in LVM



#-> vgdisplay -v /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   1       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000      


   --- Physical volumes ---
   PV Name                     /dev/disk/disk68
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk72
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk73
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk66
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              


   --- Physical volume groups ---
   PVG Name                    PVG001
   PV Name                     /dev/disk/disk68
   PV Name                     /dev/disk/disk72
   PV Name                     /dev/disk/disk73
   PV Name                     /dev/disk/disk66



#-> vgdisplay /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   1       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000      


#-> #Rename the vg vg_test to vg_original

#-> vgexport -s -v -m vg_test.mapfile vg_test
Beginning the export process on Volume Group "vg_test".
vgexport: Volume group "vg_test" is still active.
vgexport: Couldn't export volume group "vg_test".

#-> ll vg_test.mapfile
vg_test.mapfile not found

#-> vgexport -p -s -v -m vg_test.mapfile vg_test
Beginning the export process on Volume Group "vg_test".
vgexport: Volume group "vg_test" is still active.
/dev/disk/disk68
/dev/disk/disk72
/dev/disk/disk73
/dev/disk/disk66
vgexport: Preview of vgexport on volume group "vg_test" succeeded.

Deactivate the volume group by entering the following command

#-> vgchange -a n vg_test
Volume group "vg_test" has been successfully changed.


If you want to retain the same minor number for the volume group, examine the volume group's
group file as follows

#-> ll /dev/vg_test/group
crw-r--r--   1 root       sys         64 0x280000 Jul  8 08:20 /dev/vg_test/group


Remove the volume group device files and its entry from the LVM configuration files by entering
the following command:
#-> vgexport vg_test
Physical volume "/dev/disk/disk68" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk72" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk73" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk66" has been successfully deleted from
physical volume group "PVG001".
vgexport: Volume group "vg_test" has been successfully removed.

#-> mkdir /dev/vg_original

#-> mknod /dev/vg_original/group c 64 0x280000

#-> vgimport -s -v -m vg_test.mapfile /dev/vg_original
The legacy naming model has been disabled on the system.
Try with the -N option.

Add the volume group entry back to the LVM configuration files using the vgimport command, specifying the new volume group name:

#-> vgimport -s -v -N -m vg_test.mapfile /dev/vg_original
Beginning the import process on Volume Group "/dev/vg_original".
vgimport: Volume group "/dev/vg_original" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

Activate the newly imported volume group. Note that vgdisplay fails until the volume group is activated:

#-> vgdisplay /dev/vg_original
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vg_original".

#-> vgchange -a y /dev/vg_original
Activated volume group.
Volume group "/dev/vg_original" has been successfully changed.

#-> vgdisplay /dev/vg_original
--- Volume groups ---
VG Name                     /dev/vg_original
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   0       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000   

Back up the volume group configuration as follows:
#-> vgcfgbackup   /dev/vg_original
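
Put together, the rename boils down to the sequence below, a condensed recap of the steps above (assuming no logical volumes from the volume group are mounted; adjust the mapfile name, minor number and volume group names for your environment):

# vgexport -p -s -v -m vg_test.mapfile vg_test      <== preview and write the mapfile
# vgchange -a n vg_test                             <== deactivate the VG
# ll /dev/vg_test/group                             <== note the minor number (0x280000 here)
# vgexport vg_test                                  <== remove the old VG from /etc/lvmtab
# mkdir /dev/vg_original
# mknod /dev/vg_original/group c 64 0x280000
# vgimport -s -v -N -m vg_test.mapfile /dev/vg_original
# vgchange -a y /dev/vg_original
# vgcfgbackup /dev/vg_original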

How to Create a Striped Logical Volume File System in LVM


1.    Determine the disk's associated device file. To show the disks attached to the system and their
device file names, enter the ioscan command with the -f, -N, and -n options.

#-> cat >ldev
6D:6F
6D:70
6D:71
6D:72

Execute xpinfo to collect the LDEV and array information.
# /usr/contrib/bin/xpinfo -il > xpinfo.out

#collect
#-> cat ldev|while read i
> do
> cat xpinfo.out |grep -i $i |grep -i 66657
> done
/dev/rdisk/disk68            d1  --- 9e  CL6R  6d:6f  OPEN-V           00066657
/dev/rdisk/disk72            d1  --- 9f  CL6R  6d:70  OPEN-V           00066657
/dev/rdisk/disk73            dc  --- a0  CL5R  6d:71  OPEN-V           00066657
/dev/rdisk/disk66            dc  --- a1  CL5R  6d:72  OPEN-V           00066657

#-> cat >rdisk
/dev/rdisk/disk68
/dev/rdisk/disk72
/dev/rdisk/disk73
/dev/rdisk/disk66

Make sure the disks are not already part of any LVM configuration:
#-> strings /etc/lvmtab|grep -iE "disk68|disk72|disk73|disk66"

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> strings /etc/lvmtab |grep -w $i
> done

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> pvdisplay $i
> done
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk68" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk68".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk72" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk72".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk73" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk73".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk66" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk66".

#-> cat rdisk |while read i
> do
> pvcreate $i
> done
Physical volume "/dev/rdisk/disk68" has been successfully created.
Physical volume "/dev/rdisk/disk72" has been successfully created.
Physical volume "/dev/rdisk/disk73" has been successfully created.
Physical volume "/dev/rdisk/disk66" has been successfully created.


2.     Create a directory for the volume group. For example:
# mkdir /dev/vgname
# mkdir /dev/vg_test

By convention, vgname is vgnn, where nn is a unique number across all volume groups.
However, you can choose any unique name up to 255 characters.   

3.     Create a device file named group in the volume group directory with the mknod command.
For example:

# mknod /dev/vgname/group c major 0xminor

#-> mknod  /dev/vg_test/group c 64  0x280000

The c following the device file name specifies that group is a character device file.
major is the major number for the group device file. For a Version 1.0 volume group, it is
64. For a Version 2.x volume group, it is 128.
  
4.    To create a Version 1.0 volume group, use the vgcreate command, specifying each physical
volume to be included. For example:

#-> cp -p /etc/lvmtab /etc/lvmtab.mmddyyyy

#-> vgcreate  /dev/vg_test /dev/disk/disk68 /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Increased the number of physical extents per physical volume to 12800.
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

You can set volume group attributes using the following options:
-V 1.0 Version 1.0 volume group (default)
-s pe_size Size of a physical extent in MB (default 4)
-e max_pe Maximum number of physical extents per physical volume (default 1016)
-l max_lv Maximum number of logical volumes (default 255)
-p max_pv Maximum number of physical volumes (default 255)
The size of a physical volume is limited by pe_size times max_pe. If you plan to assign a disk
larger than approximately 4 GB (1016 * 4 MB) to this volume group, use a larger value of pe_size
or max_pe.
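
As a quick sanity check of this limit for the values used in the next command (-s 128 -e 10000 -p 255), the maximum physical volume and volume group sizes work out as follows:

# echo "128 * 10000 / 1024" | bc          <== max PV size in GB (pe_size * max_pe)
1250
# echo "128 * 10000 * 255 / 1024" | bc    <== max VG size in GB (pe_size * max_pe * max_pv)
318750

The second figure matches the "VG Max Size 318750g" seen in the vgdisplay output of the rename example earlier in this post.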

#-> vgcreate -l 255 -p 255 -s 128 -e 10000 /dev/vg_test /dev/disk/disk68
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

Check the attribute of Volume group:
#-> vgdisplay  /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      4
Act PV                      4
Max PE per PV               12800
VGDA                        8
PE Size (Mbytes)            4
Total PE                    51196
Alloc PE                    0
Free PE                     51196
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800
   
5.    Add the remaining physical volumes to the volume group using the vgextend command and the block device files for the disks. For example:

#-> vgextend /dev/vg_test /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Volume group "/dev/vg_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

#-> vgdisplay -v /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      4
Act PV                      4
Max PE per PV               12800
VGDA                        8
PE Size (Mbytes)            4
Total PE                    51196
Alloc PE                    0
Free PE                     51196
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

   --- Physical volumes ---
   PV Name                     /dev/disk/disk68
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk72
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk73
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk66
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

6.    To create a logical volume, follow these steps:
6.1    Decide how much disk space the logical volume needs, calculate the LV size, and choose the LV name. For example, to create an LV striped across 4 disks with a 128 KB stripe size:

#->  lvcreate -i 4 -I  128 -n Lv_test /dev/vg_test
Logical volume "/dev/vg_test/Lv_test" has been successfully created with
character device "/dev/vg_test/rLv_test".
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

6.2    Allocate 1 GB to the logical volume:

#-> lvextend -L 1024M /dev/vg_test/Lv_test
Logical volume "/dev/vg_test/Lv_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

#-> lvdisplay /dev/vg_test/Lv_test
--- Logical volumes ---
LV Name                     /dev/vg_test/Lv_test
VG Name                     /dev/vg_test
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    striped
LV Size (Mbytes)            1024
Current LE                  8
Allocated PE                8
Stripes                     4
Stripe Size (Kbytes)        128
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default


7.    To create a file system on the logical volume, run newfs against the raw LV device:

#-> newfs -F vxfs -o largefiles -b 8192 /dev/vg_test/rLv_test
   version 7 layout
   1048576 sectors, 131072 blocks of size 8192, log size 2048 blocks
   largefiles supported
8.    Check the type of file system. For example:
#-> /usr/sbin/fstyp  /dev/vg_test/rLv_test
vxfs
#-> /usr/sbin/fstyp -v /dev/vg_test/rLv_test
vxfs
version: 7
f_bsize: 8192
f_frsize: 8192
f_blocks: 131072
    f_bfree: 128863
    f_bavail: 127857
    f_files: 32224
    f_ffree: 32192
    f_favail: 32192
    f_fsid: 1076363265
    f_basetype: vxfs
    f_namemax: 254
    f_magic: a501fcf5
    f_featurebits: 0
    f_flag: 16
    f_fsindex: 10
    f_size: 131072

9.    Create a mount point for mounting the logical volume:
# mkdir /test

10.    Mount /dev/vg_test/Lv_test on /test with the correct options as per performance guidelines.
#-> mount  /dev/vg_test/Lv_test /test

#-> bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg_test/Lv_test
                   1048576   17672 1022864    2% /test
11.    Check the mount option from mnttab.
#-> mount -v |grep -i test
/dev/vg_test/Lv_test on /test type vxfs ioerror=mwdisable,largefiles,delaylog,dev=40280001 on Tue Jul  8 08:57:31 2014
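
To make the mount persistent across reboots, an /etc/fstab entry along these lines can be added (the options mirror what mount -v reported above; adjust them per your performance guidelines):

/dev/vg_test/Lv_test  /test  vxfs  delaylog,largefiles  0  2

After adding the entry, "mount /test" (or a reboot) should mount the file system with those options.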

12.    Check the load balancing for the disk

For 11iv3 systems make sure the load balance policy is also set on the newly added disk:
Verify LB policy: scsimgr -p get_attr all_lun -a device_file -a load_bal_policy
Set LB policy on all disks: scsimgr save_attr -N "/escsi/esdisk" -a load_bal_policy=<LB policy>
Set LB policy on one disk :  scsimgr save_attr -D /dev/rdisk/disk530 -a load_bal_policy=<LB policy>

How to Create a PVG-Strict/Distributed Logical Volume File System in LVM


1.    Determine the disk's associated device file. To show the disks attached to the system and their
device file names, enter the ioscan command with the -f, -N, and -n options.
#-> cat >ldev
6D:6F
6D:70
6D:71
6D:72

Execute xpinfo to collect the LDEV and array information.
# /usr/contrib/bin/xpinfo -il > xpinfo.out

#collect
#-> cat ldev|while read i
> do
> cat xpinfo.out |grep -i $i |grep -i 66657
> done
/dev/rdisk/disk68            d1  --- 9e  CL6R  6d:6f  OPEN-V           00066657
/dev/rdisk/disk72            d1  --- 9f  CL6R  6d:70  OPEN-V           00066657
/dev/rdisk/disk73            dc  --- a0  CL5R  6d:71  OPEN-V           00066657
/dev/rdisk/disk66            dc  --- a1  CL5R  6d:72  OPEN-V           00066657

#-> cat >rdisk
/dev/rdisk/disk68
/dev/rdisk/disk72
/dev/rdisk/disk73
/dev/rdisk/disk66

Make sure the disks are not already part of any LVM configuration:
#-> strings /etc/lvmtab|grep -iE "disk68|disk72|disk73|disk66"

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> strings /etc/lvmtab |grep -w $i
> done

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> pvdisplay $i
> done
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk68" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk68".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk72" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk72".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk73" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk73".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk66" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk66".

#-> cat rdisk |while read i
> do
> pvcreate $i
> done
Physical volume "/dev/rdisk/disk68" has been successfully created.
Physical volume "/dev/rdisk/disk72" has been successfully created.
Physical volume "/dev/rdisk/disk73" has been successfully created.
Physical volume "/dev/rdisk/disk66" has been successfully created.


2.     Create a directory for the volume group. For example:
# mkdir /dev/vgname
# mkdir /dev/vg_test

By convention, vgname is vgnn, where nn is a unique number across all volume groups.
However, you can choose any unique name up to 255 characters.   

3.     Create a device file named group in the volume group directory with the mknod command.
For example:

# mknod /dev/vgname/group c major 0xminor

#-> mknod  /dev/vg_test/group c 64  0x280000

The c following the device file name specifies that group is a character device file.
major is the major number for the group device file. For a Version 1.0 volume group, it is
64. For a Version 2.x volume group, it is 128.
  
4.    To create a Version 1.0 volume group, use the vgcreate command, specifying each physical
volume to be included. For example:

#-> cp -p /etc/lvmpvg /etc/lvmpvg.mmddyyyy
#-> cp -p /etc/lvmtab /etc/lvmtab.mmddyyyy

#-> vgcreate  /dev/vg_test /dev/disk/disk68 /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Increased the number of physical extents per physical volume to 12800.
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

You can set volume group attributes using the following options:
-V 1.0 Version 1.0 volume group (default)
-s pe_size Size of a physical extent in MB (default 4)
-e max_pe Maximum number of physical extents per physical volume (default 1016)
-l max_lv Maximum number of logical volumes (default 255)
-p max_pv Maximum number of physical volumes (default 255)
The size of a physical volume is limited by pe_size times max_pe. If you plan to assign a disk
larger than approximately 4 GB (1016 * 4 MB) to this volume group, use a larger value of pe_size
or max_pe.

#-> vgcreate -l 255 -p 255 -s 128 -e 10000 /dev/vg_test /dev/disk/disk68
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

Check the attribute of Volume group:
#-> vgdisplay  /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      4
Act PV                      4
Max PE per PV               12800
VGDA                        8
PE Size (Mbytes)            4
Total PE                    51196
Alloc PE                    0
Free PE                     51196
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800
   
5.    Add the remaining physical volumes to the volume group using the vgextend command and the block device files for the disks. For example:

#-> vgextend /dev/vg_test /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Volume group "/dev/vg_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

To create a PVG (physical volume group) containing these disks, use vgextend -g:

#vgextend -g PVG001 /dev/vg_test /dev/disk/disk68 /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
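
The PVG membership created by vgextend -g is recorded in /etc/lvmpvg. After the command above it should look roughly like this (layout shown for illustration; verify on your system):

VG      /dev/vg_test
PVG     PVG001
/dev/disk/disk68
/dev/disk/disk72
/dev/disk/disk73
/dev/disk/disk66
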
#-> vgdisplay -v /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      255
Cur PV                      4
Act PV                      4
Max PE per PV               10000
VGDA                        8
PE Size (Mbytes)            128
Total PE                    1596
Alloc PE                    0
Free PE                     1596
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 318750g
VG Max Extents              2550000


   --- Physical volumes ---
   PV Name                     /dev/disk/disk68
   PV Status                   available
   Total PE                    399
   Free PE                     399
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk72
   PV Status                   available
   Total PE                    399
   Free PE                     399
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk73
   PV Status                   available
   Total PE                    399
   Free PE                     399
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk66
   PV Status                   available
   Total PE                    399
   Free PE                     399
   Autoswitch                  On
   Proactive Polling           On


   --- Physical volume groups ---
   PVG Name                    PVG001
   PV Name                     /dev/disk/disk68
   PV Name                     /dev/disk/disk72
   PV Name                     /dev/disk/disk73
   PV Name                     /dev/disk/disk66

6.    To create a logical volume, follow these steps:
6.1    Decide how much disk space the logical volume needs, calculate the LV size, and choose the LV name. For example, to create an LV with distributed, PVG-strict allocation:

#-> lvcreate -D y -s g -n Lv_test /dev/vg_test
Logical volume "/dev/vg_test/Lv_test" has been successfully created with
character device "/dev/vg_test/rLv_test".
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf


6.2    Allocate 1 GB to the logical volume, distributing the extents across the disks in PVG001:

#-> lvextend -L 1024M /dev/vg_test/Lv_test PVG001
Logical volume "/dev/vg_test/Lv_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf


#-> lvdisplay /dev/vg_test/Lv_test
--- Logical volumes ---
LV Name                     /dev/vg_test/Lv_test
VG Name                     /dev/vg_test
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            1024
Current LE                  8
Allocated PE                8
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  PVG-strict/distributed
IO Timeout (Seconds)        default



7.    To create a file system on the logical volume, run newfs against the raw LV device:

#-> newfs -F vxfs -o largefiles -b 8192 /dev/vg_test/rLv_test
   version 7 layout
   1048576 sectors, 131072 blocks of size 8192, log size 2048 blocks
   largefiles supported
8.    Check the type of file system. For example:
#-> /usr/sbin/fstyp  /dev/vg_test/rLv_test
vxfs
#-> /usr/sbin/fstyp -v /dev/vg_test/rLv_test
vxfs
version: 7
f_bsize: 8192
f_frsize: 8192
f_blocks: 131072
    f_bfree: 128863
    f_bavail: 127857
    f_files: 32224
    f_ffree: 32192
    f_favail: 32192
    f_fsid: 1076363265
    f_basetype: vxfs
    f_namemax: 254
    f_magic: a501fcf5
    f_featurebits: 0
    f_flag: 16
    f_fsindex: 10
    f_size: 131072

9.    Create a mount point for mounting the logical volume:
# mkdir /test

10.    Mount /dev/vg_test/Lv_test on /test with the correct options as per performance guidelines.
#-> mount  /dev/vg_test/Lv_test /test

#-> bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg_test/Lv_test
                   1048576   17672 1022864    2% /test
11.    Check the mount option from mnttab.
#-> mount -v |grep -i test
/dev/vg_test/Lv_test on /test type vxfs ioerror=mwdisable,largefiles,delaylog,dev=40280001 on Tue Jul  8 08:57:31 2014

12.    Check the load balancing for the disk

For 11iv3 systems make sure the load balance policy is also set on the newly added disk:
Verify LB policy: scsimgr -p get_attr all_lun -a device_file -a load_bal_policy
Set LB policy on all disks: scsimgr save_attr -N "/escsi/esdisk" -a load_bal_policy=<LB policy>
Set LB policy on one disk :  scsimgr save_attr -D /dev/rdisk/disk530 -a load_bal_policy=<LB policy>

How to Create a Strict-Allocation Logical Volume File System in LVM


1.    Determine the disk's associated device file. To show the disks attached to the system and their
device file names, enter the ioscan command with the -f, -N, and -n options.


#-> cat >ldev
6D:6F
6D:70
6D:71
6D:72

Execute xpinfo to collect the LDEV and array information.
# /usr/contrib/bin/xpinfo -il > xpinfo.out

#collect
#-> cat ldev|while read i
> do
> cat xpinfo.out |grep -i $i |grep -i 66657
> done
/dev/rdisk/disk68            d1  --- 9e  CL6R  6d:6f  OPEN-V           00066657
/dev/rdisk/disk72            d1  --- 9f  CL6R  6d:70  OPEN-V           00066657
/dev/rdisk/disk73            dc  --- a0  CL5R  6d:71  OPEN-V           00066657
/dev/rdisk/disk66            dc  --- a1  CL5R  6d:72  OPEN-V           00066657

#-> cat >rdisk
/dev/rdisk/disk68
/dev/rdisk/disk72
/dev/rdisk/disk73
/dev/rdisk/disk66

Make sure the disks are not already part of any LVM configuration:
#-> strings /etc/lvmtab|grep -iE "disk68|disk72|disk73|disk66"

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> strings /etc/lvmtab |grep -w $i
> done

#-> cat rdisk |sed  "s/rdisk/disk/" |while read i
> do
> pvdisplay $i
> done
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk68" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk68".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk72" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk72".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk73" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk73".
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/disk/disk66" belongs.
pvdisplay: Cannot display physical volume "/dev/disk/disk66".

#-> cat rdisk |while read i
> do
> pvcreate $i
> done
Physical volume "/dev/rdisk/disk68" has been successfully created.
Physical volume "/dev/rdisk/disk72" has been successfully created.
Physical volume "/dev/rdisk/disk73" has been successfully created.
Physical volume "/dev/rdisk/disk66" has been successfully created.


2.     Create a directory for the volume group. For example:
# mkdir /dev/vgname
# mkdir /dev/vg_test

By convention, vgname is vgnn, where nn is a unique number across all volume groups.
However, you can choose any unique name up to 255 characters.   

3.     Create a device file named group in the volume group directory with the mknod command.
For example:

# mknod /dev/vgname/group c major 0xminor

#-> mknod  /dev/vg_test/group c 64  0x280000

The c following the device file name specifies that group is a character device file.
major is the major number for the group device file. For a Version 1.0 volume group, it is
64. For a Version 2.x volume group, it is 128.
  
4.    To create a Version 1.0 volume group, use the vgcreate command, specifying each physical
volume to be included. For example:

#-> vgcreate  /dev/vg_test /dev/disk/disk68 /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Increased the number of physical extents per physical volume to 12800.
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

You can set volume group attributes using the following options:
-V 1.0 Version 1.0 volume group (default)
-s pe_size Size of a physical extent in MB (default 4)
-e max_pe Maximum number of physical extents per physical volume (default 1016)
-l max_lv Maximum number of logical volumes (default 255)
-p max_pv Maximum number of physical volumes (default 255)
The size of a physical volume is limited by pe_size times max_pe. If you plan to assign a disk
larger than approximately 4 GB (1016 * 4 MB) to this volume group, use a larger value of pe_size
or max_pe.

#-> vgcreate -l 255 -p 255 -s 128 -e 10000 /dev/vg_test /dev/disk/disk68
Volume group "/dev/vg_test" has been successfully created.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

Check the attribute of Volume group:
#-> vgdisplay  /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      4
Act PV                      4
Max PE per PV               12800
VGDA                        8
PE Size (Mbytes)            4
Total PE                    51196
Alloc PE                    0
Free PE                     51196
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800
   
5.    Add the remaining physical volumes to the volume group using the vgextend command and the block device files for the disks. For example:

#-> vgextend /dev/vg_test /dev/disk/disk72 /dev/disk/disk73 /dev/disk/disk66
Volume group "/dev/vg_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

#-> vgdisplay -v /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      4
Act PV                      4
Max PE per PV               12800
VGDA                        8
PE Size (Mbytes)            4
Total PE                    51196
Alloc PE                    0
Free PE                     51196
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 800g
VG Max Extents              204800

   --- Physical volumes ---
   PV Name                     /dev/disk/disk68
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk72
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk73
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

   PV Name                     /dev/disk/disk66
   PV Status                   available
   Total PE                    12799
   Free PE                     12799
   Autoswitch                  On
   Proactive Polling           On

6.    To create a logical volume, follow these steps:
6.1    Decide how much disk space the logical volume needs, calculate the LV size, and choose the LV name. For example, with no striping or PVG options (so the LV uses the default allocation policy):
#-> lvcreate -n Lv_test /dev/vg_test
Logical volume "/dev/vg_test/Lv_test" has been successfully created with
character device "/dev/vg_test/rLv_test".
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf

6.2    Allocate 1 GB to the logical volume:

#-> lvextend -L 1024M /dev/vg_test/Lv_test
Logical volume "/dev/vg_test/Lv_test" has been successfully extended.
Volume Group configuration for /dev/vg_test has been saved in /etc/lvmconf/vg_test.conf
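
Since no striping or PVG options were passed to lvcreate in this example, the LV is created with the default layout. A quick way to confirm the policy (a sketch; the expected values below are based on the lvdisplay outputs shown in the earlier sections):

#-> lvdisplay /dev/vg_test/Lv_test | grep -iE "schedule|allocation"
Schedule                    parallel
Allocation                  strict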

7.    To create a file system on the logical volume, run newfs against the raw LV device:

#-> newfs -F vxfs -o largefiles -b 8192 /dev/vg_test/rLv_test
   version 7 layout
   1048576 sectors, 131072 blocks of size 8192, log size 2048 blocks
   largefiles supported
8.    Check the type of file system. For example:
#-> /usr/sbin/fstyp  /dev/vg_test/rLv_test
vxfs
#-> /usr/sbin/fstyp -v /dev/vg_test/rLv_test
vxfs
version: 7
f_bsize: 8192
f_frsize: 8192
f_blocks: 131072
    f_bfree: 128863
    f_bavail: 127857
    f_files: 32224
    f_ffree: 32192
    f_favail: 32192
    f_fsid: 1076363265
    f_basetype: vxfs
    f_namemax: 254
    f_magic: a501fcf5
    f_featurebits: 0
    f_flag: 16
    f_fsindex: 10
    f_size: 131072

9.    Create a mount point for mounting the logical volume:
# mkdir /test

10.    Mount /dev/vg_test/Lv_test on /test with the correct options as per performance guidelines.
#-> mount  /dev/vg_test/Lv_test /test

#-> bdf /test
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg_test/Lv_test
                   1048576   17672 1022864    2% /test
11.    Check the mount option from mnttab.
#-> mount -v |grep -i test
/dev/vg_test/Lv_test on /test type vxfs ioerror=mwdisable,largefiles,delaylog,dev=40280001 on Tue Jul  8 08:57:31 2014

12.    Check the load balancing for the disk

For 11iv3 systems make sure the load balance policy is also set on the newly added disk:
Verify LB policy: scsimgr -p get_attr all_lun -a device_file -a load_bal_policy
    Set LB policy on all disks: scsimgr save_attr -N "/escsi/esdisk" -a load_bal_policy=<LB policy>
Set LB policy on one disk:  scsimgr save_attr -D /dev/rdisk/disk530 -a load_bal_policy=<LB policy>
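
As a concrete example, to set round-robin load balancing on one of the disks used above and verify it (round_robin is one of the policies available on 11i v3; substitute the policy recommended for your disk array):

# scsimgr save_attr -D /dev/rdisk/disk68 -a load_bal_policy=round_robin
# scsimgr get_attr -D /dev/rdisk/disk68 -a load_bal_policy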

Sunday, January 11, 2015

MC/ServiceGuard: How to Disable IP Monitoring on a Running Cluster

The issue:
During a switch port maintenance activity, when lan1 failed it automatically failed over to lan16, but the reverse was not happening: the subnet did not fail back from lan16 to lan1 automatically. The expectation was that lan1 should fail over to lan16 and vice versa.

Looking into the cmgetconf output, I could see that lan1's standby interface is lan16 (lan1 is the primary and lan16 the standby).

NODE_NAME               tdsxdbp03
  NETWORK_INTERFACE     lan1
    HEARTBEAT_IP        172.98.212.10
  NETWORK_INTERFACE     lan8
    HEARTBEAT_IP        192.168.112.140
  NETWORK_INTERFACE     lan0
    HEARTBEAT_IP        172.98.213.10
  NETWORK_INTERFACE     lan16
#  CLUSTER_LOCK_LUN
  FIRST_CLUSTER_LOCK_PV /dev/disk/disk5
# Primary Network Interfaces on Bridged Net 1: lan1.
#   Possible standby Network Interfaces on Bridged Net 1: lan16.
# Primary Network Interfaces on Bridged Net 2: lan8.
#   Warning: There are no standby network interfaces on bridged net 2.
# Primary Network Interfaces on Bridged Net 3: lan0.
#   Warning: There are no standby network interfaces on bridged net 3.

Also the network failback is enabled in the configuration.

# NETWORK_AUTO_FAILBACK
# When set to YES a recovery of the primary LAN interface will cause failback
# from the standby LAN interface to the primary.
# When set to NO a recovery of the primary LAN interface will do nothing and
# the standby LAN interface will continue to be used until cmmodnet -e lanX
# is issued for the primary LAN interface.

NETWORK_AUTO_FAILBACK           YES

I could see the following messages in syslog.log during the switch migration activity. It is clearly visible that lan1 successfully moves to lan16 on failover; however, both LANs are then reported as failed at the IP layer.

Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: 172.98.212.10 failed.
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: lan1 is down at the IP layer.
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: lan1 failed.
Jan 10 05:28:00 tdsxdbp03 su: + tty?? root-sentrigo
Jan 10 05:28:20 tdsxdbp03  above message repeats 3 times
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: lan1 switching to lan16              <== switching to the standby network
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 switching from lan1 to lan16
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 switched from lan1 to lan16
Jan 10 05:28:20 tdsxdbp03 cmnetd[27697]: lan1 switched to lan16
Jan 10 05:28:28 tdsxdbp03 cmnetd[27697]: 172.98.212.10 failed.
Jan 10 05:28:28 tdsxdbp03 cmnetd[27697]: lan16 is down at the IP layer.
Jan 10 05:28:28 tdsxdbp03 cmnetd[27697]: lan16 failed.
Jan 10 05:28:28 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 down
Jan 10 05:28:40 tdsxdbp03 cimserver[25010]: PGS10405: Failed to deliver an indication: PGS08001: CIM HTTP or HTTPS connector cannot connect to 10.36.218.66:50004. Connection failed.
Jan 10 05:28:40 tdsxdbp03 cimserver[25010]: PGS10405: Failed to deliver an indication: PGS08001: CIM HTTP or HTTPS connector cannot connect to 10.36.218.67:50004. Connection failed.
Jan 10 05:28:40 tdsxdbp03 cimserver[25010]: PGS10405: Failed to deliver an indication: PGS08001: CIM HTTP or HTTPS connector cannot connect to 10.36.152.168:50004. Connection failed.
Jan 10 05:28:50 tdsxdbp03 cmnetd[27697]: 172.98.212.10 recovered.
Jan 10 05:28:50 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 up
Jan 10 05:28:40 tdsxdbp03 cimserver[25010]: PGS10405: Failed to deliver an indication: PGS08001: CIM HTTP or HTTPS connector cannot connect to 10.36.218.66:50004. Connection failed.
Jan 10 05:28:50 tdsxdbp03 cmnetd[27697]: lan16 is up at the IP layer.
Jan 10 05:28:40 tdsxdbp03 cimserver[25010]: PGS10405: Failed to deliver an indication: PGS08001: CIM HTTP or HTTPS connector cannot connect to 10.36.152.168:50004. Connection failed.
Jan 10 05:28:50 tdsxdbp03 cmnetd[27697]: lan16 recovered.
Jan 10 05:29:00 tdsxdbp03 su: + tty?? root-conclusr

Jan 10 06:20:52 tdsxdbp03 cmnetd[27697]: 172.98.212.10 failed.
Jan 10 06:20:21 tdsxdbp03 su: + tty?? root-conclusr
Jan 10 06:20:52 tdsxdbp03 cmnetd[27697]: lan16 is down at the IP layer.
Jan 10 06:20:52 tdsxdbp03 cmnetd[27697]: lan16 failed.
Jan 10 06:20:52 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 down
Jan 10 06:21:00 tdsxdbp03 su: + tty?? root-sentrigo
Jan 10 06:21:02 tdsxdbp03 cmnetd[27697]: 172.98.212.10 recovered.
Jan 10 06:21:02 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 up
Jan 10 06:21:02 tdsxdbp03 cmnetd[27697]: lan16 is up at the IP layer.
Jan 10 06:21:02 tdsxdbp03 cmnetd[27697]: lan16 recovered.

Here are the logs from when the customer enabled the LAN card manually, which was successful:
Jan 10 12:27:30 tdsxdbp03 syslog: cmmodnet -e lan1
Jan 10 12:27:30 tdsxdbp03 cmnetd[27697]: Request to enable interface lan1
Jan 10 12:27:14 tdsxdbp03 su: + tty?? root-conclusr
Jan 10 12:27:30 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 switching from lan16 to lan1
Jan 10 12:27:30 tdsxdbp03 cmnetd[27697]: Subnet 172.98.212.0 switched from lan16 to lan1
Jan 10 12:27:30 tdsxdbp03 cmnetd[27697]: lan16 switched to lan1

This issue is due to IP MONITOR being enabled for subnet 172.98.212.0; that is why the 'failed at the IP layer' messages appear in syslog.

SUBNET 172.98.212.0
  IP_MONITOR ON
  POLLING_TARGET 172.98.212.1

SUBNET 192.168.112.0
  IP_MONITOR OFF

SUBNET 172.98.213.0
  IP_MONITOR OFF

Let me explain why the failback from lan16 to lan1 did not work. When IP Monitor is configured, we can choose the Target Polling method or the Peer Polling method. In either method, using Internet Control Message Protocol (ICMP) and ICMPv6, IP Monitor sends polling messages (ECHO requests) to target IP addresses and verifies that responses are received. When IP Monitor detects a failure, it marks the network interface down at the IP level.

If a PRI (primary) and a STBY (standby) card are configured, and IP Monitor is polling from the IP address configured on the PRI (to start with), then when such a failure takes place and the LAN interface is marked down at the IP level, the IP address is moved to the STBY card. By the nature of the IP Monitor design, the pings (the ECHO requests) are now sent from the IP address on the STBY card.

If the cause of the failure is then fixed, there is no way for Serviceguard to fail the IP back from the STBY to the PRI. This is because there is no IP address on the PRI to send the pings and verify the replies, to know that all is OK. So even when the problem is resolved, the IP address remains on the STBY card.

Now, if the failure continues, then even after the IP address moves to the STBY it will not be able to verify the pings, so the STBY card will also be marked down at the IP level and the subnet will go down. However, the IP address will NOT be removed from the STBY (there is nowhere else to place it, and since the subnet is down anyway, it stays there to check for possible replies). So if the cause is then fixed and replies to the ECHO requests are verified, the subnet comes back up on the STBY card, as the IP is configured there.

So the solution for this issue is to disable IP Monitor for subnet 172.98.212.0, so that Serviceguard will not mark the primary interface as down at the IP layer.

 Issue Setup



MC/Service Guard Version : A.11.19.00
IP Monitoring was configured on the existing cluster.
SUBNET 172.17.1.0
IP_MONITOR ON
POLLING_TARGET 172.17.1.3
How do you disable IP Monitoring, and can it be done online while the cluster is running?

Solution


IP Monitoring can be disabled online while the cluster is running. The relevant cluster configuration parameters behave as follows:

SUBNET
Can be changed while the cluster is running; must be removed, with its accompanying IP_MONITOR and POLLING_TARGET entries, if the subnet in question is removed from the cluster configuration.

IP_MONITOR
Can be changed while the cluster is running; must be removed if the preceding SUBNET entry is removed.

POLLING_TARGET
Can be changed while the cluster is running; must be removed if the preceding SUBNET entry is removed.
To temporarily disable IP subnet monitoring in the cluster configuration, modify the cluster ASCII file as shown below and check/apply the configuration:
SUBNET 172.17.1.0
IP_MONITOR OFF
# POLLING_TARGET 172.17.1.3
NOTE: The POLLING_TARGET 172.17.1.3 entry must be removed or commented out when setting IP_MONITOR to OFF.
To permanently remove IP subnet monitoring from the cluster configuration, remove the following entries from the cluster ASCII file and check/apply the configuration:
#SUBNET 172.17.1.0
# IP_MONITOR ON
# POLLING_TARGET 172.17.1.3
Steps:
  1. Get the running cluster configuration file using cmgetconf file:
       #cmgetconf /etc/cmcluster/<clustername_date>.ascii
  2. Modify the /etc/cmcluster/<clustername_date>.ascii file depending on the requirement as mentioned above.
  3. Run cmcheckconf to check any errors:
      #cmcheckconf -v -C /etc/cmcluster/<clustername_date>.ascii
  4. If no errors on cmcheckconf then run cmapplyconf:
      #cmapplyconf -v -C /etc/cmcluster/<clustername_date>.ascii
NOTE: When disabling or deleting the IP subnet monitoring, you will get the messages below from cmcheckconf and cmapplyconf:
Setting IP_MONITOR to OFF for SUBNET 172.17.1.0 while cluster is running.
Removing POLLING_TARGET 172.17.1.3 from SUBNET 172.17.1.0 while cluster is running.
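
Once cmapplyconf completes, the change can be verified against the running cluster configuration; for example (the /tmp file name below is just an illustration):

# cmgetconf /tmp/cluster_verify.ascii
# grep -E "SUBNET|IP_MONITOR|POLLING_TARGET" /tmp/cluster_verify.ascii

The subnet should now show IP_MONITOR OFF (or no longer appear at all if its monitoring entries were removed).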
-----------------