UnixPedia : HPUX / LINUX / SOLARIS: July 2014

Friday, July 25, 2014

HPUX : How to resolve fsadm not working when a file system is full due to fragmentation



How to resolve fsadm not working when a file system is full due to fragmentation
Overview
Resolve fsadm failing to resize a file system that is too fragmented.
Issue:

#-> fsadm -F vxfs -b 16192m /export/sapmnt/QSJ
UX:vxfs fsadm: INFO: V-3-23585: /dev/vgdbQSJ/rlvmntQSJ is currently 8192000 sectors - size will be increased
vxfs: msgcnt 48348 mesg 001: V-2-1: vx_nospace - /dev/vgdbQSJ/lvmntQSJ file system full (2048 block extent)
UX:vxfs fsadm: ERROR: V-3-20340: attempt to resize /dev/vgdbQSJ/rlvmntQSJ failed with errno 28
UX:vxfs fsadm: ERROR: V-3-23643: Retry the operation after freeing up some space
Meanwhile the file system /export/sapmnt/QSJ is only 86% utilized:
#-> bdf /export/sapmnt/QSJ
Filesystem    kbytes    used   avail %used Mounted on
/dev/vgdbQSJ/lvmntQSJ
                   8192000 6972865 1142942   86% /export/sapmnt/QSJ

We can see that fsadm failed with errno 28, which is ENOSPC in /usr/include/sys/errno.h:

#define ENOSPC          28      /* No space left on device      */
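Any errno number can be mapped back to its symbol by searching that header. A small sketch (the helper name is made up; the header path is HP-UX's, so adjust it per OS):

```shell
# errno_lookup NUM [FILE] - print the #define line for errno NUM.
# FILE defaults to HP-UX's /usr/include/sys/errno.h.
errno_lookup() {
  awk -v n="$1" '$1 == "#define" && $3 == n' "${2:-/usr/include/sys/errno.h}"
}
```

For example, `errno_lookup 28` prints the ENOSPC line shown above.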

Why does fsadm return an ENOSPC error even though there is enough free space on the filesystem?
Procedures

Solution
NOTE: If a file system is full, busy or too fragmented, the resize operation may fail.
Follow the steps below to extend the filesystem online using fsadm.
The filesystem is heavily fragmented: as the fragmentation report in the article below shows, there is no free extent of 8 KB or larger.

The article below walks through resolving the issue.
HP-UX 11.x - VxFS: Extending a Filesystem Online Using fsadm Command Fails with Error Number 28
Issue
Extending a filesystem from 5504MB to 10GB online using the fsadm command:
The filesystem is not 100% full before extending:


# bdf /fs_test 
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/fstest   5636096 4122013 1420435   74% /fs_test



Logical volume size is already extended to 10GB:


# lvdisplay /dev/vg00/fstest
--- Logical volumes ---
LV Name                     /dev/vg00/fstest
VG Name                     /dev/vg00
LV Permission               read/write   
LV Status                   available/syncd           
LV Size (Mbytes)            10240 



Extending the filesystem online using the fsadm command fails with the error below:


# fsadm -F vxfs -b 10240M /fs_test
vxfs fsadm: /dev/vg00/rfstest is currently 5636096 sectors - size will be increased
vxfs fsadm: attempt to resize /dev/vg00/rfstest failed with errno 28
vxfs fsadm:  Retry the operation after freeing up some space

Error number 28 is ENOSPC, defined in /usr/include/sys/errno.h:
#define ENOSPC          28      /* No space left on device      */
Why does fsadm return an ENOSPC error even though there is enough free space on the filesystem?
Solution
NOTE: If a file system is full, busy or too fragmented, the resize operation may fail.
Follow the steps below to extend the filesystem online using fsadm.
Checking shows the filesystem is heavily fragmented: as the report below shows, there is no free extent of 8 blocks or larger.
# fsadm -F vxfs -D -E /fs_test
 Directory Fragmentation Report
             Dirs        Total      Immed    Immeds   Dirs to   Blocks to
             Searched    Blocks     Dirs     to Add   Reduce    Reduce
  total          2887      1996      2131         0         3          99

  Extent Fragmentation Report
        Total    Average      Average     Total
        Files    File Blks    # Extents   Free Blks
        47977          85           5     1513016
    blocks used for indirects: 26820
    % Free blocks in extents smaller than 64 blks: 100.00
    % Free blocks in extents smaller than  8 blks: 100.00
    % blks allocated to extents 64 blks or larger: 90.52
    Free Extents By Size
           1:     215670            2:     221005            4:     213834
           8:          0           16:          0           32:          0
          64:          0          128:          0          256:          0
         512:          0         1024:          0         2048:          0
        4096:          0         8192:          0        16384:          0
       32768:          0        65536:          0       131072:          0
      262144:          0       524288:          0      1048576:          0
     2097152:          0      4194304:          0      8388608:          0
    16777216:          0     33554432:          0     67108864:          0
   134217728:          0    268435456:          0    536870912:          0
  1073741824:          0   2147483648:          0
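Reading the report above: every free block sits in an extent of 1, 2 or 4 blocks, so there is no contiguous run large enough for the resize to allocate from. As a sketch (hypothetical helper, assuming the report format shown), the largest free-extent size with a nonzero count can be pulled out with awk:

```shell
# largest_free_extent - read a fragmentation report on stdin and print the
# largest size (in blocks) under "Free Extents By Size" with a nonzero count.
largest_free_extent() {
  awk '
    /Free Extents By Size/ { in_tab = 1; next }
    in_tab && NF >= 2 {
      # lines are "size: count" pairs, e.g. "1: 215670  2: 221005  4: 213834"
      for (i = 1; i < NF; i += 2) {
        size = $i; sub(":", "", size)
        if ($(i + 1) + 0 > 0 && size + 0 > max) max = size + 0
      }
    }
    END { print max + 0 }
  '
}
```

Piping the report above through it prints 4: no free extent is larger than 4 blocks.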
Ran the defragmentation on the filesystem.
# fsadm -F vxfs -d -e /fs_test
NOTE: Defragmentation can take a long time depending on the degree of fragmentation, disk speed and the number of inodes in the filesystem. In this case it took about 45 minutes. More information is available in the fsadm_vxfs(1M) man page.
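If a single defragmentation run is too long for one maintenance window, fsadm_vxfs also accepts -t (maximum seconds to run) and -p (maximum passes) on most releases, so the work can be split across several windows; confirm the options in fsadm_vxfs(1M) on your release first. A sketch:

```shell
# Bound the defragmentation: at most 5 passes, stop after one hour
# (-p and -t per fsadm_vxfs(1M); verify on your OS release)
fsadm -F vxfs -d -e -p 5 -t 3600 /fs_test
```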
Fragmentation report after defragmentation of filesystem:
# fsadm -F vxfs -D -E /fs_test

  Directory Fragmentation Report
             Dirs        Total      Immed    Immeds   Dirs to   Blocks to
             Searched    Blocks     Dirs     to Add   Reduce    Reduce
  total          2887      1879      2131         0         2           5

  Extent Fragmentation Report
        Total    Average      Average     Total
        Files    File Blks    # Extents   Free Blks
        47977          84           1     1539889
    blocks used for indirects: 48
    % Free blocks in extents smaller than 64 blks: 4.94
    % Free blocks in extents smaller than  8 blks: 0.52
    % blks allocated to extents 64 blks or larger: 91.40
    Free Extents By Size
           1:       1029            2:       1144            4:       1165
           8:       1319           16:       1253           32:       1170
          64:       1023          128:        879          256:        611
         512:        388         1024:        171         2048:         89
        4096:         36         8192:         12        16384:          4
       32768:          4        65536:          0       131072:          1
      262144:          0       524288:          0      1048576:          0
     2097152:          0      4194304:          0      8388608:          0
    16777216:          0     33554432:          0     67108864:          0
   134217728:          0    268435456:          0    536870912:          0
  1073741824:          0   2147483648:          0
Now it looks much better: the free extents are in larger blocks.
Extending the filesystem with the fsadm command now succeeds:
# fsadm -F vxfs -b 10485760 /fs_test
UX:vxfs fsadm: INFO: V-3-25942: /dev/vg00/rfstest size increased from 5636096 sectors to 10485760 sectors

# bdf /fs_test
Filesystem              kbytes    used   avail %used Mounted on
/dev/vg00/fstest      10485760 4097395 5990074   41% /fs_test
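Note the units in the successful command: -b 10485760 is a sector count, and since bdf's kbytes column matches the sector figures here, 1 sector = 1 KB, so 10 GB = 10 x 1024 x 1024 = 10485760 sectors. A throwaway helper for the conversion (assuming that 1 KB sector size):

```shell
# gb_to_sectors GB - convert a size in GB to 1 KB sectors for fsadm -b
# (assumes 1 sector = 1 KB, as the bdf output above indicates)
gb_to_sectors() {
  echo $(( $1 * 1024 * 1024 ))
}
```

`gb_to_sectors 10` prints 10485760, the value passed to fsadm above.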

Keywords.
fsadm

Wednesday, July 16, 2014

HPUX : How to rename a Volume group in LVM

How to rename a volume group in LVM

Overview
 How to rename a Volume group in LVM
Procedures

Create a volume group vg_test and rename it to vg_original.
#-> vgdisplay -v /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   1       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000      


   --- Physical volumes ---
   PV Name                     /dev/disk/disk68
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk72
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk73
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk66
   PV Status                   available               
   Total PE                    399    
   Free PE                     399    
   Autoswitch                  On       
   Proactive Polling           On              


   --- Physical volume groups ---
   PVG Name                    PVG001
   PV Name                     /dev/disk/disk68
   PV Name                     /dev/disk/disk72
   PV Name                     /dev/disk/disk73
   PV Name                     /dev/disk/disk66



#-> vgdisplay /dev/vg_test
--- Volume groups ---
VG Name                     /dev/vg_test
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   1       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000      


#-> #Rename the vg vg_test to vg_original

#-> vgexport -s -v -m vg_test.mapfile vg_test
Beginning the export process on Volume Group "vg_test".
vgexport: Volume group "vg_test" is still active.
vgexport: Couldn't export volume group "vg_test".

#-> ll vg_test.mapfile
vg_test.mapfile not found

#-> vgexport -p -s -v -m vg_test.mapfile vg_test
Beginning the export process on Volume Group "vg_test".
vgexport: Volume group "vg_test" is still active.
/dev/disk/disk68
/dev/disk/disk72
/dev/disk/disk73
/dev/disk/disk66
vgexport: Preview of vgexport on volume group "vg_test" succeeded.

Deactivate the volume group by entering the following command:

#-> vgchange -a n vg_test
Volume group "vg_test" has been successfully changed.


If you want to retain the same minor number for the volume group, examine the volume group's
group file as follows:

#-> ll /dev/vg_test/group
crw-r--r--   1 root       sys         64 0x280000 Jul  8 08:20 /dev/vg_test/group
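The major/minor pair in this listing (64, 0x280000) is what the mknod command later in the procedure must reuse so the volume group keeps its minor number. A hypothetical helper that pulls the two fields out of an ll line:

```shell
# group_mknod_args "LL_LINE" - print the major and minor numbers from an
# ll listing of an LVM group file (fields 5 and 6 of the line)
group_mknod_args() {
  set -- $1   # intentionally unquoted: split the line into fields
  echo "$5 $6"
}
```

Its output ("64 0x280000" for the listing above) supplies the last two arguments of the mknod command used below.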


Remove the volume group device files and its entry from the LVM configuration files by entering
the following command:
#-> vgexport vg_test
Physical volume "/dev/disk/disk68" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk72" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk73" has been successfully deleted from
physical volume group "PVG001".
Physical volume "/dev/disk/disk66" has been successfully deleted from
physical volume group "PVG001".
vgexport: Volume group "vg_test" has been successfully removed.

#-> mkdir /dev/vg_original

#-> mknod /dev/vg_original/group c 64 0x280000

#-> vgimport -s -v -m vg_test.mapfile /dev/vg_original
The legacy naming model has been disabled on the system.
Try with the -N option.

Add the volume group entry back to the LVM configuration files with the vgimport command,
using the new volume group name to import the information:

#-> vgimport -s -v -N -m vg_test.mapfile /dev/vg_original
Beginning the import process on Volume Group "/dev/vg_original".
vgimport: Volume group "/dev/vg_original" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.

Activate the newly imported volume group as follows:

#-> vgdisplay /dev/vg_original
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vg_original".

#-> vgchange -a y /dev/vg_original
Activated volume group.
Volume group "/dev/vg_original" has been successfully changed.

#-> vgdisplay /dev/vg_original
--- Volume groups ---
VG Name                     /dev/vg_original
VG Write Access             read/write    
VG Status                   available                
Max LV                      255   
Cur LV                      0     
Open LV                     0     
Max PV                      255   
Cur PV                      4     
Act PV                      4     
Max PE per PV               10000       
VGDA                        8  
PE Size (Mbytes)            128            
Total PE                    1596   
Alloc PE                    0      
Free PE                     1596   
Total PVG                   0       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  1.0      
VG Max Size                 318750g   
VG Max Extents              2550000   

Back up the volume group configuration as follows:
#-> vgcfgbackup   /dev/vg_original
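The whole rename can be condensed into one sketch (a hypothetical helper following the exact sequence above; the minor-number argument comes from the old group file). With DRYRUN=1 set it only prints the commands instead of executing them:

```shell
# rename_vg OLD NEW MINOR - sketch of the LVM volume group rename sequence.
# Set DRYRUN=1 to print the commands instead of running them.
rename_vg() {
  old=$1 new=$2 minor=$3
  run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
  run vgexport -p -s -v -m "$old.mapfile" "$old"    # preview; writes the mapfile
  run vgchange -a n "$old"                          # deactivate the VG
  run vgexport "$old"                               # remove the old VG entry
  run mkdir "/dev/$new"
  run mknod "/dev/$new/group" c 64 "$minor"         # reuse the old minor number
  run vgimport -s -N -m "$old.mapfile" "/dev/$new"  # -N: persistent device files
  run vgchange -a y "/dev/$new"                     # activate under the new name
  run vgcfgbackup "/dev/$new"                       # back up the configuration
}
```

`DRYRUN=1 rename_vg vg_test vg_original 0x280000` prints the same command sequence used in the transcript above.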

Keywords.                                                                                                                                           
vgimport , vgexport