UnixPedia : HPUX / LINUX / SOLARIS: March 2014

Saturday, March 29, 2014

HPUX : Mounting NFS FS via autofs with options

Mounting NFS FS via autofs with options
Overview
Mounting NFS FS via autofs with options
Procedures
NFS file systems that are exported via a package should be mounted via autofs.

Add an entry for the NFS mount to the /etc/auto.direct file with the desired mount options.
[root@Apple:/.root]#
#-> vi /etc/auto.direct
"/etc/auto.direct" 4 lines, 403 characters
/opr_nphpnecc1 "-o vers=3,proto=udp,retry=3" nphpnecc1.company.com:/opr_nphpnecc1
/sapmnt/EJ1     "-o vers=3,proto=udp,retry=3" nphpnecc1.company.com:/export/sapmnt/EJ1
#/oracle/EJ1/sapbackup  "-o vers=3,proto=udp,retry=3" nphpnecc1.company.com:/export/oracle/EJ1/sapbackup
/oracle/EJ1/sapbackup   "-o rw,bg,hard,rsize=32768,wsize=32768,vers=3,forcedirectio,nointr,proto=tcp,suid" nphpnecc1.company.com:/export/oracle/EJ1/sapbackup
~
~
:wq!
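For a direct map like this to take effect, /etc/auto_master normally carries a `/-  /etc/auto.direct` entry; that part is assumed here and not shown in the transcript. Each map line has three fields, which can be illustrated with a quick parse (a minimal sketch; the sample entry string is modeled on the transcript above):

```shell
# One auto.direct entry: <mount point> "<options>" <server>:<export path>
entry='/oracle/EJ1/sapbackup "-o rw,bg,hard,vers=3,proto=tcp" nphpnecc1.company.com:/export/oracle/EJ1/sapbackup'

mountpoint=$(echo "$entry" | awk '{print $1}')    # where autofs mounts it
server_path=$(echo "$entry" | awk '{print $NF}')  # what the NFS server exports
echo "mount point: $mountpoint"
echo "source:      $server_path"
```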

Run the command below to make automount re-read the auto.direct file.

#-> automount -v
automount: /oracle/EJ1/sapbackup mounted
automount: no unmounts
[root@Apple:/.root]#

List the mount point to confirm the NFS file system gets mounted via autofs.
#-> ll /oracle/EJ1/sapbackup
total 508
-rw-r--r--   1 oraej1     dba           1316 Mar 16 03:34 backEJ1.log
-rw-r--r--   1 oraej1     dba          21328 Mar  3 05:18 beniaszj.anf
-rw-r--r--   1 oraej1     dba          23645 Mar  6 18:52 benisgyv.anf
-rw-r--r--   1 oraej1     dba          21539 Mar  9 01:04 benjdjsd.anf
-rw-r--r--   1 oraej1     dba          22938 Mar  9 02:35 benjdrtg.anf
-rw-r--r--   1 oraej1     dba          23141 Mar  9 04:45 benjedgk.anf
-rw-r--r--   1 oraej1     dba         141384 Mar 16 03:35 benkkbpv.anf
drwxr-xr-x   2 root       root            96 Feb 20 20:17 lost+found
[root@Apple:/.root]#
#-> bdf  -t nfs
Filesystem          kbytes    used   avail %used Mounted on

nphpnecc1.company.com:/export/oracle/EJ1/sapbackup
                   20512768   71927 19163294    0% /oracle/EJ1/sapbackup

[root@Apple:/.root]#

#Check the parameters of the mounted NFS file system with #nfsstat -m and #mount -v
#-> nfsstat -m
/oracle/EJ1/sapbackup from nphpnecc1.company.com:/export/oracle/EJ1/sapbackup
 Flags:         vers=3,proto=tcp,sec=sys,hard,nointr,forcedirectio,link,symlink,acl,nodevs,rsize=32768,wsize=32768,retrans=5,timeo=600
 Attr cache:    acregmin=3,acregmax=60,acdirmin=30,acdirmax=60

[root@Apple:/.root]#
#-> mount -v | grep -i "/oracle/EJ1/sapbackup"
nphpnecc1.company.com:/export/oracle/EJ1/sapbackup on /oracle/EJ1/sapbackup type nfs nointr,nodevs,forcedirectio,rsize=32768,wsize=32768,NFSv3,dev=4000096 on Sat Mar 29 03:06:02 2014
[root@Apple:/.root]#
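As a quick sanity check, the options requested in auto.direct can be matched against the mount -v output. A minimal sketch (the sample line is copied from the session above; the loop simply greps for each expected option):

```shell
# mount -v output line for the sapbackup mount (from the transcript):
line='nphpnecc1.company.com:/export/oracle/EJ1/sapbackup on /oracle/EJ1/sapbackup type nfs nointr,nodevs,forcedirectio,rsize=32768,wsize=32768,NFSv3,dev=4000096'

# Count any requested options missing from the actual mount.
missing=0
for opt in forcedirectio nointr rsize=32768 wsize=32768; do
  echo "$line" | grep -q "$opt" || missing=$((missing + 1))
done
echo "missing options: $missing"
```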
Keywords
automountd, autofs, mount, nfsstat


Thursday, March 27, 2014

VXVM : Create a volume group and a File system under Vxvm

Create a volume group and a File system under Vxvm
Overview
Create a volume group and a File system under Vxvm
Procedures
1.    Initialize disks to be used with VxVM by running the vxdisksetup command only on the
primary system.
# /etc/vx/bin/vxdisksetup -i c5t0d0
2.    Create the disk group to be used with the vxdg command only on the primary system.
# vxdg init logdata c5t0d0
3.    Verify the configuration.
# vxdg list
4.    Use the vxassist command to create the logical volume.
             # vxassist -g logdata make logfile 2048m
5.    Verify the configuration.
             # vxprint -g logdata
6.     Make the filesystem.
             # newfs -F vxfs /dev/vx/rdsk/logdata/logfile
7.     Create a directory on which to mount the file system.
             # mkdir /logs
8.     Mount the file system.
# mount /dev/vx/dsk/logdata/logfile /logs
9.     Verify the file system is mounted, then unmount it when finished.
# umount /logs
Keywords
vxdg, vxvm, vxassist


HPUX : Create a volume group version 2.0



Create a volume group version 2.0
Overview
Create a volume group version 2.0 with 8 disk, for pvg-strict/distributed.
Procedures
Create a volume group version 2.0 named /dev/vg09 with 8 physical volumes, an extent size of 256 megabytes and a maximum total size of 1 petabyte.
#cat xp.out |grep -iE "4f:14|4f:15|4f:16|4f:17|5d:c4|5d:c5|5d:c6|5d:c7"
/dev/rdisk/disk212           82  --- 24  CL5H  4f:14  OPEN-V           00065760  
/dev/rdisk/disk173           82  --- 25  CL5H  4f:15  OPEN-V           00065760  
/dev/rdisk/disk215           74  --- 26  CL6H  4f:16  OPEN-V           00065760  
/dev/rdisk/disk225           74  --- 27  CL6H  4f:17  OPEN-V           00065760  
/dev/rdisk/disk763           88  --- 88  CL1H  5d:c4  OPEN-V           00066657  
/dev/rdisk/disk764           88  --- 89  CL1H  5d:c5  OPEN-V           00066657  
/dev/rdisk/disk766           75  --- 8a  CL4H  5d:c6  OPEN-V           00066657  
/dev/rdisk/disk770           75  --- 8b  CL4H  5d:c7  OPEN-V           00066657  
#

#cat RDISK |sed "s/rdisk/disk/"
/dev/disk/disk212
/dev/disk/disk173
/dev/disk/disk215
/dev/disk/disk225
/dev/disk/disk763
/dev/disk/disk764
/dev/disk/disk766
/dev/disk/disk770

#cat RDISK |sed "s/rdisk/disk/" |while read disk
> do
> strings /etc/lvmtab|grep -w $disk
> done
#
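The loop above returns nothing because none of the candidate disks is already recorded in /etc/lvmtab, so they are safe to initialize. The same membership check can be illustrated with a simulated lvmtab (the contents below are hypothetical, standing in for the real binary file read with strings(1)):

```shell
# Simulated /etc/lvmtab contents (hypothetical):
lvmtab_entries='/dev/vg00
/dev/disk/disk10
/dev/disk/disk11'

# A disk is safe to pvcreate only if it is absent from lvmtab.
disk=/dev/disk/disk212
if echo "$lvmtab_entries" | grep -qw "$disk"; then
  status="in use"
else
  status="free"
fi
echo "$disk is $status"
```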

Initialize the disks using pvcreate:
cat RDISK |while read disk
> do
> pvcreate  $disk
> done
Physical volume "/dev/rdisk/disk212" has been successfully created.
Physical volume "/dev/rdisk/disk173" has been successfully created.
Physical volume "/dev/rdisk/disk215" has been successfully created.
Physical volume "/dev/rdisk/disk225" has been successfully created.
Physical volume "/dev/rdisk/disk763" has been successfully created.
Physical volume "/dev/rdisk/disk764" has been successfully created.
Physical volume "/dev/rdisk/disk766" has been successfully created.
Physical volume "/dev/rdisk/disk770" has been successfully created.
The vg_name directory and group file will be created automatically. Optionally, these files can be created before doing the vgcreate, as follows:

           mkdir /dev/vg09
           mknod /dev/vg09/group c 128 0x009000

           NOTE: Notice that the major number for a volume group version 2.0
           or higher is 128 while the major number for a volume group
           version 1.0 is 64.  Also, the volume group number occupies the
           high order 12 bits of the minor number rather than the high order
           8 bits as in volume groups version 1.0.
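The minor-number layout described in the note can be verified with a little shell arithmetic (a worked example for vg09, i.e. VG number 9, with a 24-bit minor number):

```shell
vgnum=9
minor_v2=$(printf "0x%06x" $((vgnum << 12)))  # version 2.0: VG number in the high-order 12 bits
minor_v1=$(printf "0x%06x" $((vgnum << 16)))  # version 1.0: VG number in the high-order 8 bits
echo "v2.0 group file minor: $minor_v2"
echo "v1.0 group file minor: $minor_v1"
```

This reproduces the 0x009000 used in the mknod example above.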
Create the volume group version 2.0.
#  vgcreate -V 2.0 -s 256 -S 1p /dev/vg09 /dev/disk/disk212  /dev/disk/disk173
Volume group "/dev/vg09" has been successfully created.
Volume Group configuration for /dev/vg09 has been saved in /etc/lvmconf/vg09.conf
 #vgextend -g PVGNAME01 /dev/vg09 /dev/disk/disk212 /dev/disk/disk173 /dev/disk/disk215 /dev/disk/disk225
vgextend: The physical volume "/dev/disk/disk212" is already recorded in the "/etc/lvmtab_p" file.
vgextend: The physical volume "/dev/disk/disk173" is already recorded in the "/etc/lvmtab_p" file.
Physical volume group "PVGNAME01" has been successfully extended.
Volume Group configuration for /dev/vg09 has been saved in /etc/lvmconf/vg09.conf
# vgextend -g PVGNAME02 /dev/vg09 /dev/disk/disk763 /dev/disk/disk764 /dev/disk/disk766 /dev/disk/disk770
Volume group "/dev/vg09" has been successfully extended.
Physical volume group "PVGNAME02" has been successfully extended.
Volume Group configuration for /dev/vg09 has been saved in /etc/lvmconf/vg09.conf
#

#vgdisplay -v /dev/vg09
--- Volume groups ---
VG Name                     /dev/vg09
VG Write Access             read/write    
VG Status                   available                 
Max LV                      511   
Cur LV                      0     
Open LV                     0     
Max PV                      511   
Cur PV                      8     
Act PV                      8     
Max PE per PV               65536         
VGDA                        16 
PE Size (Mbytes)            256            
Total PE                    2840          
Alloc PE                    0             
Free PE                     2840          
Total PVG                   2       
Total Spare PVs             0             
Total Spare PVs in use      0                    
VG Version                  2.0      
VG Max Size                 1p        
VG Max Extents              4194304       


   --- Physical volumes ---
   PV Name                     /dev/disk/disk212
   PV Status                   available               
   Total PE                    499           
   Free PE                     499           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk173
   PV Status                   available               
   Total PE                    499           
   Free PE                     499           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk215
   PV Status                     available               
   Total PE                       499           
   Free PE                         499           
   Autoswitch                  On       
   Proactive Polling         On              

   PV Name                     /dev/disk/disk225
   PV Status                   available               
   Total PE                    499           
   Free PE                     499           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk763
   PV Status                   available               
   Total PE                    211           
   Free PE                     211           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk764
   PV Status                   available               
   Total PE                    211           
   Free PE                     211           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk766
   PV Status                   available               
   Total PE                    211           
   Free PE                     211           
   Autoswitch                  On       
   Proactive Polling           On              

   PV Name                     /dev/disk/disk770
   PV Status                   available               
   Total PE                    211           
   Free PE                     211           
   Autoswitch                  On       
   Proactive Polling           On              


   --- Physical volume groups ---
   PVG Name                    PVGNAME01
   PV Name                     /dev/disk/disk212
   PV Name                     /dev/disk/disk173
   PV Name                     /dev/disk/disk215
   PV Name                     /dev/disk/disk225

   PVG Name                    PVGNAME02
   PV Name                     /dev/disk/disk763
   PV Name                     /dev/disk/disk764
   PV Name                     /dev/disk/disk766
   PV Name                     /dev/disk/disk770
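The Total PE figure in the vgdisplay output can be cross-checked against the per-PV listing: four PVs in PVGNAME01 contribute 499 extents each and four PVs in PVGNAME02 contribute 211 each, at a PE size of 256 MB:

```shell
# Sum extents over the two PVGs as listed by vgdisplay above.
total_pe=$((4 * 499 + 4 * 211))
total_mb=$((total_pe * 256))   # PE Size (Mbytes) = 256
echo "Total PE: $total_pe"
echo "Capacity: ${total_mb} MB"
```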

# Create the LV as per requirement

#lvcreate -n lv_Data /dev/vg09
#lvextend -L 101024 /dev/vg09/lv_Data PVGNAME01
#newfs -F vxfs -o largefiles -b 8192 /dev/vg09/rlv_Data
#mkdir /oracle/SAP/sapdata1
#mount /dev/vg09/lv_Data /oracle/SAP/sapdata1
#bdf /oracle/SAP/sapdata1



#strings /etc/lvmtab_p
/dev/vg09
A0000000000000001Thu Mar 27 16:27:27 20148bcaad08-1ae4-11e1-be63-8775a8ebba59
/dev/disk/disk212
/dev/disk/disk173
/dev/disk/disk215
/dev/disk/disk225
/dev/disk/disk763
/dev/disk/disk764
/dev/disk/disk766
/dev/disk/disk770

How to decommission the VG if it is not in use:

#vgchange -a n /dev/vg09
Volume group "/dev/vg09" has been successfully changed.
#vgexport /dev/vg09
Physical volume "/dev/disk/disk212" has been successfully deleted from
physical volume group "PVGNAME01".
Physical volume "/dev/disk/disk173" has been successfully deleted from
physical volume group "PVGNAME01".
Physical volume "/dev/disk/disk215" has been successfully deleted from
physical volume group "PVGNAME01".
Physical volume "/dev/disk/disk225" has been successfully deleted from
physical volume group "PVGNAME01".
Physical volume "/dev/disk/disk763" has been successfully deleted from
physical volume group "PVGNAME02".
Physical volume "/dev/disk/disk764" has been successfully deleted from
physical volume group "PVGNAME02".
Physical volume "/dev/disk/disk766" has been successfully deleted from
physical volume group "PVGNAME02".
Physical volume "/dev/disk/disk770" has been successfully deleted from
physical volume group "PVGNAME02".
vgexport: Volume group "/dev/vg09" has been successfully removed.
#


           Create a volume group version 2.0 of size comparable to a volume
           group version 1.0 created with pe_size=64, max_pe=4096, and
           max_pv=16 on an already initialized disk.

           First calculate the appropriate vg_size parameter as follows:
           max_pe x max_pv x pe_size=vg_size (in megabytes) 4096 x 16 x 64 =
           4194304m = 4t.  Now create the volume group version 2.0.

           vgcreate -V 2.0 -s 64 -S 4t vg10 /dev/disk/disk04

           Display the minimum extent size required to create a 512 terabyte
           volume group version 2.0.

           vgcreate -V 2.0 -E -S 512t
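The size calculation in the man-page example above can be verified directly:

```shell
# max_pe x max_pv x pe_size = vg_size (in MB), per the example above.
vg_size_mb=$((4096 * 16 * 64))
vg_size_tb=$((vg_size_mb / 1024 / 1024))
echo "${vg_size_mb}m = ${vg_size_tb}t"
```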
Keywords
vgcreate, vgextend, pvcreate, lvcreate, vgexport

HPUX : System Tool for Blade Servers – CPROP

Overview
This procedure describes how to use the cprop system tool on blade servers.
Procedures
As CSTM is not supported on blade servers, the cprop utility can be used to gather system information.
·         List the components that can be queried by cprop; check the Status tab for any error:
#-> /opt/propplus/bin/cprop -list
·         Check the summary of a device:
#-> /opt/propplus/bin/cprop -summary -c Memory
·         Query the detail view of a device:
#-> /opt/propplus/bin/cprop -detail -d Memory:5cb50569x139ce0d
Use man cprop and man cprop_healthtest for more information about this utility.
Keywords
cprop, hp-ux, blade, bl860, bl890, bl870, linux, server, system utility, system tool, server health, hardware