
Friday, September 26, 2014

HPUX : FILE SYSTEM CREATION IN CLUSTER ENVIRONMENT WITH DISK FROM STORAGE



1.              Purpose

This document describes how to create and extend filesystems on HP-UX from SAN-presented disks in a Serviceguard cluster environment.

2.              Scope

This operation checklist addresses the filesystem creation/extension process using SAN disks. It defines usage instructions for creating and publishing checklists, preliminary configuration information, prerequisite requirements, procedural steps, and verification activities.

3.              Filesystem creation in a cluster environment with SAN disk

3.1                  Add disks (LUNs) to volume group / logical volume

To add new LUNs to the system, use the following procedure. The ticket or the SAN team will provide the following information (usually in the ticket comments): the CU:LDEV of the disk presented to the Unix server via the requested HBA.

The LUN number points to the XP array and is of no importance. The hwu number is the unique internal address to the storage. Use the following command to find the corresponding device number.
The XP array disk must be presented to every node to which the package can switch.

#-> cmviewcl –vp <package name>

     Node_Switching_Parameters:
      NODE_TYPE    STATUS       SWITCHING    NAME
      Primary      up           enabled      NODEA (current)
      Alternate    up           enabled      NODEB
i.e., the LUN should be visible to both NODEA and NODEB.

The xpinfo command is ideally run after insf and ioscan have been executed on the system.
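As a minimal sketch, the discovery sequence before xpinfo typically looks like this (standard ioscan/insf usage; adjust the options to the platform standard):

#-> ioscan -fnC disk          # scan the disk class for the newly presented LUNs
#-> insf -e -C disk           # (re)create device special files for the disk class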

#-> /usr/local/CPR/bin/xpinfo -il > xpinfo.output &
#-> grep -iE '40:90|40:91|40:92|40:93' /home/ssingh45/xpinfo.output | grep -i <array number>
/dev/rdisk/disk32           dc  --- 00  CL5B   40:90  OPEN-V           00090451
/dev/rdisk/disk33            d1  --- 01  CL6B  40:91  OPEN-V           00090451
/dev/rdisk/disk34            d1  --- 01  CL6B  40:92  OPEN-V           00090451
/dev/rdisk/disk35            d1  --- 01  CL6B  40:93  OPEN-V           00090451


Depending on the OS version, the cu:ldev lookup will return either a legacy device file or a persistent device file (11.31).
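On 11.31 the two naming schemes can be mapped with ioscan; a minimal sketch, where the legacy device name in the output is illustrative:

#-> ioscan -m dsf /dev/rdisk/disk32
Persistent DSF           Legacy DSF(s)
/dev/rdisk/disk32        /dev/rdsk/c5t0d1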

3.2                  Verification of disk

Check that the disks provided by the SAN team are not already in use and are new to the server.

#-> strings /etc/lvmtab | grep -i -e disk32 -e disk33 -e disk34 -e disk35


The above command should return no output, confirming that none of the disks is part of any VG/LV on the system.

#-> cat <<'EOF' >> ~ssingh45/disk.output
/dev/disk/disk32
/dev/disk/disk33
/dev/disk/disk34
/dev/disk/disk35
EOF

#-> cat ~ssingh45/disk.output | while read disk
> do
> pvdisplay $disk
> done

Expected output (a similar warning should appear for each of the four disks):
pvdisplay: Warning: couldn't query physical volume "/dev/disk/disk32":
The specified path does not correspond to physical volume attached to
this volume group

# insf

(Note: if this does not work, try 'insf -e' to force re-creation of device files for all devices.) Search for the disk devices (adding the -u option restricts the scan to usable devices and is quicker):

# ioscan -funC disk

Create a physical volume for every physical device:


3.3                  Initialization of disk for filesystem creation


# pvcreate /dev/rdsk/c.t.d.
Physical volume "/dev/rdsk/c.t.d." has been successfully created.

If the system complains (because it finds the device is already in use) and you are sure the volume can be created, you can use the '-f' option to force the command:

# pvcreate -f /dev/rdsk/c.t.
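A sketch of initializing all four disks from section 3.2 in one loop (assuming the 11.31 persistent device files shown above; pvcreate operates on the raw device file):

#-> for disk in disk32 disk33 disk34 disk35
> do
> pvcreate /dev/rdisk/$disk
> done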

Create the device directory for the volume group:

3.4                  Volume group creation


# mkdir /dev/VGNAME

Search for the next available major device number. First display the existing:

# ll /dev/*/group
crw-r-----   1 root       sys         64 0x000000 Nov 19  1999 group
crw-r--r--   1 root       sys         64 0x020000 Dec 11  2002 group
crw-------   1 root       sys         64 0x010000 May 15  2001 group

Watch out: the (hexadecimal) number after the major number 64 is the minor number. Create a new group file by choosing the next available minor number, in this case 0x030000:

# mknod /dev/VGNAME/group c 64 0x030000

Now add the physical volumes to the volume group. If you don't use alternate pathing (two separate paths to the same device), you can leave out the second device file:

# vgcreate /dev/VGNAME /dev/dsk/c.t.d. /dev/dsk/c.t.d.
Volume group "/dev/VGNAME" has been successfully created.
Volume Group configuration for /dev/VGNAME has been saved in /etc/lvmconf/VGNAME.conf

Add the other physical volumes to the vg (if necessary):




LVM Parameter | Default Value | Maximum Value | Can be set by | Comments
Max # of volume groups (VGs) | 10 | 255 | kernel parameter maxvgs | HP-UX 11.11: the default value in the OS image is 10. HP-UX 11.23: the default value in the OS image is 256. HP-UX 11.31: the kernel parameter is obsolete; the default value is 256 when using LVM 1.0.
# of physical volumes (PVs) per volume group (VG) | 16 | 255 | vgcreate -p <max_pv> | The default is too low and must be set when the volume group is created.
# of logical volumes (LVs) per volume group (VG) | 255 | 255 | vgcreate -l <max_lv> | This setting can be left at the default or lowered.
Physical extent size | 4 MB | 256 MB | vgcreate -s <pe_size> | The pe_size has a direct impact on the maximum size of a single logical volume; the PE size table below shows the correlation.
Max # of physical extents (PEs) per physical volume (PV) | 1016 | 65535 | vgcreate -e <max_pe> | Formula: pe_size * max_pe = maximum size of usable disk. Refer to Appendix 1 for a table of these values.

PE Size | Largest size of a single logical volume
4 MB    | 256 GB
8 MB    | 512 GB
16 MB   | 1 TB
32 MB   | 2 TB
64 MB   | 4 TB
128 MB  | 8 TB
256 MB  | 16 TB
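As a worked example of the formula, the volume group shown in section 3.5 uses a 32 MB extent size with max_pe = 10000, so each PV can address 32 MB * 10000 = 320000 MB (roughly 312 GB) of usable disk. A sketch of creating such a VG (the values are illustrative):

# vgcreate -s 32 -e 10000 -p 255 -l 255 /dev/VGNAME /dev/dsk/c.t.d.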

# vgextend /dev/VGNAME /dev/dsk/c.t.d. /dev/dsk/c.t.d.

Display the characteristics of the new volume group:

# vgdisplay -v VGNAME
--- Volume groups ---
VG Name                     /dev/VGNAME
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               1016
VGDA                        2
PE Size (Mbytes)            4
Total PE                    538
Alloc PE                    0
Free PE                     538
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
   --- Physical volumes ---
   PV Name                     /dev/dsk/c36t14d2
   PV Name                     /dev/dsk/c37t14d2        Alternate Link
   PV Status                   available
   Total PE                    538
   Free PE                     538
   Autoswitch                  On

VG creation for a distributed-layout filesystem.

Physical volume groups can be created by using the "-g" option on the vgcreate and vgextend commands, or by manually editing the /etc/lvmpvg file. Refer to lvmpvg(4) man page for syntax.
Physical volume groups are used in conjunction with PVG-strict allocation policy. PVG-strict allocation ensures that primary and mirror extents do not reside within the same PVG. Typically each PVG consists of disk devices from the same controller card or set of controllers. Another example would be the case where LVs are mirrored between two separate disk arrays - the PVs in each of the arrays being assigned to separate PVGs.

Example /etc/lvmpvg file:
VG /dev/vgDist
PVG PVG1
/dev/dsk/c0t0d0
/dev/dsk/c0t1d0
/dev/dsk/c0t6d0
/dev/dsk/c0t7d0

PVG PVG2
/dev/dsk/c1t2d0
/dev/dsk/c1t3d0
/dev/dsk/c0t4d0
/dev/dsk/c0t5d0

# vgextend -g PVG<NAME> /dev/VGNAME /dev/dsk/c.t.d. /dev/dsk/c.t.d. /dev/dsk/c.t.d. /dev/dsk/c.t.d.   # four-way distributed layout

3.5                  Activate the volume group in cluster-aware mode.

#-> vgchange -c y /dev/VGNAME   # mark the VG as cluster-aware (the cluster must be running)
#-> vgchange -a e /dev/VGNAME   # activate the VG in exclusive mode on this node

#-> vgdisplay /dev/VGNAME
--- Volume groups ---
VG Name                     /dev/VGNAME
VG Write Access             read/write
VG Status                   available, exclusive
Max LV                      255
Cur LV                      62
Open LV                     62
Max PV                      255
Cur PV                      66
Act PV                      66
Max PE per PV               10000
VGDA                        132
PE Size (Mbytes)            32
Total PE                    230142
Alloc PE                    228989
Free PE                     1153
Total PVG                   14
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 81600000m
VG Max Extents              2550000

3.6                  Logical volume creation.

You can define the size in megabytes using the '-L' option, or in a number of physical extents (PEs) using the '-l' option.
If you want a size equal to the total available space, use '-l' with the number of free PEs.
By default the name lvol1 is used:

3.6.1                   Strict logical volume creation


# lvcreate -l SIZE /dev/VGNAME
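For example, to allocate all remaining space in the VG created in section 3.4 (538 free PEs in the vgdisplay output shown there), a sketch:

# lvcreate -l 538 -n lvol1 /dev/VGNAME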

3.6.2                   PVG-strict/distributed logical volume creation



# lvcreate -D y -s g -m 1 -L 1000 -n LVOL VGNAME
-D y             specifies the distributed allocation policy
-s g             specifies the PVG-strict allocation policy
-m 1             specifies 1 mirror copy
-L 1000          specifies the size in MB (1000)
-n LVOL          names the logical volume LVOL

3.6.3                   Striped/distributed logical volume creation


# lvcreate -i 3 -I 32 -l 240 -n lvol1 /dev/VGNAME
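The options, following the same pattern as above (stripe options as per the HP-UX lvcreate man page):

-i 3             stripes the logical volume across 3 physical volumes
-I 32            specifies a stripe size of 32 KB
-l 240           specifies the size as 240 physical extents
-n lvol1         names the logical volume lvol1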

Use parameters consistent with the existing standard on the system, or consult the standard HP-UX filesystem guideline.

Now the devices should exist in the lvmtab:
# strings /etc/lvmtab | grep c.t.
# ll /dev/VGNAME
total 0
crw-r--r--   1 root       sys         64 0x0a0000 Dec 15 14:18 group
brw-r-----   1 root       sys         64 0x0a0001 Dec 15 14:30 lvol1
crw-r-----   1 root       sys         64 0x0a0001 Dec 15 14:30 rlvol1

Check the new volume group:

# vgdisplay -v /dev/VGNAME

Create a filesystem on the new logical volume:

# newfs -o largefiles -F vxfs /dev/VGNAME/rlvol.
   version 3 layout
   2203648 sectors, 2203648 blocks of size 1024, log size 1024 blocks
   unlimited inodes, 2203648 data blocks, 2202000 free data blocks
   68 allocation units of 32768 blocks, 32768 data blocks
   last allocation unit has 8192 data blocks
   first allocation unit starts at block 0
   overhead per allocation unit is 0 blocks

Note that the following command must be used to create new sapdata filesystems:

# newfs -F vxfs -o largefiles -b 8192 /dev/VGNAME/rlvol

(A block size of 8 KB is needed by SAP/Oracle.)
(A block size of 1 KB is needed by redo/mirror logs.)
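A sketch of the corresponding command for a redo-log filesystem (1 KB block size, per the note above):

# newfs -F vxfs -o largefiles -b 1024 /dev/VGNAME/rlvol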


Create a mount-point:

# mkdir /.....

Mount the filesystem using the mount options required for its type (see the table below), and verify the mounted filesystems:

# mount -v

Type of File System | Required Blocksize (use -o bsize with mkfs) | Required Mount Options | Comments
Oracle redo log filesystems | 1K | largefiles,rw,mincache=direct,convosync=direct,delaylog,nodatainlog | Striping is not required. These filesystems should not be on the same disks as the Oracle indexes and datafiles.
Oracle archive log filesystems | 1K | largefiles,rw,mincache=direct,convosync=direct,delaylog,nodatainlog | Striping is not required. Can be on the same disks as the Oracle redo logs.
Oracle indexes and datafiles | 8K | largefiles,rw,delaylog,nodatainlog | Striping is required. Create filesystems using distributed extent-based striping.
General purpose filesystems | 8K | largefiles,rw,delaylog,nodatainlog | Striping can be used but is not required. If striping is used, create the filesystem(s) using distributed striping.
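For example, a sketch of mounting a general purpose filesystem with the options from the table (the mount point /data is illustrative):

# mount -F vxfs -o largefiles,rw,delaylog,nodatainlog /dev/VGNAME/lvol1 /data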


Set the permissions for the new top directory:

# chown ...:... /.....
# chmod ... /.....

Check the new situation:

# bdf
# vgdisplay -v /dev/VGNAME

3.7                  Validation of filesystem creation

Check that the layout of the filesystem matches the standard and the request.
#-> fstyp  -v  /dev/VGNAME/LVOL
vxfs
version: 6      <- version of the VxFS layout
f_bsize: 8192   <- the maximum block size that can be set; this value cannot be changed
f_frsize: 2048  <- the fragmentation block size, which was set using the -o bsize option
f_blocks: 4612096   (to understand more about f_bsize and f_frsize, see the statvfs man page)
f_bfree: 4602773
f_bavail: 4458937
f_files: 1150720
f_ffree: 1150688
f_favail: 1150688
f_fsid: 1074987009
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 9
f_size: 4612096
#-> lvdisplay -v /dev/VGNAME/LVOL

This shows the values and parameters that were used during LV creation.

3.8                  Adding the disk into Secure Path

Secure Path is used for load balancing of I/O on the server. It is very important to set the policy as per the guideline.

For 11.11 and 11.23:

#-> autopath discover
#-> autopath display | grep -e /dev/dsk | awk '{print $1}' | while read device
> do
> autopath set_lbpolicy SST $device
> done

For 11.31 :

For 11iv3 systems make sure the load balance policy is also set on the newly added disk:
Verify LB policy: scsimgr -p get_attr all_lun -a device_file -a load_bal_policy
Set LB policy on all disks: scsimgr save_attr -N "/escsi/esdisk" -a load_bal_policy=<LB policy>
Set LB policy on one disk:  scsimgr save_attr -D /dev/rdisk/disk530 -a load_bal_policy=<LB policy>
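For example, assuming the site standard calls for the round_robin policy (one of the standard 11.31 load balancing policies), a sketch for one of the newly added disks:

#-> scsimgr save_attr -D /dev/rdisk/disk32 -a load_bal_policy=round_robin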

3.9                  vgexport and vgimport of the mapfile in a cluster environment.

For 11.11 and 11.23

On a cluster server, vgexport the configuration from one node and vgimport it on the other node(s) to keep the layout consistent, so that a failover happens smoothly.
ON NODEA
#-> vgexport -psv -m  /home/ssingh45/VGNAME.mapfile VGNAME

Copy the mapfile to NODEB:
# scp -p /home/ssingh45/VGNAME.mapfile ssingh45@NODEB:

ON NODEB

#-> ll /dev/*/group

Make a note of the minor and major numbers if you want to keep them the same.

#-> vgexport VGNAME   # removes the old configuration
#-> mkdir /dev/VGNAME
#-> mknod /dev/VGNAME/group c 64 <minor number>
#-> vgimport -sv -m /home/ssingh45/VGNAME.mapfile VGNAME

For 11.31:
#-> vgimport -svN -m /home/ssingh45/VGNAME.mapfile VGNAME   # use -C in the case of a cluster-wide disk

Now activate the VG in read-only mode to check the configuration:
#-> vgchange -a r VGNAME

3.10               Updating the cluster configuration.

If a new VG or LV has been created, it is important to add the new VG or LV to the package control file (legacy package) or the package configuration file (modular package).

Update the same configuration on the other node(s) as well. In the case of a modular package, check and apply the package configuration:

# cmcheckconf -P <package>
# cmapplyconf -P <package>
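As a sketch, the new VG and filesystem would appear in a modular package configuration file with entries like these (the logical volume name and mount point are illustrative):

vg              VGNAME
fs_name         /dev/VGNAME/lvol1
fs_directory    /data
fs_type         vxfs
fs_mount_opt    "-o largefiles,rw,delaylog,nodatainlog"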
