UnixPedia : HPUX / LINUX / SOLARIS: HPUX :HP ServiceGuard for HP-UX - How to Add a Volume Group and Logical Volume to a Package.

Friday, April 11, 2014


Issue

How to add a new Volume Group (VG) and file system to a ServiceGuard package?

Solution

This document includes techniques for both modular and legacy packages.
Process:
Depending on the version of System Management Homepage (SMH) available, the administrator can create a clustered volume group using this menu path:
SMH -> Tools -> Disks and File Systems -> Volume Groups -> Create Cluster VG.
The following describes the manual technique using a fictitious volume group to demonstrate the process.
1. On one node in the cluster, create the volume group. Be careful not to select a disk already in use on another server outside of the cluster.

     # pvcreate <options> /dev/dsk/c_t_d_ 

Repeat as needed.
(If desired, agile/persistent addressing can be used but all nodes must use it.)
Create the volume group directory:
           
     # mkdir /dev/vg07 
     # mknod /dev/vg07/group c 64 0x070000 
                                    \_ unique minor number 

The "group" special file must have a unique minor number (00-FF).
Use 'll /dev/*/group' to determine those already in use. If the VG will support Network File System (NFS) mounts, the minor number must be the same across all nodes in the cluster.
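As a sketch of how the used minor numbers could be collected and a free one chosen, the following parses sample `ll` output (the two device lines below are hypothetical; on a live system you would pipe the real output of `ll /dev/*/group` into the awk filter instead):

```shell
# Sketch: find a free minor number for a new "group" file. The sample
# `ll` output below is hypothetical; on a live system, pipe the real
# output of `ll /dev/*/group` into the awk filter instead.
sample_ll='crw-r----- 1 root sys 64 0x000000 Jan 10 12:00 /dev/vg00/group
crw-r----- 1 root sys 64 0x010000 Jan 10 12:00 /dev/vg01/group'

# The device minor (0xNN0000) is field 6; keep only the NN byte.
used=$(printf '%s\n' "$sample_ll" | awk '{ print substr($6, 3, 2) }')

# Pick the first unused value in the 00-FF range (linear scan).
n=0
while [ "$n" -le 255 ]; do
    cand=$(printf '%02X' "$n")
    case " $(echo $used) " in
        *" $cand "*) n=$((n + 1)) ;;
        *) break ;;
    esac
done
echo "first free minor: 0x${cand}0000"
```

With minors 00 and 01 in use, this prints 0x020000 as the first free value.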

     # vgcreate vg07 /dev/dsk/___ ... 

(Important: include the disk special file path for each LUN.)
The volume group should now be activated. If it is not, activate it with:

    # vgchange -a y vg07 

2. Create logical volumes in the new VG. Example:

    # lvcreate -L 7000 vg07 ... 

Optionally add a mirror for redundancy (the initial synchronization takes time):

    # lvextend -m 1  /dev/vg07/lvol1  

3. Create a file system on the new logical volume if needed. Example:

    # newfs -F vxfs -o largefiles /dev/vg07/rlvol1 

4. Verify the file system will mount properly:

    # mkdir /<mount_dir>     (repeat on all nodes) 
    # mount /dev/vg07/lvol1 /<mount_dir> 

Unmount the file system and deactivate the VG:

    # umount /<mount_dir> 
    # vgchange -a n vg07 

5. On a node where ServiceGuard is running (verify with cmviewcl), make the VG cluster aware. Doing so prevents accidental VG activation and permits the VG to be activated by the package control script:

     # vgchange -c y vg07 

NOTE: For SGeRAC shared-mode VGs, use vgchange -c y -S y vg07.
6. Create a VG map file before importing the VG on the other nodes with vgimport.
   
     # vgexport -pvs -m /etc/lvmconf/map.vg07 /dev/vg07 

(-s ensures the VG's unique ID (VGID) is recorded at the top of the map file.)
This command produces a file in the following format:

      VGID 2c80715b3462331e   
      1 lvol1 
      2 lvol2 
      3 lvol3 
      4 lvol4 
      5 lvol5 
       (left column: lvol numbers; right column: custom lvol names) 
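A map file in this format can be sanity-checked before it is copied. The sketch below inlines the example contents as a here-document standing in for /etc/lvmconf/map.vg07; on a live system the awk filters would read the real file:

```shell
# Sketch: sanity-check a vgexport -s map file. The here-document stands
# in for /etc/lvmconf/map.vg07; point the awk filters at the real file
# on a live system.
mapfile=$(cat <<'EOF'
VGID 2c80715b3462331e
1 lvol1
2 lvol2
3 lvol3
EOF
)

vgid=$(printf '%s\n' "$mapfile" | awk '/^VGID/ { print $2 }')
lvols=$(printf '%s\n' "$mapfile" | awk '!/^VGID/ { print $2 }')

# A missing VGID header usually means -s was not passed to vgexport.
[ -n "$vgid" ] || echo "WARNING: no VGID line - was -s passed to vgexport?"
echo "VGID: $vgid"
printf '%s\n' "$lvols"
```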

Copy the map file to the other nodes. Example:

     # rcp /etc/lvmconf/map.vg07 (othernode):/etc/lvmconf/map.vg07 

NOTE: "othernode" is a reference to the hostname of the destination server.
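With several adoptive nodes, the copy can be scripted. This dry-run sketch (the node names are hypothetical) only prints the commands it would run; drop the echo to actually copy:

```shell
# Sketch: build the rcp command for each adoptive node without running
# it. Node names below are hypothetical; drop the echo to actually copy.
map=/etc/lvmconf/map.vg07
nodes="nodeb nodec"

for node in $nodes; do
    cmd="rcp $map ${node}:$map"
    echo "$cmd"
done
```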
7. On the other nodes in the cluster, prepare to vgimport the new VG:

     # mkdir /dev/vg07 
     # mknod /dev/vg07/group c 64 0x0N0000  
       ... where N is a minor number unique among group files 

NOTE: The NFS restriction on the minor number described in step 1 applies here as well.
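The 0x0N0000 minor number can be composed mechanically. A small helper (the function name is ours, not a ServiceGuard or HP-UX tool) prints the mknod line for a given VG name and minor byte:

```shell
# Sketch: compose the mknod line for a VG's group file from the VG name
# and the minor-number byte (0-255). The helper name is hypothetical,
# not a ServiceGuard or HP-UX tool.
group_mknod_cmd() {
    printf 'mknod /dev/%s/group c 64 0x%02X0000\n' "$1" "$2"
}

group_mknod_cmd vg07 7
```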
8. The map file created in step 6 avoids the need to specify each disk with the vgimport command. When used with the '-s' option and a map file headed by the VGID, vgimport causes LVM to scan all attached disks and load /etc/lvmtab with those matching the VGID.
Import the new VG on the adoptive node:

    # vgimport -vs -m /etc/lvmconf/map.vg07 vg07 

9. To ensure that future cmapplyconf operations do not remove the VG's cluster awareness, locate the cluster configuration file and add the new volume group name.
There is no fixed naming convention for this file. The SAM utility names it /etc/cmcluster/cmclconfig.ascii; administrators often call it cluster.ascii.
If the file cannot be found on any of the nodes, regenerate it with:

  # cmgetconf cluster.ascii 

Add a reference to the new VG in the cluster configuration file:


     VOLUME_GROUP            /dev/vg07 

Copy the file to the other nodes as a backup.
Before proceeding with the remaining steps, it is crucial to activate the VG on each cluster node and mount the logical volumes to their mount directories (unmount and deactivate after verifying).
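The per-node verification can be scripted as a dry run. This sketch only prints the command sequence to run on each node; the node and mount-point names are hypothetical, and once the VG is cluster-aware, activation is typically exclusive (-a e) rather than -a y:

```shell
# Sketch: print the activate/mount/umount/deactivate sequence to verify
# the VG on every cluster node. Node and mount-point names below are
# hypothetical examples; nothing is executed, only printed.
vg=vg07
mnt=/vg07_mount

verify_cmds() {
    echo "# on $1:"
    echo "vgchange -a e $vg"          # exclusive activation (cluster-aware VG)
    echo "mount /dev/$vg/lvol1 $mnt"
    echo "umount $mnt"
    echo "vgchange -a n $vg"
}

for node in nodea nodeb; do
    verify_cmds "$node"
done
```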
10. Add the new LVM references to the package.
Modular packages:
i. Add the LVM references to the package configuration file:
vg vg07
fs_name /dev/vg07/lvol1
fs_directory /113
fs_type "vxfs"
fs_mount_opt "-o largefiles,rw"
fs_umount_opt ""
fs_fsck_opt ""
ii. Perform cmapplyconf on the modular package configuration file.
iii. If the package is not running, start it.
Legacy packages:
i. Add the VG, LVOL and mount points to the package control script that controls the new VG.
Example lines added to package control script:

# Note: The FS_TYPE parameter lets you specify the type of filesystem to be
# mounted. Specifying a particular FS_TYPE will improve package failover time.
# The FSCK_OPT and FS_UMOUNT_OPT parameters can be used to include the
# -s option with the fsck and umount commands to improve performance for
# environments that use a large number of filesystems. (An example of a
# large environment is given below following the description of the
# CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS parameter.)
#
# Example: If a package uses two JFS filesystems, pkg01a and pkg01b,
# which are mounted on LVM logical volumes lvol1 and lvol2 for read and
# write operation, you would enter the following:
#      LV[0]=/dev/vg01/lvol1; FS[0]=/pkg01a; FS_MOUNT_OPT[0]="-o rw";
#      FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""; FS_TYPE[0]="vxfs"
#
#      LV[1]=/dev/vg01/lvol2; FS[1]=/pkg01b; FS_MOUNT_OPT[1]="-o rw"
#      FS_UMOUNT_OPT[1]=""; FS_FSCK_OPT[1]=""; FS_TYPE[1]="vxfs"
#
#LV[0]=""; FS[0]=""; FS_MOUNT_OPT[0]=""; FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""
#FS_TYPE[0]=""



     VG[7]="vg07"  
         and  
     LV[4]="/dev/vg07/lvol1"; FS[4]="/sg1"; FS_MOUNT_OPT[4]="-o rw" 
     LV[5]="/dev/vg07/lvol2"; FS[5]="/sg2"; FS_MOUNT_OPT[5]="-o rw" 
     LV[6]="/dev/vg07/lvol3"; FS[6]="/dump5"; FS_MOUNT_OPT[6]="-o rw" 
     LV[7]="/dev/vg07/lvol4"; FS[7]="/depot"; FS_MOUNT_OPT[7]="-o rw" 

Note the consecutively incremented index values; consecutive indices are mandatory.
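Because entries after a gap in the index sequence are ignored, the indices are worth checking mechanically. This sketch scans a control-script fragment (inlined as a here-document standing in for the real script; the jump from LV[5] to LV[7] is deliberate) for non-consecutive LV[] indices:

```shell
# Sketch: detect gaps in the LV[] index sequence of a control script.
# The here-document stands in for the real package control script; the
# deliberate jump from LV[5] to LV[7] is flagged.
fragment=$(cat <<'EOF'
LV[4]="/dev/vg07/lvol1"; FS[4]="/sg1"; FS_MOUNT_OPT[4]="-o rw"
LV[5]="/dev/vg07/lvol2"; FS[5]="/sg2"; FS_MOUNT_OPT[5]="-o rw"
LV[7]="/dev/vg07/lvol4"; FS[7]="/depot"; FS_MOUNT_OPT[7]="-o rw"
EOF
)

# Split on [ and ] so the index is field 2 of each LV[...] line.
result=$(printf '%s\n' "$fragment" | awk -F'[][]' '
    /^LV\[/ {
        if (seen && $2 != prev + 1)
            printf "gap: LV[%d] follows LV[%d]\n", $2, prev
        prev = $2; seen = 1
    }')

[ -n "$result" ] && echo "$result" || echo "indices are consecutive"
```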
ii. Check the script for syntax errors:

     # sh -n <pkg.cntl script>  

iii. Copy the updated control script to the adoptive node(s).
iv. Ensure the modified package control script works by testing package startup and shutdown when downtime is available.
If the legacy package is already running, manually activate the VG and mount its logical volume(s) so that the package halts cleanly when it is next stopped.
To stop a currently running package:

     # cmhaltpkg <package name> 

To start a package on a specific node:

     # cmrunpkg -n <nodename> <pkg name> 

Drop the '-n <nodename>' if the package is to be started on the current node.
NOTE: It is not necessary to 'cmapplyconf' the cluster.ascii since the cluster ID is already in the VG metadata.

