Patch Name: PHKL_7868

Patch Description: s700 10.10 LVM kernel cumulative patch

Creation Date: 96/06/28

Post Date: 96/07/10

Hardware Platforms - OS Releases:
     s700: 10.10

Products: N/A

Filesets: LVM.LVM-KRN

Automatic Reboot?: Yes

Status: General Superseded

Critical: Yes
     PHKL_7868: PANIC
     PHKL_7164: ABORT
     PHKL_7187: CORRUPTION
     PHKL_7050: PANIC
     PHKL_6969: PANIC

Path Name: /hp-ux_patches/s700/10.X/PHKL_7868

Symptoms:
     PHKL_7868:
     lvreduce(1M) may cause a system panic if it is used to reduce an
     lvol that was left inconsistent by a prior LVM operation.
     lvreduce(1M) could not be used to remove lvols that had somehow
     become corrupted; if it was, the command would panic the system.

     PHKL_7164:
     Performance suffers for sequential i/o with LVM and disk arrays
     with stripe depth > 64k.

     Edison ARM utilities (and diagnostic tools that require exclusive
     access to a device) fail, reporting device busy (EBUSY), even when
     all volume groups accessing the device are deactivated.

     Edison ARM utilities (and diagnostic tools that require exclusive
     access to a device) fail, reporting device busy (EBUSY), if the
     device was ever used as a Service Guard Cluster Lock disk.

     PHKL_7187:
     lvmerge could merge an lvol back with all PEs marked as current
     even though the syncing of stale LTGs had failed.  lv_recover_ltg(k),
     which performs the syncing, had no mechanism to return an error to
     lv_table_reimage(k), so lv_table_reimage(k) reported success when
     that might not have been the case.

     PHKL_7050:
     pvmove leads to the panic "lv_reducelv extmap" when the mirrored
     logical volume contains unallocated physical extents (possibly
     caused by a previous unsuccessful pvmove operation, e.g. one
     interrupted with kill -9).

     vgchange: Couldn't activate volume group "/dev/vgpam":
     Invalid argument

     When a user deactivates a volume group, the system panics with
     "lv_cache_deactivate inflight".  PHKL_6675 fixes some cases; this
     patch fixes a remaining special case.

     PHKL_6969:
     The system panics when vgchange -a y is run on a volume group whose
     disk has bad sectors in the bad block directory area.

     PHKL_6852:
     An /etc/lvmtab which is out of date with the running kernel causes
     vgcfgbackup to fail.

Defect Description:
     PHKL_7868:
     The problem was that the kernel forced a panic whenever any
     inconsistency was found during an lvreduce.  For example, if a
     logical extent in an lvol referred to a physical extent that was
     not allocated, lvreduce(1M) would panic the system.  This occurred
     even when the objective was to remove the offending lvol.  This is
     a very rare occurrence.

     PHKL_7164:
     LVM user data was aligned on a 1k boundary.  For sequential i/o
     directed to disk arrays that stripe data, i/os with a buffer size
     approaching the stripe depth of the device (64k for the Edison
     array) could straddle a stripe boundary, requiring the device to
     perform two i/os for each i/o directed to it.

     LVM was not always closing and releasing devices it held when
     volume groups were deactivated or when a device had been used as a
     Service Guard Cluster Lock disk.

     PHKL_7187:
     lvmerge could merge an lvol back with all PEs marked as current
     even though the syncing of stale LTGs had failed.  lv_recover_ltg(k),
     which performs the syncing, had no mechanism to return an error to
     lv_table_reimage(k), so lv_table_reimage(k) reported success when
     that might not have been the case.

     PHKL_7050:
     A customer was using the pvmove command to move data from one disk
     to another.  For some reason, the mirrored logical volume being
     moved contained unallocated physical extents, and the operation
     then panicked the system with "lv_reducelv extmap".
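     As a minimal sketch of how such an inconsistency can arise (the
     device names below are illustrative, not taken from the original
     report; the panic string is the one quoted above):

          # Start moving extents off a disk, then interrupt the move.
          pvmove /dev/dsk/c0t5d0 /dev/dsk/c0t6d0 &
          kill -9 %1     # can leave unallocated physical extents behind

          # Before this patch, retrying the move (or reducing the
          # affected mirrored lvol with lvreduce) could panic the
          # system with "lv_reducelv extmap".
          pvmove /dev/dsk/c0t5d0 /dev/dsk/c0t6d0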
     For the "vgchange: Couldn't activate volume group" problem, the
     failure is easily reproduced following the replacement of a disk
     mechanism, or with a simple vgcfgrestore/vgchange combination on
     the right disks.  Connect an HP-FL disk to the system, then do the
     following.

     Create a volume group with this disk in it (this automatically
     runs vgcfgbackup):

          pvcreate -f /dev/rdsk/c0t2d0
          mkdir /dev/vgdan
          mknod /dev/vgdan/group c 64 0x090000
          vgcreate /dev/vgdan /dev/dsk/c0t2d0

     Then deactivate the volume group:

          vgchange -a n /dev/vgdan

     Restore the LVM data structures to the disk:

          vgcfgrestore -n /dev/vgdan /dev/rdsk/c0t2d0

     At this point, any further activation of the VG will fail:

          vgchange -a y /dev/vgdan
          vgchange: Couldn't activate volume group "/dev/vgpam":
          Invalid argument

     The "lv_cache_deactivate inflight" panics occurred as follows.
     When a user deactivates a volume group and the MWC cache is not
     clean, the deactivation routine detects this and panics.  When the
     last user LV (/dev/vgXX/lvol[1-n]) of a volume group is being
     closed while the controlling LV (/dev/vgXX/group) is still open,
     the close should also wait for all outstanding MWC cache writes to
     finish.

     PHKL_6969:
     If an LVM disk has bad sectors in the bad block directory area,
     vgchange -a y on that volume group causes the panic.

     PHKL_6852:
     The customer has a T500 running HP-UX 10.01 and ran into the
     "/etc/lvmtab is out of date with running kernel" problem when
     doing a vgcfgbackup.  The current PV value was set to 7, while
     there were only 6 disks in /etc/lvmtab.
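     As a hedged illustration of how the PHKL_6852 symptom is typically
     seen (the volume group name is illustrative):

          vgcfgbackup /dev/vg00
          # Before this fix the command could fail, complaining that
          # /etc/lvmtab is out of date with the running kernel, because
          # the kernel's current PV count (7) no longer matched the six
          # disks recorded in /etc/lvmtab.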
SR:
     1653162669 4701315317 4701317131 4701318352 4701321554
     4701321695 4701321711 5000697466 5000714352 5003323493

Patch Files:
     /usr/conf/lib/liblvm.a(lv_cluster_lock.o)
     /usr/conf/lib/liblvm.a(lv_hp.o)
     /usr/conf/lib/liblvm.a(lv_ioctls.o)
     /usr/conf/lib/liblvm.a(lv_lvsubr.o)
     /usr/conf/lib/liblvm.a(lv_mircons.o)
     /usr/conf/lib/liblvm.a(lv_schedule.o)
     /usr/conf/lib/liblvm.a(lv_strategy.o)
     /usr/conf/lib/liblvm.a(lv_subr.o)
     /usr/conf/lib/liblvm.a(lv_syscalls.o)
     /usr/conf/lib/liblvm.a(lv_vgsa.o)
     /usr/conf/lib/liblvm.a(sh_vgsa.o)
     /usr/conf/lib/liblvm.a(slvm_schedule.o)

what(1) Output:
     /usr/conf/lib/liblvm.a(lv_cluster_lock.o):
          lv_cluster_lock.c $Date: 96/05/12 21:28:54 $
          $Revision: 1.3.89.11 $ PATCH_10.10 (PHKL_7164)
     /usr/conf/lib/liblvm.a(lv_hp.o):
          lv_hp.c $Date: 96/04/03 17:32:33 $
          $Revision: 1.8.89.34 $ PATCH_10.10 (PHKL_7187)
     /usr/conf/lib/liblvm.a(lv_ioctls.o):
          lv_ioctls.c $Date: 96/05/12 21:19:10 $
          $Revision: 1.8.89.19 $ PATCH_10.10 (PHKL_7164)
     /usr/conf/lib/liblvm.a(lv_lvsubr.o):
          lv_lvsubr.c $Date: 96/06/28 17:15:43 $
          $Revision: 1.8.89.26 $ PATCH_10.10 (PHKL_7868)
     /usr/conf/lib/liblvm.a(lv_mircons.o):
          lv_mircons.c $Date: 96/04/03 17:24:40 $
          $Revision: 1.8.89.16 $ PATCH_10.10 (PHKL_7187)
     /usr/conf/lib/liblvm.a(lv_schedule.o):
          lv_schedule.c $Date: 96/03/07 10:30:05 $
          $Revision: 1.8.89.13 $ PATCH_10.10 (PHKL_6969)
     /usr/conf/lib/liblvm.a(lv_strategy.o):
          lv_strategy.c $Date: 96/03/07 10:30:11 $
          $Revision: 1.8.89.11 $ PATCH_10.10 (PHKL_6969)
     /usr/conf/lib/liblvm.a(lv_subr.o):
          lv_subr.c $Date: 96/05/12 21:28:51 $
          $Revision: 1.8.89.15 $ PATCH_10.10 (PHKL_7164)
     /usr/conf/lib/liblvm.a(lv_syscalls.o):
          lv_syscalls.c $Date: 96/03/20 09:39:08 $
          $Revision: 1.8.89.11 $ PATCH_10.10 (PHKL_7050)
     /usr/conf/lib/liblvm.a(lv_vgsa.o):
          lv_vgsa.c $Date: 96/03/07 10:30:14 $
          $Revision: 1.8.89.18 $ PATCH_10.10 (PHKL_6969)
     /usr/conf/lib/liblvm.a(sh_vgsa.o):
          sh_vgsa.c $Date: 96/03/07 10:32:58 $
          $Revision: 1.2.89.11 $ PATCH_10.10 (PHKL_6969)
     /usr/conf/lib/liblvm.a(slvm_schedule.o):
          slvm_schedule.c $Date: 96/03/07 10:30:17 $
          $Revision: 1.2.89.15 $ PATCH_10.10 (PHKL_6969)

cksum(1) Output:
     2029190196 10124 /usr/conf/lib/liblvm.a(lv_cluster_lock.o)
     1814693663 41352 /usr/conf/lib/liblvm.a(lv_hp.o)
     3155594092 22584 /usr/conf/lib/liblvm.a(lv_ioctls.o)
     1252925419 33576 /usr/conf/lib/liblvm.a(lv_lvsubr.o)
     1211201130 18160 /usr/conf/lib/liblvm.a(lv_mircons.o)
     4123578667 19824 /usr/conf/lib/liblvm.a(lv_schedule.o)
     3832661300 7340  /usr/conf/lib/liblvm.a(lv_strategy.o)
     1767820908 8600  /usr/conf/lib/liblvm.a(lv_subr.o)
     1429141636 17700 /usr/conf/lib/liblvm.a(lv_syscalls.o)
     2787520754 12764 /usr/conf/lib/liblvm.a(lv_vgsa.o)
     3274061036 28620 /usr/conf/lib/liblvm.a(sh_vgsa.o)
     266753548  6984  /usr/conf/lib/liblvm.a(slvm_schedule.o)

Patch Conflicts: None

Patch Dependencies:
     s700: 10.10: PHCO_6702

Hardware Dependencies: None

Other Dependencies: None

Supersedes:
     PHKL_6852 PHKL_6969 PHKL_7050 PHKL_7164 PHKL_7187

Equivalent Patches:
     PHKL_7846: s700: 10.01
     PHKL_7847: s800: 10.01
     PHKL_7869: s800: 10.10
     PHKL_7870: s700: 10.20
     PHKL_7871: s800: 10.20

Patch Package Size: 290 Kbytes

Installation Instructions:
     Please review all instructions and the Hewlett-Packard SupportLine
     User Guide or your Hewlett-Packard support terms and conditions
     for precautions, scope of license, restrictions, and limitation of
     liability and warranties, before installing this patch.
     ------------------------------------------------------------
     1. Back up your system before installing a patch.

     2. Login as root.

     3. Copy the patch to the /tmp directory.

     4. Move to the /tmp directory and unshar the patch:

          cd /tmp
          sh PHKL_7868

     5a. For a standalone system, run swinstall to install the patch:

          swinstall -x autoreboot=true -x match_target=true \
               -s /tmp/PHKL_7868.depot

     5b. For a homogeneous NFS Diskless cluster, run swcluster on the
         server to install the patch on the server and the clients:

          swcluster -i -b

         This will invoke swcluster in interactive mode and force all
         clients to be shut down.

         WARNING: All cluster clients must be shut down prior to the
         patch installation.  Installing the patch while the clients
         are booted is unsupported and can lead to serious problems.

         The swcluster command will invoke an swinstall session in
         which you must specify:

              alternate root path - default is /export/shared_root/OS_700
              source depot path   - /tmp/PHKL_7868.depot

         To complete the installation, select the patch by choosing
         "Actions -> Match What Target Has" and then
         "Actions -> Install" from the Menubar.

     5c. For a heterogeneous NFS Diskless cluster:

         - run swinstall on the server as in step 5a to install the
           patch on the cluster server.

         - run swcluster on the server as in step 5b to install the
           patch on the cluster clients.  The cluster clients must be
           shut down as described in step 5b.

     By default swinstall will archive the original software in
     /var/adm/sw/patch/PHKL_7868.  If you do not wish to retain a copy
     of the original software, you can create an empty file named
     /var/adm/sw/patch/PATCH_NOSAVE.

     Warning: If this file exists when a patch is installed, the patch
     cannot be deinstalled.  Please be careful when using this feature.

     It is recommended that you move the PHKL_7868.text file to
     /var/adm/sw/patch for future reference.
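     For example, assuming the patch was unshar'd in /tmp as in step 4
     (so that PHKL_7868.text was created there), those two optional
     actions could look like this:

          # Only if you do NOT want to keep the original files; note
          # that the patch can then no longer be deinstalled.
          touch /var/adm/sw/patch/PATCH_NOSAVE

          # Keep the patch documentation for future reference.
          mv /tmp/PHKL_7868.text /var/adm/sw/patch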
     To put this patch on a magnetic tape and install from the tape
     drive, use the command:

          dd if=/tmp/PHKL_7868.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:
     Due to the number of objects in this patch, the customization
     phase of the update may take more than 10 minutes.  During that
     time the system will not appear to make forward progress, but it
     will actually be installing the objects.

     If a system has the base VxFS product and this patch installed,
     and the advanced VxFS product needs to be installed, the patch
     must first be removed and the advanced product installed.  After
     the installation of the advanced VxFS product, the patch can be
     installed again.
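     As a hedged outline of that sequence (the depot path and product
     tag used for the advanced VxFS product below are placeholders, not
     taken from this patch text):

          swremove PHKL_7868
          swinstall -s /tmp/advanced_vxfs.depot ADVANCED_VXFS
          swinstall -x autoreboot=true -x match_target=true \
               -s /tmp/PHKL_7868.depot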