Patch Name: PHKL_21369

Patch Description: s700 10.20 LVM Cumulative patch w/FC Hub + Optimus support

Creation Date: 00/04/28

Post Date: 00/05/02

Hardware Platforms - OS Releases:
    s700: 10.20

Products: N/A

Filesets: LVM.LVM-KRN
          OS-Core.CORE-KRN

Automatic Reboot?: Yes

Status: General Superseded

Critical: No (superseded patches were critical)
    PHKL_19704: PANIC
    PHKL_20963: HANG
    PHKL_19696: PANIC
    PHKL_19209: OTHER
        This patch is essential to improve the recovery of FC devices
        in large configurations.
    PHKL_19166: HANG

Path Name: /hp-ux_patches/s700/10.X/PHKL_21369

Symptoms:

PHKL_21369:

(SR: 8606108373 CR: JAGab78776)
Reads from a mirrored LV can take a very long time to complete if one
of the mirrors is unavailable.

(SR: 8606106798 CR: JAGab76189)
With PHKL_20963 (recalled) installed, LVM may perform a full resync
every time a volume group is activated.  While the resync is in
progress, system performance may be degraded.  In addition, since LVM
does not have two valid copies of all data during the resync, the
system is vulnerable to a disk failure until the resync completes.

PHKL_19704:

(SR: 1653305987 DTS: JAGab17773)
If bad block relocation is enabled on a logical volume with parallel
read support, any request to a block that is currently being
relocated results in a system panic.

PHKL_20963:

(SR: 8606128444 CR: JAGac81735)
It might not be possible to activate a volume group in shared mode if
any of its physical volumes are on an Optimus Prime Disk Array.

(SR: 8606106637 CR: JAGab75913)
An LVM deadlock (hang) can occur when LVM commands which operate on
logical volumes are run at the same time as device query operations.
The LVM commands and the query operations never complete (and cannot
be terminated), and no subsequent LVM commands can be run.
Furthermore, subsequent device recovery may be delayed indefinitely.
The only way to restore normal operation is to reboot the system.
For example, lvmerge(1M) or lvsplit(1M) run together with glance(1)
could deadlock, making no forward progress and becoming impossible to
interrupt or kill.  The same result could occur when running
lvchange(1M) or lvextend(1M) together with lvdisplay(1M).  The defect
was introduced in PHKL_19209, which has since been recalled.

This new patch supersedes PHKL_19696, PHKL_19209, PHKL_20040 and
PHKL_20807, and contains all the fixes from those patches.  Customers
with any of these superseded patches installed should apply this new
patch.

(SR: 8606106012 CR: JAGab74797)
It is possible for an I/O request to be accepted while a logical
volume is being closed, causing the operating system to panic.
Typical actions that close a logical volume are unmounting a
filesystem and closing a database (or other) application which uses
raw logical volumes.  The panic would likely be a data page fault in
an LVM ("lv_") routine.

PHKL_20807:

(SR: 8606100412 DTS: JAGab31786)
This is an interim patch to support the Optimus disk array.

PHKL_20040:

(SR: 8606100412 DTS: JAGab31786)
LVM incorrectly treats two volumes within an Optimus disk array as
alternate paths to a single volume because they have the same LUN ID,
even though they are distinct volumes with different target IDs.

PHKL_19696:

(SR: 8606101971 DTS: JAGab66231)
If there is a problem with the physical volume to which a logical
volume is mapped, LVM returns an EIO error for logical requests
without retrying until the I/O timeout value set on that logical
volume has been reached.

PHKL_19209:

(SR: 5003437970 DTS: JAGaa40887)
When multiple physical volumes or paths to physical volumes are lost,
recovering them can take minutes.  While the PVs for a given volume
group are tested, locks are held which delay other LVM operations as
well as the opens and closes of logical volumes.  Prior changes to
the device recovery code provided some benefit, assuring a recovery
time of 1-2 minutes regardless of the number of paths or devices to
be recovered, but this was still not enough.  The new device recovery
code in this patch reduces the recovery time to under 35 seconds,
again independent of the number of paths or devices offline.

PHKL_19166:

(SR: 8606100864 DTS: JAGab39559)
(SR: 4701424846 DTS: JAGab14452)
Performance degradation when massively parallel subpage size (<8K)
reads are performed (as with Informix).

(SR: 8606100864 DTS: JAGab39559)
(SR: 1653289132 DTS: JAGaa67952)
The system hangs when lvmkd is waiting for a lock obtained earlier by
an application that performs a vg_create operation.  The hang does
not happen unless there is a powerfailed disk.

(SR: 8606100864 DTS: JAGab39559)
(SR: 4701424895 DTS: JAGab14455)
Optimus Disk Arrays (model number A5277A-HP) are not recognized as
ACTIVE/PASSIVE devices and consequently are not handled properly by
the driver.

PHKL_17546:

(SR: 1653289553 DTS: JAGaa46305)
LVM's autoresync after disk powerfail can leave extents stale.

Defect Description:

PHKL_21369:

(SR: 8606108373 CR: JAGab78776)
When reading from a mirrored logical volume, LVM might try a disk
that is known to be off line before it tries another disk which is
still available.  The read is delayed while the first I/O times out.

Resolution: In selecting the best mirror to read from, give
preference to disks that are still available over disks that are
known to be off line.
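
The following sketch shows the shape of such a read-preference
policy.  It is an illustration only, not the actual HP-UX kernel
source; the structure and helper names are invented.

    /* Prefer mirrors on available disks; fall back to an offline
     * disk only when no mirror is available. */
    struct mirror {
        int  pv_online;     /* nonzero if the PV is believed available */
        long pending_ios;   /* queue depth, used as a tie-breaker      */
    };

    int
    choose_read_mirror(const struct mirror *m, int nmirrors)
    {
        int i, best = -1;

        for (i = 0; i < nmirrors; i++) {
            if (!m[i].pv_online)
                continue;   /* skip disks known to be off line */
            if (best < 0 || m[i].pending_ios < m[best].pending_ios)
                best = i;
        }
        if (best < 0 && nmirrors > 0)
            best = 0;       /* last resort: every mirror is off line */
        return (best);
    }

Before the fix, an offline mirror could be chosen while an available
one existed, stalling the read until the I/O timed out.
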
(SR: 8606106798 CR: JAGab76189)
When a volume group is activated, LVM validates a data structure on
each physical volume called the Mirror Consistency Record (MCR).  If
the MCR is not valid, LVM performs a full resync of the physical
volume and should rewrite the MCR.  But with PHKL_20963 installed,
LVM does not rewrite the MCR.  Instead, if the MCR is invalid, LVM
performs a full resync every time the volume group is activated,
rather than just the first time.  While the resync is in progress,
system performance may be degraded.  In addition, since LVM does not
have two valid copies of all data during the resync, the system is
vulnerable to a disk failure until the resync completes.  (This
problem does not affect the performance or availability of mirrored
logical volumes after the resync has completed.)  With PHKL_20963
installed, LVM might also display the wrong physical volume (PV)
number in certain diagnostic messages.  PHKL_21369 corrects this as
well.

Resolution: If the MCR is not valid, rewrite it after performing a
full resync.
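
A minimal sketch of the corrected activation path, with invented
helper names, looks like this:

    struct pv;                              /* physical volume (opaque) */
    extern int  mcr_is_valid(struct pv *);
    extern void full_resync(struct pv *);
    extern void rewrite_mcr(struct pv *);

    void
    check_mirror_consistency(struct pv *pv)
    {
        if (!mcr_is_valid(pv)) {
            full_resync(pv);
            /* The step PHKL_20963 omitted: without rewriting the
             * MCR it stays invalid, so every later activation of
             * the volume group triggers another full resync. */
            rewrite_mcr(pv);
        }
    }
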
PHKL_19704:

(SR: 1653305987 DTS: JAGab17773)
Previously, LVM did not allow bad block relocation concurrently with
parallel read operations, in order to keep bad block relocation
consistent.  So when a bad block relocation was in progress and a new
request arrived for the same block, the system panicked.  As an
enhancement, bad block relocation is now allowed alongside parallel
read operations: if the block being accessed is under relocation, we
either initiate the relocation or wait until it completes, depending
on the state.  If REL_DESIRED is set on a block (meaning a read
noticed a bad block), we initiate the relocation for that block.  If
REL_PENDING or REL_DEVICE is set (meaning relocation is in progress),
we wait until the relocation is completed and then perform the I/O
from the new location.

Resolution: Modified lv_hwreloc() and lv_validblk() to take the
appropriate action for the states described above.
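
As a rough illustration of the per-block handling (the state names
follow the text above; everything else is invented, and the real
logic lives in lv_hwreloc() and lv_validblk()):

    enum reloc_state { REL_NONE, REL_DESIRED, REL_PENDING, REL_DEVICE };

    extern enum reloc_state block_reloc_state(long blkno);
    extern void start_relocation(long blkno);
    extern void wait_for_relocation(long blkno);
    extern void issue_io(long blkno);

    void
    read_block(long blkno)
    {
        switch (block_reloc_state(blkno)) {
        case REL_DESIRED:
            /* An earlier read saw a bad block: kick off relocation,
             * then wait for it like any in-progress relocation.
             * (Waiting here, rather than proceeding, is an
             * assumption of this sketch.) */
            start_relocation(blkno);
            /* FALLTHROUGH */
        case REL_PENDING:
        case REL_DEVICE:
            /* Relocation in progress: wait, then do the I/O from
             * the new location.  The pre-fix code panicked on this
             * overlap instead. */
            wait_for_relocation(blkno);
            break;
        default:
            break;          /* no relocation involved */
        }
        issue_io(blkno);
    }
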
PHKL_20963:

(SR: 8606128444 CR: JAGac81735)
It might not be possible to activate a volume group in shared mode if
any of its physical volumes are on an Optimus Prime Disk Array,
because the serial numbers for these devices are truncated when they
are passed between nodes in a ServiceGuard cluster.

Resolution: Do not truncate Optimus Prime serial numbers.  [Although
this defect was resolved in PHKL_20963, the original documentation
did not mention it.]

(SR: 8606106637 CR: JAGab75913)
The LVM deadlock (hang) is caused by a defect introduced in
PHKL_19209 (recalled).  In that patch, an easily encountered deadlock
condition was introduced while attempting to correct another,
relatively rare, deadlock.  The problem can be easily reproduced by
running LVM commands which operate on existing logical volumes, such
as lvextend(1M), lvsplit(1M) or lvmerge(1M), together with commands
that query logical volumes, such as glance(1).  The deadlock occurs
roughly 10% of the time, but when it does happen the consequences are
severe: the operations cannot complete, and no other LVM commands can
be run without rebooting the system.

Resolution: The LVM kernel code was modified.  The volume group lock
and other LVM locks were reordered, and a new volume group data lock
was added to allow device recovery operations to occur simultaneously
with command operations, thus correcting both the old and the newly
introduced deadlock defects.  This patch supersedes the interim patch
PHKL_20807.  It reintroduces the device recovery changes from
PHKL_19209 and the bug fixes from PHKL_19696, which were purposely
excluded from PHKL_20807.

(SR: 8606106012 CR: JAGab74797)
Because of a race condition in LVM, it is possible for an I/O request
to be accepted while the logical volume is being closed.  Eventually,
a data structure that has already been freed (as a result of closing
the logical volume) is referenced, causing the operating system to
panic.

Resolution: Eliminate the race condition so that I/O cannot proceed
after a logical volume has been closed.  [Although this defect was
resolved in PHKL_20963, the original documentation did not mention
it.]

PHKL_20807:

(SR: 8606100412 DTS: JAGab31786)
This is an interim patch to support the Optimus disk array.

PHKL_20040:

(SR: 8606100412 DTS: JAGab31786)
For some disk arrays, LVM treats all occurrences of the same LUN ID
as alternate paths to a single volume.  This assumption is not
correct for the Optimus disk array: two distinct volumes may have the
same LUN ID but different target IDs.

Resolution: To identify a unique volume in an Optimus array, LVM now
uses both its LUN ID and its target ID.
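
The identity test before and after the fix differs only in what is
compared.  A minimal sketch, with invented field names:

    struct dev_id {
        int target_id;
        int lun_id;
    };

    /* Two device paths refer to the same Optimus volume only if
     * both IDs match.  The defective code compared lun_id alone,
     * so distinct volumes sharing a LUN ID were merged into one PV
     * with a bogus "alternate path". */
    int
    same_volume(const struct dev_id *a, const struct dev_id *b)
    {
        return (a->target_id == b->target_id && a->lun_id == b->lun_id);
    }
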
PHKL_19696:

(SR: 8606101971 DTS: JAGab66231)
If the physical volume to which a logical volume is mapped has
problems, LVM returns EIO for logical requests before the
lv_iotimeout value set on the logical volume has elapsed, instead of
retrying until then.  This happens because the start time on a
request is initialized when the request is scheduled.  If the PV to
which the request is to be scheduled is down, the request is appended
to the powerfail wait queue without being scheduled.  When the PV
comes back and the buffers on the powerfail wait queue are resent,
the elapsed time of each request (current time minus start time) is
checked.  Since the start time was never initialized (the request was
never scheduled), it is still zero, which makes the computed elapsed
time exceed lv_iotimeout.  Hence the request is rejected without
being processed, even though the real elapsed time is much less than
the lv_iotimeout value.

Resolution: Initialize the logical buffer's start time in
lv_strategy(), when the request is first processed, rather than
during scheduling in lv_schedule().
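
A sketch of the accounting fix (names invented; time(2)-style seconds
for simplicity):

    #include <time.h>

    struct lbuf {
        time_t start_time;      /* when the logical request was accepted */
        time_t lv_iotimeout;    /* per-LV retry budget, in seconds       */
    };

    /* lv_strategy() analogue: stamp the start time as soon as the
     * request enters LVM.  The defective code stamped it in the
     * lv_schedule() analogue instead, so a request parked on the
     * powerfail wait queue kept a zero stamp and looked far older
     * than it really was. */
    void
    accept_request(struct lbuf *b)
    {
        b->start_time = time(NULL);
    }

    int
    request_timed_out(const struct lbuf *b)
    {
        return (time(NULL) - b->start_time > b->lv_iotimeout);
    }
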
PHKL_19209:

(SR: 5003437970 DTS: JAGaa40887)
The problem was that some of the LVM device recovery was still a
serial process.

Resolution: The LVM device recovery code was modified so that all
tests of devices and paths are conducted in parallel.  Devices which
are available are immediately brought online again, irrespective of
other failed devices or paths.  The changes in this patch assure that
devices recover within the time it takes to test the device or path
and to update its data structures.  The volume group data structures,
and the LVM operations that require them (LVM commands and the opens
and closes of logical volumes), should be held off no more than 35
seconds.

PHKL_19166:

(SR: 8606100864 DTS: JAGab39559)
(SR: 4701424846 DTS: JAGab14452)
Informix issues massive amounts of 1K reads in parallel.  With an 8K
page size and I/Os serialized within the page, performance suffers.

Resolution: Logic was added to allow reads from the same 8K page to
proceed in parallel when bad block relocation is completely disabled
(lvchange -r N).

(SR: 8606100864 DTS: JAGab39559)
(SR: 1653289132 DTS: JAGaa67952)
If the holder of the vg_lock is waiting for I/O to finish, and the
I/O cannot finish until we switch to another link, we get into a
deadlock.

Resolution: To resolve the deadlock, the code now obtains the lock
temporarily in order to switch to the alternate link, then returns
the lock to the original holder to finish the I/O.

(SR: 8606100864 DTS: JAGab39559)
(SR: 4701424895 DTS: JAGab14455)
The Optimus Array needs to be recognized as an ACTIVE/PASSIVE device.

Resolution: Added code to recognize the Optimus Array as an
ACTIVE/PASSIVE device.

PHKL_17546:

(SR: 1653289553 DTS: JAGaa46305)
lv_syncx() may return without actually syncing all the extents,
leaving some extents stale.

Resolution: Added an additional check to verify that all the extents
are synced, and to return an error otherwise; lv_syncx() now returns
SUCCESS only when the syncing is complete.  Changes were also made in
lv_resyncpv() to preserve the error value.
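
The intended post-fix behavior can be sketched as follows (invented
names; the real changes are in lv_syncx() and lv_resyncpv()):

    extern int sync_extent(int extent);   /* 0 on success, else errno */

    int
    resync_all(int nextents)
    {
        int i, rv, error = 0;

        for (i = 0; i < nextents; i++) {
            rv = sync_extent(i);
            if (rv != 0 && error == 0)
                error = rv;     /* preserve the first failure */
        }
        /* Pre-fix code could report success here even though some
         * extents above were left stale. */
        return (error);
    }
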
SR:
    1653289132   1653289553   1653305987   4701424846   4701424895
    5003437970   8606100412   8606100864   8606101971   8606106637
    8606108373   8606106798

Patch Files:
    /usr/conf/lib/libhp-ux.a(lv_lvm.o)
    /usr/conf/lib/libhp-ux.a(rw_lock.o)
    /usr/conf/lib/liblvm.a(lv_block.o)
    /usr/conf/lib/liblvm.a(lv_cluster_lock.o)
    /usr/conf/lib/liblvm.a(lv_defect.o)
    /usr/conf/lib/liblvm.a(lv_hp.o)
    /usr/conf/lib/liblvm.a(lv_ioctls.o)
    /usr/conf/lib/liblvm.a(lv_lvsubr.o)
    /usr/conf/lib/liblvm.a(lv_mircons.o)
    /usr/conf/lib/liblvm.a(lv_phys.o)
    /usr/conf/lib/liblvm.a(lv_schedule.o)
    /usr/conf/lib/liblvm.a(lv_spare.o)
    /usr/conf/lib/liblvm.a(lv_strategy.o)
    /usr/conf/lib/liblvm.a(lv_subr.o)
    /usr/conf/lib/liblvm.a(lv_syscalls.o)
    /usr/conf/lib/liblvm.a(lv_vgda.o)
    /usr/conf/lib/liblvm.a(lv_vgsa.o)
    /usr/conf/lib/liblvm.a(sh_vgsa.o)

what(1) Output:
    /usr/conf/lib/libhp-ux.a(lv_lvm.o):
        lv_lvm.c $Date: 2000/04/27 14:02:14 $ $Revision: 1.3.98.5 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/libhp-ux.a(rw_lock.o):
        rw_lock.c $Date: 2000/02/17 09:45:49 $ $Revision: 1.8.98.6 $
        PATCH_10.20 (PHKL_20963)
    /usr/conf/lib/liblvm.a(lv_block.o):
        lv_block.c $Date: 2000/04/27 13:31:03 $ $Revision: 1.13.98.8 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_cluster_lock.o):
        lv_cluster_lock.c $Date: 2000/04/27 13:40:16 $
        $Revision: 1.10.98.9 $ PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_defect.o):
        lv_defect.c $Date: 2000/04/27 13:54:21 $ $Revision: 1.16.98.8 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_hp.o):
        lv_hp.c $Date: 2000/04/27 13:59:06 $ $Revision: 1.18.98.38 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_ioctls.o):
        lv_ioctls.c $Date: 2000/04/27 13:59:23 $ $Revision: 1.18.98.27 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_lvsubr.o):
        lv_lvsubr.c $Date: 2000/04/27 13:59:31 $ $Revision: 1.15.98.26 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_mircons.o):
        lv_mircons.c $Date: 2000/04/27 13:59:34 $ $Revision: 1.14.98.9 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_phys.o):
        lv_phys.c $Date: 2000/04/27 13:59:37 $ $Revision: 1.14.98.21 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_schedule.o):
        lv_schedule.c $Date: 2000/04/27 13:59:39 $ $Revision: 1.18.98.16 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_spare.o):
        lv_spare.c $Date: 2000/04/27 13:59:58 $ $Revision: 1.3.98.12 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_strategy.o):
        lv_strategy.c $Date: 2000/04/27 14:00:04 $ $Revision: 1.14.98.18 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_subr.o):
        lv_subr.c $Date: 2000/04/27 14:00:06 $ $Revision: 1.18.98.11 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_syscalls.o):
        lv_syscalls.c $Date: 2000/04/27 14:00:07 $ $Revision: 1.14.98.13 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_vgda.o):
        lv_vgda.c $Date: 2000/04/27 14:00:09 $ $Revision: 1.18.98.8 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(lv_vgsa.o):
        lv_vgsa.c $Date: 2000/04/27 14:00:11 $ $Revision: 1.14.98.8 $
        PATCH_10.20 (PHKL_21369)
    /usr/conf/lib/liblvm.a(sh_vgsa.o):
        sh_vgsa.c $Date: 2000/04/27 14:00:13 $ $Revision: 1.3.98.11 $
        PATCH_10.20 (PHKL_21369)

cksum(1) Output:
    29448133 156624 /usr/conf/lib/libhp-ux.a(lv_lvm.o)
    2303322019 6588 /usr/conf/lib/libhp-ux.a(rw_lock.o)
    2979949460 2664 /usr/conf/lib/liblvm.a(lv_block.o)
    2417573758 10592 /usr/conf/lib/liblvm.a(lv_cluster_lock.o)
    3494888456 12920 /usr/conf/lib/liblvm.a(lv_defect.o)
    1593089359 89316 /usr/conf/lib/liblvm.a(lv_hp.o)
    3566191263 34648 /usr/conf/lib/liblvm.a(lv_ioctls.o)
    2991866460 38412 /usr/conf/lib/liblvm.a(lv_lvsubr.o)
    1339508606 18180 /usr/conf/lib/liblvm.a(lv_mircons.o)
    1289736306 7740 /usr/conf/lib/liblvm.a(lv_phys.o)
    3027240879 26432 /usr/conf/lib/liblvm.a(lv_schedule.o)
    775224458 38920 /usr/conf/lib/liblvm.a(lv_spare.o)
    911903142 7668 /usr/conf/lib/liblvm.a(lv_strategy.o)
    3771042602 10180 /usr/conf/lib/liblvm.a(lv_subr.o)
    4065182126 14080 /usr/conf/lib/liblvm.a(lv_syscalls.o)
    2819255649 9436 /usr/conf/lib/liblvm.a(lv_vgda.o)
    1055806990 12696 /usr/conf/lib/liblvm.a(lv_vgsa.o)
    1991856172 42260 /usr/conf/lib/liblvm.a(sh_vgsa.o)

Patch Conflicts: None

Patch Dependencies:
    s700: 10.20: PHKL_16750

Hardware Dependencies: None

Other Dependencies: None

Supersedes:
    PHKL_17546  PHKL_19166  PHKL_19209  PHKL_19696  PHKL_19704
    PHKL_20040  PHKL_20807  PHKL_20963

Equivalent Patches:
    PHKL_21370: s800: 10.20

Patch Package Size: 610 KBytes

Installation Instructions:

Please review all instructions and the Hewlett-Packard SupportLine
User Guide or your Hewlett-Packard support terms and conditions for
precautions, scope of license, restrictions, and limitation of
liability and warranties, before installing this patch.
------------------------------------------------------------
1. Back up your system before installing a patch.

2. Login as root.

3. Copy the patch to the /tmp directory.

4. Move to the /tmp directory and unshar the patch:

       cd /tmp
       sh PHKL_21369

5a. For a standalone system, run swinstall to install the patch:

       swinstall -x autoreboot=true -x match_target=true \
           -s /tmp/PHKL_21369.depot

By default swinstall will archive the original software in
/var/adm/sw/patch/PHKL_21369.  If you do not wish to retain a copy of
the original software, you can create an empty file named
/var/adm/sw/patch/PATCH_NOSAVE.

WARNING: If this file exists when a patch is installed, the patch
cannot be deinstalled.  Please be careful when using this feature.

It is recommended that you move the PHKL_21369.text file to
/var/adm/sw/patch for future reference.

To put this patch on a magnetic tape and install from the tape drive,
use the command:

       dd if=/tmp/PHKL_21369.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:

New tests run for the first time on this patch revealed a preexisting
problem with some LVM commands, which will be fixed in a separate
commands patch.  Commands that are run independently work fine, but
when a logical volume command (lvchange, lvcreate, lvextend,
lvreduce, lvremove, lvrmboot) is run at the same time as a volume
group command (vgchgid, vgexport, vgreduce, vgremove, vgscan), it is
possible that the 'lvlnboot -R' and vgcfgbackup portions of the
logical volume command may fail.  The best way to avoid the problem
is not to run the indicated LVM commands simultaneously.  If the
'lvlnboot -R' or vgcfgbackup operation fails, the workaround is
simply to repeat it manually.  Similarly, vgdisplay and lvdisplay
might fail if they are run while the LVM configuration is changing.
If this happens, simply repeat the vgdisplay or lvdisplay command.

There are no inconsistencies introduced into the LVM configuration
file or the on-disk LVM data structures by this defect.  However, it
is important to run the failed 'lvlnboot -R' command when boot
logical volumes are changed, and to perform configuration backups
whenever the volume group configuration is changed.

This patch depends on base patch PHKL_16750.  For successful
installation, please ensure that PHKL_16750 is either in the same
depot as this patch or already installed.

Due to the number of objects in this patch, the customization phase
of the update may take more than 10 minutes.  During that time the
system will not appear to make forward progress, but it will actually
be installing the objects.