Patch Name: PHNE_9359

Patch Description: s700 10.10 NFS Kernel Megapatch

Creation Date: 96/11/27
Post Date: 96/12/06

Hardware Platforms - OS Releases:
     s700: 10.10

Products: N/A

Filesets: OS-Core.CORE-KRN

Automatic Reboot?: Yes

Status: General Superseded

Critical: Yes
     PHNE_9359: PANIC Data page fault in binvalfree_vfs()
     PHKL_8542: HANG PANIC Data page fault
     PHKL_8416: PANIC Data page fault in svr_getreqset()

Path Name: /hp-ux_patches/s700/10.X/PHNE_9359

Symptoms:
     PHNE_9359:
     1. A data page fault from binvalfree_vfs occurs in an MP
        environment.
     2. System hangs occur on large systems.
     3. The length of a timeout for an NFS request may become
        extremely long (on the order of minutes).

     PHKL_8542:
     1. The previous NFS Megapatch caused another data page fault
        in an MP environment due to synchronization problems with
        the client's biods.
     2. PCNFSD requests have been seen to hang the system.

     PHKL_8416:
     1. The previous NFS Megapatch caused another data page fault
        in an MP environment due to synchronization problems with
        the client's biods.
     2. A panic within kernel RPC (in svr_getreqset) in an MP
        environment is generated due to another synchronization
        problem.

     PHKL_7633:
     1. When systems which support large UIDs are clients of or
        servers to systems supporting a smaller maximum UID,
        several types of symptoms may occur:
        - logins on NFS clients may receive incorrect access on
          NFS servers
        - files from NFS servers may appear to be owned by the
          wrong logins on NFS clients
        - setuid and setgid binaries available on NFS servers may
          allow client logins to run with incorrect access
     2. Performance for MP clients on larger n-way systems may be
        less than desirable.
     3. Unmounting an NFS file system temporarily hangs the
        client.

Defect Description:
     PHNE_9359:
     1. The binvalfree_vfs algorithm did not recheck the status of
        a buffer cache pointer after acquiring the spinlock meant
        to protect the cache entry, leaving a race condition
        window between the initial check and the acquisition of
        the spinlock.
     2. Incorrect usage of the dnlc purge functions.
     3. The maximum timeout values defined in RPC were very long,
        and neither the RPC nor the NFS values matched those of
        Sun.

     PHKL_8542:
     1. The kernel's biod support code did not sufficiently
        protect against MP race conditions.
     2. Read requests with offsets which are out of bounds will
        hang the system in the lower (vfs, ufs) layers.

     PHKL_8416:
     1. The kernel's biod support code did not sufficiently
        protect against MP race conditions.
     2. The RPC processor affinity implementation used by nfsds
        was not sufficiently protected against MP race conditions.

     PHKL_7633:
     1. A future HP-UX release will increase the value of MAXUID,
        providing for a greater range of valid UIDs and GIDs. It
        will also introduce problems in mixed-mode NFS
        environments. Let "LUID" denote a machine running a
        version of HP-UX with large-UID capability, and "SUID" a
        machine with current small-UID capability. The following
        problems may occur:

        LUID client, SUID server
        - Client logins outside the server's range may appear as
          the anonymous user. However, the anonymous user UID is
          configurable, and is sometimes configured as the root
          user (in order to "trust" all client root logins without
          large-scale modifications to the /etc/exports file).
          Thus, all logins with large UIDs on the client could be
          mapped to root on the server.
        - If this previous patch has not been applied, files
          created by logins with large UIDs on the client will
          have the wrong UID on the server. This could be
          exploited by particular UIDs to gain root access on the
          server.
        - Files owned by the nobody user on the server will appear
          to be owned by the wrong user on the client.

        SUID client, LUID server
        - Files owned by large-UID logins on the server will
          appear to be owned by the wrong user on the client.
        - Executables with the setuid or setgid mode turned on
          will allow logins on the client to run as the wrong
          users.

     2. MP clients use the file system semaphore (an alpha
        semaphore) within NFS, which is not an efficient
        synchronization technique.
     3. The algorithm for flushing buffer caches is inefficient,
        forcing multiple walks of the buffer cache. Large system
        memory forces large buffer caches, with the result being
        very slow cache flushes.

SR:  1653192294 5003344226 5003330894 5003327502 5003327338
     5003321513 5003319145 5003279927 4701314302 1653167379

Patch Files:
     /usr/conf/lib/libnfs.a(clnt_kudp.o)
     /usr/conf/lib/libnfs.a(nfs_export.o)
     /usr/conf/lib/libnfs.a(nfs_server.o)
     /usr/conf/lib/libnfs.a(nfs_subr.o)
     /usr/conf/lib/libnfs.a(nfs_vnops.o)
     /usr/conf/lib/libnfs.a(svc.o)

what(1) Output:
     /usr/conf/lib/libnfs.a(nfs_export.o):
          nfs_export.c $Date: 96/11/26 16:21:21 $
          $Revision: 1.1.102.8 $ PATCH_10.10 PHNE_9359
     /usr/conf/lib/libnfs.a(nfs_server.o):
          nfs_server.c $Date: 96/11/26 16:21:38 $
          $Revision: 1.1.102.13 $ PATCH_10.10 PHNE_9359
     /usr/conf/lib/libnfs.a(nfs_subr.o):
          nfs_subr.c $Date: 96/12/03 13:19:32 $
          $Revision: 1.1.102.8 $ PATCH_10.10 PHNE_9359
     /usr/conf/lib/libnfs.a(nfs_vnops.o):
          nfs_vnops.c $Date: 96/11/26 16:25:22 $
          $Revision: 1.1.102.11 $ PATCH_10.10 PHNE_9359
     /usr/conf/lib/libnfs.a(svc.o):
          svc.c $Date: 96/11/26 16:17:02 $
          $Revision: 1.7.102.6 $ PATCH_10.10 PHNE_9359
     /usr/conf/lib/libnfs.a(clnt_kudp.o):
          clnt_kudp.c $Date: 96/11/26 16:18:03 $
          $Revision: 1.8.102.6 $ PATCH_10.10 PHNE_9359

cksum(1) Output:
     2739201053 9876 /usr/conf/lib/libnfs.a(nfs_export.o)
     591827135 29004 /usr/conf/lib/libnfs.a(nfs_server.o)
     1899923912 22124 /usr/conf/lib/libnfs.a(nfs_subr.o)
     1741488548 35448 /usr/conf/lib/libnfs.a(nfs_vnops.o)
     364840424 5924 /usr/conf/lib/libnfs.a(svc.o)
     20046535 11932 /usr/conf/lib/libnfs.a(clnt_kudp.o)

Patch Conflicts: None

Patch Dependencies:
     s700: 10.10: PHKL_9073

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHKL_7633 PHKL_8416 PHKL_8542

Equivalent Patches:
     PHNE_9360: s800: 10.10

Patch Package Size: 180 Kbytes

Installation Instructions:
     Please review all instructions and the Hewlett-Packard
     SupportLine User Guide or your Hewlett-Packard support terms
     and conditions for precautions, scope of license,
     restrictions, and limitation of liability and warranties,
     before installing this patch.
     ------------------------------------------------------------
     1. Back up your system before installing a patch.

     2. Login as root.

     3. Copy the patch to the /tmp directory.

     4. Move to the /tmp directory and unshar the patch:

          cd /tmp
          sh PHNE_9359

     5a. For a standalone system, run swinstall to install the
         patch:

          swinstall -x autoreboot=true -x match_target=true \
               -s /tmp/PHNE_9359.depot

     5b. For a homogeneous NFS Diskless cluster, run swcluster on
         the server to install the patch on the server and the
         clients:

          swcluster -i -b

         This will invoke swcluster in interactive mode and force
         all clients to be shut down.

         WARNING: All cluster clients must be shut down prior to
         the patch installation. Installing the patch while the
         clients are booted is unsupported and can lead to serious
         problems.

         The swcluster command will invoke an swinstall session in
         which you must specify:

          alternate root path - default is /export/shared_root/OS_700
          source depot path   - /tmp/PHNE_9359.depot

         To complete the installation, select the patch by
         choosing "Actions -> Match What Target Has" and then
         "Actions -> Install" from the Menubar.

     5c.
         For a heterogeneous NFS Diskless cluster:
         - run swinstall on the server as in step 5a to install
           the patch on the cluster server.
         - run swcluster on the server as in step 5b to install
           the patch on the cluster clients.

     By default swinstall will archive the original software in
     /var/adm/sw/patch/PHNE_9359. If you do not wish to retain a
     copy of the original software, you can create an empty file
     named /var/adm/sw/patch/PATCH_NOSAVE.

     Warning: If this file exists when a patch is installed, the
     patch cannot be deinstalled. Please be careful when using
     this feature.

     It is recommended that you move the PHNE_9359.text file to
     /var/adm/sw/patch for future reference.

     To put this patch on a magnetic tape and install from the
     tape drive, use the command:

          dd if=/tmp/PHNE_9359.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:
     The installation order for this patch and the dependent
     patches, which contain the non-NFS part of the fix for
     handling large UIDs, is irrelevant.

     The following is a useful process for applying more than one
     patch while requiring only a single reboot after the final
     patch installation:

     1) Copy the individual depots into /tmp.

     2) Make a new directory to contain the set of patches:

          mkdir /tmp/DEPOT    # For example

     3) For each patch "PHNE_9359":

          swcopy -s /tmp/PHNE_9359.depot \* @ /tmp/DEPOT

     4) Run swinstall on the combined depot:

          swinstall -x match_target=true -x autoreboot=true \
               -s /tmp/DEPOT
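The multi-patch process above can be sketched as a small shell helper that prints the exact command sequence for review before anything is executed. This is a sketch only: the function name plan_patch_install is hypothetical, and the second patch in the example (PHKL_9073, this patch's listed dependency) is shown purely as an illustration of staging more than one depot.

```shell
#!/bin/sh
# Print the command sequence for staging several patch depots into one
# combined depot, so that a single swinstall (and a single reboot)
# applies them all.  plan_patch_install is a hypothetical helper name;
# review the printed commands before piping them to sh.
plan_patch_install() {
    depot=$1
    shift
    echo "mkdir -p $depot"
    for p in "$@"; do
        # \* is escaped so the software selection reaches swcopy unglobbed.
        echo "swcopy -s /tmp/$p.depot \\* @ $depot"
    done
    echo "swinstall -x match_target=true -x autoreboot=true -s $depot"
}

# Example: stage this patch plus its dependency PHKL_9073.
plan_patch_install /tmp/DEPOT PHNE_9359 PHKL_9073
```

Running the script only prints the mkdir, swcopy, and swinstall commands in order; piping its output to sh would execute them, still ending with the single autoreboot-enabled swinstall of step 4.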