Patch Name: PHNE_11384

Patch Description: s700 10.10 NFS Kernel General Release and Performance Patch

Creation Date: 97/06/15

Post Date: 97/06/18

Hardware Platforms - OS Releases:
     s700: 10.10

Products: N/A

Filesets: OS-Core.CORE-KRN

Automatic Reboot?: Yes

Status: General Superseded

Critical: Yes
     PHNE_11384: HANG
          Hang encountered in NFS I/O
     PHNE_10962: CORRUPTION PANIC
          Overwritten rnode error in do_bio()
          Rename of a JFS file system from PCNFS causes a panic
          Data page fault in ckuwakeup()
     PHNE_9359: PANIC
          Data page fault in binvalfree_vfs()
     PHKL_8542: HANG PANIC
          Data page fault
     PHKL_8416: PANIC
          Data page fault in svr_getreqset()

Path Name: /hp-ux_patches/s700/10.X/PHNE_11384

Symptoms:
     PHNE_11384:
          Hang when doing I/O across an NFS mount.

     PHNE_10962:
          1. Some instances of NFS writes (such as cp from an NFS
             client) may complete successfully even when errors occur.
          2. Renaming a VxFS file to another VxFS file system from a
             PCNFS client causes a panic.
          3. Disabling anonymous access is not recognized by PCNFS
             clients, allowing them to run as a privileged user.
          4. Data page faults occur on the client.
          5. Directories in an NFS-mounted file system are created
             with a permissions value of 000.

     PHNE_9359:
          1. A data page fault from binvalfree_vfs occurs in an MP
             environment.
          2. System hangs occur on large systems.
          3. The timeout for an NFS request may become extremely long
             (on the order of minutes).

     PHKL_8542:
          1. The previous NFS Megapatch caused another data page fault
             in an MP environment due to synchronization problems with
             the client's biod daemons.
          2. PCNFSD requests have been seen to hang the system.

     PHKL_8416:
          1. The previous NFS Megapatch caused another data page fault
             in an MP environment due to synchronization problems with
             the client's biod daemons.
          2. A panic within kernel RPC (in svr_getreqset) in an MP
             environment is generated due to another synchronization
             problem.

     PHKL_7633:
          1. When systems which support large UIDs are clients of or
             servers to systems supporting a smaller maximum UID,
             several types of symptoms may occur:
             - Logins on NFS clients may receive incorrect access on
               NFS servers.
             - Files from NFS servers may appear to be owned by the
               wrong logins on NFS clients.
             - setuid and setgid binaries available on NFS servers may
               allow client logins to run with incorrect access.
          2. Performance for MP clients on larger n-way systems may be
             less than desirable.
          3. Unmounting an NFS file system temporarily hangs the
             client.

Defect Description:
     PHNE_11384:
          Hang in clnt_kudp.o.

     PHNE_10962:
          1. Error codes kept in the rnode for an NFS client's file
             may get overwritten, and therefore not reported back to
             the caller when the file is closed (a sketch of the
             error-preservation pattern follows the PHNE_9359 defect
             list below).
          2. The NFS server rename procedures do not check for
             differing VxFS file systems when a rename is requested,
             which causes a panic down in VxFS.
          3. The server authorization program does not properly check
             for anonymous access when user IDs of -2 are used.
          4. The netisr callout function did not protect against a
             race condition.
          5. The sattr_to_vattr routine does not initialize attribute
             values when a file size error is encountered. The calling
             routines should handle the error and not perform the
             action (making a directory, or making a symbolic link).

     PHNE_9359:
          1. The binvalfree_vfs algorithm did not recheck the status
             of a buffer cache pointer after acquiring the spinlock
             meant to protect the cache entry, leaving a race window
             between the initial check and the acquisition of the
             spinlock (see the sketch following this list).
          2. Incorrect usage of the dnlc purge functions.
          3. The maximum timeout values defined in RPC were very long,
             and neither the RPC nor the NFS values matched those of
             Sun.
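     The binvalfree_vfs defect under PHNE_9359 above is an instance of
     the classic check-then-lock race. The minimal C sketch below shows
     the re-validation pattern that such a fix implies; all names used
     here (struct buf, b_vfsp, bp_lock, spinlock, spinunlock,
     invalidate) are illustrative assumptions, not the actual HP-UX
     buffer cache interfaces.

          struct vfs;

          struct buf {
              struct vfs *b_vfsp;   /* file system owning this buffer */
              int         b_busy;   /* nonzero while another CPU holds it */
          };

          extern void *bp_lock(struct buf *bp);   /* per-entry spinlock */
          extern void  spinlock(void *lock);
          extern void  spinunlock(void *lock);
          extern void  invalidate(struct buf *bp);

          void
          invalidate_if_still_ours(struct buf *bp, struct vfs *vfsp)
          {
              if (bp->b_vfsp != vfsp)       /* unlocked first check */
                  return;

              spinlock(bp_lock(bp));
              /*
               * Another processor may have freed or reassigned the
               * buffer between the check above and acquiring the
               * lock, so the state must be re-validated while the
               * lock is held -- the step the defective code omitted.
               */
              if (bp->b_vfsp == vfsp && !bp->b_busy)
                  invalidate(bp);
              spinunlock(bp_lock(bp));
          }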
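     The first PHNE_10962 defect above concerns write errors stored in
     the client's rnode being overwritten before close(). The C sketch
     below shows one way such an error can be preserved and reported
     exactly once; the structure layout and names (struct rnode,
     r_error) are assumptions for illustration only, not the HP-UX
     source.

          struct rnode {
              int r_error;    /* first async write error, 0 if none */
              /* ... other per-file NFS client state ... */
          };

          /* Record a write error without clobbering one already pending. */
          static void
          rnode_set_error(struct rnode *rp, int error)
          {
              if (error != 0 && rp->r_error == 0)
                  rp->r_error = error;   /* keep the first error for close() */
          }

          /* At close(), report the saved error and clear it. */
          static int
          rnode_close_error(struct rnode *rp)
          {
              int error = rp->r_error;

              rp->r_error = 0;
              return error;              /* returned to the application */
          }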
     PHKL_8542:
          1. The kernel's biod support code did not sufficiently
             protect against MP race conditions.
          2. Read requests with out-of-bounds offsets will hang the
             system in the lower (vfs, ufs) layers.

     PHKL_8416:
          1. The kernel's biod support code did not sufficiently
             protect against MP race conditions.
          2. The RPC processor affinity implementation used by the
             nfsd daemons was not sufficiently protected against MP
             race conditions.

     PHKL_7633:
          1. A future HP-UX release will increase the value of MAXUID,
             providing for a greater range of valid UIDs and GIDs. It
             will also introduce problems in mixed-mode NFS
             environments. Let "LUID" specify a machine running a
             version of HP-UX with large-UID capability. Let "SUID"
             specify a machine with the current small-UID capability.
             The following problems may occur (a simplified sketch of
             the UID mapping follows this list):

             LUID client, SUID server
             - Client logins outside the server's range may appear as
               the anonymous user. However, the anonymous user UID is
               configurable, and is sometimes configured as the root
               user (in order to "trust" all client root logins
               without large-scale modifications to the /etc/exports
               file). Thus, all logins with large UIDs on the client
               could be mapped to root on the server.
             - If this previous patch has not been applied, files
               created by logins with large UIDs on the client will
               have the wrong UID on the server. This could be
               exploited by particular UIDs to gain root access on the
               server.
             - Files owned by the nobody user on the server will
               appear to be owned by the wrong user on the client.

             SUID client, LUID server
             - Files owned by large-UID logins on the server will
               appear to be owned by the wrong user on the client.
             - Executables with the setuid or setgid mode turned on
               will allow logins on the client to run as the wrong
               users.

          2. MP clients use the file system semaphore (an alpha
             semaphore) within NFS, which is not an efficient
             synchronization technique.
          3. The algorithm for flushing buffer caches is inefficient,
             forcing multiple walks of the buffer cache. Large system
             memory forces large buffer caches, with the result being
             very slow cache flushes.
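     The mixed-mode UID problem described above comes down to how a
     small-UID server maps an out-of-range client UID. A minimal C
     sketch of that mapping follows; the constant MAXUID_SMALL, the
     anon_uid parameter, and the function name are assumptions chosen
     for illustration, not the actual HP-UX implementation.

          #include <sys/types.h>

          #define MAXUID_SMALL 60000L  /* example cap on a small-UID server */

          /*
           * Map a UID received over the wire to one the small-UID
           * server can store. anon_uid is the configured anonymous
           * UID; if it has been set to 0 (root), every out-of-range
           * client login silently becomes root -- the exposure noted
           * in the description above.
           */
          uid_t
          map_client_uid(long wire_uid, uid_t anon_uid)
          {
              if (wire_uid < 0 || wire_uid > MAXUID_SMALL)
                  return anon_uid;     /* out of range: treat as anonymous */
              return (uid_t)wire_uid;
          }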
SR:
     5003352534   5003344226   5003343277   5003330894   5003327338
     5003326090   5003322370   5003321513   5003319665   5003319145
     5003279927   5003279091   4701341669   4701314302   4701306837
     4701306829   1653197632   1653192294   1653150599   1653146886
     1653146308   1653134924   1653101337

Patch Files:
     /usr/conf/lib/libnfs.a(clnt_kudp.o)
     /usr/conf/lib/libnfs.a(nfs_export.o)
     /usr/conf/lib/libnfs.a(nfs_server.o)
     /usr/conf/lib/libnfs.a(nfs_subr.o)
     /usr/conf/lib/libnfs.a(nfs_vnops.o)
     /usr/conf/lib/libnfs.a(svc.o)

what(1) Output:
     /usr/conf/lib/libnfs.a(clnt_kudp.o):
          clnt_kudp.c $Date: 97/06/16 13:36:39 $ $Revision: 1.8.102.10 $
          PATCH_10.10 PHNE_11384
     /usr/conf/lib/libnfs.a(nfs_export.o):
          nfs_export.c $Date: 97/05/16 09:35:47 $ $Revision: 1.1.102.9 $
          PATCH_10.10 PHNE_10962
     /usr/conf/lib/libnfs.a(nfs_server.o):
          nfs_server.c $Date: 97/05/16 09:35:56 $ $Revision: 1.1.102.15 $
          PATCH_10.10 PHNE_10962
     /usr/conf/lib/libnfs.a(nfs_subr.o):
          nfs_subr.c $Date: 97/05/16 09:36:01 $ $Revision: 1.1.102.9 $
          PATCH_10.10 PHNE_10962
     /usr/conf/lib/libnfs.a(nfs_vnops.o):
          nfs_vnops.c $Date: 97/05/16 09:36:09 $ $Revision: 1.1.102.13 $
          PATCH_10.10 PHNE_10962
     /usr/conf/lib/libnfs.a(svc.o):
          svc.c $Date: 97/05/16 09:38:03 $ $Revision: 1.7.102.7 $
          PATCH_10.10 PHNE_10962

cksum(1) Output:
     1853323236 11756 /usr/conf/lib/libnfs.a(clnt_kudp.o)
     380687738 9868 /usr/conf/lib/libnfs.a(nfs_export.o)
     1586907384 29080 /usr/conf/lib/libnfs.a(nfs_server.o)
     1940677823 22112 /usr/conf/lib/libnfs.a(nfs_subr.o)
     1887860949 35552 /usr/conf/lib/libnfs.a(nfs_vnops.o)
     559485542 5916 /usr/conf/lib/libnfs.a(svc.o)

Patch Conflicts: None

Patch Dependencies:
     s700: 10.10: PHKL_10827

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHKL_7633 PHKL_8416 PHKL_8542 PHNE_9359 PHNE_10962

Equivalent Patches:
     PHNE_11385: s800: 10.10

Patch Package Size: 180 Kbytes

Installation Instructions:

     Please review all instructions and the Hewlett-Packard SupportLine
     User Guide or your Hewlett-Packard support terms and conditions
     for precautions, scope of license, restrictions, and limitation of
     liability and warranties, before installing this patch.
     ------------------------------------------------------------
     1. Back up your system before installing a patch.

     2. Login as root.

     3. Copy the patch to the /tmp directory.

     4. Move to the /tmp directory and unshar the patch:

          cd /tmp
          sh PHNE_11384

     5a. For a standalone system, run swinstall to install the patch:

          swinstall -x autoreboot=true -x match_target=true \
               -s /tmp/PHNE_11384.depot

     5b. For a homogeneous NFS Diskless cluster, run swcluster on the
         server to install the patch on the server and the clients:

          swcluster -i -b

         This will invoke swcluster in the interactive mode and force
         all clients to be shut down.

         WARNING: All cluster clients must be shut down prior to the
         patch installation.  Installing the patch while the clients
         are booted is unsupported and can lead to serious problems.

         The swcluster command will invoke an swinstall session in
         which you must specify:

              alternate root path - default is /export/shared_root/OS_700
              source depot path   - /tmp/PHNE_11384.depot

         To complete the installation, select the patch by choosing
         "Actions -> Match What Target Has" and then
         "Actions -> Install" from the Menubar.

     5c. For a heterogeneous NFS Diskless cluster:

         - Run swinstall on the server as in step 5a to install the
           patch on the cluster server.
         - Run swcluster on the server as in step 5b to install the
           patch on the cluster clients.

     By default swinstall will archive the original software in
     /var/adm/sw/patch/PHNE_11384.  If you do not wish to retain a copy
     of the original software, you can create an empty file named
     /var/adm/sw/patch/PATCH_NOSAVE.

     Warning: If this file exists when a patch is installed, the patch
     cannot be deinstalled.  Please be careful when using this feature.

     It is recommended that you move the PHNE_11384.text file to
     /var/adm/sw/patch for future reference.

     To put this patch on a magnetic tape and install from the tape
     drive, use the command:

          dd if=/tmp/PHNE_11384.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:

     The installation order for this patch and the dependent patches,
     which contain the non-NFS part of the fix for handling large UIDs,
     is irrelevant.

     The following is a useful process for applying more than one patch
     while only requiring a single reboot after the final patch
     installation:

     1) Get the individual depots over into /tmp.

     2) Make a new directory to contain the set of patches:

          mkdir /tmp/DEPOT     # For example

     3) For each patch "PHNE_11384":

          swcopy -s /tmp/PHNE_11384.depot \* @ /tmp/DEPOT

     4) swinstall -x match_target=true -x autoreboot=true \
          -s /tmp/DEPOT