Patch Name: PHNE_20313
Patch Description: s700_800 10.20 NFS Kernel General Rel & Perf Patch
Creation Date: 99/10/28
Post Date: 00/01/20

Warning: 00/01/27 - This Critical Warning has been issued by HP.
- On systems that have PHNE_20313 installed, the RPM Tools (Glance and MeasureWare) will no longer run. Specifically, GlancePlus and MeasureWare will terminate with a fatal error message similar to:

      mi_get_nfsv3_stats - error getting NFS client statistic: buffers_allocated Bad address.
      == Fatal Nums Error == C.02.30.00 04/15/99 08:10:07 gailp ==
      User: root   Date: Tue Jan 25 21:41:40
      File: nums.C   Line: 930
      Product id: Glance   System: asp B.10.20 9000/780
      nmsi:AccessCounters MI_GET_NFS_STATS failed: -1
      == End of Error Msg =============================

- Systems that are not running the RPM Tools (Glance and MeasureWare) are not affected by this problem.
- HP recommends that PHNE_20313 be removed from all systems that utilize the RPM Tools (Glance and MeasureWare). PHNE_20313 should also be removed from all software depots that may be used to install patches on these systems.
- The problem is corrected in patch PHNE_20957, which was released today. PHNE_20957 may be installed after PHNE_20313 is removed.
- For system maintenance reasons, HP recommends that PHNE_20313 be removed before PHNE_20957 is installed. If you choose not to remove PHNE_20313 before installing PHNE_20957, the system will still function properly after PHNE_20957 is installed.

Hardware Platforms - OS Releases:
    s700: 10.20
    s800: 10.20
Products: N/A
Filesets: OS-Core.CORE-KRN
Automatic Reboot?: Yes
Status: General Superseded With Warnings
Critical: No (superseded patches were critical)
    PHNE_20021: PANIC
        Panic in internal test - cachefs with autofs.
    PHNE_16924: PANIC CORRUPTION
        Panic due to buffer cache corruption
    PHNE_16925: PANIC CORRUPTION
        Panic due to buffer cache corruption
    PHNE_15863: PANIC HANG CORRUPTION
        Hang encountered in nfs_fsync()
        Panic with m_free(), sbdrop(), and clntkudp_callit()
        Panic with truncated file
        Panic with nfs_purge_caches(), binvalfree(), bwrite(), nfs_strategy(), do_bio(), and nfswrite() recursion
        Hang encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed)
        Hang encountered because of rfs_readdirplus3() memory leak (this applies only to systems that have the ACE 2 software bundle installed)
        Hang encountered with cachefs (this applies only to systems that have the ACE 2 software bundle installed)
        Panic with rfs3_readlink_free() (this applies only to systems that have the ACE 2 software bundle installed)
        Corruption encountered with reads (this applies only to systems that have the ACE 2 software bundle installed)
    PHNE_15864: PANIC HANG CORRUPTION
        Hang encountered in nfs_fsync()
        Panic with m_free(), sbdrop(), and clntkudp_callit()
        Panic with truncated file
        Panic with nfs_purge_caches(), binvalfree(), bwrite(), nfs_strategy(), do_bio(), and nfswrite() recursion
        Hang encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed)
        Hang encountered because of rfs_readdirplus3() memory leak (this applies only to systems that have the ACE 2 software bundle installed)
        Hang encountered with cachefs (this applies only to systems that have the ACE 2 software bundle installed)
        Panic with rfs3_readlink_free() (this applies only to systems that have the ACE 2 software bundle installed)
        Corruption encountered with reads (this applies only to systems that have the ACE 2 software bundle installed)
    PHNE_15041: PANIC
        Panic encountered with
autofs (this applies only to systems that have the ACE 2 software bundle installed) PHNE_15042: PANIC Panic encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed) PHNE_14071: PANIC HANG Panic encountered in ku_sendto_mbuf() Panic encountered in ckuwakeup() Panic encountered in vn_rele() Hang encountered in nfs_fsync() PHNE_14072: PANIC HANG Panic encountered in ku_sendto_mbuf() Panic encountered in ckuwakeup() Panic encountered in vn_rele() Hang encountered in nfs_fsync() PHNE_13823: HANG PANIC Hang encountered in nfs_fsync() Panic encountered in clnt_kudpinit() PHNE_13824: HANG PANIC Hang encountered in nfs_fsync() Panic encountered in clnt_kudpinit() PHNE_13668: HANG Hang encountered in Instant Ignition PHNE_13669: HANG Hang encountered in instant ignition PHNE_13235: PANIC CORRUPTION Panic encountered in nfs_prealloc() Corruption encountered with retransmissions PHNE_13236: PANIC CORRUPTION Panic encountered in nfs_prealloc() Corruption encountered with retransmissions PHNE_12427: HANG Hang encountered with exportfs command PHNE_12428: HANG Hang encountered with exportfs command PHNE_11386: HANG Hang encountered in NFS IO PHNE_11387: HANG Hang encountered in NFS IO PHNE_11008: CORRUPTION PANIC Overwritten rnode error in do_bio() Rename of jfs file system from PCNFS causes panic Data page fault in ckuwakeup() PHNE_11009: CORRUPTION PANIC Overwritten rnode error in do_bio() Rename of jfs file system from PCNFS causes panic Data page fault in ckuwakeup() PHKL_8544: HANG PHKL_8545: HANG Path Name: /hp-ux_patches/s700_800/10.X/PHNE_20313 Symptoms: PHNE_20313: 1. Automounter hangs when trying to mount cachefs filesystem. 2. The timeo option in the mount command does not have any effect when set. 3. Untar-ing a large quantity of files over NFS can be slow. 4. NFS version 3 client is very slow when performing a write operation to a Celerra server. 5. 'maxcnodes' is a constant but should be made a configurable variable to improve cachefs performance. 6. fuser does not work over Cachefs mountpoint. 7. diff(1) failed on AIX client with HP 10.20 server due to name of a regular file passed to pathconf(2) call. 8. MP_SPINLOCK was not taken in nfs3_do_bio()which could have caused a credential related panic. 9. NFS3ERR_TOOSMALL reported as a last packet when it should not be. SUNs solstice PC-NFS-client and DEC-clients cannot handle this error message and fail while accessing the directory. 10. mount command fails with kernel data page fault panic instead of failing with ETIMEDOUT. 11. Enhanced NFS version 3 to support the full 32k read/write block size. PHNE_20021: 1. Panic found during internal testing in cachefs with autofs. PHNE_19426: 1. Client process can hang forever over NFS. This can occur when an NFS client generates a high amount of write requests, and the NFS server is very busy. Once the process is hung it cannot be killed. 2. Client might see many networking timeouts using NFS version 3. 3. NFS file creation fails with EACCESS when open() is called with O_TRUNC|O_EXCL. 4. mknod of /dev/rroot (c 255 0xffffff) fails over NFS mount. 5. 10.20: implicit UDP bind results in using inp_lport==0 in ku_fastsend. 6. NFS client panic when mounting using smaller than 1k block size. 7. Loading executable file or running memory map applications over NFS will fail when NFS read/write block size is not set to 4k increment. 8. 
Autofs not always triggering a re-mount with concurrent mounts and umounts, incorrectly gets error ENOENT - "No such file or directory" 9. Autofs hangs even with PHNE_17200 when running scripts triggering concurrent mount/umount 10. Autofs - cp(1) to inactive direct mount fails with error EOPNOTSUPP - "Operation not supported" 11. Autofs - mv command fails in direct mnt with error ENOSYS - "Function is not available" PHNE_18961: 1. Some Sun's clients using Automount might fail to mount when HPUX's server setting MOUNTD_VER to 2 in /etc/rc.config.d/nfsconf 2. Memory leak in the NFS client system when using NFS file locking. PHNE_18962: 1. Some Sun's clients using Automount might fail to mount when HPUX's server setting MOUNTD_VER to 2 in /etc/rc.config.d/nfsconf 2. Memory leak in the NFS client system when using NFS file locking. PHNE_17619: NOTE: This ONC+/NFS patch has been reconstructed to deliver the ONC+/NFS Networking ACE products (NFS PV3, AutoFS, and CacheFS). In addition, the older NFS Automounter will be delivered, along with configuration scripts (in /etc/rc.config.d/nfsconf) necessary to select AutoFS, or the Automounter, or neither. Please read the NOTE in Special Installation Instructions for more details. 1. With a umask of 027, a write followed by a sleep followed by another write (all over an NFS mount) fails with a "Permission denied" error. 2. Autofs hangs when manual unmounts are used 3. NFS PV3 only supports up to 8k read/write transfer size 4. NFS PV3 performs poorly when writing small records in synchronous mode. 5. Cannot copy NFS PV3 Large files greater than 2GB from the NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in 10.20 ACE PV3 7. System hang/sleep at clntkudp_callit() at outbuf and not waking up 8. NFS file can be removed even when the file is still being referenced. 9. The system may panic with a kernel stack overflow. 10. Root trying to write to an NFS mounted file that is not writable by root can reset the file size to zero on 10.20 PV3. 11. 10.20 Client writes NULL characters into file even though client application did not generate them. 12. System panic while doing reading or writing over cachefs mount point for a long period of time. PHNE_17620: NOTE: This ONC+/NFS patch has been reconstructed to deliver the ONC+/NFS Networking ACE products (NFS PV3, AutoFS, and CacheFS). In addition, the older NFS Automounter will be delivered, along with configuration scripts (in /etc/rc.config.d/nfsconf) necessary to select AutoFS, or the Automounter, or neither. Please read the NOTE in Special Installation Instructions for more details. 1. With a umask of 027, a write followed by a sleep followed by another write (all over an NFS mount) fails with a "Permission denied" error. 2. Autofs hangs when manual unmounts are used 3. NFS PV3 only supports up to 8k read/write transfer size 4. NFS PV3 performs poorly when writing small records in synchronous mode. 5. Cannot copy NFS PV3 Large files greater than 2GB from the NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in 10.20 ACE PV3 7. System hang/sleep at clntkudp_callit() at outbuf and not waking up 8. NFS file can be removed even when the file is still being referenced. 9. The system may panic with a kernel stack overflow. 10. Root trying to write to an NFS mounted file that is not writable by root can reset the file size to zero on 10.20 PV3. 11. 10.20 Client writes NULL characters into file even though client application did not generate them. 
12. System panic while doing reading or writing over cachefs mount point for a long period of time. PHNE_16924: The system may panic with a data page fault. NOTE: Patch PHNE_16924 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_16924 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_16925: The system may panic with a data page fault. NOTE: Patch PHNE_16925 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_16925 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15863: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a data page fault when an NFS operation is interrupted on a uniprocessor system. 3. The system may panic with a data page fault when reading a file that has been truncated. 4. The system may panic with a kernel stack overflow. 5. Syslog shows the message: vxfs: mesg 016: vx_ilisterr 6. Quotas are not honored on a diskless client in the same way that they are honored on its server under certain circumstances. 7. The setting of NFS file/directory modification and access time stamps is inconsistent. 8. An HP NFS server does not permit NFS file/directory time stamps to be set from a non-HP NFS client. 9. Autofs hangs when remounting hierarchical autofs mount points (this applies only to systems that have the ACE 2 software bundle installed). 10. Autofs hangs when running Netscape (this applies only to systems that have the ACE 2 software bundle installed). 11. System hangs when using NFS PV3 as a server (this applies only to systems that have the ACE 2 software bundle installed). 12. Cachefs hangs (this applies only to systems that have the ACE 2 software bundle installed). 13. When an archive library is in an NFS PV3 mounted directory, nm gives the "bad magic" error string after listing all symbols (the symbols list is fine). This problem does not occur with NFS PV2 (this applies only to systems that have the ACE 2 software bundle installed). 14. The system may panic with a data page fault on an NFS PV3 server and shows rfs3_readlink_free() in the panic stack trace (this applies only to systems that have the ACE 2 software bundle installed). 15. A read() after an lseek() past EOF is successful (this applies only to systems that have the ACE 2 software bundle installed). 16. A cp over an NFS PV3 mount encounters an error: Value too large to be stored in data type NFS PV2 does not encounter the error (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15863 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15863 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15864: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a data page fault when an NFS operation is interrupted on a uniprocessor system. 3. The system may panic with a data page fault when reading a file that has been truncated. 4. The system may panic with a kernel stack overflow. 5. Syslog shows the message: vxfs: mesg 016: vx_ilisterr 6. Quotas are not honored on a diskless client in the same way that they are honored on its server under certain circumstances. 7. 
The setting of NFS file/directory modification and access time stamps is inconsistent. 8. An HP NFS server does not permit NFS file/directory 9. Autofs hangs when remounting hierarchical autofs mount points (this applies only to systems that have the ACE 2 software bundle installed). 10. Autofs hangs when running Netscape (this applies only to systems that have the ACE 2 software bundle installed). 11. System hangs when using NFS PV3 as a server (this applies only to systems that have the ACE 2 software bundle installed). 12. Cachefs hangs (this applies only to systems that have the ACE 2 software bundle installed). 13. When an archive library is in an NFS PV3 mounted directory, nm gives the "bad magic" error string after listing all symbols (the symbols list is fine). This problem does not occur with NFS PV2 (this applies only to systems that have the ACE 2 software bundle installed). 14. The system may panic with a data page fault on an NFS PV3 server and shows rfs3_readlink_free() in the panic stack trace (this applies only to systems that have the ACE 2 software bundle installed). 15. A read() after an lseek() past EOF is successful (this applies only to systems that have the ACE 2 software bundle installed). 16. A cp over an NFS PV3 mount encounters an error: Value too large to be stored in data type NFS PV2 does not encounter the error (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15864 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15864 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15041: 1. Poor NFS performance over 100BT. 2. System panics with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Application fails (this applies only to systems that have the ACE 2 software bundle installed). 4. Application fails (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15041 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15041 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15042: 1. Poor NFS performance over 100BT. 2. System panics with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Application fails (this applies only to systems that have the ACE 2 software bundle installed). 4. Application fails (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15042 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15042 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_14071: 1. The system may panic with a data memory protection fault. 2. Processes may hang when trying to access files across NFS. 3. The system may panic with a data page fault. 4. The system may panic with a vn_rele. NOTE: Patch PHNE_14071 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_14071 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_14072: 1. The system may panic with a data memory protection fault. 2. 
Processes may hang when trying to access files across NFS. 3. The system may panic with a data page fault. 4. The system may panic with a vn_rele.
NOTE: Patch PHNE_14072 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_14072 installs a patch for the standard release and its patches (represented by PHNE_13824).

PHNE_13833: This patch is part of the 10.20 ACE 2 bundle which adds networking enhancements to 10.20. New networking features supported in ACE 2 include NFS Version 3.0, AutoFS, and CacheFS.

PHNE_13834: This patch is part of the 10.20 ACE 2 bundle which adds networking enhancements to 10.20. New networking features supported in ACE 2 include NFS Version 3.0, AutoFS, and CacheFS.

PHNE_13823: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a spinlock deadlock.

PHNE_13824: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a spinlock deadlock.

PHNE_13668: Client IO from root is denied across an NFS mount point, causing hangs in Instant Ignition and on NFS clients.

PHNE_13669: Client IO from root is denied across an NFS mount point, causing hangs in Instant Ignition and on NFS clients.

PHNE_13235: 1. JFS servers with NFS clients may see poor performance when doing large file transfers across NFS. 2. When using swap across an NFS mount, and the mounted disk becomes full, the system may panic. 3. Data corruption may occur when transmission errors on a crowded network force retransmissions.

PHNE_13236: 1. JFS servers with NFS clients may see poor performance when doing large file transfers across NFS. 2. When using swap across an NFS mount, and the mounted disk becomes full, the system may panic. 3. Data corruption may occur when transmission errors on a crowded network force retransmissions.

PHNE_12427: The exportfs -u command hangs.

PHNE_12428: The exportfs -u command hangs.

PHNE_11386: Hang when doing IO across an NFS mount.

PHNE_11387: Hang when doing IO across an NFS mount.

PHNE_11008: 1. Some instances of NFS writes (such as cp from an NFS client) may complete successfully even when errors occur. 2. Renaming a VxFS file to another VxFS file system from a PCNFS client causes a panic. 3. Disabling anonymous access is not recognized by PCNFS clients, allowing them to run as a privileged user. 4. Data page faults caused on the client. 5. Directories in an NFS mounted file system are created with a 000 permissions value.

PHNE_11009: 1. Some instances of NFS writes (such as cp from an NFS client) may complete successfully even when errors occur. 2. Renaming a VxFS file to another VxFS file system from a PCNFS client causes a panic. 3. Disabling anonymous access is not recognized by PCNFS clients, allowing them to run as a privileged user. 4. Data page faults caused on the client.

PHNE_9864: Directories in an NFS mounted file system are created with a 000 permissions value.

PHKL_9155: 1. Add write-gathering support for NFS servers. 2. The length of a timeout for an NFS request may become extremely long (on the order of minutes).

PHKL_9156: 1. Add write-gathering support for NFS servers. 2. The length of a timeout for an NFS request may become extremely long (on the order of minutes).

PHKL_8544: 1. Data page fault in an MP environment due to synchronization problems with the client's biods. 2. A panic within kernel RPC (in svc_getreqset) in an MP environment is generated due to another synchronization problem. 3. System hangs on large systems.
PHKL_8545: 1. Data page fault in an MP environment due to synchronization problems with the client's biods. 2. A panic within kernel RPC (in svc_getreqset) in an MP environment is generated due to another synchronization problem. 3. System hangs on large systems.

Defect Description:

PHNE_20313:
1. With PHNE_19426, autofs controlling a cachefs indirect map will hang on mount.
Resolution: Improve the detection logic so that the automountd detection logic can detect mount calling mount or unmount calling unmount. The change is applied to the AUTOFS_PROCESS_IS_AUTOMOUNTED macro in the kernel.
2. The timeo option in the mount command does not have any effect when set.
Resolution: A change in the NFS kernel to make use of this option passed down by the NFS mount command.
3. Untar-ing a large quantity of files over NFS can be slow. NFS sometimes needs to invalidate its client cache to ensure that the cache is not stale (binvalfree). It is in this path that NFS makes some unnecessary calls to binvalfree, which slows down the untar operation.
Resolution: A performance enhancement has been made to the NFS kernel to avoid some calls to binvalfree.
4. The NFS version 3 client is very slow when performing a write operation to a Celerra server. The Celerra server doesn't implement post-attribute return on error, which is part of the NFS version 3 protocol but is not a required feature. What is required is that a client handle this scenario correctly, and this is what our NFS client failed to do.
Resolution: A change in the NFS path has been made to handle the scenario where a server doesn't return post attributes on error.
5. 'maxcnodes' is a constant (= MAXCNODES), the value of which is 500. Sometimes this is too low a value when a dedicated file system is used for CacheFS.
Resolution: The value of the 'maxcnodes' variable will now be a computed value which defaults to 50% of ninode. One can change the value with adb. If the user does not like the changed value, maxcnodes can be set to 0x7fffffff, and on the next CacheFS mount the value will be set to 50% of ninode again.
6. The fuser command does not work over a CacheFS mount point, though fuser -c does. As a result the ServiceGuard scripts fail to work because fuser -k cannot kill the processes keeping the mount points busy. The root cause is that the v_nodeid field in the cnode is not set.
Resolution: Set the v_nodeid field in the cnode with the fid returned in the attributes. One also has to set the kernel parameter "pi_newmnttype". One can use adb to turn on pi_newmnttype, or this can be done with the command onccompat -n.
7. diff(1) failed due to an invalid name length returned by NFS pathconf(2) from an HP server for regular files. HP's local file system does not support pathconf(2) on regular files, so when an AIX client generates a pathconf(2) call on a regular file, the server replies with a name length of zero.
Resolution: When a file argument is passed in to the NFS server code rfs_pathconf(), it is now converted to the parent directory before calling and passing the value to the underlying file system's pathconf() call.
8. MP_SPINLOCK was omitted in nfs3_do_bio() in hpnfs_vnops.c. This could have caused a panic when the cred was decremented.
Resolution: MP_SPINLOCK was added to nfs3_do_bio().
9. 10.20 NFS servers always send an NFS3ERR_TOOSMALL reply as the last packet during a readdirplus call. Sun's Solstice PC-NFS client and DEC clients cannot handle this error message and fail while accessing the directory.
Resolution: Code change.
Remove the nents==0 code from rfs_readdirplus3().
10. The nfs_mount() routine frees the mount info pointer and then calls nfs_inactive(), which references one of the fields in the mount info structure, causing a kernel data page fault panic.
Resolution: The order of freeing the pointers is obviously wrong. The reason the kernel does not crash every time a mount times out lies in kmem_free(), FREE(), and how they work. After kmem_free() returns, the memory we just freed ends up on a free list, so referencing it does not cause a page fault every time we go through nfs_inactive3() after freeing mi. We page fault if that memory gets allocated to some other process and the data is no longer valid.
11. Enhanced NFS version 3 to support the full 32k read/write block size.
Resolution: Enhanced NFS version 3 to support the full 32k read/write block size.

PHNE_20021:
1. Internal testing with cachefs controlled by autofs will get a 'data page fault' panic in under 30 minutes.
Resolution: In autofs, added a concurrency check on the node before using (vnode_t*)vp->v_vfsmountedhere.

PHNE_19426:
1. This turned out to be a server problem where the NFS server dropped the client request. This caused the client to retry in an infinite loop, which is normal. The reason this occurred is that the server corrupted the cache data entry.
Resolution: A fix has been made in the NFS server code to prevent duplicate cache table corruption.
2. The timeout table for NFS version 3 wasn't correct. This leads to the client timing out too quickly.
Resolution: The timeout table has been fixed in the NFS kernel to match Sun's timeout implementation of NFS.
3. Specifying both O_EXCL and O_TRUNC at file creation time caused the NFS client to pass a 0 file size to the server, which treats the file as an existing file and tries to verify the credential, which failed.
Resolution: A fix has been implemented to not pass a 0 file size to the server, so it will not verify the credential of a newly created file.
4. mknod of a character device with a -1 minor number will cause the NFS client to create a FIFO file.
Resolution: A new error message "Operation not supported" will be returned when the user attempts to create a file with the properties described above.
5. A connectivity problem can occur when all dynamic port numbers are in use.
Resolution: Return an error when no port can be allocated.
6. The NFS kernel doesn't support a block size of less than 1k even though the mount command allows it.
Resolution: The kernel mount routine has been changed to allow the use of a block size of less than 1k.
7. Loading an executable file or running memory-mapped applications over NFS will fail when the NFS read/write block size is not set to a 4k increment.
Resolution: A check in the NFS mount code ensures that the buffer page is allocated at a 4k increment.
8. AutoFS is not triggering re-mount === Running AutoFS with a very short node timeout (e.g. "automount -t0") can cause failures to access files which are present.
Resolution: Adjust the logic used to detect special requests from the "automountd" daemon. Include tests to avoid race conditions between autofs_proc and file lookups. Set a minimum hold time for autonodes so that they are not recycled immediately after being created.
9. AutoFS hangs even with PHNE_17200 === AutoFS with a short node timeout and a script using manual umounts in a tight loop can hang autofs after 10-15 minutes.
Resolution: Adjust the logic used to detect special requests from the "automountd" daemon.
Include tests to avoid race conditions between autofs_proc and file lookups. Set a minimum hold time for autonodes so that they are not recycled immediately after being created.
10. cp(1) fails to an inactive direct mount === When a command is given to copy a file to the top of an unmounted direct mount point, 'cp' fails with "Operation not supported".
Resolution: Add a pathconf() handler for direct mounts.
11. mv command fails in an AutoFS direct mount === When the 'mv' command is used to rename a file from the current directory and it is a direct mount point, it will fail with error ENOSYS if the direct mount is not already active (mounted).
Resolution: Add a direct mount handler to auto_access() and auto_rename(). With these changes, 'cd' triggers the mount.

PHNE_18961:
1. Setting MOUNTD_VER to 2 will force the HP-UX server to start rpc.mountd serving only version 2 of NFS. This is a special feature introduced in 10.20 for backward compatibility reasons. According to the NFS specification, a client should contact rpc.mountd on the server to see which is the highest version it supports before attempting to use that version. Some Sun clients do not do this; they attempt to contact nfsd instead of rpc.mountd for service. This is a problem: since HP's nfsd always serves both versions while rpc.mountd serves only version 2, some Sun clients get confused. HP's client doesn't have this problem because it is smart enough to fall back to version 2 if version 3 is not available.
Resolution: Setting MOUNTD_VER to 2 will now force the HP-UX server to start both rpc.mountd and nfsd serving only version 2 of NFS. This resolves the confusion for those Sun clients.
2. When using NFS file locking/unlocking heavily, the kernel RPC on the client system leaks memory in 32-byte chunks.
Resolution: The memory leak is due to not deallocating a spinlock structure associated with the RPC client handle when the client handle is freed. This has been fixed.

PHNE_18962:
1. Setting MOUNTD_VER to 2 will force the HP-UX server to start rpc.mountd serving only version 2 of NFS. This is a special feature introduced in 10.20 for backward compatibility reasons. According to the NFS specification, a client should contact rpc.mountd on the server to see which is the highest version it supports before attempting to use that version. Some Sun clients do not do this; they attempt to contact nfsd instead of rpc.mountd for service. This is a problem: since HP's nfsd always serves both versions while rpc.mountd serves only version 2, some Sun clients get confused. HP's client doesn't have this problem because it is smart enough to fall back to version 2 if version 3 is not available.
Resolution: Setting MOUNTD_VER to 2 will now force the HP-UX server to start both rpc.mountd and nfsd serving only version 2 of NFS. This resolves the confusion for those Sun clients.
2. When using NFS file locking/unlocking heavily, the kernel RPC on the client system leaks memory in 32-byte chunks.
Resolution: The memory leak is due to not deallocating a spinlock structure associated with the RPC client handle when the client handle is freed. This has been fixed.

PHNE_17619:
1. The problem is that there are conditions in which read credentials are not set. That means that any reads performed as part of a write operation will fail because there are no read credentials available for them to pass, and all work done on the server is done as root.
2. Remount logic causes hangs.
3. Enhanced NFS version 3 to support up to a 24k read/write block size.
4.
Tuned the code to perform better for small-block synchronous writes.
5. An NFS PV3 client is unable to copy large files greater than 2GB from an NFS-mounted file system to the local file system.
6. NFS3ERR_JUKEBOX is not handled in the 10.20 ACE PV3 code.
7. Fixed a system hang/sleep in clntkudp_callit() at outbuf with no wakeup. Also added statistical data collection for sleep and wakeup calls.
8. The code in nfs_remove checks whether a file is busy and whether it has been renamed, and removes the file if both are true. This can cause a busy file that has not been renamed to be removed. The two conditions should be checked separately: first check whether the file is busy; if it is, check whether it has been renamed, and do nothing if it has, otherwise rename it. Only when a file is not busy should it be removed.
9. The code path to purge the buffer caches of a stale file handle that has a large amount of delayed-write data could be too long and cause a kernel stack overflow.
10. When a root user cannot write to the file, the file attributes, including the file size, are still changed. The code closing the file does not check whether the user closing the file has write permission before it sets the file attributes using the new ones, thus causing the file size to get reset to zero.
11. The problem occurs due to avoiding the read of the block when multiple processes are appending data to the same file block while nfs_no_read_before_write is turned on. Because the NFS client avoids reading the file block and initializes the buffer to NULLs, NULLs are written out.
12. The cachefs read and write path is not releasing credentials correctly.

PHNE_17620:
1. The problem is that there are conditions in which read credentials are not set. That means that any reads performed as part of a write operation will fail because there are no read credentials available for them to pass, and all work done on the server is done as root.
2. Remount logic causes hangs.
3. Enhanced NFS version 3 to support up to a 24k read/write block size.
4. Tuned the code to perform better for small-block synchronous writes.
5. An NFS PV3 client is unable to copy large files greater than 2GB from an NFS-mounted file system to the local file system.
6. NFS3ERR_JUKEBOX is not handled in the 10.20 ACE PV3 code.
7. Fixed a system hang/sleep in clntkudp_callit() at outbuf with no wakeup. Also added statistical data collection for sleep and wakeup calls.
8. The code in nfs_remove checks whether a file is busy and whether it has been renamed, and removes the file if both are true. This can cause a busy file that has not been renamed to be removed. The two conditions should be checked separately: first check whether the file is busy; if it is, check whether it has been renamed, and do nothing if it has, otherwise rename it. Only when a file is not busy should it be removed.
9. The code path to purge the buffer caches of a stale file handle that has a large amount of delayed-write data could be too long and cause a kernel stack overflow.
10. When a root user cannot write to the file, the file attributes, including the file size, are still changed. The code closing the file does not check whether the user closing the file has write permission before it sets the file attributes using the new ones, thus causing the file size to get reset to zero.
11. The problem occurs due to avoiding the read of the block when multiple processes are appending data to the same file block while nfs_no_read_before_write is turned on.
Due to NFS client avoiding the read of the file block and initializing the buffer to NULLs,NULLs are written out. 12. The cachefs read and write path is not releasing credential correctly. PHNE_16924: A buffer cache entry is being released more than once causing corruption in the buffer cache hash lists. NOTE: Patch PHNE_16924 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_16924 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_16925: A buffer cache entry is being released more than once causing corruption in the buffer cache hash lists. NOTE: Patch PHNE_16925 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_16925 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15863: 1. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a sleep could not be woken up. 2. An interrupt performing an operation on a socket on which NFS is attempting a socket buffer data drop (sbdrop) causes the socket operation to access a bad socket buffer address. 3. A read of a file that was truncated during the read operation causes an access of non-existent data. 4. With no biods running, a write of a stale file in which the writes are less than 8K bytes in length causes a recursive call stack that can get very large. 5. Before reading the entries of a directory, no check is performed to determine if the entity being read is a directory, which leads to the given syslog message. 6. The file open credentials are used by NFS and not the thread credentials. This can lead to a quota problem when the user that opened a file is not the user accessing the file. 7. When a utime() operation with a NULL timestamp is performed on an NFS file/directory, the NFS client uses the system time of the client to set file/directory modification and access times with SETATTR operations. However, WRITE operations use the system time of the server to set file/directory modification and access times. This causes inconsistencies when the client and server systems are in different time zones. 8. A SUN NFS client attempting to perform a utime() operation with a NULL timestamp on an HP NFS server file/directory is rejected due to a permissions error even though the process has write permissions but is not the file/directory owner. 9. When hierarchical maps are used, a umount of the mount points causes the autofs daemon to hang (this applies only to systems that have the ACE 2 software bundle installed). 10. Running Netscape from automounter paths causes hangs (this applies only to systems that have the ACE 2 software bundle installed). 11. A slow memory leak in rfs_readdirplus3() eventually starves the server of free memory (this applies only to systems that have the ACE 2 software bundle installed). 12. The calls from cachefs to do attribute lookups hang when trying to make a cachefs node (this applies only to systems that have the ACE 2 software bundle installed). 13. The nm tool is given incorrect file length data and reports a standard HPPA error (this applies only to systems that have the ACE 2 software bundle installed). 14. 
The system panics with data page fault on the NFS PV3 server when trying to remove a symbolic link (this applies only to systems that have the ACE 2 software bundle installed). 15. An application can lseek to EOF and then read past EOF without having an error status returned (this applies only to systems that have the ACE 2 software bundle installed). 16. The maximum file size field in the NFS PV3 protocol has been initialized incorrectly (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15863 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15863 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15864: 1. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a sleep could not be woken up. 2. An interrupt performing an operation on a socket on which NFS is attempting a socket buffer data drop (sbdrop) causes the socket operation to access a bad socket buffer address. 3. A read of a file that was truncated during the read operation causes an access of non-existent data. 4. With no biods running, a write of a stale file in which the writes are less than 8K bytes in length causes a recursive call stack that can get very large. 5. Before reading the entries of a directory, no check is performed to determine if the entity being read is a directory, which leads to the given syslog message. 6. The file open credentials are used by NFS and not the thread credentials. This can lead to a quota problem when the user that opened a file is not the user accessing the file. 7. When a utime() operation with a NULL timestamp is performed on an NFS file/directory, the NFS client uses the system time of the client to set file/directory modification and access times with SETATTR operations. However, WRITE operations use the system time of the server to set file/directory modification and access times. This causes inconsistencies when the client and server systems are in different time zones. 8. A SUN NFS client attempting to perform a utime() operation with a NULL timestamp on an HP NFS server file/directory is rejected due to a permissions error even though the process has write permissions but is not the file/directory owner. 9. When hierarchical maps are used, a umount of the mount points causes the autofs daemon to hang (this applies only to systems that have the ACE 2 software bundle installed). 10. Running Netscape from automounter paths causes hangs (this applies only to systems that have the ACE 2 software bundle installed). 11. A slow memory leak in rfs_readdirplus3() eventually starves the server of free memory (this applies only to systems that have the ACE 2 software bundle installed). 12. The calls from cachefs to do attribute lookups hang when trying to make a cachefs node (this applies only to systems that have the ACE 2 software bundle installed). 13. The nm tool is given incorrect file length data and reports a standard HPPA error (this applies only to systems that have the ACE 2 software bundle installed). 14. The system panics with data page fault on the NFS PV3 server when trying to remove a symbolic link (this applies only to systems that have the ACE 2 software bundle installed). 15. 
An application can lseek to EOF and then read past EOF without having an error status returned (this applies only to systems that have the ACE 2 software bundle installed). 16. The maximum file size field in the NFS PV3 protocol has been initialized incorrectly (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15864 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15864 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15041: 1. NFS performance over 100BT is poor. 2. Autofs panics system with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Autofs fails to work with 9.X archived directory path libraries (this applies only to systems that have the ACE 2 software bundle installed). 4. Autofs causes swlist to fail (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15041 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15041 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15042: 1. NFS performance over 100BT is poor. 2. Autofs panics system with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Autofs fails to work with 9.X archived directory path libraries (this applies only to systems that have the ACE 2 software bundle installed). 4. Autofs causes swlist to fail (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15042 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15042 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_14071: 1. An uninitialized kernel variable on an NFS server causes an address to be decremented by one and thus leaves it not pointing to a word-aligned area. 2. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a loop cannot be terminated. 3. A lack of synchronization within the RPC kernel layer causes an access of a kernel variable after its memory has already been freed. 4. The NFS server allows access of a previously released vnode when a file lock is unblocked. NOTE: Patch PHNE_14071 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_14071 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_14072: 1. An uninitialized kernel variable on an NFS server causes an address to be decremented by one and thus leaves it not pointing to a word-aligned area. 2. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a loop cannot be terminated. 3. A lack of synchronization within the RPC kernel layer causes an access of a kernel variable after its memory has already been freed. 4. The NFS server allows access of a previously released vnode when a file lock is unblocked. 
NOTE: Patch PHNE_14072 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_14072 installs a patch for the standard release and its patches (represented by PHNE_13824).

PHNE_13833: New functionality to support networking features in 10.20.

PHNE_13834: New functionality to support networking features in 10.20.

PHNE_13823:
1. Processes within the RPC kernel layer are not releasing a lock which is needed by other NFS client processes to synchronize IO requests (which requires a flush of all outstanding client-to-server IO).
2. A spinlock is held by a kernel RPC process when it tries to acquire a beta semaphore.

PHNE_13824:
1. Processes within the RPC kernel layer are not releasing a lock which is needed by other NFS client processes to synchronize IO requests (which requires a flush of all outstanding client-to-server IO).
2. A spinlock is held by a kernel RPC process when it tries to acquire a beta semaphore.

PHNE_13668: The NFS client prevents root from opening a file on the server. It will allow file creation, but not IO to an existing file.

PHNE_13669: The NFS client prevents root from opening a file on the server. It will allow file creation, but not IO to an existing file.

PHNE_13235:
1. NFS clients may send serial IO requests out of order, causing performance problems for JFS on the server.
2. NFS writes to a full disk used as a swap device will return an error which results in a call to panic() from nfs_prealloc().
3. When retransmitting, the XID may not be properly reinitialized, allowing data corruption in the form of null-valued blocks of 8192 bytes (or less).

PHNE_13236:
1. NFS clients may send serial IO requests out of order, causing performance problems for JFS on the server.
2. NFS writes to a full disk used as a swap device will return an error which results in a call to panic() from nfs_prealloc().
3. When retransmitting, the XID may not be properly reinitialized, allowing data corruption in the form of null-valued blocks of 8192 bytes (or less).

PHNE_12427: The reference count of the exported entry is not managed correctly, and may remain greater than 0 when unused.

PHNE_12428: The reference count of the exported entry is not managed correctly, and may remain greater than 0 when unused.

PHNE_11386: Hang in clnt_kudp.o

PHNE_11387: Hang in clnt_kudp.o

PHNE_11008:
1. Error codes kept in the rnode for an NFS client's file may get overwritten, and therefore not reported back to the caller when the file is closed.
2. The NFS server renaming procedures do not check for differing VxFS file systems when asking for a rename, which will cause a panic down in VxFS.
3. The server authorization program does not properly check for anonymous access when user IDs of -2 are used.
4. The netisr callout function did not protect against a race condition.
5. The system does not initialize the vnode attributes when it sees a file size which is too large for the 10.20 file system, and returns an error. The server is making the directory anyway, with uninitialized (000) attributes.

PHNE_11009:
1. Error codes kept in the rnode for an NFS client's file may get overwritten, and therefore not reported back to the caller when the file is closed.
2. The NFS server renaming procedures do not check for differing VxFS file systems when asking for a rename, which will cause a panic down in VxFS.
3.
The server authorization program does not properly check for anonymous access when user IDs of -2 are used.
4. The netisr callout function did not protect against a race condition.

PHNE_9864: The system does not initialize the vnode attributes when it sees a file size which is too large for the 10.20 file system, and returns an error. The server is making the directory anyway, with uninitialized (000) attributes.

PHKL_9155:
1. NFS write performance can be improved by doing gather writes at the server. This patch implements the NFS portion of gather writes.
2. The maximum timeout values defined in RPC were very long, and neither the RPC nor the NFS values matched those of Sun.

PHKL_9156:
1. NFS write performance can be improved by doing gather writes at the server. This patch implements the NFS portion of gather writes.
2. The maximum timeout values defined in RPC were very long, and neither the RPC nor the NFS values matched those of Sun.

PHKL_8544:
1. The kernel's biod support code did not sufficiently protect against MP race conditions.
2. The RPC processor affinity implementation used by the nfsds was not sufficiently protected against MP race conditions.
3. Incorrect usage of the dnlc purge functions.

PHKL_8545:
1. The kernel's biod support code did not sufficiently protect against MP race conditions.
2. The RPC processor affinity implementation used by the nfsds was not sufficiently protected against MP race conditions.
3. Incorrect usage of the dnlc purge functions.

SR:
    5003445601 1653280412 5003433078 5003429753 5003428292 5003427963
    5003425116 5003423590 5003423368 5003423111 5003419325 5003418962
    5003417329 5003417071 5003406660 5003404616 5003402743 5003402677
    5003398826 5003394056 5003368050 5003352534 5003344226 5003343277
    5003340042 5003330894 5003327338 5003326090 5003324657 5003322370
    5003321513 5003319665 5003319145 5003279927 5003279091 4701408047
    4701400903 4701378117 4701351577 4701341669 4701314302 4701306837
    4701306829 1653275800 1653272385 1653266577 1653249268 1653197632
    1653192294 1653150599 1653146886 1653146308 1653134924 1653101337
    1653281691 5003456574 1653275974 1653299602 5003456897 1653308254
    5003467373 1653298828 5003458299 1653299826 5003462911

Patch Files:
    /usr/conf/lib/libnfs.a
    /usr/conf/lib/libhp-ux.a(cachefs.o)
    /usr/conf/lib/libhp-ux.a(nfs.o)
    /usr/conf/lib/libhp-ux.a(nfs_iface.o)
    /usr/conf/lib/onc_debug.o
    /usr/conf/master.d/nfs

what(1) Output:
    /usr/conf/lib/libnfs.a:
        svc_kudp.c $Date: 99/12/07 17:43:14 $ $Revision: 1.7.112.5 $ PATCH_10.20 PHNE_20313 700/800
        svc.c $Date: 98/11/13 13:37:28 $ $Revision: 1.8.112.16 $ PATCH_10.20 PHNE_16924
        kudp_fsend.c $Date: 99/08/13 15:42:23 $ $Revision: 1.4.112.3 $ PATCH_10.20 PHNE_19426 700/800
        clnt_kudp.c $Date: 99/12/07 17:41:23 $ $Revision: 1.9.112.29 $ PATCH_10.20 PHNE_20313 700/800
        hpautofs.c $Date: 99/11/18 15:08:46 $ $Revision: 1.1.112.4 $ PATCH_10.20 PHNE_20313 700/800
        auto_subr.c $Date: 99/08/24 16:05:14 $ $Revision: 1.1.112.7 $ PATCH_10.20 PHNE_19426 700/800
        auto_vnops.c $Date: 99/10/22 11:14:47 $ $Revision: 1.1.112.6 $ PATCH_10.20 PHNE_20021 700/800
        cachefs_vnops.c $Date: 99/12/07 18:19:37 $ $Revision: 1.1.112.7 $ PATCH_10.20 PHNE_20313 700/800
        cachefs_vfsops.c $Date: 99/12/06 13:52:33 $ $Revision: 1.1.112.4 $ PATCH_10.20 PHNE_20313 700/800
        cachefs_module.c $Date: 99/12/06 13:49:30 $ $Revision: 1.1.112.4 $ PATCH_10.20 PHNE_20313 700/800
        cachefs_cnode.c $Date: 99/12/06 13:44:21 $ $Revision: 1.1.112.5 $ PATCH_10.20 PHNE_20313 700/800
        hpnfs_vnops.c $Date: 99/12/07 17:39:25 $ $Revision: 1.1.112.26 $ PATCH_10.20 PHNE_20313 700/800
        nfs_vfsops3.c $Date: 00/01/07 16:41:29 $ $Revision: 1.1.112.11 $ PATCH_10.20 PHNE_20313 700/800
        nfs_vnops3.c $Date: 99/12/07 17:34:52 $ $Revision: 1.1.112.15 $ PATCH_10.20 PHNE_20313 700/800
        nfs_subr3.c $Date: 99/11/23 14:45:30 $ $Revision: 1.1.112.8 $ PATCH_10.20 PHNE_19426 700/800
        nfs_server3.c $Date: 99/12/07 17:53:49 $ $Revision: 1.1.112.8 $ PATCH_10.20 PHNE_20313 700/800
        nfs_export3.c $Date: 00/01/04 13:58:42 $ $Revision: 1.1.112.2 $ PATCH_10.20 PHNE_20313 700/800
        klm_lckmgr.c $Revision: 1.5.112.3 $
        klm_kprot.c $Revision: 1.1.112.2 $
        nfs_vfsops.c $Date: 00/01/07 16:45:56 $ $Revision: 1.1.112.9 $ PATCH_10.20 PHNE_20313 700/800
        nfs_vnops.c $Date: 99/12/07 17:22:34 $ $Revision: 1.3.112.55 $ PATCH_10.20 PHNE_20313 700/800
        nfs_subr.c $Date: 99/11/23 14:47:56 $ $Revision: 1.1.112.31 $ PATCH_10.20 PHNE_14071
        nfs_server.c $Date: 99/08/13 15:41:02 $ $Revision: 1.3.112.33 $ PATCH_10.20 PHNE_19426 700/800
        nfs_fcntl.c $Revision: 1.1.112.18 $
    /usr/conf/lib/libhp-ux.a(cachefs.o): None
    /usr/conf/lib/libhp-ux.a(nfs.o): None
    /usr/conf/lib/onc_debug.o: None
    /usr/conf/lib/libhp-ux.a(nfs_iface.o): None
    /usr/conf/master.d/nfs:
        $Revision: 1.2.113.3 $

cksum(1) Output:
    1760544344 646382 /usr/conf/lib/libnfs.a
    566132716 191044 /usr/conf/lib/libhp-ux.a(cachefs.o)
    3631930508 166548 /usr/conf/lib/libhp-ux.a(nfs.o)
    3472542863 2012 /usr/conf/lib/libhp-ux.a(nfs_iface.o)
    566132716 191044 /usr/conf/lib/onc_debug.o
    1421096347 4241 /usr/conf/master.d/nfs

Patch Conflicts: None

Patch Dependencies:
    s700: 10.20: PHKL_16750 PHNE_17731 PHNE_18915
    s800: 10.20: PHKL_16751 PHNE_17730 PHNE_18915

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHKL_8545 PHKL_8544 PHKL_9156 PHKL_9155 PHNE_9864 PHNE_11009 PHNE_11008 PHNE_11387 PHNE_11386 PHNE_12428 PHNE_12427 PHNE_13236 PHNE_13235 PHNE_13669 PHNE_13668 PHNE_13824 PHNE_13823 PHNE_13834 PHNE_13833 PHNE_14072 PHNE_14071 PHNE_15042 PHNE_15041 PHNE_15864 PHNE_15863 PHNE_16925 PHNE_16924 PHNE_17620 PHNE_17619 PHNE_18962 PHNE_18961 PHNE_19426 PHNE_20021

Equivalent Patches: None

Patch Package Size: 1250 KBytes

Installation Instructions:
Please review all instructions and the Hewlett-Packard SupportLine User Guide or your Hewlett-Packard support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties, before installing this patch.
------------------------------------------------------------
1. Back up your system before installing a patch.
2. Login as root.
3. Copy the patch to the /tmp directory.
4. Move to the /tmp directory and unshar the patch:
       cd /tmp
       sh PHNE_20313
5a. For a standalone system, run swinstall to install the patch:
       swinstall -x autoreboot=true -x match_target=true \
           -s /tmp/PHNE_20313.depot
By default swinstall will archive the original software in /var/adm/sw/patch/PHNE_20313.
If you do not wish to retain a copy of the original software, you can create an empty file named /var/adm/sw/patch/PATCH_NOSAVE. WARNING: If this file exists when a patch is installed, the patch cannot be deinstalled. Please be careful when using this feature. It is recommended that you move the PHNE_20313.text file to /var/adm/sw/patch for future reference. To put this patch on a magnetic tape and install from the tape drive, use the command: dd if=/tmp/PHNE_20313.depot of=/dev/rmt/0m bs=2k Special Installation Instructions: PHNE_17619: PHNE_17620: After installation of this patch, the NFS configuration file will have been modified to control the behavior of the system, and, as an ascii file, can be altered by the system administrator. The environment variable names and values defined by this patch (and the resulting system behavior based on those values) are as follows: For systems not previously running NFS Version 3 either via patch installation or by installation of the ACE/HWE Networking Bundles (B6378AA or B6379AA), the NFS configuration file will now contain AUTOFS=0 MOUNT_VER=2 MOUNTD_VER=2 This indicates that only the Automounter may be used (as previously set by the AUTOMOUNT variable in this file) and that the AutoFS product will not be used. In addition, the default client protocol requested at mount time will be PV2 (MOUNT_VER=2), and only PV2 will be supported by the server (MOUNTD_VER=2). For systems currently running NFS Version 3 either via patch installation or by installation of the ACE/HWE Networking Bundles (B6378AA or B6379AA), the NFS configuration file will contain AUTOFS=1 MOUNT_VER=3 MOUNTD_VER=3 This indicates that AutoFS will be used, if the previously defined AUTOMOUNT variable is set to 1. In addition, the default client protocol requested at mount time will be PV3, and the default protocol supported for exported file systems will be PV3. PV2 will be supported as well in both instances. NOTE: When using AutoFS (AUTOFS=1) please verify that execute ("x") file access is turned OFF for all existing map files ("auto_master", "auto.home", etc.). Any map file which has execute access set will be interpreted by AutoFS as an executable map and not as a regular map file. Execute access to the "/etc/auto.home" file can, for example, be removed with the chmod command: chmod a-x /etc/auto.home The system administrator may modify the NFS variables as is deemed appropriate, but selecting PV3 or AutoFS should not be attempted unless all patches included in the Networking Bundle have been installed. 
These patches have been listed in the April 15th DataComm Newsletter, and are also listed below (using the latest superseding patches):

In the s700 ACE Networking Bundle:
    hp-ux_patches/s700_800/10.X/PHCO_16591
    hp-ux_patches/s700_800/10.X/PHCO_18018
    hp-ux_patches/s700_800/10.X/PHCO_14645
    hp-ux_patches/s700_800/10.X/PHCO_15336
    hp-ux_patches/s700_800/10.X/PHCO_18135
    hp-ux_patches/s700_800/10.X/PHCO_15262
    hp-ux_patches/s700_800/10.X/PHCO_15263
    hp-ux_patches/s700_800/10.X/PHCO_15337
    hp-ux_patches/s700_800/10.X/PHCO_16809
    hp-ux_patches/s700_800/10.X/PHCO_15339
    hp-ux_patches/s700_800/10.X/PHCO_15340
    hp-ux_patches/s700_800/10.X/PHCO_15341
    hp-ux_patches/s700_800/10.X/PHCO_16874
    hp-ux_patches/s700_800/10.X/PHCO_15343
    hp-ux_patches/s700_800/10.X/PHCO_15344
    hp-ux_patches/s700_800/10.X/PHCO_17389
    hp-ux_patches/s700_800/10.X/PHCO_10947
    hp-ux_patches/s700_800/10.X/PHCO_17699
    hp-ux_patches/s700/10.X/PHCO_13851
    hp-ux_patches/s700/10.X/PHKL_17573
    hp-ux_patches/s700/10.X/PHKL_8693
    hp-ux_patches/s700/10.X/PHKL_18197
    hp-ux_patches/s700/10.X/PHKL_16750
    hp-ux_patches/s700/10.X/PHKL_15240
    hp-ux_patches/s700/10.X/PHKL_16959
    hp-ux_patches/s700/10.X/PHKL_18439
    hp-ux_patches/s700/10.X/PHKL_17253
    hp-ux_patches/s700/10.X/PHNE_17731
    hp-ux_patches/s700/10.X/PHNE_16924
    hp-ux_patches/s700/10.X/PHNE_16999
    hp-ux_patches/s700_800/10.X/PHNE_17098
    hp-ux_patches/s700_800/10.X/PHNE_15159
    hp-ux_patches/s700_800/10.X/PHNE_16692

In the s800 HWE Networking Bundle:
    hp-ux_patches/s700_800/10.X/PHCO_16591
    hp-ux_patches/s700_800/10.X/PHCO_17389
    hp-ux_patches/s700_800/10.X/PHCO_18018
    hp-ux_patches/s700_800/10.X/PHCO_14645
    hp-ux_patches/s700_800/10.X/PHCO_15336
    hp-ux_patches/s700_800/10.X/PHCO_18135
    hp-ux_patches/s700_800/10.X/PHCO_15262
    hp-ux_patches/s700_800/10.X/PHCO_15263
    hp-ux_patches/s700_800/10.X/PHCO_15337
    hp-ux_patches/s700_800/10.X/PHCO_15344
    hp-ux_patches/s700_800/10.X/PHCO_16809
    hp-ux_patches/s700_800/10.X/PHCO_15339
    hp-ux_patches/s700_800/10.X/PHCO_15340
    hp-ux_patches/s700_800/10.X/PHCO_15341
    hp-ux_patches/s700_800/10.X/PHCO_16874
    hp-ux_patches/s700_800/10.X/PHCO_15343
    hp-ux_patches/s700_800/10.X/PHCO_17699
    hp-ux_patches/s700_800/10.X/PHCO_10947
    hp-ux_patches/s800/10.X/PHCO_14016
    hp-ux_patches/s800/10.X/PHKL_17574
    hp-ux_patches/s800/10.X/PHKL_8694
    hp-ux_patches/s800/10.X/PHKL_16751
    hp-ux_patches/s800/10.X/PHKL_18198
    hp-ux_patches/s800/10.X/PHKL_15247
    hp-ux_patches/s800/10.X/PHKL_18440
    hp-ux_patches/s800/10.X/PHKL_16957
    hp-ux_patches/s800/10.X/PHKL_17254
    hp-ux_patches/s800/10.X/PHNE_17730
    hp-ux_patches/s800/10.X/PHNE_16925
    hp-ux_patches/s700_800/10.X/PHNE_17098
    hp-ux_patches/s700_800/10.X/PHNE_15159
    hp-ux_patches/s700_800/10.X/PHNE_16692
    hp-ux_patches/s800/10.X/PHNE_18174

-------------------

A performance enhancement which was originally introduced in PHNE_15863/4 addressed a problem where an NFS client sent READ calls over the wire prior to sending WRITEs even in cases where the READ was not necessary. Part of the fix for this problem was to introduce a new kernel variable called "nfs_no_read_before_write". By default, the no_read_before_write behavior, which avoids redundant read requests to an NFS server when appending to a file, is on. To turn it off, the system manager must do the following using adb:

    echo "nfs_no_read_before_write/W 0" | \
        adb -k -w /stand/vmunix /dev/mem

One good reason to turn this flag off is when there are multiple processes on a single client writing to the same file over an NFS mount point without holding a lock.
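If you want to confirm the current setting before or after changing it, the variable can also be read back with adb in read-only mode. The following invocation is offered only as an illustrative sketch (it is not part of the patch text itself); the /D format displays the variable as a decimal value:

    echo "nfs_no_read_before_write/D" | \
        adb -k /stand/vmunix /dev/mem

Given that the command above writes 0 to disable the behavior, a value of 0 indicates it has been turned off, and a nonzero value indicates the default no-read-before-write behavior is presumably still in effect.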
-------------------------------------------------------- The following is a useful process for applying more than one patch while only requiring a single reboot after the final patch installation: 1) Get the individual depots over into /tmp. 2) Make a new directory to contain the set of patches: mkdir /tmp/DEPOT # For example 3) For each patch "PHKL_xxxx": swcopy -s /tmp/PHKL_xxxx.depot \* @ /tmp/DEPOT 4) swinstall -x match_target=true -x autoreboot=true \ -s /tmp/DEPOT
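As an illustration only, suppose PHNE_20313 and its s700 dependency PHKL_16750 have both been unsharred into /tmp as described in the installation instructions above; the combined procedure might then look like the following (the specific patch names are just examples - substitute whatever depots you are actually staging):

    mkdir /tmp/DEPOT
    swcopy -s /tmp/PHNE_20313.depot \* @ /tmp/DEPOT
    swcopy -s /tmp/PHKL_16750.depot \* @ /tmp/DEPOT
    swinstall -x match_target=true -x autoreboot=true \
        -s /tmp/DEPOT

Because everything is staged into the single /tmp/DEPOT depot first, only the final swinstall triggers a reboot, which is the point of the procedure described above.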