Patch Name: PHNE_21704 Patch Description: s700_800 10.20 NFS Kernel General Rel & Perf Patch Creation Date: 00/05/17 Post Date: 00/06/20 Warning: 00/07/12 - This Critical Warning has been issued by HP. - PHNE_21108 introduced behavior on NFS clients where extra NULL characters are observed at the end of a file read from the NFS server. The behavior is observed when a file is modified on the NFS server to be smaller than it was originally. Although the contents of the file is correct on the NFS server, the NFS client will continue to display the original size of the file and will pad the file contents with NULL characters. - The problem also exists with superseding patch PHNE_21704. - In order to avoid the problem of an NFS client displaying incorrect file contents, HP recommends that PHNE_21704 and PHNE_21108 be removed from NFS clients. The previous NFS patch, PHNE_20957, does not exhibit this same behavior. In order to address as many known issues as possible, PHNE_20957 should be installed after PHNE_21704 and PHNE_21108 are removed. - Although PHNE_20957 does not exhibit this same problem, there are two other issues with the patch you should be aware of: - CacheFS debug messages are logged to syslog and dmesg. This is mostly an annoyance, unless sufficient CacheFS activity is generated to cause the syslog file to fill the /var file system. Care should be taken on systems using CacheFS to monitor the size of the /var/adm/syslog/syslog.log file. - Simultaneous writes by two or more processes to a file over NFS may result in NULL characters embedded in the file. This problem has existed with NFS for an extended period on HP-UX 10.20 and has only been experienced by one customer. To avoid the problem, the NFS file system can be mounted with the '-o noac' option to suppress attribute and name caching. Hardware Platforms - OS Releases: s700: 10.20 s800: 10.20 Products: N/A Filesets: OS-Core.CORE-KRN Automatic Reboot?: Yes Status: General Superseded With Warnings Critical: No (superseded patches were critical) PHNE_20021: PANIC panic in internal test - cachefs with autofs. 
PHNE_16924: PANIC CORRUPTION Panic due to buffer cache corruption PHNE_16925: PANIC CORRUPTION Panic due to buffer cache corruption PHNE_15863: PANIC HANG CORRUPTION Hang encountered in nfs_fsync() Panic with m_free(), sbdrop(), and clntkudp_callit() Panic with truncated file Panic with nfs_purge_caches(), binvalfree(), bwrite(), nfs_strategy(), do_bio(), and nfswrite() recursion Hang encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed) Hang encountered because of rfs_readdirplus3() memory leak (this applies only to systems that have the ACE 2 software bundle installed) Hang encountered with cachefs (this applies only to systems that have the ACE 2 software bundle installed) Panic with rfs3_readlink_free() (this applies only to systems that have the ACE 2 software bundle installed) Corruption encountered with reads (this applies only to systems that have the ACE 2 software bundle installed) PHNE_15864: PANIC HANG CORRUPTION Hang encountered in nfs_fsync() Panic with m_free(), sbdrop(), and clntkudp_callit() Panic with truncated file Panic with nfs_purge_caches(), binvalfree(), bwrite(), nfs_strategy(), do_bio(), and nfswrite() recursion Hang encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed) Hang encountered because of rfs_readdirplus3() memory leak (this applies only to systems that have the ACE 2 software bundle installed) Hang encountered with cachefs (this applies only to systems that have the ACE 2 software bundle installed) Panic with rfs3_readlink_free() (this applies only to systems that have the ACE 2 software bundle installed) Corruption encountered with reads (this applies only to systems that have the ACE 2 software bundle installed) PHNE_15041: PANIC Panic encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed) PHNE_15042: PANIC Panic encountered with autofs (this applies only to systems that have the ACE 2 software bundle installed) PHNE_14071: PANIC HANG Panic encountered in ku_sendto_mbuf() Panic encountered in ckuwakeup() Panic encountered in vn_rele() Hang encountered in nfs_fsync() PHNE_14072: PANIC HANG Panic encountered in ku_sendto_mbuf() Panic encountered in ckuwakeup() Panic encountered in vn_rele() Hang encountered in nfs_fsync() PHNE_13823: HANG PANIC Hang encountered in nfs_fsync() Panic encountered in clnt_kudpinit() PHNE_13824: HANG PANIC Hang encountered in nfs_fsync() Panic encountered in clnt_kudpinit() PHNE_13668: HANG Hang encountered in Instant Ignition PHNE_13669: HANG Hang encountered in instant ignition PHNE_13235: PANIC CORRUPTION Panic encountered in nfs_prealloc() Corruption encountered with retransmissions PHNE_13236: PANIC CORRUPTION Panic encountered in nfs_prealloc() Corruption encountered with retransmissions PHNE_12427: HANG Hang encountered with exportfs command PHNE_12428: HANG Hang encountered with exportfs command PHNE_11386: HANG Hang encountered in NFS IO PHNE_11387: HANG Hang encountered in NFS IO PHNE_11008: CORRUPTION PANIC Overwritten rnode error in do_bio() Rename of jfs file system from PCNFS causes panic Data page fault in ckuwakeup() PHNE_11009: CORRUPTION PANIC Overwritten rnode error in do_bio() Rename of jfs file system from PCNFS causes panic Data page fault in ckuwakeup() PHKL_8544: HANG PHKL_8545: HANG Path Name: /hp-ux_patches/s700_800/10.X/PHNE_21704 Symptoms: PHNE_21704: 1. 
File System may eventually get full due to several debug messages in dmesg and syslog when Cachefs is used after installing PHNE_21108, PHNE_20957 or PHNE_20313. The debug messages are like: vmunix: cachefs_getattr: backvp: vmunix: cachefs_getattr: vap->va_nodeid: JAGad08894 SR:8606139585 PHNE_21108: 1. AutoFS incorrectly handles certain indirect hierarchical maps 2. Improved read-ahead algorithm to enhance READ performance. 3. Data corruption occurs in the form of embedded NULL characters in files written to simultaneously by more than one process. 4. nfsstat(1M) output does not accurately report retransmissions. 5. ls -s reports incorrect number of blocks for files on NFS PV3 server. 6. The link(1M) command does not return an error on NFS PV3 mounted directory when trying to create a new linked file and an existing linked file is already present. 7. A system panic occurs when a process writes to a file and then immediately tries to perform another file operation, such as truncating the file. PHNE_20957: 1. Glance tool fails to start after installing PHNE_20313. PHNE_20313: 1. Automounter hangs when trying to mount cachefs filesystem. 2. The timeo option in the mount command does not have any effect when set. 3. Untar-ing a large quantity of files over NFS can be slow. 4. NFS version 3 client is very slow when performing a write operation to a Celerra server. 5. 'maxcnodes' is a constant but should be made a configurable variable to improve cachefs performance. 6. fuser does not work over Cachefs mountpoint. 7. diff(1) failed on AIX client with HP 10.20 server due to name of a regular file passed to pathconf(2) call. 8. MP_SPINLOCK was not taken in nfs3_do_bio()which could have caused a credential related panic. 9. NFS3ERR_TOOSMALL reported as a last packet when it should not be. SUNs solstice PC-NFS-client and DEC-clients cannot handle this error message and fail while accessing the directory. 10. mount command fails with kernel data page fault panic instead of failing with ETIMEDOUT. 11. Enhanced NFS version 3 to support the full 32k read/write block size. PHNE_20021: 1. Panic found during internal testing in cachefs with autofs. PHNE_19426: 1. Client process can hang forever over NFS. This can occur when an NFS client generates a high amount of write requests, and the NFS server is very busy. Once the process is hung it cannot be killed. 2. Client might see many networking timeouts using NFS version 3. 3. NFS file creation fails with EACCESS when open() is called with O_TRUNC|O_EXCL. 4. mknod of /dev/rroot (c 255 0xffffff) fails over NFS mount. 5. 10.20: implicit UDP bind results in using inp_lport==0 in ku_fastsend. 6. NFS client panic when mounting using smaller than 1k block size. 7. Loading executable file or running memory map applications over NFS will fail when NFS read/write block size is not set to 4k increment. 8. Autofs not always triggering a re-mount with concurrent mounts and umounts, incorrectly gets error ENOENT - "No such file or directory" 9. Autofs hangs even with PHNE_17200 when running scripts triggering concurrent mount/umount 10. Autofs - cp(1) to inactive direct mount fails with error EOPNOTSUPP - "Operation not supported" 11. Autofs - mv command fails in direct mnt with error ENOSYS - "Function is not available" PHNE_18961: 1. Some Sun's clients using Automount might fail to mount when HPUX's server setting MOUNTD_VER to 2 in /etc/rc.config.d/nfsconf 2. Memory leak in the NFS client system when using NFS file locking. PHNE_18962: 1. 
Some Sun's clients using Automount might fail to mount when HPUX's server setting MOUNTD_VER to 2 in /etc/rc.config.d/nfsconf 2. Memory leak in the NFS client system when using NFS file locking. PHNE_17619: NOTE: This ONC+/NFS patch has been reconstructed to deliver the ONC+/NFS Networking ACE products (NFS PV3, AutoFS, and CacheFS). In addition, the older NFS Automounter will be delivered, along with configuration scripts (in /etc/rc.config.d/nfsconf) necessary to select AutoFS, or the Automounter, or neither. Please read the NOTE in Special Installation Instructions for more details. 1. With a umask of 027, a write followed by a sleep followed by another write (all over an NFS mount) fails with a "Permission denied" error. 2. Autofs hangs when manual unmounts are used 3. NFS PV3 only supports up to 8k read/write transfer size 4. NFS PV3 performs poorly when writing small records in synchronous mode. 5. Cannot copy NFS PV3 Large files greater than 2GB from the NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in 10.20 ACE PV3 7. System hang/sleep at clntkudp_callit() at outbuf and not waking up 8. NFS file can be removed even when the file is still being referenced. 9. The system may panic with a kernel stack overflow. 10. Root trying to write to an NFS mounted file that is not writable by root can reset the file size to zero on 10.20 PV3. 11. 10.20 Client writes NULL characters into file even though client application did not generate them. 12. System panic while doing reading or writing over cachefs mount point for a long period of time. PHNE_17620: NOTE: This ONC+/NFS patch has been reconstructed to deliver the ONC+/NFS Networking ACE products (NFS PV3, AutoFS, and CacheFS). In addition, the older NFS Automounter will be delivered, along with configuration scripts (in /etc/rc.config.d/nfsconf) necessary to select AutoFS, or the Automounter, or neither. Please read the NOTE in Special Installation Instructions for more details. 1. With a umask of 027, a write followed by a sleep followed by another write (all over an NFS mount) fails with a "Permission denied" error. 2. Autofs hangs when manual unmounts are used 3. NFS PV3 only supports up to 8k read/write transfer size 4. NFS PV3 performs poorly when writing small records in synchronous mode. 5. Cannot copy NFS PV3 Large files greater than 2GB from the NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in 10.20 ACE PV3 7. System hang/sleep at clntkudp_callit() at outbuf and not waking up 8. NFS file can be removed even when the file is still being referenced. 9. The system may panic with a kernel stack overflow. 10. Root trying to write to an NFS mounted file that is not writable by root can reset the file size to zero on 10.20 PV3. 11. 10.20 Client writes NULL characters into file even though client application did not generate them. 12. System panic while doing reading or writing over cachefs mount point for a long period of time. PHNE_16924: The system may panic with a data page fault. NOTE: Patch PHNE_16924 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_16924 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_16925: The system may panic with a data page fault. NOTE: Patch PHNE_16925 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. 
Otherwise, patch PHNE_16925 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15863: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a data page fault when an NFS operation is interrupted on a uniprocessor system. 3. The system may panic with a data page fault when reading a file that has been truncated. 4. The system may panic with a kernel stack overflow. 5. Syslog shows the message: vxfs: mesg 016: vx_ilisterr 6. Quotas are not honored on a diskless client in the same way that they are honored on its server under certain circumstances. 7. The setting of NFS file/directory modification and access time stamps is inconsistent. 8. An HP NFS server does not permit NFS file/directory time stamps to be set from a non-HP NFS client. 9. Autofs hangs when remounting hierarchical autofs mount points (this applies only to systems that have the ACE 2 software bundle installed). 10. Autofs hangs when running Netscape (this applies only to systems that have the ACE 2 software bundle installed). 11. System hangs when using NFS PV3 as a server (this applies only to systems that have the ACE 2 software bundle installed). 12. Cachefs hangs (this applies only to systems that have the ACE 2 software bundle installed). 13. When an archive library is in an NFS PV3 mounted directory, nm gives the "bad magic" error string after listing all symbols (the symbols list is fine). This problem does not occur with NFS PV2 (this applies only to systems that have the ACE 2 software bundle installed). 14. The system may panic with a data page fault on an NFS PV3 server and shows rfs3_readlink_free() in the panic stack trace (this applies only to systems that have the ACE 2 software bundle installed). 15. A read() after an lseek() past EOF is successful (this applies only to systems that have the ACE 2 software bundle installed). 16. A cp over an NFS PV3 mount encounters an error: Value too large to be stored in data type NFS PV2 does not encounter the error (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15863 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15863 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15864: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a data page fault when an NFS operation is interrupted on a uniprocessor system. 3. The system may panic with a data page fault when reading a file that has been truncated. 4. The system may panic with a kernel stack overflow. 5. Syslog shows the message: vxfs: mesg 016: vx_ilisterr 6. Quotas are not honored on a diskless client in the same way that they are honored on its server under certain circumstances. 7. The setting of NFS file/directory modification and access time stamps is inconsistent. 8. An HP NFS server does not permit NFS file/directory 9. Autofs hangs when remounting hierarchical autofs mount points (this applies only to systems that have the ACE 2 software bundle installed). 10. Autofs hangs when running Netscape (this applies only to systems that have the ACE 2 software bundle installed). 11. System hangs when using NFS PV3 as a server (this applies only to systems that have the ACE 2 software bundle installed). 12. Cachefs hangs (this applies only to systems that have the ACE 2 software bundle installed). 13. 
When an archive library is in an NFS PV3 mounted directory, nm gives the "bad magic" error string after listing all symbols (the symbols list is fine). This problem does not occur with NFS PV2 (this applies only to systems that have the ACE 2 software bundle installed). 14. The system may panic with a data page fault on an NFS PV3 server and shows rfs3_readlink_free() in the panic stack trace (this applies only to systems that have the ACE 2 software bundle installed). 15. A read() after an lseek() past EOF is successful (this applies only to systems that have the ACE 2 software bundle installed). 16. A cp over an NFS PV3 mount encounters an error: Value too large to be stored in data type NFS PV2 does not encounter the error (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15864 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15864 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15041: 1. Poor NFS performance over 100BT. 2. System panics with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Application fails (this applies only to systems that have the ACE 2 software bundle installed). 4. Application fails (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15041 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15041 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15042: 1. Poor NFS performance over 100BT. 2. System panics with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Application fails (this applies only to systems that have the ACE 2 software bundle installed). 4. Application fails (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15042 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15042 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_14071: 1. The system may panic with a data memory protection fault. 2. Processes may hang when trying to access files across NFS. 3. The system may panic with a data page fault. 4. The system may panic with a vn_rele. NOTE: Patch PHNE_14071 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_14071 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_14072: 1. The system may panic with a data memory protection fault. 2. Processes may hang when trying to access files across NFS. 3. The system may panic with a data page fault. 4. The system may panic with a vn_rele. NOTE: Patch PHNE_14072 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_14072 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_13833: This patch is part of the 10.20 ACE 2 bundle which adds networking enhancements to 10.20. New networking features supported in ACE 2 include NFS Version 3.0, AutoFS, and CacheFS. 
PHNE_13834: This patch is part of the 10.20 ACE 2 bundle which adds networking enhancements to 10.20. New networking features supported in ACE 2 include NFS Version 3.0, AutoFS, and CacheFS. PHNE_13823: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a spinlock deadlock. PHNE_13824: 1. Processes may hang when trying to access files across NFS. 2. The system may panic with a spinlock deadlock. PHNE_13668: Client IO from root is denied across an NFS mount point, causing hang in instant ignition and NFS clients. PHNE_13669: Client IO from root is denied across an NFS mount point causing hang in instant ignition and NFS clients. PHNE_13235: 1. JFS Servers with NFS clients may see poor performance when doing large file transfers across NFS 2. When using swap across an NFS mount, and the mounted disk becomes full, the system may panic. 3. Data corruption may occur when a transmission errors on a crowded network force retransmissions. PHNE_13236: 1. JFS Servers with NFS clients may see poor performance when doing large file transfers across NFS 2. When using swap across an NFS mount, and the mounted disk becomes full, the system may panic. 3. Data corruption may occur when a transmission errors on a crowded network force retransmissions. PHNE_12427: The exportfs -u command hangs. PHNE_12428: The exportfs -u command hangs. PHNE_11386: Hang when doing IO across an NFS mount PHNE_11387: Hang when doing IO across an NFS mount PHNE_11008: 1. Some instances of NFS writes (such as cp from an NFS client) may complete successfully even when errors occur. 2. Renaming a VxFS file to another VxFS from a PCNFS client causes a panic. 3. Disabling anonymous access is not recognized by PCNFS clients, allowing them to run as a priviledged user. 4. Data page faults caused in the client 5. Directories in an NFS mounted file system are created with 000 permissions value. PHNE_11009: 1. Some instances of NFS writes (such as cp from an NFS client) may complete successfully even when errors occur. 2. Renaming a VxFS file to another VxFS from a PCNFS client causes a panic. 3. Disabling anonymous access is not recognized by PCNFS clients, allowing them to run as a priviledged user. 4. Data page faults caused in the client PHNE_9864: Directories in an NFS mounted file system are created with 000 permissions value. PHKL_9155: 1. Add write-gathering support for NFS servers. 2. The length of a timeout for an NFS request may become extremely long (on the order of minutes). PHKL_9156: 1. Add write-gathering support for NFS servers. 2. The length of a timeout for an NFS request may become extremely long (on the order of minutes). PHKL_8544: 1. Data page fault in an MP environment due to synchronization problems with the client's biod's. 2. A panic within kernel RPC (in svc_getreqset) in an MP environment is generated due to another synchronization problem. 3. System hangs caused in large systems. PHKL_8545: 1. Data page fault in an MP environment due to synchronization problems with the client's biod's. 2. A panic within kernel RPC (in svc_getreqset) in an MP environment is generated due to another synchronization problem. 3. System hangs caused in large systems. Defect Description: PHNE_21704: 1. Code had some printf messages which filled the syslog and dmesg when CacheFs is used. JAGad08894 SR:8606139585 Resolution: The printf messages which were causing this have been deleted. PHNE_21108: 1. 
When hierarchical indirect maps are used on 10.20, these maps fail with a mkdir error from automountd. The same maps work on 11.0 systems. Resolution: Allow indirect map lookups initiated by the daemon to always succeed. Affected module: auto_vnops.c 2. The new read-ahead algorithm calculates the number of blocks to read ahead instead of always reading two blocks ahead. The maximum number of blocks to read ahead is 4; if the calculated read-ahead count is larger than the NFS read-ahead maximum, the maximum is applied. Resolution: Enhancement. The rw3vp() routine in nfs3/hpnfs_vnops.c and the rwvp() routine in nfs/nfs_vnops.c have been changed to implement the new read-ahead calculation. 3. When two processes write to the same file, NULL characters are written to the file. This problem does not occur when attribute caching is disabled ('-o noac') at mount time. The NFS write code has a race condition where the second process zeroes out the buffer the first process has written to, if the write size is less than the NFS block size of 8K. Resolution: A partial fix was put in to check whether r_size has increased while the process waits for an empty buffer, and to not zero out the buffer if r_size has increased. That condition was coded in error, so the buffer was still zeroed out if r_size had increased. The if condition has been fixed to not zero out the buffer if r_size is larger than its previous value. The rw3vp() routine in nfs3/hpnfs_vnops.c and the rwvp() routine in nfs/nfs_vnops.c have been changed. The second race condition is in the getattr code, where r_size is updated with the size received from the server even when the client's r_size is larger and more accurate. r_size should be updated only if r_size is smaller than the server file size and the client has dirty buffers to flush. The nfsgetattr3() routine in nfs3/nfs_vnops3.c and the nfsgetattr() routine in nfs/nfs_vnops.c have been changed.
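As an illustration of the workaround mentioned in item 3 (and recommended in the Critical Warning above for the simultaneous-write problem), attribute caching can be disabled for the affected file system at mount time. The server name and mount point below are placeholders, and '-o noac' carries a performance cost because attribute requests then always go to the server:

   # Mount the NFS file system with attribute caching disabled to avoid the
   # concurrent-write NULL-character corruption (hostname and paths are examples).
   mount -F nfs -o noac nfsserver:/export/data /mnt/data

   # An equivalent /etc/fstab entry might look like this (illustrative only):
   # nfsserver:/export/data /mnt/data nfs noac 0 0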
4. An NFS client was experiencing many retransmissions of NFS packets (viewable in a nettl trace) but nfsstat was not incrementing the retrans counter. This was because the retrans counter was only being incremented if a packet we tried to send on the wire failed due to a resource problem. But the retrans counter should be incremented whenever the call failed, regardless of why it failed. Resolution: Code change. In clntkudp_callit() in rpc/clnt_kudp.c the retrans increment was moved out of the RPC_SYSTEMERROR if statement. 5. For each directory argument, the ls command lists the contents of the directory. The -s option gives the size in blocks for each entry. If the ls -s command was run on the same directory on the client and the server, different block sizes were reported for the files. The problem was due to the fact that the MAXBSIZE used for the calculation of va_blksize in fattr_to_vattr3() in nfs3/nfs_subr3.c was 64K, unlike 11.0 code or Sun code where it is 8K. Resolution: The solution was to include onc+sys/param.h in nfs_subr3.c, where MAXBSIZE = 8K. Previously only h/param.h was included, where MAXBSIZE = 64K. 6. The link(2) call does not return EEXIST on an NFS PV3 mounted directory even though the link target already exists. This was due to the fact that in most error cases we were branching to out:, where the error condition was reset to NFS3_OK. This erased the previous error condition. Resolution: Code change in rfs_link3(). Setting the status to NFS3_OK has been moved out of the out: label. 7. Holding a spinlock and calling binvalfree() causes a panic. Resolution: Code change. Release the spinlock before calling binvalfree() in nfs_setattr3(). PHNE_20957: 1. A kernel variable had been changed to support the NFS 32k read/write block size. Glance made reference to this variable and fails to start when it cannot find it, even though Glance does not use the variable for NFS statistics. Resolution: Reintroduce this variable into the NFS kernel to allow Glance to start correctly. PHNE_20313: 1. With PHNE_19426, autofs controlling a cachefs indirect map will hang on mount. Resolution: Improve the automountd detection logic so that it detects mount calling mount or unmount calling unmount. Change applied to the AUTOFS_PROCESS_IS_AUTOMOUNTED macro in the kernel. 2. The timeo option in the mount command does not have any effect when set. Resolution: A change in the NFS kernel to make use of this option passed down by the NFS mount command. 3. Untarring a large quantity of files over NFS can be slow. NFS sometimes needs to invalidate its client cache to ensure that the cache is not stale (binvalfree). It is in this path that NFS makes some unnecessary calls to binvalfree, which slows down the untar operation. Resolution: A performance enhancement has been made to the NFS kernel to avoid some calls to binvalfree. 4. An NFS version 3 client is very slow when performing a write operation to a Celerra server. The Celerra server doesn't implement post-attribute return on error, which is part of the NFS version 3 protocol but is not a required feature. What is required is that a client must handle this scenario correctly, and this is what our NFS client failed to do. Resolution: A change in the NFS path has been made to handle the scenario where a server doesn't return post attributes on error. 5. 'maxcnodes' is a constant (= MAXCNODES) whose value is 500. Sometimes this is too low a value when a dedicated file system is used for CacheFS. Resolution: The value of the 'maxcnodes' variable will now be a computed value which defaults to 50% of ninode. One can change the value with adb. As a safeguard, if the user does not like the changed value, maxcnodes can be set to 0x7fffffff and on the next CacheFS mount the value will be set to 50% of ninode again. 6. The fuser command does not work over a CacheFS mount point, though fuser -c does. As a result the ServiceGuard scripts fail to work because fuser -k cannot kill the processes keeping the mount points busy. The root cause is that the v_nodeid field in the cnode is not set. Resolution: Set the v_nodeid field in the cnode with the fid returned in the attributes. Also, the kernel parameter "pi_newmnttype" has to be set. One can use adb to turn on pi_newmnttype, or this can be done with the command onccompat -n.
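The maxcnodes and pi_newmnttype settings referenced in items 5 and 6 are kernel variables; the text above notes that they can be changed with adb or, in the case of pi_newmnttype, with the onccompat -n command. The session below is only a sketch of that procedure: the value 2000, the use of /dev/kmem, and the exact adb syntax are assumptions to be verified against your own system before writing anything to the kernel.

   # Display the current value of maxcnodes in the running kernel (illustrative).
   echo "maxcnodes/D" | adb /stand/vmunix /dev/kmem

   # Write an example value (decimal 2000) into the running kernel; a '?W'
   # request instead of '/W' would patch the on-disk kernel image.
   echo "maxcnodes/W 0d2000" | adb -w /stand/vmunix /dev/kmem

   # Turn on the new mount type needed for fuser over CacheFS, as described
   # in the resolution for item 6.
   onccompat -n

   # fuser can then report processes using the CacheFS mount point (placeholder path).
   fuser -cu /cache_mnt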
7. diff(1) failed due to an invalid name length returned by NFS pathconf(2) from an HP server on regular files. HP's local file system does not support pathconf(2) on regular files, and when AIX generates a pathconf(2) call on a regular file, the server replies with a name length of zero. Resolution: When a file argument is passed in to the NFS server code rfs_pathconf(), it is now converted to the parent directory before the value is passed to the underlying file system pathconf() call. 8. MP_SPINLOCK was omitted in nfs3_do_bio() in hpnfs_vnops.c. This could have caused a panic when cred was decremented. Resolution: MP_SPINLOCK was added to nfs3_do_bio(). 9. 10.20 NFS servers always send an NFS3ERR_TOOSMALL reply as the last packet during a readdirplus call. Sun's Solstice PC-NFS client and DEC clients cannot handle this error message and fail while accessing the directory. Resolution: Code change. Remove the nents==0 code from rfs_readdirplus3(). 10. The nfs_mount() routine frees the mount info pointer and then calls nfs_inactive(), which references one of the fields in the mount info structure, causing a kernel data page fault panic. Resolution: The order of freeing the pointers was obviously wrong. The reason the kernel does not crash every time a mount times out lies in kmem_free() and FREE() and how they work: after kmem_free() returns, the memory just freed ends up on a free list, so referencing it does not cause a page fault every time we go through nfs_inactive3() after freeing mi. We page fault only if that memory gets allocated to some other process and the data is no longer valid. 11. Enhanced NFS version 3 to support the full 32k read/write block size. Resolution: Enhanced NFS version 3 to support the full 32k read/write block size. PHNE_20021: 1. Internal testing with cachefs controlled by autofs will get a 'data page fault' panic in less than 30 minutes. Resolution: In autofs, added a concurrency check on the node before using (vnode_t*)vp->v_vfsmountedhere. PHNE_19426: 1. This turned out to be a server problem where the NFS server dropped the client request. This caused the client to retry in an infinite loop, which is normal. The reason this occurred is that the server corrupted the cache data entry. Resolution: A fix has been made in the NFS server code to prevent duplicate cache table corruption. 2. The timeout table for NFS version 3 wasn't correct. This led to the client timing out too quickly. Resolution: The timeout table has been fixed in the NFS kernel to match Sun's timeout implementation of NFS. 3. Specifying both O_EXCL and O_TRUNC at file creation time caused the NFS client to pass a 0 file size to the server, which treats the file as an existing file and tries to verify the credential, which failed. Resolution: A fix has been implemented to not pass a 0 file size to the server, so it will not verify the credential of a newly created file. 4. mknod of a character device with a -1 minor number will cause the NFS client to create a FIFO file. Resolution: A new error message "Operation not supported" will be returned when the user attempts to create a file with the properties described above. 5. A connectivity problem can occur when all dynamic port numbers are in use. Resolution: Return an error when no port can be allocated. 6. The NFS kernel doesn't support a block size of less than 1k even though the mount command allows it. Resolution: The kernel mount routine has been changed to allow the use of block sizes smaller than 1k. 7. Loading an executable file or running memory-mapped applications over NFS will fail when the NFS read/write block size is not set to a 4k increment (an example mount invocation appears after item 8 below). Resolution: A check was added in the NFS mount code to ensure that the buffer page is allocated on a 4k increment. 8. AutoFS is not triggering a re-mount: Running AutoFS with a very short node timeout (e.g. "automount -t0") can cause failures to access files which are present. Resolution: Adjust the logic used to detect special requests from the "automountd" daemon. Include tests to avoid race conditions between autofs_proc and file lookups. Set a minimum hold time for autonodes so that they are not recycled immediately after being created.
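The transfer-size items above (item 11 of PHNE_20313 and item 7 of PHNE_19426) can be illustrated with a PV3 mount request that uses the full 32k read/write size; the server name and mount point are placeholders, and the sizes are kept on 4k-byte multiples as the defect text requires:

   # Example NFS version 3 mount using 32k read/write transfers (illustrative).
   mount -F nfs -o vers=3,rsize=32768,wsize=32768 nfsserver:/export/data /mnt/data

   # Client-side RPC statistics, including the retransmission counter, can be
   # reviewed afterwards with:
   nfsstat -rc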
9. AutoFS hangs even with PHNE_17200: AutoFS with a short node timeout, combined with a script using manual umounts in a tight loop, can hang autofs after 10-15 minutes. Resolution: Adjust the logic used to detect special requests from the "automountd" daemon. Include tests to avoid race conditions between autofs_proc and file lookups. Set a minimum hold time for autonodes so that they are not recycled immediately after being created. 10. cp(1) fails to an inactive direct mount: When a command is given to copy a file to the top of an unmounted direct mount point, 'cp' fails with "Operation not supported". Resolution: Add a pathconf() handler for direct mounts. 11. The mv command fails in an AutoFS direct mount: When the 'mv' command is used to rename a file from the current directory and it is a direct mount point, it will fail with error ENOSYS if the direct mount is not already active (mounted). Resolution: Add a direct mount handler to auto_access() and auto_rename(). With these changes, 'cd' triggers the mount. PHNE_18961: 1. Setting MOUNTD_VER to 2 will force the HP-UX server to start rpc.mountd servicing only version 2 of NFS. This is a special feature introduced in 10.20 for backward compatibility reasons. According to the NFS specification, a client should contact rpc.mountd on the server to see which is the highest version it supports before attempting to use that version. This wasn't the case for some of Sun's clients, which attempt to contact nfsd instead of rpc.mountd for service. This is a problem: since HP's nfsd always services both versions and rpc.mountd services only version 2, some Sun clients get confused. HP's client doesn't have this problem because it is smart enough to fall back to version 2 if version 3 is not available. Resolution: Setting MOUNTD_VER to 2 will now force the HP-UX server to start both rpc.mountd and nfsd servicing only version 2 of NFS. This solved the confusion for those Sun clients. 2. When using NFS file locking/unlocking heavily, the kernel RPC on the client system leaks memory in chunks of 32 bytes. Resolution: The memory leak is due to not deallocating a spinlock structure associated with the RPC client handle when the client handle is freed. This has been fixed. PHNE_18962: 1. Setting MOUNTD_VER to 2 will force the HP-UX server to start rpc.mountd servicing only version 2 of NFS. This is a special feature introduced in 10.20 for backward compatibility reasons. According to the NFS specification, a client should contact rpc.mountd on the server to see which is the highest version it supports before attempting to use that version. This wasn't the case for some of Sun's clients, which attempt to contact nfsd instead of rpc.mountd for service. This is a problem: since HP's nfsd always services both versions and rpc.mountd services only version 2, some Sun clients get confused. HP's client doesn't have this problem because it is smart enough to fall back to version 2 if version 3 is not available. Resolution: Setting MOUNTD_VER to 2 will now force the HP-UX server to start both rpc.mountd and nfsd servicing only version 2 of NFS. This solved the confusion for those Sun clients. 2. When using NFS file locking/unlocking heavily, the kernel RPC on the client system leaks memory in chunks of 32 bytes. Resolution: The memory leak is due to not deallocating a spinlock structure associated with the RPC client handle when the client handle is freed. This has been fixed.
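The MOUNTD_VER setting described for PHNE_18961 and PHNE_18962 lives in /etc/rc.config.d/nfsconf on the NFS server. A sketch of forcing version-2-only service follows; the restart sequence is an assumed standard procedure and briefly interrupts NFS service, so schedule it accordingly:

   # In /etc/rc.config.d/nfsconf on the server (backward-compatibility mode):
   MOUNTD_VER=2

   # Restart the server-side NFS daemons so rpc.mountd and nfsd pick up the
   # setting (assumed standard init scripts):
   /sbin/init.d/nfs.server stop
   /sbin/init.d/nfs.server start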
PHNE_17619: 1. The problem is that there are conditions in which read credentials are not set. That means that any reads performed as part of a write operation will fail because there are no read credentials available for them to pass, and all work done on the server is done as root. 2. Remount logic causes hangs. 3. Enhanced NFS version 3 to support up to a 24k read/write block size. 4. Tuned the code to perform better for small-block synchronous writes. 5. An NFS PV3 client is unable to copy large files greater than 2GB from an NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in the 10.20 ACE PV3 code. 7. Fixed a system hang/sleep in clntkudp_callit() at outbuf and not waking up. Also added statistical data collection for sleep and wakeup calls. 8. The code in nfs_remove checks if a file is busy and if it has been renamed. If the test is true then the file will be removed. This can cause a busy file that has not been renamed to be removed. The two conditions should be checked separately: first check whether the file is busy; if it is, check whether it has already been renamed, do nothing if so, and otherwise rename it. Only when a file is not busy is it removed. 9. The code path to purge the buffer caches of a stale file handle that has a large amount of delayed-write data could be too long and cause a kernel stack overflow. 10. When a root user cannot write to the file, the file attributes are changed, including the file size. The code closing the file does not check whether the user closing the file has write permission before it sets the file attributes using the new ones, thus causing the file size to be reset to zero. 11. The problem occurs due to avoiding the read of the block when multiple processes are appending data to the same file block while nfs_no_read_before_write is turned on. Because the NFS client avoids the read of the file block and initializes the buffer to NULLs, NULLs are written out. 12. The cachefs read and write path is not releasing credentials correctly. PHNE_17620: 1. The problem is that there are conditions in which read credentials are not set. That means that any reads performed as part of a write operation will fail because there are no read credentials available for them to pass, and all work done on the server is done as root. 2. Remount logic causes hangs. 3. Enhanced NFS version 3 to support up to a 24k read/write block size. 4. Tuned the code to perform better for small-block synchronous writes. 5. An NFS PV3 client is unable to copy large files greater than 2GB from an NFS mounted file system to the local file system. 6. NFS3ERR_JUKEBOX is not handled in the 10.20 ACE PV3 code. 7. Fixed a system hang/sleep in clntkudp_callit() at outbuf and not waking up. Also added statistical data collection for sleep and wakeup calls. 8. The code in nfs_remove checks if a file is busy and if it has been renamed. If the test is true then the file will be removed. This can cause a busy file that has not been renamed to be removed. The two conditions should be checked separately: first check whether the file is busy; if it is, check whether it has already been renamed, do nothing if so, and otherwise rename it. Only when a file is not busy is it removed. 9. The code path to purge the buffer caches of a stale file handle that has a large amount of delayed-write data could be too long and cause a kernel stack overflow. 10. When a root user cannot write to the file, the file attributes are changed, including the file size.
The code closing the file does not check if the user closing the file has write permission before it sets the file attribute using the new one thus causing the file size gets reset to zero. 11. The problem occurs due to avoiding the read of the block when multiple processes are appending data to the same file block when nfs_no_read_before_write is turned on. Due to NFS client avoiding the read of the file block and initializing the buffer to NULLs,NULLs are written out. 12. The cachefs read and write path is not releasing credential correctly. PHNE_16924: A buffer cache entry is being released more than once causing corruption in the buffer cache hash lists. NOTE: Patch PHNE_16924 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_16924 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_16925: A buffer cache entry is being released more than once causing corruption in the buffer cache hash lists. NOTE: Patch PHNE_16925 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_16925 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15863: 1. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a sleep could not be woken up. 2. An interrupt performing an operation on a socket on which NFS is attempting a socket buffer data drop (sbdrop) causes the socket operation to access a bad socket buffer address. 3. A read of a file that was truncated during the read operation causes an access of non-existent data. 4. With no biods running, a write of a stale file in which the writes are less than 8K bytes in length causes a recursive call stack that can get very large. 5. Before reading the entries of a directory, no check is performed to determine if the entity being read is a directory, which leads to the given syslog message. 6. The file open credentials are used by NFS and not the thread credentials. This can lead to a quota problem when the user that opened a file is not the user accessing the file. 7. When a utime() operation with a NULL timestamp is performed on an NFS file/directory, the NFS client uses the system time of the client to set file/directory modification and access times with SETATTR operations. However, WRITE operations use the system time of the server to set file/directory modification and access times. This causes inconsistencies when the client and server systems are in different time zones. 8. A SUN NFS client attempting to perform a utime() operation with a NULL timestamp on an HP NFS server file/directory is rejected due to a permissions error even though the process has write permissions but is not the file/directory owner. 9. When hierarchical maps are used, a umount of the mount points causes the autofs daemon to hang (this applies only to systems that have the ACE 2 software bundle installed). 10. Running Netscape from automounter paths causes hangs (this applies only to systems that have the ACE 2 software bundle installed). 11. A slow memory leak in rfs_readdirplus3() eventually starves the server of free memory (this applies only to systems that have the ACE 2 software bundle installed). 12. 
The calls from cachefs to do attribute lookups hang when trying to make a cachefs node (this applies only to systems that have the ACE 2 software bundle installed). 13. The nm tool is given incorrect file length data and reports a standard HPPA error (this applies only to systems that have the ACE 2 software bundle installed). 14. The system panics with data page fault on the NFS PV3 server when trying to remove a symbolic link (this applies only to systems that have the ACE 2 software bundle installed). 15. An application can lseek to EOF and then read past EOF without having an error status returned (this applies only to systems that have the ACE 2 software bundle installed). 16. The maximum file size field in the NFS PV3 protocol has been initialized incorrectly (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15863 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15863 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15864: 1. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a sleep could not be woken up. 2. An interrupt performing an operation on a socket on which NFS is attempting a socket buffer data drop (sbdrop) causes the socket operation to access a bad socket buffer address. 3. A read of a file that was truncated during the read operation causes an access of non-existent data. 4. With no biods running, a write of a stale file in which the writes are less than 8K bytes in length causes a recursive call stack that can get very large. 5. Before reading the entries of a directory, no check is performed to determine if the entity being read is a directory, which leads to the given syslog message. 6. The file open credentials are used by NFS and not the thread credentials. This can lead to a quota problem when the user that opened a file is not the user accessing the file. 7. When a utime() operation with a NULL timestamp is performed on an NFS file/directory, the NFS client uses the system time of the client to set file/directory modification and access times with SETATTR operations. However, WRITE operations use the system time of the server to set file/directory modification and access times. This causes inconsistencies when the client and server systems are in different time zones. 8. A SUN NFS client attempting to perform a utime() operation with a NULL timestamp on an HP NFS server file/directory is rejected due to a permissions error even though the process has write permissions but is not the file/directory owner. 9. When hierarchical maps are used, a umount of the mount points causes the autofs daemon to hang (this applies only to systems that have the ACE 2 software bundle installed). 10. Running Netscape from automounter paths causes hangs (this applies only to systems that have the ACE 2 software bundle installed). 11. A slow memory leak in rfs_readdirplus3() eventually starves the server of free memory (this applies only to systems that have the ACE 2 software bundle installed). 12. The calls from cachefs to do attribute lookups hang when trying to make a cachefs node (this applies only to systems that have the ACE 2 software bundle installed). 13. 
The nm tool is given incorrect file length data and reports a standard HPPA error (this applies only to systems that have the ACE 2 software bundle installed). 14. The system panics with data page fault on the NFS PV3 server when trying to remove a symbolic link (this applies only to systems that have the ACE 2 software bundle installed). 15. An application can lseek to EOF and then read past EOF without having an error status returned (this applies only to systems that have the ACE 2 software bundle installed). 16. The maximum file size field in the NFS PV3 protocol has been initialized incorrectly (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15864 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15864 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_15041: 1. NFS performance over 100BT is poor. 2. Autofs panics system with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Autofs fails to work with 9.X archived directory path libraries (this applies only to systems that have the ACE 2 software bundle installed). 4. Autofs causes swlist to fail (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15041 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_15041 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_15042: 1. NFS performance over 100BT is poor. 2. Autofs panics system with data page fault (this applies only to systems that have the ACE 2 software bundle installed). 3. Autofs fails to work with 9.X archived directory path libraries (this applies only to systems that have the ACE 2 software bundle installed). 4. Autofs causes swlist to fail (this applies only to systems that have the ACE 2 software bundle installed). NOTE: Patch PHNE_15042 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_15042 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_14071: 1. An uninitialized kernel variable on an NFS server causes an address to be decremented by one and thus leaves it not pointing to a word-aligned area. 2. An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a loop cannot be terminated. 3. A lack of synchronization within the RPC kernel layer causes an access of a kernel variable after its memory has already been freed. 4. The NFS server allows access of a previously released vnode when a file lock is unblocked. NOTE: Patch PHNE_14071 installs a patch for the networking ACE 2 software bundle (PHNE_13833) only if that bundle has been installed on the system. Otherwise, patch PHNE_14071 installs a patch for the standard release and its patches (represented by PHNE_13823). PHNE_14072: 1. An uninitialized kernel variable on an NFS server causes an address to be decremented by one and thus leaves it not pointing to a word-aligned area. 2. 
An access of a kernel variable on an NFS client was not protected by a spinlock and thus conflicted with other accesses of the same variable causing the integrity of that variable to be compromised and leading to a case where a loop cannot be terminated. 3. A lack of synchronization within the RPC kernel layer causes an access of a kernel variable after its memory has already been freed. 4. The NFS server allows access of a previously released vnode when a file lock is unblocked. NOTE: Patch PHNE_14072 installs a patch for the networking ACE 2 software bundle (PHNE_13834) only if that bundle has been installed on the system. Otherwise, patch PHNE_14072 installs a patch for the standard release and its patches (represented by PHNE_13824). PHNE_13833: New functionality to support networking features in 10.20. PHNE_13834: New functionality to support networking features in 10.20. PHNE_13823: 1. Processes within the RPC kernel layer are not releasing a lock which is needed by other NFS client processes to synchronize IO requests (which requires a flush of all outstanding client to server IO). 2. A spinlock is held by a kernel RPC process when it tries to acquire a beta semaphore. PHNE_13824: 1. Processes within the RPC kernel layer are not releasing a lock which is needed by other NFS client processes to synchronize IO requests (which requires a flush of all outstanding client to server IO). 2. A spinlock is held by a kernel RPC process when it tries to acquire a beta semaphore. PHNE_13668: The NFS client prevents root from opening a file on the server. It will allow file creation, but not IO to an existing file. PHNE_13669: The NFS client prevents root from opening a file on the server. It will allow file creation, but not IO to an existing file. PHNE_13235: 1. NFS Clients may send serial IO requests out of order, causing performance problems for JFS on the Server. 2. NFS writes to a full disk used as a swap device will return an error which results in a call to panic() from nfs_prealloc(). 3. When retransmitting, the XID may not be properly reinitialized, allowing data corruption in the form of null-valued blocks of 8192 bytes (or less). PHNE_13236: 1. NFS Clients may send serial IO requests out of order, causing performance problems for JFS on the Server. 2. NFS writes to a full disk used as a swap device will return an error which results in a call to panic() from nfs_prealloc(). 3. When retransmitting, the XID may not be properly reinitialized, allowing data corruption in the form of null-valued blocks of 8192 bytes (or less). PHNE_12427: The reference count of the exported entry is not managed correctly, and may remain greater than 0 when unused. PHNE_12428: The reference count of the exported entry is not managed correctly, and may remain greater than 0 when unused. PHNE_11386: Hang in clnt_kudp.o PHNE_11387: Hang in clnt_kudp.o PHNE_11008: 1. Error codes kept in the rnode for an NFS client's file may get overwritten, and therefore not reported back to the caller when the file is closed. 2. The NFS server renaming procedures do not check for differing VxFS file systems when asking for a rename, which will cause a panic down in VxFS. 3. The server authorization program does not properly check for anonymous access when user IDs of -2 are used. 4. The netisr callout function did not protect against a race condition. null-valued blocks of 8192 bytes (or less). 5. 
The system does not initialize the vnode attributes when it sees a file size which is too large for the 10.20 file system, and returns an error. The server is making the directory anyway, with uninitialized (000) attributes. PHNE_11009: 1. Error codes kept in the rnode for an NFS client's file may get overwritten, and therefore not reported back to the caller when the file is closed. 2. The NFS server renaming procedures do not check for differing VxFS file systems when asking for a rename, which will cause a panic down in VxFS. 3. The server authorization program does not properly check for anonymous access when user IDs of -2 are used. 4. The netisr callout function did not protect against a race condition. PHNE_9864: The system does not initialize the vnode attributes when it sees a file size which is too large for the 10.20 file system, and returns an error. The server is making the directory anyway, with uninitialized (000) attributes. PHKL_9155: 1. NFS write performance can be improved by doing gather writes at the server. This patch implements the NFS portion of gather writes. 2. The maximum timeout values defined in RPC were very long, and neither RPC nor NFS values matched that of SUN. PHKL_9156: 1. NFS write performance can be improved by doing gather writes at the server. This patch implements the NFS portion of gather writes. 2. The maximum timeout values defined in RPC were very long, and neither RPC nor NFS values matched that of SUN. PHKL_8544: 1. The kernel's biod support code did not sufficently protect against MP race conditions. 2. The RPC processor affinity implementation used by nfsd's was not sufficently protected against MP race conditions. 3. Incorrect usage of the dnlc purge functions. PHKL_8545: 1. The kernel's biod support code did not sufficently protect against MP race conditions. 2. The RPC processor affinity implementation used by nfsd's was not sufficently protected against MP race conditions. 3. Incorrect usage of the dnlc purge functions. SR: 5003445601 1653280412 5003433078 5003429753 5003428292 5003427963 5003425116 5003423590 5003423368 5003423111 5003419325 5003418962 5003417329 5003417071 5003406660 5003404616 5003402743 5003402677 5003398826 5003394056 5003368050 5003352534 5003344226 5003343277 5003340042 5003330894 5003327338 5003326090 5003324657 5003322370 5003321513 5003319665 5003319145 5003279927 5003279091 4701408047 4701400903 4701378117 4701351577 4701341669 4701314302 4701306837 4701306829 1653275800 1653272385 1653266577 1653249268 1653197632 1653192294 1653150599 1653146886 1653146308 1653134924 1653101337 1653281691 5003456574 1653275974 1653299602 5003456897 1653308254 5003467373 1653298828 5003458299 1653299826 5003462911 8606139585 Patch Files: /usr/conf/lib/libnfs.a /usr/conf/lib/libhp-ux.a(cachefs.o) /usr/conf/lib/libhp-ux.a(nfs.o) /usr/conf/lib/libhp-ux.a(nfs_iface.o) /usr/conf/lib/onc_debug.o /usr/conf/master.d/nfs what(1) Output: /usr/conf/lib/libnfs.a: svc_kudp.c $Date: 99/12/07 17:43:14 $ $Revision: 1.7 .112.5 $ PATCH_10.20 PHNE_20313 700/800 svc.c $Date: 98/11/13 13:37:28 $ $Revision: 1.8.112 .16 $ PATCH_10.20 PHNE_16924 kudp_fsend.c $Date: 99/08/13 15:42:23 $ $Revision: 1.4.112.3 $ PATCH_10.20 PHNE_19426 700/800 clnt_kudp.c $Date: 00/02/10 15:30:16 $ $Revision: 1 .9.112.30 $ PATCH_10.20 PHNE_20313 700/800 hpautofs.c $Date: 99/11/18 15:08:46 $ $Revision: 1. 
    auto_subr.c $Date: 99/08/24 16:05:14 $ $Revision: 1.1.112.7 $ PATCH_10.20 PHNE_19426 700/800
    auto_vnops.c $Date: 00/03/06 14:09:26 $ $Revision: 1.1.112.8 $ PATCH_10.20 PHNE_21108 700/800
    cachefs_vnops.c $Date: 00/06/07 16:10:58 $ $Revision: 1.1.112.9 $ PATCH_10.20 PHNE_21704 700/800
    cachefs_vfsops.c $Date: 99/12/06 13:52:33 $ $Revision: 1.1.112.4 $ PATCH_10.20 PHNE_20313 700/800
    cachefs_module.c $Date: 99/12/06 13:49:30 $ $Revision: 1.1.112.4 $ PATCH_10.20 PHNE_20313 700/800
    cachefs_cnode.c $Date: 99/12/06 13:44:21 $ $Revision: 1.1.112.5 $ PATCH_10.20 PHNE_20313 700/800
    hpnfs_vnops.c $Date: 00/03/16 19:35:42 $ $Revision: 1.1.112.30 $ PATCH_10.20 PHNE_21108 700/800
    nfs_vfsops3.c $Date: 00/01/07 16:41:29 $ $Revision: 1.1.112.11 $ PATCH_10.20 PHNE_20313 700/800
    nfs_vnops3.c $Date: 00/03/16 19:12:53 $ $Revision: 1.1.112.17 $ PATCH_10.20 PHNE_21108 700/800
    nfs_subr3.c $Date: 00/03/15 18:06:44 $ $Revision: 1.1.112.9 $ PATCH_10.20 PHNE_21108 700/800
    nfs_server3.c $Date: 00/03/16 16:14:16 $ $Revision: 1.1.112.9 $ PATCH_10.20 PHNE_21108 700/800
    nfs_export3.c $Date: 00/01/04 13:58:42 $ $Revision: 1.1.112.2 $ PATCH_10.20 PHNE_20313 700/800
    klm_lckmgr.c $Revision: 1.5.112.3 $
    klm_kprot.c $Revision: 1.1.112.2 $
    nfs_vfsops.c $Date: 00/01/07 16:45:56 $ $Revision: 1.1.112.9 $ PATCH_10.20 PHNE_20313 700/800
    nfs_vnops.c $Date: 00/03/16 19:37:10 $ $Revision: 1.3.112.58 $ PATCH_10.20 PHNE_21108 700/800
    nfs_subr.c $Date: 99/11/23 14:47:56 $ $Revision: 1.1.112.31 $ PATCH_10.20 PHNE_14071
    nfs_server.c $Date: 99/08/13 15:41:02 $ $Revision: 1.3.112.33 $ PATCH_10.20 PHNE_19426 700/800
    nfs_fcntl.c $Revision: 1.1.112.18 $
/usr/conf/lib/libhp-ux.a(cachefs.o): None
/usr/conf/lib/libhp-ux.a(nfs.o): None
/usr/conf/lib/onc_debug.o: None
/usr/conf/lib/libhp-ux.a(nfs_iface.o): None
/usr/conf/master.d/nfs: $Revision: 1.2.113.3 $

cksum(1) Output:
470913622 646638 /usr/conf/lib/libnfs.a
566132716 191044 /usr/conf/lib/libhp-ux.a(cachefs.o)
3631930508 166548 /usr/conf/lib/libhp-ux.a(nfs.o)
3472542863 2012 /usr/conf/lib/libhp-ux.a(nfs_iface.o)
566132716 191044 /usr/conf/lib/onc_debug.o
1421096347 4241 /usr/conf/master.d/nfs

Patch Conflicts: None

Patch Dependencies:
s700: 10.20: PHKL_16750 PHNE_17731 PHNE_18915
s800: 10.20: PHKL_16751 PHNE_17730 PHNE_18915

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHKL_8545 PHKL_8544 PHKL_9156 PHKL_9155 PHNE_9864 PHNE_11009 PHNE_11008 PHNE_11387 PHNE_11386 PHNE_12428 PHNE_12427 PHNE_13236 PHNE_13235 PHNE_13669 PHNE_13668 PHNE_13824 PHNE_13823 PHNE_13834 PHNE_13833 PHNE_14072 PHNE_14071 PHNE_15042 PHNE_15041 PHNE_15864 PHNE_15863 PHNE_16925 PHNE_16924 PHNE_17620 PHNE_17619 PHNE_18962 PHNE_18961 PHNE_19426 PHNE_20021 PHNE_20313 PHNE_20957 PHNE_21108

Equivalent Patches: None

Patch Package Size: 1250 KBytes

Installation Instructions:
Please review all instructions and the Hewlett-Packard SupportLine User Guide or your Hewlett-Packard support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties, before installing this patch.
------------------------------------------------------------
1. Back up your system before installing a patch.
2. Login as root.
3. Copy the patch to the /tmp directory.
4. Move to the /tmp directory and unshar the patch:
       cd /tmp
       sh PHNE_21704
5a. For a standalone system, run swinstall to install the patch:

       swinstall -x autoreboot=true -x match_target=true \
           -s /tmp/PHNE_21704.depot

By default swinstall will archive the original software in /var/adm/sw/patch/PHNE_21704. If you do not wish to retain a copy of the original software, you can create an empty file named /var/adm/sw/patch/PATCH_NOSAVE.

WARNING: If this file exists when a patch is installed, the patch cannot be deinstalled. Please be careful when using this feature.

It is recommended that you move the PHNE_21704.text file to /var/adm/sw/patch for future reference.

To put this patch on a magnetic tape and install from the tape drive, use the command:

       dd if=/tmp/PHNE_21704.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:

PHNE_17619:
PHNE_17620:
After installation of this patch, the NFS configuration file will have been modified to control the behavior of the system; as an ASCII file, it can be altered by the system administrator. The environment variable names and values defined by this patch (and the resulting system behavior based on those values) are as follows:

For systems not previously running NFS Version 3, either via patch installation or by installation of the ACE/HWE Networking Bundles (B6378AA or B6379AA), the NFS configuration file will now contain

       AUTOFS=0
       MOUNT_VER=2
       MOUNTD_VER=2

This indicates that only the Automounter may be used (as previously set by the AUTOMOUNT variable in this file) and that the AutoFS product will not be used. In addition, the default client protocol requested at mount time will be PV2 (MOUNT_VER=2), and only PV2 will be supported by the server (MOUNTD_VER=2).

For systems currently running NFS Version 3, either via patch installation or by installation of the ACE/HWE Networking Bundles (B6378AA or B6379AA), the NFS configuration file will contain

       AUTOFS=1
       MOUNT_VER=3
       MOUNTD_VER=3

This indicates that AutoFS will be used, if the previously defined AUTOMOUNT variable is set to 1. In addition, the default client protocol requested at mount time will be PV3, and the default protocol supported for exported file systems will be PV3. PV2 will be supported as well in both instances.

NOTE: When using AutoFS (AUTOFS=1) please verify that execute ("x") file access is turned OFF for all existing map files ("auto_master", "auto.home", etc.). Any map file which has execute access set will be interpreted by AutoFS as an executable map and not as a regular map file. Execute access to the "/etc/auto.home" file can, for example, be removed with the chmod command:

       chmod a-x /etc/auto.home
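To confirm which of the two behaviors described above a given system will pick up at boot, the variables can be inspected directly. The following is a minimal sketch, assuming the NFS configuration file referred to above is /etc/rc.config.d/nfsconf, its usual location on HP-UX 10.x; adjust the path if your installation differs.

       # Show the NFS variables this patch manages (the path below is an
       # assumption; verify where the NFS configuration file lives on
       # your system before relying on it).
       egrep '^(AUTOFS|AUTOMOUNT|MOUNT_VER|MOUNTD_VER)=' /etc/rc.config.d/nfsconf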
The system administrator may modify the NFS variables as deemed appropriate, but selecting PV3 or AutoFS should not be attempted unless all patches included in the Networking Bundle have been installed.

These patches have been listed in the April 15th DataComm Newsletter, and are also listed below (using the latest superseding patches):

In the s700 ACE Networking Bundle:
hp-ux_patches/s700_800/10.X/PHCO_16591
hp-ux_patches/s700_800/10.X/PHCO_18018
hp-ux_patches/s700_800/10.X/PHCO_14645
hp-ux_patches/s700_800/10.X/PHCO_15336
hp-ux_patches/s700_800/10.X/PHCO_18135
hp-ux_patches/s700_800/10.X/PHCO_15262
hp-ux_patches/s700_800/10.X/PHCO_15263
hp-ux_patches/s700_800/10.X/PHCO_15337
hp-ux_patches/s700_800/10.X/PHCO_16809
hp-ux_patches/s700_800/10.X/PHCO_15339
hp-ux_patches/s700_800/10.X/PHCO_15340
hp-ux_patches/s700_800/10.X/PHCO_15341
hp-ux_patches/s700_800/10.X/PHCO_16874
hp-ux_patches/s700_800/10.X/PHCO_15343
hp-ux_patches/s700_800/10.X/PHCO_15344
hp-ux_patches/s700_800/10.X/PHCO_17389
hp-ux_patches/s700_800/10.X/PHCO_10947
hp-ux_patches/s700_800/10.X/PHCO_17699
hp-ux_patches/s700/10.X/PHCO_13851
hp-ux_patches/s700/10.X/PHKL_17573
hp-ux_patches/s700/10.X/PHKL_8693
hp-ux_patches/s700/10.X/PHKL_18197
hp-ux_patches/s700/10.X/PHKL_16750
hp-ux_patches/s700/10.X/PHKL_15240
hp-ux_patches/s700/10.X/PHKL_16959
hp-ux_patches/s700/10.X/PHKL_18439
hp-ux_patches/s700/10.X/PHKL_17253
hp-ux_patches/s700/10.X/PHNE_17731
hp-ux_patches/s700/10.X/PHNE_16924
hp-ux_patches/s700/10.X/PHNE_16999
hp-ux_patches/s700_800/10.X/PHNE_17098
hp-ux_patches/s700_800/10.X/PHNE_15159
hp-ux_patches/s700_800/10.X/PHNE_16692

In the s800 HWE Networking Bundle:
hp-ux_patches/s700_800/10.X/PHCO_16591
hp-ux_patches/s700_800/10.X/PHCO_17389
hp-ux_patches/s700_800/10.X/PHCO_18018
hp-ux_patches/s700_800/10.X/PHCO_14645
hp-ux_patches/s700_800/10.X/PHCO_15336
hp-ux_patches/s700_800/10.X/PHCO_18135
hp-ux_patches/s700_800/10.X/PHCO_15262
hp-ux_patches/s700_800/10.X/PHCO_15263
hp-ux_patches/s700_800/10.X/PHCO_15337
hp-ux_patches/s700_800/10.X/PHCO_15344
hp-ux_patches/s700_800/10.X/PHCO_16809
hp-ux_patches/s700_800/10.X/PHCO_15339
hp-ux_patches/s700_800/10.X/PHCO_15340
hp-ux_patches/s700_800/10.X/PHCO_15341
hp-ux_patches/s700_800/10.X/PHCO_16874
hp-ux_patches/s700_800/10.X/PHCO_15343
hp-ux_patches/s700_800/10.X/PHCO_17699
hp-ux_patches/s700_800/10.X/PHCO_10947
hp-ux_patches/s800/10.X/PHCO_14016
hp-ux_patches/s800/10.X/PHKL_17574
hp-ux_patches/s800/10.X/PHKL_8694
hp-ux_patches/s800/10.X/PHKL_16751
hp-ux_patches/s800/10.X/PHKL_18198
hp-ux_patches/s800/10.X/PHKL_15247
hp-ux_patches/s800/10.X/PHKL_18440
hp-ux_patches/s800/10.X/PHKL_16957
hp-ux_patches/s800/10.X/PHKL_17254
hp-ux_patches/s800/10.X/PHNE_17730
hp-ux_patches/s800/10.X/PHNE_16925
hp-ux_patches/s700_800/10.X/PHNE_17098
hp-ux_patches/s700_800/10.X/PHNE_15159
hp-ux_patches/s700_800/10.X/PHNE_16692
hp-ux_patches/s800/10.X/PHNE_18174

-------------------
A performance enhancement originally introduced in PHNE_15863/4 addressed a problem where an NFS client sent READ calls over the wire prior to sending WRITEs, even in cases where the READ was not necessary. Part of the fix for this problem was to introduce a new kernel variable called "nfs_no_read_before_write". By default, the no_read_before_write behavior, which avoids redundant read requests to an NFS server when appending to a file, is on. To turn it off, the system manager must run the following adb command:

       echo "nfs_no_read_before_write/W 0" | \
           adb -k -w /stand/vmunix /dev/mem

One good reason to turn this flag off is when multiple processes on a single client write to the same file over an NFS mount point without holding a lock.
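Before changing the flag, it can be useful to confirm its current setting. The following is a minimal sketch, assuming the standard adb(1) display syntax ("/D" prints the word at the named symbol in decimal); the first command only reads the running kernel, and the second is the inverse of the command shown above for re-enabling the behavior.

       # Display the current value of the flag (1 = on, 0 = off).
       echo "nfs_no_read_before_write/D" | adb -k /stand/vmunix /dev/mem

       # Turn the no_read_before_write behavior back on after disabling it.
       echo "nfs_no_read_before_write/W 1" | \
           adb -k -w /stand/vmunix /dev/mem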
--------------------------------------------------------
The following is a useful process for applying more than one patch while requiring only a single reboot after the final patch installation:

1) Copy the individual patch depots into /tmp.
2) Make a new directory to contain the set of patches:
       mkdir /tmp/DEPOT     # For example
3) For each patch "PHKL_xxxx", copy it into the common depot:
       swcopy -s /tmp/PHKL_xxxx.depot \* @ /tmp/DEPOT
4) Install the combined depot with a single swinstall run:
       swinstall -x match_target=true -x autoreboot=true \
           -s /tmp/DEPOT
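For convenience, the four steps above can be combined into a short shell sequence. This is only a sketch: the patch names in the list are placeholders taken from patches mentioned elsewhere in this document, so substitute the set of depots you actually placed in /tmp.

       # Build a combined depot and install it with a single reboot at the end.
       mkdir /tmp/DEPOT
       # Placeholder patch list; replace with your own depots.
       for p in PHKL_16750 PHNE_17731 PHNE_18915
       do
           swcopy -s /tmp/$p.depot \* @ /tmp/DEPOT
       done
       swinstall -x match_target=true -x autoreboot=true -s /tmp/DEPOT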