This errata supplements the Oracle8i™ Release Notes for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel). Oracle8i has been certified to run on NonStop Clusters (NSC) running SCO UnixWare 7 Release 7.1.1. Its contents supplement or supersede corresponding information found in the Oracle8i Installation Guide Release 8.1.5 and above for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel).
Oracle8i Release 8.1.5 and above for SCO UnixWare supports both UnixWare and SCO UnixWare 7 Release 7.1.1 NonStop Clusters. The same version of Oracle works on both systems. The following cluster-specific information applies only if you run Oracle8i on SCO UnixWare 7 Release 7.1.1 NonStop Clusters (NSC).
"/proc####### unable to open file"To avoid this problem, Oracle recommends performing one of the following procedures. Before installing Oracle8i Release 8.1.5 on a NonStop Clusters machine:
Download the "JRE patch for NonStop Clusters and Oracle8i Release 8.1.5" from http://oracle.com/support into the /tmp directory of your computer.
./jfix.sh install
This updates your installation of Oracle8i Release 8.1.5 with the JRE patch.
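Taken together, the patch procedure might look like the following sequence. The archive and directory names (jre_nsc_patch.tar, jre_nsc_patch) are placeholders, since the exact name of the downloaded file is not given here:

cd /tmp                      # directory the patch was downloaded into
tar xvf jre_nsc_patch.tar    # hypothetical archive name; extract the patch
cd jre_nsc_patch             # hypothetical directory created by the extraction
./jfix.sh install            # run the patch installation script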
"onnode -p <node number>"Alternatively, add the following syntax to the beginning of the .profile file of the Oracle account to automate pinning the shell to the same node as the file system. You can change the "where/" to follow any file system.
Instructions | Meaning |
---|---|
FS_NODE=`/sbin/where -n /` | #File system is local to FS_NODE |
SH_NODE=`node_self` | #Shell is on SH_NODE |
trap "" 35 | #Ignore migrate signal ("pin to current node") |
if [ $FS_NODE -ne $SH_NODE ]; then exec /sbin/onnode -p $FS_NODE $SHELL; fi | #If not on the proper node, re-execute the shell on the proper node and pin it there; the remainder of the .profile initialization follows |
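Assembled as a contiguous fragment at the top of the Oracle account's .profile, the table above corresponds to the following sketch (it uses the NSC utilities /sbin/where, node_self, and /sbin/onnode shown above, and follows the root file system):

FS_NODE=`/sbin/where -n /`    # node the file system is local to
SH_NODE=`node_self`           # node the shell is currently running on
trap "" 35                    # ignore the migrate signal (pin to the current node)
if [ $FS_NODE -ne $SH_NODE ]
then
    exec /sbin/onnode -p $FS_NODE $SHELL    # re-execute the shell on the proper node and pin it there
fi
# remainder of the .profile initialization here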
Oracle client processes can be distributed across the cluster. However, ensure that the shadow processes start on the same node as the server. This does not apply when using the default BEQ protocol, since the client forks a shadow process on the same node as the client.
Configuring Oracle for multi-threaded servers prevents forking on the same node as the client since the server daemons control the shadow processes. The IPC and TCP protocols cause the shadow processes to be spawned on the same node as the server daemons. This optimizes your application for maximum scaling across the cluster. Make sure that all the shadow processes (with process names oracle<$ORACLE_SID>) run on the same node as the server daemons.
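Because NonStop Clusters presents a single system image, a standard process listing typically shows processes from every node. As a quick check that the expected shadow processes exist for an instance, something like the following can be used (assuming ORACLE_SID is set in the environment):

ps -ef | grep "oracle$ORACLE_SID" | grep -v grep    # list shadow processes named oracle<$ORACLE_SID>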
The disks that contain Oracle distribution binaries and database files should be mounted on the same node as the server processes. To maximize database availability, put Oracle distribution binaries and Oracle data files on a failover file system using dual hosted RAID or Cross Node Mirroring.
The Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) recommends installation of the following SCO PTFs:
See the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) and Oracle8i Administrator's Reference Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) for more information. This information also applies to NSC.
Hardware Item | SCO UnixWare Requirement |
---|---|
CPU | An Intel-based system. See UnixWare documentation for a list of supported hardware systems. |
Memory | Oracle Corporation recommends a minimum of 64MB RAM. If you use the ConText cartridge, Oracle recommends 128MB. |
Swap Space | Three times the amount of RAM is recommended. |
Disk Space | At least 750MB is required when installing the entire Oracle8i Server distribution. Less space is required if installing only a subset of the available products. If you use an OFA-compliant model, at least four devices are required: one for the Oracle software distribution and three for creating an OFA-compliant database. Note: Oracle Corporation recommends that disk space be spread across several smaller drives, rather than fewer larger ones, for improved performance and fault tolerance. |
CD-ROM Device | A RockRidge format CD-ROM drive supported by UnixWare. |
Ethernet Controller | An Ethernet card supported by UnixWare. |
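The swap and disk figures in the table above can be checked from the command line before installation; a minimal sketch using standard UnixWare utilities (exact output formats vary by release):

swap -l    # configured swap areas and their sizes
df -k      # free space, in KB, on candidate Oracle file systems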
See the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) and Oracle8i Administrator's Reference Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) for more information. This information also applies to NSC.
Store Oracle data files and distribution binaries on file systems mounted with the chard option, as described in Configuration.
If you use raw data files, the Oracle instance must run on the node where the disks attach. For example, if you use a RAID box attached to nodes 1 and 2, then Oracle must run on node 1 or node 2, depending on where you mount the raw devices. If the Oracle instance runs on a node other than 1 or 2, Oracle displays I/O errors during failover. This is not a problem for Oracle data files mounted on UNIX file systems with the chard option, since Oracle I/O is blocked during failover. Do not use raw devices if you are using Cross Node Mirroring.
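To confirm which node a given Oracle file system is local to before starting the instance, the /sbin/where utility shown earlier can be used; /u01/oradata below is only an example mount point:

/sbin/where -n /u01/oradata    # prints the number of the node the file system is local to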
Set the read permissions on /dev/kmem:
chmod +r /dev/kmem
Make a link from metreg.data to the node on which Oracle is running. For example, if Oracle is running on node 1:
ln -s /var/adm/metreg/node1.data /var/adm/metreg.data
Enterprise Manager performance statistics then reflect performance data for the specific node instead of the entire cluster. For more information on cluster performance statistics, see the sar command.
dbassist /seedloc \
/mnt/stage/Components/oracle/rdbms/8.1.5.0.0/1/DataFiles/Expanded/NA
Copyright © 2000. Oracle Corporation. All Rights Reserved.
Oracle is a registered trademark, and Oracle8, Oracle8i, Oracle DataGatherer, and Oracle Enterprise Manager are trademarks or registered trademarks of Oracle Corporation. All other company or product names mentioned are used for identification purposes only and may be trademarks of their respective owners.