Oracle8i for NonStop Clusters Errata for SCO UnixWare 7

June 2000


This errata supplements the Oracle8i™ Release Notes for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel). Oracle8i has been certified to run on NonStop Clusters (NSC) under SCO UnixWare 7 Release 7.1.1. The contents of this errata supplement or supersede the corresponding information in the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) and above.

Oracle8i Release 8.1.5 and above for SCO UnixWare supports both UnixWare and SCO UnixWare 7 Release 7.1.1 NonStop Clusters; the same version of Oracle works on both systems. The following cluster-specific information applies only if you run Oracle8i on SCO UnixWare 7 Release 7.1.1 NonStop Clusters (NSC).

Configuration

Recommendation for NonStop Clusters and Oracle8i Release 8.1.5

When you install Oracle8i Release 8.1.5 on a NonStop Clusters machine, the Process Identification Number (PID), the distinct number the operating system assigns to each running process, can exceed six digits. The Java Runtime Environment (JRE) cannot handle PIDs longer than six digits, so if a process related to the installation of Oracle8i receives a PID longer than six digits, the installation fails. The error appears as:
  "/proc####### unable to open file"
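As a quick sanity check, you can inspect the length of the current shell's PID before launching the installer. This is a sketch only, assuming a POSIX-compatible shell (the ${#var} length expansion is not available in very old Bourne shells, where expr or wc can substitute):

```shell
# Sketch: warn if this shell's PID already exceeds six digits,
# the limit the Oracle8i installer's JRE can handle.
pid=$$
if [ ${#pid} -gt 6 ]; then
    echo "Warning: PID $pid has more than six digits;"
    echo "the Oracle8i installer JRE may fail to open its /proc entry."
fi
```

A six-digit PID on the current shell does not guarantee the installer's own processes stay under the limit, so the JRE patch below is still the recommended fix.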
To avoid this problem, Oracle recommends performing the following procedures before installing Oracle8i Release 8.1.5 on a NonStop Clusters machine:

Download the JRE patch file from the Metalink website

  1. Go to the Oracle Metalink website:
      http://oracle.com/support
    
    Download the "JRE patch for NonStop Clusters and Oracle8i Release 8.1.5" into the /tmp directory of your computer.

  2. Follow the instructions on the website to extract the patch files and ensure that you save the files in the /tmp directory.

Install the JRE patch

Perform the following steps, depending on whether you have already installed Oracle8i Release 8.1.5.

Pin to a Single Node

Pin all database server processes of a single Oracle Server instance to a single node. Write-intensive System V Interprocess Communication (IPC) performs optimally on a single node. The following command pins a process and all of its children to a single node:
  onnode -p <node number>
Alternatively, add the following lines to the beginning of the .profile file of the Oracle account to automatically pin the shell to the same node as the file system. You can change the "/" argument to /sbin/where to follow any file system.

  FS_NODE=`/sbin/where -n /`    # File system is local to FS_NODE
  SH_NODE=`node_self`           # Shell is on SH_NODE
  trap "" 35                    # Ignore the migrate signal ("pin to current node")
  if [ $FS_NODE -ne $SH_NODE ]
  then
      # If not on the proper node, re-execute the shell
      # on the proper node and pin it there
      exec /sbin/onnode -p $FS_NODE $SHELL
  fi
  # remainder of .profile initialization here

Oracle client processes can be distributed across the cluster. However, ensure that the shadow processes start on the same node as the server. This requirement does not apply when using the default BEQ protocol, since the client forks its shadow process on the same node as the client.

Configuring Oracle for multi-threaded servers prevents forking on the same node as the client since the server daemons control the shadow processes. The IPC and TCP protocols cause the shadow processes to be spawned on the same node as the server daemons. This optimizes your application for maximum scaling across the cluster. Make sure that all the shadow processes (with process names oracle<$ORACLE_SID>) run on the same node as the server daemons.
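To see which shadow processes belong to the instance, a standard process listing can be filtered on the oracle<$ORACLE_SID> name. This is a generic sketch using only standard ps and grep; which node each process is running on must still be checked with the cluster's own tools:

```shell
# Sketch: list the shadow processes for the current instance by name.
# ORACLE_SID is assumed to be set in the Oracle account's environment.
ps -ef | grep "oracle${ORACLE_SID}" | grep -v grep
```

The trailing grep -v grep drops the grep command itself from the listing, which would otherwise match its own pattern.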

The disks that contain Oracle distribution binaries and database files should be mounted on the same node as the server processes. To maximize database availability, put Oracle distribution binaries and Oracle data files on a failover file system using dual hosted RAID or Cross Node Mirroring.

Mounting with the chard Option

Store Oracle data files and distribution binaries on file systems mounted with the chard option, which is the default. With chard, I/O is blocked until failover happens. If Oracle is running on a node different from the failover node, it continues to operate through the file system failover process with only a pause in operation.
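For illustration, a mount using the chard option might look like the following sketch. The device path, mount point, and file system type are placeholders, and an SVR4-style mount command line is assumed; consult the NSC documentation for the exact syntax on your system:

```shell
# Sketch: mount an Oracle data file system with the chard option
# (hypothetical device and mount point; fstype is a placeholder).
mount -F vxfs -o chard /dev/dsk/c1t0d0s4 /u02/oradata
```

Because chard is the default, an explicit -o chard is normally redundant; it is shown here only to make the setting visible.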


Note: The csoft option causes errors to be returned to the Oracle server processes on failover and should not be used.

Software Requirements

You must install NSC Release 7.1.1 for Oracle8i Release 8.1.5. In addition, you must install the JRE large process ID fix, as described in Configuration.

The Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) recommends installation of the following SCO PTFs:

You do not have to install these patches because they are included in SCO UnixWare 7 Release 7.1.1 NonStop Clusters.

See the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) and Oracle8i Administrator's Reference Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) for more information. This information also applies to NSC.

Hardware Requirements

As described in the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel), the minimum memory configuration for Oracle on UnixWare is 32MB. However, the minimum memory per node for NSC is 64MB. Oracle recommends 128MB.

Hardware Item: SCO UnixWare Requirement

CPU: An Intel-based system. See the UnixWare documentation for a list of supported hardware systems.

Memory: Oracle Corporation recommends a minimum of 64MB of RAM. If you use the ConText cartridge, Oracle recommends 128MB.

Swap Space: Three times the amount of RAM is recommended.

Disk Space: At least 750MB is required when installing the entire Oracle8i Server distribution. Less space is required if you install only a subset of the available products. If you use an OFA-compliant model, at least four devices are required: one for the Oracle software distribution and three for creating an OFA-compliant database.

Note: Oracle Corporation recommends spreading disk space across several smaller drives rather than fewer larger ones, for improved performance and fault tolerance.

CD-ROM Device: A RockRidge-format CD-ROM drive supported by UnixWare.

Ethernet Controller: An Ethernet card supported by UnixWare.

See the Oracle8i Installation Guide Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) and Oracle8i Administrator's Reference Release 8.1.5 for Intel UNIX (DG/UX Intel, SCO UnixWare, Solaris Intel) for more information. This information also applies to NSC.

Notes

NSC provides the dbms_guard scripts to improve database availability. The dbms_guard scripts monitor the node on which the Oracle database servers are running and automatically restart the database server on a backup node if the primary node fails. See the dbms_guard man page for more information.

Store Oracle data files and distribution binaries on file systems mounted with the chard option, as described in Configuration.

If you use raw data files, the Oracle instance must run on the node to which the disks are attached. For example, if you use a RAID box attached to nodes 1 and 2, then Oracle must run on node 1 or node 2, depending on where you mount the raw devices. If the Oracle instance runs on a node other than 1 or 2, Oracle displays I/O errors during failover. This is not a problem for Oracle data files mounted on UNIX file systems with the chard option, since Oracle I/O is blocked during failover. Do not use raw devices if you are using Cross Node Mirroring.

Restrictions

Known Problems


Copyright © 2000. Oracle Corporation. All Rights Reserved.

Oracle is a registered trademark, and Oracle8, Oracle8i, Oracle DataGatherer, and Oracle Enterprise Manager are trademarks or registered trademarks of Oracle Corporation. All other company or product names mentioned are used for identification purposes only and may be trademarks of their respective owners.