Before you attempt any SCO UnixWare 7 NonStop Clusters installation and configuration tasks, be sure you understand the installation concepts described in this section, familiarize yourself with the procedures you plan to perform, and gather all the information you need before you begin.
Read the following information carefully and plan your installation accordingly:
After you perform the initial setup tasks for the installation, you can customize the cluster to suit your site by adding file systems, setting up network information, or adding devices to your cluster. For those tasks, use the information in the UnixWare documentation and the SCO UnixWare 7 NonStop Clusters System Administrator's Guide.
The root file system in a cluster is controlled by the initial root node (sometimes called the initial installation node). In the event of initial root node failure, the failover root node assumes control of the root file system. Normally the cluster boots from the initial root node; however, if that node becomes unavailable, the cluster boots from the failover root node.
The initial root node is the system on which you install UnixWare 7 and then perform the first installation of SCO UnixWare 7 NonStop Clusters software. The failover root node is the second system on which you install the SCO UnixWare 7 NonStop Clusters software. In your installation plans, note which machines are the initial and failover root nodes.
In a shared disk configuration, the root file system resides on an external disk shared by the initial and failover root nodes. In a cross-node mirroring configuration, the root file system resides on the internal disk of the initial root node and is mirrored on the internal disk of the failover node. For more information about root failover configurations, see Understanding Root File System Failover Options.
The UnixWare installation requires that you specify the size of the root file system. Calculate the size of the root file system carefully according to the steps provided in the procedure. A very large root file system size can significantly extend installation time, especially during the mirroring phase of a cross-node mirrored cluster.
For shared disk configurations, the SCO UnixWare 7 NonStop Clusters installation initially sets up the root filesystem on the internal drive of the initial installation node. Later during the installation, it moves the root filesystem to the external drive. Therefore, you cannot specify a root file system size that is larger than the internal disk of the initial installation node.
To use OSU in the future on shared disk systems, you must plan for the additional space needed. The external drive that will contain your root filesystem must be at least twice the size of the root filesystem. Additionally, that external drive must have room for any non-root filesystems you plan to store there. For example, you may have a 9G drive in your initial installation node and a 9G drive in your external array. If you plan to use OSU, the largest root filesystem you could specify would be 4.5G. However, that would leave no room for any non-root filesystems on the external drive. During installation, add up the space used for the non-root filesystems and subtract that from the total space. The remaining space is available for the root filesystem and, if you plan to use OSU, its duplicate copy.
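This arithmetic can be summarized as a quick planning check. The following Python sketch is only an illustration (the helper name and sample sizes are hypothetical, and the sketch is not part of the installation procedure); it computes the largest root filesystem you could specify on a shared external drive when OSU space is reserved.

    def max_root_fs_gb(external_drive_gb, internal_drive_gb, non_root_fs_gb, reserve_for_osu=True):
        """Largest root filesystem size (GB) you can specify during installation.

        external_drive_gb -- size of the external (shared) drive that will hold root
        internal_drive_gb -- size of the initial installation node's internal drive
        non_root_fs_gb    -- total size of non-root filesystems planned for the external drive
        reserve_for_osu   -- if True, keep room for a duplicate copy of root (OSU set-aside)
        """
        remaining = external_drive_gb - non_root_fs_gb
        if reserve_for_osu:
            remaining /= 2          # OSU needs space for a second copy of the root filesystem
        # Root is first installed on the internal drive, so it can never exceed that disk.
        return min(remaining, internal_drive_gb)

    # Example from the text: 9G internal drive, 9G external drive, OSU planned,
    # no non-root filesystems -> 4.5G maximum root filesystem.
    print(max_root_fs_gb(9, 9, 0))          # 4.5
    # With 1G of non-root filesystems on the external drive -> 4.0G maximum.
    print(max_root_fs_gb(9, 9, 1))          # 4.0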
For cross-node mirroring systems, the internal disk of the initial root node and that of the failover root node must each hold the root filesystem (and a spare copy of that filesystem if you plan to configure for OSU usage), as well as any non-root filesystems.
For more information about OSU, see Understanding Online System Upgrade (OSU) Disk Usage.
When deciding how large you want the root filesystem to be, you must balance operational needs against the time it takes to move the root filesystem (on systems with shared disk configurations). On systems where the initial installation node has a 9G internal drive, the root filesystem size should be a minimum of 2G. An average root filesystem size is around 3G. Moving a 3G root filesystem to an external drive takes 20 to 30 minutes, depending on the speed of the target drive. Software mirroring (for cross-node mirrored clusters) depends on processor speed and can sometimes take more than an hour per GB of space.
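As a rough planning aid only, you can turn those figures into a time estimate for a candidate root filesystem size. The rates in this sketch are derived from the approximate numbers quoted above, not from measurements.

    def estimate_minutes(root_fs_gb):
        """Very rough time estimates for a given root filesystem size (GB).

        Rates come from the figures quoted above: a 3G root takes roughly
        20 to 30 minutes to move to the external drive, and software
        mirroring can take more than an hour per GB.
        """
        move_low = root_fs_gb * (20 / 3)      # minutes, optimistic
        move_high = root_fs_gb * (30 / 3)     # minutes, pessimistic
        mirror_min = root_fs_gb * 60          # minutes, lower bound for mirroring
        return move_low, move_high, mirror_min

    low, high, mirror = estimate_minutes(3)
    print(f"move to external drive: {low:.0f}-{high:.0f} minutes")   # 20-30 minutes
    print(f"software mirroring: at least {mirror:.0f} minutes")      # at least 180 minutes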
Unused space on the root volumes of cross-node mirrored or external shared-root configurations is available for non-root filesystems, including /home and /home2. Do not create other filesystems during installation because they will not be available after SCO UnixWare 7 NonStop Clusters installation.
SCO UnixWare 7 NonStop Clusters installation first installs the root file system onto the internal disk of the initial installation node, then moves it to the external storage disk. Non-root file systems are not moved to the disk that contains the root file system after SCO UnixWare 7 NonStop Clusters installation.
In a cross-node mirroring system, the non-root file systems are not automatically mirrored to the internal disk of the failover node. Consequently, the other file systems are not automatically available after failover in a cross-node mirroring configuration. You must mirror the non-root file systems manually so they are available after a failover. If a file system exists on only one of the two drives and is not mirrored to the other drive, then data integrity is not guaranteed.
NOTE: Create non-root file systems only after you have installed the SCO UnixWare 7 NonStop Clusters software.
Complete the initial setup tasks for the installation, then refer to the SCO UnixWare 7 NonStop Clusters System Administrator's Guide for file system administration tasks.
Root file system failover ensures that certain critical system files are always available. The failover option you have is based on the storage system for your cluster. Note which root file system failover option you have and plan accordingly.
The following file system failover configurations rely on disks shared by the initial root node and the failover root node:
Dual-hosted SCSI and Cross-Node Mirroring options rely on Veritas software mirroring. The cross-node mirroring configuration relies on a mirrored copy of the root file system on internal disks and is normally used for a two-node cluster that uses no external storage devices. The dual-hosted SCSI configuration mirrors external disks.
Both types of root file system failover options rely on the root file system being under Veritas control and require the root file system to be encapsulated. See Root File System Encapsulation.
The Dual-hosted External SCSI system has two external SCSI array boxes connected to the initial and failover root nodes. Failover is handled by the Veritas mirroring software. The Dual-Hosted External SCSI System Diagram shows such a system:
The cross-node mirroring system consists of the initial root node and the failover root node. The root filesystem resides on an internal disk in each of these nodes. Failover is handled by the Veritas mirroring software. The Cross-Node Mirroring System Diagram shows such a system:
The root file system failover configurations rely on Veritas Volume Manager's disk mirroring feature. Clusters that use RAID storage must also have root file systems under Veritas control. As part of the installation process, you must encapsulate the root file system on your cluster when you install SCO UnixWare 7 NonStop Clusters software on the initial root node. Encapsulating the root file system brings it under Veritas control. The directions for encapsulation at installation time are included in Installing the SCO UnixWare 7 NonStop Clusters Software.
For further information about encapsulation, see Initializing the Volume Manager in the online documentation set.
Both the Veritas Volume Manager and the SCO UnixWare 7 NonStop Clusters installation program use a common method of identifying disk devices. Both identify disk devices through a physical address of the form cCbBtTdD, which has the following elements:

cC -- Where C is the number of the controller to which the disk drive is attached
bB -- Where B is the bus to which the corresponding controller is attached
tT -- Where T is the target ID of the disk on the controller
dD -- Where D is the device number of the disk on the controller

For example, the internal drive on most cluster systems is identified as c0b0t0d0. See disk(7) for more details.

SCO UnixWare 7 NonStop Clusters relies on an additional piece of information that precedes the device name and contains the node number of the node to which the device is attached. For example, an internal drive attached to node 3 is identified as n3c0b0t0d0.

NOTE: The internal (boot) drive on all cluster nodes must be configured as c0b0t0d0.
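The naming scheme described above is regular enough to check mechanically. The following Python sketch is purely illustrative (it is not part of the installation tools, and the function and field names are my own); it splits a NonStop Clusters device name into its node, controller, bus, target, and device fields.

    import re

    # nNcCbBtTdD: optional node prefix, then controller, bus, target, and device numbers.
    DEVICE_NAME = re.compile(
        r"^(?:n(?P<node>\d+))?"
        r"c(?P<controller>\d+)b(?P<bus>\d+)t(?P<target>\d+)d(?P<device>\d+)$"
    )

    def parse_device(name):
        """Return the fields of a cluster device name, or None if it does not match."""
        m = DEVICE_NAME.match(name)
        return m.groupdict() if m else None

    print(parse_device("c0b0t0d0"))
    # {'node': None, 'controller': '0', 'bus': '0', 'target': '0', 'device': '0'}
    print(parse_device("n3c0b0t0d0"))
    # {'node': '3', 'controller': '0', 'bus': '0', 'target': '0', 'device': '0'}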
Each node in your cluster can have one or more network interface cards (NICs). These NICs, which act as physical network interfaces, are each assigned an interface (host) name and a unique IP address as part of network configuration.
Additionally, the SCO UnixWare 7 NonStop Clusters software provides the Cluster Virtual IP (CVIP) facility, which allows one or more network names and IP addresses to be associated with the cluster as a whole, rather than to a particular NIC. The CVIP facility allows the cluster to appear to the network as a single system, thereby enabling cluster networking to survive the loss of a single node or its NIC.
After you have installed UnixWare on the initial installation node, you set up a NIC with the CVIP cluster name and IP address. Although you initially associate this CVIP information with a NIC in the initial installation node, during NonStop Clusters installation this information becomes the CVIP, and you then supply a separate interface name and IP address for the NIC itself.
In your planning, note the name and IP address by which you want the cluster known. Carefully fill out the installation checklists with the correct information for the NICs in your cluster.
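The distinction matters when you fill out the checklists: each NIC gets its own interface name and address, and the cluster as a whole gets the CVIP name and address. A planning sketch might look like the following Python fragment; all names and addresses in it are made up for illustration and are not defaults from the product.

    # Hypothetical planning data only; substitute your site's names and addresses.
    cluster_plan = {
        "cvip": {"name": "cluster1", "address": "10.0.5.10"},   # cluster-wide identity
        "nics": {
            "node1": {"name": "node1-eth0", "address": "10.0.5.11"},
            "node2": {"name": "node2-eth0", "address": "10.0.5.12"},
        },
    }

    # The CVIP address must not collide with any per-NIC address.
    nic_addresses = {nic["address"] for nic in cluster_plan["nics"].values()}
    assert cluster_plan["cvip"]["address"] not in nic_addresses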
If the cluster is interconnected with Ethernet rather than ServerNet SAN, you must plan for an additional subnet to act as the interconnection. The NICs you configure for the interconnection must be identical and cabled to each other using a cross-over cable or a hub. The addresses must be on the same subnet. When you set up NICs after UnixWare installation, set up the interconnection NIC first, then the CVIP. When you set up the failover node of an Ethernet-interconnected cluster, the interconnection NIC is configured as the first NIC in the node by default. For consistency, it is best to make the first NIC in each node the interconnection NIC. When you set up the interconnection NIC, you must also specify the default router for the cluster, because UnixWare sets up the default router from the first NIC you configure. For more information about installing the Ethernet-interconnected two-node cluster, see Understanding the Two-Node Cluster Interconnection.
As you install the dependent nodes as part of SCO UnixWare 7 NonStop Clusters installation, you set up each physical NIC with a name and IP address.
For easier installation, plan your network names and addresses before you begin. The installation checklists help you note and keep track of the cluster's networking information.
Later releases of the SCO UnixWare 7 NonStop Clusters software can be installed while the cluster is fully functional. To upgrade to future releases, you can install updated software onto a running copy of the SCO UnixWare 7 NonStop Clusters software by setting aside additional disk space during this installation. After you install future releases on the set-aside space, you can reboot the cluster to the new version. In this way, an upgrade affects system availability only for the extent of a reboot. Should the new software be unacceptable, you can reboot to the previously installed version.
During installation, you are prompted to set aside space for future OSU use. When you plan current file system space on the drive containing the root file system, include enough space for future OSU use. You must have room on the drive containing the root file system for a duplicate of that file system.
On the drive containing the root file system, you can have additional, non-root file systems. Subtract the non-root file system sizes from the total space on the drive. The remaining space can be used for the root file system and the OSU set-aside space. Make sure your root file system is half the remaining space or less.
For example, if you have an 8G drive, and non-root file systems take up 1G of the drive, you have 7G of remaining space to use for the root file system and the OSU set-aside space. To enable the OSU feature, you should make the root file system no larger than 3.5G.
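The same arithmetic written out, using the values from the example above (illustrative only):

    drive_gb = 8
    non_root_gb = 1
    remaining_gb = drive_gb - non_root_gb   # 7 GB left for root plus OSU set-aside
    max_root_gb = remaining_gb / 2          # 3.5 GB: leave room for a duplicate root
    print(remaining_gb, max_root_gb)        # 7 3.5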
The term split brain refers to the condition in which a cluster's root nodes cannot communicate with each other to determine which node should currently serve the root filesystem.
In dual-hosted SCSI or cross-node mirroring root failover configurations that do not employ a ServerNet SAN switch, both nodes can access the root filesystem simultaneously. In these configurations, a split-brain situation can result in a damaged or out-of-date root filesystem.
If your cluster uses a ServerNet SAN switch, the SCO UnixWare 7 NonStop Clusters software automatically detects a potential split-brain situation and avoids it based on each node's ability to communicate with the switch. However, if your two-node cluster uses a direct ServerNet connection or an Ethernet IP interconnection, you must install and configure an alternate serial communications path between the nodes to avoid a split-brain situation. In this case, during failover node installation, a dialog helps you set up your alternate communications path.
This serial cable is the Cluster Integrity cable and provides the alternate communication path. During installation, you are prompted to supply the port numbers to which the cable is attached.
Each node in a cluster must be licensed for the same UnixWare 7 Release 7.1.1 Edition, and the cluster must be licensed for the correct number of NonStop Clusters nodes. If the edition you install does not include the Online Data Manager (ODM), you will need one UnixWare 7 Disk Mirroring license for the entire cluster.
You must supply this information when you add a node to your cluster or perform an installation. Each time the information is required, you are prompted through a series of menus to enter the license information.
You will be asked for the following licensing information:
Be sure you have all licensing information available when you add a node or perform an installation.
Your two-node cluster can be interconnected with one of the following options:
With an Ethernet IP interconnection, two nodes are interconnected by standard Ethernet crossover cabling connected to identical NICs in each node or to identical dual-ported NICs in each node. The interconnection is a private network and should not be used for public network traffic or as the CVIP for the cluster. Assign the interconnection addresses from one of the following private address blocks:

192.168.0.0 - 192.168.0.255
192.168.1.0 - 192.168.1.255
192.168.2.0 - 192.168.2.255

through

192.168.254.0 - 192.168.254.255
192.168.255.0 - 192.168.255.255
The Ethernet-interconnection interfaces for a single cluster must be in the same block of addresses. For example, if the root node of an Ethernet-interconnected cluster has an interconnection address of 192.168.15.1, then the failover node must have an address in the 192.168.15.* block.
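One way to sanity-check your planned interconnection addresses is sketched below in Python. It is only an illustration, and it assumes a /24 (255.255.255.0) prefix, which matches the per-block ranges listed above.

    import ipaddress

    def same_block(addr_a, addr_b, prefix=24):
        """True if both addresses fall in the same 192.168.x.* block (a /24 network)."""
        net_a = ipaddress.ip_interface(f"{addr_a}/{prefix}").network
        net_b = ipaddress.ip_interface(f"{addr_b}/{prefix}").network
        return net_a == net_b

    print(same_block("192.168.15.1", "192.168.15.2"))   # True: valid interconnection pair
    print(same_block("192.168.15.1", "192.168.16.1"))   # False: different blocks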
When you set up the network interfaces during this procedure, you must set up the private network NIC before the public network NIC. Because the UnixWare software requires the default router to be specified for the first NIC configured, and the NonStop Clusters software requires that the interconnection NIC be configured first, you supply the default router for the cluster as a whole when you set up the private network NIC. The installation checklists and procedures note this task.
When you install the failover node of an Ethernet-interconnected cluster, you are prompted for network information in a slightly different format, which is noted in the Ethernet Interconnection Failover Node TCP/IP Information (private network) checklist. When you fill out these fields, keep the following tips in mind:
When you use Ethernet as the cluster interconnection, you must also use the Cluster Integrity (CI) cable. This cable is described in the preceding Understanding Split Brain Avoidance section.
With a direct ServerNet connection, two nodes are connected by ServerNet, but do not contain a ServerNet switch. Such a configuration must also use the CI cable. This configuration requires only a single NIC in each node for a public network connection.