Installing
the SCO UnixWare 7 NonStop Clusters Software
The installation procedure requires
several steps. Familiarize yourself with them before you begin: read
through each procedure, gather all the necessary information and media, and fill
out the installation checklists. Depending on your configuration, some checklists may
not apply to your installation.
Use the following procedures to
perform an installation:
Preparing for
Software Installation
A successful installation depends on
preparation and planning. Be sure you have read Understanding
Installation Concepts and created a plan for installation. When you are ready, perform the following
preparatory tasks:
Back Up
Important Files
If you are reinstalling a cluster,
back up important files in the root file system that contain data specific to your
system. (If you have experienced a failure that requires a reinstallation, you may not
be able to back up these files.) Skip this step if you are installing the cluster for the first time.
After the installation is complete,
you must either restore these files or merge the original version with the newly
created version. The files include but are not limited to the following:
/etc/passwd
/etc/group
/etc/hosts
/etc/vfstab
/etc/syslog.conf
/etc/sendmail.cf
/etc/inetd.conf
/etc/init.d/*
/etc/inet/* - NFS, NIS, FTP, routed, gated, and other related configuration files
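As a sketch of this backup step, the following script archives whichever of the listed files exist on the node. The archive path /var/tmp/preinstall-backup.tar is an example only, not a path from this guide; copy the archive off the node, because the installation overwrites this disk.

```shell
#!/bin/sh
# Sketch only: archive the cluster-specific configuration files listed
# above before reinstalling. The archive path is an example; move the
# archive off this node, since the install overwrites the disk.
BACKUP=/var/tmp/preinstall-backup.tar

FILES="/etc/passwd /etc/group /etc/hosts /etc/vfstab /etc/syslog.conf \
/etc/sendmail.cf /etc/inetd.conf /etc/init.d /etc/inet"

# Keep only the files and directories that exist on this system.
EXISTING=""
for f in $FILES; do
    if [ -f "$f" ] || [ -d "$f" ]; then
        EXISTING="$EXISTING $f"
    fi
done

tar cf "$BACKUP" $EXISTING
echo "Archived:$EXISTING"
```

After reinstallation, extract the archive to a scratch directory and merge each file with the newly created version rather than copying it over blindly.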
Complete the
Installation Checklists
You must have the following
information in order to install UnixWare 7 Release 7.1.1 and the SCO UnixWare 7
NonStop Clusters software successfully. Read through the
installation instructions, fill out these checklists, and keep them
on hand throughout the installation:
UnixWare License for Initial Installation Node
    License Number: ________________________
    License Code:   ________________________
    License Data:   ________________________

System, Owner, and Root Information
    Cluster name:                ________________________
    System owner name:           ________________________
    System owner login name:     ________________________
    System owner user ID number: ________________________
    System owner password:       ________________________
    Root password:               ________________________
Ethernet-Interconnection Root Node TCP/IP Information (private network NIC)
    Interface name (different from CVIP host name): ________________
    DHCP client:                    ________________________
    Domain Name:                    ________________________
    System IP Address:              ________________________
    System Netmask:                 ________________________
    Broadcast Address:              ________________________
    Default Router (use the default router for the cluster; this
        address is on the public network rather than the private):
                                    ________________________
    Domain Search:                  ________________________
    Primary DNS Nameserver Address: ________________________
    Other DNS Nameserver Address:   ________________________

CVIP TCP/IP Information
    Host name (or Node name):       ________________________
    DHCP client:                    ________________________
    Domain Name:                    ________________________
    System IP Address:              ________________________
    System Netmask:                 ________________________
    Broadcast Address:              ________________________
    Default Router:                 ________________________
    Domain Search:                  ________________________
    Primary DNS Nameserver Address: ________________________
    Other DNS Nameserver Address:   ________________________

Root Node TCP/IP Information (public network NIC)
    Host name for the node (not the CVIP name): ________________
    DHCP client:                    ________________________
    Domain Name:                    ________________________
    System IP Address:              ________________________
    System Netmask:                 ________________________
    Broadcast Address:              ________________________
    Default Router:                 ________________________
    Domain Search:                  ________________________
    Primary DNS Nameserver Address: ________________________
    Other DNS Nameserver Address:   ________________________
UnixWare Licenses for Each Dependent Node
    2nd Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    3rd Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    4th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    5th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    6th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________

Initial Root Node and Cluster Licenses (Expandable)
    2-Node Expandable License (Minimum)
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    3rd Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    4th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    5th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
    6th Node License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________

Initial Root Node or Cluster License (Non-Expandable; 2-Node ONLY)
    2-Node Non-Expandable License
        License Number: ________________________
        License Code:   ________________________
        License Data:   ________________________
Alternate Communication Path Information (for CI cable)
    Node 1 Port: ________________________
    Node 2 Port: ________________________

Failover Node TCP/IP Information (public network)
    Host name:                      ________________________
    DHCP client:                    ________________________
    Domain Name:                    ________________________
    System IP Address:              ________________________
    System Netmask:                 ________________________
    Broadcast Address:              ________________________
    Default Router:                 ________________________
    Domain Search:                  ________________________
    Primary DNS Nameserver Address: ________________________
    Other DNS Nameserver Address:   ________________________

Ethernet Interconnection Failover Node TCP/IP Information (private network)
    System IP Address (private network interconnection
        address of failover node): ________________________
    System Netmask:                ________________________
    Broadcast Address:             ________________________
    Default Router:                Leave blank
    Domain Name:                   ________________________
    Primary DNS Address:           ________________________
    Other DNS Address:             ________________________
    Other DNS Address:             ________________________
    Frame Format:                  ________________________
    Server IP Address (initial root node's private
        network interconnection address): ________________________
    Cluster Node Number (of failover node, usually 2): ________________

SNMP Responses
    System Contact:   ________________________
    System Location:  ________________________
    Community String: ________________________
    Manager Address:  ________________________
    Trap Destination: ________________________
    Enable Sets?      ________________________
Assemble
Required Components
You must have the following
components to complete the UnixWare 7 Release 7.1.1 and SCO UnixWare 7 NonStop
Clusters installations:
Required Installation Media
    SCO UnixWare 7 Release 7.1.1 media kit, containing:
        - SCO UnixWare 7 Release 7.1 Getting Started Guide
        - Two UnixWare 7 Installation Diskettes (diskettes 1 and 2)
        - Three SCO UnixWare 7 Release 7.1.1 installation CD-ROMs containing
          the UnixWare 7 Release 7.1.1 software
        - One SCO UnixWare 7 Release 7.1.1 Host Bus Adapters diskette
          containing the drivers for the system
    One SCO UnixWare 7 NonStop Clusters installation CD-ROM
        For installing the initial installation node
    Two SCO UnixWare 7 NonStop Clusters Dependent Node Boot Diskettes or
    two blank diskettes (ServerNet Clusters)
        For installing SCO UnixWare 7 NonStop Clusters software on the
        dependent nodes in a ServerNet SAN-interconnected cluster
    Four blank diskettes (Ethernet Clusters)
        For installing the failover node in an Ethernet-interconnected
        two-node cluster
Optional hardware may require
drivers that must be installed in addition to required SCO UnixWare 7 NonStop
Clusters software. If so, follow the instructions provided with each individual
hardware item to install its software.
Installing
UnixWare 7 Release 7.1.1 on the Initial Installation Node
The SCO UnixWare 7 NonStop Clusters installation begins with a UnixWare 7
Release 7.1.1 installation on the initial root node. Have your completed installation
checklists available before installing UnixWare 7 and the NonStop
Clusters software.
CAUTION: The following procedure overwrites the hard disk, including user data and DOS
partitions.
- Power on any external storage systems connected to the initial
installation node.
- Power on the initial installation node and insert the first UnixWare
installation CD-ROM into the CD drive.
After displaying hardware information, the message Starting
UnixWare... and the animated SCO logo appear.
- Choose the desired language at the Language Selection window.
- From the Welcome window, press the F10 key.
- Enter zone, locale, keyboard, and UnixWare licensing information in the
next several windows.
One or two lines at the bottom of the window give brief instructions for
completing each field. For more detailed help, press the F1 key.
Press the F10 key when you have entered the UnixWare licensing
information.
-
When you are prompted to insert your HBA diskette
into the drive, insert the non-SCO HBA diskette that came
with your hardware
(not the SCO-supplied HBA diskette) and press the F10
key.
When you are prompted to install any additional HBA diskettes, remove the
non-SCO HBA diskette, insert the SCO-supplied HBA diskette, and press the F10 key.
After you finish installing
drivers from the HBA diskettes, select Proceed with Installation,
press the F10 key, and remove the diskette.
- At the Device Configuration Utility (DCU)
window, select Do not enter the DCU (autoconfigure drivers) and
press the F10 key.
- When prompted to enter the system node name, enter the name you want to
use for the cluster as a whole, not the name of the node. Press the F10 key.
NOTE: For the Cluster Virtual IP (CVIP) feature, the system node
name is the name of the virtual interface for the cluster and is the
collective name for all the nodes in the cluster. The cluster name is not
the name of the node on which you are installing UnixWare 7 Release 7.1.1.
After installation, the cluster as a whole will be known
by the name you provide here.
- When the Installation Method window appears, use the arrow keys to select Install from CD-ROM, and
press the F10 key.
If Install from CD-ROM does not appear in the list, but a CD
drive is present in your system, the device may not be configured correctly.
Select Cancel installation and shut down, restart the installation,
and run the DCU interactively, as shown in Step 10 of the SCO UnixWare 7
Release 7.1 Getting Started Guide, which is included on the UnixWare 7
CD-ROM.
- Configure up to two detected hard disks.
- Using the Tab key, choose
the first disk, then press the F2 key for choices.
- Select Customize disk
partitions to edit the partitions table and press the F10 key.
- If necessary, modify the
disk partition table to have an active UNIX partition based on
values required for your disk's size and type. When you enter numbers
into the percent (%) column and tab out of that column, the other values
are calculated for you.
Press the F10 key when you are finished.
- Set up file systems and slices using the following information to ensure
proper cluster operation and to set aside space for future Online Software
Upgrades (OSU). The OSU feature installs the upgrade software into an
alternate root filesystem, then reboots from the alternate root. The system
down time for such an upgrade is the length of the reboot. To use OSU in the
future, you must be sure that you leave enough space on the disk for an
extra root filesystem.
- Select Customize file
systems and slices and press the F10 key.
- Set up your file systems as
shown in the following table, Recommended
File Systems Setup. Change the Type fields as indicated. Note that
the Size entries change as you tab through the table after changing the
Type fields.
Do not create /home
or /home2
filesystems now. Create them after installation is complete.
NonStop Clusters functionality requires that /tmp, /var,
and /var/tmp be set to off.
Recommended File Systems Setup

    /stand            Boot filesystem; type bfs.
    /dev/swap         Swap slice. Size calculated by system; do not decrease.
    /dev/dump         Size calculated by system based on installed memory.
    /home, /home2     Do not create until after you install the NonStop
                      Clusters software.
    /tmp              Temporary filesystem; set the type to off.
    /var, /var/tmp    Set the type to off.
    /dev/volprivate   Size calculated by system; do not change.
- To calculate OSU space, add up the space needed for all non-root filesystems.
Subtract that amount from the root filesystem size currently in the
table. Divide the result by 2 to obtain the maximum root file system size.
This size allows space for the alternate root file system required for
OSU.
Consider using a smaller amount for the root filesystem of a
cross-node mirrored system to avoid lengthy re-mirroring time,
especially with an Ethernet-interconnected system. The value for the
root filesystem should be no smaller than 1500.
Clusters with external storage are limited to the amount of space
available on the internal drive of the initial installation node.
- Verify that the space you allocated is less
than or equal to the amount of space available. If you plan to use OSU
to install later releases, verify that the available space is at least
twice the size of the space for root.
After verifying and correcting filesystem sizes as necessary, press
the F10 key when you are finished.
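The OSU sizing rule described above amounts to simple arithmetic, sketched below with hypothetical figures (a 9000 MB UNIX partition and 2000 MB of non-root filesystems; substitute your own values):

```shell
#!/bin/sh
# Sketch of the OSU space calculation, using example figures.
PARTITION_MB=9000   # hypothetical total size of the active UNIX partition
NONROOT_MB=2000     # hypothetical sum of all non-root filesystems and slices

# Maximum root size that still leaves room for the alternate root
# filesystem that an Online Software Upgrade requires.
DIFF_MB=`expr $PARTITION_MB - $NONROOT_MB`
MAX_ROOT_MB=`expr $DIFF_MB / 2`
echo "Maximum root filesystem size: $MAX_ROOT_MB MB"

# The root filesystem should be no smaller than 1500 MB.
if [ "$MAX_ROOT_MB" -lt 1500 ]; then
    echo "Too little space for OSU with a usable root filesystem"
fi
```

With these example figures the maximum root filesystem size works out to 3500 MB.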
- Choose Customize disk options and
press the F10 key.
- Using the Tab key to move among the choices and the F2 key to toggle
the value, mark all choices no except Install a new boot
sector. Set the boot sector choice yes. Press the F10 key
when done.
- When prompted to choose a system profile, customize the profile by
performing the following steps.
NOTE: Do not deselect any of the items that UnixWare has selected
by default, unless you are directed to do so.
- Use the arrow keys to
choose Customize Installation of Packages and press the F10
key.
- Use the arrow keys to choose
Core System Services and press the F10 key.
- Use the arrow keys to choose
Extended OS Utilities and press the F2 key.
- Use the arrow keys to choose
Select individual packages and press the Enter key.
- Using the arrow
and page-down keys to move
among the choices and the space bar to select them, select Kernel
Debugger and OS Multiprocessor Support (OSMP) and press
the Enter key to return to the original Select/Deselect Services window.
- Verify that State
is set to FULL for Online Virtual Disk Volume
Administration. For the UnixWare 7 Enterprise and Data Center
Editions, FULL is the default state. For all other Editions,
the state should be toggled from OFF to FULL. In this
latter case, the Volume Manager must be licensed using the License
Manager following installation of UnixWare 7.
- Use the arrow and
page-down keys to choose
SCO NetWare and press the F2 key.
- Use the arrow keys to choose
Select no packages (OFF) and press the Enter key.
- Press the F10 key to return
to the Current Selections window.
- Use the arrow keys to choose
Accept current selections and continue and press the F10 key.
- At the Choose Platform
Support Module window, use the arrow keys to choose Intel MP
Specification and press the Enter key.
- At the next screen, choose the Defer network configuration option
and press the F10 key. You will configure networking information later in
the installation process.
- In the windows that follow, enter date and time, security level, system
owner, and root (superuser) data, taking the information from the System,
Owner, and Root Information installation checklist.
Press the F10 key after you complete each screen.
Do not forget the root password. To restore a forgotten root
password, you must reinstall your system.
- View the license terms, Tab to highlight Accept, and press the F10
key to continue.
- You are prompted to continue the installation, deleting any data in the
active partition and (depending on which special disk options you chose)
possibly other partitions as well. At this point, you can:
- Continue the installation, by pressing the F10 key.
- Save your installation answers to a pre-formatted diskette by
pressing the F3 key. You can then use this diskette to quickly install
this or another UnixWare 7 system using the same responses.
- Step back through the installation to change any of your answers by
pressing the F9 key.
- After you confirm that you want to install the software on your system,
the software begins to load. Software load is a long stage of the
installation. The Installation Progress indicator shows the
progress of the individual software sets rather than the installation as a
whole.
The installation program prompts you to re-insert the HBA diskettes one
at a time. Insert the SCO-supplied HBA diskette first, followed by the
non-SCO HBA diskette. Press the Enter key after you insert each
diskette.
After the software is loaded, the operating system is rebuilt. Rebuilding
takes several minutes. You may see informational messages during this time.
- After you see a message indicating that the kernel was rebuilt
successfully, remove all media from the various drives in your system and
press the Enter key to reboot when directed.
- During the reboot, you are prompted to set up the mouse.
- When prompted, choose PS/2-compatible
Mouse and press the F10 key.
- Set the Number of
buttons to 2 and press the F10 key.
- Press any key to start the
mouse test. Verify the mouse is functional by moving the mouse and
pressing a mouse key.
If the mouse is configured successfully, the installation continues. If
it is not, you move back to the Mouse Configuration window.
- After disk 1 of the UnixWare 7 Release 7.1.1 CD-ROM set finishes
installing, you are prompted for the second UnixWare 7 Release 7.1.1
CD-ROM. Insert the second CD and press the F10 key.
Use the arrow keys to move among the choices
and the space bar to select or deselect the choices from the second CD-ROM.
Deselect Enhanced Event Logging System at the Select Products
to Install menu, and press the Enter key. This software load is also a
long stage of installation. Wait for the packages to be installed.
- When the packages from the second CD-ROM are installed,
press the F10 key and remove the CD.
You have these choices:
- Skip installing packages from the third CD by pressing the F8 key.
Installation continues with rebuilding the kernel. The system finishes
booting and displays the graphical login prompt. Continue with the next section.
- Press the F10 key to install software from the third CD-ROM. Do not
install the Merge package or the Advanced File and Print Server (AFPS)
package because they are not compatible with NonStop Clusters software.
Respond as necessary to the prompts. The system rebuilds the kernel,
reboots, and displays the graphical login prompt. Continue with the next
section.
Preparing to
Install the SCO UnixWare 7 NonStop Clusters Software
After you have installed UnixWare on the initial
installation node, you are ready to perform some preliminary tasks that must be
done before the SCO UnixWare 7 NonStop Clusters software can be installed on the
initial installation node. During these procedures, to obtain the command line
prompt from the graphical login prompt, press the Ctrl-Alt-Esc key
combination. To obtain the graphical prompt from the command line, press the
Ctrl-Alt-F1 key combination.
To prepare for installing the NonStop Clusters software, follow these
steps:
Install Any
Third-Party Software
After you have installed UnixWare 7
Release 7.1.1, add any third-party software to the initial installation node.
Follow the installation instructions provided by the vendors for those software
packages.
Install
UnixWare 7 Release 7.1.1 Updates
The UW7PTF package set contains updates
to the UnixWare 7 Release 7.1.1 operating system needed by the SCO UnixWare 7
NonStop Clusters software. Install these updates by performing the following
steps:
- Switch to the command-line
login prompt by pressing the Ctrl-Alt-Esc key combination.
- Log in as the root user to the
initial installation node.
- Insert the SCO UnixWare 7 NonStop
Clusters installation CD-ROM into the CD-ROM drive of the initial
installation node.
- Enter the following command:
pkgadd -d cdrom1 UW7PTF
- When the UW7PTF package
set finishes installing, remove the CD-ROM from the drive.
- Proceed to the next section.
Configure Network Interface Cards on
the Initial Installation Node
Configure network interface cards (NICs) on the
initial installation node and assign the cluster name and Cluster Virtual IP
(CVIP) address to one NIC.
If your cluster is interconnected using Ethernet
instead of ServerNet SAN, configure two NICs, and configure the first NIC as
the interconnect. However, when you supply the default router address for this first NIC, use the router address for the public network.
For the CVIP of both ServerNet-SAN-interconnected
clusters and Ethernet-interconnected clusters, supply the name
of the cluster and the IP address for the cluster. Do not
supply the name and address for the specific NIC. During NonStop
Clusters installation, the name and address you supply here become the CVIP and
you are prompted to enter a different name and address for this NIC.
For Ethernet-interconnected clusters, be sure to
configure the CVIP for the NIC that is not cabled to the other node.
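To make the naming distinction concrete, here is a hypothetical /etc/hosts fragment; every name and address in it is invented for illustration and is not a value from this guide:

```
# Hypothetical example only; substitute your own names and addresses.

# CVIP: the collective name and virtual address for the whole cluster,
# on the public network.
10.0.5.100    mycluster

# Per-node public addresses (not the CVIP).
10.0.5.101    node1        # initial installation node
10.0.5.102    node2        # failover node

# Private interconnect addresses (Ethernet-interconnected clusters).
192.168.1.1   node1-ic
192.168.1.2   node2-ic
```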
Refer to your installation
checklist for the networking information.
To configure NICs, perform the
following steps:
- Press the Ctrl-Alt-F1
keys to switch to the graphical login prompt. Log in as the root user.
- Open the SCOadmin utility. To do
so, click the arrow on top of the SCO emblem icon at the bottom of the
screen and select SCO Admin.
- Scroll down to
the Networking folder and double-click it. Double-click Network Configuration Manager. The Network
Configuration Manager window appears.
- From the Hardware menu,
select Add new LAN adapter. The Add new LAN adapter window
appears, containing a list of physical interfaces installed in the cluster.
- A list of NICs and available drivers is displayed. Select the appropriate
driver for the NIC.
- Click the Continue
button. The Network Driver Configuration window appears.
- Click on OK and wait
while the network drivers are added.
- When the TCP/IP protocol icon
appears, click on the Add button. The Internet Protocol
Configuration window appears.
- Supply the following information:
Ethernet-Interconnection - Configure First
    Supply an interface name as the hostname value and an IP address for
    the private subnet. For the default router, supply the default router
    for the cluster's public network connection. Supply or accept default
    values for the remaining fields.

CVIP - Configure Second
    Supply the cluster name as the hostname value, the IP address to use
    as the virtual address for the cluster, and the domain name, netmask,
    and other information for the external network connection.
    Do not supply the node name or physical NIC address. You supply
    these later during NonStop Clusters software installation.
When the adapter and protocol are installed successfully, the Configure
Network Product window appears, prompting you to confirm your input.
- Confirm your choices by clicking OK. You
return to the Network Configuration Manager's main window.
- If you configured the interconnection NIC, repeat the preceding steps to
configure the CVIP external networking interface.
- If you have set up the CVIP on a ServerNet-interconnected cluster or
have set up both NICs of an Ethernet-interconnected cluster, proceed to
the next step.
- To add Domain Name
Service information, perform the following substeps:
- From the Networking
menu item, select Client Manager. The Network Client
Manager window appears.
- Double click on DNS
client in the box labeled Configured network client
services. The DNS Client Service window appears.
- For each DNS address you
want to add, enter that address in the Nameserver search order
boxes and click on the Add button.
- When you are finished adding DNS
addresses, click the OK button. You are returned to the Network
Client Manager window.
- When you are finished configuring NICs,
exit from the submenus, and then
select Exit from the Host menu.
- Exit from the SCOadmin
utility by selecting Exit from the File menu.
- Log out from the SCO
graphical desktop by holding down the right mouse button, selecting Log
off, and pressing the Enter key to confirm.
- Continue with the next procedure
to encapsulate the root file system.
Encapsulate the
root File System
In this section you encapsulate the
root filesystem on the internal drive of the initial installation node. Encapsulation configures a disk with existing slices as a Veritas disk. Do not
install or encapsulate any other drives. If you want to create additional file
systems on the root drive, or place other drives under VxVM management, do so
after you have installed the SCO UnixWare 7 NonStop Clusters software.
To encapsulate the root filesystem,
perform the following steps:
- Use the Ctrl-Alt-Esc key combination to switch from the graphical
desktop to the console command line and enter the following command:
vxinstall
vxinstall generates a list of controllers found in the initial installation node and
describes the disk device naming conventions used by the Veritas Volume
Manager. After displaying the list of controllers found, vxinstall
instructs you to press the Enter key. Press the Enter key.
vxinstall presents more information about the differences between a quick installation
and a custom installation, then instructs you to press the Enter key. Press
the Enter key.
vxinstall displays a menu of possible operations. At the Select
an operation to perform: prompt, enter 2 to select Custom installation.
- At the Encapsulate Boot Disk [y,n,q,?] prompt, enter y. At
this point, vxinstall configures your root file
system (and all other partitions on the boot disk) for encapsulation as a
VxVM volume on the next reboot.
- At the Enter disk name for c0b0t0d0 prompt, accept the default value
of rootdisk by pressing the Enter key.
vxinstall confirms that the disk has been configured for encapsulation
and prompts you to press the Enter key to continue. Press the Enter key.
vxinstall checks for more disks on controller 0.
- If there is only 1 disk on controller 0, press the Enter key.
- If there is more than 1 disk on controller 0, vxinstall offers to
install other disks. Enter 4 to leave the other disks alone.
vxinstall checks for more controllers in the initial installation node.
- If there are no more controllers, press the Enter key.
- For each extra controller found, enter 4 to leave the disks on that
controller alone.
vxinstall prompts you to confirm that you have selected c0b0t0d0
to be encapsulated by pressing y. Press y and then the Enter key.
- Finally, vxinstall informs you that the system must be rebooted for
your changes to take effect. Press y and then the Enter key to reboot
the initial installation node.
Your system will reboot multiple
times; the exact number depends on its configuration.
Run fdisk to
Verify the Existence of a UNIX Partition
On a Dual-hosted External SCSI
system, the drives in both arrays must be initialized with a UNIX partition before you
install the SCO UnixWare 7 NonStop Clusters software. Skip
this step if your cluster uses Cross-Node Mirroring.
To determine if a drive has a UNIX
partition (and to create one if it does not), run the fdisk(1M)
command, which is interactive:
fdisk /dev/rdsk/cCbBtTdDs0
where C is the controller number, B is the bus
number, T is the SCSI target ID, and D is the SCSI logical unit number
(generally 0).
The
following example shows what fdisk
may return if a
UNIX partition already exists:
Total disk size is 4091 cylinders (4091.0 MB)
Partition Status Type Start End Length % MB
========= ====== =========== ===== ==== ====== === ======
1 Active UNIX System 0 4090 4091 100 4091.0
SELECT ONE OF THE FOLLOWING:
0. Overwrite system master boot code
1. Create a partition
2. Change Active (Boot from) partition
3. Delete a partition
4. Exit (Update disk configuration and exit)
5. Cancel (Exit without updating disk configuration)
- If a UNIX partition exists (as shown in the example), enter 5 to exit
without updating the disk partition information.
- If no UNIX partition exists, fdisk prompts you with a recommended
default partition. If that partition is a 100% UNIX System partition,
accept the recommendation by pressing y. Otherwise, press n and select
a 100% UNIX System partition.
After creating the UNIX partition, enter 4 to update the disk
configuration and exit.
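The device-name convention used by fdisk can be sketched in shell; the controller, bus, target, and LUN values below are placeholders, not values from this guide:

```shell
#!/bin/sh
# Sketch: assemble the raw device path that fdisk(1M) expects from the
# controller (C), bus (B), SCSI target ID (T), and logical unit (D).
# These values are placeholders; use those for your own drive.
C=0; B=0; T=2; D=0

DEV="/dev/rdsk/c${C}b${B}t${T}d${D}s0"
echo "$DEV"
# You would then run, interactively:  fdisk "$DEV"
```

For the example values above, the assembled path is /dev/rdsk/c0b0t2d0s0.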
Installing the
SCO UnixWare 7 NonStop Clusters Software
Carefully read through all of the steps in these procedures to avoid
errors.
After you have read the release
notes and the following section, you are ready to install the SCO UnixWare 7
NonStop Clusters software. To do so, perform the steps in the following
sections:
Installing SCO
UnixWare 7 NonStop Clusters Software on the Initial Installation Node
Use these instructions to install the SCO UnixWare
7 NonStop Clusters software on the initial installation node. Perform the SCO
UnixWare 7 NonStop Clusters installation before configuring applications or
enabling other users.
To obtain the command line prompt from the graphical login prompt, press the
Ctrl-Alt-Esc key combination. To obtain the graphical login prompt, press
the Ctrl-Alt-F1 key combination.
NOTE: If you terminate NonStop Clusters installation for any
reason, remove any partially installed packages before attempting to
re-install the NonStop Clusters software. Do not attempt to re-install over
partially installed NonStop Clusters software.
Perform the following steps:
- Log in as the root user on the command line.
- Enter the following command:
pkgadd -d cdrom1 -q NSC
- When prompted, insert the SCO UnixWare 7 NonStop Clusters
installation CD-ROM in the drive and press the Enter key.
- At the NonStop Clusters Set Installation menu, use the right-arrow key to
select optional features. When you have made your
selections, use the Tab key to move to Apply, and press
the Enter key.
Choose from the following:
- Event Processor Subsystem - Monitors event messages as they are added
to the system log, searches for specific patterns in the messages, and
takes a designated action when a pattern match is found.
- Virtual Interface Architecture - Do not install this feature on
clusters interconnected with Ethernet. The Virtual Interface (VI)
emulator provides an interface for point-to-point communications
between processes over the high-performance ServerNet System Area
Network (SAN) interconnecting the cluster nodes.
- Identify the node being installed by pressing the Enter key
to accept the default value of 1.
-
After you identify the node, the Do you want to
configure failover prompt appears. Press the Enter
key to configure
failover and use the instructions in the following section.
Configure Failover
The on-screen instructions depend on whether your cluster uses Dual-Hosted SCSI
storage or Cross-Node
Mirroring. Select the instructions appropriate for your configuration from the following:
Configuring root File System Failover for a Dual-hosted External SCSI Cluster
After you have responded yes to the Do
you want to configure failover prompt, a series of prompts appears,
listing the disk devices found on the node and instructing you to select the disk
device(s) for the root file system. Perform the
following steps to configure root file system failover on a dual-hosted external
SCSI system:
- Enter the drive number that corresponds to the first external SCSI
drive on your cluster.
- You are prompted to set aside space for OSU. Enter y if you allowed
space for OSU when you set up the disk.
- You are prompted for the number of the other node connected to the
disk selected in Step 1. Normally, this number is 2.
- You are prompted for the node number of an additional node connected
to the disk selected in Step 1. Enter 0.
- You are prompted to indicate whether the disk selected in Step 1 is a
RAID device. For dual-hosted external SCSI disk systems, enter n.
- You are prompted to indicate whether you want to enter another disk
for the root file system. For dual-hosted external SCSI disk systems,
enter y.
- You are prompted for the drive number of the second SCSI disk. Enter
the drive number.
- You are prompted to set aside OSU space on the second disk. Answer y
if you allowed space for OSU on the disk.
- You are prompted for the node number of the other node connected to
the second disk. Normally, this number is 2.
- You are prompted for the node number of an additional node connected
to the second disk. Enter 0.
- You are prompted to indicate whether the second disk is a RAID
device. Enter n.
- You are prompted to indicate whether the configuration summarized on
the screen is correct. A no response lets you respecify the
configuration information. Enter y or n.
When you confirm your
selections, messages indicate that the root file system will be moved to
the selected drive(s) when your system is ready to run SCO UnixWare 7
NonStop Clusters, and that failover configuration is now complete.
- Continue with Configure
the Cluster Interconnection on the Root Node.
Configuring
root File System Failover for a Cross-Node Mirroring Cluster
After you have responded yes to the Do you want to configure failover
prompt, a list of available disk devices appears; if you have only one disk,
you are prompted to accept that disk for cross-node mirroring.
Perform the following steps to configure root file
system failover on the cross-node mirroring system:
- If you have a single disk
device, accept the default presented to you.
If you have more than one disk device, enter the drive number that
corresponds to the internal drive of the initial installation node. That
drive appears as disk device c0b0t0d0 in the list of devices.
-
Respond y to the prompt to use OSU if you allowed enough space for it
when you set up the disk. If you did not set up space for it, enter n.
- The next prompt asks you to enter the number of the root failover node.
Press the Enter key to accept the default value of 2. Messages
indicate that the root file system will be mirrored when you install node 2
and that failover configuration is now complete.
- Information about the cluster
interconnection appears along with a prompt. Continue with the next procedure to configure the cluster interconnection.
Configure the Cluster Interconnection on the
Root Node
After you set up root filesystem failover, you are prompted to configure the
cluster interconnection as either ServerNet or Ethernet (TCP/IP).
- Configuring a ServerNet Interconnection
When you have a ServerNet card and a NIC installed, you can choose either
ServerNet as the cluster interconnection (recommended) or the NIC for TCP/IP
(Ethernet) cluster interconnection (not recommended). The installation
program notes that it has detected the ServerNet card and asks if you want
to use TCP/IP as the interconnection instead.
- Answer n to the TCP/IP prompt so that ServerNet is selected as
the cluster interconnection.
- The interconnection is configured, and a completion message is
displayed. Immediately, a CVIP message is displayed. Continue with the
following procedure to configure the cluster virtual IP address.
- Configuring an Ethernet (TCP/IP) Interconnection
When your cluster has no ServerNet card installed, the installation
program indicates that you must use the TCP/IP interconnection.
- Respond y when prompted to use TCP/IP as the interconnection.
- The installation program supplies a list of interfaces that you
configured previously. Choose the NIC that you have cabled to the second
node, and confirm your selection.
- The interconnection is configured, and a completion message is
displayed. Immediately, a CVIP message is displayed. Continue with the
following procedure to configure the cluster virtual IP address.
Configure
the Cluster Virtual IP (CVIP) Address
After you configure the cluster interconnection, the installation program
prompts you to configure the Cluster Virtual IP (CVIP) for the cluster to use
for external networking.
- When prompted to configure the CVIP, respond y.
- The NIC you previously configured with the cluster name and
CVIP address is shown. A message indicates that it will be used as the CVIP.
Press Enter to confirm the CVIP.
- When prompted, supply the physical interface name (for example, the name of
the initial installation node) and the IP address of the NIC installed in
the initial installation node.
- Continue with the following procedure to enter SNMP information and view final messages.
Supply SNMP
Information and View Final
Messages
After you finish configuring CVIP information, the installation program prompts
you for the following Simple Network Management Protocol (SNMP) information. Using the Tab key to move between fields, you
can supply information or press the Enter key to accept defaults:
- System Location
- Community String
- Manager Address
- Trap Destination
- Enable Sets (y/n)
When you are finished entering information or accepting the defaults, use the Tab key to
move to Apply and press the Enter key. The software loading
begins. Help is available from the displayed
menu.
While installing the SCO UnixWare 7
NonStop Clusters software, the installation program displays a number of
messages, informing you of the tasks it is performing, such as copying SCO
UnixWare 7 NonStop Clusters files, mirroring the root file system, updating
system files, and so on. This step can take an hour or more to
complete, depending on your system.
If the installation program changes
or replaces any previously existing system configuration files, it first saves
the original files, preserving their original directory structure, under the /nsc.0
directory, and then informs you about the changed files.
- If you have not modified system configuration
files since you completed the UnixWare 7.1.1 installation, the SCO UnixWare
7 NonStop Clusters installation changes do not affect your system's
operation, and you can ignore the messages.
- If you have changed system configuration files,
review the files in /nsc.0 and re-apply any local changes before
rebooting.
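The review step above can be sketched as a small shell scan, assuming a POSIX shell. The /nsc.0 save directory is named in the text; the SAVED_ROOT variable is introduced here purely for illustration:

```shell
# Sketch: find live configuration files that differ from the originals the
# installer saved under /nsc.0 (same paths, with the /nsc.0 prefix added).
SAVED_ROOT=${SAVED_ROOT:-/nsc.0}         # save directory named in the text
if [ -d "$SAVED_ROOT" ]; then
    find "$SAVED_ROOT" -type f | while read -r saved; do
        live=${saved#"$SAVED_ROOT"}      # /nsc.0/etc/hosts -> /etc/hosts
        if ! cmp -s "$saved" "$live"; then
            echo "review: $live (original saved at $saved)"
        fi
    done
else
    echo "no $SAVED_ROOT directory found; no files were replaced"
fi
```

Any file the scan reports should be diffed against its saved original so local changes can be re-applied before rebooting.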
When installation is completed,
messages indicate that the installation has completed successfully and instruct
you to remove media from the drives and reboot your system.
Reboot Your
System
To reboot the initial
installation node, enter the following command:
/etc/shutdown -i6 -g0 -y
NOTE: Ignore the warning message that appears during the
boot sequence stating that no available backup cache node(s) exist. After you
install the failover root node, the message no longer appears in boot sequences.
Licensing the Cluster
During UnixWare installation, you supplied one UnixWare with Mirroring Option
license for the initial installation node. Each dependent node of the cluster
must also have a UnixWare license. Additionally, the failover node and each
other dependent node require NonStop Clusters licenses, which relate to various
configurations.
You can enter the cluster's UnixWare and NonStop Clusters
licenses now. You can then skip the licensing steps when you
install the failover and dependent nodes.
Licenses include the following types:
- Two-node, non-extensible NonStop Clusters license
- Licenses two nodes only. For this procedure, enter a UnixWare license for
node 2 and the two-node, non-extensible license.
- Two-node, extensible NonStop Clusters license
- Licenses two nodes, but allows additional nodes to be added with
additional licenses. For this procedure, enter the two-node, extensible
license, each additional NonStop Clusters license, and a UnixWare license
for each dependent node in the cluster.
- Three to six node NonStop Clusters bump license
- Licenses an additional node in the cluster. These licenses work with
the two-node extensible license and should be entered after you have entered
the two-node, extensible license.
- UnixWare License
- Licenses the initial node in a cluster. You must have one UnixWare
license for each dependent node in the cluster. This
license must have the Online Data Manager (ODM) license or
Veritas Mirroring license included. The ODM and Mirroring
licenses might be separate and require a separate entry.
To enter all the licenses for the cluster now, use the following steps.
- Select the License Manager from the SCOadmin main window. The license you
entered during UnixWare installation is listed.
- Select License-->Add and fill in the license information. The
window expands as needed to expose additional fields for NonStop Clusters
Licenses.
- Add one UnixWare license for each dependent node that will
be in your cluster. Use the information in your UnixWare
Licenses for Each Dependent Node checklist.
- Add the appropriate NonStop Clusters licenses according to
your Initial Root Node and Cluster Licenses checklist.
If an error about the brand command appears, check your data for
typographical errors, fix them, and reapply the information.
Other
errors could indicate problems with the network configuration information
you supplied. Exit the License Manager, review the network information
you supplied earlier, make any corrections, and re-enter licensing
information.
- Exit the License Manager when you have added all the licenses for the
cluster.
- Exit the SCOadmin utility by selecting File-->Exit from the
main SCOadmin window.
- Log out of the SCO desktop by holding down the right mouse button,
selecting Log off, and pressing the Enter key to confirm.
Installing the
Failover and Dependent Nodes
Follow the procedure specified for the type of interconnection your cluster uses:
ServerNet Interconnection Failover and Dependent Node Installation
Use this procedure to install the ServerNet SAN-interconnected cluster's failover node and its
other dependent nodes. If you have an Ethernet-interconnected cluster, use the
procedure in Ethernet (IP) Interconnection Failover Node Installation
to install your failover node.
The following sections describe the steps necessary to install the dependent
nodes of a ServerNet-interconnected cluster:
Before you install your cluster's dependent nodes,
be sure that the initial installation node is up and running and that power to
the rest of the nodes is turned off, then continue with the following procedure to make the dependent node boot diskettes.
Make the
Dependent Node Boot Diskettes for a ServerNet SAN-Interconnected Cluster
The dependent node boot diskettes supplied in the
cluster kit are blank, pre-formatted diskettes. When you run the make_floppy
command, the SCO UnixWare 7 NonStop Clusters software creates bootable versions
of the single-system image, containing all the information you provided in
previous installation steps. You use these diskettes to install each dependent
node in your cluster.
To make the dependent node boot diskettes, perform
the following steps:
- Log in as root on the command line.
- Format two diskettes using the following command, or have two
IBM-formatted diskettes on hand:
format /dev/rdsk/f03ht
- On the initial installation node, enter the following
command:
make_floppy diskette1
- Wait while the dependent node
boot image is created.
- When prompted, place the first diskette into the disk drive
of the initial installation node and press the Enter key.
- When prompted, place the second diskette into the disk drive
of the initial installation node and press the Enter key.
- When the make_floppy
command finishes, remove diskette 2 from the drive. If you used diskettes
other than the ones that came with the SCO UnixWare 7 NonStop Clusters
software, label your diskettes so you can differentiate diskette 1
from diskette 2.
- Write protect both diskettes and
continue with the following procedure to install the software.
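The steps above can be condensed into a guarded script. The format command, the f03ht device name, and make_floppy are taken from the procedure; the run wrapper is an illustration (not part of the product) that echoes each command unless RUN=1:

```shell
# Sketch: dependent-node boot diskette creation, dry-run by default so the
# sequence can be reviewed before any diskette is touched.
run() {
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}
run format /dev/rdsk/f03ht   # format the first blank diskette
run format /dev/rdsk/f03ht   # format the second blank diskette
run make_floppy diskette1    # builds both images; prompts for each diskette
```

Run it once without RUN set to review the sequence, then with RUN=1 on the initial installation node to perform it.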
Install the
SCO UnixWare 7 NonStop Clusters Software on the Nodes
Be sure that the initial installation node is up
and running and that power to the rest of the nodes is turned off. Then, perform
the following steps on each dependent node (beginning with the root failover
node) to install the SCO UnixWare 7 NonStop Cluster software:
- Insert SCO UnixWare 7 NonStop
Clusters Dependent Node Boot Diskette 1 into the diskette drive of the node
to be installed.
- Switch to the screen for the node to be installed, so you can view
messages and interact with the node.
- Turn on the node. It boots from
diskette 1.
- When prompted, replace diskette 1
with diskette 2 and press the Enter key.
- At the prompt, enter the node number for the
dependent node you are installing. If this is the failover root node (the
first dependent node) you are installing, the node number is 2.
- Enter the node number and press
the Enter key. Wait for the software to load. An assortment of messages
appears as the software loads.
- A licensing prompt appears.
Continue with the following procedure for entering licensing information.
Verify or Enter License Information
If you licensed the cluster after UnixWare 7 installation, answer y to the
prompt and verify that both a UnixWare license and a NonStop Clusters license are
listed for each node.
If you need to enter licenses now, use the Tab key, the arrow keys, and the
Enter key to navigate the screen to add licenses as follows:
- Tab to License, press the Enter key, and use the down arrow key
to highlight Add.
- Press the Enter key.
- Enter a UnixWare license for this node, pressing the Tab key to move among
the fields, and the Enter key to apply the information.
- If you are installing the second node, which is the failover node, enter a
2-node NonStop Clusters license.
- If this node is the third or subsequent dependent node, add an additional
UnixWare license.
- If this node is the third or subsequent dependent node, add an additional NonStop Clusters
three to six node bump license.
You must have the appropriate licenses or installation terminates. If an error
about the brand
command appears, check your data for typographical errors, fix
them, and press the Enter key again. See Licensing the Cluster for additional
information.
Set Up the Dependent Node's
Disks
The first dependent node you install is the
failover root node. If the dependent node you are installing is the failover
root node of a cross-node mirrored cluster, the installation program sets up the
disks automatically. Skip this procedure and continue by setting up the alternate
communication path.
Otherwise, the setup program displays the node
disk setup screens. Perform the following steps:
- Select Disk
- Select Automatic
configuration.
- From the Node X Disk Setup
screen, select Apply changes to hard disk(s).
- Select Apply changes and exit.
Continue by setting up the alternate communication path.
Set Up the Alternate Communication Path
If your cluster does not have a ServerNet SAN switch, the dependent node
installation program prompts you to set up an alternate communication path to
use in the event of interconnection failure. This alternate path prevents the
situation in which both nodes try to act as initial root nodes. It is required
for clusters that do not use a ServerNet SAN switch; without it, the cluster
boots improperly and functions incorrectly.
Enter y to set up the path. The alternate path setup dialogue:
- Lists the available ports
- Prompts you to choose the ports for both nodes
- Prompts you to confirm your choices
- Prompts you to test the connection now.
- Displays a completion message if setup is successful or an error if it is
not.
- If the connection did not test successfully, you are prompted to test it
again with the same settings. Responding n to this prompt gives you the
opportunity to respecify the ports.
Supply the port information from your installation
checklist. Your ports may be labeled COM A and COM B. COM A is Port 1
and COM B is Port 2.
Continue dependent node
installation by setting up network information using the following procedure.
Set Up Network Information
After the setup program sets up the dependent
node's hard disk and configures your alternate communication path, it asks if
you want to set up network information for the node.
Perform the following steps to configure the
node's network information:
- Answer yes to the Do
you want to enter network information prompt. A popup window appears,
containing a list of network interface controllers (NICs) installed.
Use the arrow keys to highlight the adapter installed on this dependent
node.
- Press the F10 key to access the Hardware
menu item, press Enter, and use the arrow key to highlight Add new LAN
adapter on that menu, and press the Enter key. The Add new LAN
adapter window appears.
- Use the Tab key to select Continue
and press the Enter key. The Network Driver Configuration window
appears.
- Use the Tab key to select OK
and press the Enter key. The Add protocol window appears.
- Use the arrow keys to select TCP/IP,
use the Tab key to highlight the Add button, and press the Enter
key. At this point, the setup program calls the Network Configuration
Manager, which displays the Internet Protocol Configuration window.
- Enter the appropriate information
for the node you are installing (including the dependent node name in the Host
name field), use the Tab key to select OK and press the Enter
key. A window displays a successful completion message.
- At the next
window, use the Tab key to select OK and press the
Enter key to return to the setup program's main networking window.
- From the Hardware menu
item, select Exit. The dependent node setup program now displays a
number of messages while it installs the SCO UnixWare 7 NonStop Clusters
software on the node. When it finishes installing the software, it prompts
you to reboot the node.
- Remove any diskettes from the
diskette drive and any CD-ROMs from the CD drive, then press the Enter key
to reboot the node.
For each dependent node in your cluster, repeat
the steps beginning at Installing the SCO UnixWare
7 NonStop Clusters Software on the Nodes, leaving the nodes powered on as
you install them.
When you have installed all the cluster's
dependent nodes, remove all media from all drives in the cluster, and continue
by rebooting the cluster according to the following procedure.
Reboot the ServerNet SAN-Interconnected Cluster
After all the cluster's dependent nodes have been
installed and have joined the cluster, remove all diskettes and CD-ROMs,
and reboot the entire cluster by entering the following command from the
initial installation node:
shutdown -i6 -g0 -y
After this reboot, you can view the SCO UnixWare 7 and NonStop Clusters
documentation set from the SCO desktop by clicking the book-and-question-mark
icon and selecting SCOhelp from the menu. The documentation relies on a correct
local domain name. Once installed, the documentation can be viewed remotely
using the following URL:
http://cluster_name:457
Substitute the name of your cluster for cluster_name.
When your system has rebooted, perform
post-installation tasks described in Performing
Post-Installation Tasks.
Ethernet (IP) Interconnection Failover
Node Installation
Use this procedure to
install the
Ethernet-interconnected cluster's failover node. Ethernet-interconnected
clusters are limited to two nodes, so installation involves a single dependent
node that is the failover node. To install the failover node for an
Ethernet-interconnected cluster, use the following procedures:
Before you install your cluster's dependent nodes, be sure that the initial
installation node is up and running and that power to the rest of the nodes is
turned off. To install the failover node, continue with the following procedure to make failover node diskettes.
Make the Ethernet-Interconnected
Failover Node Diskettes
To install the failover node of an Ethernet-interconnected cluster, you must
make a set of two boot diskettes and two network installation diskettes. For
this procedure, you need four blank diskettes. To make the Ethernet interconnect
failover node diskettes, perform the following steps:
- As root on a command line, change to the directory where the boot
diskettes are created:
cd /usr/lib/drf
- Be sure that the current directory is in your PATH.
For a K-shell:
PATH=$PATH:/usr/lib/drf
export PATH
For a C-shell:
setenv PATH $PATH:/usr/lib/drf
- Create the images needed to be placed on the dependent node boot diskettes
with the following command:
make_oci_diskettes
This command creates the boot1.img and boot2.img diskette
images. When this command is finished, you are returned to the prompt.
- Insert and format the first blank diskette:
format /dev/rdsk/f03ht
- Transfer the image for the first boot diskette:
ezcp boot1.img
The command displays a completion message when it is finished. Remove the
diskette, lock it, and label it as follows:
UnixWare 7 IP Dependent Node Installation Diskette for Release 711 (1 of 2)
- Insert and format the second blank diskette:
format /dev/rdsk/f03ht
- Transfer the image for the second boot diskette:
ezcp boot2.img
The command displays a completion message when it is finished. Remove the
diskette, lock it, and label it as follows:
UnixWare 7 IP Dependent Node Installation Diskette for Release 711 (2 of 2)
- Insert and format the third blank diskette:
format /dev/rdsk/f03ht
- Create the first network installation diskette:
ezcp netinst1.img.UW711
The command displays a completion message when it is finished. Remove the
diskette, lock it, and label it as follows:
UnixWare 7 Network Installation Diskette for Release 711 (1 of 2)
- Insert and format the fourth diskette
format /dev/rdsk/f03ht
- Create the second network installation diskette:
ezcp netinst2.img.UW711
The command displays a completion message when it is finished. Remove the
diskette, lock it, and label it as follows:
UnixWare 7 Network Installation Diskette for Release 711 (2 of 2)
- Continue with the following procedure for installing the
Ethernet-interconnected failover node.
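The four format-and-copy steps above follow one pattern, so they can be sketched as a loop. The image names and the ezcp command come from the procedure; the RUN dry-run guard and the swap prompt are illustrative additions:

```shell
# Sketch: write the four diskette images listed in the procedure, one per
# freshly formatted diskette, pausing so you can swap and label each one.
IMAGES="boot1.img boot2.img netinst1.img.UW711 netinst2.img.UW711"
for img in $IMAGES; do
    if [ "${RUN:-0}" = "1" ]; then
        printf 'Insert a blank diskette for %s and press Enter: ' "$img"
        read -r _
        format /dev/rdsk/f03ht   # format the diskette
        ezcp "$img"              # write the image; remove, lock, and label it
    else
        echo "would write $img to /dev/rdsk/f03ht"
    fi
done
```

Remember to label each diskette with the exact names given in the steps above as it comes out of the drive.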
Install the SCO UnixWare 7 NonStop
Clusters Software on the Ethernet-Interconnected Failover Node
Have your installation checklists filled out and on hand, then perform the
following steps to install the SCO UnixWare 7 NonStop Cluster software:
- Switch to the failover node to see messages and interact with the node.
- Insert UnixWare 7 IP Dependent Node Installation Diskette for Release
711 (1 of 2) that you made in the preceding procedure into the primary
diskette drive of the node to be installed.
If you have more than one diskette drive, make sure the 3.5-inch drive is
the primary drive (sometimes called the boot drive). Check your computer
hardware manual if you are unsure which is the primary drive.
- Turn on the node. It boots from diskette 1. After displaying hardware
information, the message Starting UnixWare... and the animated SCO
logo appear. It may take several minutes to load the system from the
diskette.
- If prompted, choose the desired language at the Language Selection window.
-
When prompted, insert the UnixWare 7 IP Dependent Node Installation
Diskette for Release 711 (2 of 2) and press Enter. Wait for files to be
extracted. A progress indicator is displayed.
- From the Welcome window, press F10 to continue. Remove the diskette.
- Enter zone, locale, keyboard, and UnixWare licensing information in the
next several windows. If you licensed the cluster earlier, but this node's
UnixWare license does not appear, re-enter the information.
One or two lines at the bottom of the window give brief instructions for
completing each field. For more detailed help, press the F1 key.
- When the system prompts you for HBA diskettes, insert the non-SCO
Host Bus
Adapters diskette and press the F10 key.
When prompted to install any additional HBA diskettes, remove the HBA
diskette and insert the SCO-supplied HBA diskette.
After you finish installing all drivers, remove the diskette, select Proceed
with Installation and press the F10 key.
- At the Device Configuration Utility (DCU) window, select Do not enter
the DCU (autoconfigure drivers) and press the F10 key.
- When prompted to enter the system node name, enter the name you want to
use for this node in the cluster.
- When the Installation Method window appears, insert the UnixWare 7 Release
7.1.1 Installation CD-ROM (disk 1) into the CD drive of the failover node.
Do not make a selection from the list yet.
- Switch to the initial installation node, and enter the following command:
installsrv -E 1
- Switch back to the failover node. At the Installation Method window, use
the arrow keys to select Install IP cluster dependent node and
press the Enter key. The Configure Network Installation Server window
appears.
- Use the Tab key to select Configure Networking Hardware and press
the Enter key.
- A list of detected adapters is displayed. Use the arrow keys to choose Select
from the detected adapters shown above, and press the F10 key.
- Use the arrow key to select the adapter that is cabled to the other node,
and press the F10 key.
- Accept the defaults at the configuration screen by pressing the F10 key.
- When prompted, insert the UnixWare 7 Network Installation Diskette for
Release 711 (1 of 2) diskette and press the Enter key. Wait for the
utilities to load.
If prompted, insert the UnixWare 7 Network Installation Diskette for
Release 711 (2 of 2) diskette.
- At the screen presented, use the Tab key to select Configure Networking Protocol and
press the Enter key. A message is displayed as the network is configured,
then the TCP/IP configuration window appears.
- Enter the TCP/IP information for the interconnection at the prompts. Use
the information from the Ethernet Interconnection Failover Node TCP/IP
Information (private subnet) checklist.
Press the F10 key when you finish entering the information.
If you see an error message, follow its instructions to correct the
problem. You may need to use the F9 key to back up to the network
configuration dialogs.
- You are prompted to continue the installation, and a warning indicates
upcoming hard disk actions. At this point, you can:
- Continue the installation by pressing the F10 key.
- Step back through the installation to change any of your answers by
pressing the F9 key.
- After you press the F10 key to confirm that you want to install the
dependent node into the cluster, the software sets up the disk and prepares
the node to join the cluster in about five minutes.
- After you see a message indicating that the first phase of dependent node
setup is complete, remove all diskettes and CD-ROMs from the various drives
on your system and press any key to reboot.
- After the dependent node reboots and joins the cluster, more configuration
occurs and, for cross-node mirroring (CNM) clusters, the root disks are
mirrored. The system automatically reboots again. Cross-node mirroring of the
root file system can take a long time.
- After the reboot, continue with the following procedure for licensing.
Verify or Enter License Information
If you licensed the cluster after UnixWare 7 installation, answer y to the
prompt and verify that each node has a UnixWare license, and that the failover
node also has a NonStop Clusters license.
If you need to enter licenses now, use the Tab key, the arrow keys, and the
Enter key to navigate the screen to add licenses as follows:
- Use the Tab key to select License, press the Enter key, and use
the down arrow key to highlight Add.
- Press the Enter key.
- Enter a UnixWare license for this node, pressing the Tab key to move among
the fields, and the Enter key to apply the information.
- Add a 2-node NonStop Clusters license for the cluster.
- When you are finished with the licensing screen, Tab to Host and use
the arrow keys to select Exit.
- Continue with the next section to set up the alternate communication path.
You must have the appropriate licenses or installation terminates. If an
error about the brand command appears, check your data for typographical errors,
fix them, and press Enter again. See Licensing the
Cluster for additional information.
Set Up the Alternate Communication Path
The dependent node
installation program prompts you to set up an alternate communication path to
use in the event of interconnection failure. This alternate path prevents the
situation in which both nodes try to act as initial root nodes. This alternate
path is required.
Enter y to set up the path.
The alternate path setup dialogue:
- Lists the available ports
- Prompts you to choose the ports for both nodes
- Prompts you to confirm your choices
- Prompts you to test the connection now.
- Displays a completion message if setup is successful or an error if it is
not.
- If the connection did not test successfully, you are prompted to test it
again with the same settings. Responding n to this prompt gives you the
opportunity to respecify the ports.
- Continue to the next section to verify the network information.
Supply the port information from your installation
checklist. Your ports may be labeled COM A and COM B. COM
A is Port 1
and COM B is Port 2.
Add Public Networking Information
You are prompted to enter network configuration information.
Use the information from your checklist and the following steps:
- Answer yes to the Do
you want to enter network information prompt. A popup window appears,
containing a list of network interface controllers (NICs) installed.
Use the arrow keys to highlight the adapter installed on this dependent
node.
- Press the F10 key to access the Hardware
menu item, press Enter, and use the arrow key to highlight Add new LAN
adapter on that menu, and press the Enter key. The Add new LAN
adapter window appears.
- Use the Tab key to select Continue
and press the Enter key. The Network Driver Configuration window
appears.
- Use the Tab key to select OK
and press the Enter key. The Add protocol window appears.
- Use the arrow keys to select TCP/IP,
use the Tab key to highlight the Add button, and press the Enter
key. At this point, the setup program calls the Network Configuration
Manager, which displays the Internet Protocol Configuration window.
- Enter the appropriate information
for the node you are installing (including the dependent node name in the Host
name field), use the Tab key to select OK and press the Enter
key. A window displays a successful completion message.
- At the next
window, use the Tab key to select OK and press the
Enter key to return to the setup program's main networking window.
- From the Hardware menu
item, select Exit. The dependent node setup program now displays a
number of messages while it installs the SCO UnixWare 7 NonStop Clusters
software on the node. When it finishes installing the software, it prompts
you to reboot the node.
- Remove any diskettes from the
diskette drive and any CD-ROMs from the CD drive, then press the Enter key
to reboot the node.
View Final Messages and Reboot
Final messages appear and you are prompted to press the Enter key to reboot.
Press the Enter key. The system reboots and comes up in multi-user mode.
After the reboot, you can view the SCO UnixWare 7 and NonStop Clusters
documentation set from the SCO desktop by clicking the book-and-question-mark
icon and selecting SCOhelp from the menu. The documentation relies on a correct
local domain name. The documentation can be viewed remotely using the following
URL:
http://cluster_name:457
Substitute the name of your cluster for cluster_name.
Continue the installation with any post-installation tasks as necessary.
Performing
Post-Installation Tasks
After SCO UnixWare 7 NonStop
Clusters software is installed, you may need to perform various
post-installation tasks, such as restoring site-specific files.
Identify which site-specific files
you have customized for your cluster; for example, /etc/passwd
(see Back
Up Important Files). Resolve any differences between your files and files
that you created as part of the SCO UnixWare 7 NonStop Clusters installation.
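Resolving one of these differences can be sketched as a three-way merge, assuming diff3 is available on the system. The function name and example paths are illustrative; /nsc.0 holds the pre-install originals as described earlier:

```shell
# Sketch: three-way merge of a customized file back into the installed copy.
# Arguments: MINE (your backed-up customized copy), ORIG (original saved
# under /nsc.0), NEW (freshly installed version), OUT (merged result).
merge_site_file() {
    diff3 -m "$1" "$2" "$3" > "$4"   # conflicts are bracket-marked in OUT
}
# Example (illustrative paths):
# merge_site_file /backup/etc/passwd /nsc.0/etc/passwd /etc/passwd /tmp/passwd.merged
```

Review the merged output for conflict markers before copying it into place.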
Removing an
SCO UnixWare 7 NonStop Clusters Installation
Perform the steps described in this
section on the initial installation node to remove the SCO UnixWare 7 NonStop
Clusters software installation from your system.
WARNING: Once you have installed NonStop Clusters on
systems running UnixWare 7 Release 7.1.1, removing NonStop Clusters software
renders the dependent nodes unbootable. In this situation, you must reinstall
UnixWare 7 Release 7.1.1 on each dependent node to be able to use that node as a
stand-alone system.
- Verify that the initial
installation node has control of the root file system. Enter the following
command:
where /
A message indicating that the
root file system is on node 1 (the initial installation node) appears. If
this message does not appear, force the cluster to fail over to the initial
installation node. To force a failover, follow these steps:
- Run the following command
to be sure that the initial installation node is completely up:
cluster -v 1
The following output is
displayed if the node is up:
UP
If the initial installation
node is not up, reboot it, or perform whatever steps are necessary to
bring it up as a dependent node in the currently running cluster
configuration.
- Once the initial installation
node is up, turn off the power to the current root node (usually the
failover root node if the initial installation node is not the current
root node). As the failover root node goes down, the cluster
automatically fails over to the initial installation node. A message
appears on the initial installation node indicating that the failover is
complete.
- Leave the power to the
failover root node turned off.
- Power down the rest of the
dependent nodes in the cluster. Do not turn off the initial installation
node.
- Log in as root on the initial
installation node.
- Change to the root directory.
Enter the following command:
cd /
- Remove the SCO UnixWare 7 NonStop
Clusters software. To do so, enter the following commands:
pkgrm NSC
pkgrm UW7PTF
Removing the NSC
package rebuilds the UnixWare kernel
on the initial installation node. At this point, that node reverts to
UnixWare 7.1.1 and the cluster no longer exists.
- Enter the following command from the root (/) directory to restart the
initial (or previous) root node:
shutdown -i6 -y -g0
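The final removal commands above can be outlined as a dry-run script. This is an illustrative sketch only: the run helper merely prints each command, and on a real cluster you would execute the commands directly on the initial installation node after confirming with `where /` that it holds the root file system.

```shell
# Dry-run sketch of the NonStop Clusters removal sequence.
# run() only prints each command; it does not execute anything.
run() { printf '%s\n' "$*"; }

remove_nsc() {
  run cd /                  # work from the root directory
  run pkgrm NSC             # removing NSC rebuilds the UnixWare kernel
  run pkgrm UW7PTF
  run shutdown -i6 -y -g0   # restart the initial (or previous) root node
}

remove_nsc
```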
Using OSU to Upgrade NonStop Clusters Software
If you set aside space during a previous UnixWare and NonStop Clusters installation, you can upgrade that installation using OSU. OSU installs the new software into an alternate root file system and lets you boot from that alternate root.
You must have appropriate UnixWare and NSC licenses for the
nodes in your cluster before you can upgrade the NonStop
Clusters software. Use the License Manager in SCOadmin to verify that you have a UnixWare license and an NSC license for each node.
Use the following steps:
- Enter the following command to prepare the alternate root:
makenewroot
- Enter the following command to open a shell to the
alternate root:
innewroot
- Enter the following command to add the new NonStop Clusters software
from the CD-ROM to the new root:
pkgadd -d cdrom1 -q NSC
The package is added to the alternate root volume.
- When the package installation is complete, return to the original
shell by entering the following command:
exit
- Set the cluster to boot from the new root volume upon reboot
by entering the following command:
switchroot
- Reboot the cluster to enable the new software.
After the cluster boots, you can switch back to the previous version of the software by entering switchroot
again and rebooting the cluster.
The online intro_osu(1M) manual page provides details about OSU and
OSU commands.
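The OSU steps above reduce to a short command sequence. The following dry-run sketch only prints the commands (the run helper is illustrative); on a real cluster, innewroot opens an interactive shell in the alternate root, and the pkgadd and exit steps happen inside that shell rather than linearly as shown.

```shell
# Dry-run outline of the OSU upgrade; run() only prints each command.
run() { printf '%s\n' "$*"; }

osu_upgrade() {
  run makenewroot               # prepare the alternate root
  run innewroot                 # open a shell in the alternate root
  run pkgadd -d cdrom1 -q NSC   # add NSC into the alternate root
  run exit                      # leave the alternate-root shell
  run switchroot                # boot from the new root on next reboot
}

osu_upgrade
```

A second switchroot after the upgrade toggles back to the previous root, which is what makes OSU a reversible upgrade path.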
Upgrading Using the pkgadd Command
You can upgrade your NonStop Clusters software using the online
pkgadd(1M) command.
This method removes your cluster from service
for the duration of the installation.
You must have appropriate UnixWare and NSC licenses for the
nodes in your cluster before you can upgrade the NonStop
Clusters software. Use the License Manager in SCOadmin to verify that you have a UnixWare license and an NSC license for each node.
Follow these steps to upgrade the NonStop Clusters software
using pkgadd:
- Be sure that all nodes in the cluster are up and joined with the cluster
and that node 1 is the current root node.
- Log in as root.
- Bring your cluster to run level 1:
shutdown -i1
- Remove any PTFs you installed under the previous release:
pkgrm nsc1009
Substitute the name of your installed PTF package for nsc1009.
- Enter the following command to update the NonStop Clusters
software:
pkgadd -d cdrom1 -q NSC
You do not need to re-enter any failover configuration
information, networking information, or other settings.
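The in-place upgrade above can likewise be outlined as a dry-run sketch. The run helper only prints each command, and nsc1009 stands in for whatever PTF package name is actually installed on your cluster.

```shell
# Dry-run outline of the in-place pkgadd upgrade; run() only prints
# each command. Perform these steps as root on the current root node.
run() { printf '%s\n' "$*"; }

pkgadd_upgrade() {
  run shutdown -i1              # bring the cluster to run level 1
  run pkgrm nsc1009             # remove PTFs from the previous release
  run pkgadd -d cdrom1 -q NSC   # update the NonStop Clusters software
}

pkgadd_upgrade
```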