HSG80 and Tru64 V5.1 Migration questions

From: Andy Pavitt (andy@livedb.flutter.com)
Date: Tue Oct 15 2002 - 05:03:37 EDT


Your thoughts please.

Operating system version Tru64 UNIX V5.1a
Patch Kit version Patch Kit 3
ES45 Console Firmware Rev V6.2-4
KGPSA Firmware Rev 382a1
HSG Firmware Rev V8.7 ACS
SAN Switch Rev v2.6.0c

ORACLE 9i RAC

Originally we wanted to set up a stripeset consisting of 30 x 3-way mirrors. The three-way
mirrors were so I could run a BCV-style split script and remote mount the third mirrored
stripe on another box for backups (a sketch of the split follows the list below). I discovered,
much to my annoyance, that HSG80s only support :-

20 raidsets
30 raid/mirrorsets
45 raid/mirror/stripesets
6 disks per raidset
24 disks per stripeset
45 disks per striped mirror
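
For reference, the split itself was going to be driven by the controller's REDUCE
command rather than a true BCV. Roughly like this, from memory, with invented
mirrorset/disk names (the bits in parentheses are notes, not CLI syntax):

    HSG> SHOW MIRRORSETS FULL                  (record membership and state first)
    HSG> REDUCE DISK30100 DISK30200 DISK30300  (pull the third member out of every
                                                mirrorset in one command so the
                                                removed copies are consistent
                                                with each other)

The reduced members keep their data, so they can then be presented to the backup box.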

As purchasing an HSV controller is not an option, the next plan is therefore to
purchase an additional HSG80 pair and run our "Drive Enclosure Model 4314 Rack"
shelves in split-bus mode, hanging three shelves of 14 drives off the existing
HSG pair and three shelves of 14 drives off the additional HSG pair.

Our issue is that we currently have a cluster consisting of 4 x ES45s
hanging off the current disk setup. We want to make the disk changes to split-bus
mode without having to rebuild the cluster, and with as little downtime as
possible.

A plan has been devised and I would appreciate comments, or pointers
to anything I may have missed.

Disk Migration Plan for 4 x ES45 cluster

1. Take a full backup of the cluster /, /usr and /var filesystems plus boot disks 1-4 on the live system to a spare 72GB disk.
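
   For step 1 I have in mind something like the following; the spare disk and mount
   point names are purely illustrative:

       # spare 72GB disk assumed to be dsk20, already labelled, newfs'd and
       # mounted on /backup on one member
       mount /dev/disk/dsk20c /backup
       vdump -0 -f /backup/root.vdump /
       vdump -0 -f /backup/usr.vdump /usr
       vdump -0 -f /backup/var.vdump /var
       # plus a vdump of each member's boot_partition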

2. We are not sure yet how split-bus mode will be laid out on the shelves; we think it will be
   alternate disks across the shelf (bus 0, bus 1, bus 0, bus 1, etc.).

The most important bit is to preserve the same unit IDs and WWIDs for the devices, otherwise
the cluster will not boot and basically we are stuffed (full rebuilds).
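
Before touching anything, the plan is to record the existing identifiers from both
ends, roughly as follows (syntax from memory, so check it):

    HSG> SHOW UNITS FULL         (unit numbers, identifiers and membership)
    HSG> SHOW CONNECTIONS        (host connections and unit offsets)

    # on each cluster member
    hwmgr -view devices          (hardware IDs against dsk names)
    hwmgr -show scsi -full       (per-device WWIDs and paths)

    # at the SRM console of each ES45, for the boot units
    P00>>> wwidmgr -show wwid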

3. Migrate the live cluster /, /usr, /var + boot disks & common etc. to the top three shelves by adding
    a third mirror member to each mirrorset. Check the copy has finished and the volumes are NORMAL.
    Because we don't know the new bus layout, we won't know if we are mirroring across
    buses or whether we have an optimum config for performance and redundancy.
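
    The per-mirrorset commands for step 3 would be roughly as follows, with the
    mirrorset and disk names invented for the example:

        HSG> SET MIR_ROOT1 MEMBERSHIP=3         (allow a third member)
        HSG> SET MIR_ROOT1 REPLACE=DISK60000    (new member on the top shelves)
        HSG> SHOW MIR_ROOT1                     (new member shows COPYING; wait
                                                 until all members are NORMAL)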

4. Reboot one node in the cluster at a time, check systems boot and cluster is healthy.
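
    The health checks after each reboot will be roughly:

        clu_get_info -full              (all members UP, expected votes present)
        cfsmgr                          (every filesystem being served, nothing stuck)
        hwmgr -view devices -cluster    (same device numbering seen cluster-wide)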

5. Reduce mirrored volumes down to 2 members.
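
    The intention for step 5 is to use REDUCE again, which drops the named members
    and takes the nominal membership back down with them (disk names invented;
    which members to drop depends on where the step 3 copies landed):

        HSG> REDUCE DISK10000 DISK10100     (retire the members in the way of the
                                             split-bus change)
        HSG> SHOW MIRRORSETS FULL           (each set back to 2 NORMAL members)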

6. Carry out the split-bus work: H/W engineer to install the split-bus module.

7. Move mirrors about to get optimum config for performance and redundancy.

8. Reboot cluster members one at a time to check systems boot and cluster is healthy.

With the shelves now running in split (dual-bus) mode we will have:

(Note: assuming the split is left/right, not interlaced - but it's not important)
(H - Hotspare, O - Operating System, D - Data)

            BUS 1     |     BUS 2
        H O D D D D D | D D D D D O H
        H O D D D D D | D D D D D O H
        H O D D D D D | D D D D D O H

So we have a hotspare per bus, an OS disk per bus, and 5 data disks per bus.
In total that is 30 data disks, which after 3-way mirroring is 10 effective spindles.
Total count of mirrorsets is:
3 for OS
11 for data (10 mirrors + 1 stripe)
11 for split backup

There are not enough spare disks for all the OS disks to go on one controller pair,
as 4 x root, 1 x common (with a partition for quorum) and 1 x build equates to 6 mirrors.
So I am happy to do a clu_delete_member and clu_add_member to use some of the other
disks allocated to OS on the other controller pair.
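
The delete/re-add itself would just be the standard tools, roughly (member number
is only an example):

    clu_delete_member -m 3      (remove member 3 from the cluster)
    clu_add_member              (interactive; point it at a boot disk on the
                                 other controller pair when prompted)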

Thanks
Andy ***NOTE*** Please reply to Andy.Pavitt@betfair.com


