Summary: 5.1 Shared System Image

From: Browett, Darren (dbrowett@city.coquitlam.bc.ca)
Date: Mon Jul 22 2002 - 17:00:28 EDT


Thank you to all who responded: Colin Bull, Phil Baldwin, Jim Lola, Bryan
Williams, and alan@nabeth.

In summary:

Yes, I can add new nodes and drop the original node. One point that came out
that I hadn't considered: instead of having the install on one of the local
disks of the original node, create the install directory on the SAN, and use
mirrored disks as opposed to RAID 5.

Also, I should go to 5.1A when implementing Oracle 9i with RAC.
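
For reference, a minimal sketch of what the add/drop steps look like with the
standard TruCluster V5.1 member tools (run from an existing member; exact
options vary, so check the clu_add_member(8) and clu_delete_member(8) man
pages):

    # Add each production DS20 (interactive; prompts for the member ID,
    # hostname, votes, and the member boot disk on the SAN):
    clu_add_member
    # Boot the new member from its boot disk at the SRM console, then check:
    clu_get_info
    # Halt the test member, then remove it from the cluster:
    clu_delete_member -m <member-id>
    # Verify votes / expected votes afterwards:
    clu_quorum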

Darren

----------------
Responses

Your idea sounds very practical; you do not need a local
install drive, just use a partition on the SAN.

For databases, do not use RAID 5.
In last week's postings, someone told of 3 disk failures out
of 12 in 3 months. With RAID 5 there is a possibility of the
whole lot going down. Use RAID 10.
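
As a rough illustration (assuming a 12-disk set and a uniformly random second
failure before the rebuild finishes), any second failure kills a RAID 5 set,
while on RAID 10 it only does so if it hits the failed disk's mirror partner:

    P(\text{loss} \mid \text{2nd failure}): \quad \text{RAID 5} = 1, \qquad
    \text{RAID 10} = \frac{1}{N-1} = \frac{1}{11} \approx 9\% \text{ for } N = 12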

Colin Bull
c.bull@VideoNetworks.com

---
I can't see a problem with removing the test members when you 'go live'.
Theoretically it shouldn't be a problem. We had a 'temporary' ES40 in our
cluster, which was removed when our GS140 went live.
For an operational cluster you don't need a system disk. A system disk is
only of some use if there is a problem with a member.
If for some reason you had to reinstall the cluster, you would have to build
one of the SAN disks as if it were the system disk.
For diagnosis, it is possible to boot from the OS CD and get a shell from
there.
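
A minimal sketch of that path from the SRM console (the CD device name below
is only an example; take it from "show device" on your own console):

    >>> show device          # find the CD-ROM device (e.g. dqa0)
    >>> boot dqa0            # boot the Tru64 install CD
    # from the installation menu, choose the UNIX shell option to get a
    # shell for mounting and repairing the cluster disks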
Thanks and Regards.
		Phil
---
If I were you, I would use 5.1A if you are going to do a 9i cluster, because
of the direct I/O capabilities, and then actually build a new cluster.
That local system disk is very important because you can use it to recover
or repair your cluster if you encounter problems. This local system disk is
also known as the Emergency Repair disk.
Jim
---
We typically build the install disk on a partition on the SAN, so that
any of the systems can access it.
This is our example setup:
"We take an 18 GB disk, mirror it, and partition it as follows:
Part size       HSG    ID     Used for
                 Unit
10%		D10    1010    cluster root   /
50%             D11    1011    cluster usr    /usr
39%             D12    1012    cluster var    /var
LARGEST         D13    1013    cluster quorum
We take another 18 gb disk, mirror it, and partition it as follows:
15%             D1     1001    member1_boot
15%             D2     1002    member2_boot
15%             D3     1003    member3_boot
15%             D4     1004    member4_boot
LARGEST         D5     1005    cluster pre-install disk
Before you boot the install CD, type this at the console:
wwidmgr -quickset -udid 1005
That will make the pre-install disk visible to the system. Then boot the
install CD and install on the disk that has the ID of 1005.
The other units will get mapped to dsk{something}; you need those mappings
before you start the cluster install. Use "hwmgr -v d" to map the IDs to
the dsk designations."
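
For example (device names and exact hwmgr output vary by version; check
hwmgr(8)):

    # At the SRM console, expose the pre-install unit and re-init before
    # booting the install CD:
    >>> wwidmgr -quickset -udid 1005
    >>> init
    # Once Tru64 is running, list the devices and note which dskN carries
    # each HSG80 unit identifier (1001-1013 in the layout above):
    hwmgr -view devices          # long form of "hwmgr -v d"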
This may not apply to you, but may give you an idea of what needs to be 
done. BTW, install Tru64 and TruCluster + all the other optional 
components on the preinstall disk, then patch it, then create your 
cluster. This will save you from having to do a rolling cluster upgrade 
right off the bat.
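
A rough ordering of those steps, assuming the standard Tru64/TruCluster
tooling (dupatch for the patch kit, clu_create and clu_add_member for the
cluster; details vary by version, so follow the installation guide):

    # 1. Boot the install CD and install Tru64 + TruCluster and the other
    #    optional subsets onto the pre-install disk (UDID 1005 above).
    # 2. Patch that standalone system:
    dupatch
    # 3. Create the single-member cluster onto the shared cluster root,
    #    /usr, /var, quorum and member1 boot units:
    clu_create
    # 4. Then add the remaining members from a running member:
    clu_add_member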
Good luck,
Bryan Williams
---
I don't know how special the first member of the cluster
	is.  Since the system you're using for it isn't the loaner,
	you could keep it as part of the cluster (acting as
	a quorum node, if nothing else), and adding the two real
	members is just like adding more members.  You can
	probably also remove the loaner DS20 to keep things
	clean.
	My experience (though limited) was that once the common
	cluster disk was created from the initial installation
	disk, the installation disk was no longer used.  Or at
	least didn't seem to be.  So, it may be possible to
	remove the initial member, once the core cluster members
	are added.
	This wasn't the clean answer you were probably hoping
	for, but I hope it helps a little.
---------------------------------
My original question:
I am in the very early stages of planning a new 5.1 cluster, and have a very
simple question (using the fine book by Tim Donar, Tru64 Unix-Oracle 9i
Cluster, Quick Reference).
Suppose I create a "test" cluster consisting of a DS10/DS20 combined with an
HSG80 and a RAID 5, 44 GB shared system image disk. The DS10 would have the
Unix install disk. (We own the DS10 and the DS20 is a loaner.)
When it comes time to "cut" over to production, can I simply add the two
current production DS20's and remove the DS10/DS20 "test" systems?
My thinking is: I will have a "patched", production-ready 5.1 environment
already installed on my SAN, so why not make use of it for production?
One concern would be that the production DS20's would not have a local
install disk.
----------------------------------------------------------------------------
Darren Browett P.Eng                          This message was transmitted
Data Administrator                            using 100% recycled electrons
Information and Communication Technology
City of Coquitlam
P:(604)927 - 3614
E:dbrowett@city.coquitlam.bc.ca
----------------------------------------------------------------------------

