From: przemolicc@poczta.fm
Date: Thu Jan 02 2003 - 07:02:21 EST
We have the following configuration (V880): disk c1t0d0 (on an E450 it
would be called c0t0d0) is divided into several slices:
Part  Size       Mount
 0    1.95GB     [*] /
 1    1.95GB     [*] swap
 2    33.92GB
 3    2.93GB     [*] /var
 4    26.98GB    [*] /u01 (oracle)
 7    100.16MB
Slices marked [*] are made into metadevices and mirrored with disk c1t8d0.
All the slices (metadevices) also have associated hot spares, e.g.:
# df -k /
Filesystem            kbytes    used    avail  capacity  Mounted on
/dev/md/dsk/d4       1984564  523490  1401538     28%    /
# metastat d4
d4: Mirror
    Submirror 0: d0
      State: Okay
    Submirror 1: d2
      State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4096602 blocks

d0: Submirror of d4
    State: Okay
    Hot spare pool: hsp001
    Size: 4096602 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c1t0d0s0          0      No     Okay

d2: Submirror of d4
    State: Okay
    Hot spare pool: hsp001
    Size: 4096602 blocks
    Stripe 0:
        Device      Start Block  Dbase  State  Hot Spare
        c1t8d0s0          0      No     Okay
The hot spare pool is defined as:

hsp001: 1 hot spare
    c1t3d0s2    Available    71127180 blocks
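For reference, a setup like the one above is typically built with commands
along these lines (just a sketch using the device names from our
configuration; the remaining slices are done the same way):

```shell
# Build the mirror for the root slice (repeat per slice pair)
metainit d0 1 1 c1t0d0s0      # submirror on the primary disk
metainit d2 1 1 c1t8d0s0      # submirror on the second disk
metainit d4 -m d0             # create a one-way mirror on d0
metattach d4 d2               # attach d2 (starts a resync)

# Create the hot spare pool from the whole spare disk
# and associate it with both submirrors
metainit hsp001 c1t3d0s2
metaparam -h hsp001 d0
metaparam -h hsp001 d2
```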
The problem comes when disk c1t0d0 fails. The hot spare (to be more precise:
c1t3d0s2, which means the whole disk) replaces only the failed _slice_ (!),
not the entire disk (I know that this is how SDS works). This means that the
other metadevices (slices) from c1t0d0 are not hot-spared.
What I would like is to have all the other slices (metadevices) hot-spared
as well, while still using only one hot spare disk.
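What I have in mind is something like the following (just a sketch, assuming
the spare disk is repartitioned to match c1t0d0 and that metahs accepts
spares this way):

```shell
# Copy the slice layout (VTOC) of c1t0d0 onto the spare disk c1t3d0
prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2

# Add each spare slice to all hot spare pools; on a failure SDS
# would then pick the smallest spare slice that fits the failed slice
for s in 0 1 3 4; do
    metahs -a all c1t3d0s$s
done
```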
Can you give me some advice on how to cope with this problem?
przemol
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers
This archive was generated by hypermail 2.1.7 : Wed Apr 09 2008 - 23:25:32 EDT