Metadb status after replacing failed disk under DiskSuite

From: Perttunen, Bruce (BPerttunen@bcbsm.com)
Date: Fri Jul 26 2002 - 09:23:31 EDT


I have a 420R with the two internal disks mirrored to each other. I had to
replace the c0t0d0 drive, so I broke the mirrors and removed all metadevices
on that disk. I also removed the two copies of the metadb on c0t0d0s7 - there
were still two copies on the other (good) disk. I installed a new drive,
created the metadevices, attached them to the mirrors and synced everything
up. I then added two metadb replicas on the new c0t0d0 disk, and the output
from the "metadb -i" command now shows...

# metadb -i
        flags           first blk       block count
     a    u             16              1034            /dev/dsk/c0t0d0s7
     a    u             1050            1034            /dev/dsk/c0t0d0s7
     a  p luo           16              1034            /dev/dsk/c0t1d0s7
     a  p luo           1050            1034            /dev/dsk/c0t1d0s7
 o - replica active prior to last mddb configuration change
 u - replica is up to date
 l - locator for this replica was read successfully
 c - replica's location was in /etc/opt/SUNWmd/mddb.cf
 p - replica's location was patched in kernel
 m - replica is master, this is replica selected as input
 W - replica has device write errors
 a - replica is active, commits are occurring to this replica
 M - replica had problem with master blocks
 D - replica had problem with data blocks
 F - replica had format problems
 S - replica is too small to hold current data base
 R - replica had device read errors
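
(For reference, the replacement steps described above correspond roughly to the
standard DiskSuite sequence below. The metadevice names d0/d10 and the slices
shown are placeholders rather than the ones actually used on this system, and
the metadetach/metaclear/metainit/metattach steps would be repeated for each
mirror on the disk, so treat this as a sketch rather than the exact commands run.)

 # metadetach d0 d10                     (detach the submirror living on c0t0d0)
 # metaclear d10                         (remove that submirror metadevice)
 # metadb -d c0t0d0s7                    (delete the two replicas on the bad disk)
   ... physically replace the drive, then copy the label from the good disk ...
 # prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2
 # metainit d10 1 1 c0t0d0s0             (recreate the submirror)
 # metattach d0 d10                      (reattach; DiskSuite resyncs from the good side)
 # metadb -a -c 2 c0t0d0s7               (add two replicas back on the new disk)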

None of the metadb replicas is listed as the master (the "m" flag). Also, the
two new replicas on c0t0d0s7 don't have the "l" flag. Can anyone tell me if
this is a problem?

Thanks.

Bruce Perttunen
Blue Cross Blue Shield of Michigan
