Disksuite configuration question

From: Rierson Robert Civ OC-ALC/MASLA (Robert.Rierson@tinker.af.mil)
Date: Thu Dec 19 2002 - 12:35:59 EST


Hello Sun Managers. I realize that we are all busy this time of year, but if
some of you familiar with DiskSuite configuration guidelines could look over
my configuration before I implement it, it would be greatly appreciated. I am
trying to configure an NFS server to give users some additional disk space
and to get around the 2 GB limitation of SunOS. I need to do this with
existing resources, so faster or better disks are not really an option. I
have a large network of SPARC 20s running SunOS 4.1.4 (Yep!!!). I want to
give users additional disk space by taking some existing 6 x 4.2 GB disk
arrays and combining them with DiskSuite. So, I will be configuring a SPARC
20 running Solaris 2.7 with DiskSuite 4.2. As the clients are all SunOS, they
will mount over NFS V2. I have two fast/wide SCSI controllers connected to
the StorEdge arrays (6 x 4.2 GB each). My configuration thoughts are below.
Would you please look them over and see whether I am making any major
performance snafus? I am interested in getting the best performance and
redundancy that I can.

Thanks

My thoughts are as follows:

1. We have a SPARC 20 running Solaris 2.7 with DiskSuite 4.2. The disk
configuration is two 6 x 4.2 GB StorEdge arrays on separate fast/wide
controllers; the output of the format command is below. This machine will be
used exclusively as an NFS server, serving users' home directories to NFS
clients. The clients run NFS V2, so our read/write size will be almost
exclusively 8 KB.

I am considering creating a trans metadevice so that I can enable UFS
logging. I would create a 4.2 GB mirror between c1t1d0s2 and c2t1d0s2 (one
half on each controller) to act as the logging device. Then I would create a
20.8 GB RAID 0+1 device from the remaining 10 drives in the arrays.
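
In outline, the metadevice tree I have in mind (device names match steps 5
through 9 below):

   d0 (trans metadevice, mounted on /home)
     +-- d40  master: RAID 0+1, ~20.8 GB
     |    +-- d41  5-way stripe: c1t2d0s2 c2t2d0s2 c1t3d0s2 c2t3d0s2 c1t4d0s2
     |    +-- d42  5-way stripe: c2t4d0s2 c1t5d0s2 c2t5d0s2 c1t6d0s2 c2t6d0s2
     +-- d50  log: 2-way mirror, 4.2 GB
          +-- d51  c1t1d0s2
          +-- d52  c2t1d0s2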

OUTPUT from FORMAT

       3. c1t1d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@1,0
       4. c1t2d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@2,0
       5. c1t3d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@3,0
       6. c1t4d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@4,0
       7. c1t5d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@5,0
       8. c1t6d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@0,10000/sd@6,0
       9. c2t1d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@1,0
      10. c2t2d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@2,0
      11. c2t3d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@3,0
      12. c2t4d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@4,0
      13. c2t5d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@5,0
      14. c2t6d0 <SUN4.2G cyl 3880 alt 2 hd 16 sec 135>
          /iommu@f,e0000000/sbus@f,e0001000/QLGC,isp@3,10000/sd@6,0

2. Each disk will be formatted identically, containing a single slice, s2:

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)          0
  1 unassigned    wm       0               0         (0/0/0)          0
  2 unassigned    wm       0 - 3879        4.00GB    (3880/0/0)  8380800
  3 unassigned    wm       0               0         (0/0/0)          0
  4 unassigned    wm       0               0         (0/0/0)          0
  5 unassigned    wm       0               0         (0/0/0)          0
  6 unassigned    wm       0               0         (0/0/0)          0
  7 unassigned    wm       0               0         (0/0/0)          0
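
To get identical labels onto all twelve disks, I plan to partition c1t1d0 in
format and then copy its VTOC to each remaining drive, along these lines
(repeated per disk):

# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2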

3. Create the initial state database replicas, one per disk, so 12 replicas
will exist. They go on s2, which will also be part of a metadevice created
later:

#metadb -a -f c1t1d0s2 c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2 c1t6d0s2 \
        c2t1d0s2 c2t2d0s2 c2t3d0s2 c2t4d0s2 c2t5d0s2 c2t6d0s2
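
Afterwards, a quick check that all twelve replicas exist and are healthy:

# metadb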

4. The final product desired is a 4.2 GB RAID 1 device for logging and a
20.8 GB RAID 0+1 device for data. Is that best, or should I skip the trans
metadevice and just create a single six-way striped/mirrored (RAID 0+1)
device?

5. Create the metadevices that will make up the trans metadevice's logging
device.

# metainit d51 1 1 c1t1d0s2   # create stripe 1
# metainit d52 1 1 c2t1d0s2   # create stripe 2
# metainit d50 -m d51         # create a one-way mirror from stripe 1
# metattach d50 d52           # attach stripe 2 to the mirror
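
Attaching d52 starts a resync from d51, so I would let that finish before
continuing, watching its progress with:

# metastat d50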

6. Create the metadevices that will make up the trans metadevice's master
device. As this is an NFS server for clients requesting 8 KB chunks of data
(NFS V2), what do you think my interlace size should be?

# metainit d41 1 5 c1t2d0s2 c2t2d0s2 c1t3d0s2 c2t3d0s2 c1t4d0s2 -i 8k  # stripe 1, 8 KB interlace
# metainit d42 1 5 c2t4d0s2 c1t5d0s2 c2t5d0s2 c1t6d0s2 c2t6d0s2 -i 8k  # stripe 2, 8 KB interlace
# metainit d40 -m d41   # create a one-way mirror from stripe 1
# metattach d40 d42     # attach stripe 2 to the mirror
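
For what it is worth, my arithmetic on the master device: each submirror is
a 5-way stripe of 4.00 GB slices, so the mirror presents 5 x 4.00 GB (about
20 GB) of usable space, and with an 8k interlace an aligned 8 KB NFS V2
transfer should land on a single spindle in each submirror, letting
concurrent client requests spread across different disks. Does that logic
hold?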

7. Create the file system for the logging device. The DiskSuite Reference
Guide suggested these newfs parameters. Do you agree?

# newfs -m 1 -i 8192 /dev/md/rdsk/d50
# fsck /dev/md/rdsk/d50

8. Create the file system on the master device. I specified a cluster size
as a multiple of the number of stripe slices and the interlace value. Is
this appropriate, or are there better settings?

# newfs -m 1 -i 8192 -c 40 /dev/md/rdsk/d40
# fsck /dev/md/rdsk/d40
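
To double-check what newfs actually laid down, I would read the parameters
back with:

# mkfs -m /dev/md/rdsk/d40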

9. Create the trans metadevice.

# metainit d0 -t d40 d50   # trans device: d40 = master, d50 = log

10. Mount the file system

# mount /dev/md/dsk/d0 /home
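
To make the mount permanent across reboots, I would also add a vfstab entry
along these lines:

/dev/md/dsk/d0   /dev/md/rdsk/d0   /home   ufs   2   yes   -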

Thanks for any and all information and input you can provide.

Robert Rierson
robert.rierson@tinker.af.mil