
Solstice DiskSuite Command Line Configuration

This lab provides a command line alternative to the graphical version of Solstice DiskSuite. It is suitable for System Administrators with previous Solaris UFS disk partitioning experience; it is assumed that the reader is at least familiar with the essential principles of disk concatenation and striping.

First, we will need to install the DiskSuite software from either the Intranet Extensions 1.0 CD-ROM or the Easy Access Server CD-ROM (depending on your version of Solaris).

We will install Solstice DiskSuite exactly as specified, but will then configure and operate DiskSuite from the command line rather than through the Graphical User Interface.

Next, we will add the following to root's PATH:

/usr/opt/SUNWmd/sbin
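
For example, assuming root uses the default Bourne shell, the following lines can be added to /.profile (or typed at the prompt):

PATH=$PATH:/usr/opt/SUNWmd/sbin
export PATH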
In all of the exercises below, take care to use the logical device (disk) names particular to your system. The examples below are provided for demonstration purposes only.


Setting up the MetaDB (State Database)

A DiskSuite installation cannot operate without a "state database", known as a metadb. Ideally, the metadb should be replicated across more than one SCSI controller and on three or more disks. This provides redundancy and failover protection, although Solstice DiskSuite will operate with a metadb on just one SCSI controller and one disk.

In preparation for this lab, we have previously created raw 7 MB partitions on two disks, closest to the inner hub. The metadb placed into those slices should optimally have three replicas declared.

To create one metadb on two disks, each having three replicas (for a total of six replicas):


metadb -a -f -c 3 /dev/dsk/c0t3d0s6 /dev/dsk/c1t0d0s6

    The options on the line above are:
    
    -a		add a new database replica
    
    -f		force the creation of the initial state database
    
    -c (#)		number of state replicas per partition
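
To verify that the replicas were created, list them (the -i option appends a legend explaining the status flags):

metadb -i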
    

Note: the metadb command maintains a file called

mddb.cf

which must never be edited by hand. Next, we need to add an entry to the file:

/etc/opt/SUNWmd/md.tab

for each metadb we have created (in this case, one).
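
The entry takes roughly the following form (the name mddb01 is arbitrary; consult the md.tab man page for the exact syntax on your DiskSuite release):

mddb01 -c 3 c0t3d0s6 c1t0d0s6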


Configuring Metadevices

A metadevice (the actual data-bearing disk configuration described by the metadb) can be configured to implement one of:

    plain concatenation

    • the file system of one disk resource is expanded to span one or more additional disk resources
    • completely fills one disk resource before using the next, and the next, and so on
    • lower performance but superior disaster recovery

    striping

    • data input/output occurs across several disks at once
    • a data chunk (of a pre-defined size) is read or written to disk one, then disk two, then disk three, repeatedly until the entire read/write operation is complete
    • superior performance but poor disaster recovery
Any number of concatenated and striped metadevices can co-exist within the same Solaris kernel, provided that the underlying hardware can support such a configuration. Such capacity planning is not covered in this lab.

Plain Concatenation

If we wish to concatenate three slices from different disks into one metadevice, we edit:


/etc/opt/SUNWmd/md.tab

and add this line:


d0 3 1 /dev/dsk/c0t0d0s1 1 /dev/dsk/c0t2d0s4 1 /dev/dsk/c0t3d0s5

Don't forget to use the proper logical device names of your own disks!

    The segments of this line are as follows:
    
    d0		our first metadevice (the numeric portion is purely arbitrary)
    
    3		number of components to be concatenated
    
    1		number of slices in each component (repeated before each slice)
    
    /dev/dsk/cXtXdXsX	logical device name of each constituent slice
    

Next, we implement the new metadevice:

metainit d0
After that, we lay out a new UFS file system on the metadevice:

newfs /dev/md/rdsk/d0
Next, we make a suitable mount point for the new metadevice:

mkdir /testconcat
Then we add a new entry to:

/etc/vfstab
to make the new metadevice available at every bootup:

/dev/md/dsk/d0	/dev/md/rdsk/d0	/testconcat ufs 2 yes -
and then remount all entries:

mountall
and verify that the mount was correctly made:

df -k
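
The new metadevice itself can be inspected at any time with:

metastat d0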

Striped Configuration

If we wish to stripe three slices from different disks into one metadevice, we must edit


/etc/opt/SUNWmd/md.tab
and add this line:

d1 1 3 /dev/dsk/c0t0d0s1 /dev/dsk/c0t2d0s1 /dev/dsk/c0t3d0s1 -i 8k
Don't forget to use the proper logical device names from your own disks!

    The segments of this line are as follows:
    
    d1		our second metadevice
    
    1		number of stripes in the metadevice (one)
    
    3		number of slices making up the stripe
    
    /dev/dsk/cXtXdXsX	logical device name of each constituent slice
    
    -i 8k		interlace size (the amount of data written to one slice before moving on to the next)
    

Next, we implement the new metadevice:

metainit d1
After that, we lay out a new UFS file system on the metadevice:

newfs /dev/md/rdsk/d1
Next, we make a suitable mount point for the new metadevice:

mkdir /teststripes
Then we add a new entry to:

/etc/vfstab
to make the new metadevice available at every bootup:

/dev/md/dsk/d1 /dev/md/rdsk/d1 /teststripes ufs 2 yes -
and then remount all entries:

mountall
and verify that the mount was correctly made:

df -k
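
Similarly, metastat will report the stripe's interlace value and component slices:

metastat d1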


Advanced Metadevice Configuration

Beyond plain concatenation and striping, additional configuration techniques can optimize data security, system reliability, and performance.


Mirroring

Mirroring duplicates data across two or more metadevice resources: every write goes to each of the mirrored resources. Should one metadevice resource suddenly become unavailable, the data will still be available on the remaining resource(s).

This section must be treated separately from the concatenation and striping sections above. For simplicity, we will cover the basic mirroring of two metadevice resources, which we will name d12 and d13. First, we edit:


/etc/opt/SUNWmd/md.tab
and add the following lines:

d3 -m /dev/md/dsk/d12 /dev/md/dsk/d13
d12 1 1 /dev/dsk/c0t2d0s6
d13 1 1 /dev/dsk/c0t3d0s5

    The segments of the first line are as follows:
    
    d3		our new mirror metadevice
    
    -m		create the mirror
    
    /dev/md/dsk/dXX	the two source metadevices (submirrors) to be mirrored
    
    The last two lines specify which actual slices are to be made into their own metadevices, then mirrored.
Since these are new metadevices, they must be initialized:

metainit d12
metainit d13
metainit d3
After that, we lay out a new UFS file system onto the mirror itself (never onto the individual submirrors):

newfs /dev/md/rdsk/d3
Next, we make a suitable mount point for the new metadevice:

mkdir /testmirror
Then we add a new entry to:

/etc/vfstab
to make the new metadevice available at every bootup:

/dev/md/dsk/d3 /dev/md/rdsk/d3 /testmirror ufs 2 yes -
and then remount all entries:

mountall
and verify that the mount was correctly made:

df -k
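
metastat will show the mirror and the state of each submirror, which should read Okay once the initial sync completes:

metastat d3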

Mirroring Our System (Boot) Disk

In the following example, we will assume that all of the slices to be used below are either fully functional on an active Solaris machine, or are already laid out in UFS by having run newfs.

We will assume the following file systems are already mounted on our System Disk:

  • root directory on /dev/dsk/c0t0d0s0
  • /usr on /dev/dsk/c0t0d0s3
  • swap on /dev/dsk/c0t0d0s1

We will mirror each of the above three slices onto three slices of /dev/dsk/c1t2d0.

First, we edit


/etc/opt/SUNWmd/md.tab
and add the following lines:

d4 -m /dev/md/dsk/d14
d5 -m /dev/md/dsk/d15
d6 -m /dev/md/dsk/d16
d14 1 1 /dev/dsk/c0t0d0s0
d24 1 1 /dev/dsk/c1t2d0s0
d15 1 1 /dev/dsk/c0t0d0s3
d25 1 1 /dev/dsk/c1t2d0s7
d16 1 1 /dev/dsk/c0t0d0s1
d26 1 1 /dev/dsk/c1t2d0s4
Since these are new metadevices, they must be initialized, in this order:

metainit d24
metainit d25
metainit d26
metainit -f d14
metainit -f d15
metainit -f d16
metainit d4
metainit d5
metainit d6

Note the use of the -f option in the lines initializing our original system slices. This forces the use of these currently mounted file systems as metadevices.

Next, we edit:


/etc/vfstab
and change the /usr and swap entries:

/dev/md/dsk/d5	/dev/md/rdsk/d5 	/usr 	ufs 	2 	no 	-
/dev/md/dsk/d6 	- 			- 	swap 	- 	no 	-
Now we must inform the kernel that the root file system is now a metadevice:

metaroot /dev/md/dsk/d4
metaroot updates /etc/system and /etc/vfstab accordingly. Since this fundamentally changes how the system boots, we must restart it:

init 6
When the system comes back up, log in as root and attach the submirrors to the main metadevices:

metattach d4 d24
metattach d5 d25
metattach d6 d26
This commences a complete sync of the data to the mirrors, and will not need to be repeated. The attachment of the mirrors is now permanent for the lifetime of the Solaris installation.
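
The progress of the resync can be watched with metastat (shown here for the root mirror; d5 and d6 can be checked the same way):

metastat d4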

UFS Logging

The logging feature is an option in Solaris 7 and will be a standard feature in Solaris 8, but it can be retrofitted to Solaris 2.6 and some earlier versions with DiskSuite command line instructions. The purpose of UFS logging is to speed up reboots and decrease the number of synchronous disk writes by committing disk changes to an "intent log" first, thus avoiding the system overhead required to constantly update UFS superblocks, cylinder groups, inodes, and related structures. In most cases, UFS logging removes the need to run fsck on the metadevices assigned to it.

For this example, we will apply UFS logging to a partition mounted on a directory called /testdir. In single user mode (recommended, but not mandatory), view this file:


/etc/vfstab
and note that the /testdir entry might look like this:

/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /testdir ufs 4 yes -
First, unmount /testdir:

umount /testdir
Now, create a metatrans device; the first slice named is the master device (which holds the existing data) and the second is the logging device:

metainit d30 -t /dev/dsk/c0t0d0s7 /dev/dsk/c1t5d0s7
Now edit this file:

/etc/vfstab
and change /testdir's entry:

/dev/md/dsk/d30 /dev/md/rdsk/d30 /testdir ufs 2 yes -
and remount /testdir:

mountall
The next time the system is rebooted, /testdir will not be given a full check by fsck (the intent log is simply rolled instead) and you will see the following message:

/dev/md/rdsk/d30: is logging
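
Before rebooting, the composition of the metatrans device (its master and logging devices) can be confirmed with:

metastat d30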

Expanding an Existing UFS File System

Be aware that growing a file system with Solstice DiskSuite (using either the GUI or the command line method) is a one-way process. The size of a grown file system cannot be reduced in place; the resources must be taken offline (preferably in single user mode) and rebuilt.

Reducing a grown file system would require:

  • a complete backup of data
  • a redeclaration of the metadevice into a smaller one
  • then the restoration of the data from backup

To add a new partition to the file system of a pre-existing DiskSuite resource, first attach the new slice to the existing metadevice:

metattach d40 /dev/dsk/c0t1d0s4
Then verify that the attachment worked:

metastat -p d40
Then expand the file system onto the newly added space (here we assume d40 is mounted on /var):

growfs -M /var /dev/md/rdsk/d40
Note that growfs is similar in function to newfs, but extends the existing file system structures onto the newly added space.
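
Afterwards, df -k will report the increased capacity of the mounted file system (here, the /var we assumed above):

df -k /var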


Copyright © 1997, 2001 Jon C. LeBlanc.
