summary: LSM RAID5 question

From: Didier Godefroy (ldg@ulysium.net)
Date: Sun Feb 27 2005 - 14:02:00 EST


This might help a few lost souls that end up with the same questions.
I tried creating a raid5 set with 4 disks on a new system and had trouble
understanding what was actually happening, because the end result was a
raid5 set of 3 disks plus the 4th disk showing up as a concatenated disk in
that raid5 volume. So my original question was whether LSM can create raid5
sets with more than the minimum of 3 disks, and how a 4th drive showing up
as concatenated could be of any use for redundancy if it's just
concatenated.
I got several responses, mostly pointing to some books, which I do have but
which aren't specific enough. For example, the file system admin book by
Steven Hancock does show how to make raid5 sets, but not in enough detail
for me to understand exactly how it really works.
I found a PDF manual on LSM on the HP site, which goes deeper into the
details and explains more fully how raid5 is handled, especially for sets
with more than 3 disks.
I couldn't test this on that new system any more, but I had a spare system,
so I gathered a few old drives for it and did some experimenting.

Basically, what was happening originally was that when using the LSM GUI to
set up the raid5 set, far too much was being done by default, which was
giving a result that wasn't exactly what I expected.
The set of 4 disks I was trying to make was really being created as a 3-disk
raid5 set plus a log plex on the 4th drive, and that log plex is what I saw
as the concatenated disk: it wasn't a data disk concatenated with the other
set of 3, it was actually the log plex.
When I couldn't obtain the results I expected with the GUI, I went to the
CLI, tried it with volassist, and ended up with the same result.
So now that I've done some experimenting on that test bed, I think I
understand what was going on and how LSM actually handles raid5.
LSM can actually make raid5 sets with up to 8 columns, not just the minimum
of 3, and it places a log plex on a separate disk unless we tell it not
to.
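
If you don't want the log plex at all, volassist can apparently be told to
skip it. I haven't tried every variant, so take the exact attribute (nolog
vs. nlog=0) as an assumption and check it against the volassist(8) man page,
but something along these lines should do it:

volassist make test 100000 layout=raid5,nolog ncolumn=4 dsk1 dsk2 dsk3 dsk4
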
However, I found out one important detail about how volassist does things:
the manual says that the order in which we list the disks on the command
line will be respected, and that the last one listed will get the log plex,
for example:

volassist make test 100000 layout=raid5 ncolumn=4 dsk2 dsk3 dsk4 dsk5 dsk1

That should make the 4-disk raid5 set with disks 2, 3, 4 and 5, and place
the log plex on dsk1. But what actually happens is that dsk1, 2, 3 and 4 get
the raid5 set and the log plex is put on dsk5. Why??? Go figure!
If someone can explain the reasons for this, I'd like to know..
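
By the way, if you want to check where the log plex actually ended up,
volprint should show it; it lists the raid5 plex and the log plex along with
the subdisks and the disks they sit on, for example:

volprint -ht test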

In any case, I was able to create that 4-disk raid5 set on my test bed
system, with a 5th disk holding the log plex. Although it's not doing it
exactly as it should, it does work.
I guess the best way to do it exactly as you want is not to use volassist,
but rather to use the bottom-up method: create everything manually, one
piece at a time, with volmake, explicitly specifying each disk in the set,
and then place the log plex separately where you want it.
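
To give an idea of what that bottom-up sequence looks like, here is a rough
sketch. I haven't run exactly this, and the subdisk lengths, stripe width
and the way the log plex gets associated are only illustrative, so check
them against the volmake(8) and volume(8) man pages:

volmake sd dsk1-01 dsk1,0,102400
volmake sd dsk2-01 dsk2,0,102400
volmake sd dsk3-01 dsk3,0,102400
volmake sd dsk4-01 dsk4,0,102400
volmake sd dsk5-01 dsk5,0,2880        # small subdisk for the log
volmake plex test-01 layout=raid5 stwidth=16 \
        sd=dsk1-01,dsk2-01,dsk3-01,dsk4-01
volmake plex test-02 sd=dsk5-01       # concat plex to be used as the log
volmake -Uraid5 vol test plex=test-01,test-02
volume start test

The point is that every subdisk is picked by hand, so the log plex goes
exactly where you want it instead of wherever volassist decides to put it.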

One thing I find wasteful and not very efficient is the log plex, which is
rather small yet gets a disk all to itself, leaving most of that disk empty.
Since it's not a good idea to place a log plex on the same disks as the ones
holding the raid5 data, I fail to see any good use for the unused space on
the disk holding the log plex. So much wasted space...

-- 
Didier Godefroy
mailto:dg@ulysium.net

