RAID Levels, definitions, and impact on Sybase devices.

RAID == Redundant Array of Inexpensive (some say Independent) Disks.
RAID levels are different ways to configure disk arrays, which are
high-capacity external storage devices typically attached to high-end
servers for database storage.

There are several ways to configure the RAID units, as described below.

---
RAID 0: Striping data across several disks, instead of writing all data
to one disk before moving on to the next.

Advantages: faster data access because you're using multiple disk
heads to read one series of data, instead of using just one head to
read down one disk.  

Disadvantages: A single media failure can prove catastrophic if striping
is not used in conjunction with mirroring or foolproof backups (thus you
rarely see a bare RAID 0 configuration).  There are also concerns that
striping actually slows some types of random disk access, but overall
performance is increased.
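
To make the striping idea concrete, here is a toy sketch (Python, purely
illustrative; real arrays do this in firmware) of how a logical offset
maps onto a hypothetical 3-disk stripe with a 16 KB stripe unit:

    NUM_DISKS = 3          # disks in the hypothetical stripe set
    STRIPE_UNIT = 16384    # bytes written to one disk before moving on

    def map_logical(offset):
        """Map a logical byte offset to (disk, offset on that disk)."""
        chunk = offset // STRIPE_UNIT     # which stripe unit overall
        disk = chunk % NUM_DISKS          # round-robin disk choice
        row = chunk // NUM_DISKS          # how far down each disk
        return disk, row * STRIPE_UNIT + offset % STRIPE_UNIT

    # A 48 KB sequential read touches all three disks at once:
    for off in range(0, 3 * STRIPE_UNIT, STRIPE_UNIT):
        print(off, "->", map_logical(off))

All three heads can then seek and transfer in parallel, which is where
the speedup comes from.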

---
RAID 1: 1-to-1 disk mirroring.

Advantages: Full 1-1 mirroring; online failover capability for
data.  If done at the hardware level, very fast data access.

Disadvantages: 50% reduction in useful disk space.  Very costly.

Typically you see installations of striping and 1-1 mirroring done
together (called RAID 0+1, RAID 0/1, RAID 1+0, or RAID "10"; the variants
differ in whether you stripe mirrors or mirror stripes, but they are all
ways of saying "mirroring and striping").
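
The mirroring idea in toy form (Python, purely illustrative; the real
work happens in the array or volume manager): every write goes to both
sides, and either surviving side can service a read.

    disks = [dict(), dict()]   # two mirrored "disks": block -> data
    alive = [True, True]

    def write(block, data):
        for d, ok in zip(disks, alive):
            if ok:
                d[block] = data        # dual write: both sides updated

    def read(block):
        for d, ok in zip(disks, alive):
            if ok:
                return d[block]        # any surviving side can answer
        raise IOError("both mirrors failed")

    write(7, b"page")
    alive[0] = False                   # one side dies...
    assert read(7) == b"page"          # ...the data stays online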

---
RAID 5: A string of disks (five is a common configuration, used in the
examples below) is bound together and configured for full failover within
the string, based on parity blocks distributed across the member disks.
If any one disk in the bound string fails, the remaining disks can
reconstruct its contents on the fly and continue working, albeit with
degraded performance.

Advantages: Full failover for a set of disks, with only one disk's worth
of capacity (20%, or 1 disk out of 5, in the example) "wasted" on parity.

Disadvantages: Write performance to the devices is terribly slow because
of the need not only to write the data, but to write its parity backup
information elsewhere in the stripe.  In fact, one logical write involves
two disk reads and then two disk writes: quadruple the I/O load (see the
sketch below).  The multiple writes also introduce a data-integrity issue
(a power failure mid-write can leave data and parity inconsistent).
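
A toy sketch of the parity mechanics and the small-write penalty (Python,
illustrative only; a real array does this per stripe in firmware):

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    data = [b"\x01", b"\x02", b"\x04", b"\x08"]   # 4 data disks
    parity = b"\x00"
    for block in data:
        parity = xor(parity, block)   # the 5th disk's worth: parity

    # Failover: rebuild a lost disk (say disk 2) from the 4 survivors.
    rebuilt = parity
    for i, block in enumerate(data):
        if i != 2:
            rebuilt = xor(rebuilt, block)
    assert rebuilt == data[2]

    # The small-write penalty: updating one block means reading the old
    # data and old parity (2 reads), recomputing the parity, and then
    # writing the new data and new parity (2 writes).
    new = b"\x10"
    parity = xor(xor(parity, data[2]), new)   # old parity ^ old ^ new
    data[2] = new

(In real RAID 5 the parity blocks rotate across all the member disks
rather than living on one dedicated disk.)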

Most RAID arrays can overcome this slowness by caching at the array
level.  DG Clariions certainly have a mirrored, battery-backed write
cache on their disk arrays.  However, too much I/O activity can flood the
write cache and lead to errors (and, if this happens in conjunction with
database activity, database corruption).  I saw this problem most
recently with Sybase 10 on a Sun box running Solaris 2.3 with a DG
Clariion disk array.  The solution was to actually throttle down I/O
activity in the Solaris kernel.

===
How to configure RAID for maximum performance, failover, and disk-space
efficiency in Sybase systems.

Pretty straightforward:
- Put high WRITE I/O activity segments on RAID 0+1 devices.  These include
tempdb devices, all log segments, and data segments that are OLTP-oriented
(lots of reads, writes, and updates).  Note: striping may not help log
disks much, since the log is written sequentially, so you can simply
mirror those devices.
- Put high READ I/O activity on RAID 5 strings.  This means heavy DSS
data only: data that has very little write activity, such as read-only,
archive, and data warehousing data.  This way, you pay the write penalty
once upon the initial insert, while reads are just as efficient as on
RAID 1 or straight devices.

- Match up indexes with the type of data they index.  Remember, when you
update a table's data, you also update any affected indexes.  Granted,
it's not as "heavy" an update (it's just modifying B-tree pages), but
it's still I/O activity.
- ALWAYS use hardware or O/S-based mirroring.  Do NOT depend on Sybase's
device mirroring as your primary method of failover.  In general, any
type of software mirroring will not be as efficient as a hardware or an
O/S mirror.  Even Sybase recommends using O/S-based mirroring.  O/S
mirroring is built into the kernel, issuing a dual write when writing
data.  Sybase mirroring has a primary/secondary concept, so that data is
written to one device, then to the other, of a mirrored pair (serialized,
slow behavior; see the sketch after this list).  Plus, in some cases, O/S
mirroring will actually make a point-in-time decision as to which mirror
to read from (based on load and how far the disk heads are from the data
in question).  If you depend on O/S mirroring, you can sometimes get
added benefits such as hot-swap disks, disk sharing, and NVRAM write
buffering (debatable whether this helps or hurts; I believe it introduces
write-buffering issues but is a godsend during recoveries).
- Sybase mirroring does have one useful application: when creating a
backup of a database/device, Sybase mirroring is a great, fast way to
accomplish this (disk mirror the device, let it synchronize, then disk
unmirror it with mode = retain and back up the detached copy).
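
To illustrate the serialized-write point above, a toy latency model
(Python; the sleep calls stand in for physical writes, and nothing here
is actual Sybase or O/S code).  A serial mirror pays the sum of the two
write latencies, while a kernel-issued dual write pays only the slower
of the two:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def disk_write(latency):
        time.sleep(latency)            # stand-in for one physical write

    def serial_mirror(l1, l2):         # primary-then-secondary behavior
        disk_write(l1)
        disk_write(l2)

    def parallel_mirror(l1, l2):       # O/S-style dual write
        with ThreadPoolExecutor(2) as pool:
            for f in [pool.submit(disk_write, l) for l in (l1, l2)]:
                f.result()

    for fn in (serial_mirror, parallel_mirror):
        start = time.time()
        fn(0.05, 0.05)
        print(fn.__name__, round(time.time() - start, 3), "sec")
    # serial: ~0.10 sec (the sum); parallel: ~0.05 sec (the max)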

---
Other Random Information

- RAID levels have no effect on a disk's ability to be read; binding
5 disks together into a RAID 5 string merely presents a device to the
O/S (readable by the format command) that is larger than normal.

- In general, despite the trend towards larger disks, many smaller disks
will perform much better than a few larger ones of the same total
capacity, because you get more spindles (and thus more heads) working in
parallel.  If you can afford to purchase extra storage arrays, you'll get
faster performance.

- Be careful when configuring Sun arrays; never put both halves of a
mirror in the same tray, or a single tray failure takes out both copies
and you eliminate all advantages of failover.

- Try to make the stripe unit (block size) of any RAID 0 striping you set
up match the I/O size of the database server in question, for maximum
performance (see the sketch at the end of this list).

- There are other variants of RAID, but I've never installed them.
RAID 3 and RAID 4 are different variants on RAID 5 (they keep parity on
a single dedicated disk instead of distributing it), but are not
considered as efficient.  There are RAID 5+ configs out there (RAID 7,
e.g.) but I would not trust them.

- SSA info page: http://www.columbia.edu/~marg/misc/ssa/
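
A toy check of the stripe-unit advice above (Python; the 16 KB large-I/O
size is typical of Sybase servers of this era, but the numbers are
illustrative):

    def stripe_units_touched(io_size, io_offset, stripe_unit):
        """How many stripe units (hence disks) one server I/O hits."""
        first = io_offset // stripe_unit
        last = (io_offset + io_size - 1) // stripe_unit
        return last - first + 1

    SERVER_IO = 16 * 1024          # e.g. a 16 KB large-I/O buffer pool
    for unit in (2 * 1024, 16 * 1024, 64 * 1024):
        print(unit, "->", stripe_units_touched(SERVER_IO, 0, unit))
    # A 2 KB stripe unit splits each 16 KB I/O across 8 chunks; a 16 KB
    # (or larger, aligned) stripe unit keeps it to one disk request.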
---
On Solaris the old argument was always ODS vs. Veritas.
- ODS (Online: DiskSuite) was command-line based, and "closer" to the
disks.  More difficult to administer but more efficient.
- VxVM (Veritas Volume Manager) is graphically based and easier to set
up.  As of Solaris 2.7 it is bundled with the O/S, and ODS is EOL'd.