Re: performance of san-disk

From: Darryl Ousterhout (D.Ousterhout@LABSAFETY.COM)
Date: Tue Nov 12 2002 - 16:54:37 EST


Not sure if this is your problem or not, but I was talking with IBM last
week about a known JFS2 performance problem in 5.1. If you are using JFS2 on
file systems other than rootvg, it has been known to cause severe
performance problems. Check to see if you have APAR IY32280 installed. This
was IBM's fix for the JFS2 problem.
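
You can check for it with instfix; something along these lines (standard AIX
commands) will tell you whether the fix is on the box and what maintenance
level you are at:

# does the system have the JFS2 fix?  instfix reports whether all
# filesets for the APAR are found
instfix -ik IY32280

# current maintenance level, for reference
oslevel -r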

Regards,
Darryl

 -----Original Message-----
From: Holger.VanKoll@SWISSCOM.COM [mailto:Holger.VanKoll@SWISSCOM.COM]
Sent: Tuesday, November 12, 2002 3:05 PM
To: aix-l@Princeton.EDU
Subject: performance of san-disk

Hello,

I got a p670 connected to HDS (Hitachi; 9900) Storage with 2 FC-adapter.

No matter how many disks I access concurrently, I get 34 MByte/sec maximum.
All 4 CPUs are at 0% idle while measuring, but depending on the number of
disks they spend more time in kernel than in wait.

# disks accessed   kernel %   wait %   disk-busy (topas)   transfer (MByte/sec)
10-20                  8        90          50-70%                  25
30                    40        60          40-60%                  33
40                    55        45          60-80%                  33
50                    70        30          40-60%                  34
60                    78        22          35-50%                  34

This looks like my CPU-power is the bottleneck, correct?

Since I get 184 MByte/sec from the 4 internal disks at 60% kernel / 40% wait,
it looks to me as if the fibre-channel driver (the kernel time) is using all
my CPU. Is this correct?
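
For what it is worth, the kernel/wait split above comes from the standard
tools; something like the following shows the same picture while the dd jobs
run (output columns may differ a bit between AIX levels):

vmstat 5            # us = user, sy = kernel, id = idle, wa = I/O wait
sar -P ALL 5 5      # the same split, per processor
iostat -d 5         # % busy and KB/s per hdisk / dlmfdrv device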

If someone would like to do some measurements so we have numbers to compare,
here is the script I used:

#!/bin/sh
# Starts one sequential dd reader per disk in the background.
# dlmfdrv4 - 61 on my box; adjust start/end and dev to your naming.
count=2000        # blocks per disk
bsk=256k          # block size -> 2000 x 256 KB = 500 MB read per disk
dev=dlmfdrv       # HDLM devices; use dev=hdisk for plain disks
#dev=hdisk

start=4
end=54

i=$start
while [ $i -le $end ]
do
    dd </dev/$dev$i >/dev/null bs=$bsk count=$count 2>/dev/null &
    let i=i+1
done
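
Each dd reads count x bs = 2000 x 256 KB = 500 MB per disk, so the aggregate
rate is just the total MB divided by the elapsed time. A rough timing wrapper
(hypothetical; it assumes the script above is saved as ddbench.sh and that
your date supports +%s, otherwise take the times off a watch or timex):

#!/bin/sh
# Hypothetical wrapper: times the parallel dd runs and reports aggregate MB/sec.
t0=`date +%s`                  # needs a date that understands %s
. ./ddbench.sh                 # starts the background dd readers, sets start/end
wait                           # block until every dd has finished
t1=`date +%s`

ndisks=`expr $end - $start + 1`
total_mb=`expr $ndisks \* 500`          # 500 MB read per disk (2000 x 256 KB)
elapsed=`expr $t1 - $t0`
echo "$total_mb MB in $elapsed s = `expr $total_mb / $elapsed` MB/sec"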

I would appreciate any other "benchmark" results, especially from the latest
SSA technology and from non-HDS (e.g. IBM) storage servers.

Regards,

Holger


