Re: performance of san-disk

From: Holger.VanKoll@SWISSCOM.COM
Date: Tue Nov 26 2002 - 05:36:42 EST


old thread... but good news:

after accessing the raw devices (unbuffered), rvpath instead of vpath, transfer increased from 35 MB/s to 150 MB/s and more.

so that was just a problem of how I was measuring...
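
in case anyone wants to reproduce the comparison, this is roughly how I measured it - a plain sequential dd read against the block device and then against its raw (character) counterpart; vpath0/rvpath0 are just example device names:

# buffered read through the block device
time dd if=/dev/vpath0 of=/dev/null bs=256k count=4096

# unbuffered read against the raw (character) device of the same vpath
time dd if=/dev/rvpath0 of=/dev/null bs=256k count=4096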

        -----Original Message-----
        From: Willeat, Todd [mailto:TWilleat@MHP.SMHS.COM]
        Sent: Wednesday, November 13, 2002 9:47 PM
        To: aix-l@Princeton.EDU
        Subject: Re: performance of san-disk
        
        
        I seem to recall that all non-IBM disk devices default to a queue depth of 1. I haven't really done any performance tuning on it, but if anyone has suggestions for queue depth or the number of commands to queue to the adapter, I'd love to hear them...
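
        (In case it saves someone a lookup, I believe the adapter-side change itself would be something like this - the value is just an example, not a recommendation; -P only records the change in the ODM so it takes effect after a reboot or after the adapter is reconfigured:)

        # number of commands queued to the FC adapter (example value)
        chdev -l fcs0 -a num_cmd_elems=200 -P

        # confirm the current setting
        lsattr -El fcs0 -a num_cmd_elems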

                -----Original Message-----
                From: Vincent D'Antonio, III [mailto:dantoniov@COMCAST.NET]
                Sent: Wednesday, November 13, 2002 2:17 PM
                To: aix-l@Princeton.EDU
                Subject: Re: performance of san-disk
                
                

                Todd,

                On the disk your queue depth is set to 1; I seem to remember that needs to be changed. I can't find the paper I had on it, but I had a 9970 and I had it set to 5. I did do some playing with it and it did change my I/O stats. I would check with HDS to see how they arrive at the setting for this parm.
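
                (I think the change itself is just a chdev on the disk while it is not in use - 5 here is only what I used on the 9970, not an HDS recommendation:)

                # set the per-disk queue depth; -P stores the change in the ODM
                # so it is applied the next time the disk is configured
                chdev -l hdisk10 -a queue_depth=5 -P

                # verify
                lsattr -El hdisk10 -a queue_depth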

                HTH

                Vince

                 

                -----Original Message-----
                From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of Willeat, Todd
                Sent: Wednesday, November 13, 2002 11:35 AM
                To: aix-l@Princeton.EDU
                Subject: Re: performance of san-disk

                 

                I believe the only thing I changed was the "INIT Link Value" on the fcs devices. I was told by our HDS vendor that this did not affect performance, only how quickly the HBA would sync up with the switch. I did not use your script - I used my own, which measures max performance with filesystem/cache influences. I have used both topas and nmon to verify disk throughput during writes. I tend to stripe most of my filesystems, so my numbers were with the filesystem striped across 2 LUNs. With that said, here's the info you wanted as well as the script I used...
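
                (For reference, I believe setting that attribute is just a chdev on the adapter - pt2pt is the value you'll see in the lsattr output below:)

                # set the link initialization flags on the FC adapter; -P records
                # the change so it takes effect after reboot or rmdev/cfgmgr of fcs0
                chdev -l fcs0 -a init_link=pt2pt -P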

                 

                lsattr -El fcs0:
                bus_intr_lvl 13 Bus interrupt level False
                intr_priority 3 Interrupt priority False
                bus_io_addr 0xfff400 Bus I/O address False
                bus_mem_addr 0xc3238000 Bus memory address False
                lg_term_dma 0x200000 N/A True
                max_xfer_size 0x100000 Maximum Transfer Size True
                num_cmd_elems 200 Maximum number of COMMANDS to queue to the adapter True
                pref_alpa 0x1 Preferred AL_PA True
                sw_fc_class 3 FC Class for Fabric True
                init_link pt2pt INIT Link flags True

                 

                lsattr -El fscsi0:
                scsi_id 0x11000 Adapter SCSI ID False
                attach switch How this adapter is CONNECTED False
                sw_fc_class 3 FC Class for Fabric True

                 

                lsattr -El hdisk10:
                scsi_id 0x11300 SCSI ID False
                lun_id 0x4000000000000 Logical Unit Number ID False
                location Location Label True
                ww_name 0x50060e8000037160 FC World Wide Name False
                pvid none Physical volume identifier False
                queue_depth 1 Queue DEPTH True
                q_type simple Queuing TYPE True
                q_err yes Use QERR bit True
                clr_q no Device CLEARS its Queue on error True
                rw_timeout 30 READ/WRITE time out value True
                start_timeout 60 START unit time out value True
                reassign_to 120 REASSIGN time out value True

                        -----Original Message-----
                        From: Holger.VanKoll@SWISSCOM.COM [mailto:Holger.VanKoll@SWISSCOM.COM]
                        Sent: Wednesday, November 13, 2002 5:03 AM
                        To: aix-l@Princeton.EDU
                        Subject: Re: performance of san-disk

                        uuuhhh.... quite a bit more. Did you change any default values?

                         

                        Could you be so kind as to post:

                        lsattr -El fcs0

                        lsattr -El fscsi0

                        lsattr -El one_of_your_discs

                        Did you measure this 70 MB/s with my script or in another way?

                         

                        thank you very much!

                         

                                -----Original Message-----
                                From: Willeat, Todd [mailto:TWilleat@MHP.SMHS.COM]
                                Sent: Tuesday, November 12, 2002 11:52 PM
                                To: aix-l@Princeton.EDU
                                Subject: Re: performance of san-disk

                                I've been able to get >70 MB/s between a B80 (w/2 CPUs and 2 6228s) and an HDS 9200.

                                -----Original Message-----
                                From: Holger.VanKoll@SWISSCOM.COM [mailto:Holger.VanKoll@SWISSCOM.COM]
                                Sent: Tuesday, November 12, 2002 3:05 PM
                                To: aix-l@Princeton.EDU
                                Subject: performance of san-disk

                                Hello,

                                I have a p670 connected to HDS (Hitachi 9900) storage with 2 FC adapters.

                                No matter how many disks I access concurrently, I get 34 MB/s maximum.
                                All 4 CPUs are 0% idle while measuring, but depending on the number of disks they spend more time in kernel than in wait.

                                # disks accessed   kernel   wait   disk-busy (topas)   transfer MB/s
                                10-20              8        90     50-70%              25
                                30                 40       60     40-60%              33
                                40                 55       45     60-80%              33
                                50                 70       30     40-60%              34
                                60                 78       22     35-50%              34

                                This looks like my CPU power is the bottleneck, correct?

                                As I get 184 MB/s with the 4 internal disks at 60% kernel / 40% wait, it looks to me as if the fibre-channel driver (kernel time) is using all my CPU. Is this correct?
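
                                (This is how I am reading the numbers - watching the CPU split and the per-disk throughput while the dd jobs from the script below are running:)

                                # us/sy/id/wa columns show user, kernel, idle and I/O-wait time
                                vmstat 5

                                # per-disk throughput while the test runs
                                iostat -d 5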

                                If someone would like to do some measurements to have numbers to compare, here is the script I used:

                                #!/bin/sh
                                #dlmfdrv4 - 61
                                count=2000
                                bsk=256k
                                dev=dlmfdrv
                                #dev=hdisk

                                start=4
                                end=54

                                # start one background dd reader per device $dev$start .. $dev$end
                                i=$start
                                until false
                                do
                                        dd </dev/$dev$i >/dev/null bs=$bsk count=$count 2>/dev/null &
                                        [ $i = $end ] && break
                                        let i=i+1
                                done

                                I would appreciate any other "benchmark" results, especially for the latest SSA technology and non-HDS (e.g. IBM) storage servers.

                                Regards,

                                Holger


