Re: performance of san-disk

From: Holger.VanKoll@SWISSCOM.COM
Date: Wed Nov 13 2002 - 15:36:37 EST


thats in "Hitachi Freedom Storage(TM) Lightning 9900(TM) IBM ® AIX ®
Configuration Guide" ; filename "Conf_AIX_rd0143.pdf"
on a 7700 I had to set queue_depth to 8 and rw_timeout to 60
 
Quoted from the Hitachi guide mentioned above:
3.1 Changing the Device Parameters

3.1.1 Changing Device Parameters for the 9900

Note: If you are using the Emulex LightPulse™ LP8000 adapter, see section 2.1.2 for adapter-specific instructions.

When the device files are created, the IBM® system sets the device parameters to the system default values. You must change the read/write (r/w) time-out, queue type, and queue depth parameters for each new 9900 device. Table 3.1 specifies the r/w time-out and queue type requirements for the 9900 devices. Table 3.2 specifies the queue depth requirements for the 9900 devices.

AIX® uses the Logical Volume Manager (LVM), accessed from within SMIT, to manage data storage. You can use either SMIT or the AIX® command line to perform this procedure.

Make sure to set the parameters for the HRX devices as well as the SCSI disk devices, and that you use the same settings and device parameters for all 9900 devices.

Table 3.1 R/W Time-Out and Queue Type Requirements

Parameter Name        Default Value   Required Value for 9900
Read/write time-out   30              60
Queue type            none            simple

Table 3.2 Queue Depth Requirements for the 9900 Devices

Parameter                        Requirement
Queue depth per LU               ≤ 32
Queue depth per port (MAXTAGS)   ≤ 256 per port

Note: You can adjust the queue depth for the 9900 devices later as needed (within the specified range) to optimize the I/O performance of the devices.
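
For reference, applying these values from the AIX command line is just a chdev per device. A minimal sketch, assuming hdisk10 is one of the 9900 LUNs and that a queue depth of 8 fits the per-port budget (the device has to be closed when chdev runs, or use -P and pick the change up at the next reboot):

# set the required r/w time-out and queue type, plus a queue depth within the range
chdev -l hdisk10 -a rw_timeout=60 -a q_type=simple -a queue_depth=8

# verify the result
lsattr -El hdisk10 -a rw_timeout -a q_type -a queue_depth

As I read Table 3.2, the per-LU depth times the number of LUNs behind a port should stay within the 256 tags (MAXTAGS), so with roughly 30 LUNs on a port a depth of 8 uses about 240 of the 256.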

 

        -----Original Message-----
        From: Vincent D'Antonio, III [mailto:dantoniov@COMCAST.NET]
        Sent: Wednesday, November 13, 2002 9:17 PM
        To: aix-l@Princeton.EDU
        Subject: Re: performance of san-disk
        
        

        Todd,

        On your disk the queue depth is set to 1; I seem to remember that needs
        to be changed. I can't find the paper I had on it, but I had a 9970 and
        I had it set to 5. I did some experimenting with it and it did change
        my I/O stats. I would check with HDS to see what the math behind this
        parameter's setting is.
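
        In case it helps, checking and changing it is a one-liner; a sketch
        only, where hdisk10 and the value 5 are just placeholders (the disk
        has to be closed for the change to apply, or use -P and reboot):

        lsattr -El hdisk10 -a queue_depth    # show the current value
        chdev -l hdisk10 -a queue_depth=5    # set a new one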

        HTH

        Vince

         

        -----Original Message-----
        From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On
Behalf Of Willeat, Todd
        Sent: Wednesday, November 13, 2002 11:35 AM
        To: aix-l@Princeton.EDU
        Subject: Re: performance of san-disk

         

        I believe the only thing I changed was the "INIT Link Value" on
the fcs devices. I was told by our HDS vendor that this did not affect
performance, only how quickly the HBA would sync up with the switch. I
did not use your script - I have my own that I used. It measures max
performance with filesystem/cache influences. I have used both topas and
nmon to verify disk throughput during writes. I tend to stripe most of
my filesystems, so my numbers were with the filesystem striped across 2
LUNs. With that said, here's the info you wanted as well as the script I
used...
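
        For anyone wanting to try the same change: init_link is an attribute
        of the fcs adapter. A sketch, assuming fcs0 and that the adapter can
        be taken offline (otherwise use -P and reboot):

        lsattr -R -l fcs0 -a init_link        # list the allowed values
        chdev -l fcs0 -a init_link=pt2pt -P   # change only the ODM; takes effect at the next boot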

         

        lsattr -El fcs0:
        bus_intr_lvl  13         Bus interrupt level                                 False
        intr_priority 3          Interrupt priority                                  False
        bus_io_addr   0xfff400   Bus I/O address                                     False
        bus_mem_addr  0xc3238000 Bus memory address                                  False
        lg_term_dma   0x200000   N/A                                                 True
        max_xfer_size 0x100000   Maximum Transfer Size                               True
        num_cmd_elems 200        Maximum number of COMMANDS to queue to the adapter  True
        pref_alpa     0x1        Preferred AL_PA                                     True
        sw_fc_class   3          FC Class for Fabric                                 True
        init_link     pt2pt      INIT Link flags                                     True

         

        lsattr -El fscsi0:
        scsi_id 0x11000 Adapter SCSI ID False
        attach switch How this adapter is CONNECTED False
        sw_fc_class 3 FC Class for Fabric True

         

        lsattr -El hdisk10:
        scsi_id       0x11300            SCSI ID                          False
        lun_id        0x4000000000000    Logical Unit Number ID           False
        location                         Location Label                   True
        ww_name       0x50060e8000037160 FC World Wide Name               False
        pvid          none               Physical volume identifier       False
        queue_depth   1                  Queue DEPTH                      True
        q_type        simple             Queuing TYPE                     True
        q_err         yes                Use QERR bit                     True
        clr_q         no                 Device CLEARS its Queue on error True
        rw_timeout    30                 READ/WRITE time out value        True
        start_timeout 60                 START unit time out value        True
        reassign_to   120                REASSIGN time out value          True

                -----Original Message-----
                From: Holger.VanKoll@SWISSCOM.COM
[mailto:Holger.VanKoll@SWISSCOM.COM]
                Sent: Wednesday, November 13, 2002 5:03 AM
                To: aix-l@Princeton.EDU
                Subject: Re: performance of san-disk

                Uuuhhh... quite a bit more. Did you change any default values?

                 

                Could you be so kind as to post

                lsattr -El fcs0

                lsattr -El fscsi0

                lsattr -El one_of_your_discs

                 

                Did you measure this 70 MB/s with my script or in another way?

                 

                thank you very much!

                 

                        -----Original Message-----
                        From: Willeat, Todd
[mailto:TWilleat@MHP.SMHS.COM]
                        Sent: Tuesday, November 12, 2002 11:52 PM
                        To: aix-l@Princeton.EDU
                        Subject: Re: performance of san-disk

                        I've been able to get >70 MB/s between a B80 (with 2
CPUs and two 6228s) and an HDS 9200.

                                -----Original Message-----
                                From: Holger.VanKoll@SWISSCOM.COM
[mailto:Holger.VanKoll@SWISSCOM.COM]
                                Sent: Tuesday, November 12, 2002 3:05 PM
                                To: aix-l@Princeton.EDU
                                Subject: performance of san-disk

                                Hello,

                                I have a p670 connected to HDS (Hitachi 9900)
                                storage with 2 FC adapters.

                                No matter how many disks I access concurrently,
                                I get 34 MB/s maximum.
                                All 4 CPUs are 0% idle while measuring, but
                                depending on the number of disks they spend
                                more time in kernel than in wait.

                                # disks accessed   kernel   wait   disk-busy (topas)   transfer (MB/s)
                                10-20              8        90     50-70%              25
                                30                 40       60     40-60%              33
                                40                 55       45     60-80%              33
                                50                 70       30     40-60%              34
                                60                 78       22     35-50%              34

                                This looks like my CPU power is the bottleneck, correct?

                                As I get 184 MB/s with the 4 internal disks at
                                60% kernel / 40% wait, it looks to me as if the
                                Fibre Channel driver (kernel time) is using all
                                my CPU. Is this correct?

                                If someone would like to do some measurements
                                to have numbers to compare, here is the script
                                I used:

                                #!/bin/sh
                                # Start one background dd reader per device
                                # ($dev$start .. $dev$end) so all disks are
                                # read sequentially at the same time.
                                #dlmfdrv4 - 61
                                count=2000      # blocks per device (2000 x 256 KB = ~500 MB)
                                bsk=256k        # dd block size
                                dev=dlmfdrv     # device name prefix (HDLM multipath devices)
                                #dev=hdisk      # use plain hdisks instead

                                start=4
                                end=54

                                i=$start
                                until false
                                do
                                        dd </dev/$dev$i >/dev/null bs=$bsk count=$count 2>/dev/null &
                                        [ $i = $end ] && break
                                        let i=i+1
                                done
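
                                While the readers are running, the aggregate
                                throughput can be watched with the usual AIX
                                tools, for example:

                                iostat -d 5    # per-disk KB/s every 5 seconds
                                topas          # interactive; Disk section shows KBPS and %busy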

                                I would appreciate any other "benchmark"
                                results, especially for the latest SSA
                                technology and for non-HDS (e.g. IBM) storage
                                servers.

                                Regards,

                                Holger


