Re: Fastt600 Performance

From: Yard, John (jyard@AIS.UCLA.EDU)
Date: Fri May 14 2004 - 21:01:38 EDT


Thanks, I will do this,
and am upgrading to 5.1 very soon,

JYard
UCLA

-----Original Message-----
From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of
John Jolet
Sent: Friday, May 14, 2004 11:59 AM
To: aix-l@Princeton.EDU
Subject: Re: Fastt600 Performance

I still think there's something else going on here, since I'm seeing
MUCH higher throughput on essentially the same test on a 500, which has
a 1-gig backplane. The part about using the r devices is legitimate, if
only to compare your tests to mine, which DID dd from the r device to
/dev/null. If you have hardware support on the FAStT, you can call IBM;
the support group for that product is very knowledgeable, though they
sometimes have limited AIX knowledge... they have done a good job of
accepting ownership of problem resolution for me. For AIX 5.1,
upgrading to devices.fcp.disk.array.rte.5.1.0.59 fixed some throughput
problems I was having.
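
As a quick check before upgrading (a sketch; the fileset name is the
one mentioned above, and the output format varies a little by AIX
level):

# Show the installed level of the FC disk array driver fileset
lslpp -l devices.fcp.disk.array.rte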

-----Original Message-----
From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU]On Behalf Of
Yard, John
Sent: Friday, May 14, 2004 12:48 PM
To: aix-l@Princeton.EDU
Subject: Fastt600 Performance

All very true, but the point is that the relative performance
of the FAStT600 is poor compared with the 2104 using the
same benchmark; also, multi-disk arrays barely
have higher throughput than single disks. Changing the benchmark
does not change the issue. Typical DB block sizes are 4k, 8k, or 16k,
not 64k, 256k, or 1M,

Thx,

JYard
-----Original Message-----
From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of
James Jackson
Sent: Friday, May 14, 2004 8:17 AM
To: aix-l@Princeton.EDU
Subject: Re: Fastt600 Performance

4096 (4k) is a pretty small block size for sequential I/O. Have you
tried dd's with larger block sizes, e.g., 32k, 64k, 256k? You
can use "k" as a 1024 multiplier in the dd block size spec, i.e.,
bs=4096 is the same as bs=4k, and bs=65536 is the same as bs=64k. John
Jolet had suggested using a 1-meg block size, which you can specify as
bs=1048576 or bs=1024k. You might also want to try running concurrent
dd's to simulate a workload.
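
For instance (a sketch, reusing the logical volume name from the test
further down the thread; count is added just to bound the run time):

# Re-run the read test at several block sizes
for bs in 32k 64k 256k 1024k; do
    echo "bs=$bs"
    timex dd if=/dev/worktest5 of=/dev/null bs=$bs count=10000
done

# Two concurrent dd's to approximate a multi-stream workload
dd if=/dev/worktest5 of=/dev/null bs=256k count=4000 &
dd if=/dev/worktest5 of=/dev/null bs=256k count=4000 &
wait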

It looks like you're trying to perform reads from the logical volume
"/dev/worktest5". You should specify the raw device (/dev/rworktest5)
in the input file spec; i.e., if=/dev/rworktest5, if you're trying to
simulate read operations on database objects built using raw devices.
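
That substitution is a one-character change to the earlier command (the
LV name is taken from the test further down; the raw device node is
assumed to exist):

# Read through the raw (character) device instead of the block device
timex dd if=/dev/rworktest5 of=/dev/null bs=1024k count=1000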

Depending on your workload, you may want to benchmark your I/O with JFS
filesystems, though operations with raw LV's will typically give you a
good indicator of subsystem bandwidth.
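
A minimal sketch of such a filesystem-level test, assuming a JFS
mounted at /testfs with room for a scratch file:

# Write a 1 GB scratch file through the filesystem, then read it back
dd if=/dev/zero of=/testfs/ddtest bs=1024k count=1024
# (unmount and remount /testfs here to defeat file caching on the read)
timex dd if=/testfs/ddtest of=/dev/null bs=1024k
rm /testfs/ddtest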

Regards,

James Jackson

-----Original Message-----
From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of
Yard, John
Sent: Thursday, May 13, 2004 7:43 PM
To: aix-l@Princeton.EDU
Subject: Fastt600 Performance

I created a similar configuration of two 7-disk arrays, each on
a different controller. The segment size is 32k and the stripe
size is 32k. The output of fget_config -v -A is:

[urrept:/] # fget_config -v -A

---dar0---

User array name = ''
dac0 ACTIVE dac1 ACTIVE
dac1-hdisk3 1
dac0-hdisk4 2

My test gave me ~25 MB/sec:

[urrept:/usr/local/bin] # timex dd if=/dev/worktest5 of=/dev/null bs=4096

and iostat :

Disks:    % tm_act      Kbps      tps     Kb_read   Kb_wrtn
hdisk1      0.0          0.0      0.0         0         0
hdisk0      0.0          0.0      0.0         0         0
hdisk2      0.0          0.0      0.0         0         0
cd0         0.0          0.0      0.0         0         0
dac0        0.0      12847.9   3213.0     12880         0
dac1        0.0      12863.8   3216.0     12896         0
hdisk3     52.9      12863.8   3216.0     12896         0
hdisk4     47.9      12847.9   3213.0     12880         0

This seems pretty poor, in that I get almost the same throughput for
a single disk or a RAID 1 with 2 disks.

My FC adapter definition shows a 1M (0x100000) maximum transfer size:

                 Change / Show Characteristics of a FC Adapter

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  FC Adapter                                          fcs0
  Description                                         FC Adapter
  Status                                              Available
  Location                                            14-08
  Maximum number of COMMANDS to queue to the adapter  [200]        +#
  Maximum Transfer Size                               [0x100000]   +
  Preferred AL_PA                                     [0x1]        +
  Apply change to DATABASE only                       no           +
Is the above correct?

Jyard
UCLA

-----Original Message-----
From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of
John Jolet
Sent: Thursday, May 13, 2004 1:31 PM
To: aix-l@Princeton.EDU
Subject: Re: Fastt600 Performance

I'm getting 150M/sec with 4 10-disk raid 5s, so I think there's
something else going on here. Make sure you have the latest rev of the
FC filesets... we had a severe performance problem go away when we
upgraded those. Also make sure your array controllers are at the latest
firmware levels and that all your drives are at the same rev, or as
close as possible. You should be able to get 300M/sec; I got that with
the same test on my 700, which was 8 hdisks, raid5 with 5 disks in each
raid. Also make sure half the arrays are owned by one controller and
half by the other; if it's all going through one controller (FAStT
controller, not HA), performance will suck.
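
To check the adapter firmware and driver levels (a sketch; fcs0 is the
adapter name from the smitty screen above):

# Adapter VPD; the ZB field carries the firmware level
lscfg -vl fcs0

# Installed FC driver filesets and their levels
lslpp -l | grep devices.fcp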

What is the output of fget_config -v -A?

Yard, John wrote:

>I ran a test using a procedure similar to that below.
>I created two raid5 arrays, one with 4 physical disks and
>one with three. These appeared as 2 hdisks on the system.
>I created a logical volume across these two hdisks.
>The PP size for the volume group was 32 meg. I ran
>tests, first using cplv to copy data to the new
>logical volume (write data), then running
>dd if=/dev/lvname of=/dev/null for read data.
>
>I get ~ 20M/sec read or write. My 2104s get the same or better.
>
>I also tried creating a striped lv over the raid arrays,
>as above, with little change. The FAStT600 is 2G fiber
>attached, so I think I should be getting much better
>performance.
>
>I think if I create my raid arrays with 4 disks each, my results will
>be better, but not by much.
>
>Any suggestions very much appreciated,
>
>John Yard
>UCLA
>jyard@ais.ucla.edu
>
>-----Original Message-----
>From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf Of
>John Jolet
>Sent: Thursday, May 13, 2004 10:11 AM
>To: aix-l@Princeton.EDU
>Subject: Re: Fastt600 Performance
>
>We found significant speed differences depending on how we had the lvs
>arranged on the 500/700. We're using it for oracle, so we set up 4
>arrays, two on each controller, and balance the raw lv creation between
>the two controllers. We can get over 150 mb/sec on the 500, and twice
>that (of course) on the 700, which has a 2-gig backplane. How are your
>arrays configured on the 600? You could fake load balancing by creating
>one logical disk on each controller, adding both resulting hdisks to a
>vg, and creating lvs "striped" across the two hdisks....
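
A minimal sketch of that striped-lv layout, assuming hdisk3 and hdisk4
are the two logical disks (one per FAStT controller) and using the
hypothetical names datavg and datalv:

# Volume group over one hdisk from each controller
mkvg -y datavg hdisk3 hdisk4

# LV striped across both hdisks with a 64K stripe size (100 LPs)
mklv -y datalv -S 64K datavg 100 hdisk3 hdisk4

# A database would then use the raw device /dev/rdatalv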
>
>Yard, John wrote:
>
>
>
>>Software support recommended against load balancing,
>>saying it would cause 'thrashing' of LUNs from one controller to
>>another.
>>
>>We are upgrading to 5.1 - not 5.2, for app reasons - in the near
>>future.
>>
>>Will there be some resolution on 5.1?
>>
>>Currently we are getting much better performance out of 10K rpm
>>2104s than on the 15K rpm FAStT600, even though the FAStT600
>>was significantly more expensive,
>>
>>Jyard
>>
>>UCLA
>------------------------------------------------------------------------
>
>>From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On Behalf
>>Of jeff barratt-mccartney
>>Sent: Wednesday, May 12, 2004 5:08 PM
>>To: aix-l@Princeton.EDU
>>Subject: Re: Fastt600 Performance
>>
>>did you try chdev -l dar0 -a loadbalancing=yes
>>
>> -----Original Message-----
>> From: IBM AIX Discussion List [mailto:aix-l@Princeton.EDU] On
>> Behalf Of Yard, John
>> Sent: Wednesday, May 12, 2004 4:30 PM
>> To: aix-l@Princeton.EDU
>> Subject: Fastt600 Performance
>>
>>
>------------------------------------------------------------------------
>
>> I have a new FAStT600 direct attached to 2 H80's
>> (AIX 4.3.3). The FAStT Best Practices Guide
>> talks about balancing LUNs across controllers,
>> but when I did this with a raid device, creating
>> a logical volume across 2 raid devices, tests
>> showed the system reading not from both raid devices
>> at the same time, but in round-robin fashion.
>> Throughput was not terrible, but was not exceptional
>> either.
>>
>> Anyone have any experience or suggestions for
>> tuning the FAStT600, or other FAStT devices?
>>
>> Thanks,
>>
>> John Yard
>> UCLA
>>
>> John Yard
>> UCLA Administrative Information Systems
>> Distributed Platforms
>> Unix/Win2000 Admin/Sybase/Oracle/SqlSrvr/Networking
>> 310-825-1725


