IOWAIT on a slice with no filesystem?

From: LA/ETH (ors.tiszay@ericsson.com)
Date: Thu Jan 13 2005 - 08:29:07 EST


Hi,

I have a Netra T1405 with 4 CPUs, 4 GB of RAM and 4 GB of swap running Solaris 8 (kernel Generic_108528-18). The machine runs Oracle OID (Oracle 9.0.1.4.0, OID 3.0.1.1.0), which stores its data on a T3. Oracle is under Veritas Cluster (3.4) control.
From time to time, with no recognisable time or usage pattern, the machine experiences very high iowait, which can only be stopped by restarting the Oracle resource group. What makes it interesting for me is that iostat (see printout below) seems to indicate a high disk busy rate on a slice of the mirrored root disks which does not even have a file system on it, and it is not the swap slice either.
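For reference, a quick way to double-check that nothing is mounted on or swapping to that slice (plain Solaris commands; the device names are just the ones from the iostat printout below):

   # mount -p | grep c0t0d0s4       <- nothing is mounted from the slice itself
   # swap -l                        <- lists the active swap devices
   # prtvtoc /dev/rdsk/c0t0d0s2     <- prints the VTOC with the slice tags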

Am I missing or overlooking something? Any ideas?

thx
Ors

======================================================

> df -k
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/vx/dsk/rootvol  1523455  322915  1139602    23%    /
/dev/vx/dsk/usr      5161437  823834  4285989    17%    /usr
/proc                      0       0        0     0%    /proc
fd                         0       0        0     0%    /dev/fd
mnttab                     0       0        0     0%    /etc/mnttab
/dev/vx/dsk/var      2053605  982014  1009983    50%    /var
swap                 4750296      16  4750280     1%    /var/run
swap                 4805792   55512  4750280     2%    /tmp
/dev/vx/dsk/opt     22099030 8076269 13801771    37%    /opt
/dev/vx/dsk/Oracle-T3/Oracle-DS2
                    62914560 28891548 31896688    48%   /opt/oradata01

======================================================

# iostat -cnpxz 5

     cpu
 us sy wt id
 25  9 25 41
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   66.6   27.4  532.2  478.4  0.0  1.4    0.0   14.7   0  66 c0t0d0
   66.6   27.4  532.2  478.4  0.0  1.4    0.0   14.7   0  66 c0t0d0s4
   87.2   27.4  694.0  481.6  0.0  1.2    0.0   10.6   0  60 c0t1d0
   87.2   27.4  694.0  481.6  0.0  1.2    0.0   10.6   0  60 c0t1d0s4
    9.2    3.0  160.0   24.0  0.0  0.1    0.0    7.3   0   6 c3t4d1
    9.2    3.0  160.0   24.0  0.0  0.1    0.0    7.3   0   6 c3t4d1s4
     cpu
 us sy wt id
 36 11 31 22
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   77.2   30.7  788.6  640.0  0.0  2.9    0.1   26.6   0  77 c0t0d0
   77.2   30.7  788.6  640.0  0.0  2.9    0.1   26.6   0  77 c0t0d0s4
   90.1   30.9  784.4  636.8  0.0  1.7    0.0   13.9   0  65 c0t1d0
   90.1   30.9  784.4  636.8  0.0  1.7    0.0   13.9   0  65 c0t1d0s4
    8.4    0.6  114.9    4.8  0.0  0.1    0.0    6.8   0   5 c3t4d1
    8.4    0.6  114.9    4.8  0.0  0.1    0.0    6.8   0   5 c3t4d1s4
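
(To watch just the suspect slices, the same output can be filtered, e.g.

   # iostat -xnpz 5 | egrep 'device|c0t[01]d0s4'

where -z drops all-zero lines so only the active devices show up.)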

======================================================
# vmstat 5

 procs     memory           page                         disk          faults         cpu
 r b w    swap   free  re   mf   pi  po   fr    de   sr  s0  s1 s6 sd   in    sy   cs us sy id
 1 0 0 4287512 495112 612  232  171  30  136 14504  164  10   9  0  0  943   812  604 22 13 65
 1 4 0 3024872  59976 289 1983 1864 416 1424 16112 2429 125 175  0  0 2259 10915 5235 30 10 60
 1 3 0 3027312  62496 277 1763 1504 720 1648 14504 2420 141 125  0  0 2288 11200 5192 42 13 45
 0 2 0 3027272  61696 199 1518 1776 536 1168 13056 2102 129 127  0  0 2015  7219 4130 22  7 71
 0 1 0 3027272  61472 193 1475 1152 744 1328 11752 1961 121 107  0  0 1892  6680 4151 18  7 74
 0 1 0 3027128  60792 421 3506 1272 616 1136 10584 1937 113 126  0  0 1996 10958 4812 35 12 53
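
(vmstat is system-wide only; something like prstat, which ships with Solaris 8, might show which processes are actually blocked or paging, e.g.

   # prstat -s cpu 5     <- top-like list sorted by CPU usage
   # prstat -m 5         <- per-process microstate accounting, if the installed prstat supports -m

The -m view with its LAT/SLP columns is only a suggestion; see prstat(1M) on the box.)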

=======================================================
Partition layout for c0t0d0:

partition> p
Current partition table (original):
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm   20624 - 21712        1.50GB    (1089/0/0)   3146121
  1       swap    wu   21713 - 24615        4.00GB    (2903/0/0)   8386767
  2     backup    wu       0 - 24619       33.92GB    (24620/0/0) 71127180
  3          -    wu       0 -     0        1.41MB    (1/0/0)         2889
  4          -    wu       1 - 24619       33.91GB    (24619/0/0) 71124291
  5 unassigned    wm    5083 - 20623       21.41GB    (15541/0/0) 44897949
  6        var    wm       1 -  1452        2.00GB    (1452/0/0)   4194828
  7        usr    wm    1453 -  5082        5.00GB    (3630/0/0)  10487070
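
Note: slice 4 spans cylinders 1 - 24619, i.e. everything except cylinder 0, which is what the public region of a VxVM-encapsulated root disk usually looks like (the small slice 3 on cylinder 0 would be the private region). If that is the case here, iostat -p charges all I/O to rootvol, usr, var and opt against s4 even though the slice carries no file system of its own. One way to check which VxVM objects sit on the slice, assuming the default rootdg disk group:

   # vxprint -ht -g rootdg     <- the sd (subdisk) lines give offset and length on each disk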
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers


