SUMMARY: netstat -p ip output

From: Rich Glazier (rglazier2002@yahoo.com)
Date: Tue Jan 27 2004 - 13:19:41 EST


Thanks to everyone's detailed responses and
follow-ups. I think the real issue is that I had the
default read and write buffer size for NFS (49,152
bytes), which had to be chopped up into multiple
datagrams when sent over the wire and reassembled on
the other end, because the maximum transmission unit
(MTU) on that Ethernet segment was 1500.

I can't really raise the MTU on this Ethernet
network, so I changed the NFS mount rsize and wsize
to 1024 so that the application's UDP packets fit
within the MTU. This solved the fragmentation
"problem" (which I'm starting to think isn't a
problem at all), but overall performance was worse.
netstat -i showed fewer packets per second. I guess
it wasn't filling the MTU, and therefore wasn't
filling the pipe. I tried to set a larger
rsize/wsize for NFS to see what effect that would
have, but apparently 49152 is the largest you can do.
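As a rough sanity check on those fragment counts, the
arithmetic can be sketched as below (this assumes a
20-byte IP header with no options, leaving 1480 bytes
of payload per Ethernet frame):

```shell
#!/bin/sh
# Estimate IP fragments per NFS-over-UDP request for a
# given r/w buffer size. Assumes a 1500-byte Ethernet
# MTU and a 20-byte IP header (no IP options), i.e.
# 1480 bytes of payload per fragment.
MTU=1500
IP_HDR=20
PAYLOAD=$((MTU - IP_HDR))

for RWSIZE in 1024 8192 49152; do
    # Round up: fragments needed to carry one buffer
    FRAGS=$(( (RWSIZE + PAYLOAD - 1) / PAYLOAD ))
    echo "rsize/wsize=$RWSIZE -> $FRAGS fragments per request"
done
```

At rsize=1024 each request fits in one frame (no
fragmentation), at 8192 it takes 6 fragments, and at
the 49152 default it takes 34 - which is why the
fragment counter climbs so fast during a big dump.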

I liked Chris's ideas for alternate remote dumps,
which would probably work for me on other
filesystems, but not on this one, where I needed
root permission to copy many of the files.

responses follow....

/////////////////////////

Dan Goetzman wrote:

Rich,

 I suspect others have said the same, but just to be
sure...

 You mentioned NFS was the protocol being used for the
network transfer, right? And, standard NFS settings
(UDP and the r/w size?). So, my question is what IS
the r/w size in use for the NFS mount? (mount -l for
Tru64 should tell you and nfsstat -m for many other
unix variants). I suspect the r/w size is at LEAST
8192 and if using a Tru64 NFS client and server it
should be 49152. That is the NFS buffer size, and
with an MTU of 1500 you can see that the NFS data
buffer will have to be broken into many packets to
stay under the MTU size. This is how NFS over UDP
works: it creates IP fragment packets, and that is
what the counter is counting.

 This would be normal. You can create fewer fragments
per NFS I/O request if the NFS r/w size is smaller
(even as small as 1024 bytes in some cases). Again,
look at the actual mount via "mount -l" or
"nfsstat -m" (or whatever your client OS uses to
display NFS mount info).
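For example, the r/w size can be forced down at mount
time. A sketch, reusing the host and path from Rich's
setup (the exact option syntax may vary by client OS;
check your mount(8) man page):

```shell
# Remount with a 1 KB read/write size so each NFS
# request fits in a single 1500-byte Ethernet frame
# (no IP fragmentation needed).
umount /prod_on_mars
mount -t nfs -o rsize=1024,wsize=1024 172.16.0.1:/prod /prod_on_mars

# Verify the options actually in effect:
mount -l          # on Tru64
nfsstat -m        # on many other unix variants
```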

I hope this helps... ( and is as clear as mud! ;-)

Dan Goetzman
---------------------------------------------------

Chris Ruhnke wrote:

Have you tried running the vdump directly over the
network?

On MARS:

# vdump -0f - /prod | rsh jupiter 'vrestore -xf - -D /prod'

or

You could do almost the same thing with "tar":

# tar -cf - /prod | rsh jupiter 'cd /prod; tar -xf -'
-----------------------------------------------------

Kris A. Smith wrote:

Rich,

        It is possible that this server has not been
rebooted for a long time, or at least longer than the
other servers. It is also possible that there is a
TPDU setting (Transport Protocol Data Unit, or MTU)
that dictates the maximum size of a message (or
fragment). If this setting is too low on some other
machine, the packets that this server receives from
that machine will be fragmented - broken up into
smaller, more manageable chunks. One of the reasons
that there is so little breakdown on the netstat
output is that it is completely relative and
subjective. These are typically running counts since
the last reboot or restart of the inetd daemon.
Therefore, a computer that has just been rebooted can
have a small number of fragments simply because it
has not been up long enough to see that many. This is
also true for CRC errors, collisions, jabbers,
dropped packets, etc. Since this is a fragments
RECEIVED count, I would expect that the cause of the
count is a network setting that resides on some other
machine, maybe even a machine out on the WWW. One
potential source would be any old NIC cards that are
only capable of 10 Mbit/s and/or half-duplex. When
one of these old NICs is connected to a 10/100 Mbit
LAN, with some machines running in full duplex, the
10 Mbit half-duplex messages typically must be broken
into smaller packets. When a hub or switch says that
it supports half-duplex and full-duplex, it typically
does not do a very good job of supporting both at the
same time (unless you have one of the higher-end
devices - i.e. $$$$). I hope this is what you were
looking for, or that it at least helps to clarify
something. Best of luck!

Rich,

        I just looked at your original message again.
You will want to look at the capability of the
devices on your network and make a compromise
judgment call based on the device that is the most
limiting (least capable). The typical default MTU for
a 10/100 IP network is 1500. Using this setting, I
get the following stats on a server that was booted
5 days ago:

# netstat -p ip
ip:
        4697343 total packets received
        0 bad header checksums
        0 with size smaller than minimum
        0 with data size < data length
        0 with header length < data size
        0 with data length < header length
        20 fragments received
        0 fragments dropped (dup or out of space)
        0 fragments dropped after timeout
        0 packets forwarded
        0 packets not forwardable
        0 packets denied access
        0 redirects sent
        0 packets with unknown or unsupported protocol
        4697327 packets consumed here
        4557347 total packets generated here
        1 lost packet due to resource problems
        4 total packets reassembled ok
        13 output packets fragmented ok
        74 output fragments created
        0 packets with special flags set
#

        Keep in mind that sometimes messages created
at the application layer are simply too big for the
network layer, and therefore must be fragmented for
streaming. Once again, I hope this helps!
------------------------------------------------------

Robert Romani wrote:

Rich,

You may find this useful

http://www.cs.unm.edu/~maccabe/SSL/frag/FragPaper1/Fragmentation.html

-----------------------------------------------
--- Rich Glazier <rglazier2002@yahoo.com> wrote:
> I zeroed out all the netstat counters and nfsstat
> counters, and then did my vdump | vrestore between
> the
> two nodes (from Mars to Jupiter). It's the only
> network traffic on Jupiter. I have an AdvFS
> filesystem from Mars nfs mounted on Jupiter
> (standard-
> udp, no rsize or wsize settings). I then vdump |
> vrestore the nfs mounted filesystem on Jupiter to a
> local AdvFS filesystem on Jupiter. Maybe my
> fragmentation is the AdvFS to NFS to AdvFS pipe.
> I'm
> just trying to tune this dump to get it as fast as
> possible. Maybe it's as fast as it can be. The
> storage is a 0+1 lun using 10k drives on a dual
> controller MSA1000. Here is the output from the
> systems. Again, it's the "fragments recieved" on
> the
> 'netstat -p ip' output that I'm curious about.
>
> Thanks for the help.
>
>
> Mars ---> Jupiter
>
> mars# hwmgr get att -cat network -a name -a model -a
> MTU_size -a media_speed -a full_duplex
> 87:
> name = ee1
> model = Intel 82559
> MTU_size = 1500
> media_speed = 100
> full_duplex = 1
>
> jupiter# hwmgr get att -cat network -a name -a model
> -a MTU_size -a media_speed -a full_duplex
> 59:
> name = ee0
> model = Intel 82559
> MTU_size = 1500
> media_speed = 100
> full_duplex = 1
>
> jupiter# df -k
> Filesystem 1024-blocks Used Available
> Capacity Mounted on
> root_domain#root 7643312 135482 7501592
>
> 2% /
> /proc 0 0 0
>
> 100% /proc
> usr_domain#usr 8267016 2771399 5448176
>
> 34% /usr
> var_domain#var 50849968 148455 50693488
>
> 1% /var
> DR#prod 142255568 37082281 65829944
>
> 37% /prod
> DR#prod2 142255568 39150561 65829944
>
> 38% /prod2
> 172.16.0.1:/prod 68157440 35634948 32312536
>
> 53% /prod_on_mars
>
> jupiter# time vdump -0f - /prod_on_mars | vrestore
> -xf
> - -D /prod
> .
> .
> .
> real 2h6m37.96s
> user 2m31.45s
> sys 17m49.28s
>
> jupiter# netstat -p ip
> ip: Last Zeroed: Fri Jan
> 23
> 16:19:41 2004
> 27229952 total packets received
> 0 bad header checksums
> 0 with size smaller than minimum
> 0 with data size < data length
> 0 with header length < data size
> 0 with data length < header length
> 25121406 fragments received
> 0 fragments dropped (duplicate or out of
> space)
> 1 fragment dropped after timeout
> 0 packets forwarded
> 0 packets not forwardable
> 0 packets denied access
> 0 redirects sent
> 0 packets with unknown or unsupported
> protocol
> 3033828 packets consumed here
> 3104604 total packets generated here
> 0 lost packets due to resource problems
> 925282 total packets reassembled ok
> 0 output packets fragmented ok
> 0 output fragments created
> 0 packets with special flags set
>
> jupiter# netstat -p tcp
> tcp: Last Zeroed: Fri Jan
> 23
> 16:20:46 2004
> 623233 packets sent
> 240192 data packets (4506006 bytes)
> 0 data packets (0 bytes)
> retransmitted
> 248738 ack-only packets (4141
> delayed)
> 0 URG only packets
> 0 window probe packets
> 0 window update packets
> 134303 control packets
> 540130 packets received
> 131663 acks (for 790586 bytes)
> 120114 duplicate acks
> 0 acks for unsent data
> 242625 packets (123022 bytes)
> received
> in-sequence
> 40766 completely duplicate packets
> (0
> bytes)
> 0 packets with some duplicate data
> (0
> bytes duped)
> 2491 out-of-order packets (0 bytes)
> 0 packets (0 bytes) of data after
> window
> 0 window probes
> 4 window update packets
> 0 packets received after close
> 0 discarded for bad checksums
> 0 discarded for bad header offset
> fields
> 0 discarded because packet was too
> short
> 122663 connection requests
> 2492 connection accepts
> 122299 connections established (including
> accepts)
> 134339 connections closed (including 116370
> drops)
> 2856 embryonic connections dropped
> 251470 segments updated rtt (of 370696
> attempts)
> 5712 retransmit timeouts
> 0 connections dropped by rexmit
> timeout
> 0 persist timeouts
> 3117 keepalive timeouts
> 63 keepalive probes sent
> 2856 connections dropped by
> keepalive
>
> jupiter# netstat -p udp
> udp: Last Zeroed: Fri Jan
> 23
> 16:20:41 2004
> 2482715 packets sent
> 2486560 packets received
> 0 incomplete headers
> 0 bad data length fields
> 0 bad checksums
> 0 full sockets
> 4263 for no port (4263 broadcasts, 0
> multicasts)
> 0 input packets missed pcb cache
>
>
> jupiter# nfsstat -c
>
> Client rpc:
> tcp: calls badxids badverfs timeouts
> newcreds
> 0 0 0 0
>
> 0
> creates connects badconns inputs
>
> avails interrupts
> 0 0 0 0
>
> 0 0
> udp: calls badxids badverfs timeouts
> newcreds retrans
> 5326646 19 0 853
>
> 0 853
> badcalls timers waits
> 0 1283 0
>
> Client nfs:
> calls retrans badcalls nclget
>
=== message truncated ===




This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 10:49:49 EDT