Re: repost - indirect inode limitation???

From: Jean-Christophe Basaille (Jean-Christophe.Basaille@U-BOURGOGNE.FR)
Date: Wed Jul 24 2002 - 04:31:18 EDT


In reply to the message from Taylor, David (DTaylor@WBMI.COM):

We had the same problem, and it was solved by applying the
APAR (IY13763), in fact by upgrading from ML6 to ML8, in August 2001.

We don't schedule any reboots, and everything works fine (on this
particular point :-) ), with no further corruption.

I would suggest upgrading directly to ML8, or higher, if you can.
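
As a side note, you can check whether the APAR (and the ML) is already
on the box with instfix, something like the following (the ML keyword
4330-08_AIX_ML is from memory, adjust it to the level you install):

    # is APAR IY13763 installed?
    instfix -ik IY13763
    # is the 4.3.3 ML8 level complete?
    instfix -ik 4330-08_AIX_ML
    # current base level
    oslevel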

You also have to mount the filesystem with the mind option
(mount -o mind). Just add the option to the fs stanza in
/etc/filesystems, i.e.:
    options = rw,mind
before rebooting after applying the APAR/ML (see the example stanza
below).
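
For illustration only, using the lv and mount point from David's
listing below (the log device here is a guess, use whatever jfs log
that volume group really has), the stanza would look roughly like:

    * personal.big cache filesystem, mounted with the multiple
    * .indirect segments option after APAR IY13763 / ML8
    /tower/home/runtime/is/storage/personal.big:
            dev      = /dev/perscachelvlg
            vfs      = jfs
            log      = /dev/loglv00
            mount    = true
            options  = rw,mind
            account  = false

After the reboot (or a remount), the mind option should show up in the
Options column of the mount output.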

HTH.

> I posted this earlier in the week and received no responses.
>
> I am mostly interested in knowing, if it's possible, how to monitor the
> .indirect cache.
>
> I would also be interested, offline, in how many of you have ~and~ have-not
> heard of or run across this situation (to try to get a feel for how common
> it is).
>
> additional info:
> -----
> 4.3.3 ML 6
> PP size 128 MB
>
> Name:       /dev/perscachelvlg
> Nodename:   --
> Mount Pt:   /tower/home/runtime/is/storage/personal.big
> VFS:        jfs
> Size:       239599616
> Options:    rw
> Auto:       no
> Accounting: no
> (lv size: 239599616, fs size: 239599616, frag size: 4096, nbpi: 32768,
> compress: no, bf: false, ag: 64)
> -----
>
> TIA
> David
>
> The original post:
>
>
> > We recently ran into a situation, while reading the contents of several
> > optical platters back into magnetic cache, where we were unable to create
> > files larger than 32K within this particular file-system: anything
> > larger than 32K was truncated to 32K and an error message was displayed.
> >
> > I was informed by IBM that we had, essentially, exceeded a limitation of
> > the file-system (which I found disturbing, not to mention that the O/S
> > allowed me to keep corrupting the file-system).
> >
> > Even though I plan to apply the recommended fix (which doesn't seem
> > like a fix so much as an increase in the finite number of files >32K that
> > can be created between reboots), does anyone know how I can monitor this
> > to prevent it in the future?
> >
> > I would hate to have to schedule regular reboots for servers with high
> > file-system traffic.
> >
> > Has anyone else run into this?
> >
> >
> > From the APAR: (IY13763)
> > ----------------------------------------------
> > Since the beginning of AIX there has been a single 256MB .indirect
> > virtual memory segment that is used to cache indirect inodes.
> > .
> > When you are copying many files that are greater than 32,768 bytes,
> > this 256MB .indirect segment fills up, and you get the ENOMEM error
> > and truncation of files. A reboot will clear out the cache, but
> > when/if the cache fills up you would see the problem return.
> > .
> > The APAR IY13763 allows you to use 8 256MB .indirect segments
> > for inode caching, IF you use "mount -o mind /filesystemname".
> > ---------------------------------------------
> >
> >
> >
> > David
>

--
Jean-Christophe Basaille
Université de Bourgogne, Centre de Calcul, Dijon, France
Tel: +33 3 80 39 52 05 / Fax: +33 3 80 39 52 69

