[HPADM] Summary: Pinpointing Filesystem Problems

From: Johnson, Craig E (craig.johnson@siemens.com)
Date: Tue Jun 28 2005 - 12:10:13 EDT


The two most thorough answers came from Bill Hassell and Wolf-Dietrich
Schmook, and are attached. Thanks to everyone who took the time to respond.
 
Craig


attached mail follows:


It's called pain-tolerance...
 
Seriously, if you cd into the directory and finding a specific file with ll
takes several minutes, or a masked list such as ll *abc*123 gives the error
LINE TOO LONG, then there are probably too many files. A reasonable limit
for a single directory is fewer than 1000 files. But a directory structure
with hundreds to tens of thousands of directories (and just a few dozen
files in each), which might total millions of files, is not much of a
problem.
 
Except for backup. Thousands to millions of files will slow down every
backup program. Tools like tar and cpio will not be able to keep up with
modern tape drives. fbackup will need the maximum of six reader processes;
otherwise, use a commercial product like Data Protector (aka Omniback).
 
To spot out-of-control directories, use du to find large directories by
size:
 
du -kx /some_directory | sort -rn | more
 
then use ls | wc -l to count files, and ll | sort -rnk5 | more to list
them by size.
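The du and ls | wc steps above can be rolled into one small script that
walks a tree and reports any directory holding more than a given number of
entries. A minimal POSIX-shell sketch (the count_entries name and the 1000
threshold are illustrative, not from the original post):

```shell
# count_entries DIR LIMIT
# Print "<count> <directory>" for every directory under DIR that
# holds more than LIMIT entries, largest first.
count_entries() {
    find "$1" -type d 2>/dev/null | while read -r dir
    do
        n=$(ls "$dir" 2>/dev/null | wc -l)
        n=$((n + 0))                 # normalize wc's leading spaces
        [ "$n" -gt "$2" ] && echo "$n $dir"
    done | sort -rn
}

# Example: count_entries /some_directory 1000
```

Note that this re-runs ls once per directory, so it is itself slow on huge
trees; it is a one-off diagnostic, not something to run from cron.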
 
For long filenames, use find to locate files (and directories too), then
pipe the result to a script that counts the characters in each name.
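One way to sketch that script is with find and awk; the long_names name
and the 64-character threshold in the example are assumptions, not HP-UX
conventions:

```shell
# long_names DIR MAXLEN
# List every file or directory under DIR whose basename is longer
# than MAXLEN characters, prefixed with the name length.
long_names() {
    find "$1" 2>/dev/null |
    awk -F/ -v max="$2" 'length($NF) > max { print length($NF), $0 }' |
    sort -rn
}

# Example: long_names /some_directory 64
```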
 
Weird names (accidents like spaces or control characters) will just have
to be scripted. It turns out that Ignite/UX creates a list of bad filenames
in /tmp called "badlines####", where #### is the process ID. The filenames
are not really bad, just inconvenient to use (they require quoting, need
the ll -b option to be displayed, etc.). Of course, if you share files with
a PC (Samba or CIFS) you'll see a lot of difficult filenames with embedded
spaces.
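A sketch of such a script: flag any basename containing a byte outside the
visible ASCII range, which catches spaces, control characters, and
eight-bit accidents alike. The weird_names name is an assumption, and
names with embedded newlines are beyond this sketch:

```shell
# weird_names DIR
# Print every path under DIR whose basename contains a space,
# a control character, or any byte outside the '!'..'~' range.
weird_names() {
    find "$1" 2>/dev/null |
    LC_ALL=C awk -F/ '$NF ~ /[^!-~]/'
}

# Example: weird_names /some_directory
```

The matching paths can then be inspected with ll -b, which the post
mentions for displaying such names.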
 

--
Bill Hassell
-----Original Message-----
From: hpux-admin-owner@DutchWorks.nl [mailto:hpux-admin-owner@DutchWorks.nl]
On Behalf Of Johnson, Craig E
Sent: Saturday, June 25, 2005 1:56 AM
To: 'hpux-admin@DutchWorks.nl'
Subject: [HPADM] Pinpointing Problem Filesystems
What tools do you use?  How do you spot a subdirectory with too many files
to be effectively managed?  Or lots of files with extremely long names?  Or
any other weirdness?  Thanks again.
 
Craig

attached mail follows:


Hi Craig,

that depends on the file system; HFS behaves differently from VxFS.
The usual tools come to mind:

echo "huge directories are slow:"
find / -type d -size +8192c -exec ls -abdl {} ";"
echo "too many links, limit is 32767 only:"
find / -type d -links +30000 -exec ls -abdl {} ";"
echo "reorganize directory allocation:"
for dir in / /opt /usr /tmp /var ...   # list your mountpoints here
do
    fsadm -F vxfs -s -d -D $dir
done

Check that your filesystems use JFS version 3.3 or 3.5 with disk layout
version 4 or 5 respectively, to enable bigger files, bigger filesystems,
and proper administration. Get rid of HFS (except for /stand) and of JFS
with layout versions less than 4.

Check your kernel's usage of the buffer cache and VxFS high water mark.

Is that what you asked for?

FWIW,
Wodisch

PS: read http://docs.hp.com/en/5991-1227/5991-1227.pdf

--
             ---> Please post QUESTIONS and SUMMARIES only!! <---
        To subscribe/unsubscribe to this list, contact majordomo@dutchworks.nl
       Name: hpux-admin@dutchworks.nl     Owner: owner-hpux-admin@dutchworks.nl
 
 Archives:  ftp.dutchworks.nl:/pub/digests/hpux-admin       (FTP, browse only)
            http://www.dutchworks.nl/htbin/hpsysadmin   (Web, browse & search)


This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 11:02:48 EDT