Deleting Millions of Files

From: Bryan Pepin (bpepin@emc.com)
Date: Thu Jan 22 2004 - 14:14:29 EST


Hello,

We had an application "loose it's brain" and create millions of tiny
files all in 1 directory on a UFS filesystem. We have since fixed the
application, but now we are trying to clean up the directory because it
used up all the available inodes on the ufs filesystem.
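
For reference, the inode exhaustion itself is easy to confirm with df.
A minimal check, assuming a Solaris df and a hypothetical mount point
of /export/data:

    # Report used/free inode counts for the filesystem (Solaris syntax);
    # on GNU/Linux the equivalent is "df -i /export/data".
    df -o i /export/data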

So we have tried many different techniques for removing the files, but
every one of them is taking forever.

Here is a sample of what we tried (a batched alternative is sketched
just after the list):

1) rm * --> the shell could not handle that expansion
2) cd to the upper directory, and rm -rf directory_of_all_files -->
this is taking forever... on one server it has been running for around
12 hours and is only halfway done...
3) create a for loop from the output of an ls, and remove each file
individually --> same result as above...
4) create a for loop from the output of an ls, and remove each file
individually in the background --> this caused severe performance
issues on the box and had to be killed, because it spun off so many
rm's so quickly that they were all hanging around waiting...
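
One approach that sidesteps both the shell-expansion limit of (1) and
the one-fork-per-file cost of (3) and (4) is to stream the names from
find into xargs, which packs them into as few rm invocations as the
argument-list limit allows. A minimal sketch, assuming the filenames
contain no whitespace or newlines (stock Solaris find and xargs lack
the GNU -print0/-0 null-separation options):

    cd directory_of_all_files
    # find emits one pathname per line; xargs bundles thousands of
    # them into each rm invocation instead of forking rm per file.
    find . -type f -print | xargs rm -f

Note this does not make each individual unlink any cheaper (UFS still
scans the huge directory linearly for every one), but it does remove
all of the per-file fork/exec overhead.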

We cannot just nuke the filesystem, since the other directories on it
have valuable information.

Has anyone out there come up with a better way to remove this many
files? There is no disk/CPU/memory contention on the box at all, except
for when we did the for loop and sent all the rm's into the background.
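
One more thing worth knowing, if my understanding of UFS is right: a
UFS directory file does not shrink when entries are removed, so even
after every file is gone, the directory itself stays huge and future
lookups in it stay slow. A commonly suggested workaround is to move the
bloated directory aside, recreate it fresh for the application, and
delete the old one in the background at reduced priority. The paths
below are hypothetical:

    # Give the application a fresh, compact directory immediately...
    mv /export/data/spool /export/data/spool.old
    mkdir /export/data/spool
    # ...then reclaim the inodes in the background at lower priority.
    nice rm -rf /export/data/spool.old &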

Thanks in advance, and I will summarize.

-Bryan Pepin


