Limiting the number of superblock duplicates on newfs of a huge filesystem

From: Chris Ruhnke (ruhnke@us.ibm.com)
Date: Wed Sep 14 2005 - 14:20:29 EDT


I don't have to newfs filesystems very often, so I don't run into this
problem much.

I have just built a 500+ GByte RAID-5 user data filesystem. I am running
"newfs" against it, and it has been going for almost an hour now. Because
of the amount of data space available, newfs is creating a gazillion
superblock duplicates. Now, I am quite happy to have superblock
duplicates in the event of loss of the primary superblock. But this is a
failure that rarely happens any more. Usually the entire filesystem gets
corrupted beyond redemption. I don't mind giving up some space to a few
-- even a few hundred -- superblock clones. But there comes a point when
duplicate superblocks are just a waste of space and a source of
fragmentation in the data area.
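
(For what it's worth, the dry-run flag at least lets me see the layout
before committing an hour to it. A sketch, assuming newfs's -N option
behaves as the man page describes; the device path below is made up:)

    # Print the parameters newfs would use -- including the list of
    # backup superblock locations -- without writing anything to disk.
    # /dev/rdsk/c1t0d0s6 is a hypothetical slice; substitute your own.
    newfs -N /dev/rdsk/c1t0d0s6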

Does anyone have any suggestions about how to limit the number of
superblock duplicates that get created on a UFS filesystem? I am
restricted by the customer to using UFS and do not have the luxury of
looking into Veritas or any other "smart" volume/filesystem manager. I
haven't found anything promising in the man pages.
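
The closest thing I can find is the -c (cylinders per cylinder group)
option: each cylinder group carries exactly one backup superblock, so
fewer, larger groups ought to mean fewer copies. A sketch of what I
mean, assuming mkfs_ufs will accept a larger value for this geometry
(device path is again made up):

    # Fewer, larger cylinder groups => fewer backup superblocks,
    # provided the value is legal for this disk's geometry.
    newfs -c 64 /dev/rdsk/c1t0d0s6

But it is not clear from the man page that this scales far enough on a
500+ GByte slice.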

 
--CHRis

Chris H. Ruhnke
Technical Services Professional
IBM Global Services
Dallas, TX

O'Toole's Law: Murphy is an optimist.