Executing strictly from tmpfs - poor NFS performance

From: Wessell, Tom (Mission Systems) (Thomas.Wessell@ngc.com)
Date: Wed Aug 02 2006 - 16:43:13 EDT


Hi all,

Our system security requirements are such that we have to execute
entirely from RAM, so when the system is powered off, we return to a
known state. The question is: is running Solaris from a tmpfs-mounted
file system a reasonable thing to do, and if so, what sort of
performance should we expect, better or worse than a spinning disk?
If it is not reasonable, can anyone suggest an alternative?

Currently we are seeing very poor nfsd performance in particular. This
system acts primarily as a file server to a number of other VME
chassis, and the transfer of files to those chassis is nowhere near
meeting our performance requirements.

Our configuration is:

Hardware: VME based Themis UltraSparc IIe with 4GB RAM
Software: Solaris 9 4/03
Disk: 9GB M-Systems 3.5" Ultra Wide SCSI Flash - Write Protected

Basically we install and configure the OS as required, then
write-protect the flash disk. In addition to setting jumpers on the
disk, the following line in /etc/rcS.d/S40standardmounts.sh is
commented out:
  # /sbin/mount -m -o $mntops $mountp

Next we create the script /etc/rcS.d/S00WRITEprotected.sh, which at
boot time copies what we think we need from / into tmpfs and then
lofs-mounts those copies back over /. The rest of / is mounted
read-only. S40standardmounts.sh still takes care of /proc, /etc/mnttab,
and /dev/fd.

The S00WRITEprotected.sh script is as follows:
--------------------------------------------
# Begin
# (the relative tar paths below assume the working directory is /)
mount -n /tmp

# Copy each writable directory into tmpfs, then lofs-mount the copy
# back over the original path.
tar cpf - var | (cd /tmp; tar xpf -)
mount -n -F lofs /tmp/var /var

tar cpf - etc | (cd /tmp; tar xpf -)
mount -n -F lofs /tmp/etc /etc

tar cpf - dev | (cd /tmp; tar xpf -)
mount -n -F lofs /tmp/dev /dev

tar cpf - devices | (cd /tmp; tar xpf -)
mknod /tmp/devices/pci@1f,0/pci@2/scsi@4/sd@0,0:a b 32 24
mount -n -F lofs /tmp/devices /devices

# Our directory for application data which NFS clients download from
tar cpf - foo | (cd /tmp; tar xpf -)
mount -n -F lofs /tmp/foo /foo

cp /.rhosts /tmp/.rhosts
mount -n -F lofs /tmp/.rhosts /.rhosts
# End
-------------------------------------------------

The /etc/vfstab file has entries for: /dev/fd, /proc, /tmp on tmpfs, and
/.

When fully populated with our operational software, we have approx 2GB
RAM free.

Some general findings in regard to performance:

- The less memory tmpfs has available, the poorer the performance.
- During NFS client access to data on tmpfs, such as performing an ls
-lRt, almost all CPU time is spent in the kernel, specifically in the
nfsd process.
- When another chassis in the system boots and downloads its OS and
operational software, nfsd is pretty much pegged, and the downloads take
at least three times as long as they do from a system running from a
spinning disk (not from tmpfs).
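One way to keep an eye on the first finding above is a small shell check
of free tmpfs space at run time. This is only a rough sketch: the 512MB
threshold is arbitrary, and it assumes a df whose second output line
carries the available-KB figure in the fourth column (true for df -k run
against a single file system):

```shell
#!/bin/sh
# Hypothetical helper: warn when free space on a tmpfs mount falls below
# a threshold, since tmpfs pages compete with the rest of the system
# (including nfsd) for physical memory.
TMPFS_MOUNT=${1:-/tmp}
MIN_FREE_KB=${2:-524288}   # 512 MB; pick a threshold to suit the box

# df -k prints a header line, then one line per file system;
# column 4 of the data line is the available space in KB.
free_kb=`df -k "$TMPFS_MOUNT" | awk 'NR==2 { print $4 }'`

if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
    echo "WARNING: only ${free_kb} KB free on ${TMPFS_MOUNT}"
else
    echo "OK: ${free_kb} KB free on ${TMPFS_MOUNT}"
fi
```

Run periodically (e.g. from cron), this would flag the point where tmpfs
consumption starts to starve the NFS server of memory.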

We are in the process of applying all the recommended Solaris 9 patches
as of 08/01 to determine if any NFS bug fixes improve performance.

Does anyone have any suggestions as to how to improve performance for
this configuration, or alternate configurations, given that we must
execute entirely from RAM?

Thanks in advance!
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers



This archive was generated by hypermail 2.1.7 : Wed Apr 09 2008 - 23:40:31 EDT