SUMMARY: force it to use more memory

From: Grzegorz Bakalarski (G.Bakalarski@icm.edu.pl)
Date: Fri Nov 14 2003 - 08:24:11 EST


Hello All

Thanks to all who responded. All replies were valuable!
Here is a summary of my query about memory usage ...

In short:
Solaris 9 reports cache memory as "free" (the opposite of earlier versions
of Solaris). To check this more closely one can use:

mdb -k
> ::memstat

[here is output from my machine]
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      69364               541    5%
Anon                        98258               767    7%
Exec and libs                4935                38    0%
Page cache                 104417               815    7%
Free (cachelist)           891091              6961   59%
Free (freelist)            339850              2655   23%

[remember ctrl-\ to quit]
(thanks to Casper for great input)
This shows that much more memory is actually in use, but there are still
a few GBytes really free ...
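
As a side note, the same dcmd can be run non-interactively if you don't
want an mdb session at all (plain mdb usage, nothing site-specific):

# echo ::memstat | mdb -k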

Because I use third-party software, and it is not from any BIG DB family
(like Oracle, DB2 or so), I cannot configure it in the usual way.
I thought I could rely on the OS. Also I do not use forcedirectio ...
I use only logging and noatime ...

I do use Apache, and thanks to Christophe Dupre for the suggestion on using
mod_perl. I'll pay attention to this issue soon. It is not so easy,
because of some perl-related problems (version, non-standard modules
and also execution mode), but generally the software producer has this option ...

Again THANKS and kind regards,

GB

P.S. FULL SUMMARY FOLLOWS

================Original Query=========================
I guess this is a tuning question ...

machine: V880, 6x 900Mhz UltraSparcIII, 12GB Memory, 2TB+ diskspace
OS: Solaris 9 + recent patchcluster
OBP: recent (i.e. Jun/2003)

The machine acts as a web (cgi) and dbs (backend) server (all in one).
Most activity is generated by cgi (perl) scripts which run "real"
programs for searching db indexes. The "mode" of action is that a script
usually finishes in 2-5 seconds (typical cgi environment). The indexes of
a few different databases take up some 200GB (80GB+80GB+40GB)... Other disk
content is accessed rarely. NB the system is mostly read-only, i.e. it writes
only logs (a small amount of data) ... And there is almost no wait I/O ...

During daytime the server is quite busy (some 90%+ of CPU is busy), with some
30+ concurrent users. But the system uses only a small amount of memory ...
Usually I see (top, vmstat, sar) only 2-3GB of memory in use, occasionally
about 4GB; the rest of the memory is unused (useless?). Less than 1GB of swap
(which is on a separate slice, i.e. not on tmpfs) is used (usually
200-300MB).

So what should I do (changes in /etc/system, other?) so that my machine
uses more memory for buffers, temporary writes, keeping part of the indexes
in memory ... ?
Could such changes improve my services (e.g. response time, max number
of concurrent users etc.)?

What induced this query was the following observation: I have a regular
/tmp (on a separate slice) and also /vtmp (aka virtual tmp) which is of
tmpfs type (i.e. on swap/memory - it is limited to 6GB by a mount (vfstab)
option; see the entry below). Sometimes I copy CDs to /vtmp (in order to make
updates of one db faster). However, when I do the installation I can see with
iostat that the swap slice is heavily used during the installation. I.e. I copy
some 1GB to /vtmp (tmpfs), then after some 10-20 minutes I run an install
script which reads the data from the temporary space and puts it in its final
place (another disk). There is plenty of free memory, but the system reads the
data from swap (for sure) instead of keeping it in memory ....
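
For reference, the /vtmp filesystem is capped in /etc/vfstab roughly like
this (the mount point and the 6GB size are of course specific to my setup):

swap  -  /vtmp  tmpfs  -  yes  size=6144m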

I'm afraid the default system settings are meant for small workstations which
typically have 256MB-2GB, and for my somewhat larger system (though of course
not as large as a huge E15K or other high-end box) such settings don't allow
for full utilization of resources ...
Thanks in advance ...

GB

======================= R E P L I E S =======================

=============From: Christophe Dupre ============================
You didn't give much in terms of the actual software used, and much of the
tuning is application-dependent. So here are a few shots in the dark.
- Assuming your web server is Apache, you should make sure that you're
using mod_perl with Apache::Registry. This will do a few things for you:
        - link the perl interpreter with the web server, so it's always
          in memory.
        - keep interpreted perl scripts in memory so they don't have to
          be read/parsed/interpreted every time
This one thing can give you a boost in performance and allow you to serve
more concurrent users.
Keeping the database index in memory is a configuration specific to the
database server, but there must be a configuration file where you can
specify buffer sizes.
Is database access done from the Perl scripts, or from other applications?
If from perl, there's a mod_perl module out there that enables persistent
database connections, which can dramatically increase throughput, as the
service time for each cgi request is reduced a lot.
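
For example, a minimal httpd.conf sketch of that setup (mod_perl 1.x style;
the cgi-bin path is just a placeholder, and Apache::DBI is the usual module
for the persistent connections mentioned above):

# load Apache::DBI before scripts use DBI, so connections are reused
PerlModule Apache::DBI
# run existing CGI scripts under the embedded perl interpreter
Alias /cgi-bin/ /usr/local/apache/cgi-bin/
<Location /cgi-bin>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options +ExecCGI
    PerlSendHeader On
</Location>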
                                                                                

=============From: Casper Dik 1 ======================

In Solaris 8 and before, free memory would usually be very low because
all memory used as cache was not marked free.

In Solaris 9, cached filesystem data is generally marked as "free for the
taking"; so it appears as "free" yet the kernel still knows what is on
the pages and when the content is accessed again, it's used without
more I/O.

The data is originally indeed in memory but is probably evicted in favour
of the database pages that are still being used; so the memory which
appears "free" will hold the data in /vtmp for a while, but after a lot
more database queries the data is sent to swap and replaced by cached
database data.

(If possible, you could run the same test when no database work is being
done, in which case no swap I/O would happen.)

The system is largely self-tuning; previously (S8) people complained about
where all the memory had gone when a large system reported only a few
MB free; and now you complain that there's too much memory free.
So it seems that we really need something to quantify it better.

Have you tried the mdb dcmd "::memstat"?

As an example, a 1GB Ultra-60 reporting:

        Memory: 1024M real, 607M free, 207M swap in use, 3551M swap free

(607MB free)

# mdb -k
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      21232               165   17%
Anon                        18164               141   14%
Exec and libs                2468                19    2%
Page cache                   5993                46    5%
Free (cachelist)            72339               565   58%
Free (freelist)              5471                42    4%

Total                      125667               981

So while it appears that nearly 60% of memory is free, in actual fact 58% of
the memory is cached file data; only 4% of the memory is completely unused.

Casper

===================From: Casper Dik 2 ===================

>> Have you tried the mdb dcmd "::memstat"?
>No here is current output
>> ::memstat
>Page Summary                Pages                MB  %Tot
>------------     ----------------  ----------------  ----
>Kernel                      61984               484    4%
>Anon                        40921               319    3%
>Exec and libs                4718                36    0%
>Page cache                 103230               806    7%
>Free (cachelist)           755615              5903   50%
>Free (freelist)            541447              4230   36%
>
>Total                     1507915             11780
>
>(from top)
>Memory: 12G real, 10G free, 485M swap in use, 15G swap free
>
>So it still seems there is some memory really free?

Indeed; not sure what the exact status is of tmpfs pages.

Casper

==================From: the hatter <hatter at pzat.meep.org> ===========
Assuming all the database is currently in memory, then my next step would
be to stick a reverse proxy in front of the real httpd (assuming that a
sensible number of db queries might be repeats, and/or you've got a
reasonable amount of static content) Unfortunately, it won't make you
machine generally busier, as it'll most likely reduce the amount of DB
activity quite a bit, speeding up new db queries even further.

What it should do is allow you to push an even larger volume of pages
through the httpd before you run out of any single machine resource, and
since you've got plenty of unused RAM, that means you can have a huge
cache of static and baked content in memory.
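
A rough sketch of the front-end side, assuming Apache with mod_proxy (the
host name and paths are made up for illustration):

# front-end httpd.conf: hand dynamic requests to the V880, keep/serve
# static content locally so repeat hits never reach the back end
ProxyPass        /cgi-bin/  http://backend.example.com/cgi-bin/
ProxyPassReverse /cgi-bin/  http://backend.example.com/cgi-bin/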

the hatter

==================From: Darren Dunham ===============================

Swap is never "on tmpfs". tmpfs filesystems may make use of virtual
memory, which may be either RAM or disk.

> So what should I make (changes in /etc/system, other ?) in order my machine
> could use more memory for buffers, temporary writes, keeping part of indexes
> in memory ... ?

That's an application question. The system gives it as much memory as
it wants unless the requests fail. Generally with a web server you
could start up more servers; that would get you a quicker response in
some cases. With a database, you could tune it to use more memory for
caching. Neither of these is tuned at the OS level.
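
For Apache, "more servers" usually just means raising the child-process
pool in httpd.conf, something like (numbers purely illustrative):

StartServers       16
MinSpareServers     8
MaxSpareServers    32
MaxClients        256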

In addition, all OS RAM is eligible for caching filesystem information,
but that space is not reflected in the "used memory" on Solaris 8 and
up. It just appears as free memory.

> Could such changes improve my services (e.g. response time, max number
> of concurrent users etc) ?

Possibly.

============== From: Joe Flecher ================================
Not sure what you've got at the moment but here's a section of what we use
on our 880s.

* SysV IPC parameters for Oracle, TNG, and PKMS
set shmsys:shminfo_shmmax=34359738368
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmseg=100

set semsys:seminfo_semmni=4096
set semsys:seminfo_semmsl=16010
set semsys:seminfo_semmns=65535
set semsys:seminfo_semmnu=1024
set semsys:seminfo_semume=512
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767

set msgsys:msginfo_msgmap=1024
set msgsys:msginfo_msgmax=32768
set msgsys:msginfo_msgmnb=65535
set msgsys:msginfo_msgmni=1024
set msgsys:msginfo_msgssz=2048
set msgsys:msginfo_msgtql=1280
set msgsys:msginfo_msgseg=4096
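
These only take effect after a reboot; afterwards, something like the
following (a standard command) shows what the running kernel actually
picked up:

# sysdef | grep -i shm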

==================From Kevin.Buterbaugh================================
GB,

     Probably the biggest thing you could do is increase the amount of
shared memory (shminfo_shmmax) allocated to the database. You don't
mention what database you're using, but if it's a 32-bit version, you'll
be limited to 4 GB. 64-bit, of course, bumps that up significantly.
Another thing to look into from the database side is named caches for your
indexes to keep them in RAM.

     90% CPU utilization isn't a cause for concern, but what is the ratio
of %usr to %sys (it should be about 2 to 1)? If it's less than that, you
might have some inefficient apps running.
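
The split is easy to watch with the standard tools, e.g.:

# sar -u 5 5      (%usr / %sys / %wio / %idle)
# mpstat 5        (usr/sys columns, per CPU)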

     I'd highly recommend you pick up a copy of "System Performance
Tuning" (2nd edition) by Musumeci and Loukides, published by O'Reilly.
Excellent book. Also, the "Solaris Tunable Parameters Reference Manual,"
which may be downloaded from docs.sun.com, would be a useful reference to
have on hand. HTH...
Kevin Buterbaugh - UNIX System Administrator

====================From przemolicc at poczta.fm =======================

Just a couple of questions:
- what is the layout of your disks?
- what options (logging, forcedirectio) did you mount the filesystems with?
- did you build any metadevices? If so, how? RAID 1+0? RAID 5? Etc.

If you use forcedirectio you don't get any buffering, so in your case it
would be good to turn it off (just 'logging').
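
To see what a filesystem is currently mounted with, and to change the
options without unmounting (the device and mount point below are made up):

# mount -v | grep /dbdisk
# mount -F ufs -o remount,logging,noatime /dev/dsk/c1t0d0s6 /dbdisk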

przemol
======================= E O T ===============================================
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers


