SUMMARY: Very large resident memory size of oracle processes.

From: Andre.van.Benthem@achmea.nl
Date: Fri Aug 09 2002 - 08:52:20 EDT


Dear managers,

So far I have received three responses, all of them giving the same explanation:

The resident set size (RSS) also includes the shared memory attached
by each process. This is different from Tru64 UNIX 4.0.

So each of the 46 oracle processes accessing the same shared memory
segment will report a portion of the Oracle SGA in its resident set
size. In our case the SGA is about 900MB.
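
One quick way to see this on the box itself is to compare the shared
memory segments with the process list. This is only a sketch: the
exact ipcs columns and options can vary between Tru64 releases.

# ipcs -m -a

The SEGSZ column should show a segment close to the 900MB SGA, and
NATTCH should be close to 46: one attach per oracle process, each of
which then counts the same shared pages in its own RSS.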

Thanks to:

Stephen Mulcahy
Gavin Kreuiter
Thomas Blinn

Complete answers:

Gavin Kreuiter:
There is no way to determine what portion of shared memory is used by
a process; the process might use the entire segment, or just a small
section of it. Some flavours of unix rationalise this by dividing the
total shared memory size amongst the number of processes attached to
the segment (which can ALSO be misleading). Tru64 opts to attribute
the entire segment to all processes. Therefore, 10 processes accessing
a shared memory segment of 1GB (all of which is in memory) will report
a total RSS of 10GB.

======================

Stephen Mulcahy:
Hi,

Just a guess, because it's been a while since I've done any Oracle
work, but Oracle uses large shared memory segments and, as far as I
know, the RSS value in ps output includes the shared memory for each
process. So the total RSS over all processes could be more than the
total memory in your system, since each process's memory includes the
shared memory.

Hhhhm, does that clarify or confuse things?
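
To verify Stephen's point, a small pipeline can sum the RSS reported
for all oracle processes. This is an illustrative sketch: it assumes
RSS is column 6 of ps auxw output and carries M/G suffixes, as in the
listing further below.

# ps auxw | awk '/^oracle/ {
      rss = $6                                # e.g. 349M or 1.09G
      if      (rss ~ /G$/) sum += rss * 1024  # awk reads the numeric prefix
      else if (rss ~ /M$/) sum += rss
      else                 sum += rss / 1024  # plain values are KB
  } END { printf "total reported RSS: %.0f MB\n", sum }'

On this system the total should come out around 13000 MB, far more
than the 2GB of physical memory, which is exactly the double counting
described above.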

=====================================
Thomas Blinn:
The "RSS" is the resident set size; it's a count of the number of
pages in the process' virtual address space that are actually in
physical memory. The count includes code pages that might be in
use as well in other processes, data pages that might be in shared
memory with other processes, as well as private pages (the stack,
some of the process context that's counted in the process' page
count, data pages that are not shared). So the total of the RSS
across all the active processes in the system may well exceed the
size of "managed" physical memory, since some pages are counted in
more than one RSS number.

If the system is performing normally and you don't have a high
page out rate, then you have NOTHING to worry about.
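
To check the page out rate, as suggested above, watch vmstat over a
few intervals. A sketch; the column names are from Tru64's vmstat and
may differ slightly between versions.

# vmstat 5 5    # five samples at 5-second intervals

A sustained non-zero "pout" (pages paged out) figure would indicate
real memory pressure; with pout at or near zero, the inflated RSS
totals are just the shared SGA pages being counted many times over.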

======================

Original Q:

Dear managers,

I noticed that most oracle processes show an excessive resident memory size.

Below are some of the processes (Tru64 5.1A PK2 / Oracle 8.1.7.3):

# ps auxw | egrep 'RSS|oracle'
USER     PID     %CPU %MEM VSZ    RSS   TTY  S  STARTED  TIME     COMMAND
oracle   389837   0.0 17.1 1.09G  349M  ??   S  Aug 03   1:56.19  ora_snpv_pvfbqp
oracle   389828   0.0 23.0 1.08G  469M  ??   S  Aug 03   1:33.94  ora_snpt_pvfbqp
oracle   389824   0.0 10.1 1.08G  207M  ??   S  Aug 03   1:20.46  ora_snpp

In total there are 46 oracle processes, each using about 300MB of
resident memory on average. The total resident memory adds up to
about 13GB on a system with only 2.0GB of physical memory!

The system is performing normally.

Has anyone had similar experiences, and perhaps an explanation?

# vmstat -P

Total Physical Memory = 2048.00 M
                      = 262144 pages

Physical Memory Clusters:

 start_pfn   end_pfn  type        size_pages / size_bytes
         0       877  pal                877 / 6.85M
       877    262135  os              261258 / 2041.08M
    262135    262144  pal                  9 / 72.00k

Physical Memory Use:

 start_pfn   end_pfn  type        size_pages / size_bytes
       877      1056  scavenge           179 / 1.40M
      1056      1854  text               798 / 6.23M
      1854      2001  data               147 / 1.15M
      2001      2217  bss                216 / 1.69M
      2217      2415  kdebug             198 / 1.55M
      2415      2421  cfgmgmt              6 / 48.00k
      2421      2423  locks                2 / 16.00k
      2423      2437  pmap                14 / 112.00k
      2437      3501  unixtable         1064 / 8.31M
      3501      3549  logs                48 / 384.00k
      3549      7655  vmtables          4106 / 32.08M
      7655    262135  managed         254480 / 1988.12M
                                      ========================
           Total Physical Memory Use: 261258 / 2041.08M

Managed Pages Break Down:

       free pages = 4565
     active pages = 61739
   inactive pages = 131922
      wired pages = 29485
        ubc pages = 26947
        ==================
            Total = 254658

WIRED Pages Break Down:

   vm wired pages = 6132
  ubc wired pages = 26
  meta data pages = 2596
     malloc pages = 11595
     contig pages = 2649
    user ptepages = 6260
  kernel ptepages = 222
    free ptepages = 5
        ==================
            Total = 29485

#

Regards, A. van Benthem



