Oracle memory tuning

From: rob.leadbeater@lynx.co.uk
Date: Mon Feb 28 2005 - 07:18:51 EST


Hi managers,

In an effort to increase the performance of our Oracle database cluster, we recently added an additional 4GB of memory to each node. However, this morning we're actually seeing much lower performance for certain operations than we were prior to adding the memory :-(

This is a two node ES47 memory channel cluster running 5.1B PK4.
Each ES47 consists of a single dual-CPU building block with 12GB of memory, (2 x 4GB) + (2 x 2GB). Prior to the upgrade, memory was 8GB (4 x 2GB).

The cluster is running Oracle 9i and 10g RAC. There's one 9i instance on each node and one 10g instance on each node.
SGA on the 9i instances is 656MB. SGA on the 10g instances is 6GB.
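One thing I noticed while sanity-checking those numbers: with shm_max at its current value (4278190080, from the ipc output further down), a 6GB SGA can't fit in a single System V shared memory segment. A quick back-of-the-envelope check (just segment-count arithmetic, not Oracle's actual allocation logic, which on Tru64 also involves ssm_threshold):

```python
import math

# Values taken from this mail (10g SGA size and the ipc shm_max setting).
sga_bytes = 6 * 1024**3   # 6GB SGA on each 10g instance
shm_max = 4278190080      # ipc: shm_max (~3.98GB)

# Minimum number of SysV shared memory segments needed if each
# segment is capped at shm_max:
segments = math.ceil(sga_bytes / shm_max)
print(segments)  # 2
```

So presumably the 10g SGA is being split across at least two segments per instance.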

Top output taken shortly after we added the memory indicated that we may not be using all of the additional memory correctly:

# top
load averages: 6.12, 6.33, 6.33
22:01:44
200 processes: 9 running, 34 waiting, 69 sleeping, 84 idle, 4 zombie
CPU states: 83.2% user, 0.0% nice, 16.5% system, 0.2% idle
Memory: Real: 8093M/12G act/tot Virtual: 8239M use/tot Free: 32M

  PID USERNAME PRI NICE SIZE RES STATE TIME CPU COMMAND
1155425 oracle 62 0 6251M 7618K run 9:37 75.30% oracle
1156141 oracle 49 0 6281M 38M run 3:06 23.30% oracle
1104580 oracle 42 0 7100M 272M WAIT 87:54 19.90% oracle
1156909 oracle 42 0 6311M 11M WAIT 0:12 18.90% oracle
1146081 root 44 0 333M 331M run 17:32 8.90% vdump
1050883 oracle 44 0 6263M 15M run 13:20 5.50% oracle
1050879 oracle 45 0 6263M 15M sleep 13:19 5.40% oracle
1156319 oracle 46 0 6253M 8658K sleep 0:09 5.20% oracle
1050875 oracle 45 0 6263M 15M sleep 1:28 4.80% oracle
1048576 root 0 0 13G 268M run 56:20 4.70% kernel idle
1050881 oracle 44 0 6263M 15M sleep 13:21 4.30% oracle
1050877 oracle 44 0 6263M 15M sleep 13:06 3.90% oracle
1052823 oracle 44 0 817M 10M run 8:27 2.10% oracle
1052819 oracle 44 0 817M 10M sleep 7:34 1.90% oracle
1050924 oracle 44 0 6253M 7659K sleep 0:17 0.70% oracle

Current kernel parameters follow at the end of the mail (I think I've included the relevant ones).

I've been trying to find some definitive tuning guides, but keep finding conflicting information.
For example, Oracle metalink note 169706.1 states that PER_PROC_DATA_SIZE should be set to 335544320 (320MB), whereas the HP doc here:
http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/ARH9GCTE/TNRCLXXX.HTM

says it should be set to the amount of RAM...
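Once someone confirms which value is right, I was planning to change it via a stanza along these lines in /etc/sysconfigtab (shown here with the metalink figure purely as an example; max_per_proc_data_size would need to be at least as large), then reboot:

```
proc:
    per_proc_data_size = 335544320
```

As I understand it, the attribute can also be poked at run time with sysconfig -r proc, though I'd rather get the target value straight first.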

Could someone give me some pointers as to how best to tune this beast, before it gets consigned to the skip?

Cheers,

Rob

# sysconfig -q proc
proc:
max_proc_per_user = 1024
max_threads_per_user = 4096
per_proc_stack_size = 536870912
max_per_proc_stack_size = 536870912
per_proc_data_size = 1073741824
max_per_proc_data_size = 4294967296
max_per_proc_address_space = 8589934592
per_proc_address_space = 8589934592
executable_stack = 0
autonice = 0
autonice_time = 600
autonice_penalty = 4
open_max_soft = 4096
open_max_hard = 4096
ncallout_alloc_size = 8192
round_robin_switch_rate = 60
sched_min_idle = 0
give_boost = 1
maxusers = 4096
num_wait_queues = 4096
num_timeout_hash_queues = 4096
enhanced_core_name = 0
enhanced_core_max_versions = 16
exec_disable_arg_limit = 0
dump_cores = 1
dump_setugid_cores = 0
executable_data = 0

# sysconfig -q ipc
ipc:
msg_max = 8192
msg_mnb = 16384
msg_mni = 64
msg_tql = 40
shm_max = 4278190080
shm_min = 1
shm_mni = 1024
shm_seg = 256
sem_mni = 16
sem_msl = 25
sem_opm = 10
sem_ume = 10
sem_vmx = 32767
sem_aem = 16384
sem_broadcast_wakeup = 0
max_kernel_ports = 314912
ssm_threshold = 8388608
ssm_enable_core_dump = 1
shm_allocate_striped = 1
shm_enable_core_dump = 1
pshared_hash_size = 2048

# sysconfig -q vm
vm:
ubc_minpercent = 10
ubc_maxpercent = 100
ubc_borrowpercent = 20
ubc_prewritemult = 2
ubc_approxlow = 60
ubc_approxhigh = 10
ubc_pgscan_drop_lock_cnt = 10
ubc_scan_pglock_cnt = 3
vm_max_wrpgio_kluster = 32768
vm_max_rdpgio_kluster = 16384
vm_cowfaults = 4
vm_segmentation = 1
vm_ubcpagesteal = 24
vm_ubcfilemaxdirtypages = 4294967295
vm_ubcdirtypercent = 40
ubc_maxdirtywrites = 5
ubc_maxdirtymetadata_pcnt = 70
ubc_kluster_cnt = 32
vm_ubcseqstartpercent = 80
vm_ubcseqpercent = 10
ubc_overflow = -1
vm_csubmapsize = 1048576
vm_ubcbuffers = 256
vm_syncswapbuffers = 128
vm_asyncswapbuffers = 4
vm_clustermap = 1048576
vm_clustersize = 65536
vm_syswiredpercent = 80
vm_troll_percent = 4
vm_inswappedmin = 1
vm_page_free_target = 1024
vm_page_free_swap = 527
vm_page_free_hardswap = 16384
vm_page_free_min = 30
vm_page_free_reserved = 20
vm_page_free_optimal = 527
vm_swap_eager = 0
swapdevice = /dev/disk/dsk2b
vm_page_prewrite_target = 2048
vm_ffl = 1
ubc_ffl = 1
vm_rss_maxpercent = 100
anon_rss_enforce = 0
vm_rss_block_target = 527
vm_rss_wakeup_target = 527
kernel_stack_pages = 2
vm_min_kernel_address = 18446741891866165248
malloc_percpu_cache = 1
vm_aggressive_swap = 0
new_wire_method = 1
vm_segment_cache_max = 50
gh_chunks = 0
rad_gh_regions[0] = 0
[snip: rad_gh_regions[1-62] = 0]
rad_gh_regions[63] = 0
gh_min_seg_size = 4194304
gh_fail_if_no_mem = 1
vm_bigpg_enabled = 0
vm_bigpg_anon = 64
vm_bigpg_seg = 64
vm_bigpg_shm = 64
vm_bigpg_ssm = 64
vm_bigpg_stack = 64
vm_bigpg_thresh = 6
vm_bigpg_factor = 1
vm_l3gh_anon = 1
vm_l3gh_shm = 1
vm_l3gh_ssm = 1
private_cache_percent = 0
gh_keep_sorted = 0
gh_front_alloc = 1
replicate_user_text = 1
enable_yellow_zone = 0
boost_pager_priority = 0
gsm_enabled = 1
kstack_free_target = 5
wire_audit_count = 0
vm_prepop_percent = 100




This archive was generated by hypermail 2.1.7 : Sat Apr 12 2008 - 10:50:16 EDT