Re: Config for OLTP system
- From: TBP <TheBigPotato@xxxxxxxxxxxxx>
- Date: Thu, 11 Aug 2005 18:50:40 GMT
Neil Truby wrote:
IDS 10.0 v3 on Solaris 9.
I'd be grateful for any comments on this OLTP system onconfig, about to go live on a SunFire v480 with 2 processors and 8 GBytes of RAM. I have only 6 external disks for the 60 GByte database server. I've striped the database across 3 of them with a 32k interlace, then mirrored this stripe in a RAID 0+1 arrangement.
I would defer to Art on this :D
I would also suggest just doing the following :
1. Ask the disk suppliers what throughput you should expect for reads from one of the chunk paths; then do a
timex dd if="chunk path" of=/dev/null bs=2k count=512000
and see what you actually get.
2. Do the same as above but with 2 running at the same time :O
3. Ideally, you want to check out the write performance as well :)
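The first two checks above can be sketched as a small script. CHUNK1 and CHUNK2 are placeholders, not real paths from the poster's system; they default to /dev/zero here purely so the sketch runs anywhere (timex is the Solaris spelling, time works elsewhere):

```shell
# Read-throughput sketch for steps 1 and 2 above.
# CHUNK1/CHUNK2 are assumptions: substitute your real raw chunk paths.
CHUNK1=${CHUNK1:-/dev/zero}
CHUNK2=${CHUNK2:-/dev/zero}

# Step 1: single-stream read, 512000 x 2k blocks = ~1 GByte.
time dd if="$CHUNK1" of=/dev/null bs=2k count=512000

# Step 2: two streams at once; each should finish in roughly the
# same elapsed time as step 1 if the 3-disk stripe is doing its job.
time dd if="$CHUNK1" of=/dev/null bs=2k count=512000 &
time dd if="$CHUNK2" of=/dev/null bs=2k count=512000 &
wait
```

Step 3 (write performance) would need a scratch area you can safely overwrite, so it isn't sketched here.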
Somehow it seems a little sluggish, though we've done no detailed comparative testing ....
What do you mean by "seems sluggish"??
# Root Dbspace Configuration
ROOTNAME rootdbs # Root dbspace name
ROOTPATH /opt/informix/dbspaces1/rootdbs_1
Presumably a link to a raw device??
RESIDENT 1 # Forced residency flag (Yes = 1, No = 0)
MULTIPROCESSOR 1 # 0 for single-processor, 1 for multi-processor
NUMCPUVPS 4 # Number of user (cpu) vps
and 2 Physical CPUs - oh well they are probably "really fast"!
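A common starting point on a 2-CPU box (my assumption, not a quote from the poster's file) is one cpu vp per physical CPU:

```
MULTIPROCESSOR 1
NUMCPUVPS 2 # one cpu vp per physical CPU on the v480
```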
SINGLE_CPU_VP 0 # If non-zero, limit number of cpu vps to one
NOAGE 1 # Process aging
AFF_SPROC 0 # Affinity start processor
AFF_NPROCS 0 # Affinity number of processors
Interesting what to do with this - do you want to affinity to 0,1,0,1??
# Shared Memory Parameters
LOCKS 20000 # Maximum number of locks
This seems "low" if this is a big OLTP system (how many users does this OLTP system have?)
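As a rough sizing sketch (the session and per-session figures are illustrative assumptions, since the user count isn't given): 500 concurrent sessions each holding up to 200 locks would already need five times the configured value:

```
LOCKS 100000 # e.g. 500 sessions x 200 locks/session (illustrative figures)
```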
NUMAIOVPS 3 # Number of IO vps
As long as you are using KAIO and all chunks are raw, this will be okey dokey.
PHYSBUFF 10240 # Physical log buffer size (Kbytes)
LOGBUFF 1024 # Logical log buffer size (Kbytes)
Why? Why are these so big???
When you flush a LOGBUFF, you will fill more than 1 logical log (on the assumption that they are indeed 1,000k).
Even with, say, 1,000 OLTP users, a LOGBUFF of 512 should be absolutely fine, and if you have unbuffered logging, then a LOGBUFF of 1,024 would be really inefficient (check out onstat -l from a "standard" period).
Try PHYSBUFF / LOGBUFF at 128 / 128 and then check out the usage.
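In onconfig terms, that suggestion is a starting point to monitor, not a definitive setting:

```
PHYSBUFF 128 # Physical log buffer size (Kbytes)
LOGBUFF 128 # Logical log buffer size (Kbytes)
```

Then watch the pages/io figures in onstat -l under normal load before adjusting further.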
CLEANERS 33 # Number of buffer cleaner processes
How many chunks?
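A common rule of thumb (an assumption here, since the chunk count isn't given) is roughly one cleaner per disk or chunk rather than 33:

```
CLEANERS 6 # e.g. one per external disk in this 6-disk layout
```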
# Dynamic Logging
# 2 : server automatically adds a new logical log when necessary. (ON)
# 1 : notify DBA to add new logical logs when necessary. (ON)
# 0 : cannot add logical log on the fly. (OFF)
# When dynamic logging is on, we can have higher values for LTXHWM/LTXEHWM,
# because the server can add new logical logs during long transaction rollback.
# However, to limit the number of new logical logs being added, LTXHWM/LTXEHWM
# can be set to smaller values.
# If dynamic logging is off, LTXHWM/LTXEHWM need to be set to smaller values
# to avoid long transaction rollback hanging the server due to lack of logical
# log space, i.e. 50/60 or lower.
# In case of a system configured with CDR, the difference between LTXHWM and
# LTXEHWM should be at least 30% so that we can minimize log overrun issues.
I love this :D
So, you are happy for the instance to add logical logs if you have a happy developer just "setting up the values for a new column"; then it hits LTXHWM, adds a log or two, and then the developer does it again.
Consider reducing LTXHWM and LTXEHWM to 50 / 60 or even 40 / 50.
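In onconfig terms, that suggestion is:

```
LTXHWM 50 # Long transaction high water mark (%)
LTXEHWM 60 # Long transaction exclusive access high water mark (%)
```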
For OLTP I would suggest about 5 Mb per LRU, whereas this gives you about 66 Mb per LRU; so, LRUS at 257?
Also, with LRU_MAX_DIRTY you can potentially be in a position where you want to flush out 200 Mb. Shut it down to 2 and 1 for max and min dirty.
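As an onconfig sketch of those two suggestions (the LRUS figure is the poster's rough estimate, not a calculated value):

```
LRUS 257 # ~5 Mb per LRU queue instead of ~66 Mb
LRU_MAX_DIRTY 2 # start cleaning at 2% dirty
LRU_MIN_DIRTY 1 # stop cleaning at 1% dirty
```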
Still, live running will give you an idea :D
If using onbar, consider what you set ... to; you will flood the storage manager if you have 100 dbspaces and this is set to 0.