What I am trying to work out is how long I have to convert our system, which is
a fat client (a Delphi app) accessing the Firebird database directly, to
a proper 3-tier approach, i.e. a thinner client talking to middleware that
handles connection pooling, queuing, prioritising, etc. The estimate for this
conversion is many man-years of work, so I need to know how long I have
before our current system runs into performance problems. That will
help us work out how best to allocate resources for the change to 3-tier.
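To give a feel for what the middleware tier needs to do, here is a rough sketch of the pooling/queuing idea in Python (the ConnectionPool class and fake_connect are purely illustrative, not Firebird API; a real pool would also handle broken connections and transactions):

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: callers queue up for a free connection."""
    def __init__(self, connect, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(connect())  # pre-open `size` connections

    def acquire(self, timeout=None):
        # Blocks (i.e. queues the caller) until a connection is free.
        return self._free.get(timeout=timeout)

    def release(self, conn):
        self._free.put(conn)

# Illustrative stand-in for a real database connect call.
def fake_connect():
    return object()

pool = ConnectionPool(fake_connect, size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()  # reuses c1 instead of opening a new connection
```

The point is that the database only ever sees `size` connections, no matter how many clients are queued up in front of the pool.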
In our current Firebird system, we have the hash slots value in the
lock manager set to its maximum. Even so, fb_lock_print
indicates that we are close to the recommended hash length
figures; an example is below. Are there any plans to increase the
hash slots limit? Do Vulcan, Firebird 3, or the 64-bit Firebird 2
builds change any of this?
fb_lock_print output:
Version: 15, Active owner: 0, Length: 16777216, Used: 14953276
Lock manager pid: 24218
Semmask: 0x6904, Flags: 0x0001
Enqs: 1924538961, Converts: 4488958, Rejects: 1140965, Blocks: 6006424
Deadlock scans: 2, Deadlocks: 0, Scan interval: 10
Acquires: 1989954825, Acquire blocks: 22512128, Spin count: 0
Mutex wait: 1.1%
Hash slots: 2039, Hash lengths (min/avg/max): 9/ 19/ 31
Remove node: 0, Insert queue: 0, Insert prior: 0
Owners (240): forward: 27220, backward: 8409756
Free owners (21): forward: 1511988, backward: 10777424
Free locks (55847): forward: 14566472, backward: 6378624
Free requests (86284): forward: 1436964, backward: 5726600
Lock Ordering: Enabled
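Incidentally, since we now watch these figures regularly, a quick way to keep an eye on them is to scrape the hash line out of the fb_lock_print output. A small sketch (the regex assumes the "Hash slots" line format shown above; the approx_locks figure is just slots times average chain length, a rough estimate):

```python
import re

def parse_hash_stats(text):
    """Pull slot count and min/avg/max chain lengths from fb_lock_print output."""
    m = re.search(
        r"Hash slots:\s*(\d+),\s*Hash lengths \(min/avg/max\):"
        r"\s*(\d+)/\s*(\d+)/\s*(\d+)",
        text)
    if not m:
        return None
    slots, lo, avg, hi = map(int, m.groups())
    return {"slots": slots, "min": lo, "avg": avg, "max": hi,
            # rough number of locks hashed = slots * average chain length
            "approx_locks": slots * avg}

sample = "Hash slots: 2039, Hash lengths (min/avg/max): 9/ 19/ 31"
stats = parse_hash_stats(sample)
```

With the numbers above that gives roughly 2039 * 19 = 38,741 locks spread over the table, which is why the chains are so long at only 2039 slots.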
The other issue is the maximum number of connections allowed. I
believe Superserver has a 1,024-connection limit. From my testing,
Classic doesn't seem to have this limit: I was able to get 3,000
connections without much problem, although I didn't do a lot of work
with them, so I don't know how badly the lock manager would cope at
those numbers. So it looks like in Classic, connections are limited
only by your hardware. Does anybody know if this will change with
Firebird 2 or 3? Let me reiterate that I do not think having 3,000
connections is a good idea, but having more than 1,024 connections
might be necessary as a transitional solution while we convert to a
3-tier approach.
Lastly, does anybody know of any other limits or problems we may
encounter if we try to increase the amount of processing we are doing
by a factor of 5 or 10? That is: a 250 GB database, 1,500 connections,
and 10 million transactions a day, probably on something like an
8-CPU dual-core Linux box with 128 GB of RAM, although HP-UX or
Solaris boxes may also be a possibility.
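For scale, 10 million transactions a day is only about 116/s on average, but it's the peak that will hurt; a quick back-of-envelope (the peak-to-average factor of 5 is my guess, not a measurement):

```python
# Back-of-envelope load figures for the 5-10x scale-up scenario.
txn_per_day = 10_000_000
avg_tps = txn_per_day / 86_400   # seconds in a day
peak_factor = 5                  # assumed peak-to-average ratio
peak_tps = avg_tps * peak_factor

connections = 1_500
# Average transactions per connection per second at peak.
per_conn_tps = peak_tps / connections

print(f"average: {avg_tps:.0f} tps, assumed peak: {peak_tps:.0f} tps")
print(f"per connection at peak: {per_conn_tps:.2f} tps")
```

So each of the 1,500 connections would be mostly idle even at peak, which suggests the real pressure point is lock-table and per-connection overhead rather than raw transaction throughput.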