AlSt [19/Feb/18 12:19 PM]
Or might this also be a bug in c3p0? I've seen that the version used in QB is 0.9.2.1, while 0.9.5.2 is the newest release.
QB only uses one connection to do a database backup. You might put the new c3p0 jar into QB's lib directory (plugins/com.pmease.quickbuild/libs) to see if the situation improves.
OK, it seems this has nothing to do with the backup itself. That really does use just one connection, but I've added some logging now to get insight into why a pool with a max size of 50 opens 100 connections. It even tries to open more than 100 connections, at which point the DB limit kicks in, which I had already increased to 100.
Please see the attached connection.txt file with the connection log; it is basically data fetched from pg_stat_activity. OK ... the problem was that we have a socket timeout set in the DB connection string for PostgreSQL.
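For reference, a diagnostic query along these lines can produce the kind of data mentioned above (the column names are from the standard pg_stat_activity view; the exact query behind connection.txt may differ, and the database name is an assumption):

```sql
-- List every connection held against the QB database, with its state
-- and when that state last changed (useful for spotting stuck waiters).
SELECT pid, state, backend_start, state_change, query
FROM pg_stat_activity
WHERE datname = 'quickbuild'   -- assumed database name
ORDER BY backend_start;
```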
The measurement-data truncate takes pretty long, so "delete all" is triggered, which locks the table. Subsequent calls against that table run into the socket timeout, and the QB connections (although still open) get marked as broken by c3p0, which then tries to open more and more connections. Once the timeout was raised high enough for the truncate call to finish without throwing a timeout exception, the connection pool stays low: about 10 connections open. I also reconfigured c3p0 to close idle connections, so it only creates connections when needed. Usually there are about 15 connections open and 10 in active use. Each open connection (the default pool size is 25) is costly on the DB server (memory consumption, open file handles, etc.).

Thanks for the detailed analysis. It will definitely help if someone else encounters this issue.
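The two fixes described above can be sketched roughly as follows. The parameter names are real PostgreSQL-JDBC and c3p0 settings, but the concrete values here are illustrative assumptions, not the ones actually used in this installation:

```properties
# PostgreSQL JDBC URL: socketTimeout is in seconds and must comfortably
# exceed the longest-running statement (the measurement-data truncate),
# otherwise c3p0 marks still-healthy connections as broken:
#   jdbc:postgresql://dbhost:5432/quickbuild?socketTimeout=600

# c3p0 settings: let the pool shrink back down when load drops.
c3p0.minPoolSize=5
c3p0.maxPoolSize=25
c3p0.maxIdleTime=300
c3p0.idleConnectionTestPeriod=60
```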