PreparedStatement caching in bitronix and in the database driver JSqlConnect

PreparedStatement caching in bitronix and in the database driver JSqlConnect

Abbas Gadhia
Hi,
I am facing an issue with a typical Bitronix setup wherein, if more than 100 PreparedStatements are fired on a single connection, the database driver does not allow me to fire any more. Digging into the driver code, we realized that there is a limit on the number of statements a single connection can cache (roughly: if cacheSize > 100, it throws an error).

The problem is that these statements (more than 100) are all fired sequentially from a single user request. Because they are sequential, the connection pool keeps returning the same connection to the application, and so the last few PreparedStatements fail.

My question is:
Is there a way for me to invalidate the connection before it reaches its maximum PreparedStatement cache size?
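
Roughly, the request does something like this (simplified sketch, not our actual code; the DataSource and SQL are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.sql.DataSource;

    // Simplified sketch of the failing pattern; the DataSource and SQL are placeholders.
    // Everything happens inside one request/transaction, so the pool keeps handing back
    // the same physical connection and the driver's per-connection statement count grows.
    void runRequest(DataSource dataSource) throws Exception {
        Connection connection = dataSource.getConnection();
        try {
            for (int i = 0; i < 150; i++) {
                PreparedStatement stmt =
                    connection.prepareStatement("SELECT name FROM some_table WHERE id = ?");
                stmt.setInt(1, i);
                stmt.executeQuery().close();
                stmt.close();
                // somewhere past the 100th statement the driver refuses to prepare any more
            }
        } finally {
            connection.close();
        }
    }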

Re: PreparedStatement caching in bitronix and in the database driver JSqlConnect

Ludovic Orban
Administrator
I see two ways around that problem:

1) split your work between two transactions
2) enable prepared statements cache

The first option is obvious but not very practical. The second one might help, but with no guarantee. If most of your prepared statements are for the same query, the BTM connection pool can reuse them instead of recreating them. Just call setPreparedStatementCacheSize(size) on the PoolingDataSource instance before initialization to enable it. You will certainly need to tweak the cache size, though.
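
For example (minimal sketch, untested; the XADataSource class name, unique name, URL and credentials are placeholders for your own JSqlConnect configuration):

    import bitronix.tm.resource.jdbc.PoolingDataSource;

    // Minimal sketch of enabling BTM's prepared statement cache; all names and
    // driver properties below are placeholders for your own JSqlConnect setup.
    PoolingDataSource ds = new PoolingDataSource();
    ds.setClassName("com.example.SomeXADataSource");      // your driver's XADataSource class
    ds.setUniqueName("jdbc/mydb");
    ds.setMaxPoolSize(5);
    ds.setPreparedStatementCacheSize(128);                // 0 (the default) disables the cache
    ds.getDriverProperties().setProperty("user", "app");
    ds.getDriverProperties().setProperty("password", "secret");
    ds.getDriverProperties().setProperty("url", "jdbc:..."); // placeholder URL
    ds.init();                                            // call after all properties are set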

Re: PreparedStatement caching in bitronix and in the database driver JSqlConnect

Abbas Gadhia
Well, it turned out that the limit of 100 prepared statements per connection was a driver bug, which they have now fixed.

Nevertheless, I had tried enabling the prepared statement cache, and it gave me the same problem: even though Bitronix was caching the statements, the driver would still throw the same error.

Thanks for your help!
Btw... BTM is awesome!
