Re: efficient strategy to avoid too many transaction.commit() [message #836800 is a reply to message #836647] |
Wed, 04 April 2012 23:41 |
Am 04.04.2012 20:41, schrieb Ligaj Pradhan:
> Hi everyone !
> I have a many to many relationship between 'Books' and 'Writers' in my EMF model.
> I have multiple repositories from where I extract the data and populate my models and store them in SQL database.
> I have to check in the database whether the writer already exists. (Writers are unique by their unique id.)
> Checking the resource for a writer with a certain id would be much faster, but since I want to
> incrementally add data every couple of days.....the resources would not be available...
>
> Checking the database for whether the writer exists....would also mean that I have to do transaction.commit() every time I
> add a writer....
>
> transaction.commit() after extracting every new writer is proving to be very time consuming.....especially while
> extracting thousands and thousands of writers.
Even if we find a solution to the million-lookups problem, you should call CDOTransaction.commit() every couple thousand
insertions to free up resources on the client side.
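A minimal sketch of that batching pattern follows. The `insert` and `commit` methods here are placeholder counters for illustration, not real CDO calls; in a real import, `commit()` would delegate to `CDOTransaction.commit()` and `insert` would attach the new object to a resource in the open transaction:

```java
import java.util.List;

public class BatchedCommit {
    static final int BATCH_SIZE = 2000; // commit every couple thousand insertions
    int commits = 0;

    // Stands in for CDOTransaction.commit(); counts calls for illustration.
    void commit() {
        commits++;
    }

    // Stands in for attaching one new Writer object to the open transaction,
    // e.g. adding it to the contents of a resource obtained from the transaction.
    void insert(String writerId) {
    }

    void importAll(List<String> writerIds) {
        int pending = 0;
        for (String id : writerIds) {
            insert(id);
            // Committing in batches frees client-side resources without
            // paying the round-trip cost on every single insertion.
            if (++pending % BATCH_SIZE == 0) {
                commit();
            }
        }
        commit(); // commit the final partial batch
    }
}
```

With 5000 insertions and a batch size of 2000, this performs three commits instead of 5000.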
>
> Is there any efficient strategy to avoid this transaction.commit() every time ?
If you import many books by a small number of writers, a local HashSet with the writers' IDs can prevent multiple redundant
remote lookups. If the imports all happen from one client, I would probably try to cache the set in a file and reuse it
for later imports.
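That cache could look roughly like this; `existsInDb` is a hypothetical stand-in for the real database lookup, and the file format (one ID per line) is just one simple choice for persisting the set between imports:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

public class WriterIdCache {
    private final Set<String> ids = new HashSet<>();
    int remoteLookups = 0;

    // Hypothetical stand-in for the real existence check against the database.
    boolean existsInDb(String id) {
        remoteLookups++;
        return false; // in this sketch, every first-seen writer is new
    }

    // Returns true if the writer must be created (known neither locally nor remotely).
    boolean isNewWriter(String id) {
        if (ids.contains(id)) {
            return false; // cache hit: no remote round trip at all
        }
        boolean isNew = !existsInDb(id);
        ids.add(id); // remember for the rest of this run and for later imports
        return isNew;
    }

    // Persist the cached IDs, one per line, so a later import on the
    // same client can reload them instead of hitting the database again.
    void save(Path file) throws IOException {
        Files.write(file, ids);
    }

    void load(Path file) throws IOException {
        if (Files.exists(file)) {
            ids.addAll(Files.readAllLines(file));
        }
    }
}
```

Importing 10000 books by 500 distinct writers then costs only 500 remote lookups, one per writer, instead of 10000.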
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper