Re: [CDO] Questions about 0.8.0M5 [message #112655 is a reply to message #112598] |
Tue, 19 February 2008 16:38 |
Hi Stefan,
Comments below...
Stefan Winkler wrote:
> Hi Eike,
> two questions concerning the release notes of your new release.
> 1. Is HibernateStore ready to use - so could I change from MySql to
> Hibernate/MySql? Do I have to provide the mapping or do you generate it
> internally?
No, it isn't yet, but Martin and I think that it could be ready for
EclipseCon ;-)
The mappings will probably be generated automatically by default, but
annotations in the model will have an influence on this.
I have no idea why it appears in the release notes! It could be a mistake on
my side, but I suspect an issue with the generator script in the context of
the new Bugzilla workflow we are supposed to use. Formerly, ASSIGNED meant
that a bug was included in the release notes; today it just indicates that
the bug is being worked on. Unfortunately I can't verify that right now
since I'm not at home.
> 2. Do I understand savepoints correctly in that they do not write to the
> database?
Yes, they're "just" points in time within a transaction to which you can
roll back, instead of only being able to roll back the whole transaction.
Network traffic or database write access only happens on commit().
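To make the semantics concrete: CDO's real API for this is CDOTransaction.setSavepoint() and CDOSavepoint.rollback(), but since those need a running repository, here is a minimal self-contained sketch that only models the behavior Eike describes. The class name and methods are hypothetical stand-ins; the point is that a savepoint is just an in-memory marker into the list of dirty changes, and rolling back discards everything after the marker without touching the database.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of savepoint semantics; NOT the CDO API.
public class SavepointSketch {
    private final List<String> dirty = new ArrayList<>(); // uncommitted changes, in memory only

    public void modify(String change) {
        dirty.add(change);            // accumulates until commit(); no I/O happens here
    }

    public int setSavepoint() {
        return dirty.size();          // the marker is just a position in the change log
    }

    public void rollbackTo(int savepoint) {
        // Discard only the changes made after the marker,
        // instead of rolling back the whole transaction.
        dirty.subList(savepoint, dirty.size()).clear();
    }

    public int dirtyCount() {
        return dirty.size();
    }
}
```

Note that the dirty list itself never shrinks except by rollback, which is exactly why savepoints cannot help with memory pressure: the marker adds nothing, and the changes before it stay resident until commit().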
> The rationale for 2. is as follows:
> As I understand, CDO does not run into an OutOfMemory when I read-access
> (iterate) huge models - because the CDOObjects are loaded on demand and
> not interconnected.
Yes, exactly.
> But when I create huge models, CDO-Objects are not written to the
> Database as long as I do not commit().
Yes, too.
> So when I read a huge data file and create CDO-Objects in one big
> transaction, I will encounter an OutOfMemory, because the dirty objects
> accumulate until I commit.
That's true. The only way currently available to cope with huge
transactions is to split them into several commits.
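The split-into-several-commits workaround can be sketched like this. The helper below is hypothetical (not part of CDO); in real code the commit hook would create the CDOObjects for the chunk and then call transaction.commit(), so that at most chunkSize dirty objects are ever held in memory at once.

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical helper illustrating the chunked-commit workaround.
public class ChunkedImport {
    // Processes items in fixed-size chunks, invoking the commit hook after
    // each chunk so dirty objects never accumulate past chunkSize.
    public static <T> int importInChunks(List<T> items, int chunkSize,
                                         Consumer<List<T>> commitChunk) {
        int commits = 0;
        for (int i = 0; i < items.size(); i += chunkSize) {
            List<T> chunk = items.subList(i, Math.min(i + chunkSize, items.size()));
            commitChunk.accept(chunk); // in CDO: create the objects, then transaction.commit()
            commits++;
        }
        return commits;
    }
}
```

The trade-off is that the import is no longer atomic: if a later chunk fails, the earlier commits are already in the repository.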
> Leading to question 2 - Savepoints are only in-memory markers of the
> last valid state of data - they do not prevent OutOfMemoryErrors in the
> case of a huge amount of dirty objects, right?
I fear that's correct, too. Is this a big issue for you?
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper