[CDO] Questions about 0.8.0M5 [message #112598]
Tue, 19 February 2008 03:58
Eclipse User
Hi Eike,
two questions concerning the release notes of your new release.
1. Is the HibernateStore ready to use - i.e., could I change from MySQL to
Hibernate/MySQL? Do I have to provide the mapping, or do you generate it
internally?
2. Do I understand savepoints correctly in that they do not write to the
database?
The rationale for 2. is as follows:
As I understand it, CDO does not run into an OutOfMemoryError when I
read-access (iterate over) huge models, because the CDOObjects are loaded
on demand and not interconnected.
But when I create huge models, CDOObjects are not written to the
database as long as I do not commit().
So when I read a huge data file and create CDOObjects in one big
transaction, I will encounter an OutOfMemoryError, because the dirty
objects accumulate until I commit.
Leading to question 2 - savepoints are only in-memory markers of the
last valid state of the data - they do not prevent OutOfMemoryErrors in
the case of a huge number of dirty objects, right?
Cheers,
Stefan
Re: [CDO] Questions about 0.8.0M5 [message #112655 is a reply to message #112598]
Tue, 19 February 2008 11:38
Eclipse User
Hi Stefan,
Comments below...
Stefan Winkler wrote:
> Hi Eike,
> two questions concerning the release notes of your new release.
> 1. Is the HibernateStore ready to use - i.e., could I change from MySQL to
> Hibernate/MySQL? Do I have to provide the mapping, or do you generate it
> internally?
No, it is not ready yet, but Martin and I think it could be ready for
EclipseCon ;-)
The mappings will probably be generated automatically by default, but
annotations in the model will be able to influence this.
I have no idea why it appears in the release notes! It could be a mistake
on my side, but I suspect an issue with the generator script in the
context of the new Bugzilla workflow that we are supposed to use.
Formerly, setting a bug to ASSIGNED meant it was included in the release
notes; today it should just indicate that the bug is being worked on.
Unfortunately I can't verify that right now since I'm not at home.
> 2. Do I understand savepoints correctly in that they do not write to the
> database?
Yes, they are "just" points in time within a transaction to which you can
roll back, instead of only being able to roll back the whole transaction.
Network traffic, or even database write access, only happens on commit().
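A toy illustration of these semantics (this is not CDO's actual API; the class and method names below are made up for the sketch): a savepoint is just a marker into the in-memory list of pending changes, and rolling back to it discards only the changes made after the marker.

```java
import java.util.ArrayList;
import java.util.List;

// Toy transaction illustrating savepoint semantics: everything stays
// in memory until commit(); a savepoint is just a marker into the
// list of pending changes, so setting one causes no I/O at all.
class MiniTx {
    private final List<String> pending = new ArrayList<>();

    void change(String c)  { pending.add(c); }
    int setSavepoint()     { return pending.size(); }   // in-memory marker only
    void rollback(int sp)  { pending.subList(sp, pending.size()).clear(); }
    void rollback()        { pending.clear(); }         // whole transaction
    List<String> commit()  {                            // only now is anything "written"
        List<String> written = new ArrayList<>(pending);
        pending.clear();
        return written;
    }
}
```

Note that the pending list keeps growing until commit(), which is exactly why savepoints alone do not guard against OutOfMemoryErrors.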
> The rationale for 2. is as follows:
> As I understand it, CDO does not run into an OutOfMemoryError when I
> read-access (iterate over) huge models, because the CDOObjects are loaded
> on demand and not interconnected.
Yes, exactly.
> But when I create huge models, CDOObjects are not written to the
> database as long as I do not commit().
Yes, that's correct, too.
> So when I read a huge data file and create CDOObjects in one big
> transaction, I will encounter an OutOfMemoryError, because the dirty
> objects accumulate until I commit.
That's true. The only way currently available to cope with huge
transactions is to split them into several commits.
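The splitting could be sketched like this (a self-contained, hypothetical sketch; BatchImporter and importInBatches are made-up names, and with a real CDOTransaction the commit callback would be tx.commit(), which flushes the dirty objects to the server):

```java
import java.util.List;
import java.util.function.Consumer;

// Imports a large number of items, committing after every batchSize
// items so that dirty objects never pile up in memory.
class BatchImporter {
    static <T> int importInBatches(List<T> items, int batchSize,
                                   Consumer<T> create, Runnable commit) {
        int commits = 0;
        for (int i = 0; i < items.size(); i++) {
            create.accept(items.get(i));            // e.g. create a CDOObject
            if ((i + 1) % batchSize == 0 || i == items.size() - 1) {
                commit.run();                       // e.g. tx.commit()
                commits++;
            }
        }
        return commits;
    }
}
```

The trade-off is that each commit() is a visible, non-rollbackable step, so a failure mid-import leaves a partially written model behind.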
> Leading to question 2 - savepoints are only in-memory markers of the
> last valid state of the data - they do not prevent OutOfMemoryErrors in
> the case of a huge number of dirty objects, right?
I fear that's correct, too. Is this a big issue for you?
Cheers
/Eike
Re: [CDO] Questions about 0.8.0M5 [message #112891 is a reply to message #112655]
Thu, 21 February 2008 08:26
Eclipse User
Hi,
I'm glad that I am beginning to guess CDO internals right ;-)
And answering your question:
>> So when I read a huge data file and create CDOObjects in one big
>> transaction, I will encounter an OutOfMemoryError, because the dirty
>> objects accumulate until I commit.
> That's true. The only way currently available to cope with huge
> transactions is to split them into several commits.
>
>> Leading to question 2 - savepoints are only in-memory markers of the
>> last valid state of the data - they do not prevent OutOfMemoryErrors
>> in the case of a huge number of dirty objects, right?
> I fear that's correct, too. Is this a big issue for you?
>
No - as I said, I derive complete huge models from other huge models, so
I can do "manual" rollbacks by deleting the new model completely if
something goes wrong. (This is not nice design, as I am abusing commits
as a memory saver, but that's OK for me.)
But thanks for asking ;-)
Cheers,
Stefan