Eclipse Community Forums
Home » Modeling » EMF » CDO model evolution strategy (Looking for thoughts on our proposed model evolution strategy.)
CDO model evolution strategy [message #1058896] Wed, 15 May 2013 15:09
Kyle B
Messages: 12
Registered: April 2013
Junior Member
Hello,

We are currently in the process of migrating our application to use the CDO framework. For the past few weeks I have been investigating how to handle model evolution.

Our meta-models change very frequently, so it is safe to say we will need to evolve the CDO model with each version upgrade. I have investigated some of the suggestions others have posted on this forum, but none seem to fit our needs (backup/restore, copying the plugin with a new nsURI, dumping SQL).

The approach I have been investigating involves executing SQL commands against the database to update the table structures, and updating the CDO meta-model tables to reflect the new changes. The model changes will either be detected using EMF Compare or be described manually by the developers.

I have manually tested this approach with several different types of model changes and have been successful so far. I was wondering if anyone more experienced with CDO might be able to foresee a pitfall?
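To make the discussion concrete, here is a minimal sketch of the kind of statements such a migration step could emit for one change type (adding an optional attribute). All table and column names below are illustrative assumptions; the real names depend on the DBStore mapping strategy in use:

```java
// Sketch: turn one detected model change into migration SQL.
// Table/column naming here is hypothetical; CDO's DBStore derives
// the actual names from the mapping strategy and the EPackage.
public class AddAttributeMigration {

    /** DDL to add a column for a new optional EAttribute. */
    public static String addColumnSql(String eClassTable, String attrColumn, String sqlType) {
        return "ALTER TABLE " + eClassTable + " ADD COLUMN " + attrColumn + " " + sqlType;
    }

    /** Repoint the recorded package URI so CDO sees the evolved model (illustrative). */
    public static String updatePackageInfoSql(String oldNsUri, String newNsUri) {
        return "UPDATE cdo_package_infos SET uri = '" + newNsUri
             + "' WHERE uri = '" + oldNsUri + "'";
    }

    public static void main(String[] args) {
        System.out.println(addColumnSql("company_employee", "nickname", "VARCHAR(255)"));
        System.out.println(updatePackageInfoSql("http://company/1.0", "http://company/1.1"));
    }
}
```

The point is only that each detected change maps to a small, mechanical set of DDL/DML statements; the hard part is making that mapping complete and transactional.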

Thank you,
Kyle
Re: CDO model evolution strategy [message #1058979 is a reply to message #1058896] Thu, 16 May 2013 06:42
Per Sterner
Messages: 77
Registered: October 2011
Member
Hi,

We are also in the same situation. Until now, we could reimport the data from the old storage.

I am also interested in the SQL-command migration implementation. So far I have only tried migrating an EClass move to another EPackage; for that I only changed the 'cdo_class' column in the 'cdo_objects' table. (Although I thought it would be nice if the cdo_class in that table were a proper reference. :) )
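For reference, the single statement involved in that experiment might look like this. This is only a sketch: cdo_objects/cdo_class are the DBStore names mentioned above, but the numeric class IDs are made up for illustration and would in practice come from CDO's meta-model tables:

```java
// Sketch of the EClass-move experiment: repoint instances of a moved
// EClass to its new classifier ID. The IDs are illustrative; real
// values must be looked up in the repository's meta-model tables.
public class MoveEClassMigration {

    public static String repointSql(int oldClassId, int newClassId) {
        return "UPDATE cdo_objects SET cdo_class = " + newClassId
             + " WHERE cdo_class = " + oldClassId;
    }

    public static void main(String[] args) {
        System.out.println(repointSql(1001, 2001));
    }
}
```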

I am also interested in whether there are pitfalls. Perhaps we should cooperate on this migration functionality, so that a lot of migration cases are covered.

Regards,

Per Sterner
Re: CDO model evolution strategy [message #1059277 is a reply to message #1058979] Thu, 16 May 2013 20:00
Christophe Bouhier
Messages: 937
Registered: July 2009
Senior Member
Hello,

It's more and more obvious that many are looking for a solution to this. From Eike, I understand he wants to do it, if only it is properly funded. Funding is tricky for various parties (including myself) in monetary terms, but I can donate coding time. We will obviously need Eike's mentoring. (I am pretty sure he knows how to attack this problem.)

I have started to look into it myself. My approach is to produce a diff of the meta-models and use that diff to migrate the data. I don't want to do it SQL-based, as I believe the CDO logic needs to be in the loop. (This is very much a gut feeling.) I have studied the XML export/import code, which is some sort of middle ground between the CDO API and SQL. I have started experimenting with emulating the export/import behavior and applying the diffs, but I am nowhere near a real solution...

Eike, would you be willing to mentor a solution that would make it into CDO 4.4? (I followed your discussions regarding PDE and fragments for 4.3... so you must have other things on your mind!)

Cheers Christophe


[CDO] Re: CDO model evolution strategy [message #1059398 is a reply to message #1059277] Fri, 17 May 2013 22:19
Warwick Burrows
Messages: 88
Registered: July 2009
Location: Austin, TX
Member
Guys, this thread may not come to Eike's attention without [CDO] in the title. He did tell me once that I should have added that to the title of a post to this forum. I'll update the title in my reply just in case.

Oh and I'm also interested in a model evolution strategy as we've gone to production with CDO but have managed to avoid model changes up to this point. That won't last for long.

Warwick

Re: [CDO] Re: CDO model evolution strategy [message #1059416 is a reply to message #1059398] Sat, 18 May 2013 06:06
Eike Stepper
Messages: 5590
Registered: July 2009
Senior Member
On 18.05.2013 00:19, Warwick Burrows wrote:
> Guys, this thread may not come to Eike's attention without [CDO] in the title. He did tell me once that I should have
> added that to the title of a post to this forum. I'll update the title in my reply just in case.
Thank you, but I had noticed it before. I have a Thunderbird filter that highlights all posts with CDO or Net4j in the
subject ;-)

These days I'm very busy with preparing next month's release and this particular subject (model evolution support) has
been discussed a lot already. It's definitely on my wish list for 4.3.

Cheers
/Eike

----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper


Re: CDO model evolution strategy [message #1188208 is a reply to message #1058896] Fri, 15 November 2013 14:01
Christophe Bouhier
Messages: 937
Registered: July 2009
Senior Member
Hi Kyle,

I am maintaining the Edapt project, which is about EMF model 'coupled' evolution. I would like to work with you on what you have done so far related to evolution.

The approach I am after is similar to what you describe; however, I would like to use the Edapt history model to perform the migrations.

If we combine our efforts, the Edapt work on the one hand and your work migrating the CDO repo on the other, we could have an interesting generic solution. Can you please contact me at dzonekl at gmail (obfuscated for spam parsers) dot com?

Thanks.
Christophe Bouhier




Re: CDO model evolution strategy [message #1194671 is a reply to message #1188208] Mon, 18 November 2013 16:35
Lothar Werzinger
Messages: 153
Registered: July 2009
Location: Bay Area
Senior Member
As there are many interested parties, of which obviously none can shoulder paying in full for a complex feature like model evolution, I wonder if we can somehow crowd-fund this endeavor (even if it's a comparatively small crowd ;) compared to regular crowd-funded projects). I don't have a budget yet, but I would certainly be interested.
Re: CDO model evolution strategy [message #1203066 is a reply to message #1194671] Fri, 22 November 2013 13:06
Christophe Bouhier
Messages: 937
Registered: July 2009
Senior Member

Hi,

I have done some studying of Edapt over the last couple of weeks, and I think it can only partly be re-used for CDO and/or will suffer a severe performance penalty.
I still intend to blog about the inner guts of Edapt, so anyone can judge for themselves, but some *thinking* for the moment:

- Edapt is EMF Resource-based. That means the migration process takes in model URIs and creates a ResourceSet/Resource for them. It caters for loading models with an *older* EPackage.

- The metamodel reconstruction from a model History works pretty neatly, and can be re-used in CDO, I believe. So basically, given a history for an Ecore with 2 or 3 releases, the metamodel reconstruction process allows you to build the Ecore/EPackage back and forth between releases.

The big question is how to let the reconstructor interact with the CDO package registry and migrate the schema of the IStore back and forth. This is an area which needs more studying...

- The actual model data migration creates an Edapt-specific "Model" instance of the user model to migrate it to the appropriate release of the Ecore. This Edapt "Model" model is loaded in memory and takes Resources as input. Currently I am not convinced this will work very well for CDO. It will work, but imagine loading millions of records from a CDO resource into memory (on top of what is already cached by CDO). What I discussed with Eike is that we would like the migration to happen on the IStore level, not on CDOResources. If that is the case, the way Edapt accesses model objects needs to be redesigned.

- Finally, Edapt creates a mapping index for mapping its own model instances to the instances of the model which needs to be migrated. This index is auto-populated when the changes of a release are visited (these can be both Meta and Model changes). It is then used to resolve elements in the migration process.

Conclusion:

- Edapt as it is can almost work with CDO by accessing CDO Resources. Each new release of an Ecore will need to register a corresponding EPackage with CDO, which would allow the IStore to be populated with the schema.

I will commit some test cases which experiment in this direction in a specific topic branch of the Edapt Git project (git.eclipse.org). The goal here is to see if it works without any changes to CDO functionality.

- The Resource based migrator will not scale very nicely.
Re: CDO model evolution strategy [message #1204632 is a reply to message #1203066] Sat, 23 November 2013 06:53
Eike Stepper
Messages: 5590
Registered: July 2009
Senior Member
Hi Christophe,

Thank you for the nice analysis ;-)

I don't think that we can make use of any of the instance-level functions of Edapt, because they just wouldn't scale. It would be great if we could use Edapt's frontend and its release management (assuming that it operates on the EPackage level only). I'm curious to see what the recorded changes look like. Can you paste some examples? The goal should be to use them directly in the DBStore in order to evolve the DB schema and migrate the existing table rows. If we need an object index for the duration of the actual data migration (which I doubt), we can create temporary DB tables with adequate indexes.

Before we dive too much into the technical discussion on how to apply the recorded metamodel changes to the DB schema I
suggest that we clarify the overall workflow. What are the involved roles, phases and actions? Do we want to support
reverse evolution? How much do CDO client applications depend on specific meta model versions? What about local
availability of generated model plugins? What if OSGi/p2 are not available locally?

Are there other important questions?

Cheers
/Eike

----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper



Re: CDO model evolution strategy [message #1204967 is a reply to message #1204632] Sat, 23 November 2013 10:48 Go to previous messageGo to next message
Christophe Bouhier is currently offline Christophe BouhierFriend
Messages: 937
Registered: July 2009
Senior Member
Eike,

See my responses inline below.

On 23-11-13 07:53, Eike Stepper wrote:
> Hi Christophe,
>
> Thank you for the nice analysis ;-)
>
> I don't think that we can make use of any of the instance-level
> functions of Edapt because they just wouldn't scale. It would be great
> if we could Edapt's frontend and the release management (assuming that
> it operates on EPackage level only). I'm curious to see how the recorded
> changes look like. Can you paste some examples?
The changes (Operations) are well documented here:
http://www.eclipse.org/edapt/operations.php

Here is an example of the history model in xmi. As you can see it points
to the component.ecore

<history:History xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore"
    xmlns:history="http://www.eclipse.org/emf/edapt/history/0.3">
  <releases date="2008-11-23T22:45:42.562+0100">
    <changes xsi:type="history:CompositeChange">
      <changes xsi:type="history:Create" element="component.ecore#/">
        <changes xsi:type="history:Set" element="component.ecore#/"
            featureName="name" dataValue="component"/>
        <changes xsi:type="history:Set" element="component.ecore#/"
            featureName="nsURI" dataValue="http://component/r0"/>
        .......


The Edapt metamodel constructor builds an EPackage from these changes for Release 1 of the history; subsequent releases migrate the EPackage but also migrate the model data (if needed). Custom migration operations are supported; a ClassLoader is provided to the migrator to load the custom migrations.
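To illustrate what "building an EPackage from recorded changes" means, here is a toy replay of Set operations like the ones in the XMI above, with a plain map standing in for the EPackage. The names follow the snippet, but the replay logic is a drastic simplification of what Edapt's reconstructor actually does on Ecore objects:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch: replay recorded "Set" changes (featureName/dataValue
// pairs, as in the Edapt history XMI) onto a map that stands in for
// the reconstructed EPackage. Edapt's real reconstructor operates on
// Ecore model elements, not maps.
public class HistoryReplaySketch {

    public static Map<String, String> replay(String[][] setChanges) {
        Map<String, String> ePackage = new LinkedHashMap<>();
        for (String[] change : setChanges) {
            ePackage.put(change[0], change[1]); // featureName -> dataValue
        }
        return ePackage;
    }

    public static void main(String[] args) {
        String[][] release1 = {
            {"name", "component"},
            {"nsURI", "http://component/r0"},
        };
        System.out.println(replay(release1));
        // {name=component, nsURI=http://component/r0}
    }
}
```

Replaying the change list forward gives the new release's package; to go "back and forth between releases", as described above, each change also needs an inverse.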


> The goal should be to use them directly in the DBStore in order to
> evolve the DB schema and migrate the existing table rows. If we need
> an object index for the duration of the actual data migration (which
> I doubt) we can create temporary DB tables with adequate indexes.
>
>
> Before we dive too much into the technical discussion on how to apply
> the recorded metamodel changes to the DB schema I suggest that we
> clarify the overall workflow. What are the involved roles, phases and
> actions? Do we want to support reverse evolution? How much do CDO client
> applications depend on specific meta model versions? What about local
> availability of generated model plugins? What if OSGi/p2 are not
> available locally?
>
Very good points. Considering that these questions and choices need to
be thought out extensively, more study and design is needed. I hope to
find a way to share this.

Re: CDO model evolution strategy [message #1204996 is a reply to message #1204967] Sat, 23 November 2013 11:09
Martin Taal
Messages: 5340
Registered: July 2009
Senior Member
Hey,
Just to add my 2 cents to the discussion. In my experience (ERP business web apps), the main model change is adding new EAttributes/EReferences or EClasses, and maybe making existing EAttributes/EReferences non-mandatory (or mandatory with a default value). When doing new versions of a model/application, one mostly wants to retain the previous data.

I think this type of model evolution is quite doable.

So I think a distinction should be made between model evolutions which are backward compatible with the previous version (like adding new EStructuralFeatures, possibly with a default-value script) and those which are not backward compatible.

The first covers (I think, in my world) 95% (or even more) of the use cases.

In my experience, non-backward-compatible changes require conversion scripts etc., which are bound to be custom solutions anyway; it is quite hard to find a common approach there.
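That distinction could be encoded as a simple classifier over change kinds. A sketch follows; the change-kind names are made up for illustration (they are not Edapt's operation names):

```java
// Sketch of the backward-compatible vs. breaking distinction: which
// change kinds can be applied automatically (possibly with a default
// value) and which need a custom conversion script. The kind names
// are illustrative only, not taken from any real framework.
public class ChangeClassifier {

    public static boolean isBackwardCompatible(String changeKind) {
        switch (changeKind) {
            case "ADD_ATTRIBUTE":    // new column, nullable
            case "ADD_REFERENCE":    // new reference/foreign key
            case "ADD_CLASS":        // new table
            case "MAKE_OPTIONAL":    // drop NOT NULL constraint
            case "ADD_WITH_DEFAULT": // new mandatory column + default value
                return true;
            default:                 // renames, deletes, type changes, ...
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isBackwardCompatible("ADD_ATTRIBUTE")); // true
        System.out.println(isBackwardCompatible("CHANGE_TYPE"));   // false
    }
}
```

The point of such a classifier is that the compatible kinds (the 95% case above) can be handled by a generic migrator, while everything else is routed to a custom conversion script.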

For database schema updating I have been looking at Liquibase, which is widely used, but for lack of time I have not had the opportunity to incorporate Liquibase support into Texo/Teneo yet.

gr. Martin




--

With Regards, Martin Taal

Springsite/Elver.org
Office: Hardwareweg 4, 3821 BV Amersfoort
Postal: Nassaulaan 7, 3941 EC Doorn
The Netherlands
Cell: +31 (0)6 288 48 943
Tel: +31 (0)84 420 2397
Fax: +31 (0)84 225 9307
Mail: mtaal@xxxxxxxx - mtaal@xxxxxxxx
Web: www.springsite.com - www.elver.org