Proposed Support for Logical Model Integration in Eclipse

Version: 0.3

This document contains several sections which describe various aspects of the proposed or potential support for improved logical model integration in the Eclipse Platform (bug 37723). The requirements document can be found here and a description of the initial work performed in 3.1 is here. The following list summarizes each item and provides a brief description of the impact on the Eclipse Platform and its clients.

After presenting proposals for each of these areas, we discuss the potential role of EMF and present some generic Team Scenarios that describe how the functionality we are proposing would play out.

Problems View

Making the problems view more logical model aware has been broken into several pieces, as described in the following sections.


How do we improve the usability of filters in the Problems view? Work has started on this in 3.2 and there are several bug reports on the issue (108013, 108015, 108016). The basic ideas are:

An additional requirement identified by clients is the ability to filter on model specific information. We will need to collect some concrete scenarios on this to better understand the requirement.


Each problem type has different relevant properties. Java errors have a file path and line number. Other models may have other ways of describing a problem (e.g. a resource description and a field name). Ideally each problem would display its relevant properties. However, the Problems view often contains many different types of problems, each of which may have different relevant properties. Table widgets have a single set of columns, leading to the following possibilities:

Given that users may want to see different problem types in the Problems view at the same time, the most practical approach is to provide a generic set of columns (e.g. Severity, Description, Element, Path, Location) and allow the problem type to dictate what values appear in the columns.
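To illustrate the generic-column approach, here is a minimal sketch in which each problem type decides for itself what value appears in each of the fixed columns. All type names here (ProblemItem, JavaProblem, ModelProblem) are hypothetical illustrations, not Platform API.

```java
// Every problem type fills the same generic columns, but chooses what
// each value means: a Java problem uses a line number for Location,
// a model problem uses a field name.
interface ProblemItem {
    String getSeverity();
    String getDescription();
    String getElement();
    String getPath();
    String getLocation(); // e.g. a line number, or a field name
}

// A Java-style problem: location is a line number in a file.
class JavaProblem implements ProblemItem {
    private final String file; private final int line; private final String message;
    JavaProblem(String file, int line, String message) {
        this.file = file; this.line = line; this.message = message;
    }
    public String getSeverity() { return "Error"; }
    public String getDescription() { return message; }
    public String getElement() { return file.substring(file.lastIndexOf('/') + 1); }
    public String getPath() { return file; }
    public String getLocation() { return "line " + line; }
}

// A model problem: location is a field name rather than a line number.
class ModelProblem implements ProblemItem {
    private final String element; private final String field; private final String message;
    ModelProblem(String element, String field, String message) {
        this.element = element; this.field = field; this.message = message;
    }
    public String getSeverity() { return "Warning"; }
    public String getDescription() { return message; }
    public String getElement() { return element; }
    public String getPath() { return element; }
    public String getLocation() { return "field " + field; }
}
```

The table itself only knows the five generic accessors; nothing in the view needs to understand either problem type.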

Problem Type Specific Behavior

The Problems view currently supports custom Quick Fixes for a problem type. Another useful feature would be the ability to navigate to a model specific view. There is currently a Show In Navigator action which could be enhanced to support showing the affected model element in a model view (e.g. Show in Packages Explorer for Java problems).


If there was some way of determining what role the user was playing at a particular time, it would be possible to tailor views to that particular task. Such information could be used to enable particular Problems view filters.

Common Navigator

The Common Navigator is being pushed down from WTP into the Platform. The view allows model tooling to add a root node to the Navigator and control what appears under that node. Clients that wish to plug into the view will need to provide a root element, content provider, label provider, sorter and action set for inclusion in the navigator. Clients with existing Navigator style views can decide whether to keep their view separate or integrate it into the Common Navigator. JDT will probably want to integrate with the view to remain consistent with the Platform.

One aspect of the Common Navigator that is of particular interest to team operations is the ability to obtain a content provider that can show a logical model in a tree viewer. This would allow logical models to appear in team operation views and dialogs. The Common Navigator proposal defines an extension that provides this capability. The class for this extension is the NavigatorContentExtension and it provides the following:

For this API to be usable in Team operations, the NavigatorContentExtension contributed by the model must have access to the context of the team operation. Outside the context of a team operation, the content extension only has the local workspace from which to build its model tree. However, within the context of a team operation, there may be additional resources involved, specifically, resources that exist remotely but not locally (i.e. outgoing deletions or incoming additions). The model's content extension would need access to a team context so that these additional resources could be considered when displaying the model.

The following list summarizes the requirements that would be placed on a NavigatorContentExtension when being used to display the model in a team context.

Support for this can either be integrated with the Common Navigator API or made available as Team specific API (see Model Display in Team Operations). Our preference would be to integrate the team requirements with the Common Navigator requirements so that model providers only need to implement one API. In the rest of this section we will address the following two questions:

The next two sections propose answers to these questions.

Configuring a NavigatorContentExtension with a Team Context

In the Common Navigator API description that was available at the time of writing, the NavigatorContentExtension is instantiated for each viewer and has access to the state of that viewer. In the context of a team operation, the team provider would create the viewer that will be used to display the model tree. It could also associate the team context with the viewer so that it was available to the content extension.

A team operation requires the ability to obtain a content provider that can consider the team context when it builds a model tree. Since the tree is built by the content provider, the following method added to NavigatorContentExtension would need to consult the viewer state to see if a team context is available.

ITreeContentProvider getContentProvider()

where ISynchronizationContext is the interface that defines the team context available from the viewer state. The model would be responsible for displaying a model tree that includes relevant model objects that may not exist locally but are involved in the team operation.
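As a rough illustration, the sketch below shows a content provider that merges local children with remote-only elements when a team context happens to be available. TeamContext and ModelContentProvider are simplified stand-ins for the proposed ISynchronizationContext and NavigatorContentExtension, not actual API.

```java
import java.util.*;

// A trivial stand-in for the team context: it knows, per parent element,
// which children exist only remotely (e.g. incoming additions).
class TeamContext {
    private final Map<String, List<String>> remoteOnlyChildren = new HashMap<>();
    void addRemoteOnlyChild(String parent, String child) {
        remoteOnlyChildren.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }
    List<String> getRemoteOnlyChildren(String parent) {
        return remoteOnlyChildren.getOrDefault(parent, Collections.emptyList());
    }
}

class ModelContentProvider {
    private final Map<String, List<String>> localChildren = new HashMap<>();
    private final TeamContext context; // null outside a team operation

    ModelContentProvider(TeamContext context) { this.context = context; }
    void addLocalChild(String parent, String child) {
        localChildren.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    // Children are the local members plus, when a team context is
    // available, any elements that exist only remotely.
    List<String> getChildren(String parent) {
        List<String> result = new ArrayList<>(localChildren.getOrDefault(parent, Collections.emptyList()));
        if (context != null)
            result.addAll(context.getRemoteOnlyChildren(parent));
        return result;
    }
}
```

Outside a team operation (a null context) the provider degrades to the plain local workspace view, which matches the behavior described above.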

In addition, the ability to decorate model elements with their team state is required. Adding the following method to NavigatorContentExtension would provide this capability:

ICommonLabelDecorator getLabelDecorator()

The provided decorator would need to consult the team context that is available from the viewer state in order to determine the proper decorations for each model element.

The other remaining requirement is filtering based on team state. Filtering is not as well defined in the Common Navigator proposal but a similar approach as described for the other two requirements could also be used to provide a filter that filters on team state.

The above is meant to provide an idea of what is required rather than the exact solution. The final solution will depend on the final shape of the Common Navigator API.

The Team Context API

The ISynchronizationContext API below could be used to provide the team context to a model provider. It makes use of the following API pieces:

The model provider can use this information to determine what model tree to build, the synchronization state of model elements and what additional elements need to be displayed.

/**
 * Allows a model provider to build a view of their model that includes
 * synchronization information with a remote location (usually a repository).
 * The scope of the context is defined when the context is created. The creator
 * of the scope may affect changes on the scope which will result in property
 * change events from the scope and may result in sync-info change events from
 * the sync-info tree. Clients should note that it is possible that a change in
 * the scope will result in new out-of-sync resources being covered by the scope
 * but not result in a sync-info change event from the sync-info tree. This can
 * occur because the set may already have contained the out-of-sync resource
 * with the understanding that the client would have ignored it. Consequently,
 * clients should listen to both sources in order to guarantee that they update
 * any dependent state appropriately.
 * <p>
 * This interface is not intended to be implemented by clients.
 *
 * @since 3.2
 */
public interface ISynchronizationContext {

	/**
	 * Synchronization type constant that indicates that
	 * the context is a two-way synchronization.
	 */
	public final static String TWO_WAY = "two-way"; //$NON-NLS-1$

	/**
	 * Synchronization type constant that indicates that
	 * the context is a three-way synchronization.
	 */
	public final static String THREE_WAY = "three-way"; //$NON-NLS-1$

	/**
	 * Return the scope of this synchronization context. The scope determines
	 * the set of resources to which the context applies. Changes in the scope
	 * may result in changes to the sync-info available in the tree of this
	 * context.
	 *
	 * @return the set of mappings for which this context applies.
	 */
	public ISynchronizeScope getScope();

	/**
	 * Return a tree that contains SyncInfo nodes for resources
	 * that are out-of-sync. The tree will contain sync-info for any out-of-sync
	 * resources that are within the scope of this context. The tree
	 * may include additional out-of-sync resources, which should be ignored by
	 * the client. Clients can test for inclusion using the method
	 * {@link ISynchronizeScope#contains(IResource)}.
	 *
	 * @return a tree that contains a SyncInfo node for any
	 *         resources that are out-of-sync.
	 */
	public SyncInfoTree getSyncInfoTree();

	/**
	 * Returns synchronization info for the given resource, or null
	 * if there is no synchronization info because the resource is not a
	 * candidate for synchronization.
	 * Note that sync info may be returned for non-existing resources or for
	 * resources which have no corresponding remote resource.
	 * This method will be quick. If synchronization calculation requires content from
	 * the server it must be cached when the context is created or refreshed. A client should
	 * call refresh before calling this method to ensure that the latest information
	 * is available for computing the sync state.
	 *
	 * @param resource the resource of interest
	 * @return sync info
	 * @throws CoreException
	 */
	public SyncInfo getSyncInfo(IResource resource) throws CoreException;

	/**
	 * Return the synchronization type. A type of TWO_WAY
	 * indicates that the synchronization information (i.e.
	 * SyncInfo) associated with the context will also be
	 * two-way (i.e. there is only a remote but no base involved in the
	 * comparison used to determine the synchronization state of resources). A
	 * type of THREE_WAY indicates that the synchronization
	 * information will be three-way and include the local, base (or ancestor)
	 * and remote.
	 *
	 * @return the type of merge to take place
	 */
	public String getType();

	/**
	 * Dispose of the synchronization context. This method should be
	 * invoked by clients when the context is no longer needed.
	 */
	public void dispose();

	/**
	 * Refresh the context in order to update the sync-info to include the
	 * latest remote state. Any changes will be reported through the change
	 * listeners registered with the sync-info tree of this context. Changes to
	 * the set may be triggered by a call to this method or by a refresh
	 * triggered by some other source.
	 *
	 * @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
	 * @param traversals the resource traversals which indicate which resources
	 *            are to be refreshed
	 * @param flags additional refresh behavior. For instance, if
	 *            RemoteResourceMappingContext.FILE_CONTENTS_REQUIRED
	 *            is one of the flags, this indicates that the client will be
	 *            accessing the contents of the files covered by the traversals.
	 *            NONE should be used when no additional behavior
	 *            is required
	 * @param monitor a progress monitor, or null if progress
	 *            reporting is not desired
	 * @throws CoreException if the refresh fails. Reasons include:
	 *             the server could not be contacted for some reason (e.g.
	 *             the context in which the operation is being called must be
	 *             short running).
	 */
	public void refresh(ResourceTraversal[] traversals, int flags, IProgressMonitor monitor) throws CoreException;
}
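The filtering contract described for getSyncInfoTree above (the tree may contain more out-of-sync resources than the scope covers, and clients test for inclusion with the scope) can be sketched as follows. Scope and SyncTree are simplified stand-ins for the proposed ISynchronizeScope and SyncInfoTree, not the real types.

```java
import java.util.*;

// Stand-in for ISynchronizeScope: the set of resources the context covers.
class Scope {
    private final Set<String> resources = new HashSet<>();
    Scope(String... rs) { for (String r : rs) resources.add(r); }
    boolean contains(String resource) { return resources.contains(resource); }
}

// Stand-in for SyncInfoTree: all out-of-sync resources known to the context,
// possibly more than the scope covers.
class SyncTree {
    private final Set<String> outOfSync = new HashSet<>();
    SyncTree(String... rs) { for (String r : rs) outOfSync.add(r); }

    // Out-of-sync resources relevant to the given scope; extras are
    // ignored, mirroring the "test for inclusion" rule in the javadoc.
    Set<String> outOfSyncIn(Scope scope) {
        Set<String> result = new TreeSet<>();
        for (String r : outOfSync)
            if (scope.contains(r)) result.add(r);
        return result;
    }
}
```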


Maintaining Workspace Consistency

Model tools in Eclipse are typically layered. At the very least, there is the model layer (e.g. Java) and the file-system layer (i.e. IResource). However, in some cases, there may be more than two layers (e.g. J2EE<->Java<->IResource).

Operation Participation

There is already refactoring participant support in Eclipse which appears to meet several of the requirements logical models have. The original proposal for refactoring participation is described here. The implementation does vary slightly from what is in the proposal but the proposal is still a good description of the concepts involved.

Here is the summary of the features taken from the document:

One possibility was to support participation in operations at all levels. That is, JDT could participate in IResource level operations in order to react to resource level changes. For instance, Java could participate in a *.java file rename in the Resource Navigator and update any references appropriately (thus treating the file rename as a Java compilation unit, or CU, rename). This would lead to the following additional requirements:

Experiments were done by JDT in Eclipse 3.0 and the following observations were made for a package rename vs. a folder rename in which Java participates:

The next section addresses these issues by combining operation retargeting with participation.

Operation Retargeting

To ensure that participants access models in a consistent state, all operations have to be executed on the highest level model, and the operation has to describe what happens in the lower level models in order to load the corresponding lower level participants. For example, when renaming a CU, the rename refactoring also loads participants interested in file renames, since a CU rename renames the file in the underlying resource model. However, the system should help the user keep higher level models consistent when manipulating lower level models. One approach would be for the system to inform the user about those situations and allow the triggering of the higher level operation instead. For example, a rename of a *.java file in the Resource Navigator could show a dialog telling the user that, for model consistency, the file is better renamed using the Java Rename refactoring, and asking whether the user wants to execute this action instead. Doing so has the nice side effect that models are not forced to use the LTK participant infrastructure; how to participate could be left open to the plug-in providing the model operations.
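A minimal sketch of the retargeting lookup, assuming a hypothetical registry in which models claim resources and suggest the equivalent higher level operation (none of these names are proposed API):

```java
import java.util.*;

// Before running a low-level resource operation, ask whether a
// higher-level model claims the resource and offers an equivalent
// model-level operation to run instead.
class RetargetRegistry {
    interface Retargeter {
        // Returns the suggested model-level operation, or null if this
        // model does not claim the resource.
        String retarget(String resourcePath);
    }

    private final List<Retargeter> retargeters = new ArrayList<>();
    void register(Retargeter r) { retargeters.add(r); }

    // The operation to suggest to the user: the first model-level claim
    // wins; otherwise the plain resource operation proceeds.
    String operationFor(String resourcePath) {
        for (Retargeter r : retargeters) {
            String op = r.retarget(resourcePath);
            if (op != null) return op;
        }
        return "Resource Rename";
    }
}
```

A first-claim-wins policy is a simplification; as the next paragraph notes, peer models claiming the same resource may instead require asking the user to pick.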

One potential complication arises when multiple models want to "own" or "control" a resource. This is less of an issue if one is a higher level model built on top of a lower level one. For instance, a J2EE model may override the Java model and assume ownership of any Java files that are J2EE artifacts, such as EJBs. However, problems arise if the two models are peers. For instance, there may be several models that are generated from a WSDL (web services) descriptor file. The user may need to pick which model gets control for operations performed directly on the resource.

Note that this feature area has a great deal of overlap with the Improve Action Contributions work being proposed by the UI team.

Operation Veto

It is not clear that operation retargeting is always desirable. That is, if a user performs a delete on a file, it may be disconcerting if the delete is actually performed on an EJB that consists of several files. An alternate approach is to detect when an operation on a lower level model may have an effect on a higher level model and ask the user to confirm that they really do want to perform the operation on the lower level model.
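The veto-style check could be sketched like this, with purely illustrative names: higher level models register a predicate that flags resources belonging to a larger artifact, and the operation asks for confirmation when any model matches.

```java
import java.util.*;
import java.util.function.Predicate;

// Detect when a low-level operation touches a resource that a
// higher-level model considers part of a larger artifact.
class VetoCheck {
    private final List<Predicate<String>> models = new ArrayList<>();
    void register(Predicate<String> ownsResource) { models.add(ownsResource); }

    // true = some higher-level model is affected, so the user should be
    // asked to confirm before the low-level operation proceeds
    boolean needsConfirmation(String resourcePath) {
        for (Predicate<String> m : models)
            if (m.test(resourcePath)) return true;
        return false;
    }
}
```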

Team Operations on Model Elements

The support for having Team operations appear in the context menu of logical elements is based on ResourceMappings. This support was available as non-API in 3.1 and is described in the Support Logical Resources - Resource Mappings document. Here is a summary of what is required for this:

A RemoteResourceMappingContext is a means to allow the model to see the state of the repository at a particular point in time. There are many different terms used by different repository tools to identify this type of view of the repository including version, branch, configuration, view, snapshot, or baseline. The type of operation being performed dictates what file states are accessible from the RemoteResourceMappingContext. For example, when updating the local workspace to match the latest contents on the server, the context would need to allow the client to access the latest contents for remote files whose content differs from their local counterparts in order to allow the model to determine if there are additional files that should be included in the update. When committing, the context would need to provide the ancestor state of any locally modified files so that the model could ascertain if there are any outgoing deletions.
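The dependence of the accessible file states on the operation kind might be sketched like this. FileState and MappingContext are illustrative stand-ins for the states a RemoteResourceMappingContext would serve, not the real API.

```java
import java.util.*;

// The three states a file can have in a three-way comparison.
class FileState {
    final String local, base, remote;
    FileState(String local, String base, String remote) {
        this.local = local; this.base = base; this.remote = remote;
    }
}

// An update-style context serves remote contents for files that differ;
// a commit-style context serves base (ancestor) contents for locally
// modified files, as described in the paragraph above.
class MappingContext {
    enum Kind { UPDATE, COMMIT }
    private final Kind kind;
    private final Map<String, FileState> states = new HashMap<>();
    MappingContext(Kind kind) { this.kind = kind; }
    void put(String path, FileState s) { states.put(path, s); }

    // Contents the model is allowed to fetch for this kind of operation.
    String fetch(String path) {
        FileState s = states.get(path);
        if (s == null) return null;
        return kind == Kind.UPDATE ? s.remote : s.base;
    }
}
```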

There are still some outstanding issues that need to be solved in this area.

The following sections describe proposed solutions to these issues

Team Operation Input Determination

In order to ensure that the proper resources are included as the input to a team operation, we introduce the concept of a model provider. A model provider has the following:

Model providers would be used in the following way to ensure that the proper resources were included in a team operation.

This mechanism can be used to ensure that operations performed directly on files include all the files that constitute a model, and that the effects can be displayed to the user in a form consistent with the higher level models that are affected. This will be covered separately in the Displaying Model Elements in Team Operations section.

Team Operation Lifecycle

Most Team operations have multiple steps. To illustrate this, consider an update operation. The steps of the operation, considering the inclusion of resource mappings and other facilities described in this proposal are:

  1. Determine the complete set of resource mappings that are potentially being updated.
  2. Display this to the user in a model specific way so they can confirm the operation.
  3. If the operation is confirmed, delegate the merge to the model so it can attempt to automatically merge the model elements being updated using model semantics to aid in the merging.
  4. If automatic merge of one or more model elements is not possible, perform a manual merge on these elements by obtaining appropriate compare editor inputs from the model provider.

Each of these steps may involve separate calls from the repository tooling to the model tooling. The model would not want to recompute the remote model state during each step but would rather cache any computed state until the entire operation is completed. One means of supporting this is to add listener support to the team context associated with the operation and have an event fired when the operation using the context is completed.
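The caching idea above can be sketched as follows: the model caches remote state for the duration of an operation and drops it when the context signals completion. The listener and cache shapes here are assumptions for illustration, not proposed API.

```java
import java.util.*;

// Stand-in for the team context's proposed completion notification.
class OperationContext {
    private final List<Runnable> completionListeners = new ArrayList<>();
    void onCompleted(Runnable listener) { completionListeners.add(listener); }
    void completed() { for (Runnable r : completionListeners) r.run(); }
}

// The model's cache of computed remote state, valid for one operation.
class RemoteModelCache {
    private final Map<String, String> cache = new HashMap<>();
    private int computations = 0;

    RemoteModelCache(OperationContext context) {
        context.onCompleted(cache::clear); // flush when the operation ends
    }

    // Compute the remote state at most once per operation, then reuse it
    // across the operation's steps.
    String remoteState(String element) {
        return cache.computeIfAbsent(element, e -> {
            computations++;
            return "state-of-" + e;
        });
    }
    int computations() { return computations; }
}
```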

View Structure vs. Model Structure

The contents of views in Eclipse are determined by a content provider. In most cases, the structure of what is displayed matches the model structure but in some cases it does not. One such example is the package explorer when it is in hierarchy mode. In this mode, the children of a package are its files and its subpackages. When the user performs an operation on a package in this view, they may reasonably expect the operation to be performed on the package and its subpackages. However, the package adapts to a resource mapping that only includes the files and not the subpackages.

The simplest solution to this problem is to require that the content providers wrap objects when they need adaptation to resource mappings and are displayed in a way that does not match the model structure. In our Java example, this would mean creating a new model object (e.g. DeepJavaPackage) whose children were the Java classes and subpackages. The advantage of this approach is that the process of converting a model object to a resource mapping can be performed by the model without any knowledge of the view configuration. Some of the concerns of this approach are:

Another solution to this problem would be to:

The advantage of this approach is that model tooling can still use content providers to provide alternate views of their model without wrapping model objects or providing new model objects. The disadvantages are:

Given the complexity of the second solution, the first is preferable from an implementation standpoint. However, we need to determine if clients can accept this solution.

Team Decorator Support

This section describes the support that is proposed to be added in Eclipse 3.2 to support the decoration of logical model elements. In Eclipse 3.1 and prior, logical model elements could still be decorated. However, the only inter-model adaptability support was for models whose elements had a one-to-one mapping to file system resources (i.e. IResource). Here is a summary of the issues that we are hoping to address in 3.2.

  1. General adaptability of lightweight decorators. In other words, an element of one model can be adapted to the element of another for the purpose of determining the label decorations for the original element. This is available in 3.2 M1 (see bug 86159).
  2. Additional support for the decoration of model elements that adapt to ResourceMapping. ResourceMapping decoration makes use of the general adaptability mechanism but also requires support for triggering label updates for any logical element whose decoration depends on the state of one or more resources (see bug 86493).
  3. Support for the proper dirty decoration of elements that are contained in a file but whose state depends on the contents of the file. An example of this is Java methods in a file. If you make a change in a Java file that is mapped to CVS, the file is decorated as dirty. It would be beneficial if the dirty decoration could also be placed on those particular methods or class members that are dirty. This is much more important for models where the user may not be as directly aware of the relationship between a model element and the file in which it is persisted.

As stated above, point one has already been completed. The following sections describe potential solutions to the remaining two problems. The first two sections describe potential solutions using the existing architecture while the third presents a unified solution that makes use of the team context described in the Common Navigator section.

Updating of Model Element Labels

Some repository decorations are propagated to the root of any views that display elements shared in the repository. This is done in order to provide useful information to the user. For instance, the "shared with CVS" decoration (by default, the icon) should appear on any object on which a CVS operation can be performed. Similarly, the dirty decoration (by default, a ">" prefix) should appear on any view items containing a dirty child in order to help the user find dirty items. For the purpose of discussion, we will use dirty decoration when describing our proposal but the same will hold true for other decorations that require propagation.

When a file becomes dirty, a label change must be issued for any items visible to the user whose dirty state has changed or that are a direct or indirect parent of such an item. When we are dealing strictly with file system resources, this is straightforward. When a file becomes dirty, a label change is issued for the file and the folders and project containing the file. Any views that are displaying these items will then update their labels. It is the responsibility of models that have a one-to-one mapping from files to model elements to update the labels of the corresponding model elements as well. For instance, JDT maps the file, folder and project label changes to label changes on Java model elements such as Compilation Units, Packages and Java Projects so that decorations in the Packages Explorer get updated properly.
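The resource-level propagation just described amounts to a simple ancestor walk. The helper below computes, for a file that became dirty, the paths whose labels need updating: the file itself plus its containing folders and project. Purely illustrative; path handling is simplified to slash-separated strings.

```java
import java.util.*;

// When a file becomes dirty, a label change is issued for the file and
// for every containing folder up to (and including) the project.
class LabelChanges {
    static List<String> affectedBy(String filePath) {
        List<String> result = new ArrayList<>();
        String path = filePath;
        while (!path.isEmpty()) {
            result.add(path);
            int slash = path.lastIndexOf('/');
            // Stop once the project segment has been added.
            path = slash <= 0 ? "" : path.substring(0, slash);
        }
        return result;
    }
}
```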

However, problems arise for logical model elements that do not have a one-to-one mapping to file resources. For instance, consider a working set that contains several projects. The repository provider does not know that the working set is being displayed to the user and so does not issue a label update for it. The view displaying the working set does not know when the state of the children impacts the label of the parent. It could try to fake it by updating the working set label whenever the label of any child is updated, but this could result in many unnecessary and potentially costly updates.

The following points summarize the aspects of the problem that should be considered when showing repository decorations in a model view:

  1. The repository deals with changes at the file system level so repository decoration changes occur on files or folders.
  2. The model views deal with decoration on model elements so label changes are issued on model elements. A means to translate file changes to model element changes is required.
  3. The repository may have some decorated properties, such as the dirty state, that are derived from the model view structure and not the file structure (i.e. all parents of a dirty item in a view should be decorated as dirty).
  4. Recalculating the label for a model element may be costly so, if possible, label changes should only be issued if there is an actual change in the state that determines the label.

It is interesting to note that the requirement in point 2 can be solved using the Operation Participation mechanism described previously. However, addressing the last two points will require additional support. The next two sections describe a potential solution. It is useful to note that any solution we come up with must consider the broader direction that decoration support in Eclipse will take. We have tried to consider this when drafting this proposal.

Decoration Change Notification

Currently, a decoration change is broadcast implicitly by issuing label change events for the elements that need redecoration. From a repository tooling standpoint, this means generating a label change on any changed file resources (and their ancestor resources if the decorator that represents the changed state is propagated to parents). It is then up to the model tooling to translate these label changes on file resources to label changes on the appropriate model elements.

An alternative approach would be to make the decoration change notification explicit. Thus, the repository tooling could issue a decoration change event that contains the resources that need redecoration. It would then be up to any views that are displaying a repository decoration to update the label of any elements appropriately. This would mean determining the set of elements that correspond to the given resources.

As stated in point 4 above, a possible optimization is to only issue the label change if the state of the decoration has changed. This can be accomplished by including, as part of the change notification event, a property evaluator that evaluates and caches the properties for each element it is provided and indicates whether a change has occurred which requires the item to be redecorated.
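The property-evaluator optimization can be sketched as follows: cache the decorated property per element and report a change only when the newly computed value differs from the cached one. The evaluator shape is a hypothetical helper, not proposed API.

```java
import java.util.*;
import java.util.function.Function;

// Caches the dirty state per element; evaluate() reports whether the
// state changed, i.e. whether a label update is actually required.
class PropertyEvaluator {
    private final Function<String, Boolean> isDirty;
    private final Map<String, Boolean> cache = new HashMap<>();
    PropertyEvaluator(Function<String, Boolean> isDirty) { this.isDirty = isDirty; }

    // true = the element's dirty state changed, so redecorate it
    boolean evaluate(String element) {
        boolean now = isDirty.apply(element);
        Boolean before = cache.put(element, now);
        // First evaluation: only dirty elements need a decoration.
        return before == null ? now : before != now;
    }
}
```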

Decoration Propagation

In the previous section we mentioned the possibility of having a property evaluator that indicated whether a label change was required. This evaluator could also indicate whether a reevaluation for the parent of the element is required. That is, if the evaluator calculated that the dirty state of the element had changed, it could indicate that the label update was required and that the evaluator should be executed with the parent element as input in order to determine if a label change was required for the parent and if the process should be repeated for the parent element's parent.
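The propagation loop above can be sketched as a walk up the parent chain that stops as soon as an element's state no longer changes. The element and parent representations are simplified stand-ins for illustration.

```java
import java.util.*;

// Re-evaluate the changed element, then walk up the parent chain only as
// long as the evaluation reports a state change (here: an element newly
// becoming dirty).
class Propagator {
    private final Map<String, String> parent;          // child -> parent
    private final Set<String> dirty = new HashSet<>(); // derived dirty state
    Propagator(Map<String, String> parent) { this.parent = parent; }

    // Mark the element dirty and return every element whose label changed.
    List<String> elementBecameDirty(String element) {
        List<String> changed = new ArrayList<>();
        String current = element;
        // dirty.add() is false once a parent is already dirty, which
        // stops the walk: its label (and its ancestors') is unchanged.
        while (current != null && dirty.add(current)) {
            changed.add(current);
            current = parent.get(current);
        }
        return changed;
    }
}
```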

This calculation could be long running. Thus, it should be performed in a background job with minimal use of the UI thread. This may be a bit tricky as JFace viewers are not thread-safe (i.e. they are mostly invoked from the UI thread). The current JFace viewers persist the tree of elements in the SWT tree items so accessing them needs to be run in the UI thread. Also, label changes need to be run in the UI thread. These factors must be considered when designing a solution.

Sub-File Level Dirty Decorations

In this section we present API on ResourceMapping that supports change determination on logical model elements. With this API, the algorithm used by the decorator would be this:

In addition to the API on ResourceMapping, it would also be beneficial to provide an abstract lightweight decorator that team providers can use to get the above described behavior.

ResourceMapping Changes

Here are the proposed API additions to the ResourceMapping class. Note that there would be additional API added to ResourceMapping and RemoteResourceMappingContext to aid models in their calculation of the change state.

public abstract class ResourceMapping {

	/**
	 * Constant returned by calculateChangeState to indicate that
	 * the model object of this resource mapping does not differ from the
	 * corresponding object in the remote location.
	 */
    public static final int NO_DIFFERENCE = 0;

	/**
	 * Constant returned by calculateChangeState to indicate that
	 * the model object of this resource mapping differs from the corresponding
	 * object in the remote location.
	 */
    public static final int HAS_DIFFERENCE = 1;

	/**
	 * Constant returned by calculateChangeState to indicate that
	 * the model object of this resource mapping may differ from the
	 * corresponding object in the remote location. This is returned when
	 * calculateChangeState was not provided with a progress monitor and the
	 * remote state of the object was not cached.
	 */
    public static final int MAY_HAVE_DIFFERENCE = 2;

	/**
	 * Calculate the change state of the local object when compared to its
	 * remote representation. If server contact is required to properly
	 * calculate the state but is not allowed (as indicated by an exception with
	 * the code
	 * RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED),
	 * MAY_HAVE_DIFFERENCE should be returned. Otherwise,
	 * NO_DIFFERENCE or HAS_DIFFERENCE should be
	 * returned as appropriate. Subclasses may override this method.
	 * It is assumed that, when server contact is prohibited, the methods
	 * RemoteResourceMappingContext#contentDiffers and
	 * RemoteResourceMappingContext#fetchMembers of the context
	 * provided to this method can be called without contacting the server.
	 * Clients should ensure that this is how the context they provide behaves.
	 *
	 * @param context a resource mapping context
	 * @param monitor a progress monitor or null. If
	 *            null is provided, the server will not be
	 *            contacted and MAY_HAVE_DIFFERENCE will be
	 *            returned if the change state could not be properly determined
	 *            without contacting the server.
	 * @return the calculated change state: HAS_DIFFERENCE if
	 *         the object differs, NO_DIFFERENCE if it does not,
	 *         or MAY_HAVE_DIFFERENCE if server contact is
	 *         required to calculate the state.
	 * @throws CoreException
	 */
    public int calculateChangeState(
            RemoteResourceMappingContext context,
            IProgressMonitor monitor)
            throws CoreException {
        try {
			int changeState = ...
			return changeState;
		} catch (CoreException e) {
			if (e.getStatus().getCode() == RemoteResourceMappingContext.SERVER_CONTACT_PROHIBITED)
				return MAY_HAVE_DIFFERENCE;
			throw e;
		}
    }
}
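The abstract lightweight decorator mentioned earlier could map the three change-state constants to decorations along these lines. The constants mirror the proposed values; the decorator class and the "?" marker for an unconfirmed difference are illustrative assumptions.

```java
// Maps the proposed change-state constants to a text decoration: ">"
// marks a known difference, "?" marks a possible difference that could
// not be confirmed without contacting the server.
class DirtyDecorator {
    static final int NO_DIFFERENCE = 0;
    static final int HAS_DIFFERENCE = 1;
    static final int MAY_HAVE_DIFFERENCE = 2;

    static String prefixFor(int changeState) {
        switch (changeState) {
            case HAS_DIFFERENCE: return ">";
            case MAY_HAVE_DIFFERENCE: return "?";
            default: return "";
        }
    }

    static String decorate(String label, int changeState) {
        return prefixFor(changeState) + label;
    }
}
```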


Team Aware Model Views

The complexities described in the previous sections arise because of the separation of models and decorators. An alternate approach would be to use the team context discussed in the Common Navigator section for any model view. Such support would work something like this.

The details would be the same as those discussed in the Common Navigator section. This would simplify the decorator update story: the view would listen to both resource deltas and team deltas and update model elements and labels appropriately. The model would have enough information available from the team context to make decisions about propagation in any way it deems appropriate. Models would also be able to determine the change state of their model elements for themselves, so no additional API on ResourceMapping would be required.

Model Level Merging

There are two types of merges that can take place: automatic and manual. Automatic merges (or auto-merges) are merges that either do not contain file level conflicts or whose file level conflicts can be resolved without user intervention. Manual merges require the user to inspect the conflicting changes and decide how to resolve them. In either case, involvement of the model is beneficial. For auto-merges, model knowledge can increase the likelihood of an automatic merge being possible, and for manual merges, model involvement can enhance how the merges are displayed and performed.
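The distinction above rests on the classic three-way decision rule. The following is a minimal, illustrative sketch of that rule applied to a single unit of content (a file or a model element); the type and method names are hypothetical, not Eclipse API.

```java
import java.util.Objects;

public class ThreeWayMerge {
    /** Result of attempting to auto-merge one unit of content. */
    public enum Outcome { TAKE_LOCAL, TAKE_REMOTE, CONFLICT }

    /**
     * Decide the outcome for a single unit given its base (common
     * ancestor), local and remote states.
     */
    public static Outcome decide(String base, String local, String remote) {
        if (Objects.equals(local, remote)) {
            return Outcome.TAKE_LOCAL;   // both sides agree (possibly both unchanged)
        }
        if (Objects.equals(local, base)) {
            return Outcome.TAKE_REMOTE;  // only the remote side changed
        }
        if (Objects.equals(remote, base)) {
            return Outcome.TAKE_LOCAL;   // only the local side changed
        }
        return Outcome.CONFLICT;         // both changed differently: needs the user
    }

    public static void main(String[] args) {
        System.out.println(decide("a", "a", "b")); // TAKE_REMOTE
        System.out.println(decide("a", "b", "a")); // TAKE_LOCAL
        System.out.println(decide("a", "b", "c")); // CONFLICT
    }
}
```

An auto-merge succeeds when no unit falls into the CONFLICT case; anything in that case must be escalated to a manual merge.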

In this section we describe the API we propose to add to support model merging:

Given a set of resource mappings, the repository tooling needs to be able to obtain the model tooling support classes which will perform the merging. This will require:

The steps for performing an optimistic merge would then look something like this:

When the model is asked to merge elements, either automatically or manually, it will need access to the remote state of the model. API for this is also being proposed.

Auto-merging API

In this section, we propose some API that will allow for model based auto-merging. Before we do that, we should first mention that Eclipse has a pluggable IStreamMerger (introduced in 3.0) for supporting model based merges when there is a one-to-one correspondence between a file and a model object. It is not currently used by CVS (or, as far as we know, any other repository provider), but it can be part of the solution we propose here.

The proposed API to support model level merges consists of the following:

Resource Mapping Merger

Below is what the IResourceMappingMerger would look like. It contains a merge method whose semantics differ depending on the type of the merge context: a merge is performed for three-way synchronizations and a replace occurs for two-way contexts. The model can determine which model elements need to be merged by consulting the merge context, which is presented in the next section.

/**
 * The purpose of this interface is to provide support to clients (e.g.
 * repository providers) for model level auto-merging. It is helpful in the
 * cases where a file may contain multiple model elements or a model element
 * consists of multiple files. It can also be used for cases where there is a
 * one-to-one mapping between model elements and files, although
 * IStreamMerger can also be used in that case.
 * Clients should determine if a merger is available for a resource mapping
 * using the adaptable mechanism as follows:
 *     Object o = mapping.getModelProvider().getAdapter(IResourceMappingMerger.class);
 *     if (o instanceof IResourceMappingMerger) {
 *        IResourceMappingMerger merger = (IResourceMappingMerger) o;
 *        ...
 *     }
 * Clients should group mappings by model provider when performing merges.
 * This will give the merge context an opportunity to perform the
 * merges optimally.
 * @since 3.2
 */
public interface IResourceMappingMerger {

	/**
	 * Attempt to automatically merge the mappings of the merge context
	 * (MergeContext#getMappings()). The merge context provides access to the
	 * out-of-sync resources (MergeContext#getSyncInfoTree())
	 * associated with the mappings to be merged. However, the set of resources
	 * may contain additional resources that are not part of the mappings being
	 * merged. Implementors of this interface should use the mappings to
	 * determine which resources to merge and what additional semantics can be
	 * used to attempt the merge.
	 * The type of merge to be performed depends on what is returned by the
	 * MergeContext#getType() method. If the type is
	 * MergeContext.TWO_WAY, the merge will replace the local
	 * contents with the remote contents, ignoring any local changes. For
	 * THREE_WAY, the base is used to attempt to merge remote
	 * changes with local changes.
	 * Auto-merges should be performed for as many of the context's resource
	 * mappings as possible. If merging was not possible for one or more
	 * mappings, these mappings should be returned in a
	 * MergeStatus whose code is
	 * MergeStatus.CONFLICTS and which provides access to the
	 * mappings which could not be merged. Note that it is up to the model to
	 * decide whether it wants to break one of the provided resource mappings
	 * into several sub-mappings and attempt auto-merging at that level.
	 * @param mergeContext a context that provides access to the resources
	 *            involved in the merge. The context must not be
	 *            null.
	 * @param monitor a progress monitor
	 * @return a status indicating the results of the operation. A code of
	 *         MergeStatus.CONFLICTS indicates that some or all
	 *         of the resource mappings could not be merged. The mappings that
	 *         were not merged are available using
	 *         MergeStatus#getConflictingMappings()
	 * @throws CoreException if errors occurred
	 */
    public IStatus merge(IMergeContext mergeContext,
            IProgressMonitor monitor) throws CoreException;
}


It is interesting to note that partial merges are possible. In such a case, the merge method must be sure to return a MergeStatus that contains any resource mappings for which the merge failed. These mappings could match some of the mappings passed in or could be mappings of sub-components of the larger mapping for which the merge was attempted, at the discretion of the implementer.
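The grouping recommended in the Javadoc above can be sketched as follows. ModelMapping and the autoMerge predicate are hypothetical stand-ins for ResourceMapping and IResourceMappingMerger, not Eclipse API; the sketch simply shows mappings being grouped per model provider and the unmerged ones collected, in the spirit of MergeStatus.CONFLICTS.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class GroupedMerge {
    // Hypothetical stand-in for ResourceMapping.
    record ModelMapping(String modelProviderId, String element) {}

    /**
     * Attempt to merge every mapping, one model provider at a time, and
     * return the mappings that conflicted (empty list means full success).
     */
    static List<ModelMapping> mergeAll(List<ModelMapping> mappings,
                                       Predicate<ModelMapping> autoMerge) {
        // Group by model provider so each provider sees all of its mappings at once.
        Map<String, List<ModelMapping>> byProvider = new LinkedHashMap<>();
        for (ModelMapping m : mappings) {
            byProvider.computeIfAbsent(m.modelProviderId(), k -> new ArrayList<>()).add(m);
        }
        List<ModelMapping> conflicts = new ArrayList<>();
        for (List<ModelMapping> group : byProvider.values()) {
            for (ModelMapping m : group) {
                if (!autoMerge.test(m)) {
                    conflicts.add(m); // left for manual merging
                }
            }
        }
        return conflicts;
    }
}
```

The returned conflict list plays the role of MergeStatus#getConflictingMappings(): the caller can hand exactly those mappings to a manual merge UI.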

Merge Context

In order for repository tooling to support model level merging, it must be able to provide an IMergeContext. The merge context provides:

The following is the proposed API methods of the merge context.

/**
 * Provides the context for an IResourceMappingMerger
 * or a model specific synchronization view that supports merging.
 * TODO: Need to have a story for folder merging
 * This interface is not intended to be implemented by clients.
 * @see IResourceMappingMerger
 * @since 3.2
 */
public interface IMergeContext extends ISynchronizationContext {

	/**
	 * Method that allows the model merger to signal that the file in question
	 * has been completely merged. Model mergers can call this method if they
	 * have transferred all changes from a remote file to a local file and wish
	 * to signal that the merge is done. This will allow repository providers to
	 * update the synchronization state of the file to reflect that the file is
	 * up-to-date with the repository.
	 * Clients should not implement this interface but should instead subclass
	 * MergeContext.
	 * @see MergeContext
	 * @param file the file that has been merged
	 * @param monitor a progress monitor
	 * @return a status indicating the results of the operation
	 */
	public abstract IStatus markAsMerged(IFile file, IProgressMonitor monitor);
	/**
	 * Method that can be called by the model merger to attempt a file-system
	 * level merge. This is useful for cases where the model merger does not
	 * need to do any special processing to perform the merge. By default, this
	 * method attempts to use an appropriate IStreamMerger to
	 * merge the files covered by the provided traversals. If a stream merger
	 * cannot be found, the text merger is used. If this behavior is not
	 * desired, sub-classes may override this method.
	 * This method makes a best-effort attempt to merge all the files covered
	 * by the provided traversals. Files that could not be merged will be
	 * indicated in the returned status. If the status returned has the code
	 * MergeStatus.CONFLICTS, the list of failed files can be
	 * obtained by calling the MergeStatus#getConflictingFiles()
	 * method.
	 * Any resource changes triggered by this merge will be reported through the
	 * resource delta mechanism and the sync-info tree associated with this context.
	 * TODO: How do we handle folder removals generically?
	 * @see SyncInfoSet#addSyncSetChangedListener(ISyncInfoSetChangeListener)
	 * @see org.eclipse.core.resources.IWorkspace#addResourceChangeListener(IResourceChangeListener)
	 * @param infos the sync infos to be merged
	 * @param monitor a progress monitor
	 * @return a status indicating success or failure. A code of
	 *         MergeStatus.CONFLICTS indicates that a file
	 *         contains non-mergeable conflicts and must be merged manually.
	 * @throws CoreException if an error occurs
	 */
	public IStatus merge(SyncInfoSet infos, IProgressMonitor monitor) throws CoreException;

	/**
	 * Method that can be called by the model merger to attempt a file level
	 * merge. This is useful for cases where the model merger does not need to
	 * do any special processing to perform the merge. By default, this method
	 * attempts to use an appropriate IStreamMerger to perform the
	 * merge. If a stream merger cannot be found, the text merger is used. If this behavior
	 * is not desired, sub-classes may override this method.
	 * @param info the sync info for the file to be merged
	 * @param monitor a progress monitor
	 * @return a status indicating success or failure. A code of
	 *         MergeStatus.CONFLICTS indicates that the file contains
	 *         non-mergeable conflicts and must be merged manually.
	 */
	public IStatus merge(SyncInfo info, IProgressMonitor monitor);
}

Manual Merging

Providing the capability to manually merge a set of model elements requires two things:

The first requirement is met by the team context proposal outlined in the Common Navigator section. The second can be met by giving such a view access to the merge context discussed in the Model Level Merging section. This context provides enough state and functionality to display a two-way or three-way comparison and perform the merge.

Displaying Model Elements in Team Operations

There are two types of displays that a Team operation may need:

Both these requirements are met by the team context proposal outlined in the Common Navigator section.

Remote Discovery

There are two aspects to consider for this feature:

In the following sections we outline some specific scenarios and describe what would be required to support them.

Remote Browsing

Logical model browsing in the repository would need to be rooted at the project, as that is where the associations between the resources and the model providers are persisted. This leads to the following two requirements:

There are two options for providing the remote project contents to the model provider.

  1. Provide a new API, potentially similar in form to the RemoteResourceMappingContext, and require model providers to reimplement their model building code in terms of this API.
  2. Provide a means to present the remote project as an IProject to the model provider. In this case, the model provider could reuse the code it currently has for building its model.

The second option is definitely preferable from a model provider standpoint because of the potential to reuse existing code. There are, however, a few things to consider:

Several of the issues mentioned above would benefit from having an explicit distinction between projects that are remote views of a project state and those that are locally loaded in order to perform modifications.

Comparing Remote Versions

When browsing a remote logical model version, the user may then want to compare what they see with another version. If the browsing is done using a remote IProject, then the comparison is no different than if it were performed between a local copy of the model and a remote resource mapping context.

Viewing Logical Model Element History

The user may want to see the change history of a particular model element. In order to do that, we need the following.

The above is straightforward if there is a one-to-one mapping between files and model elements. Repositories can typically provide the history for a particular file efficiently. The model could then interpret each file revision as a model change (i.e. the model provider could show a list of model element changes using the timestamp of file changes as the timestamps for the model element changes). If a user opened a particular change, the model provider would then load and interpret the contents of the file in order to display it in an appropriate way.

In the case where there are multiple model objects in a single file (many-to-one), the model provider would need to interpret the contents of the file in order to determine if the model element of interest actually changed in any particular file change. This could result in potentially many file content fetches in a way for which repository tooling is not optimized (i.e. repository tooling is optimized to give you a time slice, not to retrieve all the revisions of the same file). One way to deal with this would be to have the model provider use the file history as the change history for the model element, with the understanding that the element may not have changed between entries. Another possibility would be to do the computation once and cache the result (i.e. the points at which each element in the file changed) to be used in the future. As new revisions of a file are released, the cache could be updated to contain the latest change history. This would only need to consider the newest revision, as the history doesn't change. It may even be possible to share this history description in the project.
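The caching idea above can be sketched as follows, with hypothetical types: each file revision carries the set of elements it touched, and because history is immutable the cache only ever needs to consume revisions newer than what it has already seen.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ElementHistoryCache {
    // Hypothetical stand-in for a file revision with the elements it changed.
    record Revision(int number, Set<String> changedElements) {}

    private final Map<String, List<Integer>> history = new HashMap<>();
    private int lastSeen = 0;

    /** Fold any new revisions into the per-element history (revisions in ascending order). */
    public void update(List<Revision> fileHistory) {
        for (Revision r : fileHistory) {
            if (r.number() <= lastSeen) continue; // already cached; history never changes
            for (String element : r.changedElements()) {
                history.computeIfAbsent(element, k -> new ArrayList<>()).add(r.number());
            }
            lastSeen = r.number();
        }
    }

    /** Revisions in which the given element actually changed. */
    public List<Integer> changesFor(String element) {
        return history.getOrDefault(element, List.of());
    }
}
```

In a real implementation, computing `changedElements` for a revision is the expensive part (it requires fetching and interpreting file contents), which is exactly why the result is worth caching.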

The final case to consider is when a model element spans multiple files (one-to-many). If the files that make up a model element never change, then it is simply a case of looking at the history of each file involved and building the element history from that. However, it becomes more complicated if the number or location of the files that make up a model element can change. The calculation of the change history can then become quite expensive, depending on how the files that make up a model element are determined. For example, determining what files make up a model element may require the contents of one or more files to be read. Thus, you end up in the same situation as the many-to-one case. The same solutions proposed for that case could also be used here.

The one-to-many case is interesting for another reason. Different repositories provide different types of history. For instance, CVS only provides file based history. In the UI, the history based CVS operations are only available for files but not for folders or projects. That's not to say that the history couldn't be obtained for a folder or project, it is just that it can be expensive to determine (i.e. would require transferring large amounts of information from the server). Other repositories could potentially provide higher level histories. For instance, Subversion treats commits atomically so the history of a project could be determined by obtaining the set of all commits that intersected with the project.

This is important because supporting Team operations on logical model elements blurs the distinction between files and folders. That is, logical model elements adapt to ResourceMappings which could be a part of a file, a complete file, a set of files, a single folder, a set of folders, etc. The question is whether the ability to see the history of a model element should be available for all model elements or for only some.

Supporting history on arbitrary model elements requires the repository to be able to produce a time slice for each interesting change. This may be possible for some repositories, such as Subversion, since it supports atomic commits. However, for others like CVS, there is no built-in way to determine all the files that belong to a single commit. This could potentially be deduced by looking at a set of file histories and grouping the changes by timestamp, but this would be a prohibitively expensive operation. Another possibility would be to present a reduced set of time slices based on version tags, but this has its own potential failings (i.e. tags are done at the file level as well, so there are no guarantees that a tag represents the complete time slice of a project).
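For illustration only, the timestamp-grouping heuristic mentioned above might look like the following (the types are hypothetical, and as noted, gathering the input for every file in a real CVS project is what makes this prohibitively expensive):

```java
import java.util.ArrayList;
import java.util.List;

public class CommitGrouper {
    // Hypothetical stand-in for one file revision from a CVS file history.
    record FileRevision(String file, String author, String comment, long timestamp) {}

    /**
     * Group per-file revisions into deduced "commits": consecutive revisions
     * with the same author and comment whose timestamps fall within a small
     * window of the first revision in the group. Input must be sorted by
     * timestamp.
     */
    static List<List<FileRevision>> group(List<FileRevision> revisions, long windowMillis) {
        List<List<FileRevision>> commits = new ArrayList<>();
        for (FileRevision r : revisions) {
            List<FileRevision> last = commits.isEmpty() ? null : commits.get(commits.size() - 1);
            FileRevision anchor = last == null ? null : last.get(0);
            if (anchor != null
                    && anchor.author().equals(r.author())
                    && anchor.comment().equals(r.comment())
                    && r.timestamp() - anchor.timestamp() <= windowMillis) {
                last.add(r);            // same deduced commit
            } else {
                List<FileRevision> commit = new ArrayList<>();
                commit.add(r);
                commits.add(commit);    // start a new deduced commit
            }
        }
        return commits;
    }
}
```

Each deduced commit approximates the time slice that an atomic-commit repository like Subversion would report directly.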

Loading Logical Models

Ideally, users would be able to browse their model structure in the repository and pick those items which they wish to transfer to their workspace (i.e. checkout). In Eclipse, projects are the unit of transfer between the repository and the local workspace. This has the following implications:

The majority of the work here would need to be done by the repository tooling. That is, they would need to provide remote browsing capabilities and support partial project loading if appropriate. The ability to support cross-project references would also need additional API in Team that allowed these relationships to be stated in such a way that they could be shared and converted back to a project.

The Potential Role of EMF

Although not part of the Platform, it is worthwhile to mention the potential role of EMF in many of the areas touched by this proposal. For EMF models, much of the required implementation could be done at the EMF level, thus simplifying what models would need to do. Some possibilities are:

The following sections mention some of the issues we've come across when prototyping using EMF.

Identifying Model Objects

One of the requirements for supporting team operations on logical models is to be able to identify and compare model elements. By default, EMF uses object identity to indicate that two model elements are the same element. This works when you only have one copy of the model. However, for team operations, there can be multiple copies of the model (i.e. local, ancestor and remote). EMF does support the use of GUIDs (i.e. when XMI is used) but it is not the default.

This gives rise to another issue. Team operations can involve up to 3 copies of a model. Putting and keeping all 3 models in memory has performance implications. A means of identifying a model element without requiring that the entire model be loaded would be helpful.


Another issue is that EMF objects do not implement IAdaptable, but any object that adapts to a ResourceMapping must. One solution would be to have EObject implement IAdaptable, but this is not possible since EObject cannot have dependencies on Eclipse. This means that the owner of the model must ensure that each of their model objects that adapt to ResourceMapping implements IAdaptable and that its getAdapter method matches that found in org.eclipse.core.runtime.PlatformObject. Another option is to remove the assumption made by clients that only objects that implement IAdaptable can be adapted. This is tricky since anyone can be a client of the adaptable mechanism. We can ensure that the SDK gets updated but can make no guarantees about other clients.
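The pattern the model owner would need to follow can be sketched as below. The interfaces here are simplified stand-ins for the org.eclipse.core.runtime types; a real implementation would delegate to the platform adapter manager, as PlatformObject does, rather than to a local registry.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class AdapterExample {
    // Simplified stand-in for org.eclipse.core.runtime.IAdaptable.
    interface IAdaptable { Object getAdapter(Class<?> adapterType); }

    // Hypothetical stand-in for ResourceMapping.
    record Mapping(Object modelObject) {}

    // Simplified stand-in for the platform adapter manager.
    static final Map<Class<?>, Function<Object, Object>> FACTORIES = new HashMap<>();
    static {
        FACTORIES.put(Mapping.class, Mapping::new);
    }

    // A model object that cannot extend PlatformObject but still adapts.
    static class ModelElement implements IAdaptable {
        public Object getAdapter(Class<?> adapterType) {
            Function<Object, Object> factory = FACTORIES.get(adapterType);
            return factory == null ? null : factory.apply(this);
        }
    }
}
```

The key point is that the lookup lives outside the model object, so the EMF model itself carries no Eclipse dependency beyond the small getAdapter shim.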

Team Scenarios

In this section, we describe what some Team scenarios might look like with the logical model integration enhancements we have discussed in previous sections. We will describe the scenarios in terms of CVS.

Updating the Workspace

In this scenario, the user selects one or more model elements and chooses Team>Update. Currently, each file that is updated gets its new contents from the server. For files that have both local and remote modifications, the server attempts a clean merge; if that is not possible, the file will end up containing CVS specific markup identifying the conflicting sections. For binary files, no merge is attempted. Instead, the old file is moved and the new file downloaded. In both cases, it is the user's responsibility to resolve the conflicts: by editing the file to remove any obsolete lines and the CVS conflict markup, or by deciding which version of the binary file to keep, respectively. It should be noted that this "after-the-fact" conflict resolution will not be acceptable for many higher level models.

The goal of a Team>Update is to do an auto-merge if possible and only involve the user if there are conflicts that need to be resolved. For operations in the file model space, this can be done on a file by file basis. That is, an auto-merge can be attempted on each file individually, and only those files for which the auto-merge is not possible would require user intervention. This should be fairly straightforward to implement for CVS. The IStreamMerger interface that was added in Eclipse 3.0 can be used to determine whether an auto-merge is possible and perform the merge if it is. The files for which an auto-merge is not possible could then be displayed in a dialog, compare editor or even the sync view in order to allow the user to resolve any conflicts.

It is not clear that this file-by-file approach would be adequate for merges involving higher level model elements. The reason is that it is possible for a model element to span files. Auto-merging one of those files while leaving another unmerged may corrupt the model on disk. The decision about when an auto-merge is possible and when it is not can only be made by the model tooling. Therefore, some portion of the merge will need to be delegated to the model.

There are several sub-scenarios to consider:


When updating a model element, it may be possible that the merge can be performed at the model level. In other words, if an IResourceMappingMerger is available for one or more resource mappings, the merge can be performed by the model without ever dropping down to a lower level merge (e.g. file level merge). This makes the assumption that the model doing the merge will not do anything that corrupts lower level models. However, it does not ensure that higher level models will not be corrupted. Hence, ideally, the Team operation would still need to check for participants at the model level in order to include other resource mappings in the merge if required, ensuring that higher level models are not corrupted.

If no model level merge is available, the update will need to be performed at the file level. This means that participants at the file level must be queried for additional resource mappings, and the merges can then be performed on these files using the appropriate IStreamMerger.

Manual Merging

Model objects that cannot be merged automatically need to be merged manually. There are two main pieces required to create a UI to allow the user to perform the manual merge:

Both of these pieces must be available given a set of resource mappings. The adaptable mechanism should be adequate to provide these in whatever form they take. If either is absent, the manual merges can still be performed at the file level.

Committing or Checking In Changes

For repositories, check-ins or commits happen at the file level. Here are some considerations when supporting commits on logical model elements.

Ideally, what the user would like to see is all the files and model elements being committed, arranged in such a way that the relationships between them are obvious. If elements beyond those originally selected are included in the commit, these should be highlighted in some manner.

Tagging Elements

Tagging in repositories happens at the file level and, at least in CVS, can only be applied to content that is in the repository. This leads to the following two considerations when tagging:

The above two points really require two different views. The first is much like the view used for committing where the user sees any outgoing changes but this time with a message indicating that it is the ancestor state of these elements that will be tagged. The second is just a model based view that highlights those elements that will be tagged but were not in the original selection.

Replacing Elements

Replacing is similar to Update but is not as complicated, as the local changes are discarded and replaced by the remote contents (i.e. no merging is required). However, there are the following considerations:

The requirements here are similar to tagging, except that the determination of additional elements is based on what the incoming changes are and, hence, could be displayed in a synchronization type view. There are similarities with update in the sense that the existence of an IResourceMappingMerger may mean that extra elements need not be affected at all.

As with Update, Replacing could be performed at the model level if the model has an associated IResourceMappingMerger. The mechanics would be similar to Update except that no manual merge phase would be required. Also, the model merger would either need a separate method (replace) or a flag on the merge method (ignoreLocalChanges) to indicate that a replace was occurring. When performing a replace, the ancestor context is not required.
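The ignoreLocalChanges flag suggested above could drive a single entry point, sketched here with hypothetical names (this is not proposed API, just an illustration of how replace degenerates from the three-way merge rule):

```java
public class ReplaceOrMerge {
    /**
     * Resolve a single unit of content. With ignoreLocalChanges set, this is
     * a replace: the remote content wins outright and no ancestor is needed.
     * Otherwise, the usual three-way rule applies.
     */
    static String resolve(String base, String local, String remote, boolean ignoreLocalChanges) {
        if (ignoreLocalChanges) {
            return remote;                          // replace: discard local changes
        }
        if (local.equals(base)) return remote;      // only the remote side changed
        if (remote.equals(base) || remote.equals(local)) return local;
        throw new IllegalStateException("conflict: manual merge required");
    }
}
```

Note that in the replace branch the base argument is never consulted, which matches the observation that the ancestor context is not required when performing a replace.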

Synchronizing and Comparing

The ability to provide model support in the synchronize view would be a natural byproduct of several of the requirements discussed above. To summarize, what would be required is:

These are all included as requirements for previously mentioned operations. The only additional requirement for Synchronize view integration is that the synchronization state display must keep itself up-to-date with local file system changes and remote changes. The synchronize view already has infrastructure for this at the file level which the model provider could use to ensure that the model elements in the view were kept up-to-date.

Summary of Requirements

This section presents the requirements on various parties for this proposal. The parties we consider are the Eclipse Platform, Model Providers and Repository Providers.

Eclipse Platform

The 3.2 Eclipse Platform release schedule is:

The Platform work items in this proposal and their target availability dates are:

Target dates are given for all items, but they may be subject to change, especially for those items currently under investigation. For Remote Discovery, we are too early in our investigation to commit to a delivery date.

Model Providers

The model providers will need to do the following work to make full use of the support outlined in this proposal.

The model can choose whether to provide some or all of the above facilities. If it does not, a suitable resource-based default implementation will be used.

Repository Providers

Repository providers will need to provide the following:

The repository provider can decide which model provider facilities to make use of, and using only a subset may reduce the amount of work a repository provider must do. However, achieving rich integration requires the repository provider to implement everything.

Open Issues

Here are some open issues and questions

Assumptions and Limitations

Here are some assumptions we have made or limitations that may exist.

Change History

Changes in Version 0.3

Changes in Version 0.2