
Re: [e4-dev] Declarative UI

Hi all,

2008/12/18 David Orme <djo@xxxxxxxxxxxxxxxxxxxxxxxxx>
A few thoughts:

1) I completely agree with Hallvard on using an EMF-based model.  While evolving XSWT all these years has been really useful, I think that an EMF-based model makes more sense going forward for all the reasons Hallvard described.  In addition, if we model the workbench using EMF and adopt an EMF-based UI model, we will then have EMF all the way up and down the stack, which makes a huge amount of engineering sense to me.

2) Having said this, I'm *really* concerned about runtime overhead.  EMF's runtime is over a meg.  For many embedded devices, this is already too big.  I recently rejected using EMF on an Android application due to the runtime size.

You also have to take into account that, at least currently, EMF objects have a much higher overhead than simple POJOs.
Also, Ed Merks has been working on improving this quite a bit; I'm not sure whether that is already in the main EMF codeline.
 
Regards,
Markus


(If I'm missing something about EMF that would solve this, somebody please correct me.)

So I guess I'm a bit of two minds on this and would really like to hear what the ESWT folks have to say.

For engineering reasons, I'd vastly prefer to use EMF all the way up and down the stack.  But I fear that this will eliminate one of our key constituencies.

Maybe there's a way to compile the EMF model to straight Java code (like we do for XSWT) and so retain the design-time modelling benefits and keep the small runtime size?  Or some other solution?
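
A rough illustration of what such model-to-Java compilation could produce: plain SWT code with nothing model-related left on the target at runtime. The class and form below are hypothetical, purely to show the idea, not actual XSWT or e4 output.

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.GridLayout;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Label;
    import org.eclipse.swt.widgets.Text;

    // Hypothetical output of a model-to-Java generator: the UI description is
    // "baked" into plain SWT calls, so no EMF runtime is needed on the device.
    public class LoginFormGenerated {
        public static Text createLoginForm(Composite parent) {
            parent.setLayout(new GridLayout(2, false));
            Label label = new Label(parent, SWT.NONE);
            label.setText("User name:");
            return new Text(parent, SWT.BORDER);
        }
    }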


Regards,

Dave Orme


On Thu, Dec 18, 2008 at 5:21 AM, Hallvard Trætteberg <hal@xxxxxxxxxxx> wrote:
Angelo,


Angelo zerr wrote:

Thank you for your explanation. I agree with you on having a model (like EMF)
to describe the UI. This way, we are not tied to SWT.

I think we must be close enough to SWT to map easily to it, and support SWT-specific extensions. Most of the basic SWT widgets may be modelled in a way that is easy to map to all relevant toolkits. However, other UI objects, like the SWT layouts, are difficult to abstract away from SWT, without requiring a lot of custom binding/glue code. Hence, these should be definable in an SWT-specific model.
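
To make the layout point concrete, here is plain SWT (for illustration only): GridLayout/GridData carry SWT-specific semantics such as spans, hints and grab flags that have no direct equivalent in, say, Swing or GWT, so a toolkit-neutral model would either lose information or need a lot of glue code.

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.GridData;
    import org.eclipse.swt.layout.GridLayout;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Text;

    // Plain SWT layout code: the GridData flags below are exactly the kind of
    // toolkit-specific detail that is hard to abstract away in a neutral model.
    public class SwtLayoutIllustration {
        public static void layout(Composite parent) {
            parent.setLayout(new GridLayout(2, false));
            Text text = new Text(parent, SWT.BORDER);
            GridData data = new GridData(SWT.FILL, SWT.CENTER, true, false);
            data.horizontalSpan = 2;
            data.widthHint = 200;
            text.setLayoutData(data);
        }
    }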


To avoid that, we could have a common renderer API, and have the model
bindings work against that common renderer API.
The solution looks like this:

Model <-> Common Renderer API <-> SWT
                              <-> Swing
                              <-> GWT
                              <-> Visual editor...

It's important that the common renderer API be bindable. Layout and
widget properties must be bindable (all properties must be observable),
so when a layout changes, it must fire an event so that the change can
be observed.

This is what I meant by "live".
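
As a minimal sketch of what such a bindable common renderer API could look like (the names below are hypothetical, not actual UFace or e4 API):

    import org.eclipse.core.databinding.observable.value.IObservableValue;

    // Hypothetical sketch of a common renderer API: every widget/layout property
    // is exposed as an observable, so any change fires an event that bindings
    // and renderers (SWT, Swing, GWT, ...) can react to.
    public interface UWidget {
        /** Observable for a named property, e.g. "text", "enabled" or "layoutData". */
        IObservableValue observeProperty(String propertyName);

        /** Sets a property; implementations must notify the matching observable. */
        void setProperty(String propertyName, Object value);

        /** The toolkit-specific peer, e.g. an org.eclipse.swt.widgets.Control. */
        Object getPeer();
    }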


The common API is UFace. UFace provides an API where all properties are
bindable with JFace Databinding.
In TK-UI I'm managing the declarative UI with a DOM (XUL, XHTML...) and
declarative binding with XAML expressions (or EL...), but as UFace is
bindable, it should be easy to use an EMF model instead of a DOM model.

I discussed this with Tom at ESE, and I agree that UFace technically could have this role. UFace contains a lot of relevant and important code for implementing the rendering logic, i.e. controlling the life-cycle of toolkit objects and managing binding. However, consider the state at runtime if UFace as a whole is used. You will have
EObject <-binding-> UBean <-binding-> SWT widget, instead of the simpler
EObject <-binding-> SWT widget.
That said, there really is no need for the UBean layer when EMF is already used, as UBean is in effect a duplication of EMF (as used in this context). Hence, putting the whole of UFace between an EMF model and e.g. SWT is wasteful. BTW, correct me if my understanding of UFace is wrong.
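
For comparison, the simpler direct chain can be wired with JFace Databinding plus EMF's databinding support, roughly like this (a sketch; the exact observable factories depend on the EMF and JFace Databinding versions in use):

    import org.eclipse.core.databinding.DataBindingContext;
    import org.eclipse.core.databinding.observable.value.IObservableValue;
    import org.eclipse.emf.databinding.EMFObservables;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.jface.databinding.swt.SWTObservables;
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Text;

    // EObject <-binding-> SWT widget, with no UBean layer in between.
    public class DirectEmfSwtBinding {
        public static void bind(DataBindingContext ctx, Text text,
                EObject model, EStructuralFeature feature) {
            IObservableValue target = SWTObservables.observeText(text, SWT.Modify);
            IObservableValue modelValue = EMFObservables.observeValue(model, feature);
            ctx.bindValue(target, modelValue);
        }
    }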

At ESE, Tom seemed to agree that (the functionality of) UFace (as implemented now) overlaps with an EMF-based UI model, instead of (just) closing the gap between it and the toolkit. I argued that the UFace code should be merged with whatever EMF model we decide to use, rather than contributing an additional layer. I understand that UFace wants to be independent of EMF, to make it easier to support some platforms (e.g. J2ME or GWT), but if we decide to introduce a UI model, I think we need to consider carefully how to utilize UFace, to avoid extra code and data.

I have nothing against UFace, but I'm worried that the easiest and cleanest way to combine an EMF UI model with UFace, as you outline, will result in something that is not acceptable to those wanting a lean platform. Perhaps UFace could be split into UCore and UBean, to support two usages: EMF + UCore when EMF is supported, and UBean + UCore when it is not.

Hallvard

_______________________________________________
e4-dev mailing list
e4-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/e4-dev



