
Re: [eclipse-incubator-e4-dev] UI ... the relaxed way


Michael,

I'm very interested in this, and I agree that with the reflective power of EMF and the ability to have a '.genmodel' equivalent open to supply hints, we should be able to do something that is both generic (can support any EMF model) and 'complete' (can be extended to handle non-Ecore types), and that can be run against a 'live' (already open) model (much like the current 'instance' editor).

The main issue I see isn't actually the discrete EStructuralFeature -> UI mapping (databinding will solve most of these issues, I think) but rather that maintaining control over the number of 'hints' necessary to get the exact editor -you- want may be the hard part. UIs are endlessly tweakable and so have to provide their own constraints. Beyond the normal ones you mention (filtering, grouping and ordering, all of which have clear uses even in the current property sheet), about the only other one I can think of is choosing the particular UI element you want to use to edit a particular feature (for example, a 'boolean' could use a 'Checkbox', a true/false radio button set or a combobox with 'true/false').
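
To make that last hint concrete, the builder might end up doing something like the following (just a sketch; the 'widget' hint name and the readHint() lookup are invented for illustration):

  import org.eclipse.emf.ecore.EStructuralFeature;
  import org.eclipse.swt.SWT;
  import org.eclipse.swt.layout.RowLayout;
  import org.eclipse.swt.widgets.Button;
  import org.eclipse.swt.widgets.Combo;
  import org.eclipse.swt.widgets.Composite;
  import org.eclipse.swt.widgets.Control;

  public abstract class BooleanEditorFactory {

      // Hypothetical hint lookup: returns e.g. "checkbox", "radio" or "combo".
      protected abstract String readHint(EStructuralFeature feature, String key);

      // Pick the SWT control for a boolean feature based on a 'widget' hint.
      public Control createBooleanEditor(Composite parent, EStructuralFeature feature) {
          String widget = readHint(feature, "widget");
          if ("combo".equals(widget)) {
              Combo combo = new Combo(parent, SWT.READ_ONLY);
              combo.setItems(new String[] { "true", "false" });
              return combo;
          }
          if ("radio".equals(widget)) {
              Composite group = new Composite(parent, SWT.NONE);
              group.setLayout(new RowLayout());
              new Button(group, SWT.RADIO).setText("true");
              new Button(group, SWT.RADIO).setText("false");
              return group;
          }
          return new Button(parent, SWT.CHECK); // reasonable default: a checkbox
      }
  }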

Trying to add 'layout' hints beyond the above structural ones could be a slippery path we may want to avoid somehow. While I'm confident that we can do a better property sheet I'm less sure that we can do 'automatic' preference/wizard page layouts...there's simply too much 'local' info contained in the various layouts that would have to be captured as 'hints'.

Do you have (or know of) any current code or papers that take this approach that I can take a look at?

Onwards,
Eric



Michael Scharf <Michael.Scharf@xxxxxxxxxxxxx>
Sent by: eclipse-incubator-e4-dev-bounces@xxxxxxxxxxx

09/19/2008 11:05 AM

Please respond to
E4 developer list <eclipse-incubator-e4-dev@xxxxxxxxxxx>

To
E4 developer list <eclipse-incubator-e4-dev@xxxxxxxxxxx>
cc
Subject
[eclipse-incubator-e4-dev] UI ... the relaxed way





Hi,

I mentioned in yesterday's e4 call that I would like to see
some unification and simplification in the way the UI is done.

Summary
-------
- assume we use EMF models to express our domain objects
- the UI is built in a reflective way (similar to the
  emf generated editor)
- refine the model with "hints" for the UI layout engine
- reflective UI with hints allows stepwise refinement
  and is robust against changes in the model
- reflective UI encourages a better separation of model and
  UI, because the UI contains no application specific logic
- reflective UI facilitates better model extensibility because
  for new or modified models, much less work is needed to
  adjust the UI


Introduction
------------

The basic principle of this work is
  "Make simple things simple and complex things possible."
(Martin Oberhuber just recently reminded me of this very
wise saying)


Assume an EMF Model
-------------------
To follow this principle, let's look at simple things
in UI design. Things you see in wizards, preferences,
properties pages, property views etc.

Let's assume for now, that we have an EMF model for the
data-structures we want to edit. This might be a Facade
of a more complex model. Let's further assume that
all the logic (including validation) is implemented in
the model. That means, essentially, that all the UI has
to do is to call a few setter and getter methods on the
model object.

This is basically what the EMF properties view does, and
we all know that this yields a somewhat functional
but not very nice UI. But the interesting observation is
that it works reflectively, which means no special code
has to be generated or hand-written to make this
work.
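
Just to make "reflectively" concrete, here is a minimal
sketch using the plain EMF reflective API (EObject.eGet/eSet),
with nothing model-specific in it:

  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;

  public class ReflectiveAccess {

      // All the UI has to do is call the reflective getter ...
      public static Object read(EObject object, String featureName) {
          EStructuralFeature feature = object.eClass().getEStructuralFeature(featureName);
          return object.eGet(feature);
      }

      // ... and setter. No generated or hand-written glue code is needed.
      public static void write(EObject object, String featureName, Object value) {
          EStructuralFeature feature = object.eClass().getEStructuralFeature(featureName);
          object.eSet(feature, value);
      }
  }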

Annotate with some UI hints
---------------------------

I think reflective UI can do a much better job
than the EMF editor. And the key idea is to give the
reflective UI builder some hints on how to
arrange and display the UI elements. It is important
that those are hints and not seen as hard instructions.
Hints can potentially be ignored by the UI builder.

Hints might include:
- displayName (in different contexts)
- toolTip
- icon
- order
- grouping
- invisible: this attribute should not be shown in the UI
- widget: which widget to use (drop down versus selection list)
- listSource: for Combos there might be an attribute in
  the model that returns the potential values for that
  selection
- enablement: some boolean attribute that determines
  the enablement of this attribute
- ... some databinding hints
- ...
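
(One possible way to name these hints in code; just a
sketch, none of these keys exist anywhere yet:)

  // Hypothetical hint keys the reflective UI builder would understand.
  // The names simply mirror the list above.
  public interface UIHintKeys {
      String DISPLAY_NAME = "displayName";
      String TOOL_TIP     = "toolTip";
      String ICON         = "icon";
      String ORDER        = "order";
      String GROUPING     = "grouping";
      String INVISIBLE    = "invisible";
      String WIDGET       = "widget";
      String LIST_SOURCE  = "listSource";
      String ENABLEMENT   = "enablement";
  }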

All logic is in the model
-------------------------

With the model and the hints, the UI builder can create
the UI at runtime. It would use databinding to connect to
the model and some (CSS based) strategy to create the UI.
But there is NO specific logic in the UI. The UI is really
stupid. All the logic is in the model.
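
A minimal sketch of such a binding, assuming the EMF and
JFace databinding bundles (the Text widget and the feature
are just examples):

  import org.eclipse.core.databinding.DataBindingContext;
  import org.eclipse.emf.databinding.EMFObservables;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;
  import org.eclipse.jface.databinding.swt.SWTObservables;
  import org.eclipse.swt.SWT;
  import org.eclipse.swt.widgets.Text;

  public class Bind {

      // The UI stays "stupid": it only wires a widget to a model feature.
      // Validation and all other logic live in the model itself.
      public static void bindText(DataBindingContext ctx, Text text,
              EObject object, EStructuralFeature feature) {
          ctx.bindValue(
              SWTObservables.observeText(text, SWT.Modify),
              EMFObservables.observeValue(object, feature),
              null, null); // default update strategies
      }
  }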

Let the reflective UI builder do the work
-----------------------------------------

The layout strategy of the UI might be selected
depending on the kind of container. E.g. a dialog might
be rendered differently than the same object in
the property view. The nice thing is that we would
get a lot of consistency by giving the reflective UI
builder more freedom to lay out the UI. And there
might be a set of strategies the user can choose from.
I am sure some talented UI designers could come
up with some great strategies. It is a kind of
encoded UI design knowledge. The UI might look
different if it is running in an RCP or a
RAP application (by using different strategies).
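
Such a strategy could be as small as an interface the
reflective builder delegates to (a hypothetical sketch;
neither the type nor its users exist yet):

  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.swt.widgets.Composite;

  // One strategy per kind of container (dialog, property view, wizard
  // page, RCP vs. RAP, ...). Swapping the strategy changes the look,
  // not the model and not the hints.
  public interface LayoutStrategy {
      void render(Composite parent, EObject object);
  }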


Use annotations or UI models to define UI Hints
------------------------------------------------

The simplest way to associate UI Hints with a model
is to use annotations in the model. There could also
be a "hint model" like the genmodel. Or any another way
to associate hints with a model or inject hints into
a model.
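
A sketch of the annotation variant, using plain
EAnnotations (the source URI and the detail keys are
invented for the example):

  import org.eclipse.emf.ecore.EAnnotation;
  import org.eclipse.emf.ecore.EStructuralFeature;
  import org.eclipse.emf.ecore.EcoreFactory;

  public class UIHints {

      // Invented source URI; any agreed-upon value would do.
      public static final String SOURCE = "http://www.eclipse.org/e4/ui/hints";

      // Attach a hint such as ("displayName", "Host name") or ("order", "1")
      // directly to a feature of the Ecore model.
      public static void set(EStructuralFeature feature, String key, String value) {
          EAnnotation annotation = feature.getEAnnotation(SOURCE);
          if (annotation == null) {
              annotation = EcoreFactory.eINSTANCE.createEAnnotation();
              annotation.setSource(SOURCE);
              feature.getEAnnotations().add(annotation);
          }
          annotation.getDetails().put(key, value);
      }

      public static String get(EStructuralFeature feature, String key) {
          EAnnotation annotation = feature.getEAnnotation(SOURCE);
          return annotation == null ? null : annotation.getDetails().get(key);
      }
  }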

Hints are meta information
--------------------------

The hints are meta information about the model: meta
information that is needed to build the UI. It is the
type of information a model designer would give to a
UI designer who does not understand all the details of
the problem domain but who is capable of understanding
the meaning and relations of the elements to display.
A bit similar to what TeX and LaTeX have done to
typesetting. Users express intent and relationships
between text elements and the compiler transforms
this into a printable text using some consistent
presentation strategies.

Runtime versus generated UI
---------------------------

I very much believe that there is no need to
generate UI code. It can all be done at runtime.
Other declarative UIs show that this is
possible. The key difference is that the UI is
dynamically created by inspecting the objects
to edit, rather than statically declaring
the UI and mapping it to the model.
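
In code, "inspecting the objects to edit" boils down to
a loop over the object's features, roughly like this
(a sketch; the hint-aware widget creation is left abstract):

  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.emf.ecore.EStructuralFeature;
  import org.eclipse.swt.SWT;
  import org.eclipse.swt.widgets.Composite;
  import org.eclipse.swt.widgets.Label;

  public abstract class ReflectiveBuilder {

      // No generated UI code: the form is derived at runtime from the
      // object's EClass. Hints would refine labels, order, visibility etc.
      public void build(Composite parent, EObject object) {
          for (EStructuralFeature feature : object.eClass().getEAllStructuralFeatures()) {
              Label label = new Label(parent, SWT.NONE);
              label.setText(feature.getName()); // a displayName hint could override this
              createEditor(parent, object, feature);
          }
      }

      // Creates the editing widget (text, checkbox, combo, ...) for one
      // feature, guided by the hints and bound via databinding.
      protected abstract void createEditor(Composite parent, EObject object,
              EStructuralFeature feature);
  }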

Express intent of the UI at a higher level
------------------------------------------

Finding a higher level language to express the
intent is key here. Graphical UI builders have
their function and purpose but in many cases
are overly repetitive. The same is true for
most declarative XML UIs. They are too low
level, too verbose, and too repetitive. Ideally,
no hints are needed to get a first version of
the UI for the model (much like the emf reflective
editor).


Gradually evolving robust UI
----------------------------

The UI can be gradually refined by adding hints.
You can very quickly get a running UI. This
helps a lot in focusing on the problem domain. Changes
of the model are automatically propagated to
the UI (I am not talking about changes to a model
instance, I am talking about changes to the "schema").
This is one of the reasons why EMF is so successful:
You can concentrate on your model and you get a prototype
of the UI for free. With the proposed approach you can
refine your UI by adding hints.

Additionally, it is robust: if an attribute is added
or removed, the UI is updated automatically. In contrast,
with a declarative UI you have to update your UI model.


Extensibility
-------------

It is extensible in multiple ways:

You can extend your model and the new elements
appear immediately in the UI. This is almost
impossible with traditional or declarative UI.

Application-specific extension points might provide
subclasses of model objects that come with additional
hints for the new and modified model attributes. With
traditional UIs you often have to copy the entire
dialog page just to add a simple new attribute.
With a reflective UI the new attribute is injected automatically.

Some applications might inject hints to change the
layout or visibility of UI elements. Injection is more
robust than taking a "resource file" and modifying it.
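
Hint injection could be modelled as a chain of providers,
where application-supplied hints simply shadow the ones
shipped with the model (again a purely hypothetical sketch):

  import org.eclipse.emf.ecore.EStructuralFeature;

  // The UI builder asks a chain of providers for a hint; an injected
  // provider earlier in the chain overrides the hints that come with
  // the model, without copying or editing any UI.
  public interface HintProvider {
      String getHint(EStructuralFeature feature, String key);
  }

  class ChainedHintProvider implements HintProvider {

      private final HintProvider[] providers;

      ChainedHintProvider(HintProvider... providers) {
          this.providers = providers;
      }

      public String getHint(EStructuralFeature feature, String key) {
          for (HintProvider provider : providers) {
              String value = provider.getHint(feature, key);
              if (value != null) {
                  return value; // first provider that knows the hint wins
              }
          }
          return null;
      }
  }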

Extensions might provide new hint types and layout
strategies.


..and what about "make complex things possible"?
------------------------------------------------

Well, there should be a mechanism to override the
automatic layout for a particular model object.
For example, by providing a classical hand-written UI
or a UI in any of the declarative languages.
Or, the reflective UI builder could spit out a
declarative UI model that could be refined
by hand.
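
The override hook could be as plain as a registry of
hand-written factories that is consulted before falling
back to the reflective builder (hypothetical sketch):

  import java.util.HashMap;
  import java.util.Map;

  import org.eclipse.emf.ecore.EClass;
  import org.eclipse.emf.ecore.EObject;
  import org.eclipse.swt.widgets.Composite;

  // Complex cases register a hand-written (or declarative) UI for their
  // EClass; everything else falls back to the reflective builder.
  public class UIFactoryRegistry {

      public interface UIFactory {
          void createUI(Composite parent, EObject object);
      }

      private final Map<EClass, UIFactory> overrides = new HashMap<EClass, UIFactory>();
      private final UIFactory reflectiveFallback;

      public UIFactoryRegistry(UIFactory reflectiveFallback) {
          this.reflectiveFallback = reflectiveFallback;
      }

      public void register(EClass eClass, UIFactory factory) {
          overrides.put(eClass, factory);
      }

      public void createUI(Composite parent, EObject object) {
          UIFactory factory = overrides.get(object.eClass());
          (factory != null ? factory : reflectiveFallback).createUI(parent, object);
      }
  }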

The assumption we made in the beginning, to use EMF
models, can obviously be dropped if we can
find a way to provide similar reflective power
and detailed notification for arbitrary models.
(I'll let Ed say that we would end up reinventing EMF...)

And what to do with this idea?
-------------------------------

With e4, in my opinion, we have a unique chance to come
up with a solution that can change the way UI is done.
However, this would require significant work and
the knowledge of many people. But in the end it
will pay off, and we as a community will benefit.

But the basic question is: Do you think it is worth
thinking in this direction?

Let's do UI ... the relaxed way:
express the intent ...
...and let the computer do the hard work.

Michael

_______________________________________________
eclipse-incubator-e4-dev mailing list
eclipse-incubator-e4-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/eclipse-incubator-e4-dev

