Hi all,
today Kristian and I met to discuss how to handle all the different coordinate systems in GEF3D. Kristian has already written a wiki entry on that topic ( http://wiki.eclipse.org/GEF3D_Coordinate_Systems ), and he identified some problems, which we discussed.
There are three different types of coordinate systems and coordinates at the moment:
- mouse: mouse (or screen) coordinates are 2D, (0,0) is top left
- world: world coordinates are 3D; we oriented the system similarly to the 2D systems, i.e. (0,0,0) is top left and the z-axis points into the screen
- surface: surface coordinates are 2D, they are relative to a 3D figure providing a surface
Besides, some concepts are used in the coordinate system context:
- picking: a figure can be picked using color picking. In addition to the figure under a given screen coordinate, the depth can also be retrieved this way
- virtual plane: if no figure is under a given mouse/screen coordinate, we introduced the concept of a virtual plane. In this case, the surface of the last picked 3D figure is used to calculate 3D coordinates from the mouse coordinates
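To illustrate the color picking idea, here is a minimal sketch (hypothetical code, not the actual ColorPicker API): each figure is rendered in a unique flat color into an off-screen buffer, and reading back the pixel under the mouse identifies the figure.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of color picking: every figure is drawn in a unique
// flat color into an off-screen buffer; reading the pixel under the mouse
// yields the figure id (the depth buffer would yield the depth the same way).
public class ColorPickingSketch {

    // Encode a figure id as an opaque RGB color (assumes fewer than 2^24 figures).
    static int idToColor(int figureId) {
        return figureId & 0xFFFFFF;
    }

    static int colorToId(int rgb) {
        return rgb & 0xFFFFFF;
    }

    public static void main(String[] args) {
        // Simulated "frame buffer": pixel index -> color drawn there.
        Map<Integer, Integer> buffer = new HashMap<>();
        int figureId = 42;
        int pixel = 10 * 800 + 20;              // mouse at (20, 10), width 800
        buffer.put(pixel, idToColor(figureId)); // pretend we rendered the figure

        // Picking: read back the color under the mouse and decode the id.
        int picked = colorToId(buffer.get(pixel));
        System.out.println("picked figure id = " + picked); // prints 42
    }
}
```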
There are several conversion methods spread all over GEF3D and Draw3D. Mainly the CoordinateConverter is doing the job right now, but this will change, as only specific classes can convert coordinates anyway. In order to convert coordinates, the following players are required:
- mouse <-> world: the camera is needed, which uses OpenGL's (resp. GLU's) unproject method. In order to work, depth information is required
- world <-> surface: an IFigure2DHost3D is required
- mouse <-> surface is a combination of the last two conversions
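The three conversions can be sketched as follows. This is a deliberately simplified illustration assuming a hypothetical orthographic camera looking straight down the z-axis (the real code goes through GLU's unproject with the actual camera matrices), so screen x/y map directly to world x/y:

```java
// Simplified sketch of the three conversions, assuming an orthographic
// camera looking down the z-axis; names and math are illustrative only.
public class CoordinateConversionSketch {

    // mouse -> world: needs the camera plus a depth value; with the
    // orthographic assumption the screen x/y map directly to world x/y.
    static float[] mouseToWorld(int mx, int my, float depth) {
        return new float[] { mx, my, depth };
    }

    // world -> surface: needs the 3D host figure; here the "surface" is
    // simply a plane parallel to x/y at the host's origin (sx, sy, sz).
    static float[] worldToSurface(float[] world, float sx, float sy, float sz) {
        return new float[] { world[0] - sx, world[1] - sy, world[2] - sz };
    }

    // mouse -> surface: composition of the two conversions above.
    static float[] mouseToSurface(int mx, int my, float depth,
                                  float sx, float sy, float sz) {
        return worldToSurface(mouseToWorld(mx, my, depth), sx, sy, sz);
    }

    public static void main(String[] args) {
        // Mouse at (120, 80), picked depth 300, surface origin (100, 50, 300).
        float[] s = mouseToSurface(120, 80, 300f, 100f, 50f, 300f);
        System.out.printf("surface = (%.0f, %.0f, %.0f)%n", s[0], s[1], s[2]);
        // prints: surface = (20, 30, 0)
    }
}
```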
At the moment, coordinates are converted as needed and where needed. That is, a mouse event contains the original screen mouse coordinates, and tools and other classes have to convert the coordinates accordingly, e.g., for positioning a feedback figure. The problem with that is that GEF (the 2D code) does not know that a conversion is necessary. As a consequence, many 2D tools are currently using wrong coordinates, e.g., when a figure is to be created. In order to make GEF work correctly, the coordinate conversion has to be made transparent. We are planning to achieve that by converting the coordinates as early as possible, that is, we do the conversion already in the lightweight system and the update manager. This will simplify the code "inside" GEF3D/Draw3D, as most tools no longer have to care about coordinates. The coordinates used "inside" GEF and GEF3D will then be world and surface coordinates.
That is, a mouse coordinate is converted as early as possible into surface and world coordinates. This way, 2D figures will only "see" the surface coordinate, and 3D figures can work with both. In order to clean up the code, we are planning to remove or rename some methods and change some concepts:
- ColorPicker.getLastValidFigure() is to be removed and replaced by a method called getLastSurface()
- getLastSurface() will return an ISurface instance. This new interface/class has a reference to its IFigure2DHost3D, and we plan to add some conversion methods to that new interface. This will enable a figure to provide multiple surfaces.
- RootFigure3D will return a surface as well; this way we can guarantee that ColorPicker.getLastSurface() never returns null. The surface of the root figure will rotate with the camera, that is, it is a virtual surface orthogonal to the viewing direction at an appropriate distance from the camera.
- we will replace the concept of a virtual plane and only use surfaces from now on. Surface coordinates will become 3D coordinates, that is, synchronized 2D/3D coordinates. 2D figures will only "see" the x/y integer parts of a surface coordinate, while 3D tools can work on the full x, y, and z float values. Since no virtual plane is used anymore, mouse coordinates are always transformed to 3D coordinates using a surface.
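The planned ISurface could look roughly like the sketch below. All names and signatures here are assumptions derived from this mail, not the actual Draw3D API: a surface knows its host figure and converts between world and surface coordinates, and a planar implementation is shown as the simplest case.

```java
// Hypothetical sketch of the planned ISurface (names are assumptions
// based on the discussion, not the real Draw3D interface).
public class SurfaceSketch {

    interface ISurface {
        Object getHost();                       // the IFigure2DHost3D
        float[] worldToSurface(float[] world);  // full 3D surface coordinate
        float[] surfaceToWorld(float[] surface);
    }

    // A minimal planar surface located at world offset (ox, oy, oz).
    static class PlanarSurface implements ISurface {
        final Object host;
        final float ox, oy, oz;
        PlanarSurface(Object host, float ox, float oy, float oz) {
            this.host = host; this.ox = ox; this.oy = oy; this.oz = oz;
        }
        public Object getHost() { return host; }
        public float[] worldToSurface(float[] w) {
            return new float[] { w[0] - ox, w[1] - oy, w[2] - oz };
        }
        public float[] surfaceToWorld(float[] s) {
            return new float[] { s[0] + ox, s[1] + oy, s[2] + oz };
        }
    }

    public static void main(String[] args) {
        ISurface surface = new PlanarSurface("host", 100f, 50f, 200f);
        float[] s = surface.worldToSurface(new float[] { 130f, 70f, 200f });
        // A 2D figure would only "see" the integer x/y parts of s:
        int x2d = (int) s[0], y2d = (int) s[1];
        System.out.println("2D view: (" + x2d + ", " + y2d + ")"); // (30, 20)
    }
}
```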
Kristian is currently working on implementing these changes. We assume that most 3D editors will not be affected, since most changes are "inside" Draw3D/GEF3D.
Cheers
Jens