Re: [udig-devel] Rendering pipeline: how CRS is used.
You are on the right track. Mind you, it depends on the renderer you
are using: Shapefile has an optimized renderer with a different
pipeline from the StreamingRenderer, which is used to render the
other datastores. I'll explain the StreamingRenderer since it is used
by more datastores.
I'm going to take it from the client code as well. I'm assuming that
the geometry is either a line or a polygon; points and GridCoverages
are handled in a similar manner, but the styling and drawing section
is slightly different.
1. Create a MapContext.
2. Add MapLayer(s) to context.
3. Set Area of Interest and CRS on the MapContext
4. Create renderer.
5. Set MapContext of renderer
foreach( layer ) do {
6. FeatureType is obtained from Datastore. (Contains the CRS in
the *current* Feature model)
7. The Area of Interest is reprojected into the CRS of the layer.
8a. A query is constructed from the projected Area of Interest.
8b. The query is restricted to request only the attributes that are
required for styling the feature. (Needed for thematic styling
and other non-basic styles.)
9. A MathTransform is constructed that transforms from the layer CRS
to screen space.
foreach feature do {
10. Feature is obtained from the datastore.
10a. (Assuming shapefile) Geometry is created; no reprojection yet.
10b. Attributes are read.
10c. Feature is created.
11. Geometry is decimated. (Vertices that map to a single pixel
are collapsed to a single vertex.)
12. Geometry is reprojected using the constructed MathTransform.
13. A java.awt.Shape is constructed that wraps the Geometry.
14. The graphics object is set according to the style. (Fill,
color, pattern, etc. are all set.)
15. Shape is drawn.
16. If labeling is enabled, the label for the current feature is
put in a cache.
}
}
17. Labels are drawn.
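The inner loop above can be sketched in plain Java. This is a minimal stand-in, not the actual GeoTools code: an AffineTransform plays the role of the MathTransform, a List of Point2D stands in for a JTS Geometry, and the class and method names (PipelineSketch, worldToScreen, decimate) are made up for illustration.

```java
import java.awt.*;
import java.awt.geom.*;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class PipelineSketch {

    // Step 9: world (layer CRS) -> screen transform for a map window.
    // A real MathTransform may be non-affine; an affine stands in here.
    static AffineTransform worldToScreen(Rectangle2D world, int w, int h) {
        AffineTransform t = new AffineTransform();
        t.translate(0, h);                         // flip y: screen y grows downward
        t.scale(w / world.getWidth(), -h / world.getHeight());
        t.translate(-world.getMinX(), -world.getMinY());
        return t;
    }

    // Step 11: collapse consecutive vertices that land on the same pixel,
    // keeping the original world coordinates (reprojection happens later).
    static List<Point2D> decimate(List<Point2D> pts, AffineTransform t) {
        List<Point2D> out = new ArrayList<>();
        Point lastPixel = null;
        for (Point2D p : pts) {
            Point2D s = t.transform(p, null);
            Point pixel = new Point((int) s.getX(), (int) s.getY());
            if (!pixel.equals(lastPixel)) {
                out.add(p);
                lastPixel = pixel;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Area of interest in the layer CRS, mapped to a 100x100 window.
        Rectangle2D world = new Rectangle2D.Double(0, 0, 1000, 1000);
        AffineTransform t = worldToScreen(world, 100, 100);

        // A line with many redundant vertices (10 world units = 1 pixel).
        List<Point2D> line = new ArrayList<>();
        for (int x = 0; x <= 1000; x += 2)         // 501 vertices
            line.add(new Point2D.Double(x, x));
        List<Point2D> slim = decimate(line, t);    // roughly one per pixel

        // Steps 12-15: reproject, wrap in a java.awt.Shape, style, draw.
        GeneralPath path = new GeneralPath();
        boolean first = true;
        for (Point2D p : slim) {
            Point2D s = t.transform(p, null);
            if (first) { path.moveTo(s.getX(), s.getY()); first = false; }
            else path.lineTo(s.getX(), s.getY());
        }
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);                     // style: stroke color
        g.draw(path);                              // step 15: draw
        g.dispose();

        System.out.println(line.size() + " -> " + slim.size() + " vertices");
    }
}
```

The point of decimating before reprojecting (step 11 before step 12) is that vertices which would be invisible on screen are dropped before paying the cost of transforming them.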
Hope this helps.
Jesse
On 23-May-06, at 6:23 AM, Adrian Custer wrote:
Hey all,
Could someone explain the uses of CRS and transforms in the rendering
pipeline? I need a coarse overview of rendering as used by
uDig/streaming Renderer to understand:
(1) how we go from shapefile data to screen data
(2) what CRS's are stored in the rendering system
This is all prep work so I can compose an intro to CRS that will work
as background for a future, more complex discussion of CRS with
features and renderers.
What I really want is a pipeline, from a UTM shapefile (as an example)
to a displayed map on screen, where we know which CRS's are in play
and where the transformations are performed. The initial steps of the
operation are:
10) Get Shapefile Metadata => CRS used for
geometryAttributeType
20) Get Shapefile coordinates
30) Build a geometry
40) Build a feature
...
?) Create a Renderer (? with a crs?)
?) add the feature to the chain
...
?) transform to projected coordinates (if in lat/long?) ?
...
?) transform to image coordinates
...
?) transform to screen coordinates
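The tail of the chain sketched above (projected coordinates -> image coordinates -> screen coordinates) can be illustrated by concatenating affine transforms. The window numbers and the class name TransformChain are hypothetical; in practice the world-to-screen step comes from the renderer's MathTransform.

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class TransformChain {
    // Hypothetical window: 500000..510000 E, 4000000..4010000 N (UTM),
    // rendered into a 500x500 image placed at (10, 20) on the screen.
    static AffineTransform worldToScreen() {
        AffineTransform worldToImage = new AffineTransform();
        worldToImage.translate(0, 500);              // flip y axis
        worldToImage.scale(500.0 / 10000, -500.0 / 10000);
        worldToImage.translate(-500000, -4000000);   // shift window origin

        // Image offset inside the larger screen/widget.
        AffineTransform imageToScreen =
            AffineTransform.getTranslateInstance(10, 20);

        // screen = imageToScreen o worldToImage (world applied first).
        AffineTransform chain = new AffineTransform(imageToScreen);
        chain.concatenate(worldToImage);
        return chain;
    }

    public static void main(String[] args) {
        Point2D centre = worldToScreen().transform(
            new Point2D.Double(505000, 4005000), null);
        System.out.println(centre);  // image centre, offset by (10, 20): ~(260, 270)
    }
}
```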
If someone could flesh out the chain so I have a working idea of
what's going on, I would appreciate it.
thanks,
adrian
_______________________________________________
User-friendly Desktop Internet GIS (uDig)
http://udig.refractions.net
http://lists.refractions.net/mailman/listinfo/udig-devel