Hawk includes multiple optional features to integrate the Thrift APIs with regular Eclipse-based tooling:
- A custom Hawk instance type that operates over the Thrift API instead of locally.
- An EMF abstraction that allows for treating remote models as local ones.
- An editor for the .hawkmodel model access descriptors used by the above EMF resource abstraction.
This page documents how these different features can be used.
Managing remote Hawk indexers¶
When creating a Hawk instance for the first time (using the dialog shown below), users can specify which factory will be used. The name of the selected factory will be saved into the configuration of the instance, allowing Hawk to recreate the instance in later executions without asking again. Hawk provides a default local factory: its LocalHawk instances operate in the current Java virtual machine. Users can also specify which Hawk components should be enabled.
A factory can also be used to "import" instances that already exist but Hawk does not know about. For the local case, these would be instances that were previously removed from Eclipse but whose folders were not deleted. The Eclipse import dialog looks like this:
The "Thrift API integration for Hawk GUI" feature provides a plugin that contributes a new indexer factory, ThriftRemoteHawkFactory, which produces ThriftRemoteHawk instances that use ThriftRemoteModelIndexer indexers. When creating a new instance, the factory will use the createInstance operation to add the instance to the server. When used to "import", the remote factory retrieves the list of Hawk instances available on the server through the listInstances operation of the Thrift API. Management actions (such as starting or stopping the instance) and their results are likewise translated between the user interface and the Thrift API.
The Hawk user interface provides live updates on the current state of each indexer, with short status messages and an indication of whether the indexer is stopped, running or updating. Management actions and queries are disabled during an update, to prevent data consistency issues. The Hawk indexer in the remote server talks to the client through an Artemis queue: please make sure Artemis has been set up correctly in the server (see the setup guide).
All these aspects are transparent to the user: the only difference is selecting the appropriate "Instance type" in the new instance or import dialogs and entering the URL to the Hawk Thrift endpoint. If the remote instance type is chosen, Hawk will only list the Hawk components that are installed in the server, which may differ from those installed in the client.
Editor for remote model access descriptors¶
There are many different use cases for retrieving models over the network, each with their own requirements. The EMF model abstraction uses a .hawkmodel model access descriptor to specify the exact configuration to use when fetching the model over the network.
.hawkmodel files can be opened by any EMF-compatible tool and operate just like a regular model.
To simplify the creation and maintenance of these .hawkmodel files, an Eclipse-based editor is provided in the "Remote Hawk EMF Model UI Feature". The editor is divided into three tabs: a form-based tab for editing most aspects of the descriptor in a controlled manner, another form-based tab for editing the effective metamodel to limit the contents of the model, and a text-based tab for editing the descriptor directly.
Here is a screenshot of the main tab:
The main form-based tab is divided into three sections:
The "Instance" section provides connection details for the remote Hawk instance: the URL of the Thrift endpoint, the Thrift protocol to use (more details in D5.6) and the name of the Hawk instance within the server. "Instance name" can be clicked to open a selection dialog with all the available instances.
The "Username" and "Password" fields only need to be filled in when using the .hawkmodel file outside Eclipse. When using the .hawkmodel file inside Eclipse, the remote EMF abstraction will fall back on the credentials stored in the Eclipse secure store if needed.
The "Contents" section allows for filtering the contents of the Hawk index to be read and changing how they should be loaded:
- By default, the entire index is retrieved (repository URL is '*', file pattern is '*' and no query is used). The "Repository URL", "File pattern(s)" and "Query language" labels can be clicked to open selection dialogs with the appropriate options.
- The default loading mode is "GREEDY" (send the entire contents of the model in one message), but various lazy loading modes are available.
- The contents of the index can be split over the different source files or not. While splitting by file is useful for browsing, some EMF-based tools may not be compatible with it.
- The "Default namespaces" field makes it possible to resolve ambiguous type names. For instance, both the IFC2x3 and the IFC4 metamodels have a type called IfcActor. Without this field, the query would need to specify which of the two metamodels should be used on every reference to IfcActor, which is unwieldy and prone to mistakes. With this field filled in, the query will resolve ambiguous type references to those of the IFC2x3 metamodel.
- The "Page size for initial load" field can be set to a value other than 0, indicating that during the initial load of the model, its contents should not be sent in one response message, but rather divided into "pages" of a certain size. It was observed that a GREEDY loading mode with an adequate page size can be faster to load than a lazy loading mode, while still keeping server memory and bandwidth requirements under control.
- The "Subscription" section allows users to enable live updates in the opened model through the watchGraphChanges operation and an Apache Artemis queue of a certain durability. To allow the server to recognize users that reconnect after a connection loss, a unique client ID should be provided.
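To make the shape of these options concrete, a descriptor could be sketched as a properties-style file like the one below. Every key name here is hypothetical — it simply mirrors the editor fields described above, not the actual .hawkmodel file format — so real descriptors should be produced with the editor rather than written by hand:

```properties
# Hypothetical sketch of a .hawkmodel descriptor: all key names are invented
# to mirror the editor fields described above, not the real file format.
instanceURL = http://localhost:8080/thrift/hawk/tuple
thriftProtocol = TUPLE
instanceName = myIndexer

# Contents: '*' retrieves everything, and no query is used by default.
repositoryURL = *
filePatterns = *
loadingMode = GREEDY
splitByFile = true
defaultNamespaces = http://example.org/ifc2x3
initialLoadPageSize = 0

# Subscription: durable Artemis queue plus a stable client ID, so the
# server can recognize this client if it reconnects after a connection loss.
subscribe = true
durability = DURABLE
clientID = my-client-id
```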
Effective metamodel tab¶
The effective metamodel editor tab presents a table that lists all the metamodels registered in the selected remote Hawk instance, their types, and their features (called "slots" by the Hawk API). It is structured as a tree with three levels, with the metamodels at the root level, the types inside the metamodels, and their slots inside the types.
The implicit default is that all metamodels are completely included, but users can manually include or exclude certain metamodels, types or slots within the types. This can be done through drop-down selection lists on the "State" column of the table, or through the buttons on the right of the table:
- "Include all" resets the entire table to the default state of implicitly including everything.
- "Exclude all" resets the entire table to excluding all metamodels.
- "Exclude" and "Include" only change the state of the currently selected element.
- "Reset" returns the currently selected element to the "Default" state.
The effective metamodel is saved as part of the .hawkmodel file, and uses both inclusion and exclusion rules to remain as compact as possible (as it will need to be sent over the network). The rules work as follows:
- A metamodel is included if it is "Included", or if it has the "Default" state and no metamodels are explicitly "Included".
- A type is included if it is not "Excluded" and its metamodel is included.
- A slot is included if it is not "Excluded" and its type is included.
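The three rules above can be sketched in plain Java. The class and method names here are illustrative only — they are not part of the actual Hawk API — but the boolean logic follows the rules exactly:

```java
public class EffectiveMetamodelRules {
    // Illustrative three-state flag, mirroring the "State" column in the editor.
    enum State { DEFAULT, INCLUDED, EXCLUDED }

    /**
     * A metamodel is included if it is "Included", or if it has the
     * "Default" state and no metamodels are explicitly "Included".
     */
    static boolean metamodelIncluded(State mm, boolean anyMetamodelExplicitlyIncluded) {
        return mm == State.INCLUDED
            || (mm == State.DEFAULT && !anyMetamodelExplicitlyIncluded);
    }

    /** A type is included if it is not "Excluded" and its metamodel is included. */
    static boolean typeIncluded(State type, boolean metamodelIncluded) {
        return type != State.EXCLUDED && metamodelIncluded;
    }

    /** A slot is included if it is not "Excluded" and its type is included. */
    static boolean slotIncluded(State slot, boolean typeIncluded) {
        return slot != State.EXCLUDED && typeIncluded;
    }

    public static void main(String[] args) {
        // With no explicit inclusions, "Default" metamodels are included.
        System.out.println(metamodelIncluded(State.DEFAULT, false)); // true
        // Once any metamodel is explicitly "Included", "Default" ones are not.
        System.out.println(metamodelIncluded(State.DEFAULT, true));  // false
        // An excluded slot stays out even inside an included type.
        System.out.println(slotIncluded(State.EXCLUDED, true));      // false
    }
}
```

Note how the "Default" state is only context-sensitive at the metamodel level: explicitly including one metamodel flips all "Default" metamodels to excluded, which is what keeps the serialized rules compact.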