An Introduction to Using TPTP's Automated GUI Recorder

Important: This user guide assumes TPTP 4.2.
This document provides an overview of the GUI recorder and playback tool developed under the TPTP test framework. The tool allows users to record GUI actions in the Eclipse platform and play them back to verify the functionality of their product. It is meant to let Eclipse developers automate regression tests that would otherwise consume many resources on repetitive verification of basic functionality.

Table of Contents:
1.0 Introduction - Description and Purpose
1.1 Downloading TPTP's Automated GUI Recorder
1.2 Known Limitations and Workarounds
2.0 Creating an Automated GUI Test Suite
2.1 Creating an Automated GUI Test Case
2.2 Running a Test Suite
2.2.1 Defining the Context
2.2.2 Running in Quick Mode
2.2.3 Running in Standard Mode
2.2.4 Using a Datapool
2.2.5 Using Variable Substitution
2.3 Verification Mechanisms
2.3.1 Verification by Selection
2.3.2 Inserting a Verification Hook
2.4 Widget Resolving Mechanism
2.4.1 The Adaptive Widget Resolver
2.5 Semantics of the Generated Script
3.0 Future Goals

1.0 Introduction - Description and Purpose

TPTP's Automated GUI Recorder (AGR) works by registering listeners with SWT's Display instance to record all incoming UI events that correspond directly to the user's actions. The recorder keeps track of the context of each event and generates a script that allows the tool to play the events back in the order in which they were recorded. The generated script corresponds to a single test case embedded in a test suite, which can be run in quick mode or standard mode (see section 2.2).
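The record-then-replay cycle can be sketched in plain Java as follows. This is an illustrative simplification only, not AGR's implementation: the real recorder listens on the SWT Display and serializes events into an XML macro, but the ordering guarantee is the same — events are replayed in exactly the order they were recorded.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class RecorderSketch {
    private final List<String> script = new ArrayList<String>();

    // Called once per incoming UI event while the recorder is active.
    public void record(String event) {
        script.add(event);
    }

    // Returns the recorded events in the exact order they arrived,
    // which is the order in which the player replays them.
    public List<String> playback() {
        return Collections.unmodifiableList(script);
    }
}
```

In AGR the recorded "events" are of course structured SWT events rather than strings, and the script is persisted as XML inside the test suite.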

AGR also gives users the ability to insert verification hooks based on a view, editor, or shell. Each inserted verification hook corresponds to a JUnit method with a parameter of type org.eclipse.ui.IViewPart, org.eclipse.ui.IEditorPart, or org.eclipse.swt.widgets.Shell. During playback, the recorder invokes the method corresponding to the verification hook with an argument corresponding to the editor, view, or shell selected by the user (see section 2.3).


1.1 Downloading TPTP's Automated GUI Recorder

Follow the steps below to obtain a copy of the automated GUI recorder:

  1. Click on the 'Latest Downloads' link from TPTP's home page: http://www.eclipse.org/tptp
  2. Select the version and the build that you wish to download. The Automated GUI Recorder was introduced in TPTP 4.1. Select a build corresponding to TPTP 4.1+.
  3. Once a desired build is selected, find and download the following items:
    • TPTP Runtime or SDK (ALL - Which includes Platform, Monitoring Tools, Tracing and Profiling Tools, and Testing Tools)
    • Agent Controller (For a desired platform)
    • Technology Preview (Automated GUI Recorder)
    • Obtain all items listed under 'Requirements'
Important: Ensure that the automated GUI recorder plug-in is placed in the 'plugins' directory of the Eclipse workbench that will be used as the context for running test suites. Test suite execution is initiated by startup code that detects the launch of automated test suites and hands control over to the test runner. Do not run this in a runtime Eclipse workbench; this is expected to change in the future.

1.2 Known Limitations and Workarounds

The list below indicates some of the known limitations of the Automated GUI Recorder; where possible, a workaround is provided. It is worth revisiting this list after reading the rest of this document.


2.0 Creating an Automated GUI Test Suite
Follow the steps below to create a TPTP automated GUI test suite:
  1. Automated GUI test suites must be stored in a plug-in project in order for required plug-ins of verification hooks to be resolved (section 2.3 provides more details). Create an Eclipse plug-in: File Menu > New > Project > Plug-in Development > Plug-in Project > Next.
  2. Specify 'TestContainer' as the Project name > Next > Make sure the option 'This plug-in will make contributions to the UI' is checked > Next > Uncheck the option 'Create a plug-in using one of the templates' > Finish.
  3. Create a new folder that will contain the test suite: Right Click the Project > New > Folder > Specify 'test-resources' as the name of the folder > Finish.
  4. Create a new package under the source directory: Right Click the 'src' Directory > New > Package > Specify 'org.eclipse.tests.verification.hooks' as the name of the package > Finish.
  5. Create the automated test suite: File Menu > New > Other > Test > Select TPTP Automated GUI Test (see figure 2.1) > Next. The first wizard page will allow you to specify the location of the class that will contain methods corresponding to each inserted verification hook.


    Figure 2.1 - New Wizard Page

  6. The newly created class needs a number of JAR files on its classpath in order to resolve imported APIs. Click 'Yes' to add the required JAR files to the plug-in's classpath.
  7. Specify the location of the class: Type in TestContainer/src for the source folder > Type in 'org.eclipse.tests.verification.hooks' for the package > Type in 'VerificationClass' for the Name of the source file > Next.
  8. The next wizard page allows users to specify the name and location of the test suite (see figure 2.2): Select 'test-resources' folder > Type 'AutomatedGUISuite' as the name of the test suite > Click Next


    Figure 2.2 - Test Suite Wizard Page

  9. Specify a description for the test suite to be created: Type in 'First GUI test suite' as the description > Click Finish. The new test suite should be created and opened (see figure 2.3). Notice that the class for the verification hooks will not be created until the first verification hook is inserted.


    Figure 2.3 - TPTP's Automated GUI Test Suite

2.1 Creating an Automated GUI Test Case

In this section we outline the steps required to create a simple test case by recording the UI actions for creating a new Java project:
  1. Create a new test case: Switch to the 'Test Cases' tab of the test suite > Click 'Record a Test Case' from the toolbar menu. A dialog box appears prompting for the test case name, description, and starting point (see figure 2.4). A starting point is a required field of every test case; it corresponds to one of the registered perspectives of the Eclipse workbench in which the test case is created. When the test case is about to be executed, the recorder switches to the perspective corresponding to the starting point and runs the content of its macro in the context of that perspective.
  2. Fill in the fields of the dialog as shown in figure 2.4 and click on OK :


    Figure 2.4 - New Test Case Dialog

  3. The workbench should switch to the perspective that was specified as the starting point of the test case, and the dialog box shown in figure 2.5 should appear. This dialog is the control center used to control the status of the recorder. It contains a termination button to stop the test case, a restart button for discarding the UI events collected so far, a button for toggling position-based recording on and off, a button for toggling artificial wait time recording on and off, and an insertion button for inserting verification hooks. The dialog always remains on top of every new shell so that the user can insert a verification hook at any time during a recording session. The control center dialog can be moved to any convenient location in the workbench by clicking the shell and dragging it to the desired location.


    Figure 2.5 - Control Center Dialog

  4. When the current status of the control center dialog is 'recording', all UI events are recorded. By default the recorder uses object-based recording, meaning UI events are captured and played back by identifying specific widgets using properties that uniquely identify them. Alternatively, the recorder can be used as a position-based recorder by clicking the toggle button with the cursor image. Turning on position-based recording captures events based on mouse and keyboard positions. Important: use position-based recording with caution, as it makes the test case dependent on the screen resolution it was created in. Use this option only where object-based recording cannot be used (e.g. when working with a canvas area). We are now ready to continue recording our test case. Create a new Java project: Click the File Menu > New > Project > Java Project > Next > Specify 'Java Project' as the project name > Finish

  5. Terminate the test case: Click the termination button of the control center dialog. The recorder switches back to the perspective the user was in before starting the test case. Open the test suite and notice that the test case has been created as shown in figure 2.6. The 'Test Cases' page of the editor displays a logical representation of the test case in the left pane and its corresponding macro in the right pane. The logical representation provides a tree view of the shells and commands of the macro, as indicated in figure 2.6. Selecting an entry in the logical representation automatically selects its corresponding macro fragment in the right pane. Save the test suite: File Menu > Save


    Figure 2.6 - Newly Created Test Case


2.2 Running a Test Suite

There are two different modes in which automated GUI test suites can be executed. Quick mode is a convenient way for test case designers to ensure that a macro runs properly. It uses the currently running workbench as the context for the selected test cases; it only serves to verify that a test case is designed properly and does not generate an execution history file. See section 2.2.2 for more details.

The standard mode requires a user to create a launch configuration item before the test suite can be executed. It is also expected that the test suite contains a run-time behavior that is used to determine the invocation sequence of the test cases. Running a test suite in standard mode will require a context pointing to a workbench to be defined (see section 2.2.1). The context is used to launch an instance of a workbench that is used to execute the test cases in. If the test execution involves only the local host, the context will be either the current workbench that the test suite has been opened in or point to another workbench on the local machine. If the test execution involves a remote host, the context will point to a workbench on the remote machine where the standalone Agent Controller is installed and running.


2.2.1 Defining the Context

There are three different ways to specify a context for a given test suite. Open the test suite that was created in section 2.0 and switch to the 'Overview' page if it is not the current page. Notice the checked option "Use current workbench as context". If this option is checked, the recorder determines the workbench in which the test suite is open and uses its startup.jar as the context for standard mode.

The user has the option to uncheck this option and browse to a startup.jar file that they wish to use as the context for their testing. If a custom defined context is specified, then the standard mode will use the respective jar file as the context in which to run the test suite.

The last way of specifying a custom context is through a deployment file, where a separate context can be defined for each location that the test suite is deployed in (see the workbench help content for finding out more about the purpose of deployment, location, and artifact files). Follow the steps below to determine how a deployment file can be used to specify a context based on location. The steps below are optional and we do not make use of the created resources later:

  1. Create a deployment file under the 'test-resources' folder: Right click on the 'test-resources' folder > New > Other > Test > Test Asset > Deployment > Next > Specify 'AutoGuiDeployment' as the name > Finish
  2. Create a location file under the 'test-resources' folder: Right click on the 'test-resources' folder > New > Other > Test > Test Asset > Location > Next > Specify 'AutoGuiLocation' as the name > Finish
  3. Create an artifact file under the 'test-resources' folder: Right click on the 'test-resources' folder > New > Other > Test > Test Asset > Artifact > Next > Specify 'AutoGuiArtifact' as the name > Finish
  4. Include the newly created test suite as a test asset of the artifact: Open the artifact file > Switch to the Test Asset page > Click Add > Select 'AutomatedGUISuite' > OK > Save the artifact file
  5. Specify the context in the location: Open the location file > Click the 'General Properties' link > Click on Add > Specify 'ECLIPSE_STARTUP_PATH' as the property name > Specify the full path of the startup.jar file of the Eclipse workbench as the property value > OK > Save the location
  6. Specify the artifact-location pair in the deployment file: Open the deployment file > Open the 'Pairs' page > Click Add beside the artifact content area > Click Browse > Select 'AutoGuiArtifact' > OK > Finish > Similarly, add the 'AutoGuiLocation' file as a location > Select the artifact and the location added > Click on the down arrow to add the pair (see figure 2.7) > Save the deployment file


    Figure 2.7 - Deployment File

The newly created deployment file can now be used when launching the test suite in standard mode (see section 2.2.3 for more details). The recorder will first attempt to find a specified context in location files that are associated with the launch. If that fails then it will resort to the path specified in the test suite to resolve the context workbench.
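The context-lookup order described above (location file first, then the path stored in the test suite) can be sketched as follows. The class and method names here are illustrative, not AGR API; only the ECLIPSE_STARTUP_PATH property name comes from the steps above.

```java
import java.util.Map;

class ContextLookupSketch {
    // Returns the path to the startup.jar used as the context workbench:
    // the location file's ECLIPSE_STARTUP_PATH property wins; otherwise
    // the path stored in the test suite itself is used as a fallback.
    public static String resolveContext(Map<String, String> locationProperties,
                                        String testSuitePath) {
        String fromLocation = locationProperties.get("ECLIPSE_STARTUP_PATH");
        return (fromLocation != null && fromLocation.length() > 0)
                ? fromLocation : testSuitePath;
    }
}
```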

2.2.2 Running in Quick Mode

Before running test cases in quick mode, users are expected to restore the workbench to the state it was in before the test case was created. Follow the steps below to run the newly created test case in quick mode:
  1. Delete the 'Java Project'
  2. Open the test suite and switch to the 'Test Cases' page
  3. Select the newly created test case and click on 'Play Test Case(s)' option from the toolbar menu.
The newly created test case should create the Java project that was deleted in the first step. Multiple test cases can be executed in quick mode by simply selecting the desired test cases in the 'Test Case' page and clicking the play toolbar button.

2.2.3 Running in Standard Mode

Follow the steps below to run the test suite in standard mode:
  1. As stated previously, a run-time behavior must be defined before running a test suite in standard mode. Define the behavior of the test suite: Open the test suite > switch to the 'Behavior' page > Click on 'Add Child' > invocation > Select 'Create Project' > OK > Save the test suite.
  2. Open the launch configuration dialog: Click on Run > Run...
  3. Create the launch configuration item: Click the 'Test' item > New > Type in 'Automated GUI Test' as the name > Select TestContainer/test-resources/AutomatedGUISuite as the test to be run > Select local_deployment as the deployment (see figure 2.8)


    Figure 2.8 - Launch Configuration Item

  4. Click on 'Run' to begin the execution of the test suite
It may take a minute until a second instance of the workbench is launched. Once the workbench is launched, the perspective is switched to the starting point of the test case about to be executed, and the macro of that test case is executed in the context of its starting point. The workbench is terminated once all test cases are completed. An execution history file is generated under the plug-in after running the test suite in standard mode; it should be labeled 'AutomatedGUISuite.execution'. Open this file and switch to the 'Events' page. Notice the entry indicating the verdict of the test case invocation (see figure 2.9). If the event content is empty, it may be the case that not all the data has been passed to the agent; close the execution history file and reopen it after a few seconds.


Figure 2.9 - Execution History File


2.2.4 Using a Datapool

AGR gives users the ability to replace the user inputs of each test case with datapool entries. This is useful when a test case's user input differs across the environments in which it is executed. This section uses the same test case to create two projects with different names: the first project will be named 'Java Project' and the second 'Custom Project'. Follow the steps below to make use of a datapool within an automated GUI test suite:
  1. Create a datapool: Right click on 'test-resources' > New > Other > Test > Test Assets > Datapool > Next > Enter 'AutoGuiDatapool' as the name of the datapool > Finish
  2. Add the project name variable: Click on the table column labeled 'Variable1::String' > change the name to 'ProjectName' > OK > Enter 'Custom Project' as the value of the variable (see figure 2.10) > Save the datapool


    Figure 2.10 - Datapool File

  3. Add a second behavior: Open the test suite > Open the 'Behavior' page > Click on Add Sibling > invocation > Select Create Project > OK
  4. Select the newly added behavior and notice that its 'Detailed Properties' section contains a table displaying the user inputs of the test case (see figure 2.11). Add the datapool entry: Select 'Java Project' > Click on 'Link to Datapool' > Select the datapool entry > Select the 'ProjectName' variable > OK > Save the test suite


    Figure 2.11 - User input values of test case

  5. Run the test suite in standard mode as outlined in section 2.2.3.
Two projects should be created in the context workbench and the execution history file should indicate two test invocations.
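The input resolution performed for each behavior invocation can be sketched as follows. This is an illustrative model only: the "datapool:" prefix is an invented convention for this sketch, not AGR syntax — in AGR the link between an input and a datapool variable is stored in the test suite model.

```java
import java.util.Map;

class DatapoolInputSketch {
    // Resolves one user input of a test case invocation: inputs that were
    // linked to a datapool variable (marked here with an invented
    // "datapool:" prefix) are looked up in the current datapool row;
    // all other inputs are used literally as recorded.
    public static String resolveInput(String input, Map<String, String> datapoolRow) {
        String prefix = "datapool:";
        if (input.startsWith(prefix)) {
            String value = datapoolRow.get(input.substring(prefix.length()));
            return value != null ? value : input;
        }
        return input;
    }
}
```

With a row mapping ProjectName to 'Custom Project', the linked input resolves to 'Custom Project' while the unlinked invocation keeps the recorded 'Java Project', which is why the run above creates two differently named projects.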
2.2.5 Using Variable Substitution

Using a variable provides a mechanism for specifying relative paths to resources needed by a test case, making the test case less dependent on the environment in which it is executed. Using variables, testers can, for example, specify paths relative to the project containing the test suite or to the workspace the test suite is running under. To illustrate this feature, a test case will be created that imports a resource into a project in the workspace using a relative path:
  1. Create an arbitrary text file called 'MyTestResource.txt' under a temporary directory (e.g. D:\temp) on your file system
  2. Quick run the 'Create Project' test case to create a Java project named 'Java Project' under your workspace.
  3. Add a test case: Open AutomatedGUISuite > Switch to the Test Cases tab > Click on Add > Give it the name 'Import Resource Relatively' > Select the Java perspective as the starting point > Click on OK to start recording the test case.
  4. Specify the directory of 'MyTestResource.txt' file: Select 'Java Project' > Click File > Import > File System > Next > Manually enter the directory containing 'MyTestResource.txt' (DO NOT click on the browse button because of limitations mentioned in section 1.2) and give focus to another control such as the "Into Folder" text box.
  5. Select 'MyTestResource.txt': Expand the directory that appears in the list on the left hand side > Check the text file, 'MyTestResource.txt' > Click on Finish
  6. Terminate the test case: Click on the terminate button of the control center dialog
  7. Move the text file 'MyTestResource.txt' from the temporary directory to the 'TestContainer' project containing the test suite.
  8. Add the new test case to the behavior: Open the test suite > Switch to the behavior tab > Add the test case 'Import Resource Relatively'
  9. Change the absolute path of the test case to a relative one: Select the test case invocation added > Click on the path displayed under the user input table > Click on 'Link to Custom Input'. The dialog shown in figure 2.11.1 appears.


    Figure 2.11.1 - Custom User Input Dialog

    Erase what appears in the dialog text box > Use the drop-down toolbar item to select 'Project Location of the Test Suite' > Click OK to close the dialog. Notice that variable substitutions only take effect once the test suite is launched in standard mode. If variable substitution is also desired in quick mode, users must modify the macro themselves: the absolute path recorded in the macro needs to be changed to use variables. To use a variable in a macro, use the syntax "%VARIABLE-NAME%" (e.g. %testsuiteProjectLocation% points to the project containing the test suite and %workspaceLocation% points to the workspace path that the test case is running under).
  10. Now run the test suite in standard mode to see variable substitution in action. Notice that instead of the path recorded in the macro, the path to 'Java Project' is used.
  11. Once the test suite completes executing, delete the test case invocation corresponding to the 'Import Resource Relatively' test case: Open the test suite > Switch to the behavior tab > Delete the test case invocation corresponding to 'Import Resource Relatively'.
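The %VARIABLE-NAME% expansion described above can be sketched as follows. This is a minimal illustration, assuming simple token replacement; the class name is invented and AGR's actual substitution logic may differ.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class MacroVariableSubstituter {
    // Matches tokens of the form %VARIABLE-NAME% inside macro text.
    private static final Pattern VAR = Pattern.compile("%([a-zA-Z0-9_-]+)%");

    // Replaces each %VARIABLE-NAME% token with its value;
    // unknown variables are left untouched.
    public static String substitute(String macroText, Map<String, String> vars) {
        Matcher m = VAR.matcher(macroText);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = vars.get(m.group(1));
            m.appendReplacement(out,
                Matcher.quoteReplacement(value != null ? value : m.group(0)));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

For example, with workspaceLocation bound to the workspace path, a macro path of %workspaceLocation%/MyTestResource.txt expands to an absolute path at playback time.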

2.3 Verification Mechanisms

Often, running a macro alone is insufficient to verify the functionality of a given test case. AGR offers two verification mechanisms that allow the user to verify correct functionality. The first, verification by selection, is a convenient and quick method that allows the user to verify the presence of expected widgets. It works best for verifying the presence of items in a tree or a list. For example, to verify that 'item1' appears in a tree, the user only needs to select the item; the recorder detects the selection and includes it in the generated script. When the test case is played back, the absence of 'item1' from the tree causes a test case failure. See section 2.3.1 for more details.

The second mechanism requires the user to insert verification hooks. Each hook corresponds to a JUnit method with a parameter of type org.eclipse.ui.IEditorPart, org.eclipse.ui.IViewPart, or org.eclipse.swt.widgets.Shell. AGR invokes the method with the argument corresponding to the user's selection of an editor, view, or shell. The argument is then used to extract the data of the view, editor, or shell and perform assertions to ensure the correctness of the program under test. See section 2.3.2 for more details.


2.3.1 Verification by Selection

To demonstrate this verification mechanism, we will create a test case that imports a log file. An item within the log viewer's tree will then be selected to verify the presence of the corresponding CBE item. Follow the steps below:
  1. Create a new test case: Open the test suite > switch to the 'Test Cases' page > Add > Specify 'Log Import' as the name of the test case > Select 'Profiling and Logging' as the starting point > OK
  2. Import an Apache access log file: Click the File Menu > Import > Log File > Next > Add > Select Apache HTTP Server access log > Switch to the details tab > Specify <ECLIPSE-HOME>/plugins/org.eclipse.hyades.logging.parsers_<VER-NUM>/config/Apache/access/v1.3.26/example.log as the path > OK > Finish
  3. Select the first warning log record with the message: "GET /scripts/root.exe?/c+dir HTTP/1.0" 404 279
  4. Terminate the test case: Click the termination button
  5. Add a new behavior: Open the test suite > Switch to the 'Behavior' page > Click Add Sibling > invocation > Select 'Log Import' > OK > Save the test suite
  6. Run the test suite in standard mode and notice that the execution history file indicates a success for the log import invocation.
  7. To further demonstrate that this mechanism works, we will modify the macro to force the recorder not to find the selected item: Open the test suite > switch to the 'Test Cases' page > Select 'Log Import' > Locate the line that looks similar to "<item path="org.eclipse.swt.widgets.TreeItem#{{"GET /scripts/root.exe?/c+dir HTTP/1.0" 404 279}}..."
  8. Once the line in the previous step is located, change 'GET' to 'INVALID' and save the test suite
  9. Run the test suite in standard mode and notice that the generated execution history file indicates a failure for the test invocation, as shown in figure 2.12.


    Figure 2.12 - Execution History File


2.3.2 Inserting a Verification Hook

Follow the steps below to add a test case with a verification hook:
  1. Add a new test case: Open the test suite > Switch to the 'Test Cases' page > Add > Specify 'Log Import with Verification' as the name > Choose Profiling and Logging as the starting point > OK
  2. Import the log file: Select the previously imported log file entry > Click Edit > Switch to the 'Destination' tab > Change the monitor from 'Default Monitor' to 'Sec-Monitor' > OK > Finish
  3. Insert a verification hook: In the control center dialog enter 'verifyImport' in the text field > Click 'insert'. The status of the recorder should change to 'select a target (a view, editor, or dialog)'. Select the log view. The status of the recorder should change back to 'recording' once the code has been generated.
  4. Terminate the test case and save the test suite.
  5. A verification class should have been generated under the src directory of the plug-in containing the test suite. Open the class 'org.eclipse.tests.verification.hooks.VerificationClass' and notice that it contains the method 'verifyImport' with an argument of type 'org.eclipse.ui.IViewPart'.
  6. Add required dependencies: Open the manifest file of the plug-in containing the test suite > Switch to its 'Dependencies' page > Add the following two plugins: org.eclipse.tptp.platform.log.views and org.eclipse.tptp.platform.models. > Save the manifest file
  7. Add the following code to the 'verifyImport' method. This snippet of code simply verifies that the first entry of the log file is as what is expected:

    /* LogViewer, LogContentProvider, and CBECommonBaseEvent come from the
       org.eclipse.tptp.platform.log.views and org.eclipse.tptp.platform.models
       plug-ins added as dependencies in the previous step. */
    LogViewer logViewer = (LogViewer) arg0;
    LogContentProvider logContentProvider = (LogContentProvider) logViewer.getLogPage().getView().getViewer().getContentProvider();
    Object[] listOfCBEs = logContentProvider.getElements(logViewer.getLogPage().getView().getViewer().getInput());

    /* Make sure the log is not empty and its first element has the expected message */
    assertTrue(listOfCBEs.length > 0);
    assertEquals(((CBECommonBaseEvent) listOfCBEs[0]).getMsg(), "\"GET /WSsamples HTTP/1.1\" 301 315");

  8. Add a new behavior: Open the test suite > Switch to 'Behavior' page > Select the last entry > Click 'Add Sibling' > invocation > Select 'Log Import with Verification' > OK > Save the test suite
  9. Run the test suite in standard mode and notice that its generated execution file indicates a success for the newly added test invocation.
  10. We will now change the value of the assert statement to force a failure: Open the VerificationClass > Replace the last line of verifyImport with the following:
    assertEquals(((CBECommonBaseEvent)listOfCBEs[0]).getMsg(), "\"INVALID /WSsamples HTTP/1.1\" 301 315");
  11. Run the test suite in standard mode again and ensure that the newly added invocation indicates a failure (figure 2.13):


    Figure 2.13 - Execution History File


2.4 Widget Resolving Mechanism

One of the main obstacles for object-based GUI recorders is accurately identifying widgets within a given context. This problem would be trivial if there existed a global, persistent identifier for each widget; since no such ID is available, heuristic algorithms must be used to identify each widget. AGR includes an extension point that allows clients to register their own widget resolvers; see the schema of the org.eclipse.tptp.test.auto.gui.widgetResolver extension point for more details. The next section describes a widget resolver that is included with the tool. Users can choose the adaptive widget resolver to resolve widgets or register their own custom extension.

2.4.1 The Adaptive Widget Resolver

The adaptive widget resolver reads an XML file to load the registered widgets and identifies a widget based on the properties defined for it. The XML file is located under <ECLIPSE-HOME>/plugins/org.eclipse.tptp.test.auto.gui/auto-gui/widgetReg.xml. A widget can be registered by adding an entry similar to the following:

<class name = "org.eclipse.debug.core.ILaunchConfigurationType" matchThreshold = "7.0">
  <method name = "getPluginIdentifier" weight = "0.2"/>
  <method name = "getSourcePathComputer" weight = "0.1"/>
  <method name = "getCategory" weight = "0.1"/>
  <method name = "getIdentifier" weight = "0.5"/>
</class>

The class name must match either the widget class name or the class name of the object returned by the widget's getData() method. Each method element is a property that helps identify the corresponding widget, and the weight attribute indicates how strongly the property identifies it. The 'toString()' method is invoked on the object returned by each method in order to determine the property value. If the accumulated weight of the matching properties equals or exceeds the value of the matchThreshold attribute, the widget is identified as the right one. If 'matchThreshold' is not specified for a class, the 'globalMatchThreshold' attribute is used instead.
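The accumulated-weight rule described above can be sketched as follows. This is an illustrative model, not AGR's implementation: property names stand in for the method names of the XML entry, and the weight and threshold values in the usage below are examples rather than the ones shown earlier.

```java
import java.util.Map;

class WidgetMatchSketch {
    /* expected: property name -> value recorded for the widget;
       actual:   property values read from a candidate widget at playback;
       weights:  per-property weights from the registration entry.
       Returns true when the summed weights of the matching properties
       reach the class's matchThreshold. */
    public static boolean matches(Map<String, String> expected,
                                  Map<String, String> actual,
                                  Map<String, Double> weights,
                                  double matchThreshold) {
        double accumulated = 0.0;
        for (Map.Entry<String, String> e : expected.entrySet()) {
            if (e.getValue().equals(actual.get(e.getKey()))) {
                Double w = weights.get(e.getKey());
                if (w != null) {
                    accumulated += w.doubleValue();
                }
            }
        }
        return accumulated >= matchThreshold;
    }
}
```

A candidate whose high-weight property (e.g. getIdentifier) disagrees with the recorded value will fall short of the threshold and be rejected, even if lower-weight properties match.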

To register your own entry, simply modify and save the XML file 'widgetReg.xml'. The file is reloaded whenever its size has changed since the previous time it was read. There is also a default entry (shown below) that applies to all widgets; it is meant to serve as the missing global, persistent identifier mentioned earlier.

<default>
  <method name = "getData" weight = "1.0">
    <argument value = "tptp.persistent.id"/>
  </method>
</default>


2.5 Semantics of the Generated Script

As you may have already noticed, the generated macro script format is XML. At the time of writing there was no schema defined for the script. The list below describes the semantics of the generated XML; it covers only elements corresponding to 'commands':

select
    Purpose: Indicates a selection
    Child elements: selected-item (the item selected; attribute path: the path of the item selected); parent (the parent element leading to the selection; attribute widgetId: the id of the parent)
    Root attributes: contextId (the id of the context), widgetId (the id of the widget), x-coord and y-coord (used for drop-down toolbar buttons only), detail (the detail of the event), selection (indicates whether the item is enabled or disabled)

item-check
    Purpose: Indicates a check box item
    Child elements: item (the item checked; attribute path: the path of the item checked)
    Root attributes: contextId (the id of the context), widgetId (the id of the widget), value (indicates whether the check box was checked or unchecked)

choice-select
    Purpose: A choice selection, such as choosing a specific tab or a combo box item
    Child elements: none
    Root attributes: contextId (the id of the context), widgetId (the id of the widget), choiceId (the id of the selection)

item-expand
    Purpose: Indicates that an item is expanded (such as a tree item)
    Child elements: item (the item expanded; attribute path: the path of the item expanded)
    Root attributes: contextId (the id of the context), widgetId (the id of the widget), value (indicates whether the item is expanded or collapsed)

focus
    Purpose: An item gains focus
    Child elements: none
    Root attributes: contextId (the id of the context), widgetId (the id of the widget)

modify
    Purpose: A modification by the user
    Child elements: none
    Root attributes: contextId (the id of the context), widgetId (the id of the widget)

default-select
    Purpose: A default selection
    Child elements: item (the item selected; attribute path: the path of the item selected)
    Root attributes: contextId (the id of the context), widgetId (the id of the widget)

item-select
    Purpose: A custom selection by the user
    Child elements: item (the item selected; attribute path: the path of the item selected)
    Root attributes: contextId (the id of the context), widgetId (the id of the widget)

verification
    Purpose: A verification point
    Child elements: none
    Root attributes: contextId (the id of the editor, view, etc.), location (the location containing the verification class), resource (the verification class name), hook (the method signature corresponding to the verification hook)

wait
    Purpose: Pauses the runner. If the time-to-wait attribute is present, the runner waits that many milliseconds before proceeding to the next command; otherwise it waits until all user Eclipse jobs are completed before proceeding.
    Child elements: none
    Root attributes: time-to-wait (optional; the time in milliseconds that the runner should sleep before proceeding to the next command)

close-workbenchpart
    Purpose: Closes workbench parts (e.g. editors, views)
    Child elements: none
    Root attributes: contextId (indicates whether the context is an editor or a view), widgetId (the identifier of the workbench part)

key-up
    Purpose: A position-based command used to indicate key up events
    Child elements: none
    Root attributes: detail (the key that corresponds to the key up event), ischarset (the string literal 'true' or 'false', indicating whether Event.character (true) or Event.keyCode (false) should be used)

key-down
    Purpose: A position-based command used to indicate key down events
    Child elements: none
    Root attributes: detail (the key that corresponds to the key down event), ischarset (the string literal 'true' or 'false', indicating whether Event.character (true) or Event.keyCode (false) should be used)

key-press
    Purpose: A position-based command with the same effect as a key-down followed by a key-up
    Child elements: none
    Root attributes: detail (the key that corresponds to the key press event), ischarset (the string literal 'true' or 'false', indicating whether Event.character (true) or Event.keyCode (false) should be used)

mouse-up
    Purpose: A position-based command used to indicate mouse up events
    Child elements: none
    Root attributes: detail (the mouse button that sent the mouse up event), x-coord (x-coordinate of the cursor location), y-coord (y-coordinate of the cursor location)

mouse-down
    Purpose: A position-based command used to indicate mouse down events
    Child elements: none
    Root attributes: detail (the mouse button that sent the mouse down event), x-coord (x-coordinate of the cursor location), y-coord (y-coordinate of the cursor location)

mouse-click
    Purpose: A position-based command with the same effect as a mouse-down followed by a mouse-up
    Child elements: none
    Root attributes: detail (the mouse button that sent the mouse click event), x-coord (x-coordinate of the cursor location), y-coord (y-coordinate of the cursor location)
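To make the semantics concrete, here is a hypothetical fragment in the spirit of a generated macro. It is assembled from the element and attribute names documented above; the exact layout AGR emits may differ, and all id and value strings below are invented placeholders.

```xml
<!-- Hypothetical macro fragment; ids and values are placeholders only. -->
<select contextId="shell.new-project" widgetId="tree.project-types" detail="4">
  <selected-item path="org.eclipse.swt.widgets.TreeItem#{{Java Project}}"/>
</select>
<modify contextId="shell.new-project" widgetId="text.project-name"/>
<wait time-to-wait="500"/>
```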

3.0 Future Goals

The items listed below are some of the enhancements planned for the near future:
  • Support nested calls in adaptive widget resolver
  • Further generalize the recorder framework
  • Support native dialogs (such as the browse dialog)
  • Allow a convenient way for users to debug verification hooks
  • Support RCP applications
  • Allow object-based recording with GEF editors