Eclipse Community Forums

Forum: TMF (Xtext)
 Topic: [2.3.1 to 2.6.2] Migrating ScopeProvider
Re: [2.3.1 to 2.6.2] Migrating ScopeProvider [message #1424622 is a reply to message #1424256] Tue, 16 September 2014 07:33
Sebastian Zarnekow is currently offline Sebastian Zarnekow
Messages: 2828
Registered: July 2009
Senior Member
Hi Ingo,

A bug report is appreciated. It should work out of the box.

The IllegalStateException is usually a follow-up problem. Do you see a
cause for that one?

Regards,
Sebastian
--
Looking for professional support for Xtext, Xtend or Eclipse Modeling?
Go visit: http://xtext.itemis.com

On 15.09.14 20:22, Ingo Meyer wrote:
> Hi Sebastian,
>
> yes, my block expression is not contained in an XExpression and is
> associated with a JvmOperation. I really like the idea of using an "it"
> parameter, and the type of the parameter can be inferred from the
> container of the block in most cases. This works perfectly for both
> cases, with and without "it":
>
> { it.myInterfaceStringSetter = 'Nummer' myInterfaceNumberSetter = 80 }
>
> But in some cases the MyJavaInterface has a generic parameter which is
> the type of another expression of MyModelElement (as in my previous
> post). As I understand it so far, JvmTypesBuilder.inferredType should be
> used here, so this is calculated later. I tried the following, where
> x.config is the expression we are talking about and x.binding is the
> other one, which should serve as the generic parameter:
>
> def private void infer(MyModelElement x, JvmDeclaredType p) {
>     p.members += x.toMethod(x.name, inferredType) [
>         if (x.binding != null)
>             body = x.binding
>     ]
>
>     if (x.config != null) {
>         val type = typeRefs.getTypeForName(typeof(MyJavaInterface), x, inferredType(x.binding))
>         p.members += x.config.toMethod(x.name + 'Config', type) [
>             parameters += x.config.toParameter('it', type)
>             body = x.config
>         ]
>     }
> }
>
>
> Since I used "inferredType( x.binding )", I expected this to work out of
> the box. Unfortunately it breaks the editor with a
> "java.lang.IllegalStateException: The TypeResolutionStateAdapter was
> removed while resolving" in
> DefaultReentrantTypeResolver.reentrantResolve(DefaultReentrantTypeResolver.java:113).
>
>
> Am I doing something wrong? Any hints, or should I file a bug report?
>
> Thanks for the great help,
> Ingo
Re: [2.3.1 to 2.6.2] Migrating ScopeProvider [message #1424786 is a reply to message #1424622] Tue, 16 September 2014 12:16
Ingo Meyer is currently offline Ingo Meyer
Messages: 127
Registered: July 2009
Senior Member
Yes, the IllegalStateException seems to be a follow-up problem.
The cause is in InferredTypeIndicator#getTypeReference, where the type could not be resolved. When it enters the branch "if (expression != null && resolvedTypes != null)", the IllegalStateException is thrown, but it originates from a completely different expression. That expression is normally error-free, so this must be a bug.
I will try to build an example based on the domain-model example and report it...

Greetings,
Ingo
 Topic: ConcurrentModificationException with xtend-maven-plugin:2.7.1
Re: ConcurrentModificationException with xtend-maven-plugin:2.7.1 [message #1424801 is a reply to message #1422669] Tue, 16 September 2014 12:41
Johannes Dorn is currently offline Johannes Dorn
Messages: 18
Registered: June 2013
Junior Member
Now that 2.7.1 isn't working, is there an update site where we can get 2.7.0? Preferably one that is stable. I've only found a zipped archive.
Forum: Mihini
 Topic: Mihini Installation on Raspberry Pi
Mihini Installation on Raspberry Pi [message #1424741] Tue, 16 September 2014 10:46
Hema Vadlamudi is currently offline Hema Vadlamudi
Messages: 1
Registered: September 2014
Junior Member
Hi,
I am trying to install and configure Mihini on a Raspberry Pi, following this website: http://wiki.eclipse.org/Mihini/Build_Mihini . I am not able to execute the telnet command "telnet localhost 2000"; running it gives the error below:
$ telnet localhost 2000
Trying 127.0.0.1...
Trying ::1...
telnet: Unable to connect to remote host: Address family not supported by protocol


Can anyone help me solve this problem?
Forum: NatTable
 Topic: How to obtain the cellValue in IMouseAction
How to obtain the cellValue in IMouseAction [message #1424735] Tue, 16 September 2014 10:39
Phaedrus The Greek is currently offline Phaedrus The Greek
Messages: 16
Registered: August 2011
Junior Member
Is this the correct way to obtain the cell's dataValue?

int sourceColumnPosition = ((NatEventData) event.data).getColumnPosition();
int sourceRowPosition = ((NatEventData) event.data).getRowPosition();

int columnPosition = LayerUtil.convertColumnPosition(natTable, sourceColumnPosition, bodyDataLayer);
int rowPosition = LayerUtil.convertRowPosition(natTable, sourceRowPosition, bodyDataLayer);

Object dataValue = natTable.getDataValueByPosition(columnPosition, rowPosition);


Is there any way to do it without a reference to the bodyLayer?

Thanks,

Jay
Re: How to obtain the cellValue in IMouseAction [message #1424742 is a reply to message #1424735] Tue, 16 September 2014 10:47
Dirk Fauth is currently offline Dirk Fauth
Messages: 1284
Registered: July 2012
Senior Member
Is the columnPosition different from the sourceColumnPosition?

IIRC you should get the same result without the position transformation, as the mouse event is executed on the visible part, so natTable.getDataValueByPosition() should return the correct value when given the NatEventData values.
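
Something like this (a sketch, reusing the variables from your snippet; no layer conversion needed):

// Sketch: read the value directly with the positions carried by the NatEventData.
// The mouse event is fired on the visible (NatTable) coordinate system, so the
// positions can be used on the NatTable itself.
NatEventData eventData = (NatEventData) event.data;
int columnPosition = eventData.getColumnPosition();
int rowPosition = eventData.getRowPosition();

Object dataValue = natTable.getDataValueByPosition(columnPosition, rowPosition);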
Forum: Papyrus
 Topic: Hide/remove base_Class property from stereotypes
Re: Hide/remove base_Class property from stereotypes [message #1424757 is a reply to message #1422706] Tue, 16 September 2014 11:18
Camille Letavernier is currently offline Camille Letavernier
Messages: 452
Registered: February 2011
Senior Member
Hi Luis,

The duplicates shouldn't happen. Maybe you specified too many extension links to the same metaclass? For example, you may have specified that Stereotype S1 is an extension of Class, and that Stereotype S2, which is a specialization of S1, is also an extension of Class? (In that case you would have specified the same information twice.)

However, if I reproduce these steps, I indeed have a "duplicates" annotation in my profile, but still don't see any "base_X" reference in my properties view. So this is not sufficient.

The Ecore definition itself seems valid, but the actual UML profile also needs to be checked. For example, your base_X properties should look like the following:

<packagedElement xmi:type="uml:Stereotype" xmi:id="_V1zaID2SEeSJvbA25IPzgA" name="Stereotype2">
  <ownedAttribute xmi:type="uml:Property" xmi:id="_XeFmkD2SEeSJvbA25IPzgA" name="base_Class" association="_XeFmkT2SEeSJvbA25IPzgA">
    <type xmi:type="uml:Class" href="pathmap://UML_METAMODELS/UML.metamodel.uml#Class"/>
  </ownedAttribute>
</packagedElement>


With an "association=_xxxxxxxx" attribute (Otherwise it is considered to be a standard Property; not a metaclass extension; and it will be displayed in the properties view)


Camille Letavernier
Papyrus developer
Forum: ATL
 Topic: How to add new model elements in refining mode
How to add new model elements in refining mode [message #1424771] Tue, 16 September 2014 11:47
Rafael Durelli is currently offline Rafael Durelli
Messages: 58
Registered: September 2012
Member
Hello guys, I am trying to create new model elements in refining mode... While searching, I found that I should use "including" instead of "newInstance()". As can be seen in the source code below, the first "including" works fine. However, when I need to create another element inside the one created earlier, I get an error message like this: org.eclipse.m2m.atl.engine.emfvm.VMException: Feature codeElement does not exist on Element

What am I doing wrong? Can someone help me? Thanks in advance.


module testingAllURI;
create OUT : build, OUT1 : code, OUT2 : conceptual, OUT3 : core, OUT4 : data, OUT5 : event, OUT6 : kdm, OUT7 : plat, OUT8 : source, OUT9 : structure, OUT10 : ui, OUT11 : action refining IN : build, IN1 : code, IN2 : conceptual, IN3 : core, IN4 : data, IN5 : event, IN6 : kdm, IN7 : plat, IN8 : source, IN9 : structure, IN10 : ui, IN11 : action;


rule addClassToPackage {
	
	from 
		source : code!Package (source.name = 'testelucas')
	to
		target: code!Package (
			codeElement <- source.codeElement->including(newClassUnit) //here it works
		
		),
		newClassUnit: code!ClassUnit (
			
			name <- 'theNameOfTheClasse',
			codeElement <- newClassUnit.codeElement->including(newStorableUnit)//I got the error here....
		
		),
		newStorableUnit: code!StorableUnit (
			
			name <- 'theAttribute'
		
		)
		

}

Forum: Oomph
 Topic: Git clone task how-to
Re: Git clone task how-to [message #1424547 is a reply to message #1424098] Tue, 16 September 2014 05:16
Ed Merks is currently offline Ed Merks
Messages: 26046
Registered: July 2009
Senior Member
Christian,

Comments below.

On 15/09/2014 3:04 PM, Christian W. Damus wrote:
> Hi, Cedric, Ed,
>
> If you look at the Papyrus model as an example, be aware that the
> pattern for specifying targlets and project imports there is quite
> different. Papyrus comprises such a large number of plug-in projects
> that most of the development team doesn't maintain workspaces
> containing all of those projects. Instead, the modus operandi is to
> create a PDE target including an entire nightly build of Papyrus and
> import only a subset of the plug-ins into the workspace that one
> intends to edit.
An interesting improvement of the targlet design
(https://bugs.eclipse.org/bugs/show_bug.cgi?id=439951) is that it can
and will materialize both a source project and the corresponding binary
IU. So depending on what you make visible via the source locator and
via the repository URLs, you can get both or either.
>
> For most cases, this is probably not the model that you most want to
> follow. That would be the EMF model. :-)
>
> However, Ed, you've got me thinking again about how I might revise
> this scheme to simplify the Papyrus setup model and improve its
> maintainability. Currently, I've got various sub(-sub)-projects
> defined that partition our plug-ins into various
> functional/architectural units. The Papyrus build does actually group
> plug-ins by feature in no small degree, so I'm wondering whether I
> can't leverage that to eliminate some of the redundancy of
> project-import tasks.
I imagine many of these subsets are driven by the transitive
dependencies of a feature... So if you include the p2 URL for a full
SDK build in the targlet task, and specify a source locator that helps
restrict which projects are visible as IUs, you should be able to
achieve a similar goal.

Note also that (much like Buckminster) you can specify a Component
Definition to turn a normal project into an IU with an ID, a version
range, and development-time requirements, or a Component Extension to
specify additional development-time requirements.
>
> My idea is like this: I would have a common targlet task that
> installs the latest nightly build via the Papyrus SDK feature. That is
> still needed in any case. But, the sub-projects would each define a
> targlet with some specific Papyrus feature required (e.g., "SysML
> Diagrams") and a source locator that would fetch those projects from
> the local git clone.
Exactly.
>
> My questions are:
>
> * would the source locator cause the relevant plug-in projects to be
> imported from git even if they are
> also installed in the target platform via the common Papyrus SDK
> nightly build targlet?
In the end you only have a single target definition so depending on
what's visible via source locators and what's visible via p2 URLs you'll
get either only a source project, only a binary IU, or both.
> * and if so, would Oomph perhaps not install in the PDE target any
> plug-ins (from the common SDK
> build targlet) that were imported into the workspace by some other
> targlet's source locator?
It will generally try to install both a source version and a binary IU,
with preference for the former if available and with the latter being
installed only if it's compatible, i.e., if the source bundle's version
is x.y.z.qualifier it will install a binary IU in the range of
[x.y.z,x.y.z+1).
>
> The reason why I need plug-ins to be in both the PDE target (as
> binaries) and in the workspace is that I often run a run-time
> workbench configuration taking only the PDE target and nothing from
> the workspace, for the purpose of comparing the behaviour of the
> nightly build against changes in progress in my workspace.
Yes, that makes sense. Another advantage of materializing both is that
you could materialize a workspace with a huge number of projects and
then selectively close a large number of them so that the build doesn't
take forever.
>
> I do also still have some projects that aren't in any published
> Papyrus build, such as releng material and developer tools. So, those
> would need still to be managed by project-import tasks, and that's fine.
In general, a projects import task is only needed if the project isn't
an installable unit, but given you can use a Component Definition to
turn a project into an IU, it's clear that a projects import task isn't
strictly needed at all. Also, given Component Definitions and Component
Extensions can specify Requirements (exactly like what you're doing with
the targlet task), it's possible to move most of the requirements you're
specifying in the targlet task to Component Definitions or Component
Extensions.
>
> Thanks,
>
> Christian
>
>
> On 2014-09-15 04:41:27 +0000, Ed Merks said:
>
>> Cedric,
>>
>> Comments below.
>>
>> On 13/09/2014 12:30 PM, cedric wrote:
>>> Hello
>>>
>>> I would like to : clone git repo, add git repo to repo list, and
>>> finally
>>> import the projects in my workspace...
>>>
>>> I just created this task (and many others which fails) :
>>>
>>> <setupTask
>>> xsi:type="git:GitCloneTask"
>>> id="git.clone.toodledo-java"
>>> remoteURI="git@xxxxxxxx:cgava/toodledo-java.git">
>>> <description>da toodledo git clone description</description>
>>> </setupTask>
>>>
>>> Is it sufficient ?
>> That only creates the clone on disk...
>>> Shall I add an import project task ?
>> As Alexander mentions, you'll have to do something else to actually
>> get the projects into the workspace. A Projects Import Task is one
>> way to do that, but probably not the best way. Assuming the projects
>> are generally Eclipse plugins and features, better is to use a
>> targlet task.
>>> Which variable contains the location where this repository have been
>>> cloned (my git clone lcation rule is : ${workspace.location/.git/}
>>> ${@id.remoteURI|gitRepository})
>> Variables are inferred based on the id of the task, i.e.,
>> <id>.<feature-name>. (Actually it uses
>> ExtendedMetaData.INSTANCE.getName(eAttribute), not
>> eAttribute.getName() so is influenced by the extended metadata
>> annotations on the features.)
>>
>> So the best way would be to use a targlet task and list your root
>> features/plugins and a source locator using the variable for the
>> location of your clone. This will ensure that all the dependencies
>> of your source projects are satisfied by the target platform, and
>> will let you avoid having to create a long list of all your
>> projects... (And note that you can create an Oomph -> Component
>> Definition for projects that aren't a feature or a plugin to make it
>> look like an installable unit.)
>>
>> Looking at example setups in the index will help. I've been trying
>> to author the targlet tasks so they consistently use
>> eclipse.target.platform to select alternative versions of the target
>> platform, i.e., Mars, Luna, Juno, and so on...
>>>
>>> Thank you for your help.
>>>
>>> Cedric
>
>
Re: Git clone task how-to [message #1424783 is a reply to message #1424547] Tue, 16 September 2014 12:12
Christian W. Damus is currently offline Christian W. Damus
Messages: 787
Registered: July 2009
Senior Member
Whoo, lots of goodies here to investigate. It would appear that you
have my use cases thoroughly covered. Can this tool with the funny
name get any better?

Thanks so much, Ed!

cW


On 2014-09-16 05:16:13 +0000, Ed Merks said:

> Christian,
>
> Comments below.
>
> On 15/09/2014 3:04 PM, Christian W. Damus wrote:
>> Hi, Cedric, Ed,
>>
>> If you look at the Papyrus model as an example, be aware that the
>> pattern for specifying targlets and project imports there is quite
>> different. Papyrus comprises such a large number of plug-in projects
>> that most of the development team doesn't maintain workspaces
>> containing all of those projects. Instead, the modus operandi is to
>> create a PDE target including an entire nightly build of Papyrus and
>> import only a subset of the plug-ins into the workspace that one
>> intends to edit.
> An interesting improvement of the targlet design
> (https://bugs.eclipse.org/bugs/show_bug.cgi?id=439951) is that it can
> and will materialize both a source project and the corresponding binary
> IU. So depending on what you make visible via the source locator and
> via the repository URLs, you'll can get both or either.
>>
>> For most cases, this is probably not the model that you most want to
>> follow. That would be the EMF model. :-)
>>
>> However, Ed, you've got me thinking again about how I might revise this
>> scheme to simplify the Papyrus setup model and improve its
>> maintainability. Currently, I've got various sub(-sub)-projects
>> defined that partition our plug-ins into various
>> functional/architectural units. The Papyrus build does actually group
>> plug-ins by feature in no small degree, so I'm wondering whether I
>> can't leverage that to eliminate some of the redundancy of
>> project-import tasks.
> I imagine many of these subsets are driven by the transitive
> dependencies of a feature... So if you include the p2 URL for a full
> SDK build in the targlet task, and specify a source locator that helps
> restrict which projects are visible as IUs, you should be able to
> achieve a similar goal.
>
> Note also that (much like Buckminster), you can specify Component
> Definition to turn a normal project into an IU with an ID, a version
> range, and development-time requirements or a Component Extension to
> specify additional development-time requirements.
>>
>> My idea is like this: I would have a common targlet task that installs
>> the latest nightly build via the Papyrus SDK feature. That is still
>> needed in any case. But, the sub-projects would each define a targlet
>> with some specific Papyrus feature required (e.g., "SysML Diagrams")
>> and a source locator that would fetch those projects from the local git
>> clone.
> Exactly.
>>
>> My questions are:
>>
>> * would the source locator cause the relevant plug-in projects to be
>> imported from git even if they are
>> also installed in the target platform via the common Papyrus SDK
>> nightly build targlet?
> In the end you only have a single target definition so depending on
> what's visible via source locators and what's visible via p2 URLs
> you'll get either only a source project, only a binary IU, or both.
>> * and if so, would Oomph perhaps not install in the PDE target any
>> plug-ins (from the common SDK
>> build targlet) that were imported into the workspace by some other
>> targlet's source locator?
> It will generally try to install both a source version and a binary IU,
> with preference for the former if available and with the later being
> installed only if it's compatible, i.e., if the source bundle's version
> is x.y.z.qualifier it will install a binary IU in the range of
> [x.y.z,x.y.z+1).
>>
>> The reason why I need plug-ins to be in both the PDE target (as
>> binaries) and in the workspace is that I often run a run-time workbench
>> configuration taking only the PDE target and nothing from the
>> workspace, for the purpose of comparing the behaviour of the nightly
>> build against changes in progress in my workspace.
> Yes, that makes sense. Another advantage of materializing both is that
> you could materialize a workspace with a huge number of projects and
> then selectively close a large number of them so that the build doesn't
> take forever.
>>
>> I do also still have some projects that aren't in any published Papyrus
>> build, such as releng material and developer tools. So, those would
>> need still to be managed by project-import tasks, and that's fine.
> In general, a projects import task is only needed if the project isn't
> an installable unit, but given you can use a Component Definition to
> turn a project into an IU, it's clear that a projects import task isn't
> strictly needed at all. Also, given Component Definitions and
> Component Extensions can specify Requirements (exactly like what you're
> doing with the targlet task), it's possible to move most of the
> requirements you're specifying in the targlet task to Component
> Definitions or Component Extensions.
>>
>> Thanks,
>>
>> Christian
>>
>>
>> On 2014-09-15 04:41:27 +0000, Ed Merks said:
>>
>>> Cedric,
>>>
>>> Comments below.
>>>
>>> On 13/09/2014 12:30 PM, cedric wrote:
>>>> Hello
>>>>
>>>> I would like to : clone git repo, add git repo to repo list, and finally
>>>> import the projects in my workspace...
>>>>
>>>> I just created this task (and many others which fails) :
>>>>
>>>> <setupTask
>>>> xsi:type="git:GitCloneTask"
>>>> id="git.clone.toodledo-java"
>>>> remoteURI="git@xxxxxxxx:cgava/toodledo-java.git">
>>>> <description>da toodledo git clone description</description>
>>>> </setupTask>
>>>>
>>>> Is it sufficient ?
>>> That only creates the clone on disk...
>>>> Shall I add an import project task ?
>>> As Alexander mentions, you'll have to do something else to actually get
>>> the projects into the workspace. A Projects Import Task is one way to
>>> do that, but probably not the best way. Assuming the projects are
>>> generally Eclipse plugins and features, better is to use a targlet task.
>>>> Which variable contains the location where this repository have been
>>>> cloned (my git clone lcation rule is : ${workspace.location/.git/}
>>>> ${@id.remoteURI|gitRepository})
>>> Variables are inferred based on the id of the task, i.e.,
>>> <id>.<feature-name>. (Actually it uses
>>> ExtendedMetaData.INSTANCE.getName(eAttribute), not eAttribute.getName()
>>> so is influenced by the extended metadata annotations on the features.)
>>>
>>> So the best way would be to use a targlet task and list your root
>>> features/plugins and a source locator using the variable for the
>>> location of your clone. This will ensure that all the dependencies of
>>> your source projects are satisfied by the target platform, and will let
>>> you avoid having to create a long list of all your projects... (And
>>> note that you can create an Oomph -> Component Definition for projects
>>> that aren't a feature or a plugin to make it look like an installable
>>> unit.)
>>>
>>> Looking at example setups in the index will help. I've been trying to
>>> author the targlet tasks so they consistently use
>>> eclipse.target.platform to select alternative versions of the target
>>> platform, i.e., Mars, Luna, Juno, and so on...
>>>>
>>>> Thank you for your help.
>>>>
>>>> Cedric
Re: Git clone task how-to [message #1424795 is a reply to message #1424783] Tue, 16 September 2014 12:31
Ed Merks is currently offline Ed Merks
Messages: 26046
Registered: July 2009
Senior Member
Christian,

Yes, this was a feature Dennis Huebner requested. It seemed very
sensible to be able to switch between the workspace version and one from
a binary build easily... The Xtext project is quite huge too, and I
hate waiting for it to build every time I change something in emf.ecore,
but I often want to contribute bug fixes too. This way I can do both
easily...


On 16/09/2014 2:12 PM, Christian W. Damus wrote:
> Whoo, lots of goodies here to investigate. It would appear that you
> have my use cases thoroughly covered. Can this tool with the funny
> name get any better?
>
> Thanks so much, Ed!
>
> cW
>
>
> On 2014-09-16 05:16:13 +0000, Ed Merks said:
>
>> Christian,
>>
>> Comments below.
>>
>> On 15/09/2014 3:04 PM, Christian W. Damus wrote:
>>> Hi, Cedric, Ed,
>>>
>>> If you look at the Papyrus model as an example, be aware that the
>>> pattern for specifying targlets and project imports there is quite
>>> different. Papyrus comprises such a large number of plug-in
>>> projects that most of the development team doesn't maintain
>>> workspaces containing all of those projects. Instead, the modus
>>> operandi is to create a PDE target including an entire nightly build
>>> of Papyrus and import only a subset of the plug-ins into the
>>> workspace that one intends to edit.
>> An interesting improvement of the targlet design
>> (https://bugs.eclipse.org/bugs/show_bug.cgi?id=439951) is that it can
>> and will materialize both a source project and the corresponding
>> binary IU. So depending on what you make visible via the source
>> locator and via the repository URLs, you'll can get both or either.
>>>
>>> For most cases, this is probably not the model that you most want to
>>> follow. That would be the EMF model. :-)
>>>
>>> However, Ed, you've got me thinking again about how I might revise
>>> this scheme to simplify the Papyrus setup model and improve its
>>> maintainability. Currently, I've got various sub(-sub)-projects
>>> defined that partition our plug-ins into various
>>> functional/architectural units. The Papyrus build does actually
>>> group plug-ins by feature in no small degree, so I'm wondering
>>> whether I can't leverage that to eliminate some of the redundancy of
>>> project-import tasks.
>> I imagine many of these subsets are driven by the transitive
>> dependencies of a feature... So if you include the p2 URL for a full
>> SDK build in the targlet task, and specify a source locator that
>> helps restrict which projects are visible as IUs, you should be able
>> to achieve a similar goal.
>>
>> Note also that (much like Buckminster), you can specify Component
>> Definition to turn a normal project into an IU with an ID, a version
>> range, and development-time requirements or a Component Extension to
>> specify additional development-time requirements.
>>>
>>> My idea is like this: I would have a common targlet task that
>>> installs the latest nightly build via the Papyrus SDK feature. That
>>> is still needed in any case. But, the sub-projects would each
>>> define a targlet with some specific Papyrus feature required (e.g.,
>>> "SysML Diagrams") and a source locator that would fetch those
>>> projects from the local git clone.
>> Exactly.
>>>
>>> My questions are:
>>>
>>> * would the source locator cause the relevant plug-in projects to be
>>> imported from git even if they are
>>> also installed in the target platform via the common Papyrus SDK
>>> nightly build targlet?
>> In the end you only have a single target definition so depending on
>> what's visible via source locators and what's visible via p2 URLs
>> you'll get either only a source project, only a binary IU, or both.
>>> * and if so, would Oomph perhaps not install in the PDE target any
>>> plug-ins (from the common SDK
>>> build targlet) that were imported into the workspace by some other
>>> targlet's source locator?
>> It will generally try to install both a source version and a binary
>> IU, with preference for the former if available and with the later
>> being installed only if it's compatible, i.e., if the source bundle's
>> version is x.y.z.qualifier it will install a binary IU in the range
>> of [x.y.z,x.y.z+1).
>>>
>>> The reason why I need plug-ins to be in both the PDE target (as
>>> binaries) and in the workspace is that I often run a run-time
>>> workbench configuration taking only the PDE target and nothing from
>>> the workspace, for the purpose of comparing the behaviour of the
>>> nightly build against changes in progress in my workspace.
>> Yes, that makes sense. Another advantage of materializing both is
>> that you could materialize a workspace with a huge number of projects
>> and then selectively close a large number of them so that the build
>> doesn't take forever.
>>>
>>> I do also still have some projects that aren't in any published
>>> Papyrus build, such as releng material and developer tools. So,
>>> those would need still to be managed by project-import tasks, and
>>> that's fine.
>> In general, a projects import task is only needed if the project
>> isn't an installable unit, but given you can use a Component
>> Definition to turn a project into an IU, it's clear that a projects
>> import task isn't strictly needed at all. Also, given Component
>> Definitions and Component Extensions can specify Requirements
>> (exactly like what you're doing with the targlet task), it's possible
>> to move most of the requirements you're specifying in the targlet
>> task to Component Definitions or Component Extensions.
>>>
>>> Thanks,
>>>
>>> Christian
>>>
>>>
>>> On 2014-09-15 04:41:27 +0000, Ed Merks said:
>>>
>>>> Cedric,
>>>>
>>>> Comments below.
>>>>
>>>> On 13/09/2014 12:30 PM, cedric wrote:
>>>>> Hello
>>>>>
>>>>> I would like to : clone git repo, add git repo to repo list, and
>>>>> finally
>>>>> import the projects in my workspace...
>>>>>
>>>>> I just created this task (and many others which fails) :
>>>>>
>>>>> <setupTask
>>>>> xsi:type="git:GitCloneTask"
>>>>> id="git.clone.toodledo-java"
>>>>> remoteURI="git@xxxxxxxx:cgava/toodledo-java.git">
>>>>> <description>da toodledo git clone description</description>
>>>>> </setupTask>
>>>>>
>>>>> Is it sufficient ?
>>>> That only creates the clone on disk...
>>>>> Shall I add an import project task ?
>>>> As Alexander mentions, you'll have to do something else to actually
>>>> get the projects into the workspace. A Projects Import Task is
>>>> one way to do that, but probably not the best way. Assuming the
>>>> projects are generally Eclipse plugins and features, better is to
>>>> use a targlet task.
>>>>> Which variable contains the location where this repository have been
>>>>> cloned (my git clone lcation rule is : ${workspace.location/.git/}
>>>>> ${@id.remoteURI|gitRepository})
>>>> Variables are inferred based on the id of the task, i.e.,
>>>> <id>.<feature-name>. (Actually it uses
>>>> ExtendedMetaData.INSTANCE.getName(eAttribute), not
>>>> eAttribute.getName() so is influenced by the extended metadata
>>>> annotations on the features.)
>>>>
>>>> So the best way would be to use a targlet task and list your root
>>>> features/plugins and a source locator using the variable for the
>>>> location of your clone. This will ensure that all the
>>>> dependencies of your source projects are satisfied by the target
>>>> platform, and will let you avoid having to create a long list of
>>>> all your projects... (And note that you can create an Oomph ->
>>>> Component Definition for projects that aren't a feature or a plugin
>>>> to make it look like an installable unit.)
>>>>
>>>> Looking at example setups in the index will help. I've been trying
>>>> to author the targlet tasks so they consistently use
>>>> eclipse.target.platform to select alternative versions of the
>>>> target platform, i.e., Mars, Luna, Juno, and so on...
>>>>>
>>>>> Thank you for your help.
>>>>>
>>>>> Cedric
>
>
 Topic: No report for failing ProjectsImportTask.
No report for failing ProjectsImportTask. [message #1424615] Tue, 16 September 2014 07:21
Arthur Daussy is currently offline Arthur Daussy
Messages: 5
Registered: September 2014
Junior Member
Hi Oomph team,

I was wondering if there is a way to catch an event or a log entry for a failing ProjectImportTask. Let's say that the setup file contains a ProjectImportTask with a SourceLocator pointing to an incorrect location. When performing the task, Oomph runs it but does not fail and does not log any error. There is a MultiStatus object, but it seems to be neither filled nor displayed by the context. My main goal here is to find a way to detect that the setup action has failed. My context is the same as described in the post https://www.eclipse.org/forums/index.php/mv/msg/782061/1413835/#msg_1413835 .

Also, if by any chance you could answer the second question in the post https://www.eclipse.org/forums/index.php/t/805030/ , that would be really nice.

Anyway thanks for your work and help,

Regards,

Arthur
Re: No report for failing ProjectsImportTask. [message #1424651 is a reply to message #1424615] Tue, 16 September 2014 08:24
Ed Merks is currently offline Ed Merks
Messages: 26046
Registered: July 2009
Senior Member
Arthur,

Comments below.

On 16/09/2014 9:21 AM, Arthur Daussy wrote:
> Hi Oomph team,
>
> I was wondering if there is a way to catch an event or a log for a
> failing ProjectImportTask. Let's say that the setup file contains a
> ProjectImportTask with a SourceLocator pointing to an incorrect
> location. When perfoming the task Oomph runs it but do not fail and
> does not log any error. There is a multistatus object but it seems not
> to be filled neither displayed by the context.
Eike is on vacation, but it looks to me like the intent of that design
must be to do something with that status object...

When you say it seems not to be filled, you looked with the debugger and
saw that no children were added to the status? Certainly nothing is
done with the status and I would expect some logic to check that the
status is okay or not and to throw a CoreException if it's not okay.
> My main goal here would be to find a way to detect that the setup
> action has failed. My context is the same described in the post
> https://www.eclipse.org/forums/index.php/mv/msg/782061/1413835/#msg_1413835).
Please open a bugzilla and I'll try to reproduce it.
>
> Also if by any chance you could answer to the second question of the
> post https://www.eclipse.org/forums/index.php/t/805030/ that could be
> really nice.
That thread looks complete. What question is left outstanding?
>
> Anyway thanks for your work and help,
>
> Regards,
>
> Arthur
>
Re: No report for failing ProjectsImportTask. [message #1424806 is a reply to message #1424651] Tue, 16 September 2014 12:52
Arthur Daussy is currently offline Arthur Daussy
Messages: 5
Registered: September 2014
Junior Member
Hi Ed,
First thanks for your answer.




Quote:
> I was wondering if there is a way to catch an event or a log for a
> failing ProjectImportTask. Let's say that the setup file contains a
> ProjectImportTask with a SourceLocator pointing to an incorrect
> location. When perfoming the task Oomph runs it but do not fail and
> does not log any error. There is a multistatus object but it seems not
> to be filled neither displayed by the context.
Eike is on vacation, but it looks to me like the intent of that design
must be to do something with that status object...


Yes, I agree that the status object should be used somewhere to check if something went wrong, but I could not find where either.


Quote:
When you say it seems not to be filled, you looked with the debugger and
saw that no children were added to the status?


Yes I have debugged the application and I think the problem is in:
org.eclipse.oomph.resources.impl.SourceLocatorImpl.getRootContainer(SourceLocator)
 public static BackendContainer getRootContainer(SourceLocator sourceLocator)
  {
    String rootFolder = sourceLocator.getRootFolder();

    BackendResource rootResource = BackendResource.get(rootFolder);
    if (rootResource instanceof BackendContainer)
    {
      return (BackendContainer)rootResource;
    }

    return null;
  }


In my case BackendResource.get(rootFolder) does not return a BackendContainer so this method returns null. This implies that nothing is done in:
org.eclipse.oomph.resources.impl.SourceLocatorImpl.handleProjects(SourceLocator, EList<ProjectFactory>, ProjectHandler, MultiStatus, IProgressMonitor).

 public static void handleProjects(final SourceLocator sourceLocator, final EList<ProjectFactory> defaultProjectFactories,
      final ProjectHandler projectHandler, final MultiStatus status, final IProgressMonitor monitor)
  {
    final BackendContainer rootContainer = SourceLocatorImpl.getRootContainer(sourceLocator);
    if (rootContainer != null)
    {
      rootContainer.accept(new BackendResource.Visitor.Default()
      {
        private final Set<String> excludedPaths = new HashSet<String>(sourceLocator.getExcludedPaths());

        @Override
        public boolean visitContainer(BackendContainer container, IProgressMonitor monitor) throws BackendException
        {
          ResourcesPlugin.checkCancelation(monitor);

          String path = container.getSystemRelativePath();
          if (excludedPaths.contains(path))
          {
            return false;
          }

          IProject project = loadProject(sourceLocator, defaultProjectFactories, rootContainer, container, monitor);
          if (ResourcesUtil.matchesPredicates(project, sourceLocator.getPredicates()))
          {
            try
            {
              projectHandler.handleProject(project, container);
            }
            catch (Exception ex)
            {
              SourceLocatorImpl.addStatus(status, ResourcesPlugin.INSTANCE, project.getName(), ex);
            }

            if (!sourceLocator.isLocateNestedProjects())
            {
              return false;
            }
          }

          return true;
        }
      }, monitor);
    }
  }


Maybe the case where rootContainer == null should be handled and logged into the status.
Here is the related bug: https://bugs.eclipse.org/bugs/show_bug.cgi?id=444250.
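
For illustration, a rough sketch of what I mean (not an actual patch; the exception message is made up, and addStatus is the same helper already used in handleProjects above):

// Sketch: report an unresolvable root folder into the MultiStatus instead of
// silently skipping the whole source locator (placed at the beginning of handleProjects).
final BackendContainer rootContainer = SourceLocatorImpl.getRootContainer(sourceLocator);
if (rootContainer == null)
{
  SourceLocatorImpl.addStatus(status, ResourcesPlugin.INSTANCE, sourceLocator.getRootFolder(),
      new IllegalStateException("Root folder cannot be resolved: " + sourceLocator.getRootFolder()));
  return;
}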

Quote:
That thread looks complete. What question is left outstanding?


I would like to have your thoughts on this part (this is my mistake; I should have opened a second discussion):
Quote:

> I have another question which is not directly related to this problem but linked to the ProjectImportTask implementation. I was wondering why you use a property file to keep track of the imported projects (.plugins/org.eclipse.oomph.setup.projects/import-history.properties)? In my use case, each time I open Eclipse I have to clear all the projects from the workspace. Then I only import the projects defined in the Oomph model. However, with the current implementation the "isNeeded" method returns false because the property file seems corrupted (it has a key for my project but no value, so it returns "" instead of null). Each time I clear my workspace I need to clear this property file too. It feels a bit like a hack, so I was wondering if there is a ClearWorkspace task that I have missed.
I think Ed wants to comment on this issue because he's implemented the import history...


Thanks again for your help,

Regards,

Arthur
Forum: Sirius
 Topic: Sample Model in Tutorial
Re: Sample Model in Tutorial [message #1424647 is a reply to message #1420568] Tue, 16 September 2014 08:17
Maxime Porhel is currently offline Maxime Porhel
Messages: 84
Registered: July 2009
Member
Hi

On 10/09/2014 13:10, MG Gharib wrote:
> Hi guys, I have some problems related to the same example. When I reach
> the point of creating edges in the palette, there is a problem related to the
> variables I use. For example, when I use [not father.oclIsUndefined()/]

I think you should simply write [not(father.oclIsUndefined())/]

> I get "unrecognized variable " for "father". Similarly, I get the same
> message concerning the variables "members" and "parents".
>
> Any suggestions?


--
Maxime - Obeo

Need professional services for Sirius?
http://www.obeodesigner.com/sirius
 Topic: get the name of the file storing a model
get the name of the file storing a model [message #1424807] Tue, 16 September 2014 12:52
anne-Lise Courbis is currently offline anne-Lise Courbis
Messages: 5
Registered: July 2014
Junior Member
Hi !
I would like to get the name of the file given by the user when they create a model, and use it to automatically name the first created instance. Where and how can I do this?

Thanks for your support.

Anne-Lise
Forum: SWTBot
 Topic: Unable to click a label in DatePicker
Unable to click a label in DatePicker [message #1424634] Tue, 16 September 2014 07:53
Pawan Relan is currently offline Pawan Relan
Messages: 8
Registered: July 2013
Junior Member
Hi All,
I am writing code to select a date from a DatePicker.
All the visible dates are Labels. I want to double-click a date so that it gets selected. Is there any way to click labels, or is type casting possible?
Thanks in Advance.

Regards,
Pawan Relan






  • Attachment: Date.docx
    (Size: 24.54KB, Downloaded 3 times)

[Updated on: Tue, 16 September 2014 07:54]


 Topic: SWTBotRadio method click() doesn't work properly
SWTBotRadio method click() doesn't work properly [message #1424811] Tue, 16 September 2014 12:56
pascale prost is currently offline pascale prost
Messages: 2
Registered: November 2013
Junior Member

I am using Eclipse CDT Kepler SR2 (4.3.2) and SWTBot 2.2.2 on Windows 7.

My tests contain a SWTBotRadio.click() call which does not work. The problem seems to be known (see https://bugs.eclipse.org/bugs/show_bug.cgi?id=344484), and I tried to implement the suggested patch with the deselectDefaultSelection method. The "deselection" works fine (I can see it happening on the screen), but when calling SWTBotRadio.click(), the new radio button is selected and, right after, the previously selected radio button is selected again.

I have also tried to implement the suggested patch in the SWTBotRadio.click() method, but the result is the same: the previously selected radio button remains selected (the default selection, in fact) and the new selection is discarded.
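
For reference, this is roughly the kind of workaround I tried (a sketch only; the widget texts and variable names are made up):

// Clear the default selection on the UI thread before clicking the new radio button.
final SWTBotRadio defaultRadio = bot.radio("Option A"); // the default/previously selected radio
SWTBotRadio otherRadio = bot.radio("Option B");         // the radio I want to select

UIThreadRunnable.syncExec(new VoidResult() {
  public void run() {
    defaultRadio.widget.setSelection(false);
  }
});
otherRadio.click();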

If I run the application "manually", everything works fine.

Any advice is welcome, I cannot go further with my tests and it is very frustrating...
Forum: Subversive
 Topic: Multirepostory Checkout not working in eclipse
Multirepostory Checkout not working in eclipse [message #1424812] Tue, 16 September 2014 12:57
nishant chawla is currently offline nishant chawla
Messages: 1
Registered: September 2014
Junior Member
Hello
I am new to Eclipse. I checked out some folders from different repository locations into one root folder. For a day it worked great, no problems occurred, everything was good. The next day, when I started Eclipse again, it showed two of the folders in the Obstructed state. It did not accept any changes from the repository, nor did it allow me to create a folder with the same name again. I had to set up the whole project again. The same thing happened the very next day, so I had to switch back to the old IDE I was using for development.


Maybe it is a problem with my repository structure, but it works fine with my previous editor, IntelliJ IDEA, so I don't think that is the issue. If it is, could you please suggest the best project structure for Eclipse?


The second major problem I am facing with Eclipse is that libraries automatically get detached from the project when Eclipse crashes.

Third, and most common, Eclipse crashes regularly. I am running CentOS 6.4 on an Intel Core i5 processor with 8 GB of RAM.
I have changed the -Xms parameter to 2g in the eclipse.ini file. If there is some problem in the configuration, please suggest how to improve it.

Please provide solutions as soon as possible.
Forum: EMF
 Topic: [CDO] MySQL database is broken after restart of CDO server
Re: [CDO] MySQL database is broken after restart of CDO server [message #1424717 is a reply to message #1415254] Tue, 16 September 2014 10:14
Stefan Weise is currently offline Stefan Weise
Messages: 8
Registered: July 2009
Junior Member
Update:

The problem occurs when the MySQL database is running on Linux, because there the file system is case-sensitive. Each table within a database corresponds to at least one file within the database directory. See http://dev.mysql.com/doc/refman/5.7/en/identifier-case-sensitivity.html for more information.

The CDO server seems to use different spellings when creating and when accessing the tables after an unclean restart of the database server. These are the table names in the database after setting up the database:
+----------------------------------------+
| Tables_in_cdo_repo                     |
+----------------------------------------+
| CDO_BRANCHES                           |
| CDO_COMMIT_INFOS                       |
| CDO_EXTERNAL_REFS                      |
| CDO_LOBS                               |
| CDO_LOCKS                              |
| CDO_LOCK_AREAS                         |
| CDO_OBJECTS                            |
| CDO_PACKAGE_INFOS                      |
| CDO_PACKAGE_UNITS                      |
| CDO_PROPERTIES                         |
| ERESOURCE_CDOBINARYRESOURCE            |
| ERESOURCE_CDORESOURCE                  |
| ERESOURCE_CDORESOURCEFOLDER            |
| ERESOURCE_CDORESOURCEFOLDER_NODES_LIST |
| ERESOURCE_CDORESOURCE_CONTENTS_LIST    |
| ERESOURCE_CDOTEXTRESOURCE              |
+----------------------------------------+

But after an unclean restart of the CDO server I get the following exception:
[INFO] CDO server starting
[WARN] Detected crash of repository yt_repo
[ERROR] com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'cdo_repo.cdo_objects' doesn't exist
org.eclipse.net4j.db.DBException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'cdo_repo.cdo_objects' doesn't exist
	at org.eclipse.emf.cdo.server.internal.db.DBStore.visitAllTables(DBStore.java:288)

Now the server wants to access the table 'cdo_repo.cdo_objects' which does not exist in the database. But 'cdo_repo.CDO_OBJECTS' does exist.

mysql> select * from cdo_objects;
ERROR 1146 (42S02): Table 'cdo_repo.cdo_objects' doesn't exist

mysql> select * from CDO_OBJECTS;
+--------+-----------+---------------+
| CDO_ID | CDO_CLASS | CDO_CREATED   |
+--------+-----------+---------------+
|      1 |        -5 | 1410862129468 |
+--------+-----------+---------------+
1 row in set (0.00 sec)


Do you have an idea where this different spelling comes from and how we can avoid it?

Regards,

Stefan
Re: [CDO] MySQL database is broken after restart of CDO server [message #1424789 is a reply to message #1424717] Tue, 16 September 2014 12:23
Christophe Bouhier is currently offline Christophe Bouhier
Messages: 918
Registered: July 2009
Senior Member
On 16-09-14 12:14, Stefan Weise wrote:
> Update:
>
> The problem occurs when the MYSQL database is running on Linux because
> here the file system is case sensitive. Each table within a database
> corresponds to at least one file within the database directory. See
> http://dev.mysql.com/doc/refman/5.7/en/identifier-case-sensitivity.html
> for more information about that.
>
> The CDO server seems to use different spellings for the creation and the
> access of the tables even after a unclean restart of the database
> server. These are the table names in the database after setting up the
> database:
>
> +----------------------------------------+
> | Tables_in_cdo_repo |
> +----------------------------------------+
> | CDO_BRANCHES |
> | CDO_COMMIT_INFOS |
> | CDO_EXTERNAL_REFS |
> | CDO_LOBS |
> | CDO_LOCKS |
> | CDO_LOCK_AREAS |
> | CDO_OBJECTS |
> | CDO_PACKAGE_INFOS |
> | CDO_PACKAGE_UNITS |
> | CDO_PROPERTIES |
> | ERESOURCE_CDOBINARYRESOURCE |
> | ERESOURCE_CDORESOURCE |
> | ERESOURCE_CDORESOURCEFOLDER |
> | ERESOURCE_CDORESOURCEFOLDER_NODES_LIST |
> | ERESOURCE_CDORESOURCE_CONTENTS_LIST |
> | ERESOURCE_CDOTEXTRESOURCE |
> +----------------------------------------+
>
> But after an unclean restart of the CDO server I get the following
> exception:
>
> [INFO] CDO server starting
> [WARN] Detected crash of repository yt_repo
> [ERROR] com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table
> 'cdo_repo.cdo_objects' doesn't exist
> org.eclipse.net4j.db.DBException:
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table
> 'cdo_repo.cdo_objects' doesn't exist
> at
> org.eclipse.emf.cdo.server.internal.db.DBStore.visitAllTables(DBStore.java:288)
>
>
> Now the server wants to access the table 'cdo_repo.cdo_objects' which
> does not exist in the database. But 'cdo_repo.CDO_OBJECTS' does exist.
>
>
> mysql> select * from cdo_objects;
> ERROR 1146 (42S02): Table 'cdo_repo.cdo_objects' doesn't exist
>
> mysql> select * from CDO_OBJECTS;
> +--------+-----------+---------------+
> | CDO_ID | CDO_CLASS | CDO_CREATED |
> +--------+-----------+---------------+
> | 1 | -5 | 1410862129468 |
> +--------+-----------+---------------+
> 1 row in set (0.00 sec)
>
>
> Do you have an idea where this different spelling comes from and how we
> can avoid it?
>
Hi, this sounds very much like a problem we have as well.
We also run Linux and MySQL. We have done extensive troubleshooting and
found the issue (but we still need a patch).

The bug is in the class DBNamedElement, in this method:

public static String name(String name)
{
  return name.toUpperCase().intern();
}

The CDO Bug report is this one:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=412520

The bug was introduced somewhere in March 2013 (judging from the Git logs), while this bug was very active:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=401763


> Regards,
>
> Stefan
Re: [CDO] MySQL database is broken after restart of CDO server [message #1424816 is a reply to message #1424789] Tue, 16 September 2014 13:03
Stefan Weise is currently offline Stefan Weise
Messages: 8
Registered: July 2009
Junior Member
Hi.

Thank you for sharing this information. I've already voted for this bug.

The problem only seems to occur in maintenance mode when the database is getting upgraded or repaired. During normal operation this error does not appear.

~Stefan

Christophe Bouhier wrote on Tue, 16 September 2014 12:23

Hi, This sounds very much like a problem we have as well.
We also have Linux and MySQL. We have done extensive troubleshooting and
found the issue. (But still need a patch).

The bug is in the class DBNamedElement, this method.

public static String name(String name)
{
return name.toUpperCase().intern();
}

The CDO Bug report is this one:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=412520

The bug was introduced somewhere in March 2013. (Judging from the GIT
logs). where this bug is very active:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=401763

[Updated on: Tue, 16 September 2014 13:03]


Forum: scout
 Topic: Amount and currency field
Re: Amount and currency field [message #1424609 is a reply to message #1424132] Tue, 16 September 2014 07:13
Jeremie Bresson is currently offline Jeremie Bresson
Messages: 694
Registered: October 2011
Senior Member
Got an answer from Daniel Wiehl:
In the Currency Field the Property GridWeightX needs to be set to 0.

The code looks like this:
@Order(20.0)
public class CurrencyField extends AbstractSmartField<String> {

  @Override
  protected Class<? extends ICodeType<?, String>> getConfiguredCodeType() {
    return CurrencyCodeType.class;
  }

  @Override
  protected double getConfiguredGridWeightX() {
    return 0;
  }

  @Override
  protected String getConfiguredLabel() {
    return TEXTS.get("Currency");
  }

  @Override
  protected boolean getConfiguredLabelVisible() {
    return false;
  }

  @Override
  protected int getConfiguredWidthInPixel() {
    return 75;
  }
}


 Topic: Standalone Client Application
Re: Standalone Client Application [message #1424661 is a reply to message #1008959] Tue, 16 September 2014 08:35
Jeremie Bresson is currently offline Jeremie Bresson
Messages: 694
Registered: October 2011
Senior Member
Jeremie Bresson wrote on Thu, 14 February 2013 09:09

Sandro just published a possible implementation for a CodeService that works on client side (no caching, no partition): LocalCodeService


Proposal to have something like this in the framework: Bug 444213. I do not think that LocalCodeService is a good name.
 Topic: Changing focus with the Tab key
Changing focus with the Tab key [message #1424700] Tue, 16 September 2014 09:46
Juan Flores is currently offline Juan Flores
Messages: 2
Registered: September 2014
Junior Member
Hello,

I am wondering how I can determine which field will get the focus whenever I press the Tab key.

As far as I know, there is no standard way to do this.

However, I have found a way which works for me in one of two different situations.

In the attached image there is a table and a box with two fields shown.

Imagine a cell in a row is selected. With the Tab key I can change the focus to the next cell to the right of the currently selected cell. Typically the focus changes from the last cell in a row to the first one, but I want the focus to go to the first field in the box after the last cell in a row.

For this special case I have overridden the method requestFocusInCell(IColumn<?> column, ITableRow row) in the table. That works for me.

But how can I set the focus to the first cell in the row when the field that currently has the focus is the last field in the box?

Can you give me a hint?

I have tried to define a KeyStroke bound to "tab" as an inner class of the last field in the box, but this does not work.

Edit: I have reduced the size of your picture - Jeremie Bresson

[Updated on: Tue, 16 September 2014 10:27] by Moderator


Re: Changing focus with the Tab key [message #1424730 is a reply to message #1424700] Tue, 16 September 2014 10:33
Jeremie Bresson is currently offline Jeremie Bresson
Messages: 694
Registered: October 2011
Senior Member
To request the focus somewhere in the table you can use:
ITable.requestFocusInCell(IColumn<?>, ITableRow)

To test it, you can run the code triggered from a button.
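
For example, something like this (a minimal sketch; the table field getter and the column index depend on your form):

// Test button that moves the focus into the table.
@Order(30.0)
public class FocusTestButton extends AbstractButton {

  @Override
  protected void execClickAction() throws ProcessingException {
    ITable table = getMyTableField().getTable();
    ITableRow row = table.getSelectedRow();
    if (row != null) {
      // request the focus in the first cell of the selected row
      table.requestFocusInCell(table.getColumnSet().getColumn(0), row);
    }
  }
}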

The next question is:
Where should you add the code in order to be able to fulfill your requirement?
The Scout model does not have an event like execFocusLost(). I really do not know how you could do it.

Maybe someone else will provide a hint to help you with your use case.
Re: Changing focus with the Tab key [message #1424774 is a reply to message #1424730] Tue, 16 September 2014 11:52
Juan Flores is currently offline Juan Flores
Messages: 2
Registered: September 2014
Junior Member
Thanks for the answer,

Well, the first focus change, from the cell in column 4 to the field "Name" in the box, is done. It works! I have implemented the above-mentioned method "requestFocusInCell".

But I don't have a solution for the switch from the field "Age" to the first cell (column 1). I don't know how to do this either!

I know that RWT does not support manipulation of traversal events; I am not sure why.
I thought there might be a workaround!

Unfortunately this feature is a really important MUST-HAVE!
 Topic: Clearing permission cache
Clearing permission cache [message #1424817] Tue, 16 September 2014 13:03
Thomas Schär is currently offline Thomas Schär
Messages: 1
Registered: February 2014
Junior Member
Hi!

When using permissions I want to make sure that whenever the client is restarted, the permissions are reloaded. I chose to clear the permission cache on server side when loading a new ServerSession. So the implementation in my ServerSession looks like:
...
  @Override
  protected void execLoadSession() throws ProcessingException {
    IAccessControlService service = SERVICES.getService(IAccessControlService.class);
    service.clearCacheOfUserIds(userid);
    super.execLoadSession();
    ...
  }

Is this the correct and safe place to clear the cache? If I want to clear the cache at client runtime, I clear it on the client side AND on the server side.

When solving this problem we discussed the following questions:

  • Why are the permissions stored in a global cache which exists as long as the server is running? Why are they not bound directly to the session?
  • Is the implementation of the cache (with a simple HashMap) robust enough for concurrent queries from a lot of clients? Have there been any load tests on this issue yet?


Thanks in advance for answering my questions (especially the rather general ones)!
Thomas
Forum: Eclipse 4
 Topic: E4 RCP and Database
Re: E4 RCP and Database [message #1424821 is a reply to message #1423019] Tue, 16 September 2014 13:07
Christophe Bouhier is currently offline Christophe Bouhier
Messages: 918
Registered: July 2009
Senior Member
On 14-09-14 19:44, Praveen Banthia wrote:
> Hi,
> I am new to this forum as well as rcp development. I wanted to know how
> to connect a e4 rcp application to do CRUD on data entered in view.
> Can someone guide me in right direction
Hey Praveen,
This forum is really about e4 technology, which doesn't include data persistence. If you use JFace in your RCP app, there is JFace Data Binding to bind UI widget events to objects. You can find more here:
http://wiki.eclipse.org/index.php/JFace_Data_Binding
(Or check out the JFace forum.)
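
Just as an illustration, a minimal databinding sketch could look like this (the Person class and the "name" property are made up):

import org.eclipse.core.databinding.DataBindingContext;
import org.eclipse.core.databinding.beans.PojoProperties;
import org.eclipse.jface.databinding.swt.WidgetProperties;
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Text;

public class PersonBindingExample {

  // Hypothetical model object holding what the user enters in the view.
  public static class Person {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
  }

  // Binds the Text widget to person.name. Assumes it runs on the UI thread of
  // a running RCP application, where the default databinding realm is set.
  public static void bindName(Text nameText, Person person) {
    DataBindingContext ctx = new DataBindingContext();
    ctx.bindValue(
        WidgetProperties.text(SWT.Modify).observe(nameText), // widget side
        PojoProperties.value("name").observe(person));        // model side
  }
}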

On the persistence side, you can use any Java ORM solution, for example one from this list:

http://en.wikipedia.org/wiki/List_of_object-relational_mapping_software#Java

Some might even play nicely with JFace Data Binding.
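
Purely as an example with JPA (just one of the options from that list; the Customer entity and the persistence unit name "crud-unit" are made up), the create/read side could look roughly like this:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

// Hypothetical entity backing the data entered in the view.
// (In a real project this class and the DAO below go into separate files.)
@Entity
public class Customer {
  @Id @GeneratedValue
  public Long id;
  public String name;
}

// Hypothetical DAO showing the create and read part of CRUD.
class CustomerDao {

  // "crud-unit" is a made-up persistence unit name from persistence.xml.
  private final EntityManagerFactory emf =
      Persistence.createEntityManagerFactory("crud-unit");

  public void create(Customer customer) {
    EntityManager em = emf.createEntityManager();
    try {
      em.getTransaction().begin();
      em.persist(customer);               // Create
      em.getTransaction().commit();
    }
    finally {
      em.close();
    }
  }

  public Customer read(Long id) {
    EntityManager em = emf.createEntityManager();
    try {
      return em.find(Customer.class, id); // Read
    }
    finally {
      em.close();
    }
  }
}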
HTH
Christophe
Forum: e(fx)clipse
 Topic: Problem loading FXML files
Re: Problem loading FXML files [message #1424684 is a reply to message #1424221] Tue, 16 September 2014 09:20
Thomas Elskens is currently offline Thomas Elskens
Messages: 2
Registered: September 2014
Location: Brussels - Belgium
Junior Member
Now it works just fine. Thank you very much for your quick answer!
 Topic: Part Toolbar and Part Menu
Re: Part Toolbar and Part Menu [message #1424760 is a reply to message #1423978] Tue, 16 September 2014 11:22
Christoph Keimel is currently offline Christoph Keimel
Messages: 353
Registered: December 2010
Location: Germany
Senior Member
Thanks. The Tag 'Part-Toolbar-FullSpan' works quite well.

In case somebody stumbles over this thread: the CSS selector would look something like this:
.view-toolbar-container .button {
}

If somebody who is more skilled in CSS would be willing to share a nice styling for toolbar buttons, I would be most obliged. :P

[Updated on: Tue, 16 September 2014 11:22]

 Topic: Export e(fx)clipse Tutorial 2 plug-in
Re: Export e(fx)clipse Tutorial 2 plug-in [message #1424792 is a reply to message #1422208] Tue, 16 September 2014 12:29
Johann Richter is currently offline Johann Richter
Messages: 2
Registered: September 2014
Junior Member
Hi Tom,

thanks for your reply. When I listed all plug-ins using the OSGi "ss" command in Eclipse, my plug-in wasn't there. So I tried to install it using the OSGi console, and indeed I got the error: Unresolved requirement: Require-Bundle: org.eclipse.fx.ui.workbench3.

In the tutorial the VM launch command -Dosgi.framework.extensions=org.eclipse.fx.osgi
was probably responsible for loading that bundle. But when I start my Eclipse VM with this parameter, the bundle is not installed.

How else can I install the required bundles in my e(fx)clipse?

Thank you

Johann
Re: Export e(fx)clipse Tutorial 2 plug-in [message #1424798 is a reply to message #1424792] Tue, 16 September 2014 12:37
Thomas Schindl is currently offline Thomas Schindl
Messages: 5318
Registered: July 2009
Senior Member
No, it is not - the workbench3 bundle is part of the target platform, so it
is found when launching an inner Eclipse!

Tom

On 16.09.14 14:29, Johann Richter wrote:
> Hi Tom,
> thanks for your reply. When I listed all plugins using the osgi
> "ss"-command in Eclipse, my plug-in wasn't there. So I tried to install
> it using the osgi console, and indeed I got the error: Unresolved
> requirement: Require-Bundle: org.eclipse.fx.ui.workbench3.
>
> In the tutorial the VM launch command
> -Dosgi.framework.extensions=org.eclipse.fx.osgi
> was probably responsible for loading that bundle. But when I start my
> Eclipse VM with this parameter, the bundle is not installed.
>
> How can install the required bundles in my e(fx)clipse else?
>
> Thank you
>
> Johann
>
 Topic: e(fx)clipse not installing on Kepler for some reason.
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424664 is a reply to message #1424356] Tue, 16 September 2014 08:43
Thomas Schindl is currently offline Thomas Schindl
Messages: 5318
Registered: July 2009
Senior Member
1.0.0 cannot be installed into Kepler - we only support Luna. The reason
is that Equinox changed its API between Kepler and Luna.

The only version that will work on Kepler is 0.9.0.

Tom

On 16.09.14 02:02, Joe Thatcher wrote:
> I have been trying for about 5 hours now to install e(fx)clipse on
> Eclipse Kepler. Everything works fine, until I click "next" and it
> starts to install the software. Then, I get this error message, and it
> says that almost everything cannot be installed:
>
> Cannot complete the install because one or more required items could not
> be found.
> Software being installed: e(fx)clipse - IDE - Basic 1.0.0.201408150702
> (org.eclipse.fx.ide.basic.feature.feature.group 1.0.0.201408150702)
> Missing requirement: OSGi integration for JavaFX 1.0.0.201408150502
> (org.eclipse.fx.osgi 1.0.0.201408150502) requires 'bundle
> org.eclipse.osgi 3.10.0' but it could not be found
> Cannot satisfy dependency:
> From: e(fx)clipse - IDE - Basic 1.0.0.201408150702
> (org.eclipse.fx.ide.basic.feature.feature.group 1.0.0.201408150702)
> To: org.eclipse.fx.osgi [1.0.0.201408150502]
>
>
> To download e(fx)clipse, I used the following link:
> http://download.eclipse.org/efxclipse/updates-released/1.0.0/site
>
> I know I am probably just being an idiot, but I cannot seem to find any
> instructions or anything that actually gets me able to import the javafx
> libraries.
>
> Thanks!
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424685 is a reply to message #1424664] Tue, 16 September 2014 09:21
Joe Thatcher is currently offline Joe Thatcher
Messages: 4
Registered: September 2014
Junior Member
Er, I just want the JavaFX imports to work. Does 0.9.0 do that? And if not, where can I download Luna?

Thanks for the help!
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424689 is a reply to message #1424685] Tue, 16 September 2014 09:28
Thomas Schindl is currently offline Thomas Schindl
Messages: 5318
Registered: July 2009
Senior Member
Yes, 0.9.0 fixes that as well, but I would advise using Luna. For an easy
start, just download it from http://efxclipse.bestsolution.at/install.html

Tom

On 16.09.14 11:21, Joe Thatcher wrote:
> Er, I just want the JavaFX imports to work. Does 0.9.0 do that? And if
> not, where can I download Luna?
>
> Thanks for the help!
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424720 is a reply to message #1424689] Tue, 16 September 2014 10:15
Joe Thatcher is currently offline Joe Thatcher
Messages: 4
Registered: September 2014
Junior Member
I cannot install Luna due to some crap about "Cant find JNI library".

Where can I get 0.9.0 of e(fx)clipse?

Oh also, I tried those all-in-one solutions, and they have the same crap about JNI libraries. :(
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424743 is a reply to message #1424720] Tue, 16 September 2014 10:46
Thomas Schindl is currently offline Thomas Schindl
Messages: 5318
Registered: July 2009
Senior Member
What OS, and what is the error?

Tom

On 16.09.14 12:15, Joe Thatcher wrote:
> I cannot install Luna due to some crap about "Cant find JNI library".
>
> Where can I get 0.9.0 of e(fx)clipse?
>
> Oh also, I tried those all in one solutions, and they have the same crap
> about JNI librarys :(.
Re: e(fx)clipse not installing on Kepler for some reason. [message #1424810 is a reply to message #1424743] Tue, 16 September 2014 12:56
Joe Thatcher is currently offline Joe Thatcher
Messages: 4
Registered: September 2014
Junior Member
Just managed to install it on Kepler, but the JavaFX imports are still not working.
Screenshot:
gyazo.com/482fd7942b48a1884f0948f76b193325

OS is Windows, error is this:
gyazo.com/29c3328ab7fc2003c7cc5e48ce36d97d

[Updated on: Tue, 16 September 2014 12:56]

Re: e(fx)clipse not installing on Kepler for some reason. [message #1424826 is a reply to message #1424810] Tue, 16 September 2014 13:15
Thomas Schindl is currently offline Thomas Schindl
Messages: 5318
Registered: July 2009
Senior Member
Ok - so you are on Java 7. How did you create your project? On Java 7 you
need to use the JavaFX Project Wizard because you need an extra classpath container.

BTW: e(fx)clipse 1.0 requires Luna, so it would not have worked for you anyway.
BTW2: JDK 1.7.0_11 looks really old; you should try upgrading to a newer version if you do JavaFX development.
BTW3: I strongly recommend developing JavaFX apps with JDK 8. JavaFX 8 is much more mature than JavaFX 2.2, and JavaFX 8 still gets bug fixes while JavaFX 2.2 only gets security fixes.

Tom

On 16.09.14 14:56, Joe Thatcher wrote:
> Just managed to install it on Kepler, but the JavaFX imports are still
> not working.
> Screenshot: gyazo