Eclipse Community Forums
Home » Modeling » Sphinx » problem marker updates
problem marker updates [message #793771] Wed, 08 February 2012 14:46
Tibor Kovacs
Messages: 3
Registered: February 2012
Junior Member
There's a findObject function in the ScopingResourceSetImpl which kicks off a job that updates the problem markers. That means every time a proxy gets resolved, a new such job is scheduled. This can cause huge latency when we try to move packages between different resources (files) in the same project.

If we change the async parameter to false, no job is created and the operation behaves much better, but I still don't think this would be an ideal solution:

 // Handle problems that may have been encountered during proxy resolution
 ResourceProblemMarkerService.INSTANCE.updateProblemMarkers(resources, false, null);

Any recommendations or ideas?

Thanks

[Updated on: Wed, 08 February 2012 14:48]


Re: problem marker updates [message #795538 is a reply to message #793771] Fri, 10 February 2012 15:38
Stephan Eberle
Messages: 35
Registered: July 2009
Member

Thank you for sharing your questions and ideas! It is feedback like yours that greatly helps us to improve Sphinx (and Artop) and make it what it is.

The issue you are raising is valid. It is not ideal, and was not intended, to schedule that many jobs for updating the problem markers. However, as you have suspected yourself, it is also not ideal to invoke the underlying API synchronously. The reason is that error marker creation, as done by Eclipse, is a heavyweight operation, and in case many error markers have to be created (e.g., when many references in a resource are proxies and all proxy resolutions are failing), performance would suffer significantly.

The solution that I see is to improve the implementation of the ResourceProblemMarkerService so that only the first call to that API schedules a job for a given resource; all subsequent requests for the same resource are ignored as long as the first job has not yet been executed.

This behavior could be realized quite conveniently by leveraging the Job#shouldSchedule()/shouldRun() API.
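A minimal sketch of that de-duplication idea, in plain Java rather than the real Eclipse Jobs API (the class and method names below are illustrative, not Sphinx's): an atomic "update pending" flag that a Job#shouldSchedule() override could consult, so that only the first request actually schedules a job and duplicates are dropped until it has run:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for the logic ResourceProblemMarkerService could use.
// In Sphinx this flag would be checked from Job#shouldSchedule() and cleared
// when the job finishes; here it is shown as plain methods for clarity.
public class MarkerUpdateScheduler {
    private final AtomicBoolean updatePending = new AtomicBoolean(false);

    /** Returns true only for the first request while no update is pending. */
    public boolean requestUpdate() {
        // compareAndSet fails if an update is already pending, so the
        // duplicate request is silently ignored.
        return updatePending.compareAndSet(false, true);
    }

    /** Called once the (single) scheduled job has finished running. */
    public void updateCompleted() {
        updatePending.set(false);
    }
}
```

With this, resolving a thousand proxies in a row would schedule at most one marker-update job instead of a thousand.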

Would that help?
Re: problem marker updates [message #805140 is a reply to message #795538] Thu, 23 February 2012 11:33
Tibor Kovacs
Messages: 3
Registered: February 2012
Junior Member
Thanks for your reply. That sounds like a much better solution: only one job would be running and the rest would be canceled. Still, several instances of the same job could be executed consecutively, when only the latest one needs to run, once.
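One way to coalesce consecutive jobs so that only the latest request's job actually runs is a generation counter; each request bumps it and records its generation, and a queued job runs only if no newer request has superseded it (roughly what a Job#shouldRun() override could check). This is a hedged sketch with made-up names, not Sphinx code:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative coalescing scheme: of several consecutively queued update
// jobs, only the job belonging to the most recent request runs; the
// superseded ones are skipped.
public class CoalescingUpdateScheduler {
    private final AtomicLong latestRequest = new AtomicLong();

    /** Each update request gets a fresh generation number. */
    public long requestUpdate() {
        return latestRequest.incrementAndGet();
    }

    /** A queued job runs only if it still represents the latest request. */
    public boolean shouldRun(long generation) {
        return generation == latestRequest.get();
    }
}
```

So if three requests arrive back to back, the jobs for the first two see that a newer generation exists and skip themselves, and only the third one does the work.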