Eclipse Community Forums

Jobs API: Overhead scheduling > 100 Jobs? [message #447937] Tue, 18 April 2006 10:39
Benjamin Pasero
Hi,

I want to schedule a large number of Jobs and limit the number of concurrently
running Jobs to 16 using a Scheduling Rule. However, I wonder if this is the
right way to go about it.

I would create all Jobs at once, schedule them and let the Scheduling Rule take
care of the execution.

My alternative would be to only create new Jobs when there are free slots (fewer
than 16 Jobs of this kind running). That would add a bit more work, since I would
need a background Job that takes care of the scheduling.

In short: does the Jobs API scale well, even when 100 or more Jobs are scheduled
at once?

Ben
Re: Jobs API: Overhead scheduling > 100 Jobs? [message #448134 is a reply to message #447937] Tue, 18 April 2006 21:09
John Arthorne
Scheduling a hundred jobs shouldn't be a problem if there are scheduling
rules to prevent them all running at once. The automated tests for the
jobs framework create hundreds of jobs without difficulty. However,
having a hundred jobs *running* concurrently would likely be a problem,
as there is no limit on the number of worker threads that can be created
(the VM typically runs out of memory after allocating a few hundred
threads). Another alternative for dealing with fine-grained bits of
work is to design a job that processes its own queue of work items.
This approach is used quite frequently in the platform itself - for
example there is a single job that handles a queue of viewer decoration
requests.
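
As a rough sketch of that pattern (the class and work item names below are made
up, not actual platform API):

import java.util.LinkedList;

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

/** A single background Job that drains its own queue of work items. */
public class WorkQueueJob extends Job {

    /** Hypothetical unit of work; replace with whatever you need to process. */
    public interface WorkItem {
        void process(IProgressMonitor monitor);
    }

    private final LinkedList queue = new LinkedList();

    public WorkQueueJob() {
        super("Processing work items");
        setSystem(true); // housekeeping job, keep it out of the Progress view
    }

    /** May be called from any thread; wakes the Job up if necessary. */
    public void enqueue(WorkItem item) {
        synchronized (queue) {
            queue.add(item);
        }
        schedule(); // harmless if the Job is already scheduled or running
    }

    protected IStatus run(IProgressMonitor monitor) {
        while (!monitor.isCanceled()) {
            WorkItem next;
            synchronized (queue) {
                if (queue.isEmpty())
                    return Status.OK_STATUS; // run() ends; the next enqueue() reschedules us
                next = (WorkItem) queue.removeFirst();
            }
            next.process(monitor);
        }
        return Status.CANCEL_STATUS;
    }
}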
Re: Jobs API: Overhead scheduling > 100 Jobs? [message #448219 is a reply to message #448134] Wed, 19 April 2006 11:14
Benjamin Pasero

Thanks for the reply, John!

Yes, I am setting a SchedulingRule on each Job that makes sure that at
any time only 16 Jobs are running concurrently. For that I am counting
the number of running Jobs in an int field, based on a JobChangeListener.
(Is there a better way? I know there is JobManager.find(), but I would
rather not use that in the scheduling rule's isConflicting(), since it is
called so often.)
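
Roughly, the setup looks like this (just a sketch with made-up names, to
illustrate what I mean; the corner case of Jobs that get cancelled before they
ever ran is ignored):

import org.eclipse.core.runtime.jobs.IJobChangeEvent;
import org.eclipse.core.runtime.jobs.ISchedulingRule;
import org.eclipse.core.runtime.jobs.JobChangeAdapter;

/** Shared by all Jobs of one kind; lets at most MAX_CONCURRENT of them run at once. */
public class ConcurrencyLimitRule extends JobChangeAdapter implements ISchedulingRule {

    private static final int MAX_CONCURRENT = 16;

    /** Number of Jobs carrying this rule that are currently running. */
    private int runningCount = 0;

    public synchronized void running(IJobChangeEvent event) {
        runningCount++;
    }

    public synchronized void done(IJobChangeEvent event) {
        runningCount--;
    }

    public boolean contains(ISchedulingRule rule) {
        return rule == this;
    }

    public synchronized boolean isConflicting(ISchedulingRule rule) {
        // Conflict with the other Jobs of this kind once all 16 slots are taken.
        return rule == this && runningCount >= MAX_CONCURRENT;
    }
}

// One shared instance; for every Job of this kind:
//   job.setRule(rule);
//   job.addJobChangeListener(rule);
//   job.schedule();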

It's working quite well; however, I am having difficulty aggregating
the progress of the running Jobs into a single progress monitor. My
current workaround is to let the Jobs run as system Jobs and to have
one non-system Job that runs for the entire duration of the batch and
reports the progress. This special Job also terminates all running and
scheduled Jobs if it is cancelled by the user.
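
(One way to do that kind of bulk cancellation is via Job families. Again just a
sketch, the class name and family token are made up:)

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class BatchJob extends Job {

    /** Shared family token; any unique object will do. */
    public static final Object FAMILY = new Object();

    public BatchJob(String name) {
        super(name);
    }

    public boolean belongsTo(Object family) {
        return family == FAMILY;
    }

    protected IStatus run(IProgressMonitor monitor) {
        // ... do the actual work, checking monitor.isCanceled() regularly ...
        return monitor.isCanceled() ? Status.CANCEL_STATUS : Status.OK_STATUS;
    }
}

// When the user cancels the umbrella Job, the whole batch can be cancelled with:
//   Job.getJobManager().cancel(BatchJob.FAMILY);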

I know there is setProgressGroup() on Job, but from what I read in the
API docs, it's not the right thing to use in this case, right?

Ben

Re: Jobs API: Overhead scheduling > 100 Jobs? [message #448240 is a reply to message #448219] Wed, 19 April 2006 19:38
John Arthorne

I think progress groups are what you want. All jobs that share a
progress group will show as a single entry in the progress view.
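
Roughly, the usage looks like this (a sketch; the helper class name and the jobs
array are placeholders):

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.jobs.Job;

public class BatchScheduler {

    /** Schedules all given Jobs so that they report into one shared progress entry. */
    public static IProgressMonitor scheduleAsGroup(Job[] jobs, String label) {
        IProgressMonitor group = Job.getJobManager().createProgressGroup();
        group.beginTask(label, jobs.length); // one tick per Job here

        for (int i = 0; i < jobs.length; i++) {
            jobs[i].setProgressGroup(group, 1); // must be called before schedule()
            jobs[i].schedule();
        }
        // Call group.done() once the last Job has finished,
        // e.g. from a job change listener.
        return group;
    }
}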
Re: Jobs API: Overhead scheduling > 100 Jobs? [message #448242 is a reply to message #448240] Wed, 19 April 2006 21:46
Benjamin Pasero

I did some experiments with setting a progress group on these Jobs,
but I was not very happy with the way progress is reported. For example,
calling beginTask() from the 16 concurrently running Jobs results in a very
large entry in the Progress view (16 lines of tasks shown). But even worse,
there was no progress indicated at all. Maybe this is a bug in the Progress
view, since with my current workaround I do see the progress when it is
reported from a single Job without a progress group.

Ben
Re: Jobs API: Overhead scheduling > 100 Jobs? [message #448436 is a reply to message #448242] Mon, 24 April 2006 19:52
John Arthorne
I suggest entering a bug report against Platform UI.