
Re: [stem-dev] stem-dev Digest, Vol 115, Issue 1

Thank you for sharing this.
The -XX:MaxPermSize option is ignored by Java 8 and later (the JVM manages that space automatically), so adjusting it will not help.
The -Xmx option sets the maximum heap size available to your JVM, but there is no magic here: you can't allocate more memory than you have on your physical machine. In my experience, for graphs with more than ~20,000 nodes you really want to be running on a large workstation or a large-memory server. If you exceed your system memory (e.g., on a laptop), the machine will start swapping processes in and out of memory and, as a result, slow down dramatically.
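One easy way to confirm what -Xmx actually gave you is to ask the JVM itself. This is a generic JVM sketch (not STEM-specific): run it with the same -Xmx value you put in stem.ini and it prints the maximum heap that JVM instance will use.

```java
// HeapCheck.java -- print the maximum heap the running JVM will use.
// Run with, e.g., `java -Xmx16384m HeapCheck` to verify the -Xmx setting.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap available to this JVM: %d MB%n",
                maxBytes / (1024L * 1024L));
    }
}
```

If the printed value is far below what you set, the JVM that launched STEM is not picking up your stem.ini edits.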

Best Regards,

From:        stem-dev-request@xxxxxxxxxxx
To:        stem-dev@xxxxxxxxxxx
Date:        11/07/2018 09:00 AM
Subject:        stem-dev Digest, Vol 115, Issue 1
Sent by:        stem-dev-bounces@xxxxxxxxxxx


Today's Topics:

  1. Update and questions on large pajek graph and memory
     available to STEM (Emily Nixon)


Message: 1
Date: Wed, 7 Nov 2018 16:35:53 +0000
From: Emily Nixon <emily.nixon@xxxxxxxxxxxxx>
To: developer mailing list STEM <stem-dev@xxxxxxxxxxx>
Subject: [stem-dev] Update and questions on large pajek graph and
                memory available to STEM
Content-Type: text/plain; charset="iso-8859-1"

Hi all,

I have two things I would like to update you on and discuss relating to large pajek graphs. Both concern the amount of memory available to STEM.

Size limits for importing pajek graphs

We identified before (STEM call, July 20th 2017) that the amount of memory available to STEM affects whether large pajek graphs can be imported into STEM. I had already been following all the suggestions from that call:

Jamie: To resolve the size issue, edit the stem.ini file and increase the heap size to 1 GB, up to half the memory on the machine; per Taras, the details are on the wiki.

Jamie: Do a test run of the model. Go to Preferences in STEM; under Simulation Management, uncheck "pause after each"; under Solver, set the number of concurrent worker threads to one less than the number of cores so the work is spread across multiple processors; use a smaller graph for testing.
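For reference, a sketch of what the -vmargs section of stem.ini might look like after these changes. The exact values are assumptions for illustration and depend on your machine's RAM (keep -Xmx at or below half of physical memory); any other lines in your stem.ini should be left as-is.

```
-vmargs
-Xms512m
-Xmx8192m
```

On Java 8 or later, any -XX:MaxPermSize line can simply be deleted, since the JVM ignores it.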

However, I have been having difficulty over the past week or so getting my latest large pajek graph, which has multiple triggers, to import into STEM, even after increasing -Xmx to 16 GB (16384M).

This has been resolved for now by reducing the number of triggers in the pajek graph (thanks to Armin for suggesting this and for identifying that it was a memory issue rather than another problem with my graph, and thanks also to Taras). The graph that worked had 68,624 nodes and 204,816 edges with 5 triggers per edge. The graph that was too big had 68,624 nodes and 251,812 edges (204,816 of these with 5 triggers per edge and 46,996 with 11 triggers). I thought it might be helpful to give STEM DEV these specifics as a future benchmark for size limits.

Ideally, however, there would be a way to increase this limit: if I want to simulate my model for longer than a one-year period, I will need to include more triggers in the model (unless there is a way to specify that triggers recur every year?). There may also be other people who want to work with larger pajek graphs.

If it is purely down to the amount of memory available to STEM, then I guess larger pajek graphs would only be possible on machines with more than 16 GB of memory? Or are there other ways to increase the memory available?

There is a parameter in stem.ini which I don't understand: -XX:MaxPermSize=256m. Could adjusting this help in any way?

I also wasn't sure whether adjusting -Xms would have any impact.

Running a simulation with a large pajek graph

Now that I have managed to import my large pajek graph into STEM, I am having problems running the scenario that contains it. So far today it has remained at 0%. Nothing is appearing in the error log, so I don't think there is an error; it may simply be running very slowly. Are there any ways, other than those mentioned above, that might speed up the simulation? Is this also a memory issue?

Sorry for the really long email. I hope it might be helpful for people to be updated about this though and I'm keen to hear if people have any suggestions in answer to my questions.

Best wishes,


Emily Nixon
PhD Student


School of Biological Sciences
University of Bristol
Bristol Life Sciences Building
24 Tyndall Avenue
Tel +44 (0)117 394 1389


End of stem-dev Digest, Vol 115, Issue 1
