
[stem-dev] Update and questions on large pajek graph and memory available to STEM

Hi all,

I have two things I would like to update you on and discuss, both relating to large Pajek graphs and the amount of memory available to STEM.

Size limits for importing Pajek graphs

We identified previously (STEM call, July 20th 2017) that the amount of memory available to STEM affects whether large Pajek graphs can be imported. I had already been following all the suggestions from that call:

Jamie: To resolve size issue, go to stem.ini file, increase heap size to 1 gig; up to half the memory on the machine; per Taras, info is on wiki. 

Jamie: Do a test run of the model; go to Preferences in STEM; under Simulation Management, uncheck "pause after each"; under Solver, set the number of concurrent worker threads to less than the number of cores so the work is spread across multiple processors; use a smaller graph for testing.

However, in the past week or so I have been having difficulties getting my latest large Pajek graph, which has multiple triggers, to import into STEM, even after increasing -Xmx to 16GB (16384M).
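For anyone checking their own setup: the heap flag has to come after the -vmargs line in stem.ini (the launcher file next to the STEM executable), and eclipse-style .ini files treat every line as a separate argument, so each flag goes on its own line. A sketch of the relevant tail of my file (the launcher lines above -vmargs will vary per install):

```ini
-vmargs
-Xmx16384m
```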

This has been resolved for now by reducing the number of triggers in the Pajek graph (thanks to Armin for suggesting this and for identifying that it was a memory issue rather than another problem with my graph, and thanks also to Taras). The graph which worked had 68,624 nodes and 204,816 edges, with 5 triggers per edge. The graph which was too big had 68,624 nodes and 251,812 edges (204,816 of these with 5 triggers per edge and 46,996 with 11 triggers). I thought it might be helpful to let stem-dev know these specifics as a future benchmark for size limits.

However, it would ideally be good to have a way to raise this limit: if I want to simulate my model for longer than a one-year period, I will need to include more triggers in the model (unless there is a way to specify that triggers recur every year?). Other people might want to use larger Pajek graphs too.

If it is purely down to the amount of memory available to STEM, then I guess larger Pajek graphs would only be possible on machines with more than 16GB of memory? Or are there other ways to increase the memory available?

There is also a parameter in stem.ini which I don't understand: -XX:MaxPermSize=256m. Could adjusting this help in any way?

I also wasn't sure whether adjusting -Xms would have any impact.
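From what I understand of the standard JVM flags (happy to be corrected): -Xms only sets the initial heap size, so it changes startup behaviour but not the ceiling, which -Xmx sets; and -XX:MaxPermSize caps the permanent generation, which holds class metadata rather than object data such as graph nodes and edges, so raising it seems unlikely to help with the import (and on Java 8 and later the flag is ignored altogether). For reference, a sketch of how the three flags would sit together after -vmargs in stem.ini (the values here are just examples):

```ini
-vmargs
-Xms2048m
-Xmx16384m
-XX:MaxPermSize=256m
```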

Running a simulation with a large Pajek graph

Now that I have managed to import my large Pajek graph into STEM, I am having problems running the scenario that contains it. So far today it has remained at 0%. Nothing is coming up in the error log, so I don't think there is an error; it may just be running very, very slowly. Are there any ways, beyond those mentioned above, that might speed up the simulation? Is this also a memory issue?

Sorry for the really long email. I hope it might be helpful for people to be updated about this though and I'm keen to hear if people have any suggestions in answer to my questions. 

Best wishes,



Emily Nixon
PhD Student


School of Biological Sciences
University of Bristol
Bristol Life Sciences Building
24 Tyndall Avenue
Tel +44 (0)117 394 1389
