Eclipse Community Forums
Home » Eclipse Projects » Eclipse Deeplearning4j » Memory crash after using output repeatedly on large datasets
Memory crash after using output repeatedly on large datasets [message #1785137] Mon, 09 April 2018 23:09
Vincenzo Caselli
Messages: 210
Registered: January 2012
Senior Member

During training, when the overall error goes below a given value (e.g. 0.1), I periodically test results using
MultiLayerNetwork.output()
over around 100 test images, in order to have a custom evaluation of the network training.

However, after a few such testing iterations the memory slowly starts rising until the JVM crashes, without any error message in the Java console.
I double-checked how I instantiate new Java objects in this phase and cleaned up every usage, until I now have a static array of INDArray (one per test image) and a corresponding static array of Strings for the labels: they are prepared once in advance, and during training they are only passed to the output() method.
Nevertheless, the memory still increases as soon as I start using output().

Please note that the number of training inputs is quite high (over 300,000), the network is a CNN (very similar to the MNIST CNN example included in the DL4J examples), and overall it behaves just wonderfully, as long as I do not use the .output() method.

I experienced this behavior both on Linux CentOS 7.4 and on Ubuntu 16.04. The DL4J version is 0.9.1; RAM is 16 GB, with 64 GB of swap available. The memory options at launch are
-XX:+UseSerialGC -Xmx20G -Dorg.bytedeco.javacpp.maxbytes=16G -Dorg.bytedeco.javacpp.maxphysicalbytes=16G
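Worth noting: ND4J stores INDArray data off-heap (managed by JavaCPP and capped by -Dorg.bytedeco.javacpp.maxbytes), so growth there is invisible to -Xmx and to ordinary heap profilers. A minimal, self-contained illustration of the difference, using plain ByteBuffer.allocateDirect as a stand-in for ND4J's native buffers:

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // Allocate 64 MB off-heap: this memory does NOT count toward -Xmx
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        long heapAfter = rt.totalMemory() - rt.freeMemory();
        // The heap barely grows, even though 64 MB of native memory are now held
        System.out.println("heap delta (bytes): " + (heapAfter - heapBefore));
        System.out.println("direct buffer capacity: " + direct.capacity());
    }
}
```

So a crash with no Java console output is consistent with off-heap exhaustion (the process is killed by the OS) rather than an OutOfMemoryError on the heap.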

I could of course run this periodic testing session in a separate JVM instance, so as to avoid crashing the training instance, but I am still wondering what is causing the memory problem.

Could the output() method be affected by some form of memory leak, or is there any cleanup I could do after each usage? (I already tried calling .cleanup() and also .detach() on the output, but without success.)
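One thing that may be worth trying (a sketch, untested, based on the ND4J 0.9.1 workspace API; `net`, `testImages`, and `labels` are placeholders for the variables described above): scoping each output() call inside a MemoryWorkspace, so the off-heap activation buffers are recycled per iteration instead of waiting for the garbage collector to release them:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

// One workspace configuration, reused across all evaluation calls
WorkspaceConfiguration evalConf = WorkspaceConfiguration.builder()
        .policyAllocation(AllocationPolicy.STRICT)
        .policyLearning(LearningPolicy.FIRST_LOOP)
        .build();

for (int i = 0; i < testImages.length; i++) {
    // Arrays created inside this scope live in the workspace buffer,
    // which is cycled on each iteration rather than accumulating
    try (MemoryWorkspace ws = Nd4j.getWorkspaceManager()
            .getAndActivateWorkspace(evalConf, "EVAL_WS")) {
        INDArray out = net.output(testImages[i], false); // false = inference mode
        // detach() copies the result out of the workspace if it must outlive the scope
        INDArray kept = out.detach();
        // ... compare kept against labels[i] ...
    }
}
```

This is only a guess at a mitigation, not a confirmed fix for the crash described above.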

Thank you very much
Vincenzo
Re: Memory crash after using output repeatedly on large datasets [message #1785450 is a reply to message #1785137] Fri, 13 April 2018 22:01
Vincenzo Caselli
Messages: 210
Registered: January 2012
Senior Member

Got an answer here:
https://github.com/deeplearning4j/deeplearning4j/issues/4917