Re: OutOfMemory for simple example persisting large amount of data [message #385246 is a reply to message #385244]
Wed, 14 January 2009 17:11
Johannes Stamminger (Messages: 17, Registered: July 2009)
My test code snippets:
<persistence-unit name="entries-test-eclipselink-derby"
                  transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>test.SimpleEntryEntity</class>
    <properties>
        <property name="eclipselink.jdbc.driver"
                  value="org.apache.derby.jdbc.EmbeddedDriver"/>
        <property name="eclipselink.target-database" value="Derby"/>
        <property name="eclipselink.jdbc.user" value=""/>
        <property name="eclipselink.jdbc.password" value=""/>
        <property name="eclipselink.orm.throw.exceptions" value="true"/>
        <property name="eclipselink.ddl-generation" value="create-tables"/>
        <property name="eclipselink.ddl-generation.output-mode" value="database"/>
    </properties>
</persistence-unit>
@Entity
@Table(name = "SIMPLE_ENTRY_t")
public class SimpleEntryEntity {
    private long fA;
    private short fB;
    private long fC;
    private long fD;

    protected SimpleEntryEntity() { } // no-arg constructor required by JPA

    SimpleEntryEntity(final long a,
                      final short b,
                      final long c,
                      final long d) {
        fA = a;
        fB = b;
        fC = c;
        fD = d;
    }

    @Column(name = "A", nullable = false, updatable = false)
    public long getA() {
        return fA;
    }

    public void setA(final long a) {
        fA = a;
    }

    @Column(name = "B", nullable = false, updatable = false)
    public short getB() {
        return fB;
    }

    public void setB(final short b) {
        fB = b;
    }

    @Column(name = "C", nullable = false, updatable = false)
    public long getC() {
        return fC;
    }

    public void setC(final long c) {
        fC = c;
    }

    @Id
    @Column(name = "D", nullable = false, updatable = false)
    public long getD() {
        return fD;
    }

    public void setD(final long d) {
        fD = d;
    }
}
public void testEclipselinkHSQLDB() throws Exception {
    final Map<String, String> dbProperties = new HashMap<String, String>();
    dbProperties.put(PersistenceUnitProperties.JDBC_URL,
            "jdbc:hsqldb:file:" + getCurrentTestDir() + File.separator
                    + "entries-test-eclipselink-hsqldb;shutdown=true;create=true");
    dbProperties.put(PersistenceUnitProperties.LOGGING_FILE,
            getCurrentTestDir() + File.separator
                    + "entries-test-eclipselink-hsqldb.log");
    final EntityManagerFactory entityManagerFactory = Persistence
            .createEntityManagerFactory("entries-test-eclipselink-hsqldb",
                    dbProperties);
    final EntityManager entityManager =
            entityManagerFactory.createEntityManager();
    final EntityTransaction transaction = entityManager.getTransaction();
    transaction.begin();
    for (int i = 0; i < NB_ENTRIES; i++) {
        entityManager.persist(
                new SimpleEntryEntity(i, (short) (i + 1), i + 2, i + 3));
        if (i > 0 && i % COMMIT_AFTER == 0) {
            commit(transaction, i);
            transaction.begin();
        }
    }
    commit(transaction, NB_ENTRIES);
    entityManager.close();
    LOG.debug("entityManager.closed");

    final EntityManager entityManagerLoad =
            entityManagerFactory.createEntityManager();
    final Query q =
            entityManagerLoad.createQuery("select E from SimpleEntryEntity E");
    final List<SimpleEntryEntity> r = q.getResultList();
    assertEquals(NB_ENTRIES, r.size());
    LOG.debug("contents verified");
    entityManagerLoad.close();
    LOG.debug("entityManager.closed");
    entityManagerFactory.close();
    LOG.debug("entityManagerFactory.closed");
}

private void commit(final EntityTransaction transaction,
                    final int i) throws StandardException {
    LOG.debug("commit " + i);
    transaction.commit();
    LOG.debug("commit " + i + " done");
}
Re: OutOfMemory for simple example persisting large amount of data [message #385252 is a reply to message #385249]
Thu, 15 January 2009 15:43
Johannes Stamminger (Messages: 17, Registered: July 2009)
Thanks a lot for the reply!
James wrote:
> This occurs because an EntityManager is an extended persistence context,
> in that it must store and track changes to everything you read through it.
I understood this after having a look at the heap dump from the out-of-memory
error. But the annotations declare that nothing may be changed (all columns are
updatable = false). My application use case is to store immutable entities
(just an index of something else). Am I wrong in saying that the unit of work
is not a must-have in that situation, or that it at least should not prevent
the cached entities from being garbage collected (at least after a commit)?
> In JPA you must either:
> - Create a new EntityManager for each transaction.
I thought I had tested this - but obviously, since it works now, maybe I had
tried it with HSQLDB as the database backend, where a different issue shows up.
I just checked with the HSQLDB backend again: it still runs out of memory. But
the heap dump now shows a different cause: a big bunch of BigDecimal instances
now sits at the top of the heap histogram. My first guess (and the RootSet
references show this, too) is that this is because HSQLDB is not using
disk-backed tables but in-memory ones ("create table XYZ ..." instead of
"create cached table XYZ ..."). Is there any other way than hand-coding the
table create statement?
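(If the in-memory table default really is the cause, a sketch of one way to
avoid hand-written DDL: HSQLDB has a hsqldb.default_table_type setting that
makes plain "create table" produce CACHED, i.e. disk-backed, tables. Whether it
can be appended to the JDBC URL or must be set via the SET PROPERTY SQL
statement depends on the HSQLDB version; the URL form below is an assumption.)

```java
// Sketch: ask HSQLDB to create CACHED (disk-backed) tables by default,
// so EclipseLink's generated "create table" DDL does not need editing.
// Alternative for older versions, executed once as SQL:
//   SET PROPERTY "hsqldb.default_table_type" 'cached'
dbProperties.put(PersistenceUnitProperties.JDBC_URL,
        "jdbc:hsqldb:file:" + getCurrentTestDir() + File.separator
                + "entries-test-eclipselink-hsqldb"
                + ";shutdown=true;create=true"
                + ";hsqldb.default_table_type=cached");
```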
> - Call clear() after each transaction to clear the persistence context.
Works, too. Yesterday was too long a day for me to see this ;-)
> If you are concerned about performance for your mass insert, you could
Yes, I am - or better said: I was. With your hints it looks like it is
becoming practicable.
> also enable batch writing, and just for the purpose of the mass insert you
> could also disable caching (shared=false).
I had already found the batch writing.
I need to read more about the @Cache(shared = false) annotation before I fully
understand its meaning. But obviously it does not completely disable the
unit-of-work tracking (it still runs out of memory without the clear() call).
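(For completeness, a sketch of the two persistence-unit properties you
mention, as they could be added to the <properties> section of the
persistence.xml from my first post; the property names assume current
EclipseLink naming:)

```xml
<!-- Group the INSERT statements into JDBC batches for the mass insert. -->
<property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
<!-- Disable the shared (second-level) cache for all entities; the
     per-entity equivalent is the @Cache(shared = false) annotation. -->
<property name="eclipselink.cache.shared.default" value="false"/>
```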
Thanks again,
Johannes