
Re: [eclipselink-users] Efficient loading to minimise memory spike

Thanks, James

I had solved this in the meantime, so I missed your reply. That's excellent detail; I'll keep a reference to it and may look into it more in the future.

As for the symptoms I was experiencing, I was guilty of not completely reading the manual - I wasn't weaving the code. After implementing static weaving at build time, the application's memory footprint is vastly reduced.
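For reference, the static weaving step can also be driven from code (a sketch, assuming the StaticWeaveProcessor API; the directory names are placeholders, and the Ant task or java command line is the more usual entry point):

    import org.eclipse.persistence.tools.weaving.jpa.StaticWeaveProcessor;

    public class WeaveEntities {
        public static void main(String[] args) throws Exception {
            // Weave the compiled entity classes from the input directory
            // into the output directory at build time.
            StaticWeaveProcessor weaver =
                    new StaticWeaveProcessor("build/classes", "build/woven-classes");
            weaver.performWeaving();
        }
    }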

Thanks again

On 3/02/10 3:51 AM, James Sutherland wrote:
You can use pagination in JPA to limit query results.  See
Query.setFirstResult(), setMaxResults().
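For example (a minimal sketch in JPA 1.0 style; the Record entity and query are placeholders):

    // Fetch one page of results; an ORDER BY keeps paging stable.
    Query query = em.createQuery("SELECT r FROM Record r ORDER BY r.id");
    query.setFirstResult(0);   // offset of the first result to return
    query.setMaxResults(500);  // maximum number of results (page size)
    List page = query.getResultList();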

EclipseLink also supports cursors, streams, and scrolling.
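For example, a cursored stream can be requested through a query hint (a sketch, assuming the "eclipselink.cursor" hint and CursoredStream API are available in your version):

    Query query = em.createQuery("SELECT r FROM Record r");
    query.setHint("eclipselink.cursor", true);
    CursoredStream stream = (CursoredStream) query.getSingleResult();
    while (stream.hasMoreElements()) {
        Record record = (Record) stream.nextElement();
        // ... process the record ...
        stream.releasePrevious(); // free processed rows so they can be GC'd
    }
    stream.close();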


In terms of the copies of the object maintained by the persistence context,
there can be 3:
1 - The working copy that is returned to you, which you can edit.
2 - The original object in the shared object cache. If you disable caching
(shared=false), this object will not be built. Alternatively, if you are just
reading an object, you could use the query hint "eclipselink.read-only" to
build and return only this original object.
3 - The backup clone, which is used to track changes. When a JVM agent is used
or weaving is enabled, no backup clone is built by default, as attribute-level
change tracking is performed through events instrumented into your get/set
methods. Some types of mappings, however, require "deferred" change tracking,
such as when the mapping for your LOB uses serialization to a complex object.
This can be controlled through the @Mutable and @ChangeTracking annotations.
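To illustrate both (a sketch; the Record entity and its data field are hypothetical):

    // Read-only query: only the shared cache instance is built and returned,
    // so no working copy or backup clone is created.
    Query query = em.createQuery("SELECT r FROM Record r");
    query.setHint("eclipselink.read-only", "true");

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Lob;
    import org.eclipse.persistence.annotations.ChangeTracking;
    import org.eclipse.persistence.annotations.ChangeTrackingType;
    import org.eclipse.persistence.annotations.Mutable;

    // Force attribute-level change tracking, and mark the LOB as immutable so
    // a deferred-tracking copy is not required for it.
    @Entity
    @ChangeTracking(ChangeTrackingType.ATTRIBUTE)
    public class Record {
        @Id
        private long id;

        @Lob
        @Mutable(false)
        private byte[] data;
    }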

Brad Milne wrote:
Using EclipseLink v1.0.2 with JPA

I have a thick client that displays database records that each have
associated byte arrays up to around 26k. This field is lazily loaded, so
not all the arrays are loaded into memory at query time.

I had severe memory problems a few months ago, as EclipseLink keeps 2 or
more copies of each record as part of the persistence context, etc. The
solution was to detach all records from the context on load
(entityManager.clear()), then merge each record
(entityManager.merge(record)) when it is selected, and re-detach it when
moving to the next. This (seemingly ugly) solution has worked well.
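In code, the workaround looks roughly like this (a sketch; Record and selectedRecord are placeholders):

    // Load the records, then detach everything so the context holds no copies.
    List records = em.createQuery("SELECT r FROM Record r").getResultList();
    em.clear();

    // When a record is selected, merge it back so its byte array can be
    // lazily loaded through the managed copy ...
    Record managed = em.merge(selectedRecord);

    // ... and re-detach when moving on to the next record.
    em.clear();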

The problem I'm encountering now, however, is that when a query returns
very large numbers of records, the memory requirements are huge even
during the initial load (i.e. before the associated byte arrays are
lazily loaded in), exceeding reasonable limits. A logical approach would
seem to be to load the records in chunks, then use the detach and merge
described above as needed. Is there an elegant/API way to achieve this?
I have heard of mechanisms for this in other persistence frameworks.
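For what it's worth, the chunked approach I have in mind would look something like this (a sketch; Record and processChunk are placeholders):

    int pageSize = 500;
    for (int first = 0; ; first += pageSize) {
        Query query = em.createQuery("SELECT r FROM Record r ORDER BY r.id");
        query.setFirstResult(first);
        query.setMaxResults(pageSize);
        List chunk = query.getResultList();
        if (chunk.isEmpty()) {
            break; // no more records
        }
        processChunk(chunk);
        em.clear(); // detach the chunk so it can be garbage collected
    }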

Many thanks

----- James Sutherland
EclipseLink
Wiki: EclipseLink, TopLink
Forums: TopLink, EclipseLink
Book: Java Persistence
