
[eclipselink-users] Efficient loading to minimise memory spike

Using EclipseLink v1.0.2 with JPA

I have a thick client that displays database records, each with an associated byte array of up to around 26k. That field is lazily loaded, so not all of the arrays are pulled into memory at query time.

I had severe memory problems a few months ago, because EclipseLink keeps two or more copies of each record as part of the persistence context (the working copy, the backup copy used for change tracking, and so on). The workaround was to detach all records from the context on load (entityManager.clear()), merge each record back in (entityManager.merge(record)) when it is selected, and detach it again when moving to the next. This (seemingly ugly) solution has worked well; a rough sketch follows.
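To make the pattern concrete, here is a minimal sketch of what I mean, assuming an open EntityManager em and a hypothetical Record entity with a lazily-loaded data field (all names are illustrative):

    // Initial load: fetch the records, then detach everything so the
    // persistence context does not keep its extra copies around.
    List<Record> records = em.createQuery("SELECT r FROM Record r").getResultList();
    em.clear();

    // When the user selects a record: re-attach it so the lazy field
    // can be loaded, read the data, then detach again.
    Record managed = em.merge(selected);  // returns the managed copy
    byte[] data = managed.getData();      // triggers the lazy load
    em.clear();                           // detach before moving to the next record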

The problem I'm encountering now, however, is that some queries return so many records that the memory requirements are huge even during the initial load (i.e. before any of the associated byte arrays are lazily loaded in), exceeding reasonable limits. A logical approach would seem to be to load the records in chunks, then use the detach-and-merge pattern described above when needed. Is there an elegant/API way to achieve this? I have heard of mechanisms for this in other persistence frameworks.
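For what it's worth, the closest thing I have found in the standard JPA API is pagination via Query.setFirstResult()/setMaxResults(). The sketch below (same em and Record as above; the page size and the process() helper are made up for illustration) shows how that might combine with the clearing approach:

    int pageSize = 500;  // assumed; would need tuning against the available heap
    for (int first = 0; ; first += pageSize) {
        List<Record> page = em.createQuery("SELECT r FROM Record r ORDER BY r.id")
                              .setFirstResult(first)
                              .setMaxResults(pageSize)
                              .getResultList();
        if (page.isEmpty()) break;       // no more records
        process(page);                   // e.g. copy display fields into lightweight DTOs
        em.clear();                      // detach this page before fetching the next
    }

The ORDER BY clause is there so that the page boundaries stay stable across the successive queries. Is there anything more elegant than this?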

Many thanks
Brad


