BIRT memory consumption and large dataset [message #367041] Mon, 16 February 2009 11:30
Eclipse UserFriend
Hello,

We are using BIRT in a WebSphere application server to generate reports
that can potentially be very large (several thousand "entities" per
report, with one or more report pages for each entity). The reports are
rendered as PDF.

Our report design uses a scripted dataset that retrieves the data from a
backend server and makes it available to the report design as POJOs (see
below for a summary of our dataset handler code).

When generating our report, I expected BIRT's memory consumption to grow
(of course) but to stabilize at a reasonable value. However, it seems
that the memory grows steadily while BIRT is building the result set
(the server starts with a JVM using 160 MB; once the dataset handler has
processed 2000 entities, memory consumption is nearly 400 MB).

I was initially using the runAndRender task and tried to split it in two
(run task, then render task) to make sure the rptdocument is written to
disk rather than held in memory, but the memory consumption pattern is
the same.

So, is BIRT designed to handle such large data sets (or am I missing
something)? Is it building the entire result set in memory? Is there a
way to control the amount of memory that BIRT will use for this?

If we could configure BIRT so that the result set is written directly to
disk instead of being kept in memory, that would greatly improve the
scalability of our application.

Thanks for your help.

public class PojoDataSetHandler extends ScriptedDataSetEventAdapter {
    (...)
    // Called by the engine once per row; copies the next POJO into the
    // data set row and returns false when the iterator is exhausted.
    public boolean fetch(IDataSetInstance pDataSetInstance,
            IUpdatableDataSetRow pUpdatableDataSetRow) {
        if (mRowIterator.hasNext()) {
            Object myJavaObject = mRowIterator.next();
            pUpdatableDataSetRow.setColumnValue("MyColumnName", myJavaObject);
            return true;
        }
        return false;
    }
    (...)
}
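For context, the engine drives fetch() roughly like this. Below is a simplified, self-contained mock with stand-in types (Row, Handler, drive are made-up names, not the real BIRT interfaces), just to show the one-row-at-a-time contract:

```java
import java.util.*;

public class FetchLoopSketch {
    // Stand-in for BIRT's IUpdatableDataSetRow (hypothetical, simplified).
    interface Row {
        void setColumnValue(String column, Object value);
    }

    // Simplified handler mirroring the fetch() logic above.
    static class Handler {
        private final Iterator<Object> mRowIterator;
        Handler(Iterator<Object> it) { mRowIterator = it; }

        boolean fetch(Row row) {
            if (mRowIterator.hasNext()) {
                row.setColumnValue("MyColumnName", mRowIterator.next());
                return true;  // a row was produced
            }
            return false;     // no more rows: end of the data set
        }
    }

    // The engine calls fetch() once per row until it returns false.
    static int drive(Handler h) {
        List<Object> sink = new ArrayList<>();
        Row row = (col, val) -> sink.add(val);
        int n = 0;
        while (h.fetch(row)) n++;
        return n;
    }

    public static void main(String[] args) {
        List<Object> pojos = Arrays.asList("a", "b", "c");
        System.out.println(drive(new Handler(pojos.iterator())));
    }
}
```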
Re: BIRT memory consumption and large dataset [message #367043 is a reply to message #367041] Mon, 16 February 2009 11:44
Eclipse UserFriend
I forgot to mention that we are using the 2.3.1 runtime.
Re: BIRT memory consumption and large dataset [message #367056 is a reply to message #367043] Mon, 16 February 2009 15:58
Eclipse UserFriend
Originally posted by: jasonweathersby.alltel.net

Nicolas,

Take a look at this thread.
http://www.birt-exchange.com/forum/eclipse-birt-newsgroup-mirror/8401-cache-configuration.html
Also bear in mind that org.eclipse.birt.data.query.ResultBufferSize
is set in megabytes; the lowest value is 1 MB. If you change it and then
set the log level to FINE, you should see messages like

DiskCache is used
or
MemoryCache is used

in the log file. You may also want to file a Bugzilla entry to discuss
this, because the usage does seem high for 2000 rows.
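For example, assuming the data engine picks the property up as a JVM system property (the linked thread describes the exact wiring for your version; the jar name below is just a placeholder):

```shell
# Cap the in-memory result buffer at 1 MB (the minimum) so larger
# result sets go to the disk cache instead of staying on the heap.
# Assumption: the property is read as a system property in your setup.
java -Dorg.eclipse.birt.data.query.ResultBufferSize=1 -jar your-report-app.jar
```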

Jason

Nicolas wrote:
> I forgot to mention that we are using the 2.3.1 runtime.
>