BIRT memory consumption and large dataset [message #367041]
Mon, 16 February 2009 11:30
Eclipse User
Hello,
We are using BIRT in WebSphere Application Server to generate reports
that can potentially be very large (several thousand "entities" per
report, with one or more report pages for each entity). The reports are
rendered as PDF.
Our report design uses a scripted dataset that retrieves the data from a
backend server and makes it available to the report design as POJOs (see
below for a summary of our dataset handler code).
When generating our report, I expected BIRT's memory consumption to grow
(of course) but then stabilize at a reasonable level. Instead, memory
grows steadily while BIRT is building the result set: the server starts
with a JVM using 160MB, and once the dataset handler has processed 2000
entities, consumption is already close to 400MB.
I was initially using the runAndRender task and tried splitting it in two
(a run task followed by a render task) to make sure the rptdocument is
written to disk rather than held in memory, but the memory consumption
pattern is the same.
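For reference, here is roughly how we split the task (file paths are
placeholders, the engine startup code is omitted, and 'engine' stands for
the IReportEngine instance we create when the application starts):

import org.eclipse.birt.report.engine.api.*;

void runThenRender(IReportEngine engine) throws EngineException {
    // Phase 1: run the report and write the intermediate report
    // document (.rptdocument) to disk.
    IReportRunnable design = engine.openReportDesign("/tmp/report.rptdesign");
    IRunTask runTask = engine.createRunTask(design);
    runTask.run("/tmp/report.rptdocument");
    runTask.close();

    // Phase 2: reopen the on-disk document and render it to PDF.
    IReportDocument document = engine.openReportDocument("/tmp/report.rptdocument");
    IRenderTask renderTask = engine.createRenderTask(document);
    PDFRenderOption options = new PDFRenderOption();
    options.setOutputFormat("pdf");
    options.setOutputFileName("/tmp/report.pdf");
    renderTask.setRenderOption(options);
    renderTask.render();
    renderTask.close();
    document.close();
}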
So, is BIRT designed to handle such large data sets (or am I missing
something)? Is it building the entire result set in memory? Is there a
way to control the amount of memory that BIRT will use for this? If we
could configure BIRT so that the result set is written directly to disk
instead of being kept in memory, that would greatly improve the
scalability of our application.
Thanks for your help.
public class PojoDataSetHandler extends ScriptedDataSetEventAdapter {
    (...)
    // Called by BIRT once per row; mRowIterator iterates over the POJOs
    // retrieved from the backend server (set up in the code elided above).
    public boolean fetch(IDataSetInstance pDataSetInstance,
            IUpdatableDataSetRow pUpdatableDataSetRow) {
        if (mRowIterator.hasNext()) {
            Object myJavaObject = mRowIterator.next();
            // Expose the current POJO to the report as a single column.
            pUpdatableDataSetRow.setColumnValue("MyColumnName", myJavaObject);
            return true;
        }
        return false;
    }
    (...)
}