Join data sets performance issue [message #367969]
Thu, 23 April 2009 21:02
Neil Wang (Messages: 105, Registered: July 2009), Senior Member
Hi,
I have been having this issue for a long time and would like to find out
whether there is a way to fix it in a later version of BIRT (I am using
BIRT 2.1.2).
The issue: I would like a hierarchical structure in my report. For
example, a schema has tables, a table has columns, and so on.
My data sets are "Schema", "Table" and "Column". To build the hierarchy,
I joined Schema and Table into a joined data set called Schema-Table,
then joined that with Column to form Schema-Table-Column.
I run into an out-of-memory problem when there are many tables or
columns, because they multiply the number of entries in the joined data
sets. For example, if there are 2 schemas and each schema has 10 tables,
the first joined data set has 2 x 10 = 20 entries. If each table has 10
columns, the second joined data set has 20 x 10 = 200 entries. The number
of entries therefore grows very fast, which creates the performance
problem.
Is there any solution to this? Please advise.
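The multiplicative growth described above can be sketched in a few lines of Python. The figures are taken from the example in the post; the point is that each additional join multiplies the row count, which is why a joined data set held in memory grows so quickly.

```python
# Row counts for the example above: 2 schemas, 10 tables per schema,
# 10 columns per table. Each join multiplies the row count.
schemas = 2
tables_per_schema = 10
columns_per_table = 10

# First join: Schema x Table
schema_table_rows = schemas * tables_per_schema            # 20 rows

# Second join: (Schema-Table) x Column
schema_table_column_rows = schema_table_rows * columns_per_table  # 200 rows

print(schema_table_rows, schema_table_column_rows)  # 20 200
```

With realistic catalogs (thousands of tables and columns) the final joined data set can reach millions of rows, which matches the out-of-memory behaviour reported here.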
Re: Join data sets performance issue [message #367998 is a reply to message #367969]
Mon, 27 April 2009 04:41
Eclipse User
Originally posted by: jasonweathersby.alltel.net
Neil,
Can you use nested tables for this instead of a joined data set?
Jason
Re: Join data sets performance issue [message #368265 is a reply to message #368261]
Fri, 15 May 2009 14:00
Eclipse User
Originally posted by: jasonweathersby.alltel.net
Neil,
Is there any way to filter it in the query or in a stored procedure?
Jason
Neil Wang wrote:
> Hi Jason,
>
> I have been trying to come up with some alternatives. With the newly
> created report design, I think the performance issue is the nested
> table structure. For example, if I have 10000 columns in my data
> source, the filtering function iterates through each column and checks
> whether its owning table is the parent table in the nested table
> structure. If it is, the column is displayed; otherwise it is ignored.
> If there are 10000 tables in my data source and each table iterates
> through the 10000 columns, I can imagine the performance being bad.
>
> Any suggestions? Please advise.
>
> cheers,
>
> Neil
>
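The cost Neil describes is a sketchable complexity problem: scanning every column row for every table is O(tables x columns), while grouping the columns by their owning table once up front is O(tables + columns). This is a minimal Python illustration of that difference, not BIRT code; the table and column names are hypothetical.

```python
from collections import defaultdict

# Hypothetical flat data: (table_name, column_name) pairs, as a filtered
# or joined result set would present them.
rows = [("t1", "c1"), ("t1", "c2"), ("t2", "c1"), ("t2", "c3")]
tables = ["t1", "t2"]

# Approach described in the post: for every table, scan every column row.
# Cost is O(tables * columns).
per_table_scan = {t: [c for (tt, c) in rows if tt == t] for t in tables}

# One-pass alternative: group columns by owning table once, then look up.
# Cost is O(tables + columns).
by_table = defaultdict(list)
for table, column in rows:
    by_table[table].append(column)

# Both approaches yield the same grouping.
assert per_table_scan == {t: by_table[t] for t in tables}
```

With 10000 tables and 10000 columns, the scan approach does on the order of 10^8 comparisons while the grouped lookup touches each row once, which is why pushing the filter into the query (or pre-grouping) matters at this scale.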
Re: Join data sets performance issue [message #368283 is a reply to message #368275]
Mon, 18 May 2009 15:51
Eclipse User
Originally posted by: jasonweathersby.alltel.net
Neil,
Are you using a table filter on the inner table, or a data set parameter
for the inner table's query? The data set parameter approach should be
much faster.
Jason
Neil Wang wrote:
> Hi Jason,
>
> I am trying to come up with other alternatives to solve this issue. Do
> you think it is possible to use script to control what is displayed in
> the inner table (the table that holds the column information), so that
> the inner table's detail row does not iterate through every column and
> check whether it needs to be displayed each time a table entry is
> presented?
>
> The problem at hand:
> table1
> (in the detail row of the inner table, it iterates through every
> column to see if it needs to be displayed)
> table2 (iterate to the second table)
> (the same thing happens here: the detail row of the inner table
> iterates through every column to see if it needs to be displayed)
>
> I think the problem is that the iteration through every column is
> taking too long. Do you agree? Is there any way to use script to
> improve the performance? Also, is there another way to change the
> structure (nested tables) of the report to improve the performance?
>
> Thank you very much for your attention.
>
> Neil
>
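The data set parameter approach Jason recommends amounts to letting the database do the filtering: the inner table's query takes the outer row's key as a parameter, so only the matching rows ever reach the report. This is a minimal sqlite3 sketch of that idea (the schema and names are hypothetical, not Neil's actual data source):

```python
import sqlite3

# Hypothetical column metadata, as it might exist in the data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cols (table_name TEXT, column_name TEXT)")
conn.executemany("INSERT INTO cols VALUES (?, ?)",
                 [("t1", "c1"), ("t1", "c2"), ("t2", "c3")])

def inner_query(table_name):
    # Corresponds to a data set parameter bound to the outer row's table
    # name: the database returns only the matching rows, so the report
    # never iterates over unrelated columns.
    cur = conn.execute(
        "SELECT column_name FROM cols WHERE table_name = ?", (table_name,))
    return [r[0] for r in cur.fetchall()]

print(inner_query("t1"))  # ['c1', 'c2']
```

Compared with a table filter, which fetches every column row and discards the non-matching ones per outer row, the parameterized query can use an index on `table_name` and transfers only the rows that will actually be displayed.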