thin JDBC to large DB [message #144620]
Thu, 16 March 2006 12:46
Eclipse User
We are using a thin JDBC connection to set up BIRT data sources and data sets
against an Oracle data warehouse that has a large number of users (14K+) and
lots of tables (4.7K+), and of course lots of table columns (41K+).
The problem is that the DataSet Editor either:
1. takes forever, and/or
2. uses all available memory
...apparently querying/loading every schema, table/view, and column in the
instance.
Does anyone have a suggestion for how we might handle this better?
P.S. We have no problems connecting to smaller instances.
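For what it's worth, the slowdown is reproducible outside BIRT with a few lines
of plain JDBC. The sketch below is only an illustration (the connection URL,
credentials, and the SOURCE1 schema name are placeholders): an unfiltered
DatabaseMetaData.getColumns() call makes the driver return a row for every
column the login can see, which appears to be roughly what the DataSet Editor
asks for, while narrowing the schema pattern keeps the result manageable.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataScan {
    public static void main(String[] args) throws Exception {
        // Oracle thin driver; host/SID and credentials are placeholders.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dwhost:1521:DWH", "report_user", "secret");
        DatabaseMetaData md = conn.getMetaData();

        // Null catalog/schema and "%" patterns return a row for every column
        // the login can see -- 41K+ in our warehouse.
        int all = 0;
        ResultSet rs = md.getColumns(null, null, "%", "%");
        while (rs.next()) {
            all++;
        }
        rs.close();
        System.out.println("columns visible to this login: " + all);

        // Scoping to a single reporting schema keeps the result set small.
        int scoped = 0;
        ResultSet rs2 = md.getColumns(null, "SOURCE1", "%", "%");
        while (rs2.next()) {
            scoped++;
        }
        rs2.close();
        System.out.println("columns in SOURCE1 only: " + scoped);

        conn.close();
    }
}

Presumably a fix on the BIRT side would be for the editor to issue something
closer to the second, schema-scoped call.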
Re: thin JDBC to large DB [message #192492 is a reply to message #149328]
Mon, 25 September 2006 17:36
Eclipse User
This bottleneck occurs when you have a lot of users defined in your Oracle
DB. We were able to work around the problem by creating a view (under each of
our own schemas) of sys.all_users that selects only a small subset of
Oracle users:

CREATE VIEW ALL_USERS AS
  SELECT *
  FROM sys.all_users
  WHERE username IN ('SOURCE1', 'SOURCE2');

A bit clumsy, but it works for us until BIRT optimizes the way they use
this Oracle table in the query tool.
"Scott Rosenbaum" <scottr@innoventsolutions.com> wrote in message
news:e0f2g7$is0$1@utils.eclipse.org...
> Please let me know when you have the Bugzilla entry in, I would like to
> track on this one.
>
> Scott Rosenbaum
> BIRT PMC
>
> BITBURNER wrote:
>> So far, we are having to use the Sun JDBC/ODBC bridge ...which we are not
>> happy about, because Oracle/ODBC will be a real pain to have to roll out to
>> our users (...and to support it!).
>>
>> "birt" <birt@ohds.co.uk> wrote in message
>> news:dvi6ba$eqk$1@utils.eclipse.org...
>>> We have exactly the same problem with a much smaller db. If you find a
>>> solution let me know.
>>>
>>> Birt.
>>>
>>> "BITBURNER" <rileymg@indiana.edu> wrote in message
>>> news:dvc8eo$p6$1@utils.eclipse.org...
>>>> We are using a thin JDBC connection to setup BIRT datasources &
>>>> datasets to an Oracle datawarehouse that has a large # of users (14K+),
>>>> and lots of tables (4.7K+)...(and lots of table-cols, of course...41K+)
>>>>
>>>> The problem is that the DataSet Editor either:
>>>> 1. takes forever, and/or
>>>> 5. uses all available memory
>>>> ...apparently querying/loading every schema and/or table/view and/or
>>>> column in the instance.
>>>>
>>>> Does anyone have a suggestion of how we might be able to handle this
>>>> better?
>>>>
>>>> P.S. We have no problems connecting to smaller instances.
>>>>
>>>>
>>>
>>