BIRT // Hive // Unable to execute aggregate functions [message #1712329]
Fri, 23 October 2015 04:07
aashish kumar, Junior Member (Messages: 7, Registered: September 2015)
Hello,
I am stuck in a problem with BIRT connected to Hive.
Earlier I had a Hive table "testcpuinfo" for which I had put all the data into a single file and loaded that one file into the table.
At that time my aggregate functions worked fine.
example:
select min(usercpu),reportname from testcpuinfo group by reportname;
Later, I found that it's possible to load data from different files into the same Hive table. So now the data in my Hive table consists of the following files:
hduser@ubuntu:~/hive/examples/icc_load$ hadoop fs -ls /user/hive/warehouse/testcpuinfo
15/10/22 20:39:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 4 items
-rwxr-xr-x 1 hduser supergroup 32535 2015-10-22 07:13 /user/hive/warehouse/testcpuinfo/Report_11082015-040236_cpu.csv
-rwxr-xr-x 1 hduser supergroup 14549 2015-10-22 07:13 /user/hive/warehouse/testcpuinfo/Report_20062015-093441_cpu.csv
-rwxr-xr-x 1 hduser supergroup 14432 2015-10-22 07:13 /user/hive/warehouse/testcpuinfo/Report_22062015-092203_cpu.csv
-rwxr-xr-x 1 hduser supergroup 33652 2015-10-22 07:16 /user/hive/warehouse/testcpuinfo/Report_22092015_025040_cpu.csv
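For context, this is roughly how the additional files were loaded into the existing table (a sketch; the local paths are illustrative, only the file names match the listing above):

```sql
-- Sketch: loading several local CSV files into one existing Hive table.
-- Each LOAD DATA adds another file under the table's warehouse directory.
LOAD DATA LOCAL INPATH '/home/hduser/hive/examples/icc_load/Report_11082015-040236_cpu.csv'
  INTO TABLE testcpuinfo;
LOAD DATA LOCAL INPATH '/home/hduser/hive/examples/icc_load/Report_20062015-093441_cpu.csv'
  INTO TABLE testcpuinfo;
-- ...and likewise for the other two report CSVs.
```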
But then I realised that my aggregate-function queries no longer return any data.
example:
0: jdbc:hive2://192.168.108.133:10000> select min(usercpu),reportname from testcpuinfo group by reportname;
INFO : Number of reduce tasks not specified. Estimated from input data size: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
WARN : Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
INFO : number of splits:1
INFO : Submitting tokens for job: job_local1932765430_0315
INFO : The url to track the job: http://localhost:8080/
INFO : Job running in-process (local Hadoop)
INFO : 2015-10-22 20:36:20,601 Stage-1 map = 100%, reduce = 100%
INFO : Ended Job = job_local1932765430_0315
+-----+-------------+--+
| c0 | reportname |
+-----+-------------+--+
+-----+-------------+--+
No rows selected (1.451 seconds)
0: jdbc:hive2://192.168.108.133:10000>
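In case it helps with diagnosis, these are the kinds of sanity checks I can run next (a sketch only; I have not included their output here):

```sql
-- Does a plain scan see any rows at all after the multi-file load?
SELECT COUNT(*) FROM testcpuinfo;
SELECT * FROM testcpuinfo LIMIT 5;
-- Does usercpu still parse as a number? If each CSV brought its own
-- header row, non-numeric values could make MIN(usercpu) come back NULL.
SELECT usercpu, CAST(usercpu AS DOUBLE) FROM testcpuinfo LIMIT 5;
```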
Can you please help me understand why this is happening?