hive> SELECT userid ,Sequnce ,ActionTime FROM T_BZ_ClientActionLog GROUP BY Sequnce ,ActionTime limit 100;
FAILED: SemanticException [Error 10025]: Line 1:7 Expression not in GROUP BY key 'userid'
Hive requires that userid also appear in the GROUP BY clause.
This differs from MySQL, where a statement written this way is accepted (under MySQL's default, pre-5.7 SQL mode).
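The error itself can also be cleared by simply adding userid to the grouping key, at the cost of changing the grouping granularity. A sketch against the same table (untested here):

```sql
-- Sketch: include userid in the GROUP BY so it is a valid select column.
-- Note this produces one row per (userid, Sequnce, ActionTime) combination,
-- which may be a finer grouping than intended.
SELECT userid, Sequnce, ActionTime
FROM T_BZ_ClientActionLog
GROUP BY userid, Sequnce, ActionTime
LIMIT 100;
```

When the extra grouping column is not wanted, the collect_set approach below keeps the original grouping instead.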
Here is one way to handle it:
hive> SELECT sequnce,actiontime,
collect_set(pagecode),collect_set(actioncode) FROM T_BZ GROUP BY Sequnce ,ActionTime limit 100;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=
Starting Job = job_1407387657227_0043, Tracking URL = http://n1.hadoop:8089/proxy/application_1407387657227_0043/
Kill Command = /app/prog/hadoop/bin/hadoop job -kill job_1407387657227_0043
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2014-08-07 20:07:12,881 Stage-1 map = 0%, reduce = 0%
2014-08-07 20:07:24,192 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 18.84 sec
2014-08-07 20:07:29,347 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 20.71 sec
MapReduce Total cumulative CPU time: 20 seconds 710 msec
Ended Job = job_1407387657227_0043
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 20.71 sec HDFS Read: 96397668 HDFS Write: 6969 SUCCESS
Total MapReduce CPU Time Spent: 20 seconds 710 msec
OK
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:33 [] ["A0001"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:37 ["P001"] ["A0001"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:45 ["P003","P001"] ["A0002","A0001"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:07 ["P003"] ["A0011"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:11 ["P003","P001"] ["A0017","A0001"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:13 ["P001","P002"] ["A0003","A0001"]
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:22 ["P002"] ["A0006"]
As you can see, the result contains a collected set (array) for each group.
If you don't want a set, you can take its first element instead:
hive> SELECT sequnce,actiontime,collect_set(pagecode)[0],collect_set(actioncode)[0] FROM T_BZ GROUP BY Sequnce ,ActionTime limit 100;
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:33 A0001
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:37 P001 A0001
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:20:45 P003 A0002
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:07 P003 A0011
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:11 P003 A0017
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:13 P001 A0003
00015a21-ef6d-4f05-b04e-ffd98fab2922 2014-07-24 01:21:22 P002 A0006
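One caveat: the element order inside a collect_set array is not guaranteed, so `collect_set(...)[0]` picks an arbitrary representative. When any single value per group will do, an ordinary aggregate such as max() is another option; a sketch using the same columns (untested here):

```sql
-- Sketch: max() also collapses each group to one value per column,
-- without depending on collect_set's unspecified element order.
SELECT Sequnce, ActionTime, max(pagecode), max(actioncode)
FROM T_BZ
GROUP BY Sequnce, ActionTime
LIMIT 100;
```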
This result is now consistent with what MySQL would return.
And if you don't want deduplication, you can use collect_list instead; both collect_set and collect_list are built-in Hive aggregate functions (UDAFs).
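To make the difference concrete, the collect_list variant can be written side by side with collect_set (same columns as above; output aliases are illustrative, untested here):

```sql
-- Sketch: collect_list keeps duplicate values within a group,
-- while collect_set removes them.
SELECT Sequnce, ActionTime,
       collect_list(pagecode) AS pages_with_dups,
       collect_set(pagecode)  AS pages_deduped
FROM T_BZ
GROUP BY Sequnce, ActionTime
LIMIT 100;
```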