Name: hive-jq-udtf
Owner: CyberAgent
Description: jq for Apache Hive
Created: 2017-08-18 09:48:06.0
Updated: 2017-12-23 06:43:55.0
Pushed: 2017-08-18 12:21:25.0
Size: 17
Language: Java
jq for Hive
Build the jar with Maven and install target/hive-jq-udtf-$VERSION.jar into your hive.aux.jars.path.

```sh
mvn clean package -DskipTests
```
Alternatively, you can download the pre-built jar at Maven Central.
CREATE FUNCTION

```sql
CREATE FUNCTION jq3 AS 'jp.co.cyberagent.hive.udtf.jsonquery.v3.JsonQueryUDTF';
```
You can choose any name for the function, but we recommend the jq<version> naming style, where <version> is the version number in the package name. We increment the version number whenever we change something that breaks compatibility with older versions. This allows multiple versions of this plugin to coexist, e.g. while migrating to a newer version.
See Deploying Jars for User Defined Functions and User Defined SerDes section of the official Hive documentation for more deployment details.
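As a sketch of an alternative, per-session deployment (the HDFS path below is a hypothetical placeholder, not one from this project), Hive can also load the jar at function-creation time:

```sql
-- Hypothetical HDFS path; replace with wherever you uploaded the jar.
CREATE FUNCTION jq3 AS 'jp.co.cyberagent.hive.udtf.jsonquery.v3.JsonQueryUDTF'
  USING JAR 'hdfs:///apps/hive/jars/hive-jq-udtf.jar';
```

This avoids editing hive.aux.jars.path, at the cost of distributing the jar to each job from HDFS.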
There are two variants (i.e. overloads) of the UDTF:

- jq(JSON, JQ, TYPE)
- jq(JSON, JQ, FIELD_1:TYPE_1, ..., FIELD_N:TYPE_N)
The UDTF parses the JSON text and feeds it to the JQ filter, which in turn produces zero or more results. The filter results are still JSON, so the UDTF converts each result to a row suitable for Hive. This final conversion differs slightly depending on which variant you use.
Note that JQ, TYPE, and FIELD_N:TYPE_N must be constant strings (or constant expressions that evaluate to strings).
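As a hedged illustration of the constant-expression rule (this exact query is not from the project's own examples), the filter argument can be built from string literals, which Hive folds into a constant:

```sql
-- concat(...) over literals evaluates to a constant string,
-- so it should be accepted as the JQ argument.
SELECT jq('{"a": {"b": 7}}', concat('.a', '.b'), 'int');
```

A filter built from a non-constant column value, by contrast, would be rejected.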
jq(JSON, JQ, TYPE)

This variant converts each JQ result to a Hive row containing a single column of type TYPE.
Extracting a single integer from JSON:

```sql
SELECT jq('{"region": "Asia", "timezones": [{"name": "Tokyo", "offset": 540}, {"name": "Taipei", "offset": 480}, {"name": "Kamchatka", "offset": 720}]}',
       '.timezones[]|select(.name == "Tokyo").offset',
       'int');
```

```
+------+
| col1 |
+------+
| 540  |
+------+
```
JQ is allowed to produce more than one result, and the UDTF also supports more complex types.
```sql
SELECT jq('{"region": "Asia", "timezones": [{"name": "Tokyo", "offset": 540}, {"name": "Taipei", "offset": 480}, {"name": "Kamchatka", "offset": 720}]}',
       '.region as $region | .timezones[] | {name: ($region + "/" + .name), offset}',
       'struct<name:string,offset:int>');
```

```
+----------------------------------------+
| col1                                   |
+----------------------------------------+
| {"name":"Asia/Tokyo","offset":540}     |
| {"name":"Asia/Taipei","offset":480}    |
| {"name":"Asia/Kamchatka","offset":720} |
+----------------------------------------+
```
jq(JSON, JQ, FIELD_1:TYPE_1, ..., FIELD_N:TYPE_N)

This variant can produce rows with more than one column (FIELD_1, ..., FIELD_N). The fields (FIELD_1, ..., FIELD_N) of each JQ result are individually converted to their respective Hive types (TYPE_1, ..., TYPE_N) and then assembled into a Hive row.
Transforming a JSON into Hive rows with multiple columns:

```sql
SELECT jq('{"region": "Asia", "timezones": [{"name": "Tokyo", "offset": 540}, {"name": "Taipei", "offset": 480}, {"name": "Kamchatka", "offset": 720}]}',
       '.region as $region | .timezones[] | {name: ($region + "/" + .name), offset}',
       'name:string', 'offset:int');
```

```
+----------------+--------+
| name           | offset |
+----------------+--------+
| Asia/Tokyo     | 540    |
| Asia/Taipei    | 480    |
| Asia/Kamchatka | 720    |
+----------------+--------+
```
Lateral view is used in conjunction with user-defined table generating functions such as explode(). […] A lateral view first applies the UDTF to each row of base table and then joins resulting output rows to the input rows to form a virtual table having the supplied table alias. — Hive Language Manual, Lateral View
```sql
-- Prepare `regions` table for LATERAL VIEW example
CREATE TABLE regions (region STRING, timezones STRING);
INSERT INTO regions (region, timezones) VALUES ('Asia', '[{"name":"Tokyo","offset":540},{"name":"Taipei","offset":480},{"name":"Kamchatka","offset":720}]');
```

```sql
SELECT r.region, tz.name, tz.offset FROM regions r LATERAL VIEW jq(r.timezones, '.[]', 'name:string', 'offset:int') tz;
```
```
+----------+-----------+-----------+
| r.region | tz.name   | tz.offset |
+----------+-----------+-----------+
| Asia     | Tokyo     | 540       |
| Asia     | Taipei    | 480       |
| Asia     | Kamchatka | 720       |
+----------+-----------+-----------+
```
If the UDTF fails to parse a JSON, the jq input (.) becomes null and the $error object is set to something like this:

```json
{
  "message": "Unrecognized token 'string': was expecting ('true', 'false' or 'null')\n at [Source: \"corrupt \"string; line: 1, column: 33]",
  "class": "jp.co.cyberagent.hive.udtf.jsonquery.v3.shade.com.fasterxml.jackson.core.JsonParseException",
  "input": "\"corrupt \"string"
}
```
To substitute something in case of a corrupt JSON:

```sql
SELECT jq('"corrupt "string', 'if $error then "INVALID" else . end', 'string');
```

```
+---------+
| col1    |
+---------+
| INVALID |
+---------+
```
To skip a corrupt JSON:

```sql
SELECT jq('"corrupt "string', 'if $error then empty else . end', 'string');
```

```
+------+
| col1 |
+------+
+------+
```
To abort a query on a corrupt JSON:

```sql
SELECT jq('"corrupt "string', 'if $error then error($error.message) else . end', 'string');
```

```
Error: java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: jq returned an error "Unrecognized token 'string': was expecting ('true', 'false' or 'null') at [Source: "corrupt "string; line: 1, column: 33]" from input: "corrupt "string (state=,code=0)
```
Supported TYPEs:

- Primitive types: int, bigint, float, double, boolean, string
- Complex types: struct<...>, array<T>, map<string, T>
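As a hedged sketch of the complex types (this particular query is not from the project's own examples), a single JQ result object can be split into array and map columns using the multi-column variant:

```sql
-- The jq filter '{tags, counts}' keeps both keys of the input object;
-- each field is then converted to the declared Hive complex type.
SELECT jq('{"tags": ["jq", "hive"], "counts": {"a": 1, "b": 2}}',
       '{tags, counts}',
       'tags:array<string>', 'counts:map<string,int>');
```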
Copyright (c) CyberAgent, Inc. All Rights Reserved.