spotify/dataproc-java-submitter

Name: dataproc-java-submitter

Owner: Spotify

Description: A library for submitting Hadoop jobs to Google Cloud Dataproc from Java

Created: 2016-09-18 23:48:00.0

Updated: 2018-01-27 06:20:46.0

Pushed: 2016-10-28 15:48:20.0

Homepage: null

Size: 40

Language: Java


README

dataproc-java-submitter


A small Java library for submitting Hadoop jobs to Google Cloud Dataproc from a JVM.

Why?

In many real-world Hadoop deployments, jobs are parameterized to some degree. Parameters can be anything from job configuration to input paths. These arguments are commonly resolved in a workflow tool that eventually puts them on a command line passed to the Hadoop job. On the job side, the arguments then have to be parsed again using various tools that are more or less standard.

However, if the arguments are already being resolved in a JVM, dropping down to a shell and invoking a command line is complicated and roundabout. It is also very limiting in terms of what can be passed to the job. It is not uncommon to take more structured data, store it in some serialized format, stage the files, and have custom logic in the job to deserialize it.

This library aims to more seamlessly bridge between a local JVM instance and the Hadoop application entrypoint.

Usage
Maven dependency
<dependency>
  <groupId>com.spotify</groupId>
  <artifactId>dataproc-java-submitter</artifactId>
  <version><!-- use the latest version from Maven Central --></version>
</dependency>
Example usage
String project = "gcp-project-id";
String cluster = "dataproc-cluster-id";

DataprocHadoopRunner hadoopRunner = DataprocHadoopRunner.builder(project, cluster).build();
DataprocLambdaRunner lambdaRunner = DataprocLambdaRunner.forDataproc(hadoopRunner);

// Use any structured type that is Java Serializable
StructuredJobArguments arguments = resolveArgumentsInLocalJvm();

lambdaRunner.runOnCluster(() -> {

  // This lambda, including its closure, will run on the Dataproc cluster
  System.out.println("Running on the cluster, with " + arguments.inputPaths());

  return 42; // rfc: is it worth supporting a return value from the job?
});

The DataprocLambdaRunner will take care of configuring the Dataproc job so that it can run your lambda function. It will scan your local classpath and ensure that the loaded jars are staged and configured for the Dataproc job. It will also take care of serializing, staging and deserializing the lambda closure that is to be invoked on the cluster.

Note that anything referenced from the lambda has to implement java.io.Serializable.
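As a minimal sketch of what that requirement means in practice, the hypothetical JobArguments class below (not part of this library) is a plain serializable arguments holder, round-tripped through Java serialization the way the lambda runner serializes and deserializes a captured closure:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.List;

// Hypothetical arguments holder; any type captured by the lambda must be Serializable.
public class JobArguments implements Serializable {
  private static final long serialVersionUID = 1L;

  private final List<String> inputPaths;

  public JobArguments(List<String> inputPaths) {
    this.inputPaths = inputPaths;
  }

  public List<String> inputPaths() {
    return inputPaths;
  }

  // Round-trip through Java serialization, analogous to what happens to the closure.
  public static void main(String[] args) throws Exception {
    JobArguments original =
        new JobArguments(List.of("gs://bucket/part-0", "gs://bucket/part-1"));

    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(original);
    }

    try (ObjectInputStream in =
        new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
      JobArguments copy = (JobArguments) in.readObject();
      System.out.println(copy.inputPaths());
    }
  }
}
```

If a captured type is not serializable, the submission fails at serialization time on the local side, before anything reaches the cluster.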

Low level usage

This library can also be used to configure the Dataproc job directly.

String project = "gcp-project-id";
String cluster = "dataproc-cluster-id";

DataprocHadoopRunner hadoopRunner = DataprocHadoopRunner.builder(project, cluster).build();

Job job = Job.builder()
    .setMainClass(...)
    .setArgs(...)
    .setProperties(...)
    .setShippedJars(...)
    .setShippedFiles(...)
    .createJob();

hadoopRunner.submit(job);
