GoogleCloudPlatform/spark-on-k8s-operator

Name: spark-on-k8s-operator

Owner: Google Cloud Platform

Description: Kubernetes CRD operator for specifying and running Apache Spark applications idiomatically on Kubernetes.

Created: 2018-01-03 17:43:16

Updated: 2018-05-24 01:27:13

Pushed: 2018-05-23 22:58:03

Homepage:

Size: 570

Language: Go


README


This is not an officially supported Google product.

Project Status

Project status: alpha

Spark Operator is still under active development and has not been extensively tested in production environments yet. Backward compatibility of the APIs is not guaranteed for alpha releases.

Customization of Spark pods, e.g., mounting ConfigMaps and PersistentVolumes, is currently experimental and is implemented using a Kubernetes Initializer, an alpha Kubernetes feature that requires a cluster with alpha features enabled. The Initializer can be disabled if there is no need for pod customization or if running an alpha cluster is not desirable. Check out the Quick Start Guide for how to disable the Initializer.
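As a rough illustration, a SparkApplication that asks the Initializer to mount a ConfigMap into the driver pod might look like the sketch below. The configMaps field name and its shape are assumptions based on the experimental customization API and may change between alpha releases; the ConfigMap name and mount path are hypothetical.

```yaml
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: SparkApplication
metadata:
  name: spark-app-with-configmap
spec:
  # ... type, mode, image, main class, etc. omitted for brevity ...
  driver:
    # Assumed experimental field: asks the Initializer to mount the named
    # ConfigMap into the driver pod at the given path.
    configMaps:
      - name: my-app-config
        path: /etc/spark/config
  executor:
    instances: 1
```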

Prerequisites

Spark Operator relies on garbage collection support for custom resources, and optionally on Initializers, both of which are available in Kubernetes 1.8+.

Due to a known bug in Kubernetes 1.9 and earlier, CRD objects with escaped quotes (e.g., spark.ui.port\") in map keys can cause serialization problems in the API server. Please pay extra attention to make sure no offending escaping is present in your SparkApplication CRD objects, particularly if you use a Kubernetes version prior to 1.10.
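As a concrete illustration, the problem shows up in map-typed fields such as spec.sparkConf; a minimal sketch of a safe key versus the kind of key to avoid:

```yaml
spec:
  sparkConf:
    # Fine: a plain map key with no escaped quotes.
    "spark.ui.port": "4045"
    # Problematic on Kubernetes 1.9 and earlier: a map key that ends up
    # containing an escaped quote, e.g. spark.ui.port\", can break
    # serialization in the API server.
```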

Get Started

Get started quickly with the Spark Operator using the Quick Start Guide.

If you are running the Spark Operator on Google Kubernetes Engine and want to use Google Cloud Storage (GCS) and/or BigQuery for reading/writing data, also refer to the GCP guide.
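As a hedged sketch of what that looks like in practice (the bucket and path below are hypothetical, and the connector setup is covered in the GCP guide), an application can point its main application file at a GCS path, provided the image carries the GCS connector:

```yaml
spec:
  # Hypothetical bucket/path; the image must include the GCS connector
  # for gs:// URIs to resolve (see the GCP guide).
  mainApplicationFile: "gs://my-bucket/jars/my-spark-app.jar"
```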

For more information, check the Design, API Specification and detailed User Guide.

Overview

Spark Operator aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. It uses a CustomResourceDefinition (CRD) of SparkApplication objects for specifying, running, and surfacing the status of Spark applications. For a complete reference of the API definition of the SparkApplication CRD, please refer to the API Definition. For details on its design, please refer to the design doc. It requires Spark 2.3 or above, which supports Kubernetes as a native scheduler backend. Among other things, the Spark Operator automates submitting applications on behalf of users and customizing the driver and executor pods (some capabilities are yet to be implemented); a minimal example is sketched below.
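As a minimal sketch (the API version, image tag, and jar path are assumptions modeled on the repository's examples; the API Definition is authoritative), a SparkApplication that runs the SparkPi example might look like:

```yaml
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: gcr.io/spark-operator/spark:v2.3.0
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
  driver:
    cores: 0.1
    memory: 512m
    serviceAccount: spark
  executor:
    cores: 1
    instances: 1
    memory: 512m
```

Once such an object is created (e.g., with kubectl apply), the operator submits the application on the user's behalf and surfaces its status back onto the object.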

To make such automation possible, Spark Operator uses the SparkApplication CRD, a corresponding CRD controller, and an initializer. The CRD controller sets up the environment for an application and submits the application to run on behalf of the user, whereas the initializer handles customization of the Spark pods. The operator also supports running Spark applications on standard cron schedules using the ScheduledSparkApplication CRD and its corresponding CRD controller, as sketched below.
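For example, a ScheduledSparkApplication that runs the same SparkPi application every five minutes might be sketched as follows. Field names such as schedule and template are assumptions based on the CRD's description and may differ in detail; the template carries the same shape as a SparkApplication spec.

```yaml
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-scheduled
spec:
  # Standard cron syntax: run every five minutes.
  schedule: "*/5 * * * *"
  template:
    # Same shape as a SparkApplication spec.
    type: Scala
    mode: cluster
    image: gcr.io/spark-operator/spark:v2.3.0
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
    driver:
      memory: 512m
    executor:
      instances: 1
      memory: 512m
```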

Features

Spark Operator currently supports the following features:

The following features are planned:

Motivations

This approach is completely different from one in which the submission client creates a CRD object. Having CRD objects created and managed externally offers the following benefits:

Additionally, keeping the CRD implementation outside the Spark repository gives us a lot of flexibility in the functionality we can add to the CRD controller, as well as full control over the code review and release process.

