h2oai/h2o-3

Name: h2o-3

Owner: H2O.ai

Description: Open Source Fast Scalable Machine Learning Platform For Smarter Applications (Deep Learning, Gradient Boosting, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), ...)

Created: 2014-03-03 16:08:07.0

Updated: 2018-01-18 17:04:19.0

Pushed: 2018-01-18 23:42:52.0

Homepage: http://h2o.ai

Size: 368365

Language: Java


README

H2O

Join the chat at https://gitter.im/h2oai/h2o-3

H2O is an in-memory platform for distributed, scalable machine learning. H2O uses familiar interfaces like R, Python, Scala, Java, JSON and the Flow notebook/web interface, and works seamlessly with big data technologies like Hadoop and Spark. H2O provides implementations of many popular algorithms such as GBM, Random Forest, Deep Neural Networks, Word2Vec and Stacked Ensembles. H2O is extensible so that developers can add data transformations and custom algorithms of their choice and access them through all of those clients.

Data collection is easy. Decision making is hard. H2O makes it fast and easy to derive insights from your data through faster and better predictive modeling. H2O allows online scoring and modeling in a single platform.

H2O-3 (this repository) is the third incarnation of H2O, and the successor to H2O-2.

Table of Contents

1. Downloading H2O-3

While most of this README is written for developers who do their own builds, most H2O users just download and use a pre-built version. If you are a Python or R user, the easiest way to install H2O is via PyPI or Anaconda (for Python) or CRAN (for R):

Python

pip install h2o

R

install.packages("h2o")

For the latest stable, nightly, Hadoop (or Spark / Sparkling Water) releases, or the stand-alone H2O jar, please visit: https://h2o.ai/download

More info on downloading & installing H2O is available in the H2O User Guide.

2. Open Source Resources

Most people interact with three or four primary open source resources: GitHub (which you've already found), JIRA (for bug reports and issue tracking), Stack Overflow for H2O code/software-specific questions, and h2ostream (a Google Group / email discussion forum) for questions not suitable for Stack Overflow. There is also a Gitter H2O developer chat group; however, for archival purposes and to maximize accessibility, we'd prefer that standard H2O Q&A be conducted on Stack Overflow.

2.1 Issue Tracking and Feature Requests

(Note: There is only one issue tracking system for the project. GitHub issues are not enabled; you must use JIRA.)

You can browse and create new issues in our open source JIRA: http://jira.h2o.ai

2.2 List of H2O Resources

3. Using H2O-3 Artifacts

Every nightly build publishes R, Python, Java, and Scala artifacts to a build-specific repository. In particular, you can find Java artifacts in the maven/repo directory.

Here is an example snippet of a gradle build file using h2o-3 as a dependency. Replace x, y, z, and nnnn with valid numbers.

// h2o-3 dependency information
def h2oBranch = 'master'
def h2oBuildNumber = 'nnnn'
def h2oProjectVersion = "x.y.z.${h2oBuildNumber}"

repositories {
  // h2o-3 dependencies
  maven {
    url "https://s3.amazonaws.com/h2o-release/h2o-3/${h2oBranch}/${h2oBuildNumber}/maven/repo/"
  }
}

dependencies {
  compile "ai.h2o:h2o-core:${h2oProjectVersion}"
  compile "ai.h2o:h2o-algos:${h2oProjectVersion}"
  compile "ai.h2o:h2o-web:${h2oProjectVersion}"
  compile "ai.h2o:h2o-app:${h2oProjectVersion}"
}

Refer to the latest H2O-3 bleeding edge nightly build page for information about installing nightly build artifacts.

Refer to the h2o-droplets GitHub repository for a working example of how to use Java artifacts with gradle.

Note: Stable H2O-3 artifacts are periodically published to Maven Central (click here to search) but may substantially lag behind H2O-3 Bleeding Edge nightly builds.

4. Building H2O-3

Getting started with H2O development requires JDK 1.7, Node.js, Gradle and Python. We use the Gradle wrapper (called gradlew) to ensure up-to-date local versions of Gradle and other dependencies are installed in your development directory.
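As a quick sanity check (a minimal sketch, assuming the tools are already on your PATH; run the gradlew line from the repository root once it has been cloned), you can verify the prerequisite versions:

java -version
node --version
python --version
./gradlew --version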

4.1. Before building

Installation of H2O requires a properly set up Python environment and the following packages:

grip
colorama
future
tabulate
requests
wheel

To install these packages, you can use pip or conda (for example, see the snippet below). If you have trouble installing these packages on Windows, please follow the Setup on Windows section of this guide.

(Note: It is recommended to use a virtual environment such as VirtualEnv to install all packages.)
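For example, using pip (a minimal sketch based on the package list above; a conda install of the same packages works as well):

pip install grip colorama future tabulate requests wheel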

4.2. Building from the command line (Quick Start)

To build H2O from the repository, perform the following steps.

Recipe 1: Clone fresh, build, skip tests, and run H2O
# Build H2O
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew build -x test

# You may encounter problems: e.g. npm missing. Install it:
brew install npm

# Start H2O
java -jar build/h2o.jar

# Point browser to http://localhost:54321
Recipe 2: Clone fresh, build, and run tests (requires a working install of R)
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build

Notes:

  • Running tests starts five test JVMs that form an H2O cluster and requires at least 8GB of RAM (preferably 16GB of RAM).
  • Running ./gradlew syncRPackages is supported on Windows, OS X, and Linux, and is strongly recommended but not required. ./gradlew syncRPackages ensures a complete and consistent environment with pre-approved versions of the packages required for tests and builds. The packages can be installed manually, but we recommend setting an ENV variable and using ./gradlew syncRPackages. To set the ENV variable, use the following format (where ${WORKSPACE} can be any path):
mkdir -p ${WORKSPACE}/Rlibrary
export R_LIBS_USER=${WORKSPACE}/Rlibrary
Recipe 3: Pull, clean, build, and run tests
git pull
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew clean
./gradlew build
Recipe 4: Just building the docs
./gradlew clean && ./gradlew build -x test && (export DO_FAST=1; ./gradlew dist)
open target/docs-website/h2o-docs/index.html

4.3. Setup on Windows
Step 1: Download and install WinPython.

From the command line, validate that python points to the newly installed package by running which python (or sudo which python). Update the PATH environment variable with the WinPython path.

Step 2: Install required Python packages:
pip install grip
pip install tabulate
pip install wheel
Step 3: Install JDK

Install Java 1.7 and add the appropriate directory C:\Program Files\Java\jdk1.7.0_65\bin with java.exe to PATH in Environment Variables. To make sure the command prompt is detecting the correct Java version, run:

javac -version

The CLASSPATH variable also needs to be set to the lib subfolder of the JDK:

CLASSPATH=/<path>/<to>/<jdk>/lib
Step 4. Install Node.js

Install Node.js and add the installed directory C:\Program Files\nodejs (which must include node.exe and npm.cmd) to PATH, if not already prepended.

Step 5. Install R, the required packages, and Rtools:

Install R and add the bin directory to your PATH if not already included.

Install the following R packages: RCurl, jsonlite, statmod, devtools, roxygen2, testthat.

To install these packages from within an R session:

pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
  if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}
Note that libcurl is required for installation of the RCurl R package.

Finally, install Rtools, which is a collection of command line tools to facilitate R development on Windows.

NOTE: During Rtools installation, do not install Cygwin.dll.

Step 6. Install Cygwin

NOTE: During installation of Cygwin, deselect the Python packages to avoid a conflict with the Python.org package.

Step 6b. Validate Cygwin

If Cygwin is already installed, remove the Python packages or ensure that Native Python is before Cygwin in the PATH variable.

Step 7. Update or validate the Windows PATH variable to include R, Java JDK, Cygwin.

Step 8. Git Clone h2o-3

If you don't already have a Git client, please install one. The default one can be found here http://git-scm.com/downloads. Make sure that command prompt support is enabled before the installation.

Download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 9. Run the top-level gradle build:
cd h2o-3
./gradlew.bat build

If you encounter errors, run again with --stacktrace for more information about missing dependencies.

4.4. Setup on OS X

If you don't have Homebrew, we recommend installing it. It makes package management for OS X easy.

Step 1. Install JDK

Install Java 1.7. To make sure the command prompt is detecting the correct Java version, run:

javac -version
Step 2. Install Node.js:

Using Homebrew:

brew install node

Otherwise, install from the NodeJS website.

Step 3. Install R and the required packages:

Install R and add the bin directory to your PATH if not already included.

Install the following R packages: RCurl, jsonlite, statmod, devtools, roxygen2, testthat.

To install these packages from within an R session:

pkgs <- c("RCurl", "jsonlite", "statmod", "devtools", "roxygen2", "testthat")
for (pkg in pkgs) {
  if (! (pkg %in% rownames(installed.packages()))) install.packages(pkg)
}
Note that libcurl is required for installation of the RCurl R package.

Step 4. Git Clone h2o-3

OS X should already have Git installed. To download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 5. Run the top-level gradle build:
cd h2o-3
./gradlew build

If you encounter errors, run again with --stacktrace for more information about missing dependencies.

4.5. Setup on Ubuntu 14.04
Step 1. Install Node.js
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
sudo apt-get install -y nodejs
Step 2. Install JDK:

Install Java 1.7. Installation instructions can be found here: JDK installation. To make sure the command prompt is detecting the correct Java version, run:

javac -version
Step 3. Install R and the required packages:

Installation instructions can be found here: R installation. Click "Download R for Linux", then click "Ubuntu", and follow the given instructions.

To install the required packages, follow the same instructions as for OS X above.

Note: If the process fails to install RStudio Server on Linux, run one of the following:

sudo apt-get install libcurl4-openssl-dev

or

sudo apt-get install libcurl4-gnutls-dev

Step 4. Git Clone h2o-3

If you don't already have a Git client:

sudo apt-get install git

Download and update the h2o-3 source code:

git clone https://github.com/h2oai/h2o-3
Step 5. Run the top-level gradle build:
cd h2o-3
./gradlew build

If you encounter errors, run again using --stacktrace for more information about missing dependencies.

Make sure that you are not running as root, since bower will reject such a run.

4.6. Setup on Ubuntu 13.10
Step 1. Install Node.js
curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
sudo apt-get install -y nodejs
Steps 2-4. Follow steps 2-4 for Ubuntu 14.04 (above)
4.7. Setting up your preferred IDE environment

For users of IntelliJ IDEA, generate project files with:

./gradlew idea

For users of Eclipse, generate project files with:

./gradlew eclipse
4.8. Setup on CentOS 7
cd /opt
sudo wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz"

sudo tar xzf jdk-7u79-linux-x64.tar.gz
cd jdk1.7.0_79

sudo alternatives --install /usr/bin/java java /opt/jdk1.7.0_79/bin/java 2

sudo alternatives --install /usr/bin/jar jar /opt/jdk1.7.0_79/bin/jar 2
sudo alternatives --install /usr/bin/javac javac /opt/jdk1.7.0_79/bin/javac 2
sudo alternatives --set jar /opt/jdk1.7.0_79/bin/jar
sudo alternatives --set javac /opt/jdk1.7.0_79/bin/javac

cd /opt

sudo wget http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
sudo rpm -ivh epel-release-7-5.noarch.rpm

sudo echo "multilib_policy=best" >> /etc/yum.conf
sudo yum -y update

sudo yum -y install R R-devel git python-pip openssl-devel libxml2-devel libcurl-devel gcc gcc-c++ make openssl-devel kernel-devel texlive texinfo texlive-latex-fonts libX11-devel mesa-libGL-devel mesa-libGL nodejs npm python-devel numpy scipy python-pandas

sudo pip install scikit-learn grip tabulate statsmodels wheel

mkdir ~/Rlibrary
export JAVA_HOME=/opt/jdk1.7.0_79
export JRE_HOME=/opt/jdk1.7.0_79/jre
export PATH=$PATH:/opt/jdk1.7.0_79/bin:/opt/jdk1.7.0_79/jre/bin
export R_LIBS_USER=~/Rlibrary

# install local R packages
R -e 'install.packages(c("RCurl","jsonlite","statmod","devtools","roxygen2","testthat"), dependencies=TRUE, repos="http://cran.rstudio.com/")'

cd ~
git clone https://github.com/h2oai/h2o-3.git
cd h2o-3

# Build H2O
./gradlew syncSmalldata
./gradlew syncRPackages
./gradlew build -x test

5. Launching H2O after Building

To start the H2O cluster locally, execute the following on the command line:

java -jar build/h2o.jar

A list of available start-up JVM and H2O options (e.g. -Xmx, -nthreads, -ip) is available in the H2O User Guide.
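For example (an illustrative sketch only; the memory size, thread count, and cluster name below are arbitrary values to adapt to your environment):

java -Xmx4g -jar build/h2o.jar -name my_h2o_cluster -nthreads 4 -port 54321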

6. Building H2O on Hadoop

Pre-built H2O-on-Hadoop zip files are available on the download page. Each Hadoop distribution version has a separate zip file in h2o-3.

To build H2O with Hadoop support yourself, first install sphinx for Python: pip install sphinx. Then start the build by entering the following from the top-level h2o-3 directory:

(export BUILD_HADOOP=1; ./gradlew build -x test)
./gradlew dist

This will create a directory called 'target' and generate zip files there. Note that BUILD_HADOOP is the default behavior when the username is jenkins (refer to settings.gradle); otherwise you have to request it, as shown above.

Adding support for a new version of Hadoop

In the h2o-hadoop directory, each Hadoop version has a build directory for the driver and an assembly directory for the fatjar.

You need to:

  1. Add a new driver directory and assembly directory (each with a build.gradle file) in h2o-hadoop
  2. Add these new projects to h2o-3/settings.gradle
  3. Add the new Hadoop version to HADOOP_VERSIONS in make-dist.sh
  4. Add the new Hadoop version to the list in h2o-dist/buildinfo.json
Secure user impersonation

Hadoop supports secure user impersonation through its Java API. A kerberos-authenticated user can be allowed to proxy any username that meets specified criteria entered in the NameNode's core-site.xml file. This impersonation only applies to interactions with the Hadoop API or the APIs of Hadoop-related services that support it (this is not the same as switching to that user on the machine of origin).

Setting up secure user impersonation (for h2o):

  1. Create or find an id to use as proxy which has limited-to-no access to HDFS or related services; the proxy user need only be used to impersonate a user
  2. (Required if not using h2odriver) If you are not using the driver (e.g. you wrote your own code against h2o's API using Hadoop), make the necessary code changes to impersonate users (see org.apache.hadoop.security.UserGroupInformation)
  3. In either Ambari/Cloudera Manager or directly in the NameNode's core-site.xml file, add two or three properties for the user we wish to use as a proxy (replace <proxyusername> with the simple user name, not the fully-qualified principal name).
    • hadoop.proxyuser.<proxyusername>.hosts: the hosts the proxy user is allowed to perform impersonated actions on behalf of a valid user from
    • hadoop.proxyuser.<proxyusername>.groups: the groups an impersonated user must belong to for impersonation to work with that proxy user
    • hadoop.proxyuser.<proxyusername>.users: the users a proxy user is allowed to impersonate
    • Example:

         <property>
           <name>hadoop.proxyuser.myproxyuser.hosts</name>
           <value>host1,host2</value>
         </property>
         <property>
           <name>hadoop.proxyuser.myproxyuser.groups</name>
           <value>group1,group2</value>
         </property>
         <property>
           <name>hadoop.proxyuser.myproxyuser.users</name>
           <value>user1,user2</value>
         </property>
  4. Restart core services such as HDFS & YARN for the changes to take effect

Impersonated HDFS actions can be viewed in the hdfs audit log ('auth:PROXY' should appear in the ugi= field in entries where this is applicable). YARN similarly should show 'auth:PROXY' somewhere in the Resource Manager UI.
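For example, on a NameNode host (a hedged sketch; the audit log path below is an assumption and varies by distribution and logging configuration):

grep 'auth:PROXY' /var/log/hadoop-hdfs/hdfs-audit.log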

To use secure impersonation with h2o's Hadoop driver:

Before this is attempted, see Risks with impersonation, below.

When using the h2odriver (e.g. when running with hadoop jar ...), specify -principal <proxy user kerberos principal>, -keytab <proxy user keytab path>, and -run_as_user <hadoop username to impersonate>, in addition to any other arguments needed. If the configuration was successful, the proxy user will log in and impersonate the -run_as_user as long as that user is allowed by either the users or groups configuration property (configured above); this is enforced by HDFS & YARN, not h2o's code. The driver effectively sets its security context as the impersonated user so all supported Hadoop actions will be performed as that user (e.g. YARN, HDFS APIs support securely impersonated users, but others may not).
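A sketch of such an invocation (the jar name, principal, keytab path, impersonated user, and YARN sizing flags below are illustrative placeholders):

hadoop jar h2odriver.jar \
    -principal myproxyuser@EXAMPLE.COM \
    -keytab /path/to/myproxyuser.keytab \
    -run_as_user bob \
    -nodes 3 -mapperXmx 6g -output hdfsOutputDir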

Precautions to take when leveraging secure impersonation

Risks with secure impersonation
Debugging HDFS

These are the required steps to debug HDFS in IDEA as a standalone H2O process.

Debugging H2O on Hadoop as a hadoop jar MapReduce job is difficult. However, you can relatively easily tweak the gradle settings for the project so that H2OApp has HDFS as a dependency. Here are the steps:

  1. Make the following changes to gradle build files below
    • Change the hadoop-client version in h2o-persist-hdfs to the desired version
    • Add h2o-persist-hdfs as a dependency to h2o-app
  2. Close IDEA
  3. ./gradlew cleanIdea
  4. ./gradlew idea
  5. Re-open IDEA
  6. Run or debug H2OApp, and you will now be able to read from HDFS inside the IDE debugger

h2o-persist-hdfs is normally only a dependency of the assembly modules, since those are not used by any downstream modules. We want the final module to define its own version of HDFS if any is desired.

Note this example is for MapR 4, which requires the additional org.json dependency to work properly.

$ git diff
diff --git a/h2o-app/build.gradle b/h2o-app/build.gradle
index af3b929..097af85 100644
--- a/h2o-app/build.gradle
+++ b/h2o-app/build.gradle
@@ -8,5 +8,6 @@ dependencies {
   compile project(":h2o-algos")
   compile project(":h2o-core")
   compile project(":h2o-genmodel")
+  compile project(":h2o-persist-hdfs")
 }

diff --git a/h2o-persist-hdfs/build.gradle b/h2o-persist-hdfs/build.gradle
index 41b96b2..6368ea9 100644
--- a/h2o-persist-hdfs/build.gradle
+++ b/h2o-persist-hdfs/build.gradle
@@ -2,5 +2,6 @@ description = "H2O Persist HDFS"

 dependencies {
   compile project(":h2o-core")
-  compile("org.apache.hadoop:hadoop-client:2.0.0-cdh4.3.0")
+  compile("org.apache.hadoop:hadoop-client:2.4.1-mapr-1408")
+  compile("org.json:org.json:chargebee-1.0")
 }
7. Sparkling Water

Sparkling Water combines two open-source technologies: Apache Spark and the H2O Machine Learning platform. It makes H2O's library of advanced algorithms, including Deep Learning, GLM, GBM, K-Means, and Distributed Random Forest, accessible from Spark workflows. Spark users can select the best features from either platform to meet their machine learning needs. Users can combine Spark's RDD API and Spark MLlib with H2O's machine learning algorithms, or use H2O independently of Spark for the model building process and post-process the results in Spark.

Sparkling Water Resources:

8. Documentation
Documentation Homepage

The main H2O documentation is the H2O User Guide. Visit http://docs.h2o.ai for the top-level introduction to documentation on H2O projects.

Generate REST API documentation

To generate the REST API documentation, use the following commands:

cd ~/h2o-3
cd py
python ./generate_rest_api_docs.py  # to generate Markdown only
python ./generate_rest_api_docs.py --generate_html  --github_user GITHUB_USER --github_password GITHUB_PASSWORD # to generate Markdown and HTML

The default location for the generated documentation is build/docs/REST.

If the build fails, try gradlew clean, then git clean -f.

Bleeding edge build documentation

Documentation for each bleeding edge nightly build is available on the nightly build page.

9. Citing H2O

If you use H2O as part of your workflow in a publication, please cite your H2O resource(s) using the following BibTeX entry:

H2O Software
@Manual{h2o_package_or_module,
    title = {package_or_module_title},
    author = {H2O.ai},
    year = {year},
    month = {month},
    note = {version_information},
    url = {resource_url},
}

Formatted H2O Software citation examples:

H2O Booklets

H2O algorithm booklets are available at the Documentation Homepage.

@Manual{h2o_booklet_name,
    title = {booklet_title},
    author = {list_of_authors},
    year = {year},
    month = {month},
    url = {link_url},
}

Formatted booklet citation examples:

Arora, A., Candel, A., Lanford, J., LeDell, E., and Parmar, V. (Oct. 2016). Deep Learning with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/DeepLearningBooklet.pdf.

Click, C., Lanford, J., Malohlava, M., Parmar, V., and Roark, H. (Oct. 2016). Gradient Boosted Models with H2O. http://docs.h2o.ai/h2o/latest-stable/h2o-docs/booklets/GBMBooklet.pdf.

10. Community

H2O has been built by a great many contributors over the years, both within H2O.ai (the company) and the greater open source community. You can begin to contribute to H2O by answering Stack Overflow questions or filing bug reports. Please join us!

Team & Committers
SriSatish Ambati
Cliff Click
Tom Kraljevic
Tomas Nykodym
Michal Malohlava
Kevin Normoyle
Spencer Aiello
Anqi Fu
Nidhi Mehta
Arno Candel
Josephine Wang
Amy Wang
Max Schloemer
Ray Peck
Prithvi Prabhu
Brandon Hill
Jeff Gambera
Ariel Rao
Viraj Parmar
Kendall Harris
Anand Avati
Jessica Lanford
Alex Tellez
Allison Washburn
Amy Wang
Erik Eckstrand
Neeraja Madabhushi
Sebastian Vidrio
Ben Sabrin
Matt Dowle
Mark Landry
Erin LeDell
Oleg Rogynskyy
Nick Martin
Nancy Jordan
Nishant Kalonia
Nadine Hussami
Jeff Cramer
Stacie Spreitzer
Vinod Iyengar
Charlene Windom
Parag Sanghavi
Navdeep Gill
Lauren DiPerna
Anmol Bal
Mark Chan
Nick Karpov
Avni Wadhwa
Ashrith Barthur
Karen Hayrapetyan
Jo-fai Chow
Dmitry Larko
Branden Murray
Jakub Hava
Wen Phan
Magnus Stensmo
Pasha Stetsenko
Angela Bartz
Mateusz Dymczyk
Micah Stubbs
Ivy Wang
Terone Ward
Leland Wilkinson
Wendy Wong
Nikhil Shekhar

Advisors

Scientific Advisory Council

Stephen Boyd
Rob Tibshirani
Trevor Hastie

Systems, Data, FileSystems and Hadoop

Doug Lea
Chris Pouliot
Dhruba Borthakur

Investors
Jishnu Bhattacharjee, Nexus Venture Partners
Anand Babu Periasamy
Anand Rajaraman
Ash Bhardwaj
Rakesh Mathur
Michael Marks
Egbert Bierman
Rajesh Ambati

This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.