dask/dask-tutorial

Name: dask-tutorial

Owner: dask

Description: Dask tutorial

Created: 2015-07-16 13:56:54.0

Updated: 2018-05-24 21:28:50.0

Pushed: 2018-05-22 19:55:20.0

Homepage: https://dask.pydata.org

Size: 48983

Language: Jupyter Notebook


README

Dask Tutorial

This tutorial was last given at SciPy 2017 in Austin, Texas. A video is available online.

Dask provides multi-core execution on larger-than-memory datasets.

We can think of Dask at a high level and a low level.

Different users operate at different levels, but it is useful to understand both. This tutorial alternates between high-level use of dask.array and dask.dataframe (even sections) and low-level use of dask graphs and schedulers (odd sections).

Prepare

You should clone this repository

git clone https://github.com/dask/dask-tutorial

and then install necessary packages.

a) Install into an existing environment

You will need the following core libraries

conda install numpy pandas h5py Pillow matplotlib scipy toolz pytables snakeviz dask distributed

You may find the following libraries helpful for some exercises

pip install graphviz cachey
b) Create a new environment

In the repo directory

conda env create -f environment.yml 

and then activate the environment. On OSX/Linux

source activate dask-tutorial

on Windows

activate dask-tutorial
c) Use Dockerfile

You can build a docker image out of the provided Dockerfile.

Graphviz on Windows

Windows users can install graphviz as follows

  1. Install Graphviz from http://www.graphviz.org/Download_windows.php
  2. Add C:\Program Files (x86)\Graphviz2.38\bin to the PATH

Alternatively, use the following conda commands (the first installs graphviz itself, and the second installs the Python bindings for graphviz):

  1. conda install -c conda-forge graphviz
  2. conda install -c conda-forge python-graphviz
Prepare artificial data

From the repo directory

python prep.py
Launch notebook

From the repo directory

jupyter notebook 
Outline
  1. Overview - Dask's place in the universe

  2. Foundations - low-level Dask and how it does what it does

  3. Bag - the first high-level collection: a generalized iterator for use with a functional programming style and to clean messy data.

  4. Distributed - Dask's scheduler for clusters, with details of how to view the UI.

  5. Array - blocked numpy-like functionality with a collection of numpy arrays spread across your cluster.

  6. Advanced Distributed - further details on distributed computing, including how to debug.

  7. Dataframe - parallelized operations on many pandas dataframes spread across your cluster.

  8. Dataframe Storage - efficient ways to read and write dataframes to disk.
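The Array and Dataframe sections above rest on one core idea: blocked algorithms, which split a larger-than-memory dataset into chunks, compute on each chunk, and combine the partial results. A minimal plain-Python sketch of a blocked sum (no dask required; chunk size chosen arbitrarily for illustration):

```python
# Blocked-sum illustration: the idea behind dask.array is to
# operate on many small chunks rather than one huge array.
data = list(range(1_000_000))   # stand-in for a large dataset
chunk_size = 100_000

# Per-chunk computation: each partial sum fits easily in memory
partial_sums = [
    sum(data[i:i + chunk_size])
    for i in range(0, len(data), chunk_size)
]

# Combine step: reduce the partial results to a final answer
total = sum(partial_sums)
print(total)  # 499999500000
```

Dask generalizes this pattern: dask.array applies it to numpy arrays and dask.dataframe to pandas dataframes, with a scheduler executing the per-chunk tasks in parallel.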


This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.