Name: data-making-guidelines
Owner: datamade
Description: :blue_book: Making Data, the DataMade Way
Created: 2015-04-13 20:46:34.0
Updated: 2018-05-21 03:36:34.0
Pushed: 2018-04-17 17:03:01.0
Size: 60
Language: HTML
Making Data, the DataMade Way
This is DataMade's guide to extracting, transforming, and loading (ETL) data using Make, a common command-line build utility.
ETL refers to the general process of:
- taking raw source data (“Extract”)
- doing some stuff to get the data in shape, possibly involving intermediate derived files (“Transform”)
- producing final output in a more usable form, for “Loading” into something that consumes the data - be it an app, a system, or a visualization
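The three steps above map naturally onto Makefile rules, where each file depends on the file it is derived from. A minimal sketch - the filenames, the source URL, and the csvkit commands (`csvgrep`, `csvcut`) are illustrative assumptions, not taken from this guide:

```make
# "Load": produce the final output consumed by an app or visualization
data/output.csv : data/intermediate.csv
	csvcut -c name,total $< > $@

# "Transform": derive an intermediate file from the raw data
data/intermediate.csv : data/raw.csv
	csvgrep -c status -m active $< > $@

# "Extract": fetch the raw source data
data/raw.csv :
	curl -o $@ https://example.com/source.csv
```

Running `make data/output.csv` walks the chain backward: Make fetches the raw file only if it is missing, regenerates the intermediate only if the raw file is newer, then builds the final output.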
Having a standard ETL workflow helps us make sure that our work is clean, consistent, and easy to reproduce. By following these guidelines you'll be able to keep your work up to date and share it with the world in a standard format - all with as few headaches as possible.
Basic Principles
These five principles inform all of our data work:
- Never destroy data - treat source data as immutable, and show your work when you modify it
- Be able to deterministically produce the final data with one command
- Write as little custom code as possible
- Use standard tools whenever possible
- Keep source data under version control
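A Makefile skeleton that follows these principles might look like the sketch below (directory names and the `sed` transformation are hypothetical): one command, `make all`, deterministically rebuilds everything, and `clean` removes only derived files, never the source data.

```make
.PHONY : all clean

all : finished/output.csv

# Remove derived files only; raw/ is immutable and under version control
clean :
	rm -rf finished/

# Every transformation is a recorded recipe, not a manual step
finished/output.csv : raw/source.csv
	mkdir -p finished
	sed '1d' $< > $@
```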
Unsure how to follow these principles? Read on!
The Guide
- Make & Makefile Overview
- Why Use Make/Makefiles?
- Makefile 101
- Makefile 201 - Some Fancy Things Built Into Make
- ETL Styleguide
- Makefile Best Practices
- Variables
- Processors
- Standard Toolkit
- ETL Workflow Directory Structure
- Code examples
- Further reading