nprapps/austin

Name: austin

Owner: NPR visuals team

Description: The Austin 100

Created: 2015-01-26 15:50:48.0

Updated: 2018-05-24 02:52:40.0

Pushed: 2018-05-24 02:52:39.0

Homepage: http://apps.npr.org/austin/

Size: 1047

Language: JavaScript


README

austin

What is this?

Put on your headphones and listen to 100 of NPR Music's favorite songs from SXSW 2015.

This code is open source under the MIT license. See LICENSE for complete details.

Assumptions

The following things are assumed to be true in this documentation.

For more details on the technology stack used with the app-template, see our development environment blog post.

What's in here?

The project contains the following folders and important files:

Bootstrap the project

Node.js is required for the static asset pipeline. If you don't already have it, get it like this:

brew install node
curl https://npmjs.org/install.sh | sh

Then bootstrap the project:

cd austin
mkvirtualenv austin
pip install -r requirements.txt
npm install
fab update

Problems installing requirements? You may need to run the pip command as `ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install -r requirements.txt` to work around an issue with OS X.

Hide project secrets

Project secrets should never be stored in app_config.py or anywhere else in the repository. They will be leaked to the client if you do. Instead, always store passwords, keys, etc. in environment variables and document that they are needed here in the README.
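The pattern can be sketched in a few lines; the variable name `MY_PROJECT_SECRET` is hypothetical, standing in for whatever keys your project actually needs:

```python
import os

# Hypothetical secret name for illustration; real keys (API tokens,
# passwords) follow the same pattern and never appear in the repository.
MY_PROJECT_SECRET = os.environ.get('MY_PROJECT_SECRET')

if MY_PROJECT_SECRET is None:
    # Warn loudly in development instead of baking in a default value.
    print('Warning: MY_PROJECT_SECRET is not set in the environment')
```

Set the variable in your shell profile (e.g. `export MY_PROJECT_SECRET=...`) so it is available to both the local app and fab tasks.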

Save media assets

Large media assets (images, videos, audio) are synced with an Amazon S3 bucket specified in app_config.ASSETS_S3_BUCKET in a folder with the name of the project. (This bucket should not be the same as any of your app_config.PRODUCTION_S3_BUCKETS or app_config.STAGING_S3_BUCKETS.) This allows everyone who works on the project to access these assets without storing them in the repo, giving us faster clone times and the ability to open source our work.

Syncing these assets requires running a couple different commands at the right times. When you create new assets or make changes to current assets that need to get uploaded to the server, run `fab assets.sync`. This will do a few things:

Unfortunately, there is no automatic way to know when a file has been intentionally deleted from the server or your local directory. When you want to simultaneously remove a file from the server and your local environment (i.e., it is no longer needed in the project), run `fab assets.rm:"www/assets/file_name_here.jpg"`.

Adding a page to the site

A site can have any number of rendered pages, each with a corresponding template and view. To create a new one:

Run the project

A Flask app is used to run the project locally. It will automatically recompile templates and assets on demand.

workon $PROJECT_SLUG
fab app

Visit localhost:8000 in your browser.

COPY editing

IMPORTANT NOTE: This project relies on an outdated method to access content in Google Spreadsheets. For now, the connection has been disabled (see update() in fabfile/__init__.py), and the project instead pulls from a spreadsheet stored in www/assets/copy.xlsx. Run fab assets.sync to download this file (and other media files) from the assets rig.

This app uses a Google Spreadsheet for a simple key/value store that provides an editing workflow.

View the sample copy spreadsheet.

This document is specified in app_config with the variable COPY_GOOGLE_DOC_KEY. To use your own spreadsheet, change this value to reflect your document's key (found in the Google Docs URL after &key=).
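In app_config.py this is a single assignment; the value below is a placeholder, not a real document key:

```python
# app_config.py (excerpt). Replace the placeholder with your own
# spreadsheet's key, copied from the Google Docs URL after &key=.
COPY_GOOGLE_DOC_KEY = 'your-spreadsheet-key-here'
```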

A few things to note:

The app template is outfitted with a few fab utility functions that make pulling changes and updating your local data easy.

To update the latest document, simply run:

fab text.update

Note: text.update runs automatically whenever fab render is called.

At the template level, Jinja maintains a COPY object that you can use to access your values in the templates. Using our example sheet, to use the byline key in templates/index.html:

{{ COPY.attribution.byline }}

More generally, you can access anything defined in your Google Doc like so:

{{ COPY.sheet_name.key_name }}

You may also access rows using iterators. In this case, the column headers of the spreadsheet become keys and the row cells values. For example:

{% for row in COPY.sheet_name %}
{{ row.column_one_header }}
{{ row.column_two_header }}
{% endfor %}
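The row iteration above can be tried outside the app with a few lines of Jinja2. This sketch fakes the COPY object with a plain dict; the sheet and column names are stand-ins for whatever is in your copy spreadsheet:

```python
from jinja2 import Template

# Stand-in for the COPY object: a mapping of sheet names to lists of rows,
# where each row is keyed by its column header.
COPY = {
    'attribution': [
        {'column_one_header': 'Jane Doe', 'column_two_header': 'Reporter'},
        {'column_one_header': 'John Roe', 'column_two_header': 'Editor'},
    ]
}

template = Template(
    '{% for row in COPY.attribution %}'
    '{{ row.column_one_header }}: {{ row.column_two_header }}\n'
    '{% endfor %}'
)

# Jinja falls back from attribute access to item lookup, so COPY.attribution
# and row.column_one_header both resolve against the dicts above.
print(template.render(COPY=COPY))
```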

When naming keys in the COPY document, please attempt to group them by common prefixes and order them by appearance on the page. For instance:

title
byline
about_header
about_body
about_url
download_label
download_url

Arbitrary Google Docs

Sometimes, our projects need to read data from a Google Doc that's not involved with the COPY rig. In this case, we've got a class for you to download and parse an arbitrary Google Doc to a CSV.

This solution will download the uncached version of the document, unlike those methods which use the "publish to the Web" functionality baked into Google Docs. Published versions can take up to 15 minutes to update!

First, export a valid Google username (email address) and password to your environment.

export APPS_GOOGLE_EMAIL=foo@gmail.com
export APPS_GOOGLE_PASS=MyPaSsW0rd1!

Then, you can load up the GoogleDoc class in etc/gdocs.py to handle the task of authenticating and downloading your Google Doc.

Here's an example of what you might do:

import csv

from etc.gdoc import GoogleDoc

def read_my_google_doc():
    doc = {}
    doc['key'] = '0ArVJ2rZZnZpDdEFxUlY5eDBDN1NCSG55ZXNvTnlyWnc'
    doc['gid'] = '4'
    doc['file_format'] = 'csv'
    doc['file_name'] = 'gdoc_%s.%s' % (doc['key'], doc['file_format'])

    g = GoogleDoc(**doc)
    g.get_auth()
    g.get_document()

    # Open for reading ('rb'), since the document has already been
    # downloaded to data/ by get_document().
    with open('data/%s' % doc['file_name'], 'rb') as readfile:
        csv_file = list(csv.DictReader(readfile))

    for line_number, row in enumerate(csv_file):
        print line_number, row

read_my_google_doc()

Google documents will be downloaded to data/gdoc.csv by default.

You can pass the class many keyword arguments if you'd like; here's what you can change:

See etc/gdocs.py for more documentation.

Run Python tests

Python unit tests are stored in the tests directory. Run them with `fab tests`.
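A test module in that style might look like the following sketch; `slugify` is a hypothetical helper standing in for real project code:

```python
import unittest

def slugify(text):
    # Hypothetical helper under test, standing in for project code.
    return text.strip().lower().replace(' ', '-')

class SlugifyTestCase(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify('The Austin 100'), 'the-austin-100')

    def test_strips_whitespace(self):
        self.assertEqual(slugify('  SXSW 2015 '), 'sxsw-2015')
```

The test runner discovers TestCase classes like this one automatically when they live under the tests directory.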

Run Javascript tests

With the project running, visit localhost:8000/test/SpecRunner.html.

Compile static assets

Compile LESS to CSS, compile JavaScript templates to JavaScript, and minify all assets:

workon austin
fab render

(This is done automatically whenever you deploy to S3.)

Test the rendered app

If you want to test the app once you've rendered it out, just use the Python webserver:

cd www
python -m SimpleHTTPServer

Deploy to S3

fab staging master deploy

Deploy to EC2

You can deploy to EC2 for a variety of reasons. We cover two cases: Running a dynamic web application (public_app.py) and executing cron jobs (crontab).

Servers capable of running the app can be setup using our servers project.

For running a Web application:

For running cron jobs:

You can configure your EC2 instance to both run Web services and execute cron jobs; just set both environment variables in the fabfile.

Install cron jobs

Cron jobs are defined in the file crontab. Each task should use the cron.sh shim to ensure the project's virtualenv is properly activated prior to execution. For example:

* * * * * ubuntu bash /home/ubuntu/apps/austin/repository/cron.sh fab $DEPLOYMENT_TARGET cron_jobs.test

To install your crontab set INSTALL_CRONTAB to True in app_config.py. Cron jobs will be automatically installed each time you deploy to EC2.

The cron jobs themselves should be defined in fabfile/cron_jobs.py whenever possible.

Install web services

Web services are configured in the confs/ folder.

Running fab servers.setup will deploy your confs if you have set DEPLOY_TO_SERVERS and DEPLOY_WEB_SERVICES both to True at the top of app_config.py.

To check that these files are being properly rendered, you can render them locally and see the results in the confs/rendered/ directory.

fab servers.render_confs

You can also deploy only configuration files by running (normally this is invoked by deploy):

fab servers.deploy_confs

Run a remote fab command

Sometimes it makes sense to run a fabric command on the server, for instance, when you need to render using a production database. You can do this with the fabcast fabric command. For example:

fab staging master servers.fabcast:deploy

If any of the commands you run themselves require executing on the server, the server will SSH into itself to run them.

Analytics

The Google Analytics events tracked in this application are:

|Category|Action|Label|Value|
|--------|------|-----|-----|
|austin|tweet|location||
|austin|facebook|location||
|austin|email|location||
|austin|open-share-discuss|||
|austin|close-share-discuss|||
|austin|summary-copied|||
|austin|fullscreen-start|||
|austin|fullscreen-stop|||
|austin|chromecast-start|||
|austin|chromecast-stop|||
|austin|chromecast-ready|||
|austin|begin|||
|austin|song-skip|currentSong['artist'] + ' - ' + currentSong['title']||
|austin|song-back|previousSong['artist'] + ' - ' + previousSong['title']||
|austin|song-favorite|favSong['artist'] + ' - ' + favSong['title']||
|austin|song-unfavorite|favSong['artist'] + ' - ' + favSong['title']||
|austin|song-show-details|song['artist'] + ' - ' + song['title']||
|austin|max-song-index|maxSongIndex||
|austin|song-download|song['artist'] + ' - ' + song['title']||
|austin|amazon-click|song['artist'] + ' - ' + song['title']||
|austin|itunes-click|song['artist'] + ' - ' + song['title']||
|austin|rdio-click|song['artist'] + ' - ' + song['title']||
|austin|spotify-click|song['artist'] + ' - ' + song['title']||
|austin|full-list|||

Note: song-back is fired both when clicking the back button and when playing a song in the history list by clicking on it.
