nprapps/books15

Name: books15

Owner: NPR visuals team

Description: book concierge 2015 edition

Created: 2015-11-04 19:23:37.0

Updated: 2018-05-18 18:51:42.0

Pushed: 2018-05-18 18:51:39.0

Homepage: null

Size: 868

Language: JavaScript


README

Books Concierge (2015 version)

What is this?

A snappy-looking presentation of NPR contributors' favorite books of the year.

This code is open source under the MIT license. See LICENSE for complete details.

Assumptions

The following things are assumed to be true in this documentation.

For more details on the technology stack used with the app-template, see our development environment blog post.

What's in here?

The project contains the following folders and important files:

Bootstrap the project

Node.js is required for the static asset pipeline. If you don't already have it, get it like this:

brew install node
curl https://npmjs.org/install.sh | sh

Then bootstrap the project:

cd books14
mkvirtualenv --no-site-packages books14
pip install -r requirements.txt
npm install
fab update

Problems installing requirements? You may need to run the pip command as `ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install -r requirements.txt` to work around an issue with OS X.

Hide project secrets

Project secrets should never be stored in app_config.py or anywhere else in the repository. They will be leaked to the client if you do. Instead, always store passwords, keys, etc. in environment variables and document that they are needed here in the README.
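As an illustration of that pattern, here is a small hypothetical helper (not part of the app-template; the function and variable names are made up) that reads a required secret from the environment and fails loudly when it is missing:

```python
import os

def get_secret(name):
    """Read a required secret from an environment variable.

    Raises KeyError with a helpful message if the variable is unset,
    so a missing credential fails loudly instead of silently defaulting.
    """
    value = os.environ.get(name)
    if value is None:
        raise KeyError('%s is not set; export it in your shell first.' % name)
    return value

# In app_config.py you might then write (name is hypothetical):
# MY_SERVICE_API_KEY = get_secret('MY_SERVICE_API_KEY')
```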

Save media assets

Large media assets (images, videos, audio) are synced with an Amazon S3 bucket specified in app_config.ASSETS_S3_BUCKET in a folder with the name of the project. (This bucket should not be the same as any of your app_config.PRODUCTION_S3_BUCKETS or app_config.STAGING_S3_BUCKETS.) This allows everyone who works on the project to access these assets without storing them in the repo, giving us faster clone times and the ability to open source our work.

Syncing these assets requires running a couple different commands at the right times. When you create new assets or make changes to current assets that need to get uploaded to the server, run `fab assets.sync`. This will do a few things:

Unfortunately, there is no automatic way to know when a file has been intentionally deleted from the server or your local directory. When you want to simultaneously remove a file from the server and your local environment (i.e. it is not needed in the project any longer), run `fab assets.rm:"www/assets/file_name_here.jpg"`.

Adding a page to the site

A site can have any number of rendered pages, each with a corresponding template and view. To create a new one:

Run the project

A flask app is used to run the project locally. It will automatically recompile templates and assets on demand.

workon $PROJECT_SLUG
python app.py

Visit localhost:8000 in your browser.

COPY editing

This app uses a Google Spreadsheet for a simple key/value store that provides an editing workflow.

View the sample copy spreadsheet.

This document is specified in app_config with the variable COPY_GOOGLE_DOC_KEY. To use your own spreadsheet, change this value to reflect your document's key (found in the Google Docs URL after &key=).
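For illustration, the key can be pulled out of such a URL with a few lines of string handling. This helper is hypothetical and not part of the app-template:

```python
def google_doc_key(url):
    """Extract the value of the 'key' query parameter from a Google Docs URL.

    Returns None when the URL carries no key. Illustrative only; real
    Google Docs URLs vary by document type and era.
    """
    if '?' not in url:
        return None
    query = url.split('?', 1)[1]
    for param in query.split('&'):
        if param.startswith('key='):
            return param.split('=', 1)[1]
    return None
```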

A few things to note:

The app template is outfitted with a few fab utility functions that make pulling changes and updating your local data easy.

To update the latest document, simply run:

fab copytext.update

Note: copytext.update runs automatically whenever fab render is called.

At the template level, Jinja maintains a COPY object that you can use to access your values in the templates. Using our example sheet, to use the byline key in templates/index.html:

{{ COPY.attribution.byline }}

More generally, you can access anything defined in your Google Doc like so:

{{ COPY.sheet_name.key_name }}

You may also access rows using iterators. In this case, the column headers of the spreadsheet become keys and the row cells values. For example:

{% for row in COPY.sheet_name %}
{{ row.column_one_header }}
{{ row.column_two_header }}
{% endfor %}
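Conceptually, COPY is a two-level lookup: sheets by name, then keys (or rows) within a sheet. A minimal Python sketch of that key/value behavior, assuming a sheet with key and value columns (the real implementation is the copytext library the app-template uses; this class is only a toy stand-in):

```python
class Sheet(object):
    """Toy stand-in for a copytext sheet: key/value access plus row iteration."""

    def __init__(self, rows):
        self._rows = rows  # list of dicts keyed by column header

    def __getattr__(self, key):
        # Key/value sheets: look the key up in the 'key' column
        # and return the matching 'value' cell.
        for row in self._rows:
            if row.get('key') == key:
                return row.get('value')
        raise AttributeError(key)

    def __iter__(self):
        # Iterating yields each row, so templates can loop over sheets.
        return iter(self._rows)
```

With that, `sheet.byline` returns the value stored beside the byline key, and `for row in sheet` yields each row with column headers as keys, mirroring the two template usages above.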

When naming keys in the COPY document, please attempt to group them by common prefixes and order them by appearance on the page. For instance:

title
byline
about_header
about_body
about_url
download_label
download_url

Load books and covers

To run the app, you'll need to load books and covers from a Google Spreadsheet. First, see DATA_GOOGLE_DOC_KEY in app_config.py.

Then run the loader:

fab data.load_books
fab data.load_images

Alternatively, you can update copy and social media along with books with a single command:

fab update

Arbitrary Google Docs

Sometimes, our projects need to read data from a Google Doc that's not involved with the COPY rig. In this case, we've got a class for you to download and parse an arbitrary Google Doc to a CSV.

This solution will download the uncached version of the document, unlike those methods which use the “publish to the Web” functionality baked into Google Docs. Published versions can take up to 15 minutes to update!

First, export a valid Google username (email address) and password to your environment.

export APPS_GOOGLE_EMAIL=foo@gmail.com
export APPS_GOOGLE_PASS=MyPaSsW0rd1!

Then, you can load up the GoogleDoc class in etc/gdocs.py to handle the task of authenticating and downloading your Google Doc.

Here's an example of what you might do:

import csv

from etc.gdoc import GoogleDoc

def read_my_google_doc():
    # Describe the document to fetch: key, sheet (gid), format and filename.
    doc = {}
    doc['key'] = '0ArVJ2rZZnZpDdEFxUlY5eDBDN1NCSG55ZXNvTnlyWnc'
    doc['gid'] = '4'
    doc['file_format'] = 'csv'
    doc['file_name'] = 'gdoc_%s.%s' % (doc['key'], doc['file_format'])

    # Authenticate and download the document to the data/ directory.
    g = GoogleDoc(**doc)
    g.get_auth()
    g.get_document()

    # Read the downloaded CSV back ('rb', not 'wb', since we are reading).
    with open('data/%s' % doc['file_name'], 'rb') as readfile:
        csv_file = list(csv.DictReader(readfile))

    for line_number, row in enumerate(csv_file):
        print line_number, row

read_my_google_doc()

Google documents will be downloaded to data/gdoc.csv by default.

You can pass the class many keyword arguments if you'd like; here's what you can change:

See etc/gdocs.py for more documentation.

Run Python tests

Python unit tests are stored in the tests directory. Run them with fab tests.
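As a hypothetical example of the shape those tests take (the helper and test case below are illustrative, not from this repo):

```python
import unittest

def slugify(title):
    """Toy helper under test: lowercase a title and join its words with hyphens."""
    return '-'.join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify('My Favorite Book'), 'my-favorite-book')
```

fab tests collects modules like this from the tests directory; a single file can also be run by hand with python -m unittest.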

Run Javascript tests

With the project running, visit localhost:8000/test/SpecRunner.html.

Compile static assets

Compile LESS to CSS, compile javascript templates to Javascript and minify all assets:

workon books14
fab render

(This is done automatically whenever you deploy to S3.)

Test the rendered app

If you want to test the app once you've rendered it out, just use the Python webserver:

cd www
python -m SimpleHTTPServer

Deploy to S3

fab staging master deploy

If you have already loaded books and cover images, you can skip this time-consuming step when deploying by running:

fab staging master deploy:quick

Analytics

The Google Analytics events tracked in this application are:

|Category|Action|Label|Value|Notes|
|--------|------|-----|-----|-----|
|best-books-2015|tweet|location|||
|best-books-2015|facebook|location|||
|best-books-2015|pinterest|location|||
|best-books-2015|email|location|||
|best-books-2015|open-share-discuss||||
|best-books-2015|close-share-discuss||||
|best-books-2015|summary-copied||||
|best-books-2015|view-review|book_slug|||
|best-books-2015|navigate|next or previous|||
|best-books-2015|toggle-view|list or grid|||
|best-books-2015|clear-tags||||
|best-books-2015|selected-tags|comma separated list of tags|||
|best-books-2015|library|book_slug||Book slug of library click|
|best-books-2015|amazon|book_slug||Book slug of amazon click|
|best-books-2015|ibooks|book_slug||Book slug of ibooks click|
|best-books-2015|indiebound|book_slug||Book slug of indiebound click|

Note: The library, amazon, ibooks, and indiebound events, which track link clicks from individual reviews, were added after the project was deployed. They should only be used for analysis that starts on or after the 12-5-2014 launch.

