sul-dlss/dlme

Name: dlme

Owner: Stanford University Digital Library

Description: Digital Library of the Middle East Prototype

Created: 2017-08-01 18:05:07.0

Updated: 2018-04-09 03:39:50.0

Pushed: 2018-05-16 18:36:54.0

Homepage: https://spotlight.dlme.clir.org/

Size: 760

Language: Ruby


README


Digital Library of the Middle East

Dataflows

This diagram shows how data is loaded into the application and ends up in the Solr index:

Overview diagram (link to the diagram in Google Drawings)

You can read more about our data and related documentation in our data documentation.

Configuration

The AWS deployment needs to provide the following environment variables:

SOLR_URL
SECRET_KEY_BASE

And these database configuration settings:

database: "<%= ENV['RDS_DB_NAME'] %>"
username: "<%= ENV['RDS_USERNAME'] %>"
password: "<%= ENV['RDS_PASSWORD'] %>"
host: "<%= ENV['RDS_HOSTNAME'] %>"
port: "<%= ENV['RDS_PORT'] %>"
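These settings are ERB-interpolated before Rails parses config/database.yml, which is how the RDS environment variables end up in the connection config. A minimal sketch of that interpolation in plain Ruby (the values here are hypothetical stand-ins for what AWS injects at deploy time):

```ruby
require "erb"
require "yaml"

# Hypothetical values standing in for what AWS injects at deploy time.
ENV["RDS_DB_NAME"]  = "dlme_production"
ENV["RDS_USERNAME"] = "dlme"

template = <<~YAML
  database: "<%= ENV['RDS_DB_NAME'] %>"
  username: "<%= ENV['RDS_USERNAME'] %>"
YAML

# Rails performs the same ERB pass before parsing database.yml.
config = YAML.safe_load(ERB.new(template).result)
puts config["database"] # => "dlme_production"
```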
Development

You can spin up Solr and the Rails server, and populate the Solr index, using these commands:

bundle exec solr_wrapper
bundle exec rails s
Deploying

This project is configured for continuous deployment to AWS at http://spotlight.dlme.clir.org/

The AWS stack can be built using:

aws cloudformation create-stack --stack-name DLME --template-body file://cloudformation/stack.yaml --capabilities CAPABILITY_IAM --parameters file://path/to/some/params.json

After creating the stack, you also need to go into Route 53 and correct the DNS entry for Solr: change the public Elastic IP address to the internal IP (10.0.x.x).

Converting files

All files must first be converted to the intermediate representation (IR) before they can be imported.

Start by getting a personal access token from GitHub (https://github.com/settings/tokens) with the public_repo scope enabled. Put it in an environment variable called SETTINGS__IMPORT__ACCESS_TOKEN (or in settings.local.yml).
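The double-underscore variable name follows the convention of the `config` gem, where SETTINGS__IMPORT__ACCESS_TOKEN overrides the `import.access_token` key from the settings files. A rough sketch of that lookup order in plain Ruby (the file path, key names, and token value are illustrative, not the app's actual code):

```ruby
require "yaml"

# Illustrative fallback: prefer the environment variable, otherwise
# read the token from a local settings file (path is hypothetical).
def import_access_token(settings_file = "config/settings.local.yml")
  env_token = ENV["SETTINGS__IMPORT__ACCESS_TOKEN"]
  return env_token if env_token && !env_token.empty?

  settings = File.exist?(settings_file) ? YAML.safe_load(File.read(settings_file)) : {}
  settings.dig("import", "access_token")
end

ENV["SETTINGS__IMPORT__ACCESS_TOKEN"] = "ghp_example_token"
puts import_access_token # => "ghp_example_token"
```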

Then, run this command (locally on the production machine):

bin/fetch_and_import

This will pull all the MODS files from https://github.com/waynegraham/dlme-metadata/tree/master/maps/records/stanford and all the TEI files from https://github.com/waynegraham/dlme-metadata/tree/master/manuscript/records/penn/schoenberg into the local database. It will launch background jobs to transform them to the JSON IR and load them as DlmeJson resources in the database. At this point they are also indexed into Solr for discovery.
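A rough sketch of the per-record flow described above, in plain Ruby (the class names, record fields, and in-memory "index" are illustrative stand-ins, not the app's actual code):

```ruby
require "json"

# Stand-in for the Solr index.
INDEX = []

# Illustrative record harvested from dlme-metadata (fields are made up).
record = { "id" => "stanford-map-001", "title" => "Map of Cairo" }

# 1. Transform the harvested record into the JSON intermediate representation.
ir = JSON.generate("id" => record["id"], "json" => record)

# 2. "Load" it as a DlmeJson-style resource and index it for discovery.
resource = JSON.parse(ir)
INDEX << resource

puts INDEX.first["id"] # => "stanford-map-001"
```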

If you want to repeat the transformation jobs without refetching the data you may use:

bin/reprocess_harvest <harvest_id>

You can also run traject directly:

bundle exec traject -c config/traject.rb -c lib/traject/mods_config.rb -s source="source of data as set in config/settings" [path to some file]

Example:

bundle exec traject -c config/traject.rb -c lib/traject/fgdc_config.rb -s source='harvard_fgdc' spec/fixtures/fgdc/HARVARD.SDE2.AFRICOVER_EG_RIVERS.fgdc.xml
```
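For orientation, a traject config file like lib/traject/mods_config.rb typically maps XPath expressions to index fields. This is a hedged sketch, not the project's actual config: the field names and XPaths are illustrative, and it assumes traject 3's Nokogiri XML reader.

```ruby
# Illustrative traject config for an XML source (loaded via `traject -c ...`).
settings do
  provide "reader_class_name", "Traject::NokogiriReader"
  provide "nokogiri.namespaces", "mods" => "http://www.loc.gov/mods/v3"
end

# Map XPath expressions to Solr index fields.
to_field "id", extract_xpath("//mods:mods/mods:identifier")
to_field "title", extract_xpath("//mods:mods/mods:titleInfo/mods:title")
```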

This work is supported by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number U24TR002306. This work is solely the responsibility of the creators and does not necessarily represent the official views of the National Institutes of Health.