lucidworks/searchhub

Name: searchhub

Owner: Lucidworks

Description: Fusion demo app searching open-source project data from the Apache Software Foundation

Created: 2016-05-11 20:36:19.0

Updated: 2018-04-21 03:41:53.0

Pushed: 2017-10-23 16:18:50.0

Homepage: null

Size: 16996

Language: Python


README

Lucidworks Search Hub

Search Hub is an application built on top of Lucidworks Fusion.
It is designed to showcase Fusion's search, machine learning and analytics capabilities, as well as to act as a community service for a large number of Apache Software Foundation projects. It is the basis of several talks by Lucidworks employees (e.g. http://www.slideshare.net/lucidworks/data-science-with-solr-and-spark). A production version of this software, hosted by [Lucidworks](http://www.lucidworks.com), is available at http://searchhub.lucidworks.com.

Search Hub contains all you need to download and run your own community search site. It comes with prebuilt definitions to crawl a large number of ASF projects, including their mailing lists, websites, wikis, JIRAs and Github repositories. These prebuilt definitions may also serve as templates for adding additional projects. The project also comes with a built-in client (based on Lucidworks View).

This application uses Snowplow for tracking on the website. In particular, it tracks:

  1. Page visits

  2. Time on page (via page pings)

  3. Location

  4. Clicks on documents and facets

  5. Searches

Search Hub is open source under the Apache License, although note that Lucidworks Fusion itself is not open source.

Requirements

You'll need the following software installed to get started.

Get Started

In ~/.gradle/gradle.properties, add/set:

searchhubFusionHome=/PATH/TO/FUSION/INSTALL

The `searchhubFusionHome` property tells the build where to deploy the custom plugins that the Search Hub project needs (namely, a Mail Parsing Stage).

If you haven't already, clone this repository and change into the directory of the clone.

git clone https://github.com/LucidWorks/searchhub
cd searchhub

Run the Installer to install NPM, Bower and Python dependencies

./gradlew install

(Re)start your Fusion instance (see Requirements above; this needs to be Fusion 2.4.x). This is important because `deployLibs` (a task invoked by the install task) installs the MBoxParsingStage into Fusion.

Build the UI: This will copy the client files into python/server. NOTE: This is deprecated.

./gradlew buildUI

If you prefer using Gulp, you can also run `gulp build`.

Set up Python Flask:

source venv/bin/activate
cd python
cp sample-config.py config.py
# Fill in config.py as appropriate. You will need Twitter keys to make Twitter work. You will need a Github key to make Github work.
venv/bin/python bootstrap.py
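
For orientation, a filled-in config.py might look roughly like the sketch below. Only `ENABLE_SCHEDULES` is confirmed by this README; the other key names are hypothetical, so rely on `sample-config.py` for the real ones:

# Hypothetical config.py sketch -- see sample-config.py for the actual keys.
FUSION_URL = "http://localhost:8764"   # hypothetical name for the Fusion endpoint
ENABLE_SCHEDULES = False               # real option; see the note on schedules below
TWITTER_CONSUMER_KEY = "..."           # hypothetical names for the Twitter keys
TWITTER_CONSUMER_SECRET = "..."
GITHUB_KEY = "..."                     # hypothetical name for the Github key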

NOTE: Before you can successfully run the bootstrap, you must create a `lucidfind` user in the Fusion admin panel. The `bootstrap.py` step creates a number of objects in Fusion, including collections, pipelines, schedules and datasources. By default, the startup script does not start the crawlers, nor does it enable the schedules. If you wish to start them, visit the Fusion Admin UI or do one of the following:

To run the data sources once, upon creation (note: this can be quite expensive, as it will start all datasources):

cd python
venv/bin/python bootstrap.py --start_datasources

To enable the schedules, edit your config.py and set `ENABLE_SCHEDULES=True`, then rerun `python bootstrap.py`.
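
For context, bootstrap.py creates these objects through Fusion's REST API. Below is a minimal sketch of one such call, assuming Fusion 2.4's default port and `api/apollo` base path; the collection name and credentials are placeholders, and the real logic lives in bootstrap.py:

import requests

FUSION_API = "http://localhost:8764/api/apollo"  # assumed Fusion 2.4 defaults
AUTH = ("lucidfind", "YOUR_PASSWORD")            # the user created in the admin panel

# Create (or update) a collection; pipelines, schedules and datasources
# go through similar endpoints.
resp = requests.put(FUSION_API + "/collections/lucidfind", auth=AUTH, json={})
resp.raise_for_status()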

Running Search Hub
Local, Non-Production Mode using Werkzeug

Run Flask (from the python directory):

cd python
venv/bin/python run.py

Browse to http://localhost:5000

If you make changes to the UI, you will either need to rebuild the UI part (npm build) or run:

gulp watch
Production
Docker

The easiest way to spin up the Search Hub Client and Python app is by using Docker and the Dockerfile in the Python directory.

This container is built on httpd and mod_wsgi.

To build a container, do the following steps:

  1. Edit your FUSION_CONFIG.js to point to the IP of your container. You can also do this afterwards by attaching to the running container and editing it.
  2. Build the SearchHub UI (see above) so that the Client assets are properly installed in the Python `server` directory
  3. Create a `config-docker.py` file that contains the configuration required to connect to your Fusion instance. Note that this Docker container does not run Fusion itself.
  4. Point your browser at http://host:8000/ where host is the IP for your Docker container.

Some other helpful commands:
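
For example, assuming the image is built from the python directory and tagged `searchhub` (the tag, container name and config path here are illustrative):

cd python
docker build -t searchhub .
docker run -d --name searchhub -p 8000:8000 searchhub
docker exec -it searchhub /bin/bash   # attach, e.g. to edit FUSION_CONFIG.js
docker logs -f searchhub              # tail the app logs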

WSGI Compliant Server

See docker.sh in the Home directory for how to build and run mod_wsgi_express in a Docker container.
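
If you want to try mod_wsgi-express outside of Docker, an invocation might look roughly like the sketch below; the `wsgi.py` entry point is a hypothetical shim exposing the Flask app as `application`, so check docker.sh for the real flags:

venv/bin/pip install mod_wsgi
venv/bin/mod_wsgi-express start-server wsgi.py --port 8000 --processes 2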

Scaling

Lucidworks' production instance is deployed with the Solr Scale Toolkit (aka SSTK) using a public/private VPC setup.
The public facing Docker application (i.e. the Client Application below) sits in a public subnet with port 80 exposed. Everything else is in a private subnet and the public subnet can only reach the private subnet via port 8764.

The commands used to deploy Fusion using SSTK are as follows:

Do note that, because of the private subnet, the machine you run SSTK from needs access to that subnet, so we typically use a proxy node that is locked down and has all of our tools installed on it.

The Client Application

The Client Application is an extension of Lucidworks View and thus relies on similar build and layout mechanisms and structures. It is an Angular app and leverages FoundationJS. We have extended it to use the Snowplow Javascript Tracker for capturing user interactions. All of these interactions are fed through the Flask middle tier and then on to Fusion for use by our clickstream and machine learning capabilities.
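
As an illustration of that flow, a thin Flask route forwarding a tracker event to Fusion's signals endpoint might look like the sketch below. The route name, payload shape and endpoint are assumptions based on Fusion 2.x REST conventions, not the actual Search Hub code:

from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
# Assumed Fusion 2.x signals endpoint and placeholder credentials.
FUSION_SIGNALS = "http://localhost:8764/api/apollo/signals/lucidfind"

@app.route("/signals", methods=["POST"])
def forward_signal():
    # Wrap the raw Snowplow event as a Fusion signal and pass it along.
    event = request.get_json(force=True)
    signal = [{"type": event.get("event", "click"), "params": event}]
    resp = requests.post(FUSION_SIGNALS, json=signal, auth=("lucidfind", "YOUR_PASSWORD"))
    return jsonify({"status": resp.status_code})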

Configuration

To configure the client application, change the settings in FUSION_CONFIG.js. See the View docs for more details, or read the comments in the config file itself.

Extending

Pull Requests are welcome for new projects, new configurations and other new extensions.

Project Layout

The Search Hub project consists of 3 main development areas, plus build infrastructure:

Client

Written in Javascript, using AngularJS and Foundation, the Client is located in the `client` directory. Its build is a bit different from most JS builds: it copies Lucidworks View from the node_modules download area into a temporary `build` directory, copies the Search Hub client code into that same directory, builds the result, and moves it to the Flask application serving area (`python/server`). We are working on ways to improve how View is extended, so this approach, while viable for now, may change. Our goal is to have most of the Client UI be driven by View itself, with very little extension in Search Hub.
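
Conceptually, the client build boils down to a sequence like the following; the paths and package name are illustrative, and the real steps live in the Gradle and Gulp tasks:

mkdir -p build
cp -r node_modules/lucidworks-view/* build/   # start from stock View
cp -r client/* build/                         # overlay the Search Hub client code
(cd build && gulp build)                      # build the combined app
cp -r build/dist/* python/server/             # hand the result to Flask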

Python

The `python` directory contains the Flask application and acts as the middle tier between the client and Fusion. Most of the work in the application is initiated by either the `bootstrap.py` file or the `run.py` file. The former is responsible for using the configurations in `python/fusion_config` and `python/project_config` to, as the name implies, bootstrap Fusion with datasources, pipeline definitions, schedules and whatever else is needed to make sure Fusion has the appropriate data necessary to function. The latter file (`run.py`) is a Flask app that takes care of serving the Flask application. It primarily consists of routing information as well as a thin proxy to Fusion.

Most of the Python work is defined in the `python/server` directory. This directory and its children define how Flask talks to Fusion, and also define some template helpers for creating various datasources in Fusion. A good starting place for learning more is the `fusion.py` file in `python/server/backends`.

Fusion Plugins

The `searchhub-fusion-plugins` directory contains Java and Scala code for extending and/or utilizing Fusion's backend capabilities. On the Java side, the two main functions are:

  1. A Mail Parsing Stage that is responsible for extracting pertinent information out of Mail messages (e.g. thread ids, to/from)
  2. A Mail downloader. Since we don't want to tax Apache Software Foundation resources directly when crawling (they have a banning mechanism), we have set up an httpd mod_mbox mirror. The mail downloader is responsible for retrieving the daily mbox files. If you wish to have a local mirror for your own purposes, you can use this class to get your own mbox files (see the sketch after this list).
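
For illustration, here is a minimal Python sketch of pulling a month of mbox archives from a mod_mbox mirror; the mirror host is the hosted instance mentioned later in this README, the URL pattern follows mod_mbox conventions, and the list name and month are placeholders:

import requests

MIRROR = "http://asfmail.lucidworks.io/mod_mbox"  # hosted mirror, see below

def fetch_mbox(list_name, yyyymm, dest):
    # e.g. http://asfmail.lucidworks.io/mod_mbox/lucene-dev/201605.mbox
    url = "%s/%s/%s.mbox" % (MIRROR, list_name, yyyymm)
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)

fetch_mbox("lucene-dev", "201605", "lucene-dev-201605.mbox")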

On the Scala side, there are a number of Spark utilities that show how to leverage Lucene analysis in Spark and run common Spark ML tasks like LDA and k-Means, plus some code for correlating email messages based on message ids. See Grant Ingersoll's talk at the Dallas Data Science meetup for details. To learn more on the Scala side, start with the `SparkShellHelpers.scala` file.

The Build

The build is primarily driven by Gradle and Gulp. Gradle defines, per the Get Started section above, all the tasks needed to run Search Hub. On the client side, however, it simply invokes npm or Gulp to do the Javascript build. To learn more about the build, see `build.gradle`.

Adding your own Project to Crawl

To add another project, you need to do a few things:

  1. In $FUSION_HOME/python/project_config, create/copy/edit a project configuration file. See accumulo.json as an example (a rough sketch follows this list).
  2. In $FUSION_HOME/searchhub-fusion-plugins/src/main/resources, edit the mailing_lists.csv to add your project.
  3. If you are adding more mailing lists, you will need to either crawl the ASF's mail archives site (please be polite when doing so) or set up an httpd mod_mbox instance like the one we host at http://asfmail.lucidworks.io. If you submit a pull request against this project with your mailing_lists.csv changes, we will consider adding it to our hosted version.
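
For orientation only, a project configuration file might look something like the hypothetical sketch below; the actual schema may differ, so copy accumulo.json rather than this:

{
  "name": "accumulo",
  "website": "http://accumulo.apache.org",
  "jira": "ACCUMULO",
  "mailing_lists": ["dev", "user"]
}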
