Name: FluxJS.jl
Owner: Flux
Description: I heard you like compile times
Created: 2018-01-16 16:28:41.0
Updated: 2018-05-09 21:05:06.0
Pushed: 2018-05-05 16:29:58.0
Size: 32
Language: Julia
Run Flux models in the browser, via deeplearn.js.
Note that if you get errors running this package, you may need to run `Pkg.checkout("ASTInterpreter2")`.
You can see what Flux.JS sees with `@code_js`, which works like `@code_typed` or `@code_native`. Flux.JS simply accepts a function of arrays along with example inputs, and generates JavaScript code for you. Here's the simplest possible example:

```julia
julia> x = rand(10)
10-element Array{Float64,1}:
 ⋮

julia> @code_js identity(x)
let model = (function () {
  let math = dl.ENV.math;
  function model(kinkajou) {
    return kinkajou;
  };
  model.weights = [];
  return model;
})();
model.fetchWeights("model.bson").then((function (ws) {
  return model.weights = ws;
}));
```
You can see that there's some setup code as Flux.JS expects to load some weights
for a model. But the core of it is this function, which is exactly like the
`identity` function in Julia.

```js
function model(kinkajou) {
  return kinkajou;
}
```
Let's try something more interesting: `f` takes two arguments and multiplies
them.

```julia
julia> f(W,x) = W*x

julia> @code_js f(rand(5,10),rand(10))
let model = (function () {
  let math = dl.ENV.math;
  function model(bear, giraffe) {
    return math.matrixTimesVector(bear, giraffe);
  };
  model.weights = [];
  return model;
})();
model.fetchWeights("model.bson").then((function (ws) {
  return model.weights = ws;
}));
```
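`math.matrixTimesVector` is deeplearn.js's matrix-vector product, computing the same `W*x` as the Julia code. A minimal plain-JS sketch of the operation for a row-major nested-array `W` (a hypothetical helper for illustration, not part of the generated code):

```javascript
// Hypothetical helper: W*x for a row-major matrix W (array of rows)
// and a plain-array vector x, matching Julia's W*x semantics.
function matVec(W, x) {
  // Each output entry is the dot product of one row of W with x.
  return W.map((row) => row.reduce((sum, w, j) => sum + w * x[j], 0));
}

const W = [[1, 2], [3, 4]];
const x = [1, 1];
console.log(matVec(W, x)); // → [3, 7]
```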
Because Flux models are just Julia functions, we can use the same macro with them too. You'll now notice that the weights are being used.
```julia
julia> m = Chain(Dense(10,5,relu),Dense(5,2),softmax)

julia> @code_js m(x)
let model = (function () {
  let math = dl.ENV.math;
  function badger(eland) {
    return math.add(math.matrixTimesVector(model.weights[0], eland), model.weights[1]);
  };
  function chimpanzee(mongoose) {
    return math.relu(math.add(math.matrixTimesVector(model.weights[2], mongoose), model.weights[3]));
  };
  function model(shark) {
    return math.softmax(badger(chimpanzee(shark)));
  };
  model.weights = [];
  return model;
})();
model.fetchWeights("model.bson").then((function (ws) {
  return model.weights = ws;
}));
```
There is also early support for RNNs (we compile stateful models directly, no unrolling).
```julia
julia> m = Chain(RNN(10,5))

julia> @code_js m(x)
let model = (function () {
  let math = dl.ENV.math;
  let init = [0.017732, 0.00991122, -0.00712077, -0.00161244, -0.00232475];
  let states = init.slice();
  function nightingale(seal, mongoose) {
    return [seal, mongoose];
  };
  function cat(horse) {
    let weasel = math.tanh(math.add(math.add(math.matrixTimesVector(model.weights[0], horse), math.matrixTimesVector(model.weights[1], states[0])), model.weights[2]));
    let coati = nightingale(weasel, weasel);
    states[0] = coati[0];
    return coati[1];
  };
  function model(fish) {
    return cat(fish);
  };
  model.reset = (function () {
    states = init.slice();
    return;
  });
  model.weights = [];
  return model;
})();
model.fetchWeights("model.bson").then((function (ws) {
  return model.weights = ws;
}));
```
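The generated code keeps its hidden state in `states` between calls and exposes `model.reset()` to clear it. A minimal sketch of that pattern in plain JS, with a made-up scalar recurrence standing in for the deeplearn.js math:

```javascript
// Sketch of the generated stateful-RNN pattern: hidden state persists
// across calls, and reset() restores the initial state. The scalar
// tanh recurrence below is a toy stand-in for the real cell.
const model = (function () {
  const init = [0];
  let states = init.slice();
  function model(x) {
    const h = Math.tanh(x + states[0]); // toy recurrence, not the real math
    states[0] = h;                      // state carried to the next call
    return h;
  }
  model.reset = function () {
    states = init.slice();
  };
  return model;
})();

const first = model(1);
const second = model(1); // differs from `first`: state was carried over
model.reset();
const again = model(1);  // equals `first`: state was cleared
```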
In general, the more useful entry point to the package is `FluxJS.compile`.

```julia
julia> FluxJS.compile("mnist", m, rand(10))
```
This will produce two files in the current directory: (1) `mnist.js`, which
contains the same JavaScript code as above; (2) `mnist.bson`, which contains the
model weights in a JS-loadable format.
Firstly, you'll need the following scripts in your `<head>`. The `flux.js`
script can be found here.

```html
<head>
<script src="https://unpkg.com/deeplearn"></script>
<script src="https://unpkg.com/bson/browser_build/bson.js"></script>
<script src="flux.js"></script> <!-- Or embed the script directly -->
</head>
```
From here, you can either link the generated code as another script, or embed it
directly. In real applications you'll most likely want to wait on the
`fetchWeights` promise, to avoid trying to use the model before it's ready.

```html
<script>
let model = (function () {
  let math = dl.ENV.math;
  function model(kinkajou) {
    return kinkajou;
  };
  model.weights = [];
  return model;
})();
model.fetchWeights("model.bson").then((function (ws) {
  return model.weights = ws;
}));
</script>
```
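The embed above starts the weight fetch but doesn't gate anything on it. A minimal sketch of waiting on the promise before first use, with a hypothetical stub standing in for the real BSON fetch in `flux.js`:

```javascript
// Sketch: hold off on running the model until fetchWeights resolves.
// The model and fetchWeights below are hypothetical stubs; in a real
// page they come from the generated code and flux.js.
const model = (function () {
  function model(x) { return x; }
  model.weights = [];
  return model;
})();
model.fetchWeights = (path) => Promise.resolve([[0.1, 0.2], [0.3]]); // stub

// Gate all use of the model on this promise:
const ready = model.fetchWeights("model.bson").then((ws) => {
  model.weights = ws;
  return model;
});

ready.then((m) => {
  // Safe to call the model here; m.weights is populated.
  console.log(m(42));
});
```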
In the page, you can run the model from the dev tools.
```js
> x = dl.tensor([1,2,3,4,5,6,7,8,9,10])
Tensor {isDisposed: false, size: 10, shape: Array(1), dtype: "float32", strides: Array(0), …}
> await model(x).data()
Float32Array(25) [0.0262143611907959, -0.04852187633514404, …]
```
See the deeplearn.js docs for more information on how to work with its tensor objects.
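Since `.data()` resolves to a plain `Float32Array`, post-processing is ordinary JavaScript. For instance, a classifier's predicted class is the argmax of the output scores; a small sketch with made-up score values:

```javascript
// Sketch: pick the predicted class from a model's raw output scores.
function argmax(scores) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

// Made-up scores standing in for `await model(x).data()`:
const scores = new Float32Array([0.026, -0.048, 0.91, 0.12]);
console.log(argmax(scores)); // → 2
```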