Name: js-ipfs-unixfs-engine
Owner: TABLEFLIP
Description: JavaScript implementation of the layout and chunking mechanisms used by IPFS
Forked from: ipfs/js-ipfs-unixfs-engine
Created: 2017-08-30 14:05:48.0
Updated: 2017-08-30 14:05:51.0
Pushed: 2017-09-08 15:39:10.0
Homepage: null
Size: 12984
Language: JavaScript
JavaScript implementation of the layout and chunking mechanisms used by IPFS to handle Files
npm install ipfs-unixfs-engine
Let's create a little directory to import:
cd /tmp
mkdir foo
echo 'hello' > foo/bar
echo 'world' > foo/quux
And write the importing logic:
const fs = require('fs')
const Importer = require('ipfs-unixfs-engine').Importer

const filesAddStream = new Importer(<dag or ipld-resolver instance>)

// An array to hold the nested file/dir info returned by the importer;
// the root DAG Node is received upon completion
const res = []

// Import paths /tmp/foo/bar and /tmp/foo/quux
const rs = fs.createReadStream('/tmp/foo/bar')
const rs2 = fs.createReadStream('/tmp/foo/quux')
const input = { path: '/tmp/foo/bar', content: rs }
const input2 = { path: '/tmp/foo/quux', content: rs2 }

// Listen for the data event from the importer stream
filesAddStream.on('data', (info) => res.push(info))

// The end event of the stream signals that the importer is done
filesAddStream.on('end', () => console.log('Finished importing files!'))

// Call write on the importer to pass it the file/object tuples
filesAddStream.write(input)
filesAddStream.write(input2)
filesAddStream.end()
When run, the stats of each DAG Node are output on the data event, file by file, ending with the root:
{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 39243,
  path: '/tmp/foo/bar' }
{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 59843,
  path: '/tmp/foo/quux' }
{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 93242,
  path: '/tmp/foo' }
{ multihash: <Buffer 12 20 bd e2 2b 57 3f 6f bd 7c cc 5a 11 7f 28 6c a2 9a 9f c0 90 e1 d4 16 d0 5f 42 81 ec 0c 2a 7f 7f 93>,
  size: 94234,
  path: '/tmp' }
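Since the root entry is emitted last, its stats can be read off the end of the collected results. A minimal sketch, reusing res from the example above:

// The final info object is the root ('/tmp' here); its multihash is the
// content address of the whole imported tree
filesAddStream.on('end', () => {
  const root = res[res.length - 1]
  console.log('root:', root.path, root.multihash.toString('hex'))
})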
const Importer = require('ipfs-unixfs-engine').Importer

The importer object is a duplex pull stream that takes objects of the form:

{
  path: 'a name',
  content: (Buffer or Readable stream)
}

The importer will output file info objects as files get stored in IPFS. When stats on a node are emitted they are guaranteed to have been written.

dag is an instance of the IPLD Resolver or the js-ipfs dag API.
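For illustration, here is one way such an instance might be constructed from the ipld-resolver and ipfs-block-service packages of this era; the repo setup and exact constructors are assumptions about your environment, not part of this module:

const BlockService = require('ipfs-block-service')
const IPLDResolver = require('ipld-resolver')

// Assumes `repo` is an already-opened ipfs-repo instance
const blockService = new BlockService(repo)
const ipldResolver = new IPLDResolver(blockService)

const filesAddStream = new Importer(ipldResolver)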
The input's file paths and directory structure will be preserved in the created dag-pb nodes.
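For instance, nested input paths produce intermediate directory nodes (a sketch reusing the stream API from the example above):

// Nested paths...
filesAddStream.write({ path: 'animals/cat.txt', content: fs.createReadStream('cat.txt') })
filesAddStream.write({ path: 'animals/dogs/dog.txt', content: fs.createReadStream('dog.txt') })
filesAddStream.end()
// ...emit nodes for the two files, plus dag-pb directory nodes
// for 'animals/dogs' and 'animals'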
options is a JavaScript object that may include the following keys:

- wrap (boolean, defaults to false): if true, a wrapping node will be created
- shardSplitThreshold (positive integer, defaults to 1000): the number of directory entries above which we decide to use a sharding directory builder (instead of the default flat one)
- chunker (string, defaults to "fixed"): the chunking strategy. Currently only supports "fixed"
- chunkerOptions (object, optional): the options for the chunker. Defaults to an object with the following properties:
  - maxChunkSize (positive integer, defaults to 262144): the maximum chunk size for the fixed chunker
- strategy (string, defaults to "balanced"): the DAG builder strategy name. Supports:
  - flat: flat list of chunks
  - balanced: builds a balanced tree
  - trickle: builds a trickle tree
- maxChildrenPerNode (positive integer, defaults to 174): the maximum children per node for the balanced and trickle DAG builder strategies
- layerRepeat (positive integer, defaults to 4): (only applicable to the trickle DAG builder strategy) the maximum repetition of parent nodes for each layer of the tree
- reduceSingleLeafToSelf (boolean, defaults to false): optimization that, when reducing a set containing a single node, reduces it to that node
- dirBuilder (object): the options for the directory builder
  - hamt (object): the options for the HAMT sharded directory builder
    - bits (positive integer, defaults to 8): the number of bits at each bucket of the HAMT
- progress (function): a function that will be called with the byte length of chunks as a file is added to IPFS
- onlyHash (boolean, defaults to false): only chunk and hash, do not write to disk
- hashAlg (string): the multihash hashing algorithm to use
- cidVersion (integer, defaults to 0): the CID version to use when storing the data (storage keys are based on the CID, including its version)
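As a sketch, options are passed as the second Importer argument; the values shown below are just the documented defaults plus wrap, and ipldResolver stands in for your dag or ipld-resolver instance:

const filesAddStream = new Importer(ipldResolver, {
  wrap: true,                               // add a wrapping directory node around the input
  chunker: 'fixed',
  chunkerOptions: { maxChunkSize: 262144 }, // the documented default chunk size
  strategy: 'balanced',
  maxChildrenPerNode: 174
})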
// Create an export source pull-stream with the CID or IPFS path you want
// to export and a <dag or ipld-resolver instance> to fetch the file from
const filesStream = Exporter(<cid or ipfsPath>, <dag or ipld-resolver instance>)

// Pipe the returned stream to the console
filesStream.on('data', (file) => file.content.pipe(process.stdout))
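When the exported CID refers to a directory, one object is emitted per entry. A sketch, under the assumption that directory entries carry no content stream, only path metadata:

filesStream.on('data', (file) => {
  if (file.content) {
    file.content.pipe(process.stdout)    // file entry: stream its bytes
  } else {
    console.log('directory:', file.path) // directory entry: path only
  }
})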
const Exporter = require('ipfs-unixfs-engine').Exporter
Uses the given dag API or ipld-resolver instance to fetch IPFS UnixFS objects by their multiaddress.
Creates a new readable stream in object mode that outputs objects of the form:

{
  path: 'a name',
  content: (Buffer or Readable stream)
}
Errors are surfaced as on a normal stream, by listening for the 'error' event.
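For example:

filesStream.on('error', (err) => console.error('export failed:', err))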
Feel free to join in. All welcome. Open an issue!
This repository falls under the IPFS Code of Conduct.