NVIDIA/torch-nccl

Name: torch-nccl

Owner: NVIDIA Corporation

Description: torch bindings for nccl

Forked from: ngimel/nccl.torch

Created: 2016-08-15 23:09:11

Updated: 2016-08-16 15:12:35

Pushed: 2016-06-23 16:32:03

Homepage: none

Size: 103 KB

Language: Lua

GitHub Committers

User | Most Recent Commit | # Commits

Other Committers

User | Email | Most Recent Commit | # Commits

README

nccl.torch

Torch7 FFI bindings for the NVIDIA NCCL library.

Installation

Collective operations supported

Example usage

The argument to a collective call should be a table of contiguous tensors located on different devices. For example, to perform an in-place allReduce on a table of tensors:

require 'nccl'
nccl.allReduce(inputs)

where inputs is a table of contiguous tensors of the same size, each located on a different device.
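
A minimal sketch of the pattern described above, assuming cutorch and these nccl bindings are installed, at least two GPUs are visible, and the call uses NCCL's usual sum reduction; the tensor size (1000) and the fill values are arbitrary illustrative choices, not part of the original README:

require 'cutorch'
require 'nccl'

-- Build a table of same-sized, contiguous CudaTensors, one per GPU.
local nGPUs = cutorch.getDeviceCount()
local inputs = {}
for i = 1, nGPUs do
   cutorch.setDevice(i)
   inputs[i] = torch.CudaTensor(1000):fill(i)  -- tensor i lives on device i
end

-- In-place allReduce across the table: each per-device tensor is replaced
-- by the elementwise reduction over all devices.
nccl.allReduce(inputs)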
