Name: imagecapture-polyfill
Owner: GoogleChromeLabs
Description: MediaStream ImageCapture polyfill. Take photos from the browser as easily as .takePhoto().then(processPhoto)
Created: 2016-10-18 16:26:34.0
Updated: 2018-05-22 11:46:50.0
Pushed: 2018-01-10 23:22:10.0
Homepage: https://googlechromelabs.github.io/imagecapture-polyfill/
Size: 76
Language: JavaScript
ImageCapture is a polyfill for the MediaStream Image Capture API.
As of June 2017, the ImageCapture spec is relatively stable. Chrome supports the API starting with M59 (earlier versions require setting a flag) and Firefox has partial support behind a flag. See the ImageCapture browser support page for details.
Prior to this API, two approaches have been used to take a still picture from the device camera:

1. Attach a `<video>` element to a stream obtained via `navigator[.mediaDevices].getUserMedia`, then use a 2D canvas context to `drawImage` from that video. The canvas can return a URL to be used as the `src` attribute of an `<img>` element, via `.toDataURL('image/<format>')`. (1, 2)
2. `<input type="file" name="image" accept="image/*" capture>`
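The first, canvas-based approach can be sketched as a small helper. This is an illustrative function, not part of the polyfill; the name `captureStill` and the element arguments are assumptions:

```js
// Sketch of the pre-ImageCapture technique: draw the current video frame
// onto a canvas, then read it back as a data URL for use as an <img> src.
function captureStill(video, canvas, format = 'image/png') {
  // Size the canvas to the video's intrinsic dimensions.
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  // Copy the current frame from the <video> element.
  canvas.getContext('2d').drawImage(video, 0, 0);
  // Encode the pixels; the result can be assigned to img.src.
  return canvas.toDataURL(format);
}
```

In a real page, `video` would be a playing `<video>` element whose stream came from `getUserMedia`, and `canvas` an on- or off-screen canvas.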
The demo currently shows grabFrame() and takePhoto().
Install with yarn:

```sh
yarn add image-capture
```

Or, with npm:

```sh
npm install --save image-capture
```
In your JS code:

```js
import { ImageCapture } from 'image-capture';

let videoDevice;
const canvas = document.getElementById('canvas');
const photo = document.getElementById('photo');

navigator.mediaDevices.getUserMedia({video: true}).then(gotMedia).catch(failedToGetMedia);

function gotMedia(mediaStream) {
  // Extract video track.
  videoDevice = mediaStream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  let captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.takePhoto().then(processPhoto).catch(stopCamera);
    captureDevice.grabFrame().then(processFrame).catch(stopCamera);
  }
}

function processPhoto(blob) {
  photo.src = window.URL.createObjectURL(blob);
}

function processFrame(imageBitmap) {
  canvas.width = imageBitmap.width;
  canvas.height = imageBitmap.height;
  canvas.getContext('2d').drawImage(imageBitmap, 0, 0);
}

function stopCamera(error) {
  console.error(error);
  if (videoDevice) videoDevice.stop(); // turn off the camera
}

photo.addEventListener('load', function () {
  // After the image loads, discard the image object to release the memory
  window.URL.revokeObjectURL(this.src);
});
```
Start by constructing a new ImageCapture object:
```js
let captureDevice;
navigator.mediaDevices.getUserMedia({video: true}).then(mediaStream => {
  captureDevice = new ImageCapture(mediaStream.getVideoTracks()[0]);
}).catch(...)
```
Please consult the spec for full detail on the methods.
Takes a video track and returns an ImageCapture object.
TBD
TBD
Capture the video stream into a Blob containing a single still image.
Returns a Promise that resolves to a Blob on success, or is rejected with a DOMException on failure.
```js
captureDevice.takePhoto().then(blob => {
  ...
}).catch(error => ...);
```
Gather data from the video stream into an ImageBitmap object. The width and height of the ImageBitmap object are derived from the constraints of the video stream track passed to the constructor.
Returns a Promise that resolves to an ImageBitmap on success, or is rejected with a DOMException on failure.
```js
captureDevice.grabFrame().then(imageBitmap => {
  ...
}).catch(error => ...);
```
The polyfill has been tested to work in current browsers:
For the widest compatibility, you can additionally load the WebRTC adapter. That will expand support to:
For older browsers that don't support navigator.getUserMedia, you can additionally load Addy Osmani's shim with optional fallback to Flash - getUserMedia.js. Alternatively, the getUserMedia wrapper normalizes error handling and gives an error-first API with cross-browser support.
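As an illustration of what such a wrapper does, a compatibility helper can prefer the modern promise-based `navigator.mediaDevices.getUserMedia` and fall back to the older, possibly vendor-prefixed callback forms. This sketch is hypothetical and is not the API of any of the libraries above; the `nav` parameter is added only so the logic is testable:

```js
// Hypothetical compatibility helper: always returns a Promise, whichever
// flavor of getUserMedia the browser provides. `nav` defaults to the
// global navigator but can be injected for testing.
function getUserMediaCompat(constraints, nav = navigator) {
  // Modern promise-based API.
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    return nav.mediaDevices.getUserMedia(constraints);
  }
  // Legacy callback-based APIs (possibly vendor-prefixed).
  const legacy = nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia;
  if (!legacy) {
    return Promise.reject(new Error('getUserMedia is not supported'));
  }
  // Wrap the (success, error) callback signature in a Promise.
  return new Promise((resolve, reject) =>
    legacy.call(nav, constraints, resolve, reject));
}
```

Callers then handle every browser the same way: `getUserMediaCompat({video: true}).then(gotMedia).catch(failedToGetMedia)`.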
To build the polyfill and run the demo page locally:

```sh
npm/yarn install
npm/yarn run dev
```
To make your server accessible outside of localhost, run `npm/yarn run lt`.
Before committing, make sure you pass `yarn/npm run lint` without errors, and run `yarn/npm run docs` to generate the demo.