guardianproject/informacam-guide

Name: informacam-guide

Owner: Guardian Project

Description: Trainer and end-user guide to using InformaCam and CameraV app

Created: 2015-04-07 21:07:36

Updated: 2017-12-23 18:15:13

Pushed: 2015-07-28 16:39:29

Homepage: none

Size: 40588 KB

Language: HTML


README

InformaCam System and the CameraV App User Guide
TRUST (BUT VERIFY!) WHAT YOUR EYES SEE

InformaCam is a system that uses the built-in sensors in modern smartphones to track movement, light, and other environmental inputs, along with Wi-Fi, Bluetooth, and cellular network information, to capture a snapshot of the environment around you while you are taking a photo or video. This extra metadata (the data about the data!) helps verify and validate the date, time, and location of capture, and provides an entirely new layer of context and meaning drawn from otherwise “invisible” signals, for use in any way you choose. Finally, digital signatures and encryption ensure that your media hasn't been tampered with since capture and that it can only be seen by the people you choose.
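To make the idea concrete, here is a minimal sketch in plain Java of the underlying hash-then-sign pattern. This is not InformaCam's actual code or metadata schema, and the field names are hypothetical: a sensor snapshot is bound to a hash of the media bytes, and both are covered by a digital signature, so any later edit to either breaks verification.

```java
// Illustrative sketch of the hash-then-sign idea InformaCam relies on;
// not the project's actual metadata format or API.
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;
import java.util.Base64;

public class MetadataSnapshotDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical metadata snapshot: sensor readings bundled at capture time.
        String metadata = "{\"timestamp\":\"2015-04-07T21:07:36Z\","
                + "\"gps\":[40.7128,-74.0060],\"lightLux\":320.5}";

        // Hash the media bytes so the metadata is bound to this exact photo.
        byte[] media = "...photo bytes...".getBytes(StandardCharsets.UTF_8);
        byte[] mediaHash = MessageDigest.getInstance("SHA-256").digest(media);

        // Sign metadata + media hash; any later edit breaks verification.
        KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(metadata.getBytes(StandardCharsets.UTF_8));
        signer.update(mediaHash);
        byte[] sig = signer.sign();

        System.out.println("signature: " + Base64.getEncoder().encodeToString(sig));
    }
}
```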

Currently, you can use InformaCam by installing the CameraV app for Android smartphones. In CameraV, the “V” stands for Verification, Veritas (truth!), and Vaulted (secured!). It also evokes the “V” hand sign for victory and peace.

Easy To Use

CameraV is the easiest way to capture and share secure photos and videos on a smartphone or tablet.


Sensor Smart

CameraV turns sensor inputs like compass, light, temperature, location and more into “metadata for good”.
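As an illustration, here is a hypothetical Android sketch of how an app like CameraV might subscribe to a couple of these sensor streams. The class and field names are invented for this example and are not CameraV's actual code.

```java
// Hypothetical sketch of reading sensors an app like CameraV draws on.
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class SensorSnapshot implements SensorEventListener {
    private float lightLux;
    private float[] magneticField;

    public void start(Context context) {
        SensorManager sm = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        // Register for the ambient light and compass (magnetometer) streams.
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_LIGHT),
                SensorManager.SENSOR_DELAY_NORMAL);
        sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Cache the latest readings so they can be bundled into capture metadata.
        if (event.sensor.getType() == Sensor.TYPE_LIGHT) {
            lightLux = event.values[0];
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            magneticField = event.values.clone();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```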


Share Media and Metadata

Upload and share media captured with CameraV wherever you choose, so people can trust what their eyes see.
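That trust comes from checking the signature made at capture time. Continuing the earlier signing sketch, a recipient-side check might look like the following (again illustrative, not CameraV's actual verification API):

```java
// Companion sketch to the signing example above: how a recipient could check
// that metadata and media still match the original signature.
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;

public class VerifyDemo {
    static boolean verify(PublicKey publicKey, String metadata,
                          byte[] mediaHash, byte[] sig) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(metadata.getBytes(StandardCharsets.UTF_8));
        verifier.update(mediaHash);
        // Returns false if the metadata, media hash, or signature was altered.
        return verifier.verify(sig);
    }
}
```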

Crypto-Power

CameraV has strong encryption and network security built in, using technologies such as OpenPGP, IOCipher, and Tor.
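By way of illustration only, the sketch below seals media bytes with AES-GCM from the standard JDK. It is a stand-in for the “encrypt so only chosen recipients can read it” idea, not the OpenPGP, IOCipher, or Tor integration CameraV actually ships.

```java
// Illustrative stand-in for the encryption layer: AES-GCM via the JDK,
// NOT the actual OpenPGP or IOCipher APIs CameraV uses.
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class SealDemo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] sealed = cipher.doFinal("...photo bytes...".getBytes("UTF-8"));

        // Only holders of the key (in CameraV's case, chosen recipients) can decrypt.
        System.out.println("sealed " + sealed.length + " bytes");
    }
}
```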

Open and Free

Both CameraV and the InformaCam system are open source and freely licensed for use by any individual or organization.

