Name: pulse2percept
Owner: UW eScience Institute
Description: A Python-based simulation framework for bionic vision
Created: 2016-01-05 22:01:46.0
Updated: 2018-05-21 15:04:38.0
Pushed: 2018-05-21 15:04:37.0
Homepage: http://uwescience.github.io/pulse2percept/
Size: 283513
Language: Python
By 2020 roughly 200 million people will suffer from retinal diseases such as macular degeneration or retinitis pigmentosa. Consequently, a variety of retinal sight restoration procedures are being developed to target these diseases. Electronic prostheses (currently being implanted in patients) directly stimulate remaining retinal cells using electrical current, analogous to a cochlear implant. Optogenetic prostheses (soon to be implanted in humans) use optogenetic proteins to make remaining retinal cells responsive to light, then use light diodes (natural illumination is inadequate) implanted in the eye to stimulate these light-sensitive cells.
However, these devices do not restore anything resembling natural vision: Interactions between the electronics and the underlying neurophysiology result in significant distortions of the perceptual experience.
We have developed a computer model that has the goal of predicting the perceptual experience of retinal prosthesis patients. The model was developed using a variety of patient data describing the brightness and shape of phosphenes elicited by stimulating single electrodes, and validated against an independent set of behavioral measures examining spatiotemporal interactions across multiple electrodes.
The model takes as input a series of (simulated) electrical pulse trains—one pulse train per electrode in the array—and converts them into an image sequence that corresponds to the predicted perceptual experience of a patient:
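Below is a minimal usage sketch along the lines of the example in the SciPy 2017 paper. The class and function names shown (p2p.implants.ArgusI, p2p.Simulation, p2p.stimuli.image2pulsetrain, Simulation.pulse2percept) and their default arguments may differ between releases, so treat this as an illustration and the example notebooks listed below as the authoritative reference:

import pulse2percept as p2p

# Simulated epiretinal electrode array (an Argus I layout is assumed here)
implant = p2p.implants.ArgusI()

# Simulation object wrapping the spatial and temporal retinal models;
# 'Nanduri2012' is one of the temporal models described in the paper
sim = p2p.Simulation(implant)
sim.set_optic_fiber_layer()
sim.set_ganglion_cell_layer('Nanduri2012')

# Encode an input image as one pulse train per electrode
# ('input.jpg' is a placeholder file name)
stim = p2p.stimuli.image2pulsetrain('input.jpg', implant)

# Convert the pulse trains into a movie of the predicted percept
percept = sim.pulse2percept(stim)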
If you use pulse2percept in a scholarly publication, please cite as:
M Beyeler, GM Boynton, I Fine, A Rokem (2017). pulse2percept: A Python-based simulation framework for bionic vision. Proceedings of the 16th Python in Science Conference, p.81-88, doi:10.25080/shinma-7f4c6e7-00c.
Or use the following BibTeX:
@inproceedings{ BeyelerSciPy2017,
  author = { {M}ichael {B}eyeler and {G}eoffrey {M}. {B}oynton and {I}one {F}ine and {A}riel {R}okem },
  title = { pulse2percept: {A} {P}ython-based simulation framework for bionic vision },
  booktitle = { {P}roceedings of the 16th {P}ython in {S}cience {C}onference },
  pages = { 81 - 88 },
  year = { 2017 },
  doi = { 10.25080/shinma-7f4c6e7-00c },
  editor = { {K}aty {H}uff and {D}avid {L}ippa and {D}illon {N}iederhut and {M} {P}acer }
}
pulse2percept requires the following software to be installed for your platform:
Python 2.7 or >= 3.4
Optional packages:
Dask for parallel processing (a joblib alternative). Use conda to install.
Numba. Use conda to install.
ffmpeg codec, if you're on Windows and want to use functions in the files module.
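For example, the optional packages above can typically be installed with conda (the conda-forge channel for ffmpeg is an assumption; your setup may differ):

conda install numba dask
conda install -c conda-forge ffmpeg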
The latest stable release of pulse2percept can be installed with pip:
pip install pulse2percept
In order to get the bleeding-edge version of pulse2percept, use the commands:
git clone https://github.com/uwescience/pulse2percept.git
cd pulse2percept
python setup.py install
To test pulse2percept after installation, execute in Python:
import pulse2percept
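If the import succeeds without an error, the installation worked. A slightly more informative check (assuming the package exposes the usual __version__ attribute) prints the installed version:

import pulse2percept
print(pulse2percept.__version__)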
A number of useful examples can be found in the “examples/notebooks” folder, including the following:
0.0-example-usage.ipynb: How to use the model.
0.1-image2percept.ipynb: How to convert an image to a percept.
Detailed documentation can be found at uwescience.github.io/pulse2percept.