Welcome to SpikeInterface Reports
Introduction
SpikeInterface Reports is a collection of notebooks to:
- demonstrate what can be done with SpikeInterface
- show detailed benchmarks of spike sorters
- host the code that generated the figures in papers related to SpikeInterface (spikeinterface, probeinterface, collision)
- serve as a sandbox for future benchmarks that could be integrated into the SpikeForest website
- demonstrate examples of sorting components
- test out other ideas!
Notebook examples for sorting components
- 2022-04-14 13:51 spikeinterface peak localization
- 2022-04-14 13:44 spikeinterface clustering
- 2022-04-13 16:45 spikeinterface motion estimation / correction
- 2022-04-13 16:31 spikeinterface peak detection
- 2022-04-13 12:36 spikeinterface template matching
- 2022-04-11 16:47 spikeinterface destripe
Notebooks reproducing the figures in collision preprint
- 2021-11-29 10:29 Collision paper spike sorting performance
- 2021-11-29 10:29 Collision paper simulated recordings
- 2021-11-29 10:28 Collision paper generate recordings
Notebooks reproducing the figures in probeinterface paper 2022
- 2021-11-26 12:19 probeinterface paper figures
Notebooks reproducing the figures in the elife paper 2020
- 2020-08-25 11:50 Ensemble sorting of a 3Brain Biocam recording from a retina
- 2020-08-25 11:49 Ensemble sorting of a Neuropixels recording 2
- 2020-08-23 17:30 Ground truth comparison and ensemble sorting of a synthetic Neuropixels recording
- 2020-08-23 17:29 Ensemble sorting of a Neuropixels recording
Notebooks with NEW API
- 2021-04-02 17:30 Quick benchmark with the new API and new sorters (April 2021)
- 2021-03-29 15:22 Compare old vs new spikeinterface API
Other ideas we want to test here:
- compute agreement across sorters for datasets without ground truth
- make benchmarks for sorters that separate:
  - the accuracy per ground-truth unit
  - how many units are detected (true and false positives)

Benchmarks often report a single averaged accuracy that mixes these metrics: a very precise sorter that misses some neurons can end up with a lower average accuracy than one that detects more neurons less precisely.
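To make this concrete, here is a minimal, self-contained sketch (plain Python, not the SpikeInterface API; the function name `benchmark_summary` and the 0.8 "well detected" threshold are illustrative choices) of how reporting per-unit accuracy and detection counts separately avoids the averaging pitfall:

```python
# Hypothetical sketch: summarise a sorter run from per-ground-truth-unit
# accuracies instead of collapsing everything into one averaged score.

def benchmark_summary(accuracy_per_unit, detection_threshold=0.8):
    """accuracy_per_unit: dict mapping GT unit id -> accuracy in [0, 1]
    (0.0 means the unit was missed entirely)."""
    accuracies = list(accuracy_per_unit.values())
    n_detected = sum(a > 0 for a in accuracies)
    n_well_detected = sum(a >= detection_threshold for a in accuracies)
    mean_detected = (
        sum(a for a in accuracies if a > 0) / n_detected if n_detected else 0.0
    )
    return {
        "num_gt_units": len(accuracies),
        "num_detected": n_detected,
        "num_well_detected": n_well_detected,
        "mean_accuracy_detected_only": mean_detected,
        "mean_accuracy_all_units": sum(accuracies) / len(accuracies),
    }

# Sorter A is very precise but misses one unit;
# Sorter B finds every unit, each less precisely.
sorter_a = benchmark_summary({"u1": 0.98, "u2": 0.95, "u3": 0.0})
sorter_b = benchmark_summary({"u1": 0.85, "u2": 0.80, "u3": 0.75})
```

Averaged over all ground-truth units, sorter A scores below sorter B even though A is more accurate on every unit it does find; reporting the two numbers separately makes the trade-off visible.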
Furthermore, we plan to add benchmarks on specific aspects of extracellular recordings:
- benchmarks specifically for spike collisions
- benchmarks for probe drift
- benchmarks investigating whether high-density probes yield better results than lower-density probes
- examples of parameter optimisation
- further testing of the "ensemble sorting" method
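The agreement-across-sorters idea above can be sketched with a small, self-contained example (this is an illustrative re-implementation, not SpikeInterface's internal code; the `delta` tolerance and greedy matching are simplifying assumptions): two units from different sorters "agree" on spikes that fall within a small sample tolerance of each other, and the agreement score is the number of matched spikes divided by the number of distinct events.

```python
# Minimal sketch of an agreement score between two spike trains:
# agreement = num_matches / (n1 + n2 - num_matches).

def agreement_score(train1, train2, delta=10):
    """train1, train2: sorted lists of spike sample indices.
    Greedily pairs spikes that are within `delta` samples of each other."""
    matches = 0
    i = j = 0
    while i < len(train1) and j < len(train2):
        d = train1[i] - train2[j]
        if abs(d) <= delta:
            matches += 1
            i += 1
            j += 1
        elif d < 0:
            i += 1  # train1 spike has no close partner; skip it
        else:
            j += 1  # train2 spike has no close partner; skip it
    return matches / (len(train1) + len(train2) - matches)

# Two sorters detecting nearly the same unit, with small jitter
# and one unmatched spike on each side.
unit_a = [100, 500, 900, 1300]
unit_b = [102, 505, 1302, 1700]
score = agreement_score(unit_a, unit_b)  # 3 matches over 5 distinct events -> 0.6
```

High pairwise agreement between units found by independent sorters is evidence that a unit is real even when no ground truth is available, which is also the intuition behind the "ensemble sorting" method listed above.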