Welcome to SpikeInterface Reports


Introduction

SpikeInterface Reports is a collection of notebooks to:

  • demonstrate what can be done with SpikeInterface.

  • show detailed benchmarks for spike sorters.

  • host the code that generated the figures in papers related to spikeinterface (spikeinterface, probeinterface, collision)

  • serve as a sandbox for future benchmarks that can be integrated into the SpikeForest website.

  • demonstrate examples of sorting components (see sortingcomponents_examples)

  • test out other ideas!

Notebook examples for sorting components

Notebooks reproducing the figures in the collision preprint

Notebooks reproducing the figures in probeinterface paper 2022

Notebooks reproducing the figures in the eLife paper 2020

Notebooks with NEW API

All notebooks

Other ideas we want to test here:

  • compute agreement across sorters for datasets without ground truth (see the second sketch below)

  • make benchmarks for sorters that separate:

    • the accuracy per GT unit

    • how many units are detected (true and false positives)

Benchmarks often report a single average accuracy that mixes these two aspects. Since a missed neuron contributes an accuracy of zero to the average, a sorter with high accuracy on the units it does detect, but which misses some neurons, may end up with a lower average accuracy than a sorter that detects more neurons with less precision.
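
As an illustration, here is a minimal sketch of such a split with the spikeinterface API. The choice of tridesclous as the sorter, the 0.9 well-detected threshold, and the use of toy_example() for a paired recording and ground truth are assumptions made only to keep the snippet self-contained.

```python
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc

# toy_example() returns a recording together with its ground-truth sorting
recording, gt_sorting = se.toy_example(duration=30, num_channels=4, seed=0)

# run one sorter (assuming tridesclous is installed)
sorting = ss.run_sorter('tridesclous', recording, output_folder='tdc_output')

# exhaustive_gt=True: the toy ground truth contains every unit in the recording
cmp = sc.compare_sorter_to_ground_truth(gt_sorting, sorting, exhaustive_gt=True)

# (1) accuracy reported per ground-truth unit, not averaged away
print(cmp.get_performance(method='by_unit')['accuracy'])

# (2) detection counts reported separately from per-unit accuracy
print('well detected:', cmp.count_well_detected_units(well_detected_score=0.9))
print('false positives:', cmp.count_false_positive_units())
```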
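
Similarly, here is a minimal sketch of the agreement idea from the list above, using spikeinterface's multi-sorting comparison. The three sorters and the agreement threshold of two are arbitrary choices for illustration.

```python
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc

recording, _ = se.toy_example(duration=30, num_channels=4, seed=0)

# run several sorters on the same recording (assuming they are installed)
sorting_list = [
    ss.run_sorter('herdingspikes', recording, output_folder='hs_output'),
    ss.run_sorter('tridesclous', recording, output_folder='tdc_agreement'),
    ss.run_sorter('spykingcircus', recording, output_folder='sc_output'),
]

mcmp = sc.compare_multiple_sorters(sorting_list=sorting_list,
                                   name_list=['HS', 'TDC', 'SC'])

# keep only the units on which at least two of the three sorters agree
agreement = mcmp.get_agreement_sorting(minimum_agreement_count=2)
print(agreement.get_unit_ids())
```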

Furthermore, we plan to add benchmarks on specific aspects of extracellular recordings:

  • benchmarks specifically for spike collisions

  • benchmarks for probe drift

  • benchmarks investigating whether high-density probes yield better results than lower-density probes

  • examples of parameter optimisation (see the sketch below)

  • further testing of the "ensemble sorting" method
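
For the parameter-optimisation item, here is a minimal sketch of a naive grid search scored against ground truth. The parameter name detect_threshold and its candidate values are assumptions picked for illustration, not a recommended search space.

```python
import spikeinterface.extractors as se
import spikeinterface.sorters as ss
import spikeinterface.comparison as sc

recording, gt_sorting = se.toy_example(duration=30, num_channels=4, seed=0)

# try a few values of one sorter parameter and keep the best-scoring run
results = {}
for threshold in (4.0, 5.0, 6.0):
    sorting = ss.run_sorter('tridesclous', recording,
                            output_folder=f'tdc_thr_{threshold}',
                            detect_threshold=threshold)
    cmp = sc.compare_sorter_to_ground_truth(gt_sorting, sorting)
    # a single pooled score is acceptable here because we only rank runs
    results[threshold] = cmp.get_performance(method='pooled_with_average')['accuracy']

best = max(results, key=results.get)
print('best detect_threshold:', best, '-> accuracy:', results[best])
```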