Welcome to SpikeInterface Reports

[SpikeInterface logo]

Introduction

SpikeInterface Reports is a collection of notebooks to:

  • demonstrate what can be done with SpikeInterface.

  • show detailed benchmarks for spike sorters.

  • serve as a sandbox for future benchmarks that can be integrated into the SpikeForest website.

  • test out other ideas!

Notebooks with NEW API

Other ideas we want to test here:

  • compute agreement across sorters for datasets without ground truth

  • make benchmarks for sorters that separate:

    • the accuracy per ground-truth (GT) unit

    • how many units are detected (true and false positives)

Benchmarks often report a single average accuracy that mixes these two aspects. A sorter that sorts the units it finds very accurately but misses some neurons can end up with a lower average accuracy than a sorter that detects more neurons with lower precision.
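The point above can be illustrated with toy numbers (hypothetical accuracies, not real benchmark results): depending on whether missed ground-truth units count toward the average, the ranking of two sorters can flip, which is why the two aspects should be reported separately.

```python
# Toy illustration: how averaging accuracy mixes detection and precision.
# Assume a recording with 4 ground-truth (GT) units.
gt_units = 4

# Sorter A detects only 2 of the 4 units, both very accurately.
sorter_a_accuracies = [0.95, 0.93]

# Sorter B detects all 4 units, each with moderate accuracy.
sorter_b_accuracies = [0.80, 0.78, 0.75, 0.72]

def mean(values):
    return sum(values) / len(values)

# Convention 1: average only over detected units -> A looks better.
avg_detected_a = mean(sorter_a_accuracies)            # 0.94
avg_detected_b = mean(sorter_b_accuracies)            # 0.7625

# Convention 2: missed GT units count as accuracy 0 -> B looks better.
avg_all_a = sum(sorter_a_accuracies) / gt_units       # 0.47
avg_all_b = sum(sorter_b_accuracies) / gt_units       # 0.7625

print(avg_detected_a > avg_detected_b)  # True: A wins under convention 1
print(avg_all_b > avg_all_a)            # True: B wins under convention 2
```

Because a single averaging convention hides this flip, reporting per-GT-unit accuracy and the detected-unit counts as separate numbers is more informative.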

Furthermore, we plan to add benchmarks on specific aspects of extracellular recordings:

  • benchmarks specifically for spike collisions

  • benchmarks for probe drift

  • benchmarks investigating whether high-density probes yield better results than lower-density probes

  • examples of parameter optimisation

  • further testing of the "ensemble sorting" method
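Several of the ideas above (agreement across sorters, "ensemble sorting") rest on the same primitive: matching spikes between two spike trains within a small time tolerance and scoring their overlap. A minimal sketch of that primitive is shown below; the function name and the matching scheme are illustrative assumptions, not the SpikeInterface API.

```python
def agreement_score(train1, train2, delta_ms=0.4):
    """Fraction-of-union agreement between two spike trains (times in ms).

    Each spike in one train is matched to at most one spike in the other
    when the two occur within `delta_ms` of each other.  This is a
    hypothetical helper sketching the idea, not SpikeInterface code.
    """
    t1, t2 = sorted(train1), sorted(train2)
    matched = 0
    i = j = 0
    # Greedy two-pointer sweep over both sorted trains.
    while i < len(t1) and j < len(t2):
        if abs(t1[i] - t2[j]) <= delta_ms:
            matched += 1
            i += 1
            j += 1
        elif t1[i] < t2[j]:
            i += 1
        else:
            j += 1
    # Matched spikes are counted once in the union of both trains.
    return matched / (len(t1) + len(t2) - matched)

# Two hypothetical spike trains from different sorters (times in ms):
a = [10.0, 20.0, 30.0, 40.0]
b = [10.1, 20.2, 35.0, 40.1]
print(agreement_score(a, b))  # 3 matches / (4 + 4 - 3) = 0.6
```

For units without ground truth, a high pairwise agreement between independent sorters (or independent runs) is the signal that ensemble-style approaches treat as evidence that a unit is real.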