How to debug a synapse classifier with WEBKNOSSOS
Interview with Valentin Pinkau
Today, we will have a chat with one of our colleagues: Valentin Pinkau. Valentin has been working as a machine learning engineer at scalable minds for 5 years. As an experienced data scientist specializing in connectomics, he has some interesting insights to share.
What do you like most about your work here?
The fact that we support researchers in neuroscience and life sciences, which is a meaningful purpose. I also find my tasks exciting: training models, working with machine learning and diving deep into the wonderful data.
So how do you teach your model to detect such things? What are the steps you have to take to obtain this information, and which tools do you use?
It all starts with what we call the “ground truth”: bounding boxes of volume EM data annotated by biologists.
As you can see in this image, some scientists (Motta et al. 2019) annotated the synaptic clefts in the EM images (red, brown), along with mitochondria (orange) and vesicles (green). They also indicated the synapse types: spine-head or dendritic-shaft synapses.
The scientists generate their training data in WEBKNOSSOS, which is great for annotating EM data manually and collaboratively, and then simply send me a link to the annotated dataset. Based on this, I train my machine learning model, a fully convolutional residual U-Net, with Voxelytics to detect such information.
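To give a feel for this first step, here is a minimal NumPy sketch of how annotated bounding boxes can be turned into (raw, label) training patches. The function name and the non-overlapping sliding window are illustrative assumptions; the actual Voxelytics training pipeline is far more involved.

```python
import numpy as np

def extract_training_patches(volume, label_volume, boxes, patch_shape=(32, 32, 32)):
    """Cut matching (raw, label) patches out of annotated bounding boxes.

    volume, label_volume: 3D numpy arrays of the EM data and the voxel-wise
    ground-truth classes (0 = background, 1 = synaptic cleft, ...).
    boxes: list of (origin, size) tuples in voxel coordinates.
    Note: hypothetical helper for illustration, not the Voxelytics API.
    """
    patches = []
    for origin, size in boxes:
        z, y, x = origin
        dz, dy, dx = size
        raw = volume[z:z + dz, y:y + dy, x:x + dx]
        labels = label_volume[z:z + dz, y:y + dy, x:x + dx]
        # Slide a patch window over the box (non-overlapping, for simplicity).
        for zi in range(0, dz - patch_shape[0] + 1, patch_shape[0]):
            for yi in range(0, dy - patch_shape[1] + 1, patch_shape[1]):
                for xi in range(0, dx - patch_shape[2] + 1, patch_shape[2]):
                    sl = (slice(zi, zi + patch_shape[0]),
                          slice(yi, yi + patch_shape[1]),
                          slice(xi, xi + patch_shape[2]))
                    patches.append((raw[sl], labels[sl]))
    return patches
```

Each patch pair then feeds the U-Net as an (input, target) example during training.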
That sounds amazing. So you run your ML model on any EM dataset, obtain a prediction and the work is done?
Not exactly. The next step is to evaluate how well the model performed, that is, how many of the predicted synapses are actual synapses. This is a challenging step, for which WEBKNOSSOS’ project management features are essential.
First, I run our AI toolbox Voxelytics to detect where physical contacts exist between pairs of neurons (a physical contact does not necessarily mean that there is a synapse). If a physical contact exists where the prediction indicates a synapse, a three-point annotation (pre, post, cleft) is created. We visualize these annotations in WEBKNOSSOS as skeletons.
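The contact-plus-prediction filtering described above can be sketched as follows. This is a simplified 2D sketch with a made-up function name, assuming a neuron segmentation array of integer IDs and a binary synapse prediction mask; the real pipeline works on 3D volumes and is far more efficient.

```python
import numpy as np

def contact_synapse_candidates(segmentation, synapse_mask):
    """Find faces where two different neuron IDs touch and the synapse
    prediction fires nearby; emit (pre_id, post_id, cleft_position) triples.

    Simplified 2D illustration, not the actual Voxelytics implementation.
    """
    candidates = []
    h, w = segmentation.shape
    # Check horizontal neighbors for a change of segment ID (= contact face).
    for y in range(h):
        for x in range(w - 1):
            a, b = segmentation[y, x], segmentation[y, x + 1]
            if a != b and a != 0 and b != 0:
                # Keep the contact only if the classifier predicts a cleft here.
                if synapse_mask[y, x] or synapse_mask[y, x + 1]:
                    candidates.append((int(a), int(b), (y, x)))
    return candidates
```

The resulting triples correspond to the 3-point (pre, post, cleft) annotations that get rendered as skeletons; the pre/post assignment here is arbitrary, whereas the real pipeline infers directionality.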
To evaluate the model, I run the predictions on an evaluation box and automatically compare my 3-point annotations with the ground-truth 3-point annotations done by the biologists. In some cases, I will have a look at the detected mistakes to try to understand how they happened.
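A minimal sketch of such an automated comparison could look like this: greedily match each predicted cleft position to an unmatched ground-truth position within a distance tolerance, then report precision and recall. The function name and tolerance are assumptions for illustration, not the actual evaluation code.

```python
import math

def evaluate_synapse_detection(predicted, ground_truth, tolerance=3.0):
    """Greedily match predicted cleft positions to ground-truth positions
    within a distance tolerance; return (precision, recall).

    predicted, ground_truth: lists of (z, y, x) cleft positions in voxels.
    Simplified stand-in for the automated comparison step.
    """
    unmatched_gt = list(ground_truth)
    true_positives = 0
    for p in predicted:
        best, best_dist = None, tolerance
        for g in unmatched_gt:
            d = math.dist(p, g)
            if d <= best_dist:
                best, best_dist = g, d
        if best is not None:
            # Each ground-truth synapse can be matched at most once.
            unmatched_gt.remove(best)
            true_positives += 1
    precision = true_positives / len(predicted) if predicted else 1.0
    recall = true_positives / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

Low precision points at spurious detections, low recall at missed synapses; both suggest different places to look in the debugging workflow described next.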
Finally, I discuss the result of the comparison with the scientists in order to decide how to proceed: e.g. fix errors in the ground truth, iterate on the model if it is not good enough, or run it on the whole dataset. Once the synapse detection has run on the complete dataset, I can visualize the detected synapses in WEBKNOSSOS (despite the huge size of the dataset) and share a link with the scientists.
Let’s take an example: imagine there is a synapse in the ground truth but none in my prediction. To find out what happened, I go through the different steps of the process in WEBKNOSSOS. I might realize that the synapse was in fact detected and appears in my prediction layer. However, the neuron segmentation might have been wrong at this location and did not indicate a physical contact. Due to the missing contact, the synapse is not marked as a 3-point prediction and therefore does not appear in the result.
Now, I know that I cannot fix this error in the synapse detection, but need to go back to the neuron reconstruction segmentation. With WEBKNOSSOS, this is as easy as activating additional layers, such as the neuron segmentation and the CNN predictions of the reconstruction.
I understand debugging a synapse classifier is quite a challenge. Thank you Valentin for sharing your insights!