Learning non-linear patch embeddings with neural networks for label fusion



Highlights

- We present a method to improve the discriminative abilities of patch-based label fusion.
- We use neural networks to learn optimal embeddings of image patches.
- Our method allows for embeddings of different complexities.
- Our method scales linearly with the number of atlases in both the training and test phases.
- Results show an improvement over standard PBLF even with the simplest embeddings.

Abstract

In brain structural segmentation, multi-atlas strategies are increasingly used over single-atlas strategies because of their ability to accommodate a wider anatomical variability. Patch-based label fusion (PBLF) is one such multi-atlas approach: it labels each target point as a weighted combination of neighboring atlas labels, where atlas points with higher local similarity to the target contribute more strongly to label fusion. PBLF can potentially be improved by increasing the discriminative power of the local image similarity measurements. We propose a framework that computes patch embeddings using neural networks, so as to increase the discriminative ability of similarity-based weighted voting in PBLF. As particular cases, our framework includes embeddings of different complexities, namely a simple scaling, an affine transformation, and non-linear transformations. We compare our method with state-of-the-art alternatives in whole-hippocampus and hippocampal-subfield segmentation experiments on publicly available datasets. Results show that even the simplest versions of our method outperform standard PBLF, evidencing the benefits of discriminative learning. More complex transformation models tended to achieve better results than simpler ones, yielding a considerable increase in average Dice score over standard PBLF.
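The similarity-weighted voting that PBLF performs, and the role of a learned patch embedding in it, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the function name `fuse_labels`, the Gaussian weighting kernel, and the embedding matrix `W` (identity for standard PBLF, a full matrix for an affine-style embedding) are assumptions made for the example.

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, W=None, beta=1.0):
    """Label a target point by similarity-weighted voting over atlas patches.

    W is an optional linear patch embedding; W=None (identity) corresponds
    to standard PBLF, while a learned W increases discriminability.
    """
    if W is None:
        W = np.eye(target_patch.size)  # identity embedding = standard PBLF
    t = W @ target_patch
    # Similarity weight: Gaussian kernel on the embedded patch distance.
    weights = np.array(
        [np.exp(-beta * np.sum((t - W @ p) ** 2)) for p in atlas_patches]
    )
    weights /= weights.sum()
    # Weighted vote: accumulate each atlas label's total weight.
    scores = {}
    for w, lab in zip(weights, atlas_labels):
        scores[lab] = scores.get(lab, 0.0) + w
    return max(scores, key=scores.get)
```

In the full method, `W` (or a non-linear network replacing it) would be trained so that patches with the same label map close together in the embedded space, which directly sharpens the voting weights above.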
