Multi-atlas label fusion by using supervised local weighting for brain image segmentation
- Cárdenas-Peña, David A. 1
- Fernández-Jóver, Eduardo 2
- Ferrández-Vicente, José M. 3
- Castellanos-Domínguez, César G. 1
1 Universidad Nacional de Colombia
2 Universidad Miguel Hernández de Elche
3 Universidad Politécnica de Cartagena
ISSN: 2256-5337, 0123-7799
Year of publication: 2017
Issue: May - August 2017
Volume: 20
Number: 39
Pages: 209-225
Type: Article
Published in: TecnoLógicas
Abstract
The automatic segmentation of structures of interest supports the morphological analysis of brain magnetic resonance imaging volumes. It demands significant effort due to the complicated shapes of the structures, the low contrast between tissues, and inter-subject anatomical variability. One aspect that reduces the accuracy of multi-atlas-based segmentation is the label fusion assumption of one-to-one correspondences between target and atlas voxels. To improve the performance of brain image segmentation, label fusion approaches incorporate spatial and intensity information through voxel-wise weighted voting strategies. Although the weights are assessed for a predefined atlas set, they are not very efficient for labeling intricate structures, since most tissue shapes are not uniformly distributed across the images. This paper proposes a methodology for voxel-wise feature extraction based on linear combinations of patch intensities. To the best of our knowledge, this is the first attempt to learn the features locally by maximizing the centered kernel alignment (CKA) function. Our methodology aims to build discriminative representations, deal with complex structures, and reduce image artifacts. The result is an enhanced patch-based segmentation of brain images. For validation, the proposed approach is compared against Bayesian-based and patch-wise label fusion on three different brain image datasets. In terms of the Dice similarity index, our proposal achieves the highest segmentation accuracy (90.3% on average), shows sufficient robustness to artifacts, and provides suitable repeatability of the segmentation results.
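The learning objective named in the abstract, centered kernel alignment (CKA), measures how well a kernel computed from learned features matches an ideal kernel derived from the labels. Below is a minimal NumPy sketch of the standard CKA formula; the function name, the Gaussian kernel on hypothetical patch features, and the label-kernel construction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """CKA between two (n, n) kernel matrices over the same n samples.

    Returns a scalar in [0, 1]; higher values mean the two kernels
    induce more similar notions of sample similarity.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H                # center both kernels
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

# Illustrative use: align a Gaussian kernel on hypothetical patch features
# against the ideal label kernel L[i, j] = 1 iff samples i and j share a label.
X = np.random.randn(50, 16)                      # hypothetical patch features
y = np.random.randint(0, 3, size=50)             # hypothetical voxel labels
d2 = np.sum((X[:, None] - X[None, :]) ** 2, -1)  # pairwise squared distances
K = np.exp(-d2 / d2.mean())                      # Gaussian (RBF) kernel
L = (y[:, None] == y[None, :]).astype(float)     # ideal label kernel
print(centered_kernel_alignment(K, L))
```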
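The voxel-wise weighted voting that the abstract refers to can be illustrated as follows. This is a generic patch-based fusion sketch: the Gaussian weighting and the bandwidth h are common choices in patch-based label fusion, assumed here for illustration; the paper's contribution is to replace such fixed intensity weights with locally learned, CKA-optimized features.

```python
import numpy as np

def fuse_voxel_label(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Weighted-voting label fusion at a single target voxel.

    target_patch : (p,) flattened intensity patch around the target voxel
    atlas_patches: (n_atlases, p) corresponding patches from registered atlases
    atlas_labels : (n_atlases,) label each atlas proposes for this voxel
    h            : bandwidth controlling how quickly weights decay (assumed)
    """
    # Patch dissimilarity -> voting weight (Gaussian weighting, a common choice)
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / h)
    # Accumulate the weight behind each candidate label; keep the heaviest
    votes = {}
    for label, weight in zip(atlas_labels, w):
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)
```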
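Segmentation accuracy is reported with the Dice similarity index, Dice(A, B) = 2|A ∩ B| / (|A| + |B|). For reference, a minimal implementation for binary masks (our own sketch, not the authors' evaluation code):

```python
import numpy as np

def dice_index(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(seg_a, dtype=bool), np.asarray(seg_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```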