Abstract—Accurate segmentation of brain tissues from magnetic resonance imaging (MRI) is of great importance for clinical applications and scientific research. Traditional strategies that process 2D slices fail to exploit the three-dimensional structure of the data. To overcome these issues, this paper proposes a tissue segmentation approach that combines supervoxel clustering with a novel 3D texture extraction method. First, three-dimensional simple linear iterative clustering (SLIC) is applied to reduce the number of objects to be processed. Then, a novel three-dimensional local binary pattern is proposed to better discriminate supervoxels belonging to different tissues. A clustering approach is also developed to classify supervoxels into different tissue types based on these features. The supervoxel labels are finally mapped back to the original data to obtain the tissue type of each voxel. The performance of the proposed method is evaluated on the widely used Internet Brain Segmentation Repository (IBSR) 18 dataset. The experiments show promising results even with limited training data.
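The pipeline summarized above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-image (>= 0.19) for 3D SLIC and scikit-learn for k-nearest-neighbor classification, and it uses simple per-supervoxel intensity statistics as a placeholder for the proposed 3D local binary pattern descriptor.

```python
# Hypothetical sketch of the described pipeline: 3D SLIC supervoxels,
# per-supervoxel features (stand-in for the proposed 3D LBP), k-NN
# classification of supervoxels, and mapping labels back to voxels.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def segment_volume(volume, train_feats, train_labels, n_segments=2000):
    """volume: 3D MRI array (Z, Y, X), intensities normalized to [0, 1]."""
    # Step 1: 3D SLIC reduces millions of voxels to a few thousand supervoxels.
    sv = slic(volume, n_segments=n_segments, compactness=0.1,
              channel_axis=None, start_label=0)

    # Step 2: compute a feature vector per supervoxel
    # (placeholder for the paper's 3D texture descriptor).
    ids = np.unique(sv)
    feats = np.array([[volume[sv == i].mean(), volume[sv == i].std()]
                      for i in ids])

    # Step 3: classify supervoxels into tissue types with k-NN.
    knn = KNeighborsClassifier(n_neighbors=5).fit(train_feats, train_labels)
    sv_labels = knn.predict(feats)

    # Step 4: map supervoxel labels back to the original voxel grid.
    return sv_labels[sv]
```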
Index Terms—Magnetic resonance imaging, brain tissue,
supervoxel, clustering, texture extraction, k-nearest neighbor.
Yongfan Liu is with Chien-shiung Wu College, Southeast University,
Nanjing, China. He is now with the Division of Continuing Education,
University of California, Irvine, P.O. Box 6050 USA (e-mail:
yongfal@uci.edu).
Sen Du and Youyong Kong are with the School of Computer Science and Engineering, Southeast University, Nanjing, China (Corresponding author: Youyong Kong; e-mail: silentchord@163.com, kongyouyong@seu.edu.cn).