Multi-modal image registration and atlas formation

Last Modified
  • March 21, 2019
Creator
  • Lorenzen, Peter Jonathan
    • Affiliation: College of Arts and Sciences, Department of Computer Science
Abstract
Medical images of human anatomy can be produced by a wide range of sensor technologies and imaging techniques, resulting in a diverse array of imaging modalities, such as magnetic resonance and computed tomography. Because the physical properties of the acquisition process differ across modalities, each modality elicits different tissue structures, and images from multiple modalities provide complementary information about the underlying anatomy. Understanding anatomical variability is often important in studying disparate population groups and typically requires robust dense image registration. Traditional registration methods find a mapping between two scalar images and therefore cannot exploit the complementary information carried by sets of multi-modal images.

This dissertation presents a Bayesian framework for generating inter-subject large deformation transformations between two multi-modal image sets of the brain. The estimated transformations use the maximal information about the underlying neuroanatomy present in each of the different modalities. This modality-independent registration framework is achieved by jointly estimating the class posterior probabilities associated with the multi-modal image sets and the high-dimensional transformations relating these posteriors. To make maximal use of the information present in all modalities, registration minimizes the Kullback-Leibler divergence between the estimated posteriors. The framework is then extended to large deformation multi-class posterior atlas estimation: from an arbitrary number of topologically similar multi-modal image sets, the method generates a representative anatomical template, defined as the class posterior requiring the least deformation energy to be transformed into every class posterior (each characterizing one multi-modal image set).
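The registration criterion described above, minimizing the Kullback-Leibler divergence between estimated class posteriors, can be sketched numerically as follows. This is an illustrative toy computation, not the dissertation's implementation: the function name, the flattened `(n_voxels, n_classes)` layout, and the random posterior fields are all assumptions for demonstration.

```python
import numpy as np

def kl_divergence_posteriors(p, q, eps=1e-12):
    """Mean voxelwise KL divergence D(p || q) between two class-posterior
    fields of shape (n_voxels, n_classes). Posteriors are clipped away
    from zero for numerical stability before taking logs."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

# Toy example: two 3-class posterior fields over 4 voxels, drawn from a
# Dirichlet so each voxel's class probabilities sum to one.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(3), size=4)
q = rng.dirichlet(np.ones(3), size=4)

d_pq = kl_divergence_posteriors(p, q)  # positive for differing posteriors
d_pp = kl_divergence_posteriors(p, p)  # zero when posteriors coincide
```

In a registration setting, `q` would be the moving image set's posterior field warped by the current transformation, and the divergence would be driven toward zero as the optimization proceeds.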
The method is computationally practical in that computation time grows linearly with the number of image sets. The multi-class posterior atlas formation method is applied to a database of multi-modal images from ninety-five adult brains, as part of a healthy aging study, to produce 4D spatiotemporal atlases for the female and male subpopulations. The stability of the atlases is evaluated based on the entropy of their class posteriors, and both global volumetric trends and local volumetric change are evaluated. This multi-modal framework has potential applications in many natural multi-modal imaging environments.
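The entropy-based stability measure mentioned above can be illustrated with a short sketch. The helper name and the toy posterior fields are hypothetical; the point is only that near one-hot (sharp) class posteriors have low Shannon entropy, while diffuse posteriors approach the maximum of log(n_classes).

```python
import numpy as np

def mean_class_entropy(posteriors, eps=1e-12):
    """Average Shannon entropy in nats of a class-posterior field of
    shape (n_voxels, n_classes); lower entropy indicates a sharper,
    more stable atlas."""
    p = np.clip(posteriors, eps, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

# A sharp atlas (near one-hot posteriors) versus a fully diffuse one.
sharp = np.array([[0.98, 0.01, 0.01],
                  [0.01, 0.98, 0.01]])
diffuse = np.full((2, 3), 1.0 / 3.0)

h_sharp = mean_class_entropy(sharp)
h_diffuse = mean_class_entropy(diffuse)  # uniform case: log(3) nats
```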
Rights statement
  • In Copyright
Advisor
  • Joshi, Sarang C.
Degree granting institution
  • University of North Carolina at Chapel Hill
Access
  • Open access
