Nonlinear registration of 2D histological sections with corresponding slices of MRI data is a critical step in 3D histology reconstruction. This registration is difficult due to the large differences in image contrast and resolution, as well as the complex nonrigid deformations and artefacts produced when sectioning the sample and mounting it on the glass slide. It has been shown in brain MRI registration that better spatial alignment across modalities can be obtained by synthesising one modality from the other and then using intra-modality registration metrics, rather than by using information-theoretic metrics to solve the problem directly. However, such an approach typically requires a database of aligned images from the two modalities, which is very difficult to obtain for histology and MRI.
Here, we overcome this limitation with a probabilistic method that simultaneously solves for deformable registration and synthesis directly on the target images, without requiring any training data. The method is based on a probabilistic model in which the MRI slice is assumed to be a contrast-warped, spatially deformed version of the histological section. We use approximate Bayesian inference to iteratively refine the probabilistic estimates of the synthesis and the registration, while accounting for each other's uncertainty. Moreover, manually placed landmarks can be seamlessly integrated into the framework for increased performance and robustness.
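The alternating structure of the inference can be illustrated with a toy 1D example. This is only a sketch of the general idea, not the paper's actual model: it replaces approximate Bayesian inference with point estimates, the contrast warp with a fitted cubic polynomial, and the deformation field with a single integer shift recovered by brute-force SSD search; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "section": the MRI profile is a contrast-warped,
# spatially shifted version of the histology profile, plus noise.
x = np.linspace(0.0, 1.0, 400)
hist = np.sin(6 * x) + 0.5 * np.sin(17 * x)   # "histology" intensity profile

def contrast_warp(v):
    # Unknown (to the algorithm) monotonic-ish contrast mapping.
    return 2.0 * v**3 - 0.5 * v + 1.0

true_shift = 7                                 # unknown deformation (in samples)
mri = contrast_warp(np.roll(hist, true_shift)) + 0.02 * rng.standard_normal(x.size)

def best_shift(src, tgt, max_shift=20):
    """Intra-modality registration: brute-force SSD over integer shifts."""
    shifts = list(range(-max_shift, max_shift + 1))
    errs = [np.mean((np.roll(src, s) - tgt) ** 2) for s in shifts]
    return shifts[int(np.argmin(errs))]

# Alternate the two steps: refine the synthesis given the current
# registration, then re-register using the synthesised image.
shift = 0
for _ in range(5):
    # Synthesis step: regress MRI intensities on (currently) aligned
    # histology intensities with a cubic polynomial.
    coeffs = np.polyfit(np.roll(hist, shift), mri, 3)
    synth = np.polyval(coeffs, hist)           # MRI-like version of histology
    # Registration step: intra-modality SSD matching of synth to MRI.
    shift = best_shift(synth, mri)

print(shift)  # recovered deformation
```

Even though the contrast mapping is fitted from initially misaligned data, the residual error is small enough for the SSD registration to lock onto the true shift, after which the synthesis is re-fitted on correctly aligned intensities; this mutual refinement is the same mechanism the full model exploits, with uncertainty propagated between the two steps.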
Experiments on a synthetic dataset of MRI slices show that, compared with mutual-information-based registration, the proposed method can use a much more flexible deformation model to improve registration accuracy without compromising robustness. Moreover, our framework exploits information in manually placed landmarks more efficiently than mutual information: landmarks constrain the deformation field in both methods, but in our algorithm they also improve the synthesis, which in turn further improves the registration. We also show results on two real, publicly available datasets: the Allen and BigBrain atlases. On both, the proposed method provides a clear improvement over mutual-information-based registration, both qualitatively (visual inspection) and quantitatively (registration error measured with pairs of manually annotated landmarks).