Supervisor: @Magdalena Wysocki

Author: Xingyu Zhang 

Introduction

Deformable Image Registration

Image registration is a process that involves aligning corresponding semantic regions in two or more images acquired with different imaging modalities or at separate points in time [1]. It attempts to find a spatial mapping between different images by placing them in a shared coordinate space while maximizing local correspondences of image content [2]. It plays an important role in many image analysis pipelines, e.g. motion analysis in time-series data [3], joint analysis of multi-modal images [4], and tracking of disease progression [5].

Unlike rigid registration, which only accounts for linear transformations such as translations and rotations, deformable registration involves non-rigid deformations that can be motion-induced or result from compression or bending. In medical imaging, such deformations can be changes in soft tissue induced by, e.g., breathing, heartbeats, or bowel movement.

Conventional methods typically employ mathematical models to define the deformation field, which describes how points in one image move to match the corresponding ones in another. Alignment is achieved by pairwise instance optimization that iteratively minimizes dissimilarity metrics over a space of transformations [6, 7]. These conventional iterative methods have proven effective, but they often require significant computational resources and can struggle with noise or severe deformations [1].

Recent advances in data-driven methods implemented with deep learning architectures such as CNNs enable the learning of image correspondence over a dataset of image pairs to predict dense displacement fields [8] or diffeomorphisms [9]. These methods offer fast inference times but suffer from reduced accuracy [10]. Moreover, they are often resolution-dependent and can fail to generalize to out-of-distribution data [11].

Implicit Neural Representations

Implicit neural representations (INRs) are a relatively new approach to represent continuous signals, such as images or transformations, as a function stored in the weights of a multi-layer perceptron (MLP) [12]. This network implicitly defines the signal through its function, which operates on continuous coordinates rather than grid-bound image values [13].

In recent studies, INRs are adopted for image registration [14], where a neural representation

(1) \begin{gather} \Phi(\bar{x}) = \nu(\bar{x}) + \bar{x} \end{gather}

is defined to map coordinates from the source domain \(S\) to the target domain \(T\), where \(\nu(\bar{x})\) is a sinusoidal representation network (SIREN) [12]. Unlike CNN-based methods, where image information is directly processed through convolutional layers, image information enters the INR only through backpropagation of gradients from a loss function that relates coordinates to pixel values [11]. This allows INRs to represent complex, high-dimensional signals compactly and continuously.
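A minimal NumPy sketch of Equation 1 with a single-hidden-layer SIREN may make this concrete (layer sizes, \(\omega\), and the initialization below are illustrative choices, not those of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, omega = 3, 32, 30.0

# Illustrative SIREN-style weights (uniform initialization, small output scale).
W1 = rng.uniform(-1.0 / d_in, 1.0 / d_in, (d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.uniform(-np.sqrt(6.0 / d_hidden) / omega,
                 np.sqrt(6.0 / d_hidden) / omega, (d_hidden, 3))
b2 = np.zeros(3)

def nu(x):
    """Displacement field predicted by the SIREN: sin(omega * (x W1 + b1)) W2 + b2."""
    return np.sin(omega * (x @ W1 + b1)) @ W2 + b2

def phi(x):
    """Transformation of Equation 1: Phi(x) = nu(x) + x."""
    return nu(x) + x

coords = rng.uniform(-1.0, 1.0, (5, 3))  # a batch of sampled 3D coordinates
print(phi(coords).shape)  # (5, 3): one mapped coordinate per input coordinate
```

Note that the network never sees intensity values; in actual optimization, image content influences the weights only via the gradients of the registration loss.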

Motivation

INRs offer several advantages. As continuous, meshless functions, they are not limited by resolution, making them more memory-efficient than discrete representations. Such continuity also allows spatial gradients to be computed analytically via automatic differentiation, which is more accurate than finite-difference approximations. Moreover, INRs do not require large training datasets, as they are optimized per image pair.
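The gradient-accuracy claim can be illustrated with a toy 1D signal; the sketch below (my own, not from the papers) contrasts finite differences on a sampled grid with the exact derivative available from a continuous model:

```python
import numpy as np

# Toy continuous model: f(x) = sin(x), whose derivative cos(x) is available
# in closed form (for an INR, via automatic differentiation).
f, df_exact = np.sin, np.cos

x = np.linspace(0.0, np.pi, 50)
h = x[1] - x[0]  # grid spacing of a discrete (voxel-based) representation

# Central finite differences on the sampled grid (interior points only).
df_fd = (f(x[2:]) - f(x[:-2])) / (2.0 * h)

# The discrete estimate carries an O(h^2) discretization error;
# the continuous model has none.
fd_error = np.max(np.abs(df_fd - df_exact(x[1:-1])))
print(fd_error)  # small but nonzero discretization error
```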

However, INR-based methods face several challenges. First, non-rigid deformations are complex and hard to capture, particularly when they involve intricate geometries or multiple sources of motion [2]. Second, INR-based methods can suffer from spatial folding, where different spatial locations are mapped to the same or similar representations within the neural network, leading to inaccurate or physiologically implausible results [15]. Third, the optimization process for INRs is non-convex and highly sensitive to initialization and hyperparameter settings, which may cause the optimization to collapse into local minima, making it difficult to achieve globally optimal solutions and potentially hindering model performance [11].

To overcome these limitations, recent studies attempt to integrate conventional or new concepts into INRs. The following sections examine the latest advancements in deformable image registration using INRs, each addressing one of the aforementioned limitations.

Paper Summaries

In this section, unless otherwise specified, the respective ideas and figures are adapted from the corresponding papers.

Deformable Image Registration with Geometry-informed Implicit Neural Representations

Research Problem

The optimization target of an INR is an estimated analytic function that maps \(\mathbb{R}^3\rightarrow\mathbb{R}^3\). The complexity of this function influences the neural network's required capacity and the computational effort for optimization. Considering that in medical motion analysis tasks deformation fields are often somewhat predictable due to the geometric constraints imposed by muscles and tissues, this paper proposes a geometry-informed approach that aligns the coordinate system with the dominant motion, with the aim of reducing the complexity of the optimized deformation function and thereby improving the precision and efficiency of the registration process.

Dataset

This study utilizes an abdominal 4D MRI dataset comprising scans from 14 healthy volunteers and 10 inflammatory bowel disease (IBD) patients characterised by reduced intestinal motility. Each scan consists of volumetric sequences acquired at a rate of 1.0 volume per second during a breath-hold. The small intestine centerline is annotated and extracted from the initial timepoint using an existing method [16].

Methods

To reduce the complexity of the deformation function, the coordinate system is conditioned on the centerline curve of the small intestine, and optimization is performed in the tangent space of this curve. As shown in Figure 1, the transformation \(R\) maps the coordinate space of the images \(xyz\) into a tangent frame \(uvw\) that is aligned with the centerline curve. Since the dominant motion of bowel motility consists of contractions aligned with the intestines, such a method may result in a simpler analytic function for the motion field without explicitly encoding the curve \(w\).

The INRs are represented in a similar form as Equation 1, except that \(\bar{x}\) is sampled not from the image space \(xyz\) but from the tangent space \(uvw\). The optimization objective for the tangent space model is:

\begin{align}
L &= L^{data} + \alpha L^{jac}\\
&= \frac{1}{bs} \sum_{i=1}^{bs} \left( -NCC\left( T[R^{-1}(\bar{x}_i)], S[R^{-1}(\Phi(\bar{x}_i))] \right) + \alpha \left|1 - \det \nabla \Phi[\bar{x}_i] \right| \right),
\end{align}

where \(\alpha\) is a weighting factor, \(bs\) is the batch size, and NCC is the normalized cross-correlation.
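A minimal sketch of this objective on flattened intensity samples (illustrative, not the authors' implementation; the Jacobian determinants are taken as given rather than differentiated through the network):

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation of two flattened intensity samples."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return np.mean(a * b)

def loss(target_vals, warped_source_vals, jac_dets, alpha=0.05):
    """Batch loss: -NCC(T, S o Phi) + alpha * mean |1 - det grad Phi|."""
    return -ncc(target_vals, warped_source_vals) + alpha * np.mean(np.abs(1.0 - jac_dets))

rng = np.random.default_rng(1)
t = rng.normal(size=1000)
# Perfect alignment with a volume-preserving transformation gives loss ~ -1.
print(round(loss(t, t, np.ones(1000)), 6))  # -1.0
```

The Jacobian term pushes the local volume change of the deformation toward 1, discouraging folding and extreme compression or expansion.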

Additionally, for comparison, a combined model that operates simultaneously on both the image space and the tangent space coordinates is proposed, where the resulting deformation vectors are combined to obtain the final result. 

Experiments and Results

Pairwise registration is performed for each bowel segment in the dataset to register the first timepoint to every other timepoint in the sequence. Three methods are compared: the baseline image space registration, the tangent space registration, and the combined image-tangent space registration. 

For quantitative analysis of the impact of the centerline prior, the mean absolute error (MAE) and the structural similarity metric (SSIM) are evaluated after each iteration between the target image and the transformed source image within a foreground mask, i.e. a tube around the centerline with a diameter of 40 mm. This diameter is twice that of a typical non-contracted small intestine, chosen to accommodate possible inaccuracies in the extracted centerlines and possible pathological distension.

Figure 3 shows the quantitative results in terms of SSIM for the optimization performance at different time intervals between the acquisition times of the registered images. In all scenarios, both proposed methods achieve better performance in the first 150 iterations. However, for small time intervals, the baseline image space registration eventually catches up with the tangent space registration. For large time intervals, the average improvement is positive for healthy volunteers but negative for IBD patients, particularly in the presence of severe breathing artifacts. The combined method, however, consistently demonstrates positive improvements across all assessed scenarios.


Figure 4 shows three qualitative results of the registration in image space and the tangent space. When the dominant motion in the bowel loop is caused by its motility, the tangent space registration is beneficial (A) with lower MAE. However, the proposed method is not beneficial (B) when the centerline is inaccurately extracted or motility is absent. Moreover, when breathing becomes the dominant motion, e.g. when the subject is unable to maintain the breath-hold, the proposed method appears to be harmful (C) with deteriorated metrics.

SINR: Spline-enhanced implicit neural representation for multi-modal registration

Research Problem

INR-based methods use sinusoidal activations (SIRENs), which are controlled by a frequency term \(\omega\). High \(\omega\) values can lead to spatial folding, which requires explicit regularization to enforce smoothness at the cost of registration accuracy. Moreover, random coordinate sampling may affect convergence and performance; as a remedy, the use of a mask has been suggested to prioritize regions of interest [14]. However, prioritized sampling does not scale well to multi-modal registration, where information-based metrics such as normalized mutual information (NMI) are often used, which are computationally intensive and require large coordinate batches. To overcome these challenges, this paper proposes a Spline-enhanced INR (SINR) that incorporates Free-Form Deformations (FFD) [7] to parameterize the implicit representation of deformable transformations using only spatially sparse FFD control points. The aim is to achieve smoother transformations without losing accuracy, reduced sensitivity to the choice of SIREN frequency, and a decreased computational burden.

Dataset

This work evaluates inter-subject brain registration using the Cam-CAN dataset [17, 18]. The dataset consists of 310 T1w and T2w 3D MR volumes with 1 mm³ isotropic spatial resolution.

Methods

FFD enables non-rigid transformations by flexibly altering images through adjusting control points within a parametric space. By establishing a mesh of control points over the spatial domain of the image volume, B-spline-based FFD models parameterize a deformable transformation between two images [7]. Given a uniform control-point spacing \(\delta\), the FFD displacement at a given point \((x, y, z)\) can be formulated as:
\begin{gather}
\label{eqn:ffd}
\mathbf{u}(x, y, z) = \sum_{l=0}^{3} \sum_{m=0}^{3} \sum_{n=0}^{3} B_l(u) B_m(v) B_n(w) c_{i+l, j+m, k+n},
\end{gather}
where \((i, j, k)\) are the indices of the control point that is closest to the origin in the control point cube that encloses \((x, y, z)\) (enclosing cube), \(B\) are the B-spline basis functions as presented in Equation 2, and \((u, v, w)\) are the normalized local coordinates of \((x, y, z)\) in the enclosing cube.

(2) \begin{equation} \begin{aligned} B_0(u) &= \frac{(1-u)^3}{6}, \quad B_1(u) = \frac{3u^3 - 6u^2 + 4}{6}, \\ \quad B_2(u) &= \frac{-3u^3 + 3u^2 + 3u + 1}{6}, \quad B_3(u) = \frac{u^3}{6} \end{aligned} \end{equation}
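As a sanity check, the basis functions in Equation 2 can be implemented directly; a small NumPy sketch (my own, not from the paper) verifying their partition-of-unity property:

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 of Equation 2, for u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6,
        (3 * u**3 - 6 * u**2 + 4) / 6,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6,
        u**3 / 6,
    ])

u = np.linspace(0.0, 1.0, 11)
B = bspline_basis(u)
# Partition of unity: the four basis functions always sum to 1, so a constant
# control-point field reproduces a constant deformation exactly.
print(np.allclose(B.sum(axis=0), 1.0))  # True
```

This property, together with the non-negativity of the basis, is what makes the interpolated deformation smooth and well-behaved between control points.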

The proposed SINR method combines spline-based FFD with INRs. As shown in Figure 5, given a densely sampled image, SINR first selects a subset of coordinates as control points. During fitting, the network optimizes the parameters \(\theta\) of the INR to approximate the transformation \(\phi(\mathbf{x}_{\text{cp}}) = \mathbf{x}_{\text{cp}} + u(\mathbf{x}_{\text{cp}})\) between the fixed and moving images, where \(\mathbf{x}_{\text{cp}} \in \Omega\) are the control point coordinates and \(u(\mathbf{x}_{\text{cp}}) = f_\theta(\mathbf{x}_{\text{cp}})\) are the displacements predicted at the control point coordinates. Finally, B-spline basis functions are used to interpolate between the control points of the lattice to obtain the dense deformation field.
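To make the interpolation step concrete, here is a 1D analogue of the FFD evaluation (an illustrative sketch under the formula above, not the paper's implementation; the function names are my own):

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline basis functions B0..B3 (Equation 2)."""
    return np.array([(1 - u)**3 / 6,
                     (3 * u**3 - 6 * u**2 + 4) / 6,
                     (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6,
                     u**3 / 6])

def ffd_1d(x, control, delta):
    """Displacement at x from a 1D lattice of control-point values with
    uniform spacing delta: u(x) = sum_l B_l(u) * c_{i+l}."""
    t = x / delta
    i = int(np.floor(t)) - 1      # index of the first of the 4 supporting points
    u = t - np.floor(t)           # normalized local coordinate in [0, 1)
    B = bspline_basis(u)
    return float(np.dot(B, control[i:i + 4]))

# A constant lattice of displacements yields a constant field, thanks to the
# partition-of-unity property of the basis.
print(round(ffd_1d(3.3, np.full(12, 0.5), delta=1.0), 9))  # 0.5
# Cubic B-splines also reproduce linear fields exactly.
print(round(ffd_1d(3.3, np.arange(12.0), delta=1.0), 9))   # 3.3
```

The 3D case of the paper applies the same construction along each axis, which is why only the sparse control-point displacements need to be predicted by the INR.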

Experiments and Results

To investigate the effectiveness of SINR for multi-modal image registration, particularly for brain MRI data from the CamCAN dataset, the study compares SINR against a conventional iterative method, MIRTK [19]; two contemporary deep learning approaches, VoxelMorph (VMorph) [9] and MIDIR [20]; and an INR-based method, IDIR [14], with both SIREN and ReLU tested as activation functions. As ground-truth deformations are unknown, accuracy is measured by assessing the overlap between segmented anatomical structures using the Dice score. The extent of folding caused by the transformation is measured by the percentage of points with \(J = \det \nabla \phi < 0\).
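The folding metric can be approximated on a discrete grid with finite differences; a 2D sketch (the papers evaluate the Jacobian analytically from the INR, so this is only an illustration of the criterion):

```python
import numpy as np

def folding_ratio(phi):
    """Percentage of grid points where the Jacobian determinant of a 2D
    transformation is negative. phi has shape (H, W, 2): mapped (row, col)
    coordinates at every grid point; derivatives via finite differences."""
    d_row = np.gradient(phi, axis=0)
    d_col = np.gradient(phi, axis=1)
    det = d_row[..., 0] * d_col[..., 1] - d_row[..., 1] * d_col[..., 0]
    return 100.0 * np.mean(det < 0)

ys, xs = np.mgrid[0:32, 0:32].astype(float)
identity = np.stack([ys, xs], axis=-1)
print(folding_ratio(identity))  # 0.0: the identity transformation has no folding
```

A mirrored or self-intersecting deformation flips the sign of the determinant, which is exactly what the folding percentage counts.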

Figure 6 presents the mean Dice score and folding rate of the experiments. Notably, SINR with SIREN consistently achieves the highest mean Dice scores, outperforming other methods in both T1w-T1w and T1w-T2w registration tasks. In addition, as shown in Figure 7, the proposed method demonstrates improved performance in almost all individual classes of brain structures, except that MIRTK obtains slightly higher Dice scores for Noncortical GM in mono-modal registration and comparable results for White Matter in multi-modal registration.

For the folding ratio, although SINR with SIREN activations has a marginally higher folding ratio compared to non-INR-based methods, it still achieves a lower folding ratio than the baseline IDIR. In addition, it produces qualitatively smoother and more accurate transformations than IDIR, as shown in Figure 8.

The results also show the impact of hyperparameter \(\omega\) on the Dice score when the folding ratio is maintained below 0.9%. As shown in Figure 9, SINR displays robustness to variations in \(\omega\), achieving consistently high Dice scores across different values of \(\omega\), which indicates the method's stability and reliability.

Robust Deformable Image Registration Using Cycle-Consistent Implicit Representations

Research Problem

Traditional methods, including deep-learning-based techniques, often suffer from fragility to out-of-distribution data and require human quality control, which is impractical for real-time analysis or large-scale tasks. For INR-based methods, the optimization process is non-convex and highly sensitive to initialization and hyperparameter settings, which may lead to collapse of optimization into local minima. To improve the robustness of registration using INRs, this study introduces cycle-consistency to the INR optimization process to ensure accurate and reliable alignment without the need for extensive manual intervention.

Dataset

Two datasets are used in this work to evaluate the proposed method: the DIR-Lab 4D Lung CT dataset and an abdominal 4D MRI dataset. The DIR-Lab dataset contains 10 axial 4D lung CTs with 10 time points across a shallow breathing cycle. Manually annotated lung landmarks at maximum inspiration and expiration are available as ground truth for evaluation.

The abdominal 4D MRI dataset includes scans of 14 healthy volunteers obtained during a breath-hold to capture the motion of the intestines. Centerline segment annotations are available for one volume in each sequence.

Method

This paper introduces a cycle-consistent optimization framework that involves two INRs: one estimating the forward transformation, which maps coordinates in the target domain to the source domain, and one estimating the backward transformation, from the source domain to the target domain.

The optimization objective includes data losses, regularization terms based on the Jacobian determinant, and cycle-consistency losses. The total loss function is given by:
\begin{equation}
L_{\text{total}} = L_{F}^{data} + \alpha L_{\phi_{F}}^{reg} + \beta L_{F\rightarrow B}^{cycle} + L_{B}^{data} + \alpha L_{\phi_{B}}^{reg} + \beta L_{B\rightarrow F}^{cycle},
\end{equation}
where \(\Phi_F\) parameterizes the forward transformation and \(\Phi_B\) the backward transformation, and \(\alpha\) and \(\beta\) are the weighting factors for the regularization terms. A schematic overview is shown in Figure 10.

The data losses are defined as:
\begin{equation}
\begin{aligned}
L_{F}^{\text{data}} &= \frac{2}{bs} \sum_{i=1}^{bs/2} - \text{NCC}(S[\vec{x}_i], T[\Phi_{F}(\vec{x}_i)]),\\
L_{B}^{\text{data}} &= \frac{2}{bs} \sum_{i=bs/2}^{bs} - \text{NCC}(T[\vec{x}_i], S[\Phi_{B}(\vec{x}_i)]),
\end{aligned}
\end{equation}
where \(bs\) is the batch size and \(\vec{x}_i\) are sampled coordinates, half drawn from the valid domain of the source image (\(S\)) and half from that of the target image (\(T\)).

For regularization, a symmetric Jacobian determinant regularization is defined:
\begin{equation}
L^{sjac}[\Phi] = \frac{1}{bs} \sum_{i=1}^{bs} \min \left( \frac{(\det \nabla \Phi [\vec{x}_i] - 1)^2}{\det \nabla \Phi [\vec{x}_i]}, \tau \right),
\end{equation}
where \(\tau=10 \) is used to clip the penalty. For comparison, bending energy penalty [7] is also tested.
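The behaviour of this regularizer can be checked numerically; a small sketch (illustrative, assuming positive determinants):

```python
import numpy as np

def sym_jac_penalty(dets, tau=10.0):
    """Symmetric Jacobian determinant penalty with clipping at tau.
    (det - 1)^2 / det takes the same value for det = c and det = 1/c, so
    expansion and compression are penalized symmetrically."""
    return np.mean(np.minimum((dets - 1.0) ** 2 / dets, tau))

print(sym_jac_penalty(np.array([1.0])))   # 0.0: identity, no penalty
print(sym_jac_penalty(np.array([2.0])))   # 0.5: doubling the local volume
print(sym_jac_penalty(np.array([0.5])))   # 0.5: halving it costs the same
print(sym_jac_penalty(np.array([1e-6])))  # 10.0: extreme compression, clipped at tau
```

The clipping at \(\tau\) keeps a few badly-behaved samples from dominating the batch gradient.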

For the cycle-consistency terms:

\begin{equation}
\begin{aligned}
L^{cycle}_{F \to B} &= \frac{2}{bs} \sum_{i=1}^{bs/2} [\Phi_{B}(\Phi_{F}(\vec{x}_i)) - \vec{x}_i]^2 \\
L^{cycle}_{B \to F} &= \frac{2}{bs} \sum_{i=bs/2}^{bs} [\Phi_{F}(\Phi_{B}(\vec{x}_i)) - \vec{x}_i]^2,
\end{aligned}
\end{equation}
which penalize the squared norm of the cycle-error vectors, preventing large spatial fluctuations in the vector fields.
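A minimal sketch of the cycle-consistency penalty with toy transformations (my own illustration, not the paper's code):

```python
import numpy as np

def cycle_loss(phi_a, phi_b, coords):
    """Mean squared norm of the cycle-error vectors phi_b(phi_a(x)) - x."""
    err = phi_b(phi_a(coords)) - coords
    return float(np.mean(np.sum(err ** 2, axis=-1)))

# Toy transformations that are exact inverses: translation by +t and -t.
t = np.array([0.3, -0.1, 0.2])
phi_f = lambda x: x + t
phi_b = lambda x: x - t

coords = np.random.default_rng(2).uniform(-1.0, 1.0, (100, 3))
print(cycle_loss(phi_f, phi_b, coords) < 1e-28)  # True: perfect cycle consistency
```

Any deviation of the two fields from being mutual inverses shows up directly as a positive loss, which is what couples the two INRs during optimization.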

During inference, as shown in Figure 11, (1) \(\Phi_F\) first transforms each coordinate \(\vec{x}\) from \(T\) to \(S\), and then (2) \(\Phi_B\) transforms each coordinate back to \(T\), (3) which yields the cycle-error vector \(\vec{x} - \Phi_{B}(\Phi_{F}(\vec{x}))\). (4) A Taylor expansion is then used to estimate \(\Phi_{B}^{-1}\) around each coordinate \(\Phi_{B}(\Phi_{F}(\vec{x}))\):
\begin{equation}
\begin{aligned}
\Phi_{B}^{-1}(\vec{x}) = &\Phi_{B}^{-1}[\Phi_{B}(\Phi_{F}(\vec{x}))] + \\
&\nabla \Phi_{B}^{-1}[\Phi_{B}(\Phi_{F}(\vec{x}))] \cdot (\vec{x} - \Phi_{B}(\Phi_{F}(\vec{x}))) +\\ 
&\frac{1}{2} \nabla^2 \Phi_{B}^{-1}[\Phi_{B}(\Phi_{F}(\vec{x}))] \cdot (\vec{x} - \Phi_{B}(\Phi_{F}(\vec{x})))^2.
\end{aligned}
\end{equation}
(5) Finally, a consensus is evaluated at the midpoint of these two estimates:
\begin{equation}
\vec{x}_{result} = \frac{1}{2}\left(\Phi_{F}(\vec{x}) + \Phi_{B}^{-1}(\vec{x}) \right),
\end{equation}
and the norm of the cycle-error vector can represent the uncertainty of the resulting deformation vector.
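The inference steps (1)-(5) can be sketched in 1D with affine toy transformations, truncating the Taylor expansion at first order (an illustrative simplification of the scheme above; for affine maps the first-order estimate is already exact):

```python
import numpy as np

# Toy 1D forward (T -> S) and backward (S -> T) transformations; deliberately
# not exact inverses, mimicking two independently optimized INRs.
phi_f = lambda x: 1.02 * x + 0.10
phi_b = lambda x: x - 0.12

def phi_b_inverse_first_order(x, h=1e-5):
    """First-order Taylor estimate of Phi_B^{-1} around Phi_B(Phi_F(x))."""
    z = phi_f(x)            # (1) map x through the forward model
    y = phi_b(z)            # (2) map back: cycle point close to x
    err = x - y             # (3) cycle-error vector
    # grad Phi_B^{-1} at y equals 1 / Phi_B'(z); estimated by central differences
    dphi_b = (phi_b(z + h) - phi_b(z - h)) / (2.0 * h)
    return z + err / dphi_b  # (4) Phi_B^{-1}(y) = z, plus the first-order correction

x = 0.5
estimate_f = phi_f(x)                          # forward estimate
estimate_b = phi_b_inverse_first_order(x)      # inverted-backward estimate
consensus = 0.5 * (estimate_f + estimate_b)    # (5) midpoint consensus
uncertainty = abs(estimate_f - estimate_b)     # disagreement as an uncertainty proxy
print(consensus, uncertainty)  # consensus ~0.615, uncertainty ~0.01
```

When the two INRs agree, the disagreement vanishes; where they do not, its magnitude flags locally unreliable deformations, which is what enables automatic quality control.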

Experiments and Results

To evaluate the performance of the proposed cycle-consistent INRs for deformable image registration, the first set of experiments involves lung CT data, comparing the registration accuracy and robustness of cycle-consistent INRs against non-cycle-consistent regularization methods. In these experiments, the lungs in the inspiration phase are registered to the lungs in the expiration phase, and vice versa. The mean landmark registration error is measured for quantitative analysis.

Results showed that the cycle-consistent INRs achieved improved registration accuracy and robustness compared to traditional methods. Specifically, using the cycle-consistent regularization can significantly reduce the mean landmark registration error and decrease the number of failed registrations, as illustrated in Figure 12. Figure 13 shows the comparison with other existing methods, further demonstrating the improved performance of the cycle-consistent INRs.

To examine the sensitivity of the proposed method to different hyperparameters and design choices, a sensitivity analysis is conducted by measuring the target registration error (TRE) and the failure rate. As shown in Figure 14, the results indicate that the proposed method is relatively insensitive to hyperparameter changes, consistently outperforming the single-INR baseline in terms of both TRE and reliability, with only one exception when ReLU activation is used.

In addition, an experiment is conducted on small intestine registration to propagate the intestinal centerline across multiple timepoints. Figure 15 provides a comparison of the mean propagation discrepancy across different runs, showing that the cycle-consistent INRs achieve lower discrepancies and more consistent results compared to single INRs.

Criticism

The three methods are compared below.

Geometry-informed INR

Added element: geometry prior
Code availability: https://github.com/Louisvh/tangent_INR_registration_3D
Tested dataset: abdominal 4D cine-MRI dataset
Tested modality: mono-modal

Advantages:
  • improved registration accuracy for bowel loops with active motility when the intestine geometry is explicitly encoded;
  • accuracy further improved with the combined model that operates on both image and tangent space;
  • improved efficiency when the motion of interest is influenced by the anatomical geometric prior;
  • potential applications in other areas, e.g. cardiac motion, ischemic stroke follow-up.

Limitations:
  • additional resources required for centerline extraction;
  • limited or negative impact when centerline extraction is inaccurate or breathing motion is dominant;
  • unclarified degree of impact of an inaccurate centerline/geometry extraction on the performance of the model, limiting its clinical use;
  • generalisability questionable without further tests on different datasets.

SINR

Added element: spline-based FFD
Code availability: https://github.com/vasl12/SINR
Tested dataset: CamCAN dataset (brain)
Tested modality: mono- and multi-modal

Advantages:
  • higher Dice scores than conventional and CNN-based methods;
  • more efficient calculation of NMI thanks to FFD control-point sparsity, enabling multi-modal INR-based registration for the first time;
  • reduced folding ratio and smoother transformations than the previous INR-based method (IDIR);
  • potential applications where smooth transformations are desired, e.g. inhale-exhale lung registration.

Limitations:
  • higher folding ratio compared to conventional and CNN-based methods;
  • balance between accuracy and folding ratio yet to be explored;
  • generalisability questionable without further tests on different datasets.

Cycle-consistent INRs

Added element: cycle-consistency
Code availability: https://github.com/louisvh/cycle_consistent_INR
Tested dataset: DIR-Lab 4D lung CT dataset and abdominal 4D cine-MRI dataset
Tested modality: mono-modal

Advantages:
  • significant improvements in registration accuracy and robustness compared to existing state-of-the-art methods;
  • model insensitive to hyperparameter settings;
  • introduces an uncertainty metric strongly correlated with registration accuracy, which can serve as a useful tool for automatic quality control;
  • generalisability tested on two different datasets;
  • discussed possible changes for multi-modal registration (changing the metric from NCC to MI).

Limitations:
  • higher computational complexity and runtime (90 s, 1.8x longer than single-INR registration), which may limit its application to high-dimensional data and real-time scenarios.

Conclusion

In this post, we explored three innovative approaches to deformable image registration using INRs, each addressing a unique challenge in the field. The geometry-informed approach introduces geometric constraints to reduce the complexity of the non-rigid deformation function and improve registration accuracy. The spline-enhanced approach, SINR, ensures smooth and continuous transformations, particularly suited for multi-modal image registration. The cycle-consistent approach enforces mutual constraints to achieve more realistic and robust deformations. These methods succeed in improving the performance of INR-based medical image registration. Although increased computational complexity and generalisability across diverse datasets remain to be addressed, these efforts bring the technology one step closer to real-world application.

For future work, the concept of cycle-consistent INRs has demonstrated how two INRs can mutually regularize each other to enhance registration robustness and accuracy. This approach raises the question of how the generalization of INRs can further improve image registration performance. As multi-layer perceptrons, INRs can be pretrained on population data [14]. This offers a foundation for obtaining a general or statistical representation of anatomical structure generalized from a cohort of images, e.g. an atlas of the structure of interest. By conditioning the registration of new images on such an atlas, INRs may act as regularizers, promoting more precise alignment. A recent study exemplifies this potential by modeling spatio-temporal changes in fetal brain development using INRs to generate atlases and probability maps for segmentation purposes [21]. Integrating atlas-based methods with INRs may provide a global reference while INRs capture local anatomical variations, thereby enhancing the registration process. Additionally, incorporating various modalities as complementary data sources, such as using ECG signals to provide concurrent references for the scale and rhythm of heartbeats, may enable INRs representing different modalities to condition each other for more accurate registration results.

References

[1] A. Sotiras, C. Davatzikos, and N. Paragios. “Deformable Medical Image Registration: A Survey”. In: IEEE Transactions on Medical Imaging 32.7 (2013), pp. 1153–1190. doi: 10.1109/TMI.2013.2265603.

[2] L. van Harten, R. L. M. Van Herten, J. Stoker, and I. Isgum. “Deformable Image Registration with Geometry-informed Implicit Neural Representations”. In: Medical Imaging with Deep Learning . PMLR. 2024, pp. 730– 742.

[3] H. Wang and A. A. Amini. “Cardiac motion and deformation recovery from MRI: a review”. In: IEEE transactions on medical imaging 31.2 (2011), pp. 487–503.

[4] W. M. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis. “Multimodal volume registration by maximization of mutual information”. In: Medical image analysis 1.1 (1996), pp. 35–51.

[5] P. Castadot, X. Geets, J. A. Lee, N. Christian, and V. Grégoire. “Assessment by a deformable registration method of the volumetric and positional changes of target volumes and organs at risk in pharyngo-laryngeal tumors treated with concomitant chemo-radiation”. In: Radiotherapy and Oncology 95.2 (2010), pp. 209–217.

[6] J. Ashburner. “A fast diffeomorphic image registration algorithm”. In: Neuroimage 38.1 (2007), pp. 95–113.

[7] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes. “Nonrigid registration using free-form deformations: application to breast MR images”. In: IEEE transactions on medical imaging 18.8 (1999), pp. 712–721.

[8] G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca. “Voxelmorph: a learning framework for deformable medical image registration”. In: IEEE transactions on medical imaging 38.8 (2019), pp. 1788– 1800.

[9] A. V. Dalca, G. Balakrishnan, J. Guttag, and M. R. Sabuncu. “Unsupervised learning for fast probabilistic diffeomorphic registration”. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I . Springer. 2018, pp. 729–738.

[10] L. Hansen and M. P. Heinrich. “Revisiting iterative highly efficient optimisation schemes in medical image registration”. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part IV 24 . Springer. 2021, pp. 203–212.

[11] L. D. van Harten, J. Stoker, and I. Išgum. “Robust deformable image registration using cycle-consistent implicit representations”. In: IEEE Transactions on Medical Imaging (2023).

[12] V. Sitzmann, J. Martel, A. Bergman, D. Lindell, and G. Wetzstein. “Implicit neural representations with periodic activation functions”. In: Advances in neural information processing systems 33 (2020), pp. 7462–7473.

[13] E. Dupont, A. Goliński, M. Alizadeh, Y. W. Teh, and A. Doucet. “Coin: Compression with implicit neural representations”. In: arXiv preprint arXiv:2103.03123 (2021).

[14] J. M. Wolterink, J. C. Zwienenberg, and C. Brune. “Implicit neural representations for deformable image registration”. In: International Conference on Medical Imaging with Deep Learning . PMLR. 2022, pp. 1349– 1359.

[15] V. Sideri-Lampretsa, J. McGinnis, H. Qiu, M. Paschali, W. Simson, and D. Rueckert. “SINR: Spline-enhanced implicit neural representation for multi-modal registration”. In: Medical Imaging with Deep Learning . 2024.

[16] L. D. van Harten, C. S. de Jonge, K. J. Beek, J. Stoker, and I. Išgum. “Untangling and segmenting the small intestine in 3D cine-MRI using deep learning”. In: Medical image analysis 78 (2022), p. 102386.

[17] M. A. Shafto, L. K. Tyler, M. Dixon, J. R. Taylor, J. B. Rowe, R. Cusack, A. J. Calder, W. D. Marslen-Wilson, J. Duncan, T. Dalgleish, et al. “The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing”. In: BMC neurology 14 (2014), pp. 1–25.

[18] J. R. Taylor, N. Williams, R. Cusack, T. Auer, M. A. Shafto, M. Dixon, L. K. Tyler, R. N. Henson, et al. “The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) data repository: Structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample”. In: neuroimage 144 (2017), pp. 262–269.

[19] A. Schuh, M. Murgasova, A. Makropoulos, C. Ledig, S. J. Counsell, J. V. Hajnal, P. Aljabar, and D. Rueckert. “Construction of a 4D brain atlas and growth model using diffeomorphic registration”. In: Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data: Third International Workshop, STIA 2014, Held in Conjunction with MICCAI 2014, Boston, MA, USA, September 18, 2014, Revised Selected Papers 3 . Springer. 2015, pp. 27–37.

[20] H. Qiu, C. Qin, A. Schuh, K. Hammernik, and D. Rueckert. “Learning diffeomorphic and modality-invariant registration using b-splines”. In: Medical Imaging with Deep Learning . 2021.

[21] M. Dannecker, V. Kyriakopoulou, L. Cordero-Grande, A. N. Price, J. V. Hajnal, and D. Rueckert. “CINA: Conditional Implicit Neural Atlas for Spatio-Temporal Representation of Fetal Brains”. In: arXiv preprint arXiv:2403.08550 (2024).
