mridc.collections.reconstruction.models package

Submodules

mridc.collections.reconstruction.models.base module

class mridc.collections.reconstruction.models.base.BaseMRIReconstructionModel(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: ModelPT, ABC

Base class of all MRIReconstruction models.

log_image(name, image)[source]

Logs an image.

Parameters
  • name (Name of the image.) – str

  • image (Image to log.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

static process_inputs(y, mask, init_pred)[source]

Processes the inputs to the method. When multiple accelerations are provided as lists, one entry is selected at random (see the sketch below).

Parameters
  • y (Subsampled k-space data.) – list of torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – list of torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – list of torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

  • y (Subsampled k-space data.) – randomly selected y

  • mask (Sampling mask.) – randomly selected mask

  • init_pred (Initial prediction.) – randomly selected init_pred

  • r (Random index.)
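For intuition, a minimal sketch of this selection, assuming list inputs of equal length (a hypothetical re-implementation for illustration, not the exact mridc code):

   import random

   def process_inputs_sketch(y, mask, init_pred):
       # When lists are given (e.g. one entry per acceleration factor),
       # pick one (y, mask, init_pred) triplet at random and return its index.
       if isinstance(y, list):
           r = random.randint(0, len(y) - 1)
           return y[r], mask[r], init_pred[r], r
       return y, mask, init_pred, 0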

process_loss(target, pred, _loss_fn, mask=None)[source]

Processes the loss.

Parameters
  • target (Target data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • pred (Final prediction(s).) – list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • _loss_fn (Loss function.) – torch.nn.Module, default torch.nn.L1Loss()

  • mask (Mask for the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

loss – If self.accumulate_loss is True, returns an accumulative result of all intermediate losses.

Return type

torch.FloatTensor, shape [1]
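As a hedged illustration of the accumulation (the exact weighting used by mridc may differ), the per-iteration losses of a list of predictions can be reduced to a single scalar like this:

   import torch

   def accumulate_loss_sketch(target, preds, loss_fn=None):
       # Illustrative sketch only: average the loss over a list of
       # intermediate predictions; a single tensor is scored directly.
       loss_fn = loss_fn or torch.nn.L1Loss()
       if not isinstance(preds, list):
           return loss_fn(preds, target)
       return torch.stack([loss_fn(p, target) for p in preds]).mean()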

setup_test_data(test_data_config: Optional[DictConfig])[source]

Sets up the test data.

Parameters

test_data_config (Test data configuration.) – dict

Returns

test_data – Test data.

Return type

torch.utils.data.DataLoader

setup_training_data(train_data_config: Optional[DictConfig])[source]

Sets up the training data.

Parameters

train_data_config (Training data configuration.) – dict

Returns

train_data – Training data.

Return type

torch.utils.data.DataLoader

setup_validation_data(val_data_config: Optional[DictConfig])[source]

Sets up the validation data.

Parameters

val_data_config (Validation data configuration.) – dict

Returns

val_data – Validation data.

Return type

torch.utils.data.DataLoader

test_epoch_end(outputs)[source]

Called at the end of the test epoch to aggregate outputs.

Parameters

outputs (List of outputs of the test batches.) – list of dicts

Returns

Saves the reconstructed images to .h5 files.

test_step(batch: Dict[str, Tensor], batch_idx: int) → Tuple[str, int, Tensor][source]

Performs a test step.

Parameters
  • batch (Batch of data.) – Dict[str, torch.Tensor], with keys:

    ‘y’: subsampled k-space,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘sensitivity_maps’: coil sensitivity maps,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘mask’: sampling mask,

    torch.Tensor, shape [1, 1, n_x, n_y, 1]

    ‘init_pred’: initial prediction, for example zero-filled or PICS,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘target’: target data,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘phase_shift’: phase shift for simulated motion,

    torch.Tensor

    ‘fname’: filename,

    str, shape [batch_size]

    ‘slice_idx’: slice index,

    torch.Tensor, shape [batch_size]

    ‘acc’: acceleration factor,

    torch.Tensor, shape [batch_size]

    ‘max_value’: maximum value of the magnitude image space,

    torch.Tensor, shape [batch_size]

    ‘crop_size’: crop size,

    torch.Tensor, shape [n_x, n_y]

  • batch_idx (Batch index.) – int

Returns

  • name (Name of the volume.) – str

  • slice_num (Slice number.) – int

  • pred (Predicted data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

training: bool
training_step(batch: Dict[str, Tensor], batch_idx: int) → Dict[str, Tensor][source]

Performs a training step.

Parameters
  • batch (Batch of data.) – Dict[str, torch.Tensor], with keys:

    ‘y’: subsampled k-space,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘sensitivity_maps’: coil sensitivity maps,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘mask’: sampling mask,

    torch.Tensor, shape [1, 1, n_x, n_y, 1]

    ‘init_pred’: initial prediction, for example zero-filled or PICS,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘target’: target data,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘phase_shift’: phase shift for simulated motion,

    torch.Tensor

    ‘fname’: filename,

    str, shape [batch_size]

    ‘slice_idx’: slice index,

    torch.Tensor, shape [batch_size]

    ‘acc’: acceleration factor,

    torch.Tensor, shape [batch_size]

    ‘max_value’: maximum value of the magnitude image space,

    torch.Tensor, shape [batch_size]

    ‘crop_size’: crop size,

    torch.Tensor, shape [n_x, n_y]

  • batch_idx (Batch index.) – int

Returns

  • Dict[str, torch.Tensor], with keys:

  • ‘loss’: loss, torch.Tensor, shape [1]

  • ‘log’: log, dict, shape [1]
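For orientation, a hedged sketch of a dummy batch following the documented keys and shapes (toy sizes and a hypothetical filename; real batches come from the configured dataloaders):

   import torch

   batch_size, n_coils, n_x, n_y = 1, 8, 320, 320
   batch = {
       "y": torch.randn(batch_size, n_coils, n_x, n_y, 2),
       "sensitivity_maps": torch.randn(batch_size, n_coils, n_x, n_y, 2),
       "mask": torch.randint(0, 2, (1, 1, n_x, n_y, 1)).float(),
       "init_pred": torch.zeros(batch_size, n_x, n_y, 2),
       "target": torch.randn(batch_size, n_x, n_y, 2),
       "phase_shift": torch.zeros(batch_size),
       "fname": ["volume_0.h5"],          # hypothetical filename
       "slice_idx": torch.tensor([0]),
       "acc": torch.tensor([4.0]),
       "max_value": torch.tensor([1.0]),
       "crop_size": torch.tensor([n_x, n_y]),
   }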

validation_epoch_end(outputs)[source]

Called at the end of the validation epoch to aggregate outputs.

Parameters

outputs (List of outputs of the validation batches.) – list of dicts

Returns

metrics – Dictionary of metrics.

Return type

dict

validation_step(batch: Dict[str, Tensor], batch_idx: int) → Dict[source]

Performs a validation step.

Parameters
  • batch (Batch of data.) – Dict[str, torch.Tensor], with keys:

    ‘y’: subsampled k-space,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘sensitivity_maps’: coil sensitivity maps,

    torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

    ‘mask’: sampling mask,

    torch.Tensor, shape [1, 1, n_x, n_y, 1]

    ‘init_pred’: initial prediction, for example zero-filled or PICS,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘target’: target data,

    torch.Tensor, shape [batch_size, n_x, n_y, 2]

    ‘phase_shift’: phase shift for simulated motion,

    torch.Tensor

    ‘fname’: filename,

    str, shape [batch_size]

    ‘slice_idx’: slice index,

    torch.Tensor, shape [batch_size]

    ‘acc’: acceleration factor,

    torch.Tensor, shape [batch_size]

    ‘max_value’: maximum value of the magnitude image space,

    torch.Tensor, shape [batch_size]

    ‘crop_size’: crop size,

    torch.Tensor, shape [n_x, n_y]

  • batch_idx (Batch index.) – int

Returns

  • Dict[str, torch.Tensor], with keys:

  • ‘loss’: loss, torch.Tensor, shape [1]

  • ‘log’: log, dict, shape [1]

class mridc.collections.reconstruction.models.base.BaseSensitivityModel(chans: int = 8, num_pools: int = 4, in_chans: int = 2, out_chans: int = 2, drop_prob: float = 0.0, padding_size: int = 15, mask_type: str = '2D', fft_centered: bool = True, fft_normalization: str = 'ortho', spatial_dims: Optional[Sequence[int]] = None, coil_dim: int = 1, normalize: bool = True, mask_center: bool = True)[source]

Bases: Module, ABC

Model for learning sensitivity estimation from k-space data. This model applies an IFFT to multichannel k-space data and then a U-Net to the coil images to estimate coil sensitivities.
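A hedged usage sketch, assuming the defaults above are usable as-is and the import path shown on this page:

   import torch
   from mridc.collections.reconstruction.models.base import BaseSensitivityModel

   model = BaseSensitivityModel(chans=8, num_pools=4)

   masked_kspace = torch.randn(1, 8, 320, 320, 2)  # [batch, coils, n_x, n_y, 2]
   mask = torch.ones(1, 1, 320, 320, 1)            # sampling mask
   sens_maps = model(masked_kspace, mask)          # -> [1, 8, 320, 320, 2]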

static batch_chans_to_chan_dim(x: Tensor, batch_size: int) → Tensor[source]

Moves coil images that were folded into the batch dimension back to the channel dimension.

Parameters
  • x (Tensor to convert.) – torch.Tensor

  • batch_size (Original batch size.) – int

Returns

Converted tensor.

Return type

torch.Tensor

static chans_to_batch_dim(x: Tensor) → Tuple[Tensor, int][source]

Folds the coil channels of a tensor into the batch dimension.

Parameters

x (Tensor to convert.) – torch.Tensor

Returns

Tuple of the converted tensor and the original batch size.

Return type

Tuple[torch.Tensor, int]
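An illustrative round-trip of the two reshaping helpers under their assumed semantics (folding coils into the batch dimension so a 2D network can process each coil image independently):

   import torch

   x = torch.randn(2, 8, 320, 320, 2)        # [batch, coils, n_x, n_y, 2]
   b, c, h, w, comp = x.shape
   folded = x.view(b * c, 1, h, w, comp)     # what chans_to_batch_dim yields
   restored = folded.view(b, c, h, w, comp)  # what batch_chans_to_chan_dim undoes
   assert torch.equal(x, restored)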

static divide_root_sum_of_squares(x: Tensor, coil_dim: int) → Tensor[source]

Divide the input by the root of the sum of squares of the magnitude of each complex number.

Parameters
  • x (Tensor to divide.) – torch.Tensor

  • coil_dim (Coil dimension.) – int

Returns

RSS output tensor.

Return type

torch.Tensor
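Equivalent torch arithmetic, written out as a sketch (the trailing dimension of size 2 holds the real and imaginary parts):

   import torch

   def divide_rss_sketch(x: torch.Tensor, coil_dim: int = 1) -> torch.Tensor:
       # |x|^2 per pixel is the sum of squares over the (real, imag) pair;
       # the RSS denominator then sums over the coil dimension.
       rss = x.pow(2).sum(-1, keepdim=True).sum(coil_dim, keepdim=True).sqrt()
       return x / rss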

forward(masked_kspace: Tensor, mask: Tensor, num_low_frequencies: Optional[int] = None) → Tensor[source]

Forward pass of the model.

Parameters
  • masked_kspace (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [batch_size, 1, n_x, n_y, 1]

  • num_low_frequencies (Number of low frequencies to keep.) – int

Returns

Normalized U-Net output tensor.

Return type

torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

static get_pad_and_num_low_freqs(mask: Tensor, num_low_frequencies: Optional[int] = None) → Tuple[Tensor, Tensor][source]

Get the padding to apply to the input to make it square and the number of low frequencies to keep.

Parameters
  • mask (Mask to use.) – torch.Tensor

  • num_low_frequencies (Number of low frequencies to keep.) – int

Returns

Tuple of the padding and the number of low frequencies to keep.

Return type

Tuple[torch.Tensor, torch.Tensor]
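When num_low_frequencies is not given, the ACS size can be inferred from the mask itself. A simplified 1D sketch of the idea (the actual method operates on batched masks and also returns the padding):

   import torch

   def count_center_ones(mask_1d: torch.Tensor) -> int:
       # Count the contiguous run of sampled lines around the k-space centre.
       center, n, m = mask_1d.shape[0] // 2, 0, 0
       while center + n < mask_1d.shape[0] and mask_1d[center + n] > 0:
           n += 1
       while center - 1 - m >= 0 and mask_1d[center - 1 - m] > 0:
           m += 1
       return n + m

   print(count_center_ones(torch.tensor([0, 0, 1, 1, 1, 1, 0, 0])))  # 4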

training: bool
class mridc.collections.reconstruction.models.base.DistributedMetricSum(dist_sync_on_step=True)[source]

Bases: Metric

A metric that sums the values of a metric across all workers. Taken from: https://github.com/facebookresearch/fastMRI/blob/main/fastmri/pl_modules/mri_module.py

compute()[source]

Compute the metric value.

update(batch: Tensor)[source]

Update the metric with a batch of data.
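A hedged usage sketch (import path assumed from this page): each worker feeds in its per-batch values, and compute() returns the total synchronized across processes under DDP:

   import torch
   from mridc.collections.reconstruction.models.base import DistributedMetricSum

   total = DistributedMetricSum()
   total.update(torch.tensor(0.92))  # e.g. a per-batch SSIM value
   total.update(torch.tensor(0.88))
   print(total.compute())            # sum over batches (and workers in DDP)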

mridc.collections.reconstruction.models.ccnn module

class mridc.collections.reconstruction.models.ccnn.CascadeNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Deep Cascade of Convolutional Neural Networks, as presented in Schlemper, J., Caballero, J., Hajnal, J. V., Price, A., & Rueckert, D.

References

Schlemper, J., Caballero, J., Hajnal, J. V., Price, A., & Rueckert, D., A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction. Information Processing in Medical Imaging (IPMI), 2017. Available at: https://arxiv.org/pdf/1703.00555.pdf

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]
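Since every forward() in this package shares this contract, downstream code must handle both return forms; a hedged one-liner:

   # pred may be a list of intermediate estimates or a single tensor;
   # taking the last list element gives the final estimate in either case.
   final = pred[-1] if isinstance(pred, list) else pred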

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.cirim module

class mridc.collections.reconstruction.models.cirim.CIRIM(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Cascades of Independently Recurrent Inference Machines (CIRIM), as presented in Karkalousos, D. et al.

References

Karkalousos, D. et al. (2021) ‘Assessment of Data Consistency through Cascades of Independently Recurrent Inference Machines for fast and robust accelerated MRI reconstruction’. Available at: https://arxiv.org/abs/2111.15498.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Union[Generator, Tensor][source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
process_intermediate_pred(pred, sensitivity_maps, target, do_coil_combination=False)[source]

Process the intermediate prediction.

Parameters
  • pred (Intermediate prediction.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • target (Target data to crop to size.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • do_coil_combination (Whether to do coil combination.) – bool, default False

Returns

pred – Processed prediction.

Return type

torch.Tensor, shape [batch_size, n_x, n_y, 2]
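A hedged sketch of the SENSE-style coil combination performed when do_coil_combination is True: multiply the per-coil image by the conjugate sensitivity maps and sum over the coil dimension (real/imag pairs kept in the trailing dimension; the inverse FFT from k-space is omitted):

   import torch

   def complex_mul_conj(x: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
       # (a + bi)(c - di) = (ac + bd) + (bc - ad)i
       re = x[..., 0] * s[..., 0] + x[..., 1] * s[..., 1]
       im = x[..., 1] * s[..., 0] - x[..., 0] * s[..., 1]
       return torch.stack([re, im], dim=-1)

   def sense_combine(image_per_coil, sensitivity_maps, coil_dim=1):
       return complex_mul_conj(image_per_coil, sensitivity_maps).sum(coil_dim)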

process_loss(target, pred, _loss_fn=None, mask=None)[source]

Process the loss.

Parameters
  • target (Target data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • pred (Final prediction(s).) – list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • _loss_fn (Loss function.) – torch.nn.Module, default torch.nn.L1Loss()

Returns

loss – If self.accumulate_loss is True, returns an accumulative result of all intermediate losses.

Return type

torch.FloatTensor, shape [1]

psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.crnn module

class mridc.collections.reconstruction.models.crnn.CRNNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Convolutional Recurrent Neural Network, inspired by C. Qin, J. Schlemper, J. Caballero, A. N. Price, J. V. Hajnal and D. Rueckert.

References

C. Qin, J. Schlemper, J. Caballero, A. N. Price, J. V. Hajnal and D. Rueckert, “Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction,” in IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 280–290, Jan. 2019, doi: 10.1109/TMI.2018.2863670.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Union[Generator, Tensor][source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
process_intermediate_pred(pred, sensitivity_maps, target)[source]

Process the intermediate prediction.

Parameters
  • pred (Intermediate prediction.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • target (Target data to crop to size.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – Processed prediction.

Return type

torch.Tensor, shape [batch_size, n_x, n_y, 2]

process_loss(target, pred, _loss_fn)[source]

Process the loss.

Parameters
  • target (Target data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • pred (Final prediction(s).) – list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • _loss_fn (Loss function.) – torch.nn.Module, default torch.nn.L1Loss()

Returns

loss – If self.accumulate_loss is True, returns an accumulative result of all intermediate losses.

Return type

torch.FloatTensor, shape [1]

psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.dunet module

class mridc.collections.reconstruction.models.dunet.DUNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Down-Up NET, inspired by Hammernik, K, Schlemper, J, Qin, C, et al.

References

Hammernik, K, Schlemper, J, Qin, C, et al. Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination. Magn Reson Med. 2021; 86: 1859–1872. https://doi.org/10.1002/mrm.28827

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.jointicnet module

class mridc.collections.reconstruction.models.jointicnet.JointICNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (Joint-ICNet), as presented in Jun, Yohan, et al.

References

Jun, Yohan, et al. “Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (Joint-ICNet) for Fast MRI.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021, pp. 5266–75. DOI.org (Crossref), https://doi.org/10.1109/CVPR46437.2021.00523.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool
update_C(idx, DC_sens, sensitivity_maps, image, y, mask) → Tensor[source]

Update the coil sensitivity maps.

\[
\begin{aligned}
C &= (1 - 2 \lambda_{k}^{C} \eta_{k}) \, C_{k}\\
C &= 2 \lambda_{k}^{C} \eta_{k} \, D_{C}\left(F^{-1}(b)\right)\\
A(x_{k}) &= M F (C x_{k})\\
C &= 2 \eta_{k} \, F^{-1}\left(M^{T} \left(M F (C x_{k}) - b\right)\right) x_{k}^{*}
\end{aligned}
\]
Parameters
  • idx (int) – The current iteration index.

  • DC_sens (torch.Tensor [batch_size, num_coils, num_sens_maps, num_rows, num_cols]) – The initial coil sensitivity maps.

  • sensitivity_maps (torch.Tensor [batch_size, num_coils, num_sens_maps, num_rows, num_cols]) – The coil sensitivity maps.

  • image (torch.Tensor [batch_size, num_coils, num_rows, num_cols]) – The predicted image.

  • y (torch.Tensor [batch_size, num_coils, num_rows, num_cols]) – The subsampled k-space data.

  • mask (torch.Tensor [batch_size, 1, num_rows, num_cols]) – The subsampled mask.

Returns

sensitivity_maps – The updated coil sensitivity maps.

Return type

torch.Tensor [batch_size, num_coils, num_sens_maps, num_rows, num_cols]

update_X(idx, image, sensitivity_maps, y, mask)[source]

Update the image.

\[
\begin{aligned}
x_{k} &= (1 - 2 \lambda_{k}^{I} \mu_{k} - 2 \lambda_{k}^{F} \mu_{k}) \, x_{k}\\
x_{k} &= 2 \mu_{k} \left(\lambda_{k}^{I} D_{I}(x_{k}) + \lambda_{k}^{F} F^{-1}(D_{F}(f))\right)\\
A(x_{k}) - b &= M F (C x_{k}) - b\\
x_{k} &= 2 \mu_{k} \, A^{*}\left(A(x_{k}) - b\right)
\end{aligned}
\]
Parameters
  • idx (int) – The current iteration index.

  • image (torch.Tensor [batch_size, num_coils, num_rows, num_cols]) – The predicted image.

  • sensitivity_maps (torch.Tensor [batch_size, num_coils, num_sens_maps, num_rows, num_cols]) – The coil sensitivity maps.

  • y (torch.Tensor [batch_size, num_coils, num_rows, num_cols]) – The subsampled k-space data.

  • mask (torch.Tensor [batch_size, 1, num_rows, num_cols]) – The subsampled mask.

Returns

image – The updated image.

Return type

torch.Tensor [batch_size, num_coils, num_rows, num_cols]

mridc.collections.reconstruction.models.kikinet module

class mridc.collections.reconstruction.models.kikinet.KIKINet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Based on the KIKINet implementation, modified to work with multi-coil k-space data, as presented in Eo, Taejoon, et al.

References

Eo, Taejoon, et al. “KIKI-Net: Cross-Domain Convolutional Neural Networks for Reconstructing Undersampled Magnetic Resonance Images.” Magnetic Resonance in Medicine, vol. 80, no. 5, Nov. 2018, pp. 2188–201. PubMed, https://doi.org/10.1002/mrm.27201.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.lpd module

class mridc.collections.reconstruction.models.lpd.LPDNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Learned Primal-Dual network, inspired by Adler, Jonas, and Ozan Öktem.

References

Adler, Jonas, and Ozan Öktem. “Learned Primal-Dual Reconstruction.” IEEE Transactions on Medical Imaging, vol. 37, no. 6, June 2018, pp. 1322–32. arXiv.org, https://doi.org/10.1109/TMI.2018.2799231.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.multidomainnet module

class mridc.collections.reconstruction.models.multidomainnet.MultiDomainNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Feature-level multi-domain network, inspired by the AIRS Medical submission to the fastMRI 2020 challenge.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.pics module

class mridc.collections.reconstruction.models.pics.PICS(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Parallel-Imaging Compressed Sensing (PICS) reconstruction using the Berkeley Advanced Reconstruction Toolbox (BART), as presented in Uecker, M. et al.

References

Uecker, M. et al. (2015) ‘Berkeley Advanced Reconstruction Toolbox’, Proc. Intl. Soc. Mag. Reson. Med., 23.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, target: Optional[Tensor] = None) → Union[list, Any][source]

Forward pass of PICS.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – Predicted data.

Return type

torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
test_step(batch: Dict[str, Tensor], batch_idx: int) → Tuple[str, int, Tensor][source]

Test step.

Parameters
  • batch (Batch of data.) – Dict of torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • batch_idx (Batch index.) – int

Returns

  • name (Name of the volume.) – str

  • slice_num (Slice number.) – int

  • pred (Predicted data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

training: bool

mridc.collections.reconstruction.models.rvn module

class mridc.collections.reconstruction.models.rvn.RecurrentVarNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Recurrent Variational Network, as presented in Yiasemis, George, et al.

References

Yiasemis, George, et al. “Recurrent Variational Network: A Deep Learning Inverse Problem Solver Applied to the Task of Accelerated MRI Reconstruction.” ArXiv:2111.09639 [Physics], Nov. 2021. arXiv.org, http://arxiv.org/abs/2111.09639.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor, **kwargs) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.unet module

class mridc.collections.reconstruction.models.unet.UNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the UNet, as presented in O. Ronneberger, P. Fischer, and Thomas Brox.

References

O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.vn module

class mridc.collections.reconstruction.models.vn.VarNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the End-to-end Variational Network (VN), as presented in Sriram, A. et al.

References

Sriram, A. et al. (2020) ‘End-to-End Variational Networks for Accelerated MRI Reconstruction’. Available at: https://github.com/facebookresearch/fastMRI.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.vsnet module

class mridc.collections.reconstruction.models.vsnet.VSNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the Variable-Splitting Net, as presented in Duan, J. et al.

References

Duan, J. et al. (2019) ‘Vs-net: Variable splitting network for accelerated parallel MRI reconstruction’, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11767 LNCS, pp. 713–722. doi: 10.1007/978-3-030-32251-9_78.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.xpdnet module

class mridc.collections.reconstruction.models.xpdnet.XPDNet(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Implementation of the XPDNet, as presented in Ramzi, Zaccharie, et al.

References

Ramzi, Zaccharie, et al. “XPDNet for MRI Reconstruction: An Application to the 2020 FastMRI Challenge.” ArXiv:2010.07290 [Physics, Stat], July 2021. arXiv.org, http://arxiv.org/abs/2010.07290.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, init_pred: Tensor, target: Tensor) → Tensor[source]

Forward pass of the network.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • init_pred (Initial prediction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – If self.accumulate_loss is True, returns a list of all intermediate estimates. If False, returns the final estimate.

Return type

list of torch.Tensor, shape [batch_size, n_x, n_y, 2], or torch.Tensor, shape [batch_size, n_x, n_y, 2]

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
training: bool

mridc.collections.reconstruction.models.zf module

class mridc.collections.reconstruction.models.zf.ZF(cfg: DictConfig, trainer: Optional[Trainer] = None)[source]

Bases: BaseMRIReconstructionModel, ABC

Zero-Filled reconstruction using either root-sum-of-squares (RSS) or SENSE (SENSitivity Encoding), as presented in Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P.

References

Pruessmann KP, Weiger M, Scheidegger MB, Boesiger P. SENSE: Sensitivity encoding for fast MRI. Magn Reson Med 1999; 42:952-962.

allow_zero_length_dataloader_with_multiple_devices: bool
forward(y: Tensor, sensitivity_maps: Tensor, mask: Tensor, target: Optional[Tensor] = None) → Union[list, Any][source]

Forward pass of the zero-filled method.

Parameters
  • y (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • mask (Sampling mask.) – torch.Tensor, shape [1, 1, n_x, n_y, 1]

  • target (Target data to compute the loss.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

Returns

pred – Predicted data.

Return type

torch.Tensor, shape [batch_size, n_x, n_y, 2]
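A hedged sketch of the zero-filled RSS variant (fftshift/centering details omitted; the SENSE variant would instead combine with the conjugate sensitivity maps):

   import torch

   def zero_filled_rss(y: torch.Tensor, coil_dim: int = 1) -> torch.Tensor:
       # Inverse FFT of the masked k-space, then root-sum-of-squares over coils.
       kspace = torch.view_as_complex(y.contiguous())    # [b, c, n_x, n_y]
       imspace = torch.fft.ifftn(kspace, dim=(-2, -1), norm="ortho")
       return imspace.abs().pow(2).sum(coil_dim).sqrt()  # [b, n_x, n_y]

   print(zero_filled_rss(torch.randn(1, 8, 320, 320, 2)).shape)  # [1, 320, 320]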

mse_vals: Dict
nmse_vals: Dict
precision: int
prepare_data_per_node: bool
psnr_vals: Dict
ssim_vals: Dict
test_step(batch: Dict[str, Tensor], batch_idx: int) → Tuple[str, int, Tensor][source]

Test step.

Parameters
  • batch (Batch of data.) – Dict of torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • batch_idx (Batch index.) – int

Returns

  • name (Name of the volume.) – str

  • slice_num (Slice number.) – int

  • pred (Predicted data.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

training: bool

Module contents