mridc.collections.quantitative.models.qvarnet package

Submodules

mridc.collections.quantitative.models.qvarnet.qvn_block module

class mridc.collections.quantitative.models.qvarnet.qvn_block.qVarNetBlock(model: Module, fft_centered: bool = True, fft_normalization: str = 'ortho', spatial_dims: Optional[Tuple[int, int]] = None, coil_dim: int = 1, no_dc: bool = False, linear_forward_model=None)[source]

Bases: Module

Implementation of the quantitative End-to-end Variational Network (qVN), as presented in Zhang, C. et al.

References

Zhang, C. et al. (2022) ‘A unified model for reconstruction and R2* mapping of accelerated 7T data using the quantitative recurrent inference machine’. In review.

forward(prediction: Tensor, masked_kspace: Tensor, R2star_map_init: Tensor, S0_map_init: Tensor, B0_map_init: Tensor, phi_map_init: Tensor, TEs: List, sensitivity_maps: Tensor, sampling_mask: Tensor, gamma: Optional[Tensor] = None) Tensor[source]
Parameters
  • prediction – Initial prediction of the subsampled k-space. torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • masked_kspace – Subsampled k-space data. torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • R2star_map_init – Initial R2* map. torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • S0_map_init – Initial S0 map. torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • B0_map_init – Initial B0 map. torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • phi_map_init – Initial phi map. torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • TEs – Echo times. List of int, shape [batch_size, n_echoes]

  • sensitivity_maps – Coil sensitivity maps. torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sampling_mask – Sampling mask. torch.Tensor, shape [batch_size, 1, n_x, n_y, 2]

  • gamma – Scaling normalization factor. torch.Tensor, shape [batch_size, 1, 1, 1, 1]

Return type

Reconstructed image.
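The forward pass follows the variational-network pattern of interleaving a learned regularizer with a soft data-consistency step against the measured k-space. A rough, library-independent sketch of that step (NumPy with complex arrays standing in for the real two-channel torch tensors; `eta` stands in for a learned step size, and the function name is illustrative, not the module's API):

```python
import numpy as np

def data_consistency(k_pred, masked_kspace, mask, eta=1.0):
    """Soft data-consistency step of a variational network:
    k_new = k_pred - eta * mask * (k_pred - masked_kspace).
    With eta=1 the sampled k-space locations are replaced by the
    measured data; unsampled locations are left untouched."""
    return k_pred - eta * mask * (k_pred - masked_kspace)

# Toy 2x2 k-space with two sampled locations.
k_pred = np.array([[1 + 1j, 2 + 0j],
                   [3 + 0j, 4 + 0j]])
measured = np.array([[5 + 0j, 0 + 0j],
                     [0 + 0j, 8 + 0j]])
mask = np.array([[1, 0],
                 [0, 1]])

k_new = data_consistency(k_pred, measured, mask)
```

With `eta=1.0` the two sampled entries take the measured values 5 and 8, while the unsampled entries keep the predicted values 2 and 3.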

sens_expand(x: Tensor, sens_maps: Tensor) Tensor[source]

Expand a coil-combined image to multicoil k-space: multiply the input by the coil sensitivity maps and apply the forward Fourier transform per coil.

Parameters
  • x – Input data.

  • sens_maps – Coil sensitivity maps.

Return type

The input image expanded to multicoil k-space via the sensitivity maps.
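In VarNet-style models this is the forward SENSE operator: each coil sees the image weighted by its sensitivity map, and the result is transformed to k-space. A minimal NumPy sketch assuming the defaults above (`fft_centered=True`, `fft_normalization='ortho'`); the real method operates on torch tensors with a trailing real/imaginary dimension:

```python
import numpy as np

def centered_fft2(x):
    # Centered, orthonormal 2D FFT over the last two axes,
    # matching fft_centered=True / fft_normalization='ortho'.
    return np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(x, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1),
    )

def sens_expand(x, sens_maps):
    """Forward SENSE operator: per-coil image x * S_i, then FFT to k-space."""
    return centered_fft2(x[None, ...] * sens_maps)

# A constant image seen by two uniform coils: all k-space energy
# concentrates at the (centered) DC component.
x = np.ones((4, 4), dtype=complex)
sens = np.ones((2, 4, 4), dtype=complex)
kspace = sens_expand(x, sens)
```

Here each coil's k-space is zero except for the DC sample at the center index (2, 2), with value 4 under the orthonormal normalization.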

sens_reduce(x: Tensor, sens_maps: Tensor) Tensor[source]

Combine multicoil k-space into a single image: apply the inverse Fourier transform per coil and sum the coil images weighted by the conjugate sensitivity maps.

Parameters
  • x – Input data.

  • sens_maps – Coil sensitivity maps.

Return type

SENSE coil-combined reconstruction.
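This is the adjoint of sens_expand: each coil's k-space is brought back to image space and the coil images are combined with the conjugate sensitivity maps. A hedged NumPy sketch under the same centered/ortho conventions; when the maps are normalized so that Σᵢ |Sᵢ|² = 1, reducing an expanded image recovers it exactly:

```python
import numpy as np

def centered_ifft2(k):
    # Inverse of the centered, orthonormal 2D FFT over the last two axes.
    return np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(k, axes=(-2, -1)), norm="ortho"),
        axes=(-2, -1),
    )

def sens_reduce(k, sens_maps):
    """Adjoint SENSE operator: IFFT per coil, then coil-combine
    with the conjugate sensitivity maps (sum over the coil axis)."""
    return np.sum(centered_ifft2(k) * np.conj(sens_maps), axis=0)

# Round trip: expand a random image with normalized maps, then reduce.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
sens = np.stack([np.full((4, 4), 0.6 + 0j),   # |0.6|^2 + |0.8|^2 = 1
                 np.full((4, 4), 0.8 + 0j)])
kspace = np.fft.fftshift(
    np.fft.fft2(np.fft.ifftshift(x[None] * sens, axes=(-2, -1)), norm="ortho"),
    axes=(-2, -1),
)
recovered = sens_reduce(kspace, sens)
```

The round trip recovers `x` because the conjugate-weighted sum contributes |0.6|² + |0.8|² = 1 at every pixel.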

training: bool

Module contents