mridc.collections.quantitative.models.qrim package

Submodules

mridc.collections.quantitative.models.qrim.qrim_block module

class mridc.collections.quantitative.models.qrim.qrim_block.qRIMBlock(recurrent_layer=None, conv_filters=None, conv_kernels=None, conv_dilations=None, conv_bias=None, recurrent_filters=None, recurrent_kernels=None, recurrent_dilations=None, recurrent_bias=None, depth: int = 2, time_steps: int = 8, conv_dim: int = 2, no_dc: bool = False, linear_forward_model=None, fft_centered: bool = True, fft_normalization: str = 'ortho', spatial_dims: Optional[Tuple[int, int]] = None, coil_dim: int = 1, coil_combination_method: str = 'SENSE', dimensionality: int = 2)[source]

Bases: Module

qRIMBlock implements a block of Recurrent Inference Machines (RIMs) extended to quantitative parameter map estimation.

forward(pred: Tensor, masked_kspace: Tensor, R2star_map_init: Tensor, S0_map_init: Tensor, B0_map_init: Tensor, phi_map_init: Tensor, TEs: List, sensitivity_maps: Tensor, sampling_mask: Tensor, eta: Optional[Tensor] = None, hx: Optional[Tensor] = None, gamma: Optional[Tensor] = None, keep_eta: bool = False) Tuple[Any, Optional[Union[list, Tensor]]][source]

Forward pass of the qRIMBlock.

Parameters
  • pred (Initial prediction of the subsampled k-space.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • masked_kspace (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • R2star_map_init (Initial R2* map.) – torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • S0_map_init (Initial S0 map.) – torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • B0_map_init (Initial B0 map.) – torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • phi_map_init (Initial phi map.) – torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y]

  • TEs (List of echo times.) – List of int, shape [batch_size, n_echoes]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • sampling_mask (Sampling mask.) – torch.Tensor, shape [batch_size, 1, n_x, n_y, 2]

  • eta (Initial zero-filled reconstruction.) – torch.Tensor, shape [batch_size, n_x, n_y, 2]

  • hx (Initial guess for the hidden state.) –

  • gamma (Scaling normalization factor.) –

  • keep_eta (Whether to keep the eta estimate.) –

Return type

Reconstructed image and hidden states.

training: bool
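
A minimal sketch of how the documented input shapes for forward() fit together. The tensors below are zero-filled placeholders and the dimensions and echo times are illustrative only; the constructor call is shown as a comment because valid convolutional and recurrent arguments would come from a model configuration in practice.

    import torch

    # Placeholder inputs following the documented shapes
    # (batch_size=1, n_coils=8, n_echoes=4, n_x=n_y=128).
    batch_size, n_coils, n_echoes, n_x, n_y = 1, 8, 4, 128, 128

    pred = torch.zeros(batch_size, n_coils, n_x, n_y, 2)
    masked_kspace = torch.zeros(batch_size, n_coils, n_x, n_y, 2)
    R2star_map_init = torch.zeros(batch_size, n_echoes, n_coils, n_x, n_y)
    S0_map_init = torch.zeros(batch_size, n_echoes, n_coils, n_x, n_y)
    B0_map_init = torch.zeros(batch_size, n_echoes, n_coils, n_x, n_y)
    phi_map_init = torch.zeros(batch_size, n_echoes, n_coils, n_x, n_y)
    TEs = [3, 6, 9, 12]  # echo times; values are illustrative only
    sensitivity_maps = torch.zeros(batch_size, n_coils, n_x, n_y, 2)
    sampling_mask = torch.ones(batch_size, 1, n_x, n_y, 2)

    # A configured block (constructor arguments omitted here) would then be
    # called as:
    # preds, hx = qrim_block(
    #     pred, masked_kspace, R2star_map_init, S0_map_init, B0_map_init,
    #     phi_map_init, TEs, sensitivity_maps, sampling_mask,
    # )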

mridc.collections.quantitative.models.qrim.utils module

class mridc.collections.quantitative.models.qrim.utils.RescaleByMax(slack=1e-06)[source]

Bases: object

forward(data)[source]

Apply scaling.

static reverse(data, gamma)[source]

Reverse scaling.
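
The exact implementation is not shown here; the sketch below illustrates max-based rescaling consistent with the documented interface (forward applies the scaling, reverse undoes it given the factor gamma). Returning gamma alongside the scaled data and the per-sample reduction are assumptions made for this sketch, not the library behaviour.

    import torch

    class RescaleByMaxSketch:
        """Illustrative stand-in for RescaleByMax; not the library implementation."""

        def __init__(self, slack: float = 1e-6):
            self.slack = slack

        def forward(self, data: torch.Tensor):
            # Scale each sample by its maximum magnitude, plus a small slack
            # term to avoid division by zero.
            gamma = data.abs().amax(dim=tuple(range(1, data.ndim)), keepdim=True) + self.slack
            return data / gamma, gamma

        @staticmethod
        def reverse(data: torch.Tensor, gamma: torch.Tensor):
            # Undo the scaling applied in forward.
            return data * gamma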

class mridc.collections.quantitative.models.qrim.utils.SignalForwardModel(sequence: Optional[str] = None)[source]

Bases: object

Defines a signal forward model based on the sequence type.

MEGRENoPhaseSignalModel(R2star_map: Tensor, S0_map: Tensor, TEs: List)[source]

MEGRE forward model without phase.

Parameters
  • R2star_map (R2* map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • S0_map (S0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • TEs (List of echo times.) – List of float, shape [n_echoes]

MEGRESignalModel(R2star_map: Tensor, S0_map: Tensor, B0_map: Tensor, phi_map: Tensor, TEs: List)[source]

MEGRE forward model.

Parameters
  • R2star_map (R2* map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • S0_map (S0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • B0_map (B0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • phi_map (phi map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • TEs (List of echo times.) – List of float, shape [n_echoes]

__call__(R2star_map: Tensor, S0_map: Tensor, B0_map: Tensor, phi_map: Tensor, TEs=None)[source]

Defines the forward model based on the sequence type.

Parameters
  • R2star_map (R2* map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • S0_map (S0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • B0_map (B0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • phi_map (phi map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • TEs (List of echo times.) – List of float, shape [n_echoes]
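
For reference, the standard multi-echo gradient echo (MEGRE) signal equation that such a forward model evaluates is S(TE) = S0 · exp(-R2* · TE) · exp(i · (2π · B0 · TE + phi)), with the no-phase variant keeping only the magnitude term. The sketch below is an illustrative implementation of that equation, not the library code; the 2π convention and the output layout (real/imaginary parts stacked along the last dimension, echoes along dimension 1) are assumptions.

    import math
    from typing import List

    import torch

    def megre_signal_sketch(
        R2star_map: torch.Tensor,  # [batch_size, n_x, n_y]
        S0_map: torch.Tensor,      # [batch_size, n_x, n_y]
        B0_map: torch.Tensor,      # [batch_size, n_x, n_y]
        phi_map: torch.Tensor,     # [batch_size, n_x, n_y]
        TEs: List[float],
    ) -> torch.Tensor:
        """Illustrative MEGRE model: S(TE) = S0 * exp(-R2* * TE) * exp(i*(2*pi*B0*TE + phi))."""
        echoes = []
        for te in TEs:
            magnitude = S0_map * torch.exp(-te * R2star_map)
            phase = 2 * math.pi * B0_map * te + phi_map
            # Stack real and imaginary parts along a trailing dimension of size 2.
            echoes.append(
                torch.stack((magnitude * torch.cos(phase), magnitude * torch.sin(phase)), dim=-1)
            )
        # Output shape: [batch_size, n_echoes, n_x, n_y, 2] (layout assumed for this sketch).
        return torch.stack(echoes, dim=1)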

mridc.collections.quantitative.models.qrim.utils.analytical_log_likelihood_gradient(linear_forward_model: SignalForwardModel, R2star_map: Tensor, S0_map: Tensor, B0_map: Tensor, phi_map: Tensor, TEs: List, sensitivity_maps: Tensor, masked_kspace: Tensor, sampling_mask: Tensor, fft_centered: bool, fft_normalization: str, spatial_dims: Sequence[int], coil_dim: int, coil_combination_method: str = 'SENSE', scaling: float = 0.001) Tensor[source]

Computes the analytical gradient of the log-likelihood function.

Parameters
  • linear_forward_model (SignalForwardModel) – Signal forward model to use.

  • R2star_map (R2* map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • S0_map (S0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • B0_map (B0 map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • phi_map (phi map.) – torch.Tensor, shape [batch_size, n_x, n_y]

  • TEs (List of echo times.) – List of float, shape [n_echoes]

  • sensitivity_maps (Coil sensitivity maps.) – torch.Tensor, shape [batch_size, n_coils, n_x, n_y, 2]

  • masked_kspace (Subsampled k-space data.) – torch.Tensor, shape [batch_size, n_echoes, n_coils, n_x, n_y, 2]

  • sampling_mask (Sampling mask.) – torch.Tensor, shape [batch_size, 1, n_x, n_y, 1]

  • fft_centered (If True, the FFT is centered.) – bool

  • fft_normalization (Normalization of the FFT.) – str, one of “ortho”, “forward”, “backward”, None

  • spatial_dims (Spatial dimensions of the input.) – Sequence of int, shape [n_dims]

  • coil_dim (Coil dimension of the input.) – int

  • coil_combination_method (Method to use for coil combination.) – str, one of “SENSE”, “RSS”

  • scaling (Scaling factor.) – float

Return type

Analytical gradient of the log-likelihood function.
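
The quantity being differentiated is the data-consistency term ||M F (C · S(θ)) - y||², where S(θ) is the signal forward model, C the coil sensitivities, F the FFT, M the sampling mask, and y the measured k-space. The sketch below illustrates this gradient using autograd rather than the closed-form derivatives the analytical implementation evaluates; the complex-valued tensor layouts (rather than the library's trailing real/imaginary dimension) and the broadcastable mask shape are assumptions.

    import torch

    def log_likelihood_gradient_sketch(
        signal_model,        # callable returning a complex image [batch, n_echoes, n_x, n_y]
        R2star_map, S0_map, B0_map, phi_map,  # real maps [batch, n_x, n_y]
        TEs,                 # list of echo times
        sensitivity_maps,    # complex [batch, n_coils, n_x, n_y]
        masked_kspace,       # complex [batch, n_echoes, n_coils, n_x, n_y]
        sampling_mask,       # broadcastable to the k-space shape
    ):
        """Illustrative gradient of 0.5 * ||M F(C * S(theta)) - y||^2 via autograd."""
        params = [p.detach().clone().requires_grad_(True)
                  for p in (R2star_map, S0_map, B0_map, phi_map)]
        image = signal_model(*params, TEs)
        # Expand to coils, transform to k-space, and apply the sampling mask (SENSE-style).
        coil_images = image.unsqueeze(2) * sensitivity_maps.unsqueeze(1)
        kspace = torch.fft.fft2(coil_images, norm="ortho")
        residual = sampling_mask * kspace - masked_kspace
        loss = 0.5 * residual.abs().pow(2).sum()
        loss.backward()
        # Gradients with respect to R2*, S0, B0, and phi, in that order.
        return [p.grad for p in params]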

mridc.collections.quantitative.models.qrim.utils.expand_op(x, sensitivity_maps)[source]

Expand a coil-combined image to individual coil images.
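
The SENSE expand operator multiplies a coil-combined image with the coil sensitivity maps to produce per-coil images. A minimal sketch on complex-valued tensors (the library stores real/imaginary parts in a trailing dimension of size 2, so this layout is a simplification):

    import torch

    def expand_op_sketch(x: torch.Tensor, sensitivity_maps: torch.Tensor) -> torch.Tensor:
        # x: coil-combined complex image [batch, n_x, n_y]
        # sensitivity_maps: complex coil sensitivities [batch, n_coils, n_x, n_y]
        # Returns per-coil images [batch, n_coils, n_x, n_y].
        return x.unsqueeze(1) * sensitivity_maps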

Module contents