mridc.collections.reconstruction.models.rim package

Submodules

mridc.collections.reconstruction.models.rim.conv_layers module

class mridc.collections.reconstruction.models.rim.conv_layers.ConvNonlinear(input_size, features, conv_dim, kernel_size, dilation, bias, nonlinear='relu')[source]

Bases: Module

A convolutional layer with nonlinearity.

check_forward_input(_input)[source]

Checks input for correct size and shape.

static determine_conv_class(n_dim)[source]

Determines the convolutional layer class.

extra_repr()[source]

Extra information about the layer.

forward(_input)[source]

Forward pass of the convolutional layer.

reset_parameters()[source]

Resets the parameters of the convolutional layer.

training: bool
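
A minimal usage sketch for ConvNonlinear follows. The channel counts, kernel size, and input shape are illustrative assumptions; kernel_size and dilation are assumed to accept plain integers, and the layer is assumed to pad so that the spatial size is preserved:

    import torch
    from mridc.collections.reconstruction.models.rim.conv_layers import ConvNonlinear

    # Hypothetical settings: a 2D convolution mapping 2 input channels
    # (e.g. real/imaginary) to 64 features with a ReLU nonlinearity.
    layer = ConvNonlinear(
        input_size=2,
        features=64,
        conv_dim=2,
        kernel_size=5,
        dilation=1,
        bias=True,
        nonlinear="relu",
    )

    x = torch.randn(1, 2, 320, 320)  # [batch, channels, height, width]
    y = layer(x)                     # [1, 64, 320, 320] if padding preserves size (assumed)
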
class mridc.collections.reconstruction.models.rim.conv_layers.ConvRNNStack(convs, rnn)[source]

Bases: Module

A stack of convolutional RNNs.

forward(x, hidden)[source]
Parameters
  • x ([batch_size, seq_len, input_size]) –

  • hidden ([num_layers * num_directions, batch_size, hidden_size]) –

Returns

output

Return type

[batch_size, seq_len, hidden_size]

training: bool
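
The sketch below wires a small ConvRNNStack together. It assumes the constructor takes the convolutional stack and a recurrent cell as its two arguments and applies them in sequence in forward; the image-shaped tensors ([batch, channels, height, width]) are likewise an assumption for this convolutional setting, so check the source for the exact shape contract:

    import torch
    from torch import nn
    from mridc.collections.reconstruction.models.rim.conv_layers import ConvNonlinear, ConvRNNStack
    from mridc.collections.reconstruction.models.rim.rnn_cells import ConvGRUCell

    # Hypothetical wiring: two convolutional layers feeding a convolutional GRU cell.
    convs = nn.Sequential(
        ConvNonlinear(input_size=2, features=64, conv_dim=2, kernel_size=5, dilation=1, bias=True),
        ConvNonlinear(input_size=64, features=64, conv_dim=2, kernel_size=3, dilation=1, bias=True),
    )
    rnn = ConvGRUCell(input_size=64, hidden_size=64, conv_dim=2, kernel_size=3)
    stack = ConvRNNStack(convs, rnn)

    x = torch.randn(1, 2, 320, 320)        # input, [batch, channels, H, W]
    hidden = torch.zeros(1, 64, 320, 320)  # previous hidden state (assumed shape)
    output = stack(x, hidden)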

mridc.collections.reconstruction.models.rim.rim_block module

class mridc.collections.reconstruction.models.rim.rim_block.RIMBlock(recurrent_layer=None, conv_filters=None, conv_kernels=None, conv_dilations=None, conv_bias=None, recurrent_filters=None, recurrent_kernels=None, recurrent_dilations=None, recurrent_bias=None, depth: int = 2, time_steps: int = 8, conv_dim: int = 2, no_dc: bool = False, fft_centered: bool = True, fft_normalization: str = 'ortho', spatial_dims: Optional[Tuple[int, int]] = None, coil_dim: int = 1, dimensionality: int = 2)[source]

Bases: Module

RIMBlock is a block of Recurrent Inference Machines (RIMs).

forward(pred: Tensor, masked_kspace: Tensor, sense: Tensor, mask: Tensor, eta: Optional[Tensor] = None, hx: Optional[Tensor] = None, sigma: float = 1.0, keep_eta: bool = False) → Tuple[Any, Optional[Union[list, Tensor]]][source]

Forward pass of the RIMBlock.

Parameters
  • pred (Predicted k-space.) –

  • masked_kspace (Subsampled k-space.) –

  • sense (Coil sensitivity maps.) –

  • mask (Sample mask.) –

  • eta (Initial guess for the reconstruction.) –

  • hx (Initial guess for the hidden state.) –

  • sigma (Noise level.) –

  • keep_eta (Whether to keep the provided eta as the starting estimate.) –

Return type

Reconstructed image and hidden states.

training: bool
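
A hedged end-to-end sketch of RIMBlock follows. The recurrent_layer string, the filter/kernel/dilation/bias lists, and the tensor shapes (real-valued tensors with a trailing real/imaginary dimension of size 2, coils on dimension 1) are assumptions for illustration, not reference settings:

    import torch
    from mridc.collections.reconstruction.models.rim.rim_block import RIMBlock

    # Hypothetical configuration; every value below is an assumption.
    rim = RIMBlock(
        recurrent_layer="GRU",            # assumed to select a convolutional GRU cell
        conv_filters=[64, 64, 2],
        conv_kernels=[5, 3, 3],
        conv_dilations=[1, 1, 1],
        conv_bias=[True, True, False],
        recurrent_filters=[64, 64, 0],
        recurrent_kernels=[1, 1, 0],
        recurrent_dilations=[1, 1, 0],
        recurrent_bias=[True, True, False],
        time_steps=8,
    )

    # Complex data assumed stored as a trailing real/imaginary dimension of size 2.
    masked_kspace = torch.randn(1, 8, 320, 320, 2)  # [batch, coils, H, W, 2]
    sense = torch.randn(1, 8, 320, 320, 2)          # coil sensitivity maps
    mask = torch.ones(1, 1, 320, 320, 1)            # sampling mask
    pred = masked_kspace.clone()                    # initial k-space prediction

    # Reconstruction(s) over the time steps and the hidden states (assumed return).
    etas, hx = rim(pred, masked_kspace, sense, mask, sigma=1.0)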

mridc.collections.reconstruction.models.rim.rnn_cells module

class mridc.collections.reconstruction.models.rim.rnn_cells.ConvGRUCell(input_size, hidden_size, conv_dim, kernel_size, dilation=1, bias=True)[source]

Bases: ConvGRUCellBase

A Convolutional GRU cell.

forward(_input, hx)[source]

Forward pass of the ConvGRUCell.

training: bool
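
A minimal sketch of a single ConvGRUCell update; the image-shaped tensors and the 1x1 kernel are assumptions:

    import torch
    from mridc.collections.reconstruction.models.rim.rnn_cells import ConvGRUCell

    cell = ConvGRUCell(input_size=64, hidden_size=64, conv_dim=2, kernel_size=1)

    x = torch.randn(1, 64, 320, 320)   # input features, [batch, channels, H, W]
    hx = torch.zeros(1, 64, 320, 320)  # previous hidden state (assumed shape)
    hx = cell(x, hx)                   # updated hidden state
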
class mridc.collections.reconstruction.models.rim.rnn_cells.ConvGRUCellBase(input_size, hidden_size, conv_dim, kernel_size, dilation, bias)[source]

Bases: Module

Base class for Convolutional Gated Recurrent Unit (GRU) cells.

check_forward_hidden(_input, hx, hidden_label='')[source]

Check forward hidden.

check_forward_input(_input)[source]

Check forward input.

static determine_conv_class(n_dim)[source]

Determine the convolutional class to use.

extra_repr()[source]

Extra information to be printed when printing the model.

static orthotogonalize_weights(weights, chunks=1)[source]

Orthogonalize the weights of a convolutional layer.

reset_parameters()[source]

Initialize parameters following the way proposed in the paper.

training: bool

class mridc.collections.reconstruction.models.rim.rnn_cells.ConvMGUCell(input_size, hidden_size, conv_dim, kernel_size, dilation=1, bias=True)[source]

Bases: ConvMGUCellBase

Convolutional Minimal Gated Unit cell.

forward(_input, hx)[source]

Forward the ConvMGUCell.

training: bool
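
ConvMGUCell is called the same way as ConvGRUCell; a minimal gated unit uses a single forget gate rather than the GRU's two gates. The shapes below are again assumptions:

    import torch
    from mridc.collections.reconstruction.models.rim.rnn_cells import ConvMGUCell

    cell = ConvMGUCell(input_size=64, hidden_size=64, conv_dim=2, kernel_size=1)
    hx = cell(torch.randn(1, 64, 320, 320), torch.zeros(1, 64, 320, 320))
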
class mridc.collections.reconstruction.models.rim.rnn_cells.ConvMGUCellBase(input_size, hidden_size, conv_dim, kernel_size, dilation, bias)[source]

Bases: Module

Base class for Convolutional Minimal Gated Unit (MGU) cells.

check_forward_hidden(_input, hx, hidden_label='')[source]

Check the forward hidden.

check_forward_input(_input)[source]

Check the forward input.

static determine_conv_class(n_dim)[source]

Determine the convolutional class.

extra_repr()[source]

Extra information about the ConvMGUCellBase.

static orthotogonalize_weights(weights, chunks=1)[source]

Orthogonalize the weights.

reset_parameters()[source]

Reset the parameters.

training: bool

class mridc.collections.reconstruction.models.rim.rnn_cells.IndRNNCell(input_size, hidden_size, conv_dim, kernel_size, dilation=1, bias=True)[source]

Bases: IndRNNCellBase

Independently Recurrent Neural Network cell.

forward(_input, hx)[source]

Forward propagate the RNN cell.

training: bool
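
IndRNNCell follows the same call pattern; in an IndRNN the recurrent weight acts elementwise on the hidden state, keeping neurons independent across time (see the reference under IndRNNCellBase below). The shapes here are assumptions:

    import torch
    from mridc.collections.reconstruction.models.rim.rnn_cells import IndRNNCell

    cell = IndRNNCell(input_size=64, hidden_size=64, conv_dim=2, kernel_size=1)
    hx = cell(torch.randn(1, 64, 320, 320), torch.zeros(1, 64, 320, 320))
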
class mridc.collections.reconstruction.models.rim.rnn_cells.IndRNNCellBase(input_size, hidden_size, conv_dim, kernel_size, dilation, bias)[source]

Bases: Module

Base class for Independently Recurrent Neural Network (IndRNN) cells as presented in [1].

References

[1] Li, S. et al. (2018) ‘Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN’, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 5457–5466. doi: 10.1109/CVPR.2018.00572.

check_forward_hidden(_input, hx, hidden_label='')[source]

Check forward hidden.

check_forward_input(_input)[source]

Check forward input.

static determine_conv_class(n_dim)[source]

Determine the convolutional class.

extra_repr()[source]

Extra information about the module, used for printing.

static orthotogonalize_weights(weights, chunks=1)[source]

Orthogonalize the weights.

reset_parameters()[source]

Reset the parameters.

training: bool

mridc.collections.reconstruction.models.rim.utils module

mridc.collections.reconstruction.models.rim.utils.log_likelihood_gradient(eta: Tensor, masked_kspace: Tensor, sense: Tensor, mask: Tensor, sigma: float, fft_centered: bool, fft_normalization: str, spatial_dims: Sequence[int], coil_dim: int) → Tensor[source]

Computes the gradient of the log-likelihood function.

Parameters
  • eta (Initial guess for the reconstruction.) –

  • masked_kspace (Subsampled k-space data.) –

  • sense (Coil sensitivity maps.) –

  • mask (Sampling mask.) –

  • sigma (Noise level.) –

  • fft_centered (Whether to center the FFT.) –

  • fft_normalization (FFT normalization type, e.g. 'ortho'.) –

  • spatial_dims (Spatial dimensions of the data.) –

  • coil_dim (Dimension of the coil.) –

Return type

Gradient of the log-likelihood function.
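
For the SENSE-type forward model y = M F S η + noise, with sampling mask M, Fourier transform F, and coil sensitivities S, the gradient of the data-consistency log-likelihood with respect to η is proportional to S^H F^{-1} M^H (M F S η − y) / σ². The standalone sketch below re-derives that computation with native complex tensors and torch.fft; it is not the library implementation, which operates on real-valued tensors with a trailing real/imaginary dimension and uses the fft_centered, fft_normalization, spatial_dims, and coil_dim arguments listed above:

    import torch

    def log_likelihood_gradient_sketch(eta, masked_kspace, sense, mask, sigma):
        """Illustrative re-derivation (not the library code), using complex tensors
        of shape [batch, coils, height, width] for k-space and sensitivities and
        [batch, height, width] for the image estimate eta."""
        # Expand the image estimate to coil images and go to k-space: F (S * eta).
        coil_images = sense * eta.unsqueeze(1)
        pred_kspace = torch.fft.fftn(coil_images, dim=(-2, -1), norm="ortho")
        # Residual on the sampled locations only: M (F S eta - y).
        residual = mask * (pred_kspace - masked_kspace)
        # Back to image space and combine coils with the adjoint of S.
        coil_residual = torch.fft.ifftn(residual, dim=(-2, -1), norm="ortho")
        grad = torch.sum(torch.conj(sense) * coil_residual, dim=1)
        return grad / (sigma ** 2)

    # Example shapes (all values random, for illustration only).
    eta = torch.randn(1, 320, 320, dtype=torch.complex64)
    sense = torch.randn(1, 8, 320, 320, dtype=torch.complex64)
    kspace = torch.randn(1, 8, 320, 320, dtype=torch.complex64)
    mask = (torch.rand(1, 1, 1, 320) > 0.5).float()
    grad = log_likelihood_gradient_sketch(eta, kspace * mask, sense, mask, sigma=1.0)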

Module contents