mridc.collections.reconstruction.parts package
Submodules
mridc.collections.reconstruction.parts.transforms module
- class mridc.collections.reconstruction.parts.transforms.MRIDataTransforms(apply_prewhitening: bool = False, prewhitening_scale_factor: float = 1.0, prewhitening_patch_start: int = 10, prewhitening_patch_length: int = 30, apply_gcc: bool = False, gcc_virtual_coils: int = 10, gcc_calib_lines: int = 24, gcc_align_data: bool = True, coil_combination_method: str = 'SENSE', dimensionality: int = 2, mask_func: Optional[List[MaskFunc]] = None, shift_mask: bool = False, mask_center_scale: Optional[float] = 0.02, half_scan_percentage: float = 0.0, remask: bool = False, crop_size: Optional[Tuple[int, int]] = None, kspace_crop: bool = False, crop_before_masking: bool = True, kspace_zero_filling_size: Optional[Tuple] = None, normalize_inputs: bool = False, fft_centered: bool = True, fft_normalization: str = 'ortho', max_norm: bool = True, spatial_dims: Optional[Sequence[int]] = None, coil_dim: int = 0, use_seed: bool = True)[source]
Bases: object
MRI preprocessing data transforms.
- __call__(kspace: ndarray, sensitivity_map: ndarray, mask: ndarray, eta: ndarray, target: ndarray, attrs: Dict, fname: str, slice_idx: int) Tuple[Tensor, Union[List, Tensor], Union[Tensor, None, Any], Union[List, Any], Union[Tensor, None, Any], Union[Tensor, Any], str, int, Union[List, Any]] [source]
Apply the data transform.
- Parameters
kspace – The k-space data.
sensitivity_map – The coil sensitivity map.
mask – The subsampling mask.
eta – The initial estimation.
target – The target.
attrs – The attributes.
fname – The file name.
slice_idx – The slice number.
- Return type
The transformed data.
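To make the role of this transform concrete, here is a minimal NumPy sketch of the overall idea (subsample k-space with a mask, reconstruct, max-normalize). It is an illustration only, not the library's implementation; `toy_transform` is a hypothetical name.

```python
import numpy as np

def toy_transform(kspace: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Apply a sampling mask in k-space and return a max-normalized image."""
    masked_kspace = kspace * mask        # retain only the sampled entries
    image = np.fft.ifft2(masked_kspace)  # zero-filled reconstruction
    image = np.abs(image)
    return image / image.max()           # cf. the max_norm=True option

rng = np.random.default_rng(0)
kspace = np.fft.fft2(rng.standard_normal((8, 8)))
mask = np.zeros((8, 8))
mask[:, ::2] = 1                         # keep every other k-space column
img = toy_transform(kspace, mask)
```

The real transform additionally handles prewhitening, coil compression, coil combination, and cropping, as the constructor arguments above indicate.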
mridc.collections.reconstruction.parts.utils module
- mridc.collections.reconstruction.parts.utils.apply_mask(data: Tensor, mask_func: MaskFunc, seed: Optional[Union[int, Tuple[int, ...]]] = None, padding: Optional[Sequence[int]] = None, shift: bool = False, half_scan_percentage: Optional[float] = 0.0, center_scale: Optional[float] = 0.02, existing_mask: Optional[Tensor] = None) Tuple[Any, Any, int] [source]
Subsample given k-space by multiplying with a mask.
- Parameters
data – The input k-space data. It should have at least 3 dimensions, where dimensions -3 and -2 are the spatial dimensions, and the final dimension has size 2 (for complex values).
mask_func – A function that takes a shape (tuple of ints) and a random number seed and returns a mask.
seed – Seed for the random number generator.
padding – Padding value to apply for the mask.
shift – Toggle to shift the mask when subsampling. Applicable to 2D data.
half_scan_percentage – Percentage of k-space to be dropped.
center_scale – Scale of the center of the mask. Applicable to Gaussian masks.
existing_mask – When given, use this mask instead of generating a new one.
- Return type
Tuple of subsampled k-space, mask, and mask indices.
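A minimal sketch of the masking step, assuming the k-space layout described above (a trailing dimension of size 2 for the real and imaginary parts). The function name is hypothetical and this is not the mridc implementation.

```python
import numpy as np

def apply_mask_sketch(data: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out unsampled k-space entries; mask broadcasts over the complex axis."""
    return data * mask[..., None]

data = np.ones((4, 6, 2))            # (rows, cols, real/imag)
mask = np.array([1, 0, 1, 0, 1, 0])  # sample every other column
masked = apply_mask_sketch(data, mask)
```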
- mridc.collections.reconstruction.parts.utils.batched_mask_center(x: Tensor, mask_from: Tensor, mask_to: Tensor, mask_type: str = '2D') Tensor [source]
Initializes a mask with the center filled in. Can operate with different masks for each batch element.
- Parameters
x – The input real image or batch of real images.
mask_from – Part of the center to start filling.
mask_to – Part of the center to end filling.
mask_type – Type of mask to apply. Can be either "1D" or "2D".
- Return type
A mask with the center filled.
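A sketch of the batched behavior: each batch element gets its own `[mask_from, mask_to)` range along the last axis. This is illustrative only (the `_sketch` name is hypothetical, and it covers just the "1D" case).

```python
import numpy as np

def batched_mask_center_sketch(x, mask_from, mask_to):
    """Keep only each element's central band along the last axis."""
    out = np.zeros_like(x)
    for b, (lo, hi) in enumerate(zip(mask_from, mask_to)):
        out[b, ..., lo:hi] = x[b, ..., lo:hi]
    return out

x = np.ones((2, 8))
out = batched_mask_center_sketch(x, mask_from=[2, 3], mask_to=[6, 5])
```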
- mridc.collections.reconstruction.parts.utils.center_crop(data: Tensor, shape: Tuple[int, int]) Tensor [source]
Apply a center crop to the input real image or batch of real images.
- Parameters
data – The input tensor to be center cropped. It should have at least 2 dimensions; the cropping is applied along the last two dimensions.
shape – The output shape. It should be smaller than the corresponding dimensions of data.
- Return type
The center cropped image.
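The cropping arithmetic described above can be sketched in NumPy as follows (illustrative, not the library code):

```python
import numpy as np

def center_crop_sketch(data, shape):
    """Crop the central shape[0] x shape[1] region of the last two dims."""
    h_from = (data.shape[-2] - shape[0]) // 2
    w_from = (data.shape[-1] - shape[1]) // 2
    return data[..., h_from:h_from + shape[0], w_from:w_from + shape[1]]

img = np.arange(36).reshape(6, 6)
crop = center_crop_sketch(img, (4, 4))
```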
- mridc.collections.reconstruction.parts.utils.center_crop_to_smallest(x: Union[Tensor, ndarray], y: Union[Tensor, ndarray]) Tuple[Union[Tensor, ndarray], Union[Tensor, ndarray]] [source]
Apply a center crop on the larger image to the size of the smaller.
The minimum is taken over dim=-1 and dim=-2. If x is smaller than y at dim=-1 and y is smaller than x at dim=-2, then the returned dimension will be a mixture of the two.
- Parameters
x – The first image.
y – The second image.
- Return type
Tuple of tensors x and y, each cropped to the minimum size.
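The per-axis minimum behavior noted above can be sketched like this (hypothetical `_sketch` names; not the library implementation):

```python
import numpy as np

def center_crop_to_smallest_sketch(x, y):
    """Crop both inputs to the elementwise minimum of their last two dims."""
    h = min(x.shape[-2], y.shape[-2])
    w = min(x.shape[-1], y.shape[-1])

    def crop(a):
        hf = (a.shape[-2] - h) // 2
        wf = (a.shape[-1] - w) // 2
        return a[..., hf:hf + h, wf:wf + w]

    return crop(x), crop(y)

x = np.ones((6, 4))  # taller but narrower
y = np.ones((4, 6))  # shorter but wider
cx, cy = center_crop_to_smallest_sketch(x, y)
```

Note how the output shape (4, 4) mixes one dimension from each input, as described above.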
- mridc.collections.reconstruction.parts.utils.complex_center_crop(data: Tensor, shape: Tuple[int, int]) Tensor [source]
Apply a center crop to the input image or batch of complex images.
- Parameters
data – The complex input tensor to be center cropped. It should have at least 3 dimensions; the cropping is applied along dimensions -3 and -2, and the last dimension should have a size of 2.
shape – The output shape. It should be smaller than the corresponding dimensions of data.
- Return type
The center cropped image.
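The complex variant differs from center_crop only in which axes are cropped: dimensions -3 and -2, leaving the trailing (real, imag) axis untouched. A minimal sketch:

```python
import numpy as np

def complex_center_crop_sketch(data, shape):
    """Center crop dims -3 and -2; the size-2 complex axis is preserved."""
    hf = (data.shape[-3] - shape[0]) // 2
    wf = (data.shape[-2] - shape[1]) // 2
    return data[..., hf:hf + shape[0], wf:wf + shape[1], :]

kspace = np.zeros((8, 8, 2))
crop = complex_center_crop_sketch(kspace, (4, 6))
```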
- mridc.collections.reconstruction.parts.utils.mask_center(x: Tensor, mask_from: Optional[int], mask_to: Optional[int], mask_type: str = '2D') Tensor [source]
Initializes a mask with the center filled in.
- Parameters
x – The input real image or batch of real images.
mask_from – Part of the center to start filling.
mask_to – Part of the center to end filling.
mask_type – Type of mask to apply. Can be either "1D" or "2D".
- Return type
A mask with the center filled.
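A sketch of the single-range case (shared bounds for all elements, the "1D" variant; the `_sketch` name is hypothetical):

```python
import numpy as np

def mask_center_sketch(x, mask_from, mask_to):
    """Zero everything outside the central [mask_from, mask_to) band."""
    out = np.zeros_like(x)
    out[..., mask_from:mask_to] = x[..., mask_from:mask_to]
    return out

x = np.ones((4, 10))
out = mask_center_sketch(x, 3, 7)  # keep 4 central columns per row
```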