mridc.core.conf package
Submodules
mridc.core.conf.base_config module
mridc.core.conf.dataloader module
- class mridc.core.conf.dataloader.DataLoaderConfig(batch_size: int = '???', shuffle: bool = False, sampler: Optional[Any] = None, batch_sampler: Optional[Any] = None, num_workers: int = 0, collate_fn: Optional[Any] = None, pin_memory: bool = False, drop_last: bool = False, timeout: int = 0, worker_init_fn: Optional[Any] = None, multiprocessing_context: Optional[Any] = None)[source]
Bases:
object
Configuration of PyTorch DataLoader.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
- batch_sampler: Optional[Any] = None
- batch_size: int = '???'
- collate_fn: Optional[Any] = None
- drop_last: bool = False
- multiprocessing_context: Optional[Any] = None
- num_workers: int = 0
- pin_memory: bool = False
- sampler: Optional[Any] = None
- shuffle: bool = False
- timeout: int = 0
- worker_init_fn: Optional[Any] = None
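The dataclass is typically consumed as an OmegaConf structured config. A minimal sketch, assuming mridc and omegaconf are installed and the import path shown above; the field values are illustrative:

    from omegaconf import OmegaConf

    from mridc.core.conf.dataloader import DataLoaderConfig

    # batch_size is mandatory ('???'), so it must be supplied before use.
    cfg = OmegaConf.structured(DataLoaderConfig(batch_size=4, num_workers=2, shuffle=True))
    print(OmegaConf.to_yaml(cfg))

    # The resulting container mirrors torch.utils.data.DataLoader kwargs, e.g.:
    # loader = torch.utils.data.DataLoader(dataset, **OmegaConf.to_container(cfg))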
mridc.core.conf.hydra_runner module
- mridc.core.conf.hydra_runner.hydra_runner(config_path: Optional[str] = '.', config_name: Optional[str] = None, schema: Optional[Any] = None) Callable[[Callable[[Any], Any]], Any] [source]
Decorator used for passing the config path and name to the main function. Optionally registers a schema used for validation and for providing default values.
- Parameters
config_path (Path to the config file.) –
config_name (Name of the config file.) –
schema (Schema used for validation/providing default values.) –
- Return type
A decorator that passes the config paths to the main function.
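A minimal usage sketch; the config file name ("train") and the schema choice are illustrative assumptions, not part of the documented API:

    from omegaconf import DictConfig

    from mridc.core.conf.hydra_runner import hydra_runner
    from mridc.core.conf.modelPT import MRIDCConfig


    @hydra_runner(config_path=".", config_name="train", schema=MRIDCConfig)
    def main(cfg: DictConfig) -> None:
        # cfg has been validated against (and filled in from) the schema.
        print(cfg.name)


    if __name__ == "__main__":
        main()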
mridc.core.conf.modelPT module
- class mridc.core.conf.modelPT.HydraConfig(run: ~typing.Dict[str, ~typing.Any] = <factory>, job_logging: ~typing.Dict[str, ~typing.Any] = <factory>)[source]
Bases:
object
Configuration for the hydra framework.
- job_logging: Dict[str, Any]
- run: Dict[str, Any]
- class mridc.core.conf.modelPT.MRIDCConfig(name: str = '???', model: ModelConfig = '???', trainer: TrainerConfig = TrainerConfig(logger=False, callbacks=None, default_root_dir=None, gradient_clip_val=0, num_nodes=1, gpus=None, auto_select_gpus=False, tpu_cores=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=1, max_epochs=1000, min_epochs=1, max_steps=-1, min_steps=None, limit_train_batches=1.0, limit_val_batches=1.0, limit_test_batches=1.0, val_check_interval=1.0, log_every_n_steps=1, accelerator='gpu', sync_batchnorm=False, precision=32, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint=None, profiler=None, benchmark=False, deterministic=False, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, amp_backend='native', amp_level=None, plugins=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', limit_predict_batches=1.0, gradient_clip_algorithm='norm', max_time=None, reload_dataloaders_every_n_epochs=0, ipus=None, devices=None, strategy='ddp', enable_checkpointing=False, enable_model_summary=True), exp_manager: Optional[Any] = ExpManagerConfig(explicit_log_dir=None, exp_dir=None, name=None, version=None, use_datetime_version=True, resume_if_exists=False, resume_past_end=False, resume_ignore_no_checkpoint=False, create_tensorboard_logger=True, summary_writer_kwargs=None, create_wandb_logger=False, wandb_logger_kwargs=None, create_checkpoint_callback=True, checkpoint_callback_params=CallbackParams(filepath=None, dirpath=None, filename=None, monitor='val_loss', verbose=True, save_last=True, save_top_k=3, save_weights_only=False, mode='min', every_n_epochs=1, prefix=None, postfix='.mridc', save_best_model=False, always_save_mridc=False, save_mridc_on_train_end=True, model_parallel_size=None), files_to_copy=None, log_step_timing=True, step_timing_kwargs=StepTimingParams(reduction='mean', sync_cuda=False, buffer_size=1), log_local_rank_0_only=False, log_global_rank_0_only=False, model_parallel_size=None), hydra: HydraConfig = HydraConfig(run={'dir': '.'}, job_logging={'root': {'handlers': None}}))[source]
Bases:
object
Configuration for the mridc framework.
- exp_manager: Optional[Any] = ExpManagerConfig(explicit_log_dir=None, exp_dir=None, name=None, version=None, use_datetime_version=True, resume_if_exists=False, resume_past_end=False, resume_ignore_no_checkpoint=False, create_tensorboard_logger=True, summary_writer_kwargs=None, create_wandb_logger=False, wandb_logger_kwargs=None, create_checkpoint_callback=True, checkpoint_callback_params=CallbackParams(filepath=None, dirpath=None, filename=None, monitor='val_loss', verbose=True, save_last=True, save_top_k=3, save_weights_only=False, mode='min', every_n_epochs=1, prefix=None, postfix='.mridc', save_best_model=False, always_save_mridc=False, save_mridc_on_train_end=True, model_parallel_size=None), files_to_copy=None, log_step_timing=True, step_timing_kwargs=StepTimingParams(reduction='mean', sync_cuda=False, buffer_size=1), log_local_rank_0_only=False, log_global_rank_0_only=False, model_parallel_size=None)
- hydra: HydraConfig = HydraConfig(run={'dir': '.'}, job_logging={'root': {'handlers': None}})
- model: ModelConfig = '???'
- name: str = '???'
- trainer: TrainerConfig = TrainerConfig(logger=False, callbacks=None, default_root_dir=None, gradient_clip_val=0, num_nodes=1, gpus=None, auto_select_gpus=False, tpu_cores=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=1, max_epochs=1000, min_epochs=1, max_steps=-1, min_steps=None, limit_train_batches=1.0, limit_val_batches=1.0, limit_test_batches=1.0, val_check_interval=1.0, log_every_n_steps=1, accelerator='gpu', sync_batchnorm=False, precision=32, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint=None, profiler=None, benchmark=False, deterministic=False, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, amp_backend='native', amp_level=None, plugins=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', limit_predict_batches=1.0, gradient_clip_algorithm='norm', max_time=None, reload_dataloaders_every_n_epochs=0, ipus=None, devices=None, strategy='ddp', enable_checkpointing=False, enable_model_summary=True)
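MRIDCConfig groups the top-level sections (name, model, trainer, exp_manager, hydra) and can serve as a schema or a YAML template. A minimal sketch, assuming OmegaConf's handling of the '???' mandatory-value markers:

    from omegaconf import OmegaConf

    from mridc.core.conf.modelPT import MRIDCConfig

    # name and model are marked '???' (mandatory) and must be filled in by the
    # user's config before a model can be instantiated.
    defaults = OmegaConf.structured(MRIDCConfig)
    print(OmegaConf.to_yaml(defaults))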
- class mridc.core.conf.modelPT.ModelConfig(train_ds: Optional[DatasetConfig] = None, validation_ds: Optional[DatasetConfig] = None, test_ds: Optional[DatasetConfig] = None, optim: Optional[OptimConfig] = None)[source]
Bases:
object
Configuration for the model.
- optim: Optional[OptimConfig] = None
- test_ds: Optional[DatasetConfig] = None
- train_ds: Optional[DatasetConfig] = None
- validation_ds: Optional[DatasetConfig] = None
- class mridc.core.conf.modelPT.ModelConfigBuilder(model_cfg: ModelConfig)[source]
Bases:
object
Builder for the ModelConfig class.
- build() ModelConfig [source]
Validate the config and return it.
- set_optim(cfg: OptimizerParams, sched_cfg: Optional[SchedulerParams] = None)[source]
Set the optimizer configuration.
- set_test_ds(cfg: Optional[DatasetConfig] = None)[source]
Set the test dataset configuration.
- set_train_ds(cfg: Optional[DatasetConfig] = None)[source]
Set the training dataset configuration.
- set_validation_ds(cfg: Optional[DatasetConfig] = None)[source]
Set the validation dataset configuration.
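A minimal sketch of the builder pattern described above; the optimizer choice is an illustrative assumption, and the dataset sections are left unset:

    from mridc.core.conf.modelPT import ModelConfig, ModelConfigBuilder
    from mridc.core.conf.optimizers import AdamParams

    builder = ModelConfigBuilder(ModelConfig())
    builder.set_train_ds(None)        # no datasets in this sketch
    builder.set_validation_ds(None)
    builder.set_test_ds(None)
    builder.set_optim(AdamParams(lr=1e-3))
    model_cfg: ModelConfig = builder.build()  # validates and returns the config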
- class mridc.core.conf.modelPT.OptimConfig(name: str = '???', sched: Optional[SchedConfig] = None)[source]
Bases:
object
Configuration for the optimizer.
- name: str = '???'
- sched: Optional[SchedConfig] = None
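OptimConfig names the optimizer and optionally nests a scheduler config. A minimal sketch attaching an optim section to a ModelConfig; 'adam' is an illustrative name, and the sched slot is left empty because SchedConfig is not documented in this section:

    from mridc.core.conf.modelPT import ModelConfig, OptimConfig

    model_cfg = ModelConfig(optim=OptimConfig(name="adam"))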
mridc.core.conf.optimizers module
- class mridc.core.conf.optimizers.AdadeltaParams(lr: Optional[float] = '???', rho: float = 0.9, eps: float = 1e-06, weight_decay: float = 0)[source]
Bases:
OptimizerParams
Default configuration for Adadelta optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.Adadelta
- eps: float = 1e-06
- rho: float = 0.9
- weight_decay: float = 0
- class mridc.core.conf.optimizers.AdagradParams(lr: Optional[float] = '???', lr_decay: float = 0, weight_decay: float = 0, initial_accumulator_value: float = 0, eps: float = 1e-10)[source]
Bases:
OptimizerParams
Default configuration for Adagrad optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.Adagrad
- eps: float = 1e-10
- initial_accumulator_value: float = 0
- lr_decay: float = 0
- weight_decay: float = 0
- class mridc.core.conf.optimizers.AdamParams(lr: Optional[float] = '???', eps: float = 1e-08, weight_decay: float = 0, amsgrad: bool = False)[source]
Bases:
OptimizerParams
Default configuration for Adam optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html?highlight=adam#torch.optim.Adam
- amsgrad: bool = False
- eps: float = 1e-08
- weight_decay: float = 0
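These optimizer param dataclasses mirror the keyword arguments of the corresponding torch.optim constructors. A minimal sketch, assuming the class is a plain dataclass so dataclasses.asdict applies:

    from dataclasses import asdict

    import torch

    from mridc.core.conf.optimizers import AdamParams

    params = AdamParams(lr=1e-3, weight_decay=1e-5)
    model = torch.nn.Linear(4, 2)  # toy module for illustration
    # Every AdamParams field is a valid torch.optim.Adam keyword argument.
    optimizer = torch.optim.Adam(model.parameters(), **asdict(params))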
- class mridc.core.conf.optimizers.AdamWParams(lr: Optional[float] = '???', betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0, amsgrad: bool = False)[source]
Bases:
OptimizerParams
Default configuration for AdamW optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.AdamW
- amsgrad: bool = False
- betas: Tuple[float, float] = (0.9, 0.999)
- eps: float = 1e-08
- weight_decay: float = 0
- class mridc.core.conf.optimizers.AdamaxParams(lr: Optional[float] = '???', betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]
Bases:
OptimizerParams
Default configuration for Adamax optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.Adamax
- betas: Tuple[float, float] = (0.9, 0.999)
- eps: float = 1e-08
- weight_decay: float = 0
- class mridc.core.conf.optimizers.NovogradParams(lr: float = 0.001, betas: Tuple[float, float] = (0.95, 0.98), eps: float = 1e-08, weight_decay: float = 0, grad_averaging: bool = False, amsgrad: bool = False, luc: bool = False, luc_trust: float = 0.001, luc_eps: float = 1e-08)[source]
Bases:
OptimizerParams
Configuration of the Novograd optimizer, proposed in “Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks” (https://arxiv.org/abs/1905.11286). It extends the base OptimizerParams, whose values can be explicitly overridden via command-line arguments.
- amsgrad: bool = False
- betas: Tuple[float, float] = (0.95, 0.98)
- eps: float = 1e-08
- grad_averaging: bool = False
- lr: float = 0.001
- luc: bool = False
- luc_eps: float = 1e-08
- luc_trust: float = 0.001
- weight_decay: float = 0
- class mridc.core.conf.optimizers.OptimizerParams(lr: Optional[float] = '???')[source]
Bases:
object
Base optimizer params with no values. The user can choose to explicitly override them via command-line arguments.
- lr: Optional[float] = '???'
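The '???' default is OmegaConf's mandatory-value marker: the field exists in the config but must be set (for example via the command line) before it can be read. A minimal sketch:

    from omegaconf import OmegaConf

    from mridc.core.conf.optimizers import OptimizerParams

    cfg = OmegaConf.structured(OptimizerParams)
    print(OmegaConf.is_missing(cfg, "lr"))  # True until lr is supplied
    cfg.lr = 1e-3
    print(OmegaConf.is_missing(cfg, "lr"))  # False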
- class mridc.core.conf.optimizers.RMSpropParams(lr: Optional[float] = '???', alpha: float = 0.99, eps: float = 1e-08, weight_decay: float = 0, momentum: float = 0, centered: bool = False)[source]
Bases:
OptimizerParams
Default configuration for RMSprop optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.RMSprop
- alpha: float = 0.99
- centered: bool = False
- eps: float = 1e-08
- momentum: float = 0
- weight_decay: float = 0
- class mridc.core.conf.optimizers.RpropParams(lr: Optional[float] = '???', etas: Tuple[float, float] = (0.5, 1.2), step_sizes: Tuple[float, float] = (1e-06, 50))[source]
Bases:
OptimizerParams
Default configuration for Rprop optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html#torch.optim.Rprop
- etas: Tuple[float, float] = (0.5, 1.2)
- step_sizes: Tuple[float, float] = (1e-06, 50)
- class mridc.core.conf.optimizers.SGDParams(lr: Optional[float] = '???', momentum: float = 0, dampening: float = 0, weight_decay: float = 0, nesterov: bool = False)[source]
Bases:
OptimizerParams
Default configuration for SGD optimizer.
Note
For the details on the function/meanings of the arguments, please refer to: https://pytorch.org/docs/stable/optim.html?highlight=sgd#torch.optim.SGD
- dampening: float = 0
- momentum: float = 0
- nesterov: bool = False
- weight_decay: float = 0
- mridc.core.conf.optimizers.get_optimizer_config(name: str, **kwargs: Optional[Dict[str, Any]]) Union[Dict[str, Optional[Dict[str, Any]]], partial] [source]
Convenience method to obtain an OptimizerParams class and partially instantiate it with optimizer kwargs.
- Parameters
name (Name of the OptimizerParams in the registry.) –
kwargs (Optional kwargs of the optimizer used during instantiation.) –
- Return type
A partially instantiated OptimizerParams.
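A minimal lookup sketch; 'adam' is assumed to be one of the built-in registry keys:

    from mridc.core.conf.optimizers import get_optimizer_config

    adam_params = get_optimizer_config("adam", lr=1e-3)
    # Per the signature above, the return value is either the registry dict or
    # a functools.partial of the params class when kwargs are provided.
    print(adam_params)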
- mridc.core.conf.optimizers.register_optimizer_params(name: str, optimizer_params: OptimizerParams)[source]
Checks if the optimizer param name exists in the registry, and if it doesn’t, adds it. This allows custom optimizer params to be added and called by name during instantiation.
- Parameters
name (Name of the optimizer. Will be used as key to retrieve the optimizer.) –
optimizer_params (Optimizer class) –
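A minimal sketch of registering a custom params class so it can later be retrieved by name; the class and the registry name are illustrative assumptions:

    from dataclasses import dataclass
    from typing import Optional

    from mridc.core.conf.optimizers import OptimizerParams, register_optimizer_params


    @dataclass
    class MyOptimizerParams(OptimizerParams):
        lr: Optional[float] = 0.01
        momentum: float = 0.9


    register_optimizer_params(name="my_optimizer", optimizer_params=MyOptimizerParams)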
mridc.core.conf.schedulers module
- class mridc.core.conf.schedulers.CosineAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, constant_steps: Optional[float] = None, constant_ratio: Optional[float] = None, min_lr: float = 0.0)[source]
Bases:
WarmupAnnealingHoldSchedulerParams
Cosine Annealing parameter config
- min_lr: float = 0.0
- class mridc.core.conf.schedulers.CyclicLRParams(last_epoch: int = -1, base_lr: float = 0.001, max_lr: float = 0.1, step_size_up: int = 2000, step_size_down: Optional[int] = None, mode: str = 'triangular', gamma: float = 1.0, scale_mode: str = 'cycle', cycle_momentum: bool = True, base_momentum: float = 0.8, max_momentum: float = 0.9)[source]
Bases:
SchedulerParams
Config for CyclicLR.
- base_lr: float = 0.001
- base_momentum: float = 0.8
- cycle_momentum: bool = True
- gamma: float = 1.0
- max_lr: float = 0.1
- max_momentum: float = 0.9
- mode: str = 'triangular'
- scale_mode: str = 'cycle'
- step_size_down: Optional[int] = None
- step_size_up: int = 2000
- class mridc.core.conf.schedulers.ExponentialLRParams(last_epoch: int = -1, gamma: float = 0.9)[source]
Bases:
SchedulerParams
Config for ExponentialLR.
- gamma: float = 0.9
- class mridc.core.conf.schedulers.InverseSquareRootAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None)[source]
Bases:
WarmupSchedulerParams
Inverse Square Root Annealing parameter config
- class mridc.core.conf.schedulers.NoamAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, min_lr: float = 0.0)[source]
Bases:
WarmupSchedulerParams
Noam Annealing parameter config
- min_lr: float = 0.0
- class mridc.core.conf.schedulers.NoamHoldAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, hold_steps: Optional[float] = None, hold_ratio: Optional[float] = None, min_lr: float = 0.0, decay_rate: float = 0.5)[source]
Bases:
WarmupHoldSchedulerParams
Noam Hold Annealing parameter config. It is not derived from Config as it is not a MRIDC object (in particular, it does not need a name).
- decay_rate: float = 0.5
- class mridc.core.conf.schedulers.PolynomialDecayAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, power: float = 1.0, cycle: bool = False)[source]
Bases:
WarmupSchedulerParams
Polynomial Decay Annealing parameter config
- cycle: bool = False
- power: float = 1.0
- class mridc.core.conf.schedulers.PolynomialHoldDecayAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, power: float = 1.0, cycle: bool = False)[source]
Bases:
WarmupSchedulerParams
Polynomial Hold Decay Annealing parameter config
- cycle: bool = False
- power: float = 1.0
- class mridc.core.conf.schedulers.ReduceLROnPlateauParams(mode: str = 'min', factor: float = 0.1, patience: int = 10, verbose: bool = False, threshold: float = 0.0001, threshold_mode: str = 'rel', cooldown: int = 0, min_lr: float = 0, eps: float = 1e-08)[source]
Bases:
object
Config for ReduceLROnPlateau.
- cooldown: int = 0
- eps: float = 1e-08
- factor: float = 0.1
- min_lr: float = 0
- mode: str = 'min'
- patience: int = 10
- threshold: float = 0.0001
- threshold_mode: str = 'rel'
- verbose: bool = False
- class mridc.core.conf.schedulers.SchedulerParams(last_epoch: int = -1)[source]
Bases:
object
Base configuration for all schedulers.
- last_epoch: int = -1
- class mridc.core.conf.schedulers.SquareAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, min_lr: float = 1e-05)[source]
Bases:
WarmupSchedulerParams
Square Annealing parameter config
- min_lr: float = 1e-05
- class mridc.core.conf.schedulers.SquareRootAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, min_lr: float = 0.0)[source]
Bases:
WarmupSchedulerParams
Square Root Annealing parameter config
- min_lr: float = 0.0
- class mridc.core.conf.schedulers.SquareRootConstantSchedulerParams(last_epoch: int = -1, constant_steps: Optional[float] = None, constant_ratio: Optional[float] = None)[source]
Bases:
SchedulerParams
Square Root Constant scheduler parameter config. It is not derived from Config as it is not a MRIDC object (in particular, it does not need a name).
- constant_ratio: Optional[float] = None
- constant_steps: Optional[float] = None
- class mridc.core.conf.schedulers.StepLRParams(last_epoch: int = -1, step_size: float = 0.1, gamma: float = 0.1)[source]
Bases:
SchedulerParams
Config for StepLR.
- gamma: float = 0.1
- step_size: float = 0.1
- class mridc.core.conf.schedulers.WarmupAnnealingHoldSchedulerParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, constant_steps: Optional[float] = None, constant_ratio: Optional[float] = None, min_lr: float = 0.0)[source]
Bases:
WarmupSchedulerParams
Base configuration for schedulers with warmup, annealing, and a hold phase.
- constant_ratio: Optional[float] = None
- constant_steps: Optional[float] = None
- min_lr: float = 0.0
- class mridc.core.conf.schedulers.WarmupAnnealingParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None)[source]
Bases:
WarmupSchedulerParams
Warmup Annealing parameter config
- warmup_ratio: Optional[float] = None
- class mridc.core.conf.schedulers.WarmupHoldSchedulerParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None, hold_steps: Optional[float] = None, hold_ratio: Optional[float] = None, min_lr: float = 0.0)[source]
Bases:
WarmupSchedulerParams
Base configuration for schedulers with warmup and a hold phase.
- hold_ratio: Optional[float] = None
- hold_steps: Optional[float] = None
- min_lr: float = 0.0
- class mridc.core.conf.schedulers.WarmupSchedulerParams(last_epoch: int = -1, max_steps: int = 0, warmup_steps: Optional[float] = None, warmup_ratio: Optional[float] = None)[source]
Bases:
SchedulerParams
Base configuration for schedulers with warmup.
- max_steps: int = 0
- warmup_ratio: Optional[float] = None
- warmup_steps: Optional[float] = None
- mridc.core.conf.schedulers.get_scheduler_config(name: str, **kwargs: Optional[Dict[str, Any]]) partial [source]
Convenience method to obtain a SchedulerParams class and partially instantiate it with scheduler kwargs.
- Parameters
name (Name of the SchedulerParams in the registry.) –
kwargs (Optional kwargs of the scheduler used during instantiation.) –
- Return type
A partially instantiated SchedulerParams.
- mridc.core.conf.schedulers.register_scheduler_params(name: str, scheduler_params: SchedulerParams)[source]
Checks if the scheduler config name exists in the registry, and if it doesn’t, adds it. This allows custom schedulers to be added and called by name during instantiation.
- Parameters
name (Name of the scheduler. Will be used as key to retrieve the scheduler.) –
scheduler_params (SchedulerParams class) –
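The scheduler registry mirrors the optimizer registry. A minimal sketch; the custom params class and the 'CosineAnnealing' lookup name are illustrative assumptions:

    from dataclasses import dataclass

    from mridc.core.conf.schedulers import (
        SchedulerParams,
        get_scheduler_config,
        register_scheduler_params,
    )


    @dataclass
    class MySchedulerParams(SchedulerParams):
        warmup_steps: int = 100


    register_scheduler_params(name="my_scheduler", scheduler_params=MySchedulerParams)
    cosine = get_scheduler_config("CosineAnnealing", warmup_steps=500, min_lr=1e-6)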
mridc.core.conf.trainer module
- class mridc.core.conf.trainer.TrainerConfig(logger: Any = True, callbacks: Optional[Any] = None, default_root_dir: Optional[str] = None, gradient_clip_val: float = 0, num_nodes: int = 1, gpus: Optional[Any] = None, auto_select_gpus: bool = False, tpu_cores: Optional[Any] = None, enable_progress_bar: bool = True, overfit_batches: Any = 0.0, track_grad_norm: Any = -1, check_val_every_n_epoch: int = 1, fast_dev_run: bool = False, accumulate_grad_batches: Any = 1, max_epochs: int = 1000, min_epochs: int = 1, max_steps: Optional[int] = -1, min_steps: Optional[int] = None, limit_train_batches: Any = 1.0, limit_val_batches: Any = 1.0, limit_test_batches: Any = 1.0, val_check_interval: Any = 1.0, log_every_n_steps: int = 50, accelerator: Optional[str] = None, sync_batchnorm: bool = False, precision: Any = 32, weights_save_path: Optional[str] = None, num_sanity_val_steps: int = 2, resume_from_checkpoint: Optional[str] = None, profiler: Optional[Any] = None, benchmark: bool = False, deterministic: bool = False, auto_lr_find: Any = False, replace_sampler_ddp: bool = True, detect_anomaly: bool = False, auto_scale_batch_size: Any = False, amp_backend: str = 'native', amp_level: Optional[str] = None, plugins: Optional[Any] = None, move_metrics_to_cpu: bool = False, multiple_trainloader_mode: str = 'max_size_cycle', limit_predict_batches: float = 1.0, gradient_clip_algorithm: str = 'norm', max_time: Optional[Any] = None, reload_dataloaders_every_n_epochs: int = 0, ipus: Optional[int] = None, devices: Optional[Any] = None, strategy: Optional[Any] = None, enable_checkpointing: bool = False, enable_model_summary: bool = True)[source]
Bases:
object
TrainerConfig is a dataclass that holds all the hyperparameters for the training process.
- accelerator: Optional[str] = None
- accumulate_grad_batches: Any = 1
- amp_backend: str = 'native'
- amp_level: Optional[str] = None
- auto_lr_find: Any = False
- auto_scale_batch_size: Any = False
- auto_select_gpus: bool = False
- benchmark: bool = False
- callbacks: Optional[Any] = None
- check_val_every_n_epoch: int = 1
- default_root_dir: Optional[str] = None
- detect_anomaly: bool = False
- deterministic: bool = False
- devices: Any = None
- enable_checkpointing: bool = False
- enable_model_summary: bool = True
- enable_progress_bar: bool = True
- fast_dev_run: bool = False
- gpus: Optional[Any] = None
- gradient_clip_algorithm: str = 'norm'
- gradient_clip_val: float = 0
- ipus: Optional[int] = None
- limit_predict_batches: float = 1.0
- limit_test_batches: Any = 1.0
- limit_train_batches: Any = 1.0
- limit_val_batches: Any = 1.0
- log_every_n_steps: int = 50
- logger: Any = True
- max_epochs: int = 1000
- max_steps: Optional[int] = -1
- max_time: Optional[Any] = None
- min_epochs: int = 1
- min_steps: Optional[int] = None
- move_metrics_to_cpu: bool = False
- multiple_trainloader_mode: str = 'max_size_cycle'
- num_nodes: int = 1
- num_sanity_val_steps: int = 2
- overfit_batches: Any = 0.0
- plugins: Optional[Any] = None
- precision: Any = 32
- profiler: Optional[Any] = None
- reload_dataloaders_every_n_epochs: int = 0
- replace_sampler_ddp: bool = True
- resume_from_checkpoint: Optional[str] = None
- strategy: Any = None
- sync_batchnorm: bool = False
- tpu_cores: Optional[Any] = None
- track_grad_norm: Any = -1
- val_check_interval: Any = 1.0
- weights_save_path: Optional[str] = None
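TrainerConfig mirrors the keyword arguments of pytorch_lightning.Trainer. A minimal sketch, assuming the installed Lightning version still accepts these argument names (the dataclass tracks an older Trainer signature) and that TrainerConfig is a plain dataclass:

    from dataclasses import asdict

    import pytorch_lightning as pl

    from mridc.core.conf.trainer import TrainerConfig

    trainer_cfg = TrainerConfig(max_epochs=10, devices=1, accelerator="gpu")
    trainer = pl.Trainer(**asdict(trainer_cfg))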