Module bastionlab.torch.optimizer

Classes

Adam(lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0.0, amsgrad: bool = False)

Adam optimizer configuration.

Parameters are the same as in PyTorch: https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam

Ancestors (in MRO)

  • bastionlab.torch.optimizer.OptimizerConfig

Class variables

amsgrad: bool

betas: Tuple[float, float]

eps: float

lr: float

weight_decay: float

Methods

to_msg_dict(self, lr: Optional[float] = None) ‑> Dict[str, Any]
Please refer to the base class.
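
Example (a minimal sketch built only from the constructor signature and the to_msg_dict method documented above; how the resulting dict is consumed by the rest of the BastionLab Torch API is out of scope here):

```python
from bastionlab.torch.optimizer import Adam

# Adam config with a custom learning rate and weight decay;
# betas, eps and amsgrad keep the defaults listed above.
adam_config = Adam(lr=3e-4, weight_decay=0.01)

# Serialize the configuration into the dict used for the gRPC message.
print(adam_config.to_msg_dict())
```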
OptimizerConfig(lr: float)

Base class for optimizer configs.

Args: lr: Learning rate used by the training algorithm.

Descendants

  • bastionlab.torch.optimizer.Adam
  • bastionlab.torch.optimizer.SGD

Class variables

lr: float

Methods

to_msg_dict(self, lr: Optional[float] = None) ‑> Dict[str, Any]
Returns a dict representation of the config to be used in a gRPC message.
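
Example (a sketch of the shared interface; the descendants listed above all expose the same serialization hook. Note the assumption that the optional lr argument overrides the stored learning rate in the emitted dict, which is inferred from the signature rather than stated here):

```python
from typing import List

from bastionlab.torch.optimizer import Adam, OptimizerConfig, SGD

configs: List[OptimizerConfig] = [Adam(lr=1e-3), SGD(lr=0.1, momentum=0.9)]

for cfg in configs:
    # Dict built from the configured learning rate.
    print(cfg.to_msg_dict())
    # Assumption: passing lr overrides the configured value for this message only.
    print(cfg.to_msg_dict(lr=0.01))
```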
SGD(lr: float, momentum: float = 0.0, dampening: float = 0.0, weight_decay: float = 0.0, nesterov: bool = False)

SGD (Stochastic Gradient Descent) optimizer configuration.

Parameters are the same as in PyTorch: https://pytorch.org/docs/stable/generated/torch.optim.SGD.html#torch.optim.SGD

Ancestors (in MRO)

  • bastionlab.torch.optimizer.OptimizerConfig

Class variables

dampening: float

momentum: float

nesterov: bool

weight_decay: float

Methods

to_msg_dict(self, lr: Optional[float] = None) ‑> Dict[str, Any]
Please refer to the base class.
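
Example (a minimal sketch based on the signature above; the training call that consumes the resulting dict is not part of this module and is not shown):

```python
from bastionlab.torch.optimizer import SGD

# SGD with momentum and Nesterov acceleration; as in PyTorch,
# nesterov=True only makes sense with a non-zero momentum.
sgd_config = SGD(lr=0.1, momentum=0.9, nesterov=True, weight_decay=5e-4)

# The dict form is what ends up inside the gRPC training request.
print(sgd_config.to_msg_dict())
```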