SGD with linesearch

class optim.sgd_modified.SGD_MOD(params, lr=1, momentum=0, dampening=0, weight_decay=0, nesterov=False, line_search_fn=None, line_search_eps=0.0001)[source]

Extends PyTorch's steepest gradient descent [torch.optim.SGD] with an optional backtracking linesearch. The linesearch implementation is adapted from scipy [scipy.optimize.linesearch] and relies only on the value of the loss function, not on its derivatives.
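The derivative-free backtracking described above can be sketched in a few lines: shrink a trial step until the loss decreases, giving up once the step falls below a tolerance. This is a minimal sketch under stated assumptions (accept on any decrease, fixed shrink factor of 0.5); the function name `backtracking` and its signature are hypothetical, not the library's API.

```python
def backtracking(phi, t0=1.0, shrink=0.5, eps=1e-4):
    """Derivative-free backtracking linesearch (illustrative sketch).

    phi(t) is the loss at step size t along the search direction;
    phi(0.0) is the current loss. Shrink t until the loss decreases,
    or return None once t drops below eps (cf. line_search_eps).
    """
    f0 = phi(0.0)          # loss at the current parameters
    t = t0                 # initial trial step (cf. lr)
    while t >= eps:
        if phi(t) < f0:    # accept on any decrease of the loss value
            return t
        t *= shrink        # otherwise halve the step and retry
    return None            # no acceptable step above the tolerance
```

For example, with `phi(t) = (t - 0.1)**2` the search rejects t = 1.0, 0.5, 0.25 and accepts t = 0.125, since only there the loss drops below `phi(0.0)`.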


This optimizer does not support per-parameter options or multiple parameter groups (there can be only one parameter group).

Parameters

  • params (iterable) – iterable of parameters to optimize (this optimizer supports only a single parameter group)

  • lr (float) – learning rate (default: 1)

  • momentum (float, optional) – momentum factor (default: 0)

  • dampening (float, optional) – dampening for momentum (default: 0)

  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

  • nesterov (bool, optional) – enables Nesterov momentum (default: False)

  • line_search_fn (str, optional) – line search algorithm. Supported option: "backtracking" (default: None)

  • line_search_eps (float, optional) – minimal step size for the backtracking linesearch (default: 1.0e-4)

step_2c(closure, closure_linesearch=None)[source]

Performs a single optimization step.

Parameters

  • closure (callable) – A closure that reevaluates the model and returns the loss.

  • closure_linesearch (callable, optional) – A closure that reevaluates the model and returns the loss, evaluated inside a torch.no_grad() context (default: None)
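To illustrate how a step with linesearch combines the two closures, here is a torch-free sketch of one descent step: move along the negative gradient with trial step `lr`, and use a loss-only closure to backtrack until the loss decreases. This is a simplified stand-in, not the library's step_2c: the helper name `sgd_linesearch_step` is hypothetical, parameters are plain floats, and the linesearch closure here takes trial parameters as an argument rather than reevaluating the model in place.

```python
def sgd_linesearch_step(params, grads, closure_linesearch,
                        lr=1.0, shrink=0.5, eps=1e-4):
    """One SGD step along -grad with derivative-free backtracking (sketch).

    params, grads          -- lists of floats (stand-in for tensors)
    closure_linesearch(ps) -- returns the loss at trial parameters ps,
                              evaluated without gradients
    Returns the accepted parameters and the step size actually taken.
    """
    f0 = closure_linesearch(params)   # loss at the current parameters
    t = lr                            # initial trial step
    while t >= eps:
        trial = [p - t * g for p, g in zip(params, grads)]
        if closure_linesearch(trial) < f0:   # accept on loss decrease
            return trial, t
        t *= shrink                   # otherwise shrink the step
    return params, 0.0                # step too small: keep old parameters

# Toy usage: minimize (x - 3)^2 starting from x = 0, gradient = -6.
new_params, step = sgd_linesearch_step(
    [0.0], [-6.0], lambda ps: (ps[0] - 3.0) ** 2)
```

In real use, `closure` would call `loss.backward()` to populate gradients, while `closure_linesearch` only evaluates the loss under `torch.no_grad()`; the sketch mirrors that split by consuming precomputed `grads` and a loss-only callable.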