🚀 Feature
It would be valuable to add the attribute _last_lr to the scheduler SequentialLR.
Motivation
When checking whether the learning rate is set as expected, reading the current learning rate from the scheduler is a very common operation.
However, the scheduler SequentialLR does not support this the way other schedulers do: its get_last_lr method (inherited from the base class) reads the attribute _last_lr, but SequentialLR.step never sets that attribute.
pytorch/torch/optim/lr_scheduler.py, lines 628 to 634 in cffad59:

    def step(self):
        self.last_epoch += 1
        idx = bisect_right(self._milestones, self.last_epoch)
        if idx > 0 and self._milestones[idx - 1] == self.last_epoch:
            self._schedulers[idx].step(0)
        else:
            self._schedulers[idx].step()
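The failure mode can be sketched with a minimal mock (hypothetical classes, not the actual torch code): the base class's get_last_lr simply returns the cached _last_lr, so a subclass whose step never assigns that attribute raises AttributeError when queried.

```python
class BaseScheduler:
    """Mimics the base scheduler's get_last_lr (hypothetical mock)."""
    def get_last_lr(self):
        # The base implementation just returns the cached attribute.
        return self._last_lr

class TypicalScheduler(BaseScheduler):
    def step(self):
        # A typical scheduler caches the new rates on every step.
        self._last_lr = [0.01]

class SequentialLike(BaseScheduler):
    def step(self):
        # Like SequentialLR.step, this never assigns _last_lr.
        pass

good, seq = TypicalScheduler(), SequentialLike()
good.step()
seq.step()
print(good.get_last_lr())  # [0.01]
try:
    seq.get_last_lr()
except AttributeError:
    print("AttributeError: no _last_lr was ever set")
```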
Pitch
Alternatives
In fact, we only need to modify its step method slightly, as follows.
    from bisect import bisect_right

    import torch

    class SequentialLR(torch.optim.lr_scheduler.SequentialLR):
        def step(self):
            self.last_epoch += 1
            idx = bisect_right(self._milestones, self.last_epoch)
            if idx > 0 and self._milestones[idx - 1] == self.last_epoch:
                self._schedulers[idx].step(0)
            else:
                self._schedulers[idx].step()
            # Cache the active scheduler's rates so get_last_lr() works.
            self._last_lr = self._schedulers[idx].get_last_lr()
        ...
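The idx computed by bisect_right selects which wrapped scheduler is currently active. A quick sketch with hypothetical milestones:

```python
from bisect import bisect_right

milestones = [10, 30]  # hypothetical: switch schedulers at epochs 10 and 30

# bisect_right maps the current epoch to the index of the active scheduler:
print(bisect_right(milestones, 5))   # 0 -> first scheduler (before epoch 10)
print(bisect_right(milestones, 10))  # 1 -> second scheduler takes over
print(bisect_right(milestones, 30))  # 2 -> third scheduler takes over
```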
    import copy

    import matplotlib.pyplot as plt
    import numpy as np
    from scipy import signal

    def plot_lr_curve_for_scheduler(scheduler, num_steps, save_path=None):
        scheduler = copy.deepcopy(scheduler)
        fig, ax = plt.subplots()
        # give plot a title
        ax.set_title("Learning Rate Curve")
        # make axis labels
        ax.set_xlabel("Iter")
        ax.set_ylabel("LR")
        # set ticks
        ax.set_xticks(np.linspace(0, num_steps, 21))
        x_data = np.arange(num_steps)
        ys = []
        for _ in x_data:
            scheduler.step()
            ys.append(max(scheduler.get_last_lr()))
        y_data = np.array(ys)
        # set lim
        ax.set_xlim((-int(num_steps * 0.1), int(num_steps * 1.5)))
        ax.set_ylim((y_data.min(), y_data.max()))
        ax.plot(x_data, y_data, linewidth=2)
        # annotate local extrema and the curve's end points
        maximum_xs = signal.argrelextrema(y_data, comparator=np.greater_equal)[0]
        maximum_ys = y_data[maximum_xs]
        minimum_xs = signal.argrelextrema(y_data, comparator=np.less_equal)[0]
        minimum_ys = y_data[minimum_xs]
        end_point_xs = np.array([x_data[0], x_data[-1]])
        end_point_ys = np.array([y_data[0], y_data[-1]])
        for pt in zip(
            np.concatenate((maximum_xs, minimum_xs, end_point_xs)),
            np.concatenate((maximum_ys, minimum_ys, end_point_ys)),
        ):
            ax.text(pt[0], pt[1], s=f"x={pt[0]:d}")
            ax.text(pt[0], pt[1] - 0.05, s=f"y={pt[1]:.3e}")
        if save_path:
            fig.savefig(save_path, dpi=300)
        else:
            plt.show()
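The extremum annotation above relies on scipy.signal.argrelextrema with np.greater_equal / np.less_equal as comparators, which flags points that are no smaller (or no larger) than both neighbors. A numpy-only stand-in (hypothetical helper, interior points only) makes the idea concrete:

```python
import numpy as np

def local_extrema(y, comparator):
    """Indices i where comparator(y[i], y[i-1]) and comparator(y[i], y[i+1])
    both hold -- a minimal stand-in for scipy.signal.argrelextrema,
    checking interior points of a 1-D array only (hypothetical helper)."""
    left = comparator(y[1:-1], y[:-2])    # compare to left neighbor
    right = comparator(y[1:-1], y[2:])    # compare to right neighbor
    return np.nonzero(left & right)[0] + 1  # shift back to original indices

y = np.array([0.1, 0.3, 0.2, 0.05, 0.4, 0.1])
print(local_extrema(y, np.greater_equal))  # [1 4] -> local maxima
print(local_extrema(y, np.less_equal))     # [3]   -> local minimum
```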
Additional context
cc @vincentqb @jbschlosser @albanD