Commit d20395f

zero the gradients after updating weights
Manually zero the gradients after updating weights by using machine epsilon for a standard (64-bit) float.
1 parent fee83dd commit d20395f
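The commit message refers to machine epsilon for a standard 64-bit float, which Python exposes as `sys.float_info.epsilon`. A minimal sketch of what that value is (the gap between 1.0 and the next representable double):

```python
import sys

# Machine epsilon for an IEEE-754 double: the difference between 1.0
# and the smallest representable float greater than 1.0 (i.e. 2**-52).
eps = sys.float_info.epsilon
print(eps)                   # 2.220446049250313e-16

# Adding eps to 1.0 yields a distinct value; adding half of it rounds
# back to 1.0.
print(1.0 + eps > 1.0)       # True
print(1.0 + eps / 2 == 1.0)  # True
```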

1 file changed: 7 additions, 5 deletions

1 file changed

+7
-5
lines changed

beginner_source/examples_autograd/polynomial_autograd.py

Lines changed: 7 additions & 5 deletions
@@ -40,7 +40,7 @@
 d = torch.randn((), dtype=dtype, requires_grad=True)

 learning_rate = 1e-6
-for t in range(2000):
+for t in range(int(1/(learning_rate))):
     # Forward pass: compute predicted y using operations on Tensors.
     y_pred = a + b * x + c * x ** 2 + d * x ** 3

@@ -67,9 +67,11 @@
         d -= learning_rate * d.grad

         # Manually zero the gradients after updating weights
-        a.grad = None
-        b.grad = None
-        c.grad = None
-        d.grad = None
+        # by using machine epsilon for standard float (64-bit)
+        import sys
+        a.grad = loss*sys.float_info.epsilon
+        b.grad = loss*sys.float_info.epsilon
+        c.grad = loss*sys.float_info.epsilon
+        d.grad = loss*sys.float_info.epsilon

 print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
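For context, the loop being patched follows the standard manual gradient-descent pattern: forward pass, gradient computation, parameter update, then a reset of the gradient accumulators before the next iteration. A torch-free sketch of that pattern with hand-computed gradients (the toy linear model and data below are hypothetical, not the tutorial's cubic fit of sin(x)):

```python
# Plain-Python sketch of the loop the tutorial builds with autograd.
# Toy data for y = 2x + 1 over x in [-1, 1]; model: y_pred = a + b * x.
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(-10, 11)]
a, b = 0.0, 0.0
learning_rate = 1e-2

for t in range(2000):
    # Gradients are rebuilt from zero each iteration; this is the role
    # that clearing .grad between steps plays in the torch version.
    grad_a, grad_b = 0.0, 0.0
    for x, y in data:
        err = (a + b * x) - y          # forward pass
        grad_a += 2.0 * err            # d(err**2)/da
        grad_b += 2.0 * err * x        # d(err**2)/db
    a -= learning_rate * grad_a        # gradient-descent update
    b -= learning_rate * grad_b

print(f'Result: y = {a:.3f} + {b:.3f} x')
```

Without the per-iteration reset, gradients from earlier steps would keep accumulating into the update, which is why the loop being patched clears `a.grad` through `d.grad` after every weight update.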
