diff --git a/Chapters/5-Optimization/INDEX.md b/Chapters/5-Optimization/INDEX.md
index 90d9542..b9e9928 100644
--- a/Chapters/5-Optimization/INDEX.md
+++ b/Chapters/5-Optimization/INDEX.md
@@ -280,15 +280,15 @@ small value, usually in the order of $10^{-8}$
 > This example is tough to understand if we where to apply it to a matrix $W$
 > instead of a vector. To make it easier to understand in matricial notation:
 >
-> $$
+$$
 \begin{aligned}
 \nabla L^{(k + 1)} &= \frac{d \, Loss^{(k)}}{d \, W^{(k)}} \\
 G^{(k + 1)} &= G^{(k)} +
 (\nabla L^{(k+1)}) ^2 \\
 W^{(k+1)} &= W^{(k)} - \eta \frac{\nabla L^{(k + 1)}}
 {\sqrt{G^{(k+1)} + \epsilon}}
 \end{aligned}
-> $$
->
+$$
+>
 > In other words, compute the gradient and scale it for the sum of its squares
 > until that point
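
The equations this hunk touches are the AdaGrad matrix update: accumulate the elementwise squares of every gradient seen so far in $G$, then divide the step by $\sqrt{G + \epsilon}$ per weight. A minimal NumPy sketch of that update, assuming a single weight matrix; the names `adagrad_step`, `eta`, and `eps` are illustrative, with `eta` playing the role of $\eta$ and `eps` the small $\epsilon \approx 10^{-8}$:

```python
import numpy as np

def adagrad_step(W, grad, G, eta=0.01, eps=1e-8):
    """One AdaGrad update on a weight matrix W.

    G accumulates the elementwise squared gradients seen so far;
    each weight's step is its gradient scaled by the root of that sum.
    """
    G = G + grad ** 2                      # G^(k+1) = G^(k) + (grad L^(k+1))^2, elementwise
    W = W - eta * grad / np.sqrt(G + eps)  # per-weight step; eps guards against division by zero
    return W, G

# Usage: carry G across iterations, starting from zeros shaped like W.
W = np.random.randn(3, 2)
G = np.zeros_like(W)
grad = np.random.randn(3, 2)               # stand-in for dLoss/dW at step k
W, G = adagrad_step(W, grad, G)
```

Because every operation is elementwise, the same code covers both the vector and the matrix case, which is the point the quoted passage is making.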