Appendix A

Entropy1

The entropy of a random variable measures the "surprise", or "informativeness", of learning its outcome.

You can think of it as asking: "How much do I learn from being told something obvious?"

As an example, you would be unsurprised to see an apple fall when you let go of it mid-air. However, if it were to remain suspended, that would be mind-boggling!

The entropy captures this same sentiment over the actual probability values: the higher its value, the more surprising the outcomes are on average, and its formula is:


H(\mathcal{X}) \coloneqq - \sum_{x \in \mathcal{X}} p(x) \log p(x)

Note

Technically speaking, another interpretation is the expected number of bits needed to encode a random outcome, but in that case we use \log_2
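
As a quick illustration, here is a minimal numpy sketch of the formula above (the helper name and the example coins are my own): it treats 0 \log 0 as 0 and exposes the base so you can switch to \log_2 for the bit interpretation from the note.

```python
import numpy as np

def entropy(p, base=np.e):
    """Entropy of a discrete distribution p (probabilities summing to 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                         # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p)) / np.log(base)

fair_coin = [0.5, 0.5]
loaded_coin = [0.99, 0.01]
print(entropy(fair_coin, base=2))        # 1.0 bit: maximally unpredictable
print(entropy(loaded_coin, base=2))      # ~0.08 bits: almost no surprise on average
```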

Kullback-Leibler Divergence

This value measures how much an estimated distribution q differs from the real one p:


D_{KL}(p || q) = \sum_{x\in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)}
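
A similarly small sketch of the divergence (again, the helper name is mine); note that it is not symmetric: D_{KL}(p || q) \neq D_{KL}(q || p) in general.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions; assumes q(x) > 0 wherever p(x) > 0."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                         # terms with p(x) = 0 contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = [0.7, 0.2, 0.1]                      # "true" distribution
q = [0.5, 0.3, 0.2]                      # estimate
print(kl_divergence(p, q))               # > 0: the estimate differs from p
print(kl_divergence(p, p))               # 0: identical distributions
```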

Cross Entropy Loss derivation

Cross entropy2 is the measure of "surprise" we get when outcomes drawn from the true distribution p are scored with the estimated distribution q. It is defined as the entropy of p plus the Kullback-Leibler divergence between p and q:


\begin{aligned}
    H(p, q) &= H(p) + D_{KL}(p || q) \\
    &= - \sum_{x\in\mathcal{X}}p(x)\log p(x) +
        \sum_{x\in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)} \\
    &= \sum_{x\in \mathcal{X}} p(x) \left(
            \log \frac{p(x)}{q(x)} - \log p(x)
        \right) \\
    &= \sum_{x\in \mathcal{X}} p(x) \log \frac{1}{q(x)} \\
    &= - \sum_{x\in \mathcal{X}} p(x) \log q(x)
\end{aligned}
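
To sanity-check the identity numerically, a short self-contained snippet (the two distributions are made up):

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])            # true distribution
q = np.array([0.5, 0.3, 0.2])            # estimate

H_p   = -np.sum(p * np.log(p))           # entropy of p
D_kl  =  np.sum(p * np.log(p / q))       # Kullback-Leibler divergence
H_p_q = -np.sum(p * np.log(q))           # final line of the derivation

print(H_p + D_kl)                        # ~0.887
print(H_p_q)                             # same value, as the derivation shows
```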

Since in deep learning the target p is usually a one-hot distribution (the true class c has probability 1 and every other class 0), only one term of the sum survives and the loss for a sample n becomes:


l_n = - \log \hat{y}_{n,c} \\
\hat{y}_{n,c} \coloneqq \text{predicted probability of the true class } c \text{ for sample } n

Usually \hat{y} comes from a softmax. Moreover, since the loss uses a logarithm and probabilities are at most 1, the closer \hat{y}_{n,c} is to 0, the higher the loss.
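
A minimal numpy sketch of this loss as used for classification (the helper names are mine, not a library API): the logits go through a softmax and only the log-probability of the true class enters the loss, averaged over the batch.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy_loss(logits, targets):
    """Mean of -log(softmax(logits)[true class]) over the batch."""
    probs = softmax(logits)
    rows = np.arange(len(targets))
    return -np.mean(np.log(probs[rows, targets]))

logits = np.array([[2.0, 0.5, -1.0],     # confident and correct -> small loss term
                   [0.1, 0.2, 0.3]])     # almost uniform -> larger loss term
targets = np.array([0, 2])               # true class index per sample
print(cross_entropy_loss(logits, targets))
```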

Computing PCA3

Caution

X here is the data matrix, with features over the rows and samples over the columns

  • \Sigma = \frac{X \times X^T}{N} \coloneqq Correlation matrix approximation (equal to the covariance matrix when the features are zero-mean)
  • \vec{\lambda} \coloneqq vector of eigenvalues of \Sigma
  • \Lambda \coloneqq matrix with the eigenvectors of \Sigma as columns, sorted by decreasing eigenvalue
  • \Lambda_{red} \coloneqq \Lambda reduced to the columns of the k highest eigenvalues
  • Z = \Lambda_{red}^T \times X \coloneqq Compressed representation
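
A numpy sketch of the recipe above on made-up data; the variable names mirror the bullets and the shapes follow the features-over-rows convention from the caution.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, k = 5, 200, 2
X = rng.standard_normal((d, N))           # features over rows, samples over columns
X = X - X.mean(axis=1, keepdims=True)     # center each feature (zero mean)

Sigma = (X @ X.T) / N                     # d x d matrix from the first bullet
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigh: Sigma is symmetric
order = np.argsort(eigvals)[::-1]         # sort by decreasing eigenvalue
Lambda = eigvecs[:, order]
Lambda_red = Lambda[:, :k]                # keep the k strongest directions

Z = Lambda_red.T @ X                      # k x N compressed representation
print(Z.shape)                            # (2, 200)
```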

Note

You may have studied PCA in terms of the SVD (Singular Value Decomposition). The two are closely related and capture the same idea, just through different factorizations.

Laplace Operator4

It is defined as \nabla \cdot \nabla f \in \R, that is, the divergence of the gradient of the function. Intuitively, it tells us how strongly a point behaves like a local maximum or minimum.

Negative values mean that we are around a local maximum, positive values around a local minimum. The higher the magnitude, the more pronounced that maximum (or minimum) is.

Another way to see this is through the divergence of the gradient field: it tells us whether that point acts as a sink (attraction) or a source (repulsion).

It can also be used to compute the net flow of particles into or out of that region of space.

Caution

This is not the discrete Laplace operator, which is instead a matrix; there are many other formulations.
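
To make the sign convention concrete, here is a central-difference approximation of the continuous operator (not the matrix from the caution above), evaluated at the extremum of two simple test functions of my own choosing:

```python
import numpy as np

def laplacian_2d(f, x, y, h=1e-4):
    """Central-difference approximation of d2f/dx2 + d2f/dy2 at (x, y)."""
    return (f(x + h, y) + f(x - h, y)
            + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h**2

bowl = lambda x, y: x**2 + y**2          # local minimum at the origin
dome = lambda x, y: -(x**2 + y**2)       # local maximum at the origin

print(laplacian_2d(bowl, 0.0, 0.0))      # ~ +4: positive at a minimum
print(laplacian_2d(dome, 0.0, 0.0))      # ~ -4: negative at a maximum
```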

Hessian Matrix

The Hessian matrix collects the second-order partial derivatives of a function, thus it gives us the local curvature of the function.

It also tells us whether a critical point is a local minimum (the Hessian is positive definite), a local maximum (negative definite) or a saddle point (indefinite, i.e. neither).

It is computed by taking the partial derivatives of each component of the gradient along every dimension; for smooth functions the result is symmetric, so the order of differentiation does not matter.


\nabla f = \begin{bmatrix}
    \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}
\end{bmatrix} \\
H(f) = \begin{bmatrix}
    \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \, \partial y} \\
    \frac{\partial^2 f}{\partial y \, \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{bmatrix}
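
In the same spirit, a numpy sketch that approximates the Hessian with central differences and classifies a critical point by the signs of its eigenvalues (the saddle function is my own example):

```python
import numpy as np

def hessian_2d(f, x, y, h=1e-4):
    """Central-difference approximation of the 2x2 Hessian of f at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

saddle = lambda x, y: x**2 - y**2        # critical point at the origin
H = hessian_2d(saddle, 0.0, 0.0)
eigvals = np.linalg.eigvalsh(H)
print(H)                                 # ~ [[2, 0], [0, -2]]
print(eigvals)                           # one positive, one negative -> saddle point
```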