diff --git a/Chapters/14-GNN-GCN/INDEX.md b/Chapters/14-GNN-GCN/INDEX.md
index 3029ffe..04f410b 100644
--- a/Chapters/14-GNN-GCN/INDEX.md
+++ b/Chapters/14-GNN-GCN/INDEX.md
@@ -122,6 +122,8 @@ Take all node embeddings that are in the neighbouroud and do similar steps as th
## Polynomial Filters
+Each polynomial filter is node-order equivariant: relabelling the nodes (i.e.
+permuting the rows and columns of $A$) permutes the filter's output in exactly
+the same way, so nothing depends on the arbitrary ordering chosen below
+
### Graph Laplacian
Let's set an order over nodes of a graph, where $A$ is the adjacency matrix:
@@ -195,4 +197,97 @@ $$
> So this shows that the degree of the polynomial decides the max number of hops
> to be included during the filtering stage, like if it were defining a [kernel](./../7-Convolutional-Networks/INDEX.md#filters)
-### ChebNet
\ No newline at end of file
+### ChebNet
+
+In ChebNet, the filter polynomial is expanded in the Chebyshev basis:
+
+$$
+\begin{aligned}
+p_{\vec{w}}(L) &= \sum_{i = 1}^{d} w_{i} T_{i}(\tilde{L}) \\
+T_{i}(\cos \theta) &= \cos(i\theta) \\
+\tilde{L} &= \frac{2L}{\lambda_{\max}(L)} - I_{n}
+\end{aligned}
+$$
+
+- $T_{i}$ is the $i$-th Chebyshev polynomial of the first kind
+- $\tilde{L}$ is a rescaled version of $L$: since $L$ has no negative
+  eigenvalues (it is positive semi-definite), dividing by its largest
+  eigenvalue and shifting maps the whole spectrum into $[-1, 1]$
+
+These polynomials are numerically more stable than plain powers of $L$: on
+$[-1, 1]$ every $T_{i}$ is itself bounded in $[-1, 1]$, so higher-order terms
+do not explode
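+
+As a minimal sketch of how such a filter can be applied, here is the usual
+three-term recurrence $T_{i}(x) = 2x\,T_{i-1}(x) - T_{i-2}(x)$, which never
+forms powers of $\tilde{L}$ explicitly (`scipy` and a precomputed
+$\lambda_{\max}$ are assumed):
+
+```python
+import numpy as np
+import scipy.sparse as sp
+
+def cheb_filter(L, x, weights, lmax):
+    """Apply p_w(L) = sum_i w_i T_i(L_tilde) to a node signal x.
+
+    L: (n, n) sparse Laplacian, x: (n, f) node features,
+    weights: coefficients for T_0 .. T_d (pass w_0 = 0 to match the
+    sum above, which starts at i = 1), lmax: largest eigenvalue of L.
+    """
+    n = L.shape[0]
+    # Rescale so the spectrum lies in [-1, 1]
+    L_tilde = (2.0 / lmax) * L - sp.identity(n, format="csr")
+    t_prev = x                        # T_0(L_tilde) @ x
+    out = weights[0] * t_prev
+    if len(weights) > 1:
+        t_curr = L_tilde @ x          # T_1(L_tilde) @ x
+        out = out + weights[1] * t_curr
+        for w in weights[2:]:
+            # T_i = 2 * L_tilde * T_{i-1} - T_{i-2}
+            t_prev, t_curr = t_curr, 2.0 * (L_tilde @ t_curr) - t_prev
+            out = out + w * t_curr
+    return out
+```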
+
+### Embedding Computation
+
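+A common scheme (used by ChebNet and its successors) stacks filter layers,
+applying a pointwise nonlinearity $\sigma$ between them, starting from the raw
+node features $h^{(0)} = x$:
+
+$$
+h^{(k)} = \sigma\left( p_{\vec{w}^{(k)}}(L) \, h^{(k-1)} \right)
+$$
+
+A minimal sketch on top of `cheb_filter` above, keeping scalar per-order
+weights for simplicity (full ChebNet learns a weight matrix per order):
+
+```python
+def chebnet_forward(L, x, layer_weights, lmax):
+    """Stack Chebyshev filter layers with a ReLU in between."""
+    h = x
+    for weights in layer_weights:
+        h = np.maximum(cheb_filter(L, h, weights, lmax), 0.0)  # ReLU
+    return h
+```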
+
+
+## Other Methods
+
+The methods below combine the same three ingredients, colour-coded in every
+formula that follows:
+
+- $\textcolor{skyblue}{\text{skyblue}}$: learnable parameters
+- $\textcolor{orange}{\text{orange}}$: embeddings of node $v$
+- $\textcolor{violet}{\text{violet}}$: embeddings of the neighbours of $v$
+
+### Graph Convolutional Networks
+
+$$
+\textcolor{orange}{h_{v}^{(k)}} =
+\textcolor{skyblue}{f^{(k)}} \left(
+ \underbrace{\textcolor{skyblue}{W^{(k)}} \cdot
+ \frac{
+ \sum_{u \in \mathcal{N}(v)} \textcolor{violet}{h_{u}^{(k-1)}}
+ }{
+ |\mathcal{N}(v)|
+ }}_{\text{mean of previous neighbour embeddings}} + \underbrace{\textcolor{skyblue}{B^{(k)}} \cdot
+\textcolor{orange}{h_{v}^{(k - 1)}}}_{\text{previous embeddings}}
+\right) \forall v \in V
+$$
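+
+A minimal dense sketch of one such layer (assuming a binary adjacency matrix
+`adj`, row-major embeddings `h`, and ReLU as $f^{(k)}$):
+
+```python
+import numpy as np
+
+def gcn_layer(adj, h, W, B):
+    """f(W @ mean-of-neighbour-embeddings + B @ h_v) for every node at once."""
+    deg = adj.sum(axis=1, keepdims=True)            # |N(v)| per node
+    neigh_mean = (adj @ h) / np.maximum(deg, 1)     # mean of neighbours
+    return np.maximum(neigh_mean @ W + h @ B, 0.0)  # ReLU as f
+```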
+
+### Graph Attention Networks
+
+$$
+\textcolor{orange}{h_{v}^{(k)}} =
+\textcolor{skyblue}{f^{(k)}} \left(
+ \textcolor{skyblue}{W^{(k)}} \cdot \left[
+ \underbrace{
+ \sum_{u \in \mathcal{N}(v)} \alpha^{(k-1)}_{v,u}
+ \textcolor{violet}{h_{u}^{(k-1)}}
+ }_{\text{weighted mean of previous neighbour embeddings}} +
+ \underbrace{\alpha^{(k-1)}_{v,v}
+ \textcolor{orange}{h_{v}^{(k-1)}}}_{\text{previous embeddings}}
+\right] \right) \forall v \in V
+$$
+
+where $\textcolor{skyblue}{A^{(k)}}$ is a learnable attention function whose
+scores are normalized over each neighbourhood:
+
+$$
+\alpha^{(k)}_{v,u} = \frac{
+  \textcolor{skyblue}{A^{(k)}}(
+  \textcolor{orange}{h_{v}^{(k)}},
+  \textcolor{violet}{h_{u}^{(k)}}
+  )
+}{
+  \sum_{w \in \mathcal{N}(v)} \textcolor{skyblue}{A^{(k)}}(
+  \textcolor{orange}{h_{v}^{(k)}},
+  \textcolor{violet}{h_{w}^{(k)}}
+  )
+} \forall (v, u) \in E
+$$
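+
+In the original GAT paper, $A$ is a LeakyReLU of a learned linear form over
+the two (projected) embeddings, normalized with a softmax. A dense sketch
+(assuming `adj` includes self-loops so that $\alpha_{v,v}$ exists, and every
+node has at least one edge):
+
+```python
+import numpy as np
+
+def gat_attention(adj, h, W, a):
+    """alpha[v, u] = softmax over N(v) of LeakyReLU(a . [W h_v || W h_u])."""
+    z = h @ W                                      # project embeddings
+    d = z.shape[1]
+    s = z @ a[:d]                                  # contribution of h_v
+    t = z @ a[d:]                                  # contribution of h_u
+    e = s[:, None] + t[None, :]                    # e[v, u] = a . [z_v || z_u]
+    e = np.where(e > 0, e, 0.2 * e)                # LeakyReLU, slope 0.2
+    e = np.where(adj > 0, e, -np.inf)              # only neighbours compete
+    e = np.exp(e - e.max(axis=1, keepdims=True))   # numerically stable softmax
+    return e / e.sum(axis=1, keepdims=True)
+```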
+
+### Graph Sample and Aggregate (GraphSAGE)
+
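+GraphSAGE replaces the full neighbourhood with a fixed-size uniform sample
+$\mathcal{N}_{s}(v) \subseteq \mathcal{N}(v)$, keeping the cost per node
+constant on huge graphs. In the mean-aggregation variant of the original
+paper, the update (in the notation above, with $\Vert$ denoting
+concatenation) reads:
+
+$$
+\textcolor{orange}{h_{v}^{(k)}} =
+\textcolor{skyblue}{f^{(k)}} \left(
+  \textcolor{skyblue}{W^{(k)}} \cdot \left[
+    \textcolor{orange}{h_{v}^{(k-1)}} \,\Big\Vert\,
+    \underbrace{
+      \frac{
+        \sum_{u \in \mathcal{N}_{s}(v)} \textcolor{violet}{h_{u}^{(k-1)}}
+      }{
+        |\mathcal{N}_{s}(v)|
+      }
+    }_{\text{mean of sampled neighbour embeddings}}
+  \right]
+\right) \forall v \in V
+$$
+
+A sketch of the sampling step (hypothetical `neigh` adjacency-list dict):
+
+```python
+import random
+
+def sample_neighbours(neigh, v, k):
+    """Uniformly sample k neighbours of v (with replacement if too few)."""
+    cand = neigh[v]
+    return random.sample(cand, k) if len(cand) >= k else random.choices(cand, k=k)
+```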
+
+
+### Graph Isomorphism Network (GIN)
+
+$$
+\textcolor{orange}{h_{v}^{(k)}} =
+ \textcolor{skyblue}{f^{(k)}}
+\left(
+ \sum_{u \in \mathcal{N}(v)}
+ \textcolor{violet}{h_{u}^{(k - 1)}} +
+ (
+ 1 +
+ \textcolor{skyblue}{\epsilon^{(k)}}
+ ) \cdot \textcolor{orange}{h_{v}^{(k - 1)}}
+\right)
+\forall v \in V
+$$
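+
+Here $\textcolor{skyblue}{\epsilon^{(k)}}$ is a learnable (or fixed) scalar
+and $\textcolor{skyblue}{f^{(k)}}$ is an MLP; the *sum* aggregation is what
+lets GIN tell different multisets of neighbour embeddings apart, matching the
+power of the Weisfeiler-Lehman isomorphism test. A minimal sketch (with
+`mlp` any callable playing the role of $f^{(k)}$):
+
+```python
+def gin_layer(adj, h, eps, mlp):
+    """f(sum-of-neighbour-embeddings + (1 + eps) * h_v) for every node."""
+    return mlp(adj @ h + (1.0 + eps) * h)
+```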
\ No newline at end of file