Added Polynomials of Laplacian

chris-admin 2025-09-12 22:26:25 +02:00
parent 0e9e33a281
commit 82f8d8e906


If we find parts of the graph that are disconnected, we can simply avoid storing and computing those parts.
## Graph Neural Networks (GNNs)
In the simplest form we take a **graph-in**, **graph-out** approach, with separate MLPs for
vertices, edges and master nodes, applied **one at a time** over each element
$$
\begin{aligned}
V_{i + 1} &= MLP_{V_{i}}(V_{i}) \\
E_{i + 1} &= MLP_{E_{i}}(E_{i}) \\
U_{i + 1} &= MLP_{U_{i}}(U_{i}) \\
\end{aligned}
$$
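A minimal numpy sketch of one such layer; the embedding sizes and the two-layer MLP are hypothetical choices, the point is that each element type is updated by its own MLP, independently:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """A tiny two-layer MLP with ReLU, applied row-wise to embeddings."""
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2

def make_params(d):
    return (rng.normal(size=(d, d)), np.zeros(d),
            rng.normal(size=(d, d)), np.zeros(d))

# hypothetical sizes: 4 vertices, 5 edges, embedding dimension 8
V = rng.normal(size=(4, 8))   # vertex embeddings
E = rng.normal(size=(5, 8))   # edge embeddings
U = rng.normal(size=(1, 8))   # master (global) node embedding

params_V, params_E, params_U = make_params(8), make_params(8), make_params(8)

# one GNN layer: three independent MLPs, one per element type
V_next = mlp(V, *params_V)
E_next = mlp(E, *params_E)
U_next = mlp(U, *params_U)
```
Note that no graph structure is used yet: connectivity only enters with pooling and message passing below.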
### Pooling
> [!CAUTION]
> This step comes after the embedding phase described above

Pooling is a step used to take information from elements different from the ones we are updating
(for example, taking info from edges while making the computation over vertices).
With this approach we usually gather the embeddings of the edges incident to a vertex, concatenate them into a matrix, and
aggregate them by summing.
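A small sketch of sum-pooling edge embeddings into vertices, on a hypothetical 4-vertex graph with edges given as vertex pairs:

```python
import numpy as np

# hypothetical graph: edges as (source, target) vertex pairs
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
E = np.arange(4 * 2, dtype=float).reshape(4, 2)   # one 2-d embedding per edge

n_vertices = 4
pooled = np.zeros((n_vertices, E.shape[1]))
for e_idx, (u, v) in enumerate(edges):
    # each incident edge contributes its embedding to both endpoints
    pooled[u] += E[e_idx]
    pooled[v] += E[e_idx]
```
`pooled[v]` is then the sum-aggregated edge information for vertex $v$, ready to be concatenated with the vertex's own embedding.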
### Message Passing
Take all node embeddings in the neighbourhood of a vertex and perform steps similar to the pooling function.
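On an adjacency matrix this neighbour aggregation is just a matrix product; a sketch on a hypothetical 4-node path graph, using sum aggregation and concatenation as the (hypothetical) update:

```python
import numpy as np

# hypothetical 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)   # one-hot node embeddings, for clarity

# one message-passing step: each node sums its neighbours' embeddings,
# then the aggregate is combined with the node's own embedding
messages = A @ X                                # row v = sum over neighbours of v
X_next = np.concatenate([X, messages], axis=1)  # combine self and neighbourhood info
```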
### Special Layers
<!-- TODO: Read PDF 14 Anelli pg 47 to 52 -->
## Polynomial Filters
### Graph Laplacian
Let's set an order over the nodes of a graph. Given the adjacency matrix $A$, define the diagonal **degree matrix** $D$:
$$
D_{v,v} = \sum_{u} A_{v,u}
$$
In other words, $D_{v, v}$ is the degree of $v$: the number of nodes connected to it.
The **graph Laplacian** of the graph will be
$$
L = D - A
$$
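The construction in numpy, on a hypothetical 4-node graph (edges 0-1, 1-2, 2-0, 2-3):

```python
import numpy as np

# adjacency matrix of a hypothetical 4-node graph
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # D[v, v] = degree of node v
L = D - A                    # graph Laplacian
```
A useful sanity check: every row of $L$ sums to zero, since the degree on the diagonal exactly cancels the $-1$ entries of the neighbours.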
### Polynomials of Laplacian
These polynomials, which have the same dimensions as $L$, can be thought of as **filters**, like in
[CNNs](./../7-Convolutional-Networks/INDEX.md#convolutional-networks)
$$
p_{\vec{w}}(L) = w_{0}I_{n} + w_{1}L^{1} + \dots + w_{d}L^{d} = \sum_{i=0}^{d} w_{i}L^{i}
$$
We can then get the ***filtered node features*** by simply multiplying the polynomial by the node feature vector
$$
\begin{aligned}
\vec{x}' = p_{\vec{w}}(L) \vec{x}
\end{aligned}
$$
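A sketch of the filter $p_{\vec{w}}(L)$ applied to scalar node features, reusing the hypothetical 4-node graph from above (the weights $\vec{w}$ are arbitrary illustrative values):

```python
import numpy as np

# same hypothetical 4-node graph as before
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def poly_filter(w, L):
    """p_w(L) = sum_i w[i] * L^i, with L^0 = I."""
    P, Li = np.zeros_like(L), np.eye(L.shape[0])
    for wi in w:
        P += wi * Li
        Li = Li @ L   # next power of L
    return P

x = np.array([1.0, 2.0, 3.0, 4.0])   # one scalar feature per node
w = [0.5, -0.25, 0.1]                # hypothetical filter weights, degree d = 2
x_filtered = poly_filter(w, L) @ x
```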
> [!NOTE]
> To extract new features for a single vertex, suppose only $w_{1} \neq 0$ (and take $w_{1} = 1$), so that $p_{\vec{w}}(L) = L$.
>
> Observe that we are only taking the row $L_{v}$
>
> $$
> \begin{aligned}
> \vec{x}'_{v} &= (L\vec{x})_{v} \\
> &= \sum_{u \in G} L_{v,u} \vec{x}_{u} \\
> &= \sum_{u \in G} (D_{v,u} - A_{v,u}) \vec{x}_{u} \\
> &= \sum_{u \in G} \left( D_{v,u} \vec{x}_{u} - A_{v,u} \vec{x}_{u} \right) \\
> &= D_{v, v} \vec{x}_{v} - \sum_{u \in \mathcal{N}(v)} \vec{x}_{u}
> \end{aligned}
> $$
>
> Where the last step holds since $D$ is a diagonal matrix, and in the summation we only consider the neighbours
> of $v$
>
> It can be demonstrated that in any graph
>
> $$
> dist_{G}(v, u) > i \rightarrow L_{v, u}^{i} = 0
> $$
>
> More generally, it holds that
>
> $$
> \begin{aligned}
> \vec{x}'_{v} = (p_{\vec{w}}(L)\vec{x})_{v} &= (p_{\vec{w}}(L))_{v} \vec{x} \\
> &= \sum_{i = 0}^{d} w_{i}L_{v}^{i} \vec{x} \\
> &= \sum_{i = 0}^{d} w_{i} \sum_{u \in G} L_{v,u}^{i}\vec{x}_{u} \\
> &= \sum_{i = 0}^{d} w_{i} \sum_{\substack{u \in G \\ dist_{G}(v, u) \leq i}} L_{v,u}^{i}\vec{x}_{u} \\
> \end{aligned}
> $$
>
> So the degree of the polynomial determines the maximum number of hops
> included during the filtering stage, as if it defined a [kernel](./../7-Convolutional-Networks/INDEX.md#filters)
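
A quick numerical check of the locality property $dist_{G}(v, u) > i \rightarrow L_{v, u}^{i} = 0$, on a hypothetical 4-node path graph where $dist_{G}(0, 3) = 3$:

```python
import numpy as np

# path graph 0-1-2-3: dist(0, 3) = 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

L2 = np.linalg.matrix_power(L, 2)
L3 = np.linalg.matrix_power(L, 3)

# entries L^i[v, u] vanish whenever dist(v, u) > i
print(L[0, 2], L[0, 3])   # dist 2 and 3 > 1: both 0.0
print(L2[0, 3])           # dist 3 > 2: still 0.0
print(L3[0, 3])           # dist 3 <= 3: nonzero
```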
### ChebNet