This commit is contained in:
Christian Risi 2025-01-08 15:08:32 +01:00
parent d424d2e7aa
commit 417d87c0eb
31 changed files with 12660 additions and 98 deletions

# Modern Control
Generally speaking, we already know much about classical control, in the form
of:
$$
\dot{x}(t) = ax(t) + bu(t) \longleftrightarrow sX(s) - x(0) = aX(s) + bU(s)
$$
The left side is a differential equation in continuous time, while the
right side is its transformation into the complex frequency domain.
> [!NOTE]
>
> $$
> \dot{x}(t) = ax(t) + bu(t) \longleftrightarrow x(k+1) = ax(k) + bu(k)
> $$
>
> These are equivalent, but the latter one is in discrete time.
>
## A brief recap over Classical Control
Be $Y(s)$ our `output variable` in `classical control` and $U(s)$ our
`input variable`. The associated `transfer function` $G(s)$ is:
$$
G(s) = \frac{Y(s)}{U(s)}
$$
### Root Locus
<!-- TODO: write about Root Locus -->
### Bode Diagram
### Nyquist Diagram
## State Space Representation
### State Matrices
A state space representation has 4 Matrices: $A, B, C, D$ with coefficients in
$\R$:
- $A$: State Matrix `[x_rows, x_columns]`
- $B$: Input Matrix `[x_rows, u_columns]`
- $C$: Output Matrix `[y_rows, x_columns]`
- $D$: Direct Coupling Matrix `[y_rows, u_columns]`
$$
\begin{cases}
\dot{x}(t) = Ax(t) + Bu(t) \;\;\;\; \text{Dynamic of the system}\\
y(t) = C{x}(t) + Du(t) \;\;\;\; \text{Static of the outputs}
\end{cases}
$$
This can be represented with the following diagrams:
#### Continuous Time:
![continuous state space diagram](../Images/Modern-Control/state-space-time.png)
---
#### Discrete time:
![discrete state space diagram](../Images/Modern-Control/state-space-discrete.png)
### State Vector
This is a state vector `[x_rows, 1]`:
$$
x(t) = \begin{bmatrix}
x_1(t)\\
\dots\\
x_n(t)
\end{bmatrix}
\text{or} \:
x(k) = \begin{bmatrix}
x_1(k)\\
\dots\\
x_n(k)
\end{bmatrix}
$$
Basically, from this we can know each next step of the state vector, represented
as:
$$
x(k + 1) = f\left(
x(k), u(k)
\right) = Ax(k) + Bu(k)
$$
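The update law above can be iterated directly; a minimal sketch for the scalar case $x(k+1) = ax(k) + bu(k)$, with assumed values $a = 0.5$, $b = 1$ and a unit step input:

```python
def simulate(a, b, x0, inputs):
    """Iterate x(k+1) = a*x(k) + b*u(k) and return the state trajectory."""
    x = x0
    trajectory = [x]
    for u in inputs:
        x = a * x + b * u
        trajectory.append(x)
    return trajectory

# Unit step input on a stable scalar system (a = 0.5, b = 1, x(0) = 0);
# the state converges toward the equilibrium b / (1 - a) = 2
states = simulate(0.5, 1.0, 0.0, [1.0] * 5)
```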
### Examples
#### Cart attached to a spring and a damper, pulled by a force
![cart being pulled image](../Images/Modern-Control/cart-pulled.png)
##### Formulas
- <span style="color:#55a1e6">Spring: $\vec{F} = -k\vec{x}$</span>
- <span style="color:#ff8382">Fluid Damper: $\vec{F_D} = -b \vec{\dot{x}}$</span>
- Applied Force: $\vec{F_p}(t)$
- Total Force: $m \vec{\ddot{x}}(t) = \vec{F_p}(t) -b \vec{\dot{x}} -k\vec{x}$
> [!TIP]
>
> A rule of thumb is to have as many variables in our state as the max number
> of derivatives we encounter. In this case `2`
>
> Solve the equation for the highest derivative order
>
> Then, put all variables equal to the previous one derivated:
>
> $$
> x(t) = \begin{bmatrix}
> x_1(t)\\
> x_2(t) = \dot{x_1}(t)\\
> \dots\\
> x_n(t) = \dot{x}_{n-1}(t)
> \end{bmatrix}
> \;
> \dot{x}(t) = \begin{bmatrix}
> \dot{x_1}(t) = x_2(t)\\
> \dot{x_2}(t) = x_3(t)\\
> \dots\\
> \dot{x}_{n-1}(t) = x_{n}(t)\\
> \dot{x}_{n}(t) = \text{our formula}
> \end{bmatrix}
> $$
>
Now in our state we may express `position` and `speed`, while in our
`next_state` we'll have `speed` and `acceleration`:
$$
x(t) = \begin{bmatrix}
x_1(t)\\
x_2(t) = \dot{x_1}(t)
\end{bmatrix}
\;
\dot{x}(t) = \begin{bmatrix}
\dot{x_1}(t) = x_2(t)\\
\dot{x_2}(t) = \ddot{x_1}(t)
\end{bmatrix}
$$
Our new state is then:
$$
\begin{cases}
\dot{x}_1(t) = x_2(t)\\
\dot{x}_2(t) = \frac{1}{m} \left( \vec{F_p}(t) - b x_2(t) - kx_1(t) \right)
\end{cases}
$$
Let's say we want to check both the `position` and the `speed` of the system;
our State Space will be:
$$
A = \begin{bmatrix}
0 & 1 \\
- \frac{k}{m} & - \frac{b}{m} \\
\end{bmatrix}
B = \begin{bmatrix}
0 \\
\frac{1}{m} \\
\end{bmatrix}
C = \begin{bmatrix}
1 & 0 \\
0 & 1
\end{bmatrix}
D = \begin{bmatrix}
0 \\
0
\end{bmatrix}
$$
Let's say we only want to check the `position` of the system; our
State Space will be:
$$
A = \begin{bmatrix}
0 & 1 \\
- \frac{k}{m} & - \frac{b}{m} \\
\end{bmatrix}
B = \begin{bmatrix}
0 \\
\frac{1}{m} \\
\end{bmatrix}
C = \begin{bmatrix}
1 & 0
\end{bmatrix}
D = \begin{bmatrix}
0
\end{bmatrix}
$$
> [!TIP]
> To plot $\vec{x}$ against time, multiply $\vec{\dot{x}}$ by the
> `time_step` and add the result to the current state[^so-how-to-plot-ssr]
>
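The tip above is exactly forward-Euler integration; a sketch for the cart example, with assumed values $m = k = b = 1$ and a unit step force:

```python
import numpy as np

# Cart with spring and damper; assumed values m = k = b = 1
m, k, b = 1.0, 1.0, 1.0
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])

dt = 0.01                       # time_step
x = np.zeros((2, 1))            # state: [position, speed]
history = []
for _ in range(1000):           # 10 s of simulation
    u = 1.0                     # unit step force
    x_dot = A @ x + B * u       # the dynamic of the system
    x = x + x_dot * dt          # multiply x_dot by the time_step, add it back
    history.append(x.copy())

# the position settles near the static value F/k = 1
final_position = history[-1][0, 0]
```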
### Horner Factorization
Given a complete polynomial of order `n`, you can factor it
in this way:
$$
\begin{align*}
p(s) &= s^5 + 4s^4 + 5s^3 + 2s^2 + 10s + 1 =\\
&= s ( s^4 + 4s^3 + 5s^2 + 2s + 10) + 1 = \\
&= s ( s (s^3 + 4s^2 + 5s + 2) + 10) + 1 = \\
&= s ( s (s (s^2 + 4s + 5) + 2) + 10) + 1 = \\
&= s ( s (s ( s (s + 4) + 5) + 2) + 10) + 1
\end{align*}
$$
Pairing each $s$ with the number inside its parenthesis yields
this block diagram:
![horner factorization to diagram block](../Images/Modern-Control/horner-factorization.png)
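Horner's scheme is also how you evaluate such a polynomial efficiently in code; a small sketch using the coefficients above:

```python
def horner(coefficients, s):
    """Evaluate a polynomial, coefficients given from highest to lowest power."""
    result = 0.0
    for c in coefficients:
        result = result * s + c   # one nesting level of the factorization
    return result

# p(s) = s^5 + 4s^4 + 5s^3 + 2s^2 + 10s + 1
p = [1, 4, 5, 2, 10, 1]
value = horner(p, 2.0)   # same value as evaluating the expanded polynomial
```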
### Case Studies
<!-- TODO: Complete case studies -->
- PAGERANK
- Congestion Control
- Video Player Control
- Deep Learning
[^so-how-to-plot-ssr]: [Stack Exchange | How to plot state space variables against time on unit step input? | 05 January 2025 ](https://electronics.stackexchange.com/questions/307227/how-to-plot-state-space-variables-against-time-on-unit-step-input)

# Relation to Classical Control
## A Brief Recap of Discrete Control
Suppose we want to control something ***physical***, hence intrinsically
***continuous-time***. We can model our controller in the `z` domain and obtain
$G_c(z)$. But how do we connect these systems?
![scheme of how digital control interconnects with classical control](../Images/Relation-to-classical-control/digital-control.png)
#### Constraints
- $T_s$: Sampling time
- $f_s \geq 2f_m$: Sampling Frequency must be at least 2 times the max frequency
of the system
#### Parts of the system
1. Take the `reference` and `output` and compute the `error`
2. Pass this signal through an `antialiasing filter` to avoid ***aliasing***
3. Transform the `Laplace Transform` into a `Z-Transform` by using the following
relation:\
$z = e^{sT}$
4. Control everything through a `control block` engineered through
`digital control`
5. Transform the `digital signal` into an `analog signal` through the use of a
`holder` (in this case a `zero order holder`)
6. Pass the signal to our `analog plant` (which is our physical system)
7. Take the `output` and feed it back
### Zero Order Holder
It has the following formula:
$$
ZoH = \frac{1}{s} \left( 1 - e^{-sT}\right)
$$
#### Commands:
- `c2d(sysc, Ts [, method | opts] )`[^matlab-c2d]: Converts `LTI` systems into
`Discrete` ones
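Outside MATLAB, a zero-order-hold conversion can be sketched by exponentiating an augmented matrix (a hand-rolled sketch of the idea, not the actual `c2d` implementation):

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via its Taylor series (fine for small M * Ts)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

def c2d_zoh(A, B, Ts):
    """Discretize x' = Ax + Bu assuming u is held by a zero order holder."""
    n, m = A.shape[0], B.shape[1]
    # exp([[A, B], [0, 0]] * Ts) = [[Ad, Bd], [0, I]]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    blk = expm_series(M * Ts)
    return blk[:n, :n], blk[:n, n:]

# Scalar check: x' = -x + u with Ts = 0.1 gives Ad = e^{-0.1}, Bd = 1 - e^{-0.1}
Ad, Bd = c2d_zoh(np.array([[-1.0]]), np.array([[1.0]]), 0.1)
```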
## Relation between $S(A, B, C, D)$ and $G(s)$
### From $S(A, B, C, D)$ to $G(s)$
Be this our $S(A, B, C, D)$ system:
$$
\begin{cases}
\dot{x}(t) = Ax(t) + Bu(t) \;\;\;\; \text{Dynamic of the system}\\
y(t) = C{x}(t) + Du(t) \;\;\;\; \text{Static of the outputs}
\end{cases}
$$
now let's make from this a `Laplace Transform`:
$$
\begin{align*}
& \begin{cases}
sX(s) - x(0)= AX(s) + BU(s) \\
Y(s) = CX(s) + DU(s)
\end{cases} \longrightarrow && \text{Normal Laplace Transformation}\\
& \longrightarrow
\begin{cases}
sX(s) = AX(s) + BU(s) \\
Y(s) = CX(s) + DU(s)
\end{cases} \longrightarrow && \text{Usually $x(0)$ is 0}\\
& \longrightarrow
\begin{cases}
X(s) \left(sI -A \right) =BU(s) \\
Y(s) = CX(s) + DU(s)
\end{cases} \longrightarrow && \text{Collect $X(s)$, using $sX(s) = sIX(s)$}\\
& \longrightarrow
\begin{cases}
X(s) = \left(sI - A\right)^{-1}BU(s) \\
Y(s) = CX(s) + DU(s)
\end{cases} \longrightarrow && \\
& \longrightarrow
\begin{cases}
X(s) = \left(sI - A\right)^{-1}BU(s) \\
Y(s) = C\left(sI - A\right)^{-1}BU(s) + DU(s)
\end{cases} \longrightarrow && \text{Substitute for $X(s)$}\\
& \longrightarrow
\begin{cases}
X(s) = \left(sI - A\right)^{-1}BU(s) \\
Y(s) = \left(C\left(sI - A\right)^{-1}B + D\right)U(s)
\end{cases} \longrightarrow && \text{Group for $U(s)$}\\
& \longrightarrow
\begin{cases}
X(s) = \left(sI - A\right)^{-1}BU(s) \\
\frac{Y(s)}{U(s)} = \left(C\left(sI - A\right)^{-1}B + D\right)
\end{cases} \longrightarrow && \text{Get $G(s)$ from definition}\\
\longrightarrow \;& G(s) = \left(C\left(sI - A\right)^{-1}B + D\right) &&
\text{Formal definition of $G(s)$}\\
\end{align*}
$$
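The final formula is easy to check numerically; a sketch evaluating $G(s)$ at a point for the cart example from before (assumed values $m = k = 1$, $b = 0.5$):

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """G(s) = C (sI - A)^{-1} B + D, evaluated at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

# Cart example with m = k = 1, b = 0.5, position as the only output
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# For this realization G(s) = 1 / (s^2 + 0.5 s + 1); check at s = 1
G1 = transfer_function(A, B, C, D, 1.0)
```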
#### Properties
- Since $G(s)$ can be ***technically*** a matrix, this may represent a
`MIMO System`
- The system is ***always*** `proper` (its denominator's order is at least
that of the numerator)
- If $D$ is $0$, then the system is `strictly proper` and ***realizable***
- While each $S(A_i, B_i, C_i, D_i)$ can be transformed into a ***single***
$G(s)$, the converse isn't true: a single $G(s)$ admits many state spaces
- Any particular $S(A_a, B_a, C_a, D_a)$ is called a `realization`
- $det(sI - A)$ := Characteristic Polynomial
- $det(sI - A) = 0$ := Characteristic Equation
- $eig(A)$ := Solutions of the Characteristic Equation and `poles` of the system
- If the system is `SISO`, then $C \in \R^{1 \times n}$,
$B \in \R^{n \times 1}$ and $D \in \R$, meaning
that:
$$
\begin{align*}
G(s) &= \left(C\left(sI - A\right)^{-1}B + D\right) =\\
&= \left(C \frac{Adj\left(sI - A\right)}{det\left(sI - A\right)}B + D\right)
= && \text{Decompose the inverse in its formula}\\
&= \frac{n(s)}{det\left(sI - A\right)} \in \R
\end{align*}
$$
> [!NOTE]
> As you can see here, by decomposing the inverse matrix in its formula it's
> easy to see that the divisor is a `scalar`, a `number`.
>
> Moreover, because of how $B$ and $C$ are composed, the result of this Matrix
> multiplication is a `scalar` too, hence we can write this as a single formula.
>
> Another thing to notice, regardless of whether this is a `MIMO` or `SISO`
> system, is that the divisor contains all the `eigenvalues` of $A$ as `poles` by
> [definition](../Formularies/GEOMETRY-FORMULARY.md/#eigenvalues)
>
### Transforming a State-Space into Another One
We just need a non-singular transformation matrix $P$ (a change of basis):
$$
\begin{align*}
&A_1, B_1, C_1, D_1 \\
&A_2 = PA_1P^{-1} \\
&B_2 = PB_1 \\
&C_2 = C_1P^{-1} \\
&D_2 = D_1
\end{align*}
$$
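A quick numeric check, with an arbitrarily assumed $P$, that such a change of basis leaves the poles of the system untouched:

```python
import numpy as np

# An assumed two-state realization
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])

P = np.array([[1.0, 1.0], [0.0, 1.0]])   # any non-singular matrix works
P_inv = np.linalg.inv(P)

A2 = P @ A1 @ P_inv
B2 = P @ B1
C2 = C1 @ P_inv

# Eigenvalues (the system poles) are invariant under the transformation
poles_1 = np.sort(np.linalg.eigvals(A1))
poles_2 = np.sort(np.linalg.eigvals(A2))
```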
[^matlab-c2d]: [Matlab Official Docs | c2d | 05 January 2025](https://it.mathworks.com/help/control/ref/dynamicsystem.c2d.html)

# Canonical Forms
In order to see if we are in one of these canonical forms, just write the
equations from the block diagram, and find the associated $S(A, B, C, D)$.
> [!TIP]
> To sketch a rough diagram, use
> [Horner Factorization](MODERN-CONTROL.md/#horner-factorization) to find the
> $a_i$ values. Then route each $b_i$ to the right integrator by shifting it
> left, starting from the rightmost, by as many places as its associated
> power of $s$
## Control Canonical Form
It is in such forms when:
$$
A = \begin{bmatrix}
- a_1 & -a_2 & -a_3 & \dots & -a_{n-1} &-a_n\\
1 & 0 & 0 & \dots & 0 & 0\\
0 & 1 & 0 & \dots & 0 & 0\\
\dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1 & 0
\end{bmatrix}
B = \begin{bmatrix}
1 \\ 0 \\ \dots \\ \dots \\ 0
\end{bmatrix}
C = \begin{bmatrix}
b_1 & b_2 & \dots & b_n
\end{bmatrix}
D = \begin{bmatrix}
0
\end{bmatrix}
$$
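The structure above can be assembled mechanically from the coefficients; a sketch (the helper name `control_canonical` is my own):

```python
import numpy as np

def control_canonical(a, b):
    """Build (A, B, C) in Control Canonical form.

    a: [a_1, ..., a_n] denominator coefficients (monic polynomial assumed)
    b: [b_1, ..., b_n] numerator coefficients
    """
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = -np.array(a, dtype=float)   # first row: -a_1 ... -a_n
    A[1:, :-1] = np.eye(n - 1)            # shifted identity below it
    B = np.zeros((n, 1))
    B[0, 0] = 1.0
    C = np.array([b], dtype=float)
    return A, B, C

# Denominator s^2 + 3s + 2: A should have eigenvalues (poles) -1 and -2
A, B, C = control_canonical([3.0, 2.0], [0.0, 1.0])
```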
## Modal Canonical Forms
> [!CAUTION]
> This form is the most difficult to find, as this varies drastically in cases
> of double roots
>
$$
A = \begin{bmatrix}
- a_1 & 0 & 0 & \dots & 0\\
0 & -a_2 & 0 & \dots & 0\\
0 & 0 & -a_3 & \dots & 0\\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & 0 & -a_n
\end{bmatrix}
B = \begin{bmatrix}
1 \\ 1 \\ \dots \\ \dots \\ 1
\end{bmatrix}
C = \begin{bmatrix}
b_1 & b_2 & \dots & b_n
\end{bmatrix}
D = \begin{bmatrix}
0
\end{bmatrix}
$$
## Observable Canonical Form
<!--TODO: Correct here -->
[^reference-input-pole-allocation]: [MIT | 06 January 2025 | pg. 2](https://ocw.mit.edu/courses/16-30-feedback-control-systems-fall-2010/c553561f63feaa6173e31994f45f0c60_MIT16_30F10_lec11.pdf)

# Reachability and Observability
## Reachability
In the nonlinear world, we can only solve a `system`
***numerically***, through an `iterative approach`:
$$
\begin{align*}
\dot{x}(t) &= f(x(t), u(t)) && t \in \R \\
x(k+1) &= f(x(k), u(k)) && t \in \N
\end{align*}
$$
In the linear world, we can do this ***analytically***:
> [!TIP]
> We usually consider $x(0) = 0$
$$
\begin{align*}
x(1) &= Ax(0) + Bu(0) \\
x(2) &= Ax(1) + Bu(1) &&= A^{2}x(0) + ABu(0) + Bu(1) \\
x(3) &= Ax(2) + Bu(2) &&= A^{3}x(0) + A^{2}Bu(0) + ABu(1) + Bu(2) \\
\dots \\
x(k) &= Ax(k-1) + Bu(k-1) &&=
\underbrace{A^{k}x(0)}_\text{Free Dynamic} +
\underbrace{A^{k-1}Bu(0) + \dots + ABu(k-2) + Bu(k-1) }
_\text{Forced Dynamic} \\[40pt]
x(k) &= \begin{bmatrix}
B & AB & \dots & A^{k-2}B & A^{k-1}B
\end{bmatrix}
\begin{bmatrix}
u(k-1) \\ u(k-2) \\ \dots \\ u(1) \\ u(0)
\end{bmatrix}
\end{align*}
$$
Now, there's a relation between the characteristic polynomial and the matrix
built from $A$ and $B$:
$$
\begin{align*}
&p_c(s) = det(sI -A) =
s^n + a_{n-1} s^{n-1} + a_{n-2}s^{n - 2} + \dots + a_1s + a_0 = 0
\\[10pt]
&\text{Apply the Cayley-Hamilton theorem:} \\
&p_c(A) = A^{n} + a_{n-1}A^{n-1} + \dots + a_1A + a_0I = 0\\[10pt]
&\text{Multiply $p_c(A)$ by $B$:} \\
&p_c(A)B = A^{n}B + a_{n-1}A^{n-1}B + \dots + a_1AB + a_0B = 0 \rightarrow \\
&\rightarrow a_{n-1}A^{n-1}B + \dots + a_1AB + a_0B = -A^{n}B
\end{align*}
$$
All of this lets us conclude something about the Kalman
Controllability Matrix:
$$
K_c = \begin{bmatrix}
B & AB & \dots & A^{n-2}B & A^{n-1}B
\end{bmatrix}
$$
Moreover, $x(n) \in range(K_c)$ and $range(K_c)$ is said
`reachable space`.\
In particular if $rank(K_c) = n \rightarrow range(K_c) = \R^{n}$ this
is `fully reachable` or `controllable`
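With `numpy` the rank test is immediate; a sketch building $K_c$ for an assumed two-state system:

```python
import numpy as np

def controllability_matrix(A, B):
    """K_c = [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # next power of A times B
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Kc = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(Kc)   # rank n = 2: fully reachable
```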
> [!TIP]
> Some others use `non-singularity` instead of the $range()$ definition

> [!NOTE]
> In the Franklin-Powell book there's another definition of $K_c$, which
> comes from the fact that we needed to find a way to transform
> ***any*** `realization` into the
> [`Control Canonical Form`](./CANONICAL-FORMS.md/#control-canonical-form)
>
## Observability
This is the capability of being able to deduce the `initial state` by just
observing the `output`.
Let's focus on the $y(t)$ part:
$$
y(t) =
\underbrace{Cx(t)}_\text{Free Output} +
\underbrace{Du(t)}_\text{Forced Output}
$$
Assume that $u(t) = 0$:
$$
\begin{align*}
& y(0) = Cx(0) && x(0) = x(0) \\
& y(1) = Cx(1) && x(1) = Ax(0) &&
\text{Since $u(t) = 0 \rightarrow Bu(t) = 0$} \\
& y(2) = Cx(2) && x(2) = A^2x(0) \\
& \vdots && \vdots \\
&y(n) = Cx(n) && x(n) = A^nx(0) \rightarrow \\
\rightarrow &y(n) = CA^nx(0)
\end{align*}
$$
Now we have that:
$$
\begin{align*}
\vec{y} = &\begin{bmatrix}
C \\
CA \\
\vdots \\
CA^{n-1}
\end{bmatrix} x(0) \rightarrow \\
\rightarrow x(0) = &\begin{bmatrix}
C \\
CA \\
\vdots \\
CA^{n-1}
\end{bmatrix}^{-1}\vec{y} \rightarrow \\
\rightarrow x(0) = & \frac{Adj(K_o)}{det(K_o)} \vec{y}
\end{align*}
$$
For the same reasons as before, we can use Cayley-Hamilton here too. Also, note that if $K_o$ is `singular`, there can't be an inverse.
As before, $K_o$ is the matrix that allows us to see if there exists a
[`Canonical Observable Form`](CANONICAL-FORMS.md/#observable-canonical-form).\
The `non-observable space` is equal to:
$$
X_{no} = Kern(K_o) = \left\{ x \in X : K_ox = 0 \right\}
$$
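The dual construction in code; a sketch (with an assumed diagonal system) showing an observable and a non-observable choice of $C$:

```python
import numpy as np

def observability_matrix(A, C):
    """K_o = [C; CA; ...; C A^{n-1}], stacked row-wise."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)   # next power of A on the right
    return np.vstack(blocks)

A = np.array([[1.0, 0.0], [0.0, 2.0]])   # two decoupled modes

Ko_full = observability_matrix(A, np.array([[1.0, 1.0]]))   # sees both modes
Ko_poor = observability_matrix(A, np.array([[1.0, 0.0]]))   # blind to mode 2

rank_full = np.linalg.matrix_rank(Ko_full)   # 2: fully observable
rank_poor = np.linalg.matrix_rank(Ko_poor)   # 1: the kernel is non-trivial
```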
## Decomposition of these spaces
The space of possible points is $X$, and it decomposes as
$X = X_r \bigoplus X_{r}^{\perp} = X_r \bigoplus X_{nr}$\
Analogously we can do the same with the observable spaces:
$X = X_{no} \bigoplus X_{no}^{\perp} = X_{no} \bigoplus X_{o}$

# State Feedback
## State Feedback
When you have your $G(s)$, just put the poles where you want them to be and compute the corresponding $k_i$.
What we are doing is to put $u(t) = -Kx(t)$, feeding back the `state` to the
`system`.
> [!WARNING]
> While the $K$ matrix is ours to choose, the $A$ matrix is our plant, so
> it's impossible to change the $a_i$ without changing the `system`

> [!CAUTION]
> We are doing some oversimplifications here, so this can't be used as a formal
> proof of whatever we are saying. To see more, look at
> [Ackerman's formula](https://en.wikipedia.org/wiki/Ackermann%27s_formula).
>
> However, it is possible to add a fictitious reference input to recover
> a similar proof for the
> characteristic equation[^reference-input-pole-allocation]
While it is possible to allocate poles in any form, the Control Canonical
one makes this task very easy:
$$
A = \begin{bmatrix}
- a_1 \textcolor{#ff8382}{-k_1}& -a_2 \textcolor{#ff8382}{-k_2}
& -a_3 \textcolor{#ff8382}{-k_3}& \dots
& -a_{n-1}\textcolor{#ff8382}{-k_{n-1}}
&-a_n \textcolor{#ff8382}{-k_n}\\
1 & 0 & 0 & \dots & 0 & 0\\
0 & 1 & 0 & \dots & 0 & 0\\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1 & 0
\end{bmatrix}
$$
This changes our initial considerations, however. The new Characteristic Equation
becomes:
$$
det\left(sI - (A - BK)\right)
$$
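In the Control Canonical form, placing poles reduces to matching polynomial coefficients; a numeric sketch with assumed plant coefficients and desired poles at $-2$ and $-3$:

```python
import numpy as np

# Plant in Control Canonical form: s^2 + a_1 s + a_2, assumed a_1 = a_2 = 1
a1, a2 = 1.0, 1.0
A = np.array([[-a1, -a2],
              [1.0,  0.0]])
B = np.array([[1.0], [0.0]])

# Desired characteristic polynomial: (s + 2)(s + 3) = s^2 + 5s + 6
k1 = 5.0 - a1          # matches the s coefficient
k2 = 6.0 - a2          # matches the constant coefficient
K = np.array([[k1, k2]])

# det(sI - (A - BK)) now has its roots at the desired poles
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```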
## State Observer
What happens if we don't have enough sensors to measure the state in
`realtime`?
> [!NOTE]
> Measuring the state involves a sensor, which most of the time is
> either inconvenient to place because of space, or inconvenient
> economically speaking

In this case we `observe` our output to `estimate` our state
($\hat{x}$). This new state will be used in our $G_c$ block:
$$
\dot{\hat{x}} = A\hat{x} + Bu + L(\hat{y} - y) = A\hat{x} + Bu +
LC(\hat{x} - x)
$$
Since we are estimating our state, we need the estimator to be fast,
**at least 6 times faster** than our `plant`, with $u = -K\hat{x}$.
Let's compute the error we introduce in our system:
$$
\begin{align*}
e &= x - \hat{x} \rightarrow \\
\rightarrow \dot{e} &= \dot{x} - \dot{\hat{x}} = \\
&= Ax - BK\hat{x} - A\hat{x} + BK\hat{x} + LCe = \\
&= Ae + LCe = \left(A + LC\right)e
\end{align*}
$$
The error dynamic is governed by $eig(A + LC)$, which $L$ must place in the
left half-plane.
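A numeric sketch of choosing the observer gain (assumed plant numbers; with the convention $\dot{\hat{x}} = A\hat{x} + Bu + LC(\hat{x} - x)$, under which the estimation error obeys $\dot{e} = (A + LC)e$):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed plant, poles at -1 and -2
C = np.array([[1.0, 0.0]])                 # we only measure position

# With the convention  x_hat' = A x_hat + B u + L C (x_hat - x),
# the estimation error obeys  e' = (A + LC) e,
# so L must make A + LC stable (and much faster than the plant)
L = np.array([[-14.0], [-28.0]])           # places eig(A + LC) at -8 and -9

observer_poles = np.linalg.eigvals(A + L @ C)
```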
## Making a Controller
So, this is how we put together all of the blocks seen until now.
![block diagram](../Images/State-Feedback/block-diagram.png)
From this diagram we can immediately deduce that, to the controller, the input
is $y$ and the output is $u$. By treating the blue part as a `black box`, we can
say that:
$$
\begin{cases}
\dot{\hat{x}} = A\hat{x} - BK\hat{x} + LC\hat{x} - Ly \\
u = -K\hat{x}
\end{cases}
$$
So our $S(A_c, B_c, C_c, D_c)$ is:
$$
\begin{align*}
&A_c = A_p - B_pK + LC_p \\
&B_c = -L \\
&C_c = -K \\
&D_c = 0 \\
\end{align*}
$$
and so, you just need to substitute these matrices into the equation of
$G_p(s)$ and you get your controller

# Example 3
## Double Mass Cart
![double mass cart](./../../Images/Examples/Example-3/double-cart.png)
### Formulas
- Resulting forces for cart 1:\
$
m_1 \ddot{p}_1 = k_2(p_2 - p_1) + b_2( \dot{p}_2 - \dot{p}_1) -
k_1 p_1 - b_1 \dot{p}_1
$
- Resulting forces for cart 2:\
$
m_2 \ddot{p}_2 = F - k_2(p_2 - p_1) - b_2( \dot{p}_2 - \dot{p}_1)
$
### Reasoning
We now have 2 different accelerations. The highest order of derivatives is 2 for
2 variables, hence we need 4 variables in the `state`:
$$
x = \begin{bmatrix}
x_1 = p_1\\
x_2 = p_2\\
x_3 = \dot{p}_1\\
x_4 = \dot{p}_2
\end{bmatrix}
\dot{x} = \begin{bmatrix}
\dot{x}_1 = \dot{p}_1 = x_3 \\
\dot{x}_2 = \dot{p}_2 = x_4\\
\dot{x}_3 = \ddot{p}_1 =
\frac{1}{m_1} \left[ k_2(x_2 - x_1) + b_2( x_4 - x_3) -
k_1 x_1 - b_1 x_3 \right]\\
\dot{x}_4 = \ddot{p}_2 =
\frac{1}{m_2} \left[ F - k_2(x_2 - x_1) - b_2( x_4 - x_3) \right]\\
\end{bmatrix}
$$
Let's write our $S(A, B, C, D)$:
$$
A = \begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
% 3rd row
- \frac{k_2 + k_1}{m_1} &
\frac{k_2}{m_1} &
-\frac{b_2 + b_1}{m_1} &
\frac{b_2}{m_1} \\
% 4th row
\frac{k_2}{m_2} &
- \frac{k_2}{m_2} &
\frac{b_2}{m_2} &
- \frac{b_2}{m_2} \\
\end{bmatrix}
B = \begin{bmatrix}
0 \\
0 \\ 0 \\ \frac{1}{m_2}
\end{bmatrix}
C = \begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}
D = \begin{bmatrix}
0 \\
0
\end{bmatrix}
$$
## Suspended Mass
> [!NOTE]
> For those of you who followed the CNS course, refer to the professor's
> PDF for this exercise, as it has some unclear initial conditions
>
> However, in the formulas section, I'll use his formulas directly
![suspended mass](./../../Images/Examples/Example-3/suspended-mass.png)
### Formulas
- Resulting forces for mass:\
$
m \ddot{p} = -k(p - r) -b(\dot{p} - \dot{r})
$
### Reasoning
$$
x = \begin{bmatrix}
x_1 = p \\
x_2 = \dot{x}_1
\end{bmatrix}
\dot{x} = \begin{bmatrix}
\dot{x}_1 = x_2 \\
\dot{x}_2 = \frac{1}{m} \left[-k(x_1 - r) -b(x_2 - \dot{r}) \right]
\end{bmatrix}
$$
<!-- TODO: Correct here looking from book -->
> [!WARNING]
> The info here is wrong

Let's write our $S(A, B, C, D)$:
$$
A = \begin{bmatrix}
0 & 1\\
-\frac{k}{m} & - \frac{b}{m}
\end{bmatrix}
B = \begin{bmatrix}
0 \\
\frac{k + sb}{m}
\end{bmatrix}
C = \begin{bmatrix}
1 & 0
\end{bmatrix}
D = \begin{bmatrix}
0 & 0
\end{bmatrix}
$$

# Control Formulary
## Settling time
$
T_s = - \frac{\ln(a_{\%})}{\zeta \omega_{n}}
$
- $a_{\%}$ := Settling tolerance, as a fraction (e.g. $0.02$ for $2\%$)
- $\zeta$ := Damping ratio
- $\omega_{n}$ := Natural frequency
## Overshoot
$
\mu_{p}^{\%} = 100 e^{
\left(
\frac{- \zeta \pi}{\sqrt{1 - \zeta^{2}}}
\right)
}
$
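Both formulas are one-liners in code; a sketch of the overshoot one for $\zeta = 0.5$:

```python
import math

def overshoot_percent(zeta):
    """Percent overshoot of an underdamped second order system."""
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

mu = overshoot_percent(0.5)   # roughly 16.3 %
```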
## Reachable Space
$X_r = Span(K_c)$
$X = X_r \bigoplus X_{r}^{\perp} = X_r \bigoplus X_{nr}$
> [!TIP]
> Since $X_{nr} = X_r^{\perp}$ we can find a set of perpendicular
> vectors by finding $Ker(X_r^{T})$
## Non Observable Space
$X_{no} = Kern(K_o)$
$X = X_{no} \bigoplus X_{no}^{\perp} = X_{no} \bigoplus X_{o}$
> [!TIP]
> Since $X_{o} = X_{no}^{\perp}$ we can find a set of perpendicular
> vectors by finding $Kern(X_{no}^{T})$

# Geometry Formulary
## Inverse of a Matrix
$A^{-1} = \frac{1}{det(A)} Adj(A)$
## Adjugate Matrix
The adjugate of a matrix $A$ is the `transpose` of the `cofactor matrix`:\
$Adj(A) = C^{T}$
### $(i-j)$-minor (AKA $M_{ij}$)
$M_{ij}$ := Determinant of the matrix $B$ got by removing the
***$i$-row*** and the ***$j$-column*** from matrix $A$
### Cofactors
$C$ is the matrix of `cofactors` of a matrix $A$ where all the elements $c_{ij}$
are defined like this:\
$
c_{ij} = \left( -1\right)^{i + j}M_{ij}
$
## Eigenvalues
By starting from the definition of `eigenvectors`:\
$A\vec{v} = \lambda\vec{v}$
As we can see, the vector $\vec{v}$ was unaffected by this matrix
multiplication appart from a scaling factor $\lambda$, called `eigenvalue`
By rewriting this formula we get:\
$
\left(A - \lambda I\right)\vec{v} = 0
$
This is solved for:\
$\det(A- \lambda I) = 0$
> [!NOTE]
> If the determinant is 0, then $(A - \lambda I )$ is not invertible, so
> we can't solve the previous equation by using the trivial solution (which
> can't be taken into account since $\vec{v}$ is not $0$ by definition)
## Cayley-Hamilton
Each square matrix over a `commutative ring` satisfies its own
characteristic equation $det(\lambda I - A) = 0$
> [!TIP]
> In other words, once found the characteristic equation, we can
> substitute the ***unknown*** variable $\lambda$ with the matrix itself
> (***known***),
> powered to the correspondent power
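This is easy to verify numerically; a sketch substituting an assumed $2 \times 2$ matrix into its own characteristic polynomial:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])

# Characteristic polynomial: det(lambda*I - A) = lambda^2 - 5*lambda + 6
coeffs = np.poly(A)   # [1, -5, 6] for this A

# Substitute A for lambda: A^2 - 5A + 6I must be the zero matrix
p_of_A = A @ A - 5.0 * A + 6.0 * np.eye(2)
```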

# Physics Formulary
> [!TIP]
>
> You'll often see $\vec{v}$ and $\vec{\dot x}$, and $\vec{a}$ and
> $\vec{\ddot{x}}$.
>
> These are equal, but the latter forms better express the relation
> between state space variables
## Hooke's Law (AKA Spring Formula)
$\vec{F} = -k\vec{x}$
- $k$: Spring Constant
- $\vec{x}$: vector of spring stretch from rest position
## Fluid drag
$F_D = \frac{1}{2}\rho v^2 C_{D} A$
- $\rho$: density of the fluid
- $v$: speed of the object ***relative*** to the fluid
- $C_D$: drag coefficient
- $A$: cross section area
### Stokes Drag
$\vec{F_D} = -6 \pi R\mu \vec{v}$
- $\mu$: dynamic viscosity
- $R$: radius (in meters) of the sphere
- $\vec{v}$: flow velocity ***relative*** to the fluid
### Simplified Fluid Drag (Simplified Stokes Equation)
$\vec{F_D} = -b \vec{v}$
- $b$: simplified coefficient that has everything else
- $\vec{v}$: flow velocity ***relative*** to the fluid
## Newton force
$\vec{F} = m\vec{a}$
- $m$: mass of the object
- $\vec{a}$: acceleration
