# Solving Problems by Searching

Whenever we need to solve a problem, our `agent` will need to be able to foresee
outcomes in order to find a `sequence` of `actions` to get to the `goal`.

Here our `environment` will always be:

- Episodic
- Single-Agent
- Fully Observable
- Deterministic
- Static
- Discrete
- Known
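
Since the `environment` is `deterministic`, `discrete` and `known`, its whole transition
model can be written down up front. A minimal sketch of what that means (the states,
actions and names below are illustrative assumptions, not from these notes):

```python
# Toy deterministic, discrete, known transition model: every
# (state, action) pair maps to exactly one successor state, and the
# agent has the whole table before it ever acts.
TRANSITIONS = {
    ("A", "go-right"): "B",
    ("B", "go-right"): "C",
    ("B", "go-left"): "A",
}

def result(state: str, action: str) -> str:
    """One outcome per (state, action): no randomness, no surprises."""
    return TRANSITIONS[(state, action)]
```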

And our `agents` may be:

- Informed: when they know how far they are from the objective
- Uninformed: when they don't know how far they are from the objective
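
In practice the difference is whether the `agent` has a heuristic `h(n)` estimating
the remaining distance to the `goal`. A sketch assuming a 2D grid world with
Manhattan distance (both are hypothetical choices, purely for illustration):

```python
GOAL = (4, 4)

def h(state: tuple[int, int]) -> int:
    """Informed agent: an estimate of how far the goal still is."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

# An uninformed agent has no such estimate (effectively h(n) = 0
# everywhere), so its search is guided by path cost alone.
```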

## Problem-Solving Agent

This is an `agent` which has `atomic` representations of states.

### Problem-Solving Phases

1. Formulate the `Goal`
2. Formulate the problem with an adequate *abstraction*
3. Search for a solution ***before*** taking any action
4. Execute the plan

With these 4 phases the `agent` will either come to a `solution` or find
that there are ***none***.

Once it gets a `solution`, our `agent` will be able to ***blindly execute*** its
action plan, as the plan is ***fixed*** and thus the `agent` won't need to perceive
anything else.

> [!NOTE]
> This is also called **Open Loop** in Control Theory

> [!CAUTION]
> The 3rd and 4th steps can be done only in this `environment` because it is `fully-observable`, `deterministic` and `known`,
> so the `environment` can be predicted at each `step` of the **searching simulation**.
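
Putting the four phases together, the whole loop can be sketched as an open-loop
agent: search once, then replay the fixed plan without any further percepts. The
toy state graph and the choice of breadth-first search below are assumptions for
illustration, not a prescribed implementation:

```python
from collections import deque

# Hypothetical problem: states are letters; this known, deterministic,
# discrete transition table is the entire environment.
TRANSITIONS = {
    "A": {"right": "B"},
    "B": {"right": "C", "left": "A"},
    "C": {"right": "D"},
}

def search(start: str, goal: str) -> list[str] | None:
    """Phase 3: breadth-first search, run entirely before acting."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan  # a solution: a fixed sequence of actions
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # the agent has established there is no solution

# Phases 1 and 2: the goal and the abstraction are fixed up front.
plan = search("A", "D")

# Phase 4: open-loop execution, replaying the plan blindly.
if plan is not None:
    for action in plan:
        print("execute:", action)  # no perception needed between steps
```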

## Planning Agent

This is an `agent` which has `factored` or `structured` representations of states.

## Search Problem