Solving Problems by Searching
Warning
In this chapter we'll be talking about an environment with these properties:
- episodic
- single agent
- fully observable
- deterministic
- static
- discrete
- known
Problem-solving agent
It uses a search method to solve a problem. This agent is useful when it is not possible to define an immediate action to perform, and the agent instead needs to plan ahead.
However, this agent only uses Atomic States, which makes it faster but limits its power.
Algorithms
They can be of 2 types:
- Informed: The agent can estimate how far it is from the goal
- Uninformed: The agent can't estimate how far it is from the goal
The problem-solving process is usually composed of 4 phases:
- Goal Formulation: Choose a goal
- Problem Formulation: Create a representation of the world
- Search: Before taking any action, search a sequence that brings the agent to the goal
- Execution: Execute the solution (the sequence of actions to the goal) in the real world
Note
This is technically an open loop, as after the search phase the agent will ignore any further percepts
Problem
This is what we are trying to solve with our agent; it's a description of our environment, made of:
- Set of States (State Space)
- Initial State
- One or more Goal States
- Available Actions: def actions(state: State) -> list[Action]
- Transition Model: def result(state: State, action: Action) -> State
- Action Cost Function: def action_cost(current_state: State, action: Action, new_state: State) -> float
The sequence of actions that brings us to the goal is a solution; if it has the lowest cost, it is optimal.
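As a minimal sketch, these pieces can be bundled into a single Problem interface; the class below, and the use of plain strings for State and Action, are illustrative assumptions rather than something fixed by the chapter:

from dataclasses import dataclass
from typing import Callable

State = str   # assumption: any hashable state representation would do
Action = str  # assumption

@dataclass
class Problem:
    initial_state: State
    goal_states: set[State]
    actions: Callable[[State], list[Action]]               # available actions
    result: Callable[[State, Action], State]               # transition model
    action_cost: Callable[[State, Action, State], float]   # action cost function

    def is_goal(self, state: State) -> bool:
        return state in self.goal_states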
Tip
A State-Space can be represented as a graph
Search algorithms
We usually use a tree to represent our state space:
- Root node: Initial State
- Child nodes: Reachable States
- Frontier: Unexplored Nodes
class Node:
    def __init__(
        self,
        state: State,           # State represented
        parent: "Node | None",  # Node that generated this node
        action: Action | None,  # Action that generated this node (None for the root)
        path_cost: float,       # Total cost to reach this node
    ):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost
class Frontier:
    """
    The backing data structure determines the search strategy:
    - priority queue (heapq) -> Best-first search
    - FIFO queue -> Breadth-first search
    - LIFO stack -> Depth-first search
    """
    def is_empty(self) -> bool:
        """True if there are no nodes in the frontier"""
        pass

    def pop(self) -> Node:
        """Returns the top node and removes it"""
        pass

    def top(self) -> Node:
        """Returns the top node without removing it"""
        pass

    def add(self, node: Node):
        """Adds a node to the frontier"""
        pass
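As a concrete example, a priority-queue frontier backed by heapq might look like the sketch below; the eval_fn parameter and the tie-breaking counter are my own assumptions:

import heapq
import itertools
from typing import Callable

class PriorityFrontier(Frontier):
    """Pops the node with the lowest eval_fn value first."""
    def __init__(self, eval_fn: Callable[[Node], float]):
        self.eval_fn = eval_fn
        self.heap: list[tuple[float, int, Node]] = []
        self.counter = itertools.count()  # tie-breaker, so Nodes are never compared directly

    def is_empty(self) -> bool:
        return not self.heap

    def pop(self) -> Node:
        return heapq.heappop(self.heap)[2]

    def top(self) -> Node:
        return self.heap[0][2]

    def add(self, node: Node):
        heapq.heappush(self.heap, (self.eval_fn(node), next(self.counter), node))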
Loop Detection
There are 3 main techniques to avoid loopy paths:
- Remember all previously reached states and keep only the best path to each
  - Used in best-first search
  - Fast computation
  - Huge memory requirement
- Just forget about it (Tree-like search, because it does not check for redundant paths)
  - Many problems have few or no repeated states
  - No memory overhead
  - May potentially slow computation by a lot, or halt it completely
- Check for repeated states along the chain of parents (sketched below)
  - Slows computation based on how many links are inspected
  - Computation overhead
  - No memory impact
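A sketch of the parent-chain check; the function name is_cycle and the max_links cap are assumptions of mine:

def is_cycle(node: Node, max_links: int | None = None) -> bool:
    """True if node.state already appears among its ancestors.
    Inspecting at most max_links ancestors trades accuracy for speed."""
    ancestor, inspected = node.parent, 0
    while ancestor is not None and (max_links is None or inspected < max_links):
        if ancestor.state == node.state:
            return True
        ancestor, inspected = ancestor.parent, inspected + 1
    return False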
Performance metrics for search algorithms
- Completeness: Can the algorithm always find a solution if one exists, and correctly report failure otherwise?
- Cost optimality: Does the found solution have the lowest cost?
- Time complexity: O(n) notation for computation
- Space complexity: O(n) notation for memory
Note
We'll be using the following notation:
- b: branching factor (maximum number of successors of a node)
- d: depth of the solution
- m: maximum tree depth
- C: cost
- C^*: theoretical optimal cost
- \epsilon: minimum action cost
Uninformed Strategies
Best First Search
- Take the node with the least path cost and expand it
- Frontier must be a Priority Queue (or Heap Queue)
- Equivalent to a Breadth-First approach when each node has the same cost
- If \epsilon = 0 then the algorithm may stall, so give every action a minimal cost
It is:
- Complete
- Optimal
- O(b^{1 + \frac{C^*}{\epsilon}}) Time Complexity
- O(b^{1 + \frac{C^*}{\epsilon}}) Space Complexity
Note
If every action costs \epsilon, this is basically a breadth-first search that explores one more level of depth
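Putting the previous pieces together, here is a sketch of the full search loop; the expand helper is a hypothetical name, and the reached table implements the first loop-detection technique above:

def expand(problem: Problem, node: Node):
    """Yields the child nodes of node, using the transition model."""
    for action in problem.actions(node.state):
        new_state = problem.result(node.state, action)
        cost = node.path_cost + problem.action_cost(node.state, action, new_state)
        yield Node(new_state, node, action, cost)

def best_first_search(problem: Problem, eval_fn) -> Node | None:
    node = Node(problem.initial_state, None, None, 0.0)
    frontier = PriorityFrontier(eval_fn)
    frontier.add(node)
    reached = {problem.initial_state: node}  # best path found so far to each state
    while not frontier.is_empty():
        node = frontier.pop()
        if problem.is_goal(node.state):
            return node  # solution found
        for child in expand(problem, node):
            if child.state not in reached or child.path_cost < reached[child.state].path_cost:
                reached[child.state] = child
                frontier.add(child)
    return None  # no solution exists

The uninformed version described above is obtained with eval_fn=lambda n: n.path_cost, i.e. always expanding the least-cost node first.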
Breadth-First Search
- Take the node that entered the frontier first and expand it
- Frontier should be a Queue
- Equivalent to a Best-First Search with an evaluation function that takes depth as the cost
It is:
- Complete
- Optimal as long as the cost is uniform across all actions
- O(b^d) Time Complexity
- O(b^d) Space Complexity
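A sketch using a FIFO queue directly, reusing expand from above; note that the goal test happens at generation time, which is what allows stopping one level earlier than in the best-first formulation:

from collections import deque

def breadth_first_search(problem: Problem) -> Node | None:
    node = Node(problem.initial_state, None, None, 0.0)
    if problem.is_goal(node.state):
        return node
    frontier = deque([node])           # FIFO queue
    reached = {problem.initial_state}  # states only: all paths to a state cost the same here
    while frontier:
        node = frontier.popleft()
        for child in expand(problem, node):
            if problem.is_goal(child.state):
                return child           # early goal test, at generation time
            if child.state not in reached:
                reached.add(child.state)
                frontier.append(child)
    return None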
Depth-First Search
It is the best option whenever we need to save memory, implemented as a tree-like search, but it has many drawbacks
- Take the node that entered the frontier last and expand it
- Frontier should be a Stack
- Equivalent to a Best-First Search when the evaluation function has -depth as the cost
It is:
- Complete as long as the space is finite
- Non-Optimal
- O(b^m) Time Complexity
- O(bm) Space Complexity
Tip
It has 2 variants:
- Backtracking Search: Uses less memory, going from O(bm) to O(m) space complexity, at the expense of making the algorithm more complex
- Depth-Limited: We limit the depth at a certain value and don't expand nodes once we reach it. By increasing this limit iteratively, we get an Iterative Deepening Search (sketched below)
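A sketch of depth-limited and iterative deepening search, reusing expand and is_cycle from above; the 'cutoff' sentinel and the depth helper are assumptions of mine:

def depth(node: Node) -> int:
    """Number of links from node back to the root."""
    d = 0
    while node.parent is not None:
        node, d = node.parent, d + 1
    return d

def depth_limited_search(problem: Problem, limit: int):
    """Returns a goal Node, 'cutoff' if the limit was hit, or None if no solution exists."""
    frontier = [Node(problem.initial_state, None, None, 0.0)]  # LIFO stack
    result = None
    while frontier:
        node = frontier.pop()
        if problem.is_goal(node.state):
            return node
        if depth(node) >= limit:
            result = "cutoff"  # don't expand past the limit
        elif not is_cycle(node):
            frontier.extend(expand(problem, node))
    return result

def iterative_deepening_search(problem: Problem):
    limit = 0
    while True:
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result  # either a goal Node or None
        limit += 1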
Bidirectional Search
Instead of searching only from the Initial State, we may search from the Goal State as well, potentially reducing our complexity down to O(b^{\frac{d}{2}})
- We need two Frontiers
- Two tables of reached states
- Need a way to detect a collision (a state reached from both directions)
It is:
- Complete if both directions are breadth-first or uniform cost and space is finite
- Optimal (as for breadth-first)
- O(b^{\frac{d}{2}}) Time Complexity
- O(b^{\frac{d}{2}}) Space Complexity
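The collision check can be sketched as an intersection of the two reached tables; the helper below and its name are assumptions:

def find_collision(reached_fwd: dict[State, Node], reached_bwd: dict[State, Node]) -> State | None:
    """Returns the state reached from both directions with the lowest combined cost, if any."""
    common = reached_fwd.keys() & reached_bwd.keys()
    if not common:
        return None
    return min(common, key=lambda s: reached_fwd[s].path_cost + reached_bwd[s].path_cost)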
Informed Strategies
A strategy is informed if it uses info about the domain to make a decision.
Here we introduce a new function called the heuristic function h(n), which is the estimated cost of the cheapest path from the state at node n to a goal state
Greedy best-first search
- It is a best-first search with h(n) as the evaluation function
- In the best case, it expands nothing that is not on the solution path
- May yield a worse outcome than other strategies
- Can amortize complexity to O(bm) with a good heuristic
Caution
In other words, a greedy search takes into account only the heuristic distance from a state to the goal, not the whole path cost
Note
There exists a version called speedy search that uses the estimated number of actions to reach the goal as its heuristic, regardless of the actual cost
It is:
- Complete when the state space is finite
- Non-Optimal
- O(|V|) Time Complexity
- O(|V|) Space Complexity
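Given the best_first_search sketch above, greedy search is just a change of evaluation function; h is a problem-specific heuristic, assumed given:

def greedy_best_first_search(problem: Problem, h) -> Node | None:
    # Only the heuristic estimate counts; the path cost so far is ignored
    return best_first_search(problem, eval_fn=lambda n: h(n.state))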
A*
- Best-First search with the evaluation function equal to the path cost to node n plus the heuristic: f(n) = g(n) + h(n)
- Optimality depends on the admissibility of its heuristic (a function that never overestimates the cost to reach a goal)
It is:
- Complete
- Optimal as long as the heuristic is admissible (demonstration on page 106 of the book)
- O(b^d) Time Complexity
- O(b^d) Space Complexity
Tip
While A* is a great algorithm, we may sacrifice its optimality for a faster approach by introducing a weight over the heuristic function: f(n) = g(n) + W \cdot h(n). The higher W, the faster and less accurate A* will be
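Both plain and weighted A* are again a one-line change over the best_first_search sketch above (the wrapper name is an assumption):

def astar_search(problem: Problem, h, W: float = 1.0) -> Node | None:
    # f(n) = g(n) + W * h(n); W = 1 gives plain A*, W > 1 the weighted variant
    return best_first_search(problem, eval_fn=lambda n: n.path_cost + W * h(n.state))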
Note
There exists a version called Iterative-Deepening A* which cuts off at a certain value of f(n); if no goal is reached, it restarts, increasing the cutoff to the smallest f-value among the nodes that exceeded the previous one, practically searching over a contour
Search Contours
See pages 107 to 110 of the book
Memory bounded search
See pages 110 to 115 of the book
Reference Count and Separation
Discard nodes that have been visited by all of their neighbours
Beam Search
Discard all frontier nodes that are not among the k strongest, or that are more than \delta away from the strongest candidate
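The pruning step might be sketched as follows; the function name and the combination of the two criteria are my own assumptions:

def prune_beam(nodes: list[Node], eval_fn, k: int, delta: float) -> list[Node]:
    """Keeps only the k strongest nodes that are within delta of the strongest one."""
    ranked = sorted(nodes, key=eval_fn)  # lower eval = stronger candidate
    if not ranked:
        return []
    best = eval_fn(ranked[0])
    return [n for n in ranked[:k] if eval_fn(n) - best <= delta]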
RBFS
It's an algorithm similar to IDA*, but faster. It tries to mimic a best-first search in linear space, but it regenerates many nodes that were already explored. At its core, it keeps the second-best f-value in memory; each time it expands a node without reaching the goal, it overwrites the cost of that node with the value of its cheapest child (so that exploration can resume from there later), then moves on to the second-best value it had in memory, keeping the other result as the new second best.
MA* and SMA*
These are 2 algorithms that were created to make use of all the available memory, addressing the "too little memory" problem of both IDA* and RBFS
Heuristic Functions
See pages 115 to 122 of the book