# COMM7370 AI Theories and Applications

Lecture 3
Informed Search and Heuristics
Dr. Paolo Mengoni

Visiting Scholar @HKBU School of Communication
Agenda
▪Search strategies
▪Tree Search recap
▪Informed Search
▪Greedy Best First
▪A*
▪Heuristics
Tree Search
Tree-based Search
▪State: a possible configuration for the problem
▪State space: a set of the possible configurations
▪Basic idea:
▪Explore the state space by generating successors of already-explored states
▪Use a tree-like representation
▪Every state is evaluated: is it a goal state?
▪Example: the 8-puzzle game
▪Each state configuration becomes a tree node
State Spaces versus Search Trees
▪State Space
▪Set of valid states for a problem
▪e.g., 20 valid states (cities) in the Romanian travel problem
▪Search Tree
▪Root node = initial state
▪Child nodes = states that can be visited from parent
▪Note that the depth of the tree can be infinite
▪ E.g., via repeated states
▪Partial search tree
▪ Portion of tree that has been expanded so far
▪ Fringe
▪ Leaves of partial search tree, candidates for expansion
▪Search trees = a data structure for searching the state space
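The fringe idea above can be sketched in Python (a minimal illustration with invented helper names, not code from the course): the fringe holds the leaves of the partial search tree, each stored as the path from the root, and the order in which we pop nodes determines the search strategy.

```python
from collections import deque

def tree_search(initial_state, successors, is_goal, lifo=False):
    """Generic tree search: the fringe holds unexpanded leaves.

    Popping from the right (LIFO) gives depth-first behaviour;
    popping from the left (FIFO) gives breadth-first behaviour.
    Each fringe entry is the path from the root to a leaf.
    """
    fringe = deque([[initial_state]])
    while fringe:
        path = fringe.pop() if lifo else fringe.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            fringe.append(path + [child])
    return None  # fringe exhausted: no solution

# Toy example: search for 'G' in a small hand-made tree
tree = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
print(tree_search('A', lambda s: tree[s], lambda s: s == 'G'))
# → ['A', 'C', 'G']
```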
Search Tree for the 8-puzzle problem
Search
Depth-First Search (DFS)
▪Expand deepest unexpanded node
▪Fringe
▪ last-in-first-out (LIFO) queue
▪ E.g., an elevator with only one door
▪new successors go at the front of the queue
▪Repeated nodes?
▪Simple strategy:
▪To avoid loops do not add a state as a leaf if that state is
on the path from the root to the current node
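The loop-avoidance rule above can be sketched as follows (a minimal illustration, with invented helper names): a state is only added as a leaf if it does not already appear on the path from the root.

```python
def dfs(initial_state, successors, is_goal):
    """Depth-first tree search with a LIFO fringe (stack).

    New successors go on top of the stack, so the deepest
    unexpanded node is always popped first.  A state is not
    added as a leaf if it already lies on the path from the
    root, which avoids infinite loops on cyclic state spaces.
    """
    fringe = [[initial_state]]  # stack of root-to-leaf paths
    while fringe:
        path = fringe.pop()
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in path:  # skip states already on this path
                fringe.append(path + [child])
    return None

# Cyclic state space: A <-> B, B -> C; DFS still terminates
graph = {'A': ['B'], 'B': ['A', 'C'], 'C': []}
print(dfs('A', lambda s: graph[s], lambda s: s == 'C'))
# → ['A', 'B', 'C']
```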
Depth-limited search
▪To overcome DFS's problem with infinite (or very deep) paths,
set a depth limit and perform tree search
▪DLS = Depth Limited Search
▪ Set limit l
▪ Use DFS
▪When the limit l is reached, nodes are treated as having no successors
▪ IDS = Iterative Deepening Search
▪ Perform depth limited search
▪ Increase the depth limit until the solution is found
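The two definitions above can be sketched together in Python (an illustrative implementation, not from the course): DLS is a DFS that stops generating successors at the limit, and IDS calls it with l = 0, 1, 2, … until a solution appears.

```python
def depth_limited_search(state, successors, is_goal, limit, path=None):
    """DFS that treats nodes at depth `limit` as having no successors."""
    path = path or [state]
    if is_goal(state):
        return path
    if limit == 0:
        return None  # depth limit reached: do not expand further
    for child in successors(state):
        if child not in path:  # avoid loops along the current path
            result = depth_limited_search(child, successors, is_goal,
                                          limit - 1, path + [child])
            if result is not None:
                return result
    return None

def iterative_deepening_search(state, successors, is_goal, max_depth=50):
    """Run DLS with limit l = 0, 1, 2, ... until a solution is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(state, successors, is_goal, limit)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['G'], 'D': [], 'G': []}
print(iterative_deepening_search('A', lambda s: graph[s], lambda s: s == 'G'))
# → ['A', 'C', 'G']
```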
Iterative deepening search with l = 0, 1, 2, 3
Iterative deepening search
▪Number of nodes generated in a depth-limited search to
depth d with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
▪Number of nodes generated in an iterative deepening
search to depth d with branching factor b:
N_IDS = (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + 3·b^(d-2) + 2·b^(d-1) + 1·b^d
▪For b = 10, d = 5:
▪N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
▪N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
▪Overhead = (123,456 − 111,111)/111,111 ≈ 11%
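The node counts above follow from the fact that IDS re-generates a node at depth i on every pass from limit i to limit d, i.e. (d + 1 − i) times. A few lines of Python reproduce the arithmetic:

```python
b, d = 10, 5  # branching factor and solution depth from the slide

# Depth-limited search generates every node exactly once:
n_dls = sum(b**i for i in range(d + 1))

# Iterative deepening re-generates shallow nodes on every pass:
# a node at depth i is generated (d + 1 - i) times.
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))

overhead = (n_ids - n_dls) / n_dls
print(n_dls, n_ids, round(overhead * 100))  # → 111111 123456 11
```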
Properties of iterative deepening search
▪Complete?
▪Yes
▪Time?
▪ (d+1)·b^0 + d·b^1 + (d-1)·b^2 + … + b^d = O(b^d)
▪Space?
▪O(b·d)
▪Optimal?
▪Yes, if step cost = 1
Uninformed search strategies recap
▪Comparison of DFS, BFS, and IDS
▪DFS uses less space and may be faster on average
▪BFS is complete and optimal

| Measure          | DFS    | BFS    | IDS    |
|------------------|--------|--------|--------|
| Completeness     | No     | Yes    | Yes    |
| Time complexity  | O(b^m) | O(b^d) | O(b^d) |
| Space complexity | O(b·m) | O(b^d) | O(b·d) |
| Optimality       | No     | Yes    | Yes    |
Informed search
Informed Search
▪Motivations: Limitations of uninformed search
methods
▪Too many nodes to expand
▪Informed (or heuristic) search uses problem-
specific heuristics to improve efficiency
▪Best-first
▪A*
▪Can provide significant speed-ups in practice
▪e.g., on 8-puzzle
Best-first search
▪Idea: use an evaluation function (also called a
heuristic, h) that estimates each node's "desirability"
▪Expand the most desirable unexpanded node
▪Implementation:
▪Order the nodes in the fringe by their heuristic value
▪The most desirable nodes have the highest priority
▪Expands the node that appears to be closest to the goal
▪ Not always actually the closest
▪Special cases:
▪greedy best-first search
▪A* search
▪Note
▪Evaluation function is an estimate of node quality
▪More on heuristics later
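The priority-queue implementation above can be sketched with Python's heapq (an illustrative version, not course code); the road-map fragment and straight-line distances in the usage example follow the textbook's Romania problem.

```python
import heapq

def greedy_best_first(initial_state, successors, is_goal, h):
    """Best-first search: always expand the fringe node with the
    lowest heuristic value h(n), i.e. the one that *appears*
    closest to the goal (which is not always actually closest)."""
    counter = 0  # tie-breaker so heapq never compares paths
    fringe = [(h(initial_state), counter, [initial_state])]
    while fringe:
        _, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return path
        for child in successors(state):
            if child not in path:  # avoid loops along the current path
                counter += 1
                heapq.heappush(fringe, (h(child), counter, path + [child]))
    return None

# Fragment of the Romania map with straight-line distances to Bucharest
roads = {'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
         'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea'],
         'Fagaras': ['Sibiu', 'Bucharest'],
         'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
         'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
         'Timisoara': ['Arad'], 'Zerind': ['Arad'], 'Oradea': ['Sibiu'],
         'Bucharest': []}
sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
       'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Oradea': 380,
       'Bucharest': 0}
print(greedy_best_first('Arad', lambda c: roads[c],
                        lambda c: c == 'Bucharest', sld.get))
# → ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that the greedy path goes through Fagaras, which is not the cheapest route; the optimality comparison appears on the next slides.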
Example heuristic functions for 8-puzzle
▪8-puzzle
▪A good heuristic function can reduce the search process
▪Commonly used heuristics
▪Number of misplaced tiles
▪ Example: ℎ = 8
Example: Romania with step costs in km
▪Commonly used heuristics
▪Straight line distance to destination
▪ Example: ℎ = 366
Greedy best-first search example
Optimal Path

Optimal vs Greedy Best-First
▪h(Fagaras) = 176
▪h(Rimnicu Vilcea) = 193
Properties of greedy best-first search
▪Complete? No
▪can get stuck in loops
▪e.g., Iasi → Neamt → Iasi → Neamt → …
▪Time? O(b^m)
▪Worst case
▪A good heuristic can give dramatic improvement
▪Space? O(b^m)
▪Keeps all nodes in memory
▪Optimal? No
A* Search
▪Expand node based on estimate of total path cost
through node
▪Idea: avoid expanding paths that are already
expensive
▪Evaluation function f(n) = g(n) + h(n) considers
▪cost so far, g(n)
▪estimated cost to goal, h(n)
▪Efficiency of search will depend on quality of
heuristic
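A* differs from greedy best-first only in the priority used for the fringe: f(n) = g(n) + h(n) instead of h(n) alone. A sketch in Python (illustrative code, not from the course; the map fragment and distances follow the textbook's Romania problem):

```python
import heapq

def a_star(start, successors, is_goal, h):
    """A* search: expand the fringe node with the lowest
    f(n) = g(n) + h(n), where g(n) is the path cost so far and
    h(n) the heuristic estimate of the remaining cost."""
    counter = 0  # tie-breaker so heapq never compares paths
    fringe = [(h(start), 0, counter, [start])]  # (f, g, tie, path)
    while fringe:
        f, g, _, path = heapq.heappop(fringe)
        state = path[-1]
        if is_goal(state):
            return path, g
        for child, step_cost in successors(state):
            if child not in path:  # avoid loops along the current path
                counter += 1
                g2 = g + step_cost
                heapq.heappush(fringe, (g2 + h(child), g2, counter,
                                        path + [child]))
    return None

roads = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
         'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
                   ('Rimnicu Vilcea', 80)],
         'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
         'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
         'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
         'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
         'Oradea': [('Sibiu', 151)], 'Bucharest': []}
sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
       'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Oradea': 380,
       'Bucharest': 0}
print(a_star('Arad', lambda c: roads[c], lambda c: c == 'Bucharest', sld.get))
# → (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```

Unlike greedy best-first, A* finds the cheaper route through Rimnicu Vilcea and Pitesti (418 km) rather than the one through Fagaras (450 km).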
A* search example
Heuristics
▪A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n)
▪where h*(n) is the true cost to reach the goal state from n
▪An admissible heuristic never overestimates the
cost to reach the goal, i.e. it is optimistic
▪E.g.: in the map navigation problem, the straight-line
distance heuristic never overestimates the actual road distance
▪Theorem: If h(n) is admissible, A* is optimal
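Admissibility can be checked empirically on a small map (a sketch with invented helper names; the Romania fragment follows the textbook): compute the true cost h*(n) to the goal for every city with Dijkstra's algorithm from the goal, then verify h(n) ≤ h*(n) everywhere.

```python
import heapq

# Fragment of the Romania road map (undirected, costs in km) and the
# straight-line-distance heuristic, following the textbook's figures.
roads = {'Arad': [('Sibiu', 140), ('Timisoara', 118), ('Zerind', 75)],
         'Sibiu': [('Arad', 140), ('Fagaras', 99), ('Oradea', 151),
                   ('Rimnicu Vilcea', 80)],
         'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
         'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
         'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
         'Timisoara': [('Arad', 118)], 'Zerind': [('Arad', 75)],
         'Oradea': [('Sibiu', 151)],
         'Bucharest': [('Fagaras', 211), ('Pitesti', 101)]}
sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
       'Pitesti': 100, 'Timisoara': 329, 'Zerind': 374, 'Oradea': 380,
       'Bucharest': 0}

def true_cost_to_goal(roads, goal):
    """Dijkstra from the goal gives h*(n): the true cheapest cost
    from every city to the goal."""
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, city = heapq.heappop(pq)
        if d > dist.get(city, float('inf')):
            continue  # stale queue entry
        for nbr, cost in roads[city]:
            if d + cost < dist.get(nbr, float('inf')):
                dist[nbr] = d + cost
                heapq.heappush(pq, (d + cost, nbr))
    return dist

h_star = true_cost_to_goal(roads, 'Bucharest')
# Admissible: straight-line distance never exceeds true road distance.
print(all(sld[c] <= h_star[c] for c in sld))  # → True
```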
Optimality of A*
▪A* expands nodes in order of increasing f value
▪Contour i contains all nodes with f = f_i, where f_i < f_(i+1)
Properties of A*
▪Complete?
▪Yes (unless there are infinitely many nodes with f ≤ f(G) )
▪Time?
▪Exponential
▪Space?
▪Keeps all nodes in memory
▪Optimal?
▪Yes
▪E.g., for the 8-puzzle:
▪h1(n) = number of misplaced tiles
▪h2(n) = total Manhattan distance
▪ i.e., the number of squares each tile is from its desired location
▪h1(S) = ?
▪h2(S) = ?
▪E.g., for the 8-puzzle:
▪h1(n) = number of misplaced tiles
▪h2(n) = total Manhattan distance
▪ i.e., the number of squares each tile is from its desired location
▪h1(S) = 8
▪h2(S) = 3+1+2+2+2+3+3+2 = 18
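Both heuristics are easy to compute. A sketch in Python, assuming the slide's state S is the standard textbook start state shown below (0 marks the blank):

```python
# Start state S and goal, read row by row; with these, h1 = 8 and
# h2 = 3+1+2+2+2+3+3+2 = 18, matching the values on the slide.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

def h1(state, goal):
    """Number of misplaced tiles (the blank does not count)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance: for each tile, the number of
    squares between its current and its desired location."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue  # skip the blank
        j = goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h1(start, goal), h2(start, goal))  # → 8 18
```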
Dominance
▪ If h2(n) ≥ h1(n) for all n (both admissible)
▪ then h2 dominates h1
▪ i.e. h2 is better for search
▪Typical search costs (average number of nodes expanded):
▪d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
▪d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes
▪ IDS = Iterative Deepening Search
Relaxed problems
▪A problem with fewer restrictions on the actions is
called a relaxed problem
▪The cost of an optimal solution to a relaxed
problem is an admissible heuristic for the original
problem
▪If the rules of the 8-puzzle are relaxed so that a tile
can move anywhere, then h1(n) gives the shortest
solution
▪If the rules are relaxed so that a tile can move to
any adjacent square, then h2(n) gives the shortest
solution