Intelligent Agents: Design, Environments, and Knowledge Representation
ITEM 2: Understanding Intelligent Agents
What is an Agent?
An agent is anything that perceives its environment through sensors and acts on it through actuators. The agent function specifies the action the agent takes in response to any percept sequence. The agent's behavior is evaluated by a performance measure over the sequence of percepts and actions it produces over time.
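As a rough, hypothetical illustration of this idea (the names Agent and TableDrivenAgent, and the percept/action strings, are assumptions of the example, not from the source), an agent can be viewed as a mapping from percept sequences to actions:

```python
# Minimal sketch: an agent as a mapping from the percept sequence seen so far
# to an action.  Class names and percepts here are purely illustrative.

class Agent:
    def __init__(self):
        self.percepts = []          # percept sequence seen so far

    def program(self, percept_sequence):
        raise NotImplementedError   # the agent function, supplied by subclasses

    def act(self, percept):
        self.percepts.append(percept)
        return self.program(self.percepts)

class TableDrivenAgent(Agent):
    """Looks up the whole percept sequence in a table (impractical, but it defines the idea)."""
    def __init__(self, table):
        super().__init__()
        self.table = table

    def program(self, percept_sequence):
        return self.table.get(tuple(percept_sequence), "no-op")

if __name__ == "__main__":
    table = {("dirty",): "suck", ("dirty", "clean"): "move-right"}
    agent = TableDrivenAgent(table)
    print(agent.act("dirty"))       # -> suck
    print(agent.act("clean"))       # -> move-right
```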
Rational Agents
A rational agent acts so as to maximize its expected performance, given the percept sequence it has seen so far. The task environment, which comprises the performance measure, the external environment, the actuators, and the sensors, is crucial for agent design: specifying it is the first step in designing an agent.
Types of Task Environments
Task environments vary along several dimensions:
- Fully observable or partially observable
- Deterministic or stochastic
- Episodic or sequential
- Static or dynamic
- Discrete or continuous
- Single-agent or multi-agent
Agent Programs and Designs
An agent program implements the agent function. Different program designs reflect the kind of information made explicit and used in decision-making. These designs vary in efficiency, robustness, and flexibility, and the most appropriate design depends on the nature of the environment.
Types of Agent Programs
- Simple reflex agents: React directly to the current percept.
- Model-based reflex agents: Maintain an internal state to track aspects of the world that are not evident in the current percept (see the sketch below).
- Goal-based agents: Act to achieve explicit goals.
- Utility-based agents: Act to maximize their expected "happiness", measured by a utility function.
All agents can improve their performance through learning mechanisms.
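To make the first two designs concrete, here is a minimal sketch in a toy vacuum world; the rule set and the state-update logic are illustrative assumptions, not a prescribed design:

```python
# Illustrative sketch of two agent-program designs in a toy two-square vacuum world.
# The percepts, rules, and state-update logic are assumptions made for this example.

def simple_reflex_agent(percept):
    """Chooses an action from the current percept only."""
    location, status = percept
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

class ModelBasedReflexAgent:
    """Keeps an internal model of which squares are known to be clean."""
    def __init__(self):
        self.model = {}                       # location -> last known status

    def __call__(self, percept):
        location, status = percept
        self.model[location] = status         # update the internal state
        if status == "dirty":
            return "suck"
        if len(self.model) == 2 and all(s == "clean" for s in self.model.values()):
            return "no-op"                    # model says everything is clean
        return "right" if location == "A" else "left"

if __name__ == "__main__":
    agent = ModelBasedReflexAgent()
    for percept in [("A", "dirty"), ("A", "clean"), ("B", "clean")]:
        print(percept, "->", agent(percept))
```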
ITEM 3: Problem Formulation and Search Algorithms
Defining the Problem
Before searching for solutions, an agent must formulate a problem based on its goal. A problem consists of:
- Initial state
- Actions
- Goal test function
- Path cost function
The problem is represented by a state space. A path in the state space from the initial state to a goal state is a solution.
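The four components above map directly onto a small data structure; the following sketch uses illustrative names and a unit-cost default, and is only one possible way to encode a problem:

```python
# Sketch of a search problem as a data structure; names are illustrative.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def actions(self, state):
        """Actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """State reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal_state

    def step_cost(self, state, action, next_state):
        return 1    # default: unit cost; the path cost is the sum of step costs
```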
Search Algorithms
The general TREE-SEARCH algorithm provides a framework for solving any search problem; specific variants differ in the strategy used to choose which node to expand next (a minimal skeleton is sketched after the list below). Search algorithms are evaluated based on:
- Completeness: Finding a solution if one exists.
- Optimality: Finding the best solution.
- Time complexity
- Space complexity
Complexity is expressed in terms of b (the branching factor) and d (the depth of the shallowest solution).
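A minimal skeleton of TREE-SEARCH, building on the illustrative Problem interface sketched earlier, might look as follows; the ordering of the frontier determines the strategy:

```python
from collections import deque

# Generic TREE-SEARCH skeleton (sketch).  The frontier's ordering determines the
# strategy: a FIFO queue gives breadth-first order, a LIFO stack depth-first,
# and a priority queue uniform-cost or A*.  No repeated-state detection is done,
# so this version may loop forever on state spaces with cycles.

def tree_search(problem):
    frontier = deque([(problem.initial_state, [])])   # entries are (state, path of actions)
    while frontier:
        state, path = frontier.popleft()              # FIFO: breadth-first order
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            frontier.append((child, path + [action]))
    return None                                       # frontier exhausted: no solution
```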
Types of Search Algorithms
- Breadth-first search: Expands the shallowest unexpanded node. Complete and optimal for unit step costs; time and space complexity O(b^d).
- Uniform-cost search: Like breadth-first search, but expands the node with the lowest path cost g(n). Complete and optimal if every step cost exceeds some positive constant.
- Depth-first search: Expands the deepest unexpanded node. Neither complete nor optimal; time complexity O(b^m) and space complexity O(bm), where m is the maximum depth.
- Depth-limited search: Depth-first search with a fixed depth limit.
- Iterative deepening search: Repeatedly runs depth-limited search with increasing limits (see the sketch after this list). Complete and optimal for unit step costs; time complexity O(b^d) and space complexity O(bd).
- Bidirectional search: Searches forward from the initial state and backward from the goal. Can greatly reduce time complexity, but requires more space and is not always applicable.
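As an illustration of the last two uninformed strategies, here is a sketch of depth-limited and iterative deepening search over the Problem interface assumed earlier (the "cutoff" sentinel and recursion structure are choices made for this example):

```python
import itertools

# Sketch of depth-limited search and iterative deepening search.
# Returns a list of actions, None (definite failure), or "cutoff".

def depth_limited_search(problem, limit):
    def recurse(state, path, depth):
        if problem.goal_test(state):
            return path
        if depth == limit:
            return "cutoff"                     # hit the depth limit
        cutoff_occurred = False
        for action in problem.actions(state):
            result = recurse(problem.result(state, action), path + [action], depth + 1)
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result
        return "cutoff" if cutoff_occurred else None

    return recurse(problem.initial_state, [], 0)

def iterative_deepening_search(problem):
    for limit in itertools.count():             # limits 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result                       # either a solution or a definite failure
```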
Handling Repeated States and Partial Observability
The GRAPH-SEARCH algorithm avoids re-expanding repeated states by keeping a set of already-explored states. In partially observable environments, agents can search in the space of belief states (sets of possible physical states). Sometimes a simple action sequence suffices; at other times a contingency plan is needed.
ITEM 4: Informed Search and Heuristics
Best-First Search
Best-first search is an instance of GRAPH-SEARCH in which the node with the lowest estimated cost, according to an evaluation function, is expanded first. The heuristic function h(n) estimates the cost of the cheapest path from node n to a goal.
Types of Best-First Search
- Greedy best-first search: Expands the node with the lowest h(n). Not optimal, but often efficient.
- A* search: Expands the node with the lowest f(n) = g(n) + h(n), where g(n) is the path cost from the start to n (a sketch follows this list). Complete and optimal if h(n) is admissible (with TREE-SEARCH) or consistent (with GRAPH-SEARCH).
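Here is a sketch of A* over the Problem interface assumed earlier; the caller supplies heuristic(state), states are assumed hashable, and the stale-entry handling is one of several reasonable implementation choices:

```python
import heapq
import itertools

# Sketch of A* graph search.  `heuristic(state)` should be admissible (or
# consistent) for the optimality guarantee to hold; states must be hashable.

def a_star_search(problem, heuristic):
    start = problem.initial_state
    counter = itertools.count()          # tie-breaker so states are never compared directly
    frontier = [(heuristic(start), 0, next(counter), start, [])]   # (f, g, tie, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, _, state, path = heapq.heappop(frontier)
        if problem.goal_test(state):
            return path
        if g > best_g.get(state, float("inf")):
            continue                     # stale entry: a cheaper path to `state` was found
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + heuristic(child), g2, next(counter), child, path + [action]))
    return None
```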
Heuristic Function Quality
The performance of heuristic search depends on the quality of the heuristic function. Good heuristics can be derived by:
- Relaxing the problem definition (see the example after this list).
- Precomputing solution costs for sub-problems.
- Using a pattern database.
- Learning from experience.
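For example, two classic admissible heuristics for the 8-puzzle arise from relaxations of the problem; in the sketch below, the tuple-of-tuples board encoding and goal configuration are assumptions of the example:

```python
# Two relaxed-problem heuristics for the 8-puzzle (0 denotes the blank square).
# The board is assumed to be a 3x3 tuple of tuples, with the goal shown below.

GOAL = ((0, 1, 2),
        (3, 4, 5),
        (6, 7, 8))

def misplaced_tiles(board):
    """Relaxation: any tile may jump straight to its goal square."""
    return sum(1 for r in range(3) for c in range(3)
               if board[r][c] != 0 and board[r][c] != GOAL[r][c])

def manhattan_distance(board):
    """Relaxation: tiles may slide over occupied squares."""
    total = 0
    for r in range(3):
        for c in range(3):
            tile = board[r][c]
            if tile != 0:
                goal_r, goal_c = divmod(tile, 3)
                total += abs(r - goal_r) + abs(c - goal_c)
    return total
```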
Memory-Bounded Search
RBFS (recursive best-first search) and SMA* (simplified memory-bounded A*) are robust, optimal search algorithms that use limited memory; they can solve problems that A* cannot because of its memory requirements.
Local Search Methods
Local search methods such as hill climbing operate on complete-state formulations and keep only a small number of nodes in memory. Stochastic algorithms such as simulated annealing can return a global optimum when given a sufficiently slow cooling schedule. Local search methods can also be applied to problems in continuous spaces.
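A generic simulated-annealing sketch follows; the neighbour function, the energy function, and the exponential cooling schedule are all assumptions supplied by the caller of this example:

```python
import math
import random

# Generic simulated-annealing sketch: `energy` is minimized, `neighbour`
# proposes a random successor, and the exponential cooling schedule
# (t <- alpha * t) is an assumption of this example.

def simulated_annealing(initial, energy, neighbour,
                        t_start=1.0, t_end=1e-3, alpha=0.995):
    current, current_e = initial, energy(initial)
    best, best_e = current, current_e
    t = t_start
    while t > t_end:
        candidate = neighbour(current)
        delta = energy(candidate) - current_e
        # Always accept improvements; accept worse moves with probability e^(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_e = candidate, current_e + delta
        if current_e < best_e:
            best, best_e = current, current_e
        t *= alpha
    return best
```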
Genetic Algorithms
A genetic algorithm is a stochastic hill-climbing search that maintains a population of states. New states are generated by mutation and crossover.
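A minimal genetic-algorithm sketch is shown below; the bit-string encoding, the fitness function, and the population, mutation, and crossover settings are illustrative assumptions:

```python
import random

# Minimal genetic-algorithm sketch over fixed-length bit strings, using
# fitness-proportional selection, single-point crossover, and bit-flip mutation.

def genetic_algorithm(fitness, length=20, pop_size=50,
                      generations=200, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness-proportional selection (small epsilon avoids all-zero weights).
        weights = [fitness(ind) + 1e-9 for ind in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        next_gen = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[(i + 1) % pop_size]
            point = random.randrange(1, length)            # single-point crossover
            for child in (a[:point] + b[point:], b[:point] + a[point:]):
                for j in range(length):                    # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[j] ^= 1
                next_gen.append(child)
        population = next_gen[:pop_size]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(genetic_algorithm(fitness=sum))                  # evolves toward all-ones strings
```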
Exploration Problems
Exploration problems arise when the agent has no prior knowledge of its environment. In safely explorable environments, online search agents can build a map and find a goal if one exists. Heuristic estimates updated through experience help the agent escape local minima.
ITEM 6: Knowledge Representation and Reasoning
Knowledge-Based Agents
Intelligent agents need knowledge to make good decisions. They store knowledge as sentences in a knowledge representation language within a knowledge base. A knowledge-based agent consists of a knowledge base and an inference mechanism. It stores facts about the world in its knowledge base, uses inference to derive new sentences, and uses these sentences to decide on actions.
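The generic knowledge-based agent loop can be sketched as follows; TELL and ASK are the standard knowledge-base operations, while the sentence-building helpers and their tuple encodings are hypothetical placeholders for this example:

```python
# Sketch of the generic knowledge-based agent loop.  `kb` is assumed to expose
# tell(sentence) and ask(query); the sentence encodings below are placeholders.

class KnowledgeBasedAgent:
    def __init__(self, kb):
        self.kb = kb
        self.t = 0                                  # time step counter

    def make_percept_sentence(self, percept):
        return ("percept", percept, self.t)         # placeholder encoding

    def make_action_query(self):
        return ("best-action", self.t)              # placeholder encoding

    def make_action_sentence(self, action):
        return ("did", action, self.t)              # placeholder encoding

    def step(self, percept):
        self.kb.tell(self.make_percept_sentence(percept))    # record what was perceived
        action = self.kb.ask(self.make_action_query())        # infer an action
        self.kb.tell(self.make_action_sentence(action))       # record what was done
        self.t += 1
        return action
```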
Knowledge Representation Languages
A knowledge representation language is defined by its syntax (structure of sentences) and semantics (truth value of sentences in different possible worlds or models).
Entailment and Inference
Entailment is crucial for reasoning. A sentence A entails a sentence B if B is true in all worlds where A is true. Equivalent definitions are the validity of A ⇒ B and the unsatisfiability of A ∧ ¬B.
Inference is the process of deriving new sentences from old ones. Sound inference algorithms derive only entailed sentences; complete algorithms derive all entailed sentences.
Propositional Logic
Propositional logic is a simple language with propositional symbols and logical connectives. It can handle propositions that are true, false, or unknown.
Inference in Propositional Logic
When the set of possible models is finite, entailment can be checked by enumerating the models. Efficient model-checking algorithms based on backtracking or local search can solve many non-trivial problems quickly.
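Here is a model-enumeration sketch for a tiny propositional language; the nested-tuple sentence representation and the connective names are assumptions of this example:

```python
from itertools import product

# Truth-table entailment check for a tiny propositional language (sketch).
# A sentence is a symbol name (str), or a tuple: ("not", s), ("and", a, b),
# ("or", a, b), or ("implies", a, b).

def evaluate(sentence, model):
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == "not":
        return not evaluate(args[0], model)
    if op == "and":
        return evaluate(args[0], model) and evaluate(args[1], model)
    if op == "or":
        return evaluate(args[0], model) or evaluate(args[1], model)
    if op == "implies":
        return (not evaluate(args[0], model)) or evaluate(args[1], model)
    raise ValueError(op)

def symbols(sentence, acc=None):
    acc = set() if acc is None else acc
    if isinstance(sentence, str):
        acc.add(sentence)
    else:
        for arg in sentence[1:]:
            symbols(arg, acc)
    return acc

def entails(kb, query):
    """KB entails query iff query is true in every model in which KB is true."""
    syms = sorted(symbols(kb) | symbols(query))
    for values in product([True, False], repeat=len(syms)):
        model = dict(zip(syms, values))
        if evaluate(kb, model) and not evaluate(query, model):
            return False
    return True

if __name__ == "__main__":
    kb = ("and", ("implies", "p", "q"), "p")
    print(entails(kb, "q"))          # True: modus ponens
```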
Inference rules are sound patterns of inference used for finding proofs. The resolution rule provides a complete inference algorithm for knowledge bases in conjunctive normal form.
Forward chaining and backward chaining are natural inference algorithms for knowledge bases expressed as Horn clauses (a forward-chaining sketch follows).
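The sketch below shows forward chaining for definite (Horn) clauses; representing each rule as a (premises, conclusion) pair over propositional symbols is an assumption of this example:

```python
# Forward-chaining sketch for definite (Horn) clauses.  Each rule is a
# (premises, conclusion) pair over propositional symbols.

def forward_chaining(rules, facts, query):
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in inferred and all(p in inferred for p in premises):
                inferred.add(conclusion)          # fire the rule
                changed = True
                if conclusion == query:
                    return True
    return query in inferred

if __name__ == "__main__":
    rules = [({"p", "q"}, "r"), ({"r"}, "s")]
    print(forward_chaining(rules, facts={"p", "q"}, query="s"))   # True
```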
Agents Using Propositional Logic
- Inference-based agents: Use inference to track the world and deduce hidden properties.
- Circuit-based agents: Represent propositions as bits in registers, updating them using signal propagation in logic circuits.
Limitations of Propositional Logic
While effective for some tasks, propositional logic lacks the expressive power to handle time, space, or generic relationships between objects, limiting its scalability for complex environments.