Introduction to Artificial Intelligence: Logic, Search, and Environments

Propositional Logic in Artificial Intelligence

Propositional logic (PL) is the simplest form of logic, in which all statements are expressed as propositions. A proposition is a declarative statement that is either true or false. Propositional logic is a technique for representing knowledge in logical and mathematical form.

Example:

  • It is Sunday.
  • The Sun rises in the West. (False proposition)
  • 3 + 3 = 7. (False proposition)
  • 5 is a prime number.

Basic Facts About Propositional Logic:

  • Propositional logic is also called Boolean logic as it works on 0 and 1.
  • In propositional logic, we use symbolic variables to represent the logic, and we can use any symbol for representing a proposition, such as A, B, C, P, Q, R, etc.
  • Propositions can be either true or false, but they cannot be both.
  • Propositional logic consists of proposition symbols and logical connectives (objects, relations, and functions belong to first-order logic, not to propositional logic).
  • These connectives are also called logical operators.

Types of Propositions:

  • Atomic Propositions: Atomic propositions are the simple propositions. They consist of a single proposition symbol. These are the sentences which must be either true or false.
  • Compound Propositions: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.

Example:

  • “2 + 2 = 4” is an atomic proposition, and it is a true fact.
  • “The Sun is cold” is also an atomic proposition, and it is a false fact.
  • “It is raining today, and the street is wet” is a compound proposition.
  • “Ankit is a doctor, and his clinic is in Mumbai” is a compound proposition.

Logical Connectives

Logical connectives are used to connect simpler propositions or to represent a sentence logically. We can create compound propositions with the help of logical connectives. There are mainly five connectives:

Types of Logical Connectives:

  • Negation: A sentence such as ¬ P is called the negation of P. A literal can be either a positive literal or a negative literal.
  • Conjunction: A sentence formed with the ∧ (and) connective, such as P ∧ Q, is called a conjunction.
    Example: Rohan is intelligent and hardworking. It can be written as
    P = Rohan is intelligent,
    Q = Rohan is hardworking. → P ∧ Q.
  • Disjunction: A sentence formed with the ∨ (or) connective, such as P ∨ Q, is called a disjunction, where P and Q are the propositions.
    Example: “Ritika is a doctor or an engineer”,
    Here P = Ritika is a doctor, Q = Ritika is an engineer, so we can write it as P ∨ Q.
  • Implication: A sentence such as P ⇒ Q is called an implication, read as “if P, then Q”.
    Example: “If it is raining, then the street is wet” can be written as P ⇒ Q.
  • Biconditional: A sentence such as P ⇔ Q is called a biconditional, read as “P if and only if Q”.
    Example: “I am breathing if and only if I am alive” can be written as P ⇔ Q.
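
To see how all five connectives behave, the short Python sketch below (an illustrative aside, not part of the original notes) prints their truth table:

    from itertools import product

    # Truth table for the five basic connectives over propositions P and Q.
    print("P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q")
    for p, q in product([True, False], repeat=2):
        row = [p, q, not p, p and q, p or q, (not p) or q, p == q]
        print("  ".join(f"{str(v):<5}" for v in row))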

The A* Search Algorithm

Algorithm A* (Hart et al., 1968):

  • f(n) = g(n) + h(n)
  • h(n) = cost of the cheapest path from node n to a goal state.
  • g(n) = cost of the cheapest path from the initial state to node n.
  • f*(n) = g*(n) + h*(n)
  • h*(n) (heuristic factor) = estimate of h(n).
  • g*(n) (depth factor) = approximation of g(n) found by A* so far.
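
For example (the numbers are illustrative, not from the lecture): if the cheapest known path from the initial state to a node n costs g*(n) = 3 and the heuristic estimates the remaining cost to the goal as h*(n) = 4, then A* evaluates the node as f*(n) = 3 + 4 = 7 and prefers it over any node on OPEN with a larger f* value.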

Algorithm: A*

  1. Start with OPEN containing only the initial node. Set that node’s g value to 0, its h’ value to whatever it is, and its f’ value to h’. Set CLOSED to the empty list.
  2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise, pick the node on OPEN with the lowest f’ value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. Check if BESTNODE is a goal node. If so, exit. Otherwise, generate the successors of BESTNODE but do not set BESTNODE to point to them yet. For each such SUCCESSOR, do the following:
  a) Set SUCCESSOR to point back to BESTNODE.
  b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
  c) Check if SUCCESSOR is the same as any node on OPEN. If so, call that node OLD.
  d) If SUCCESSOR was not on OPEN, check if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE’s successors.
  e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE’s successors. Compute f′(SUCCESSOR) = g(SUCCESSOR) + h′(SUCCESSOR).

Consider a class example for this algorithm…
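
As a concrete illustration, here is a minimal Python sketch of the algorithm above, using a priority queue for OPEN. The graph, edge costs, and heuristic values are hypothetical examples, not taken from the lecture.

    import heapq

    def a_star(graph, heuristic, start, goal):
        # graph maps node -> {successor: step cost}; heuristic maps node -> h' estimate.
        open_heap = [(heuristic[start], start)]   # OPEN, ordered by lowest f'
        g = {start: 0}                            # cheapest known g value per node
        parent = {start: None}                    # back-pointers to BESTNODE
        closed = set()                            # CLOSED list
        while open_heap:
            f, best = heapq.heappop(open_heap)    # BESTNODE: lowest f' on OPEN
            if best == goal:                      # goal reached: rebuild the path
                path = []
                while best is not None:
                    path.append(best)
                    best = parent[best]
                return list(reversed(path)), g[goal]
            if best in closed:
                continue
            closed.add(best)
            for succ, cost in graph[best].items():            # generate successors
                new_g = g[best] + cost                        # g(SUCCESSOR)
                if succ not in g or new_g < g[succ]:          # a better path was found
                    g[succ] = new_g
                    parent[succ] = best                       # point back to BESTNODE
                    heapq.heappush(open_heap, (new_g + heuristic[succ], succ))
        return None, float("inf")                 # OPEN exhausted: report failure

    # Hypothetical example graph and heuristic values.
    graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 5}, "B": {"G": 1}, "G": {}}
    heuristic = {"S": 4, "A": 3, "B": 1, "G": 0}
    print(a_star(graph, heuristic, "S", "G"))     # expected: (['S', 'A', 'B', 'G'], 4)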

First-Order Logic

First-order logic is another way of knowledge representation in artificial intelligence. It is an extension to propositional logic.

FOL is sufficiently expressive to represent natural language statements in a concise way.

First-order logic is also known as Predicate logic or First-order predicate logic. It is a powerful language that expresses information about objects in a more natural way and can also express the relationships between those objects.

First-order logic (like natural language) does not only assume that the world contains facts, as propositional logic does, but also assumes that the world contains the following:

  • Objects: A, B, people, numbers, colors, wars, theories, squares, pits, wumpus, ……
  • Relations: These can be unary relations such as red, round, is adjacent, or n-ary relations such as the sister of, brother of, has color, comes between.
  • Function: Father of, best friend, third inning of, end of.

As a natural language, first-order logic also has two main parts:

  • Syntax
  • Semantics

Syntax of First-Order Logic

The syntax of FOL determines which collection of symbols is a logical expression in first-order logic. The basic syntactic elements of first-order logic are symbols. We write statements in short-hand notation in FOL.

Basic Elements of First-order logic:

  • Constant: 1, 2, A, John, Mumbai, cat, ….
  • Variables: x, y, z, a, b, ….
  • Predicates: Brother, Father, >, ….
  • Function: sqrt, LeftLegOf, ….
  • Connectives: ∧, ∨, ¬, ⇒, ⇔
  • Equality: =
  • Quantifiers: ∀, ∃
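
For example (using hypothetical predicate names), the sentence “Every student loves mathematics” can be written as ∀x Student(x) ⇒ Loves(x, Mathematics), and “Some student is a topper” can be written as ∃x Student(x) ∧ Topper(x).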

A Short Note on Supervised Machine Learning

Supervised learning is a type of machine learning in which machines are trained using well-labelled training data, and based on that data, machines predict the output. Labelled data means that the input data is already tagged with the correct output.

In supervised learning, the training data provided to the machines works as the supervisor that teaches the machines to predict the output correctly. It applies the same concept as a student learns under the supervision of a teacher.

Supervised learning is a process of providing input data as well as correct output data to the machine learning model. The aim of a supervised learning algorithm is to find a mapping function to map the input variable(x) with the output variable(y).

In the real-world, supervised learning can be used for Risk Assessment, Image classification, Fraud Detection, spam filtering, etc.

How Supervised Learning Works

In supervised learning, models are trained using a labelled dataset, where the model learns about each type of data. Once the training process is completed, the model is tested on test data (a portion of the dataset held out from training), and then it predicts the output.


Suppose we have a dataset of different types of shapes which includes square, rectangle, triangle, and polygon. Now the first step is that we need to train the model for each shape.

  1. If the given shape has four sides, and all the sides are equal, then it will be labelled as a Square.
  2. If the given shape has three sides, then it will be labelled as a triangle.
  3. If the given shape has six equal sides, then it will be labelled as a hexagon (a short code sketch of these rules follows this list).
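
A minimal Python sketch of these labelling rules; the helper name and fallback label are illustrative assumptions, not part of the original notes:

    def classify_shape(num_sides, all_sides_equal):
        # Label a shape from the simple rules listed above (illustrative only).
        if num_sides == 4 and all_sides_equal:
            return "Square"
        if num_sides == 3:
            return "Triangle"
        if num_sides == 6 and all_sides_equal:
            return "Hexagon"
        return "Polygon"   # fallback label for any other shape

    print(classify_shape(4, True))    # Square
    print(classify_shape(3, False))   # Triangle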

Steps Involved in Supervised Learning

  1. Determine the type of training dataset.
  2. Collect/Gather the labelled training data.
  3. Split the dataset into training, test, and validation subsets.
  4. Determine the input features of the training dataset, which should have enough knowledge so that the model can accurately predict the output.
  5. Determine the suitable algorithm for the model, such as support vector machine, decision tree, etc.
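
A minimal sketch of these steps, assuming scikit-learn is available; the bundled Iris dataset and the decision-tree choice are illustrative assumptions, not part of the notes:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Steps 1-2: gather a labelled dataset (here, the bundled Iris data).
    X, y = load_iris(return_X_y=True)

    # Step 3: split into training and test sets (a validation split could be carved out similarly).
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # Steps 4-5: choose the input features and a suitable algorithm (a decision tree here).
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Evaluate the learned mapping from input x to output y on unseen data.
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))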

Breadth-First Search Algorithm

Algorithm: Breadth-First-Search

  1. Create a list called NODE-LIST and set it to contain just the initial state.
  2. Until a goal state is found or NODE-LIST is empty:

a) If NODE-LIST is empty, quit and report failure. Otherwise, remove the first element from NODE-LIST and call it E.

b) For each way that each rule can match the state described in E do:

I) Apply the rule to generate a new state,

II) If the new state is a goal state, quit and return this state.

III) Otherwise, add the new state to the end of NODE-LIST

Consider a class example for this algorithm.
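
A minimal Python sketch of breadth-first search, treating NODE-LIST as a FIFO queue; the example graph and goal test are hypothetical, not from the lecture.

    from collections import deque

    def breadth_first_search(successors, initial_state, is_goal):
        # successors(state) returns the states produced by applying every matching rule.
        node_list = deque([initial_state])        # NODE-LIST, used as a FIFO queue
        visited = {initial_state}                 # avoid re-adding repeated states
        while node_list:                          # quit when NODE-LIST is empty
            e = node_list.popleft()               # remove the first element, call it E
            for new_state in successors(e):       # apply each rule to E
                if is_goal(new_state):
                    return new_state              # goal state found
                if new_state not in visited:
                    visited.add(new_state)
                    node_list.append(new_state)   # add to the end of NODE-LIST
        return None                               # failure

    # Hypothetical example: search for state 6 in a small implicit graph.
    edges = {1: [2, 3], 2: [4], 3: [5], 4: [6], 5: [], 6: []}
    print(breadth_first_search(lambda s: edges.get(s, []), 1, lambda s: s == 6))   # 6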

The comparison below summarizes both strategies, where b is the branching factor, d is the depth of the shallowest goal, and m is the maximum depth of the search tree.

Criterion      Breadth-First    Depth-First
Time           O(b^d)           O(b^m)
Space          O(b^d)           O(bm)
Optimal?       Yes              No
Complete?      Yes              No

Advantages of Breadth-First-Search

  • BFS will not get trapped exploring a blind alley.
  • If there is a solution, then BFS is guaranteed to find it.

Depth-First Search Algorithm

Algorithm: Depth-First-Search

  1. If the initial state is a goal state, quit and return success.
  2. Otherwise, do the following until success or failure is signaled:

a) Generate a successor, E, of the initial state. If there are no more successors, signal failure.

b) Call DFS with E as the initial state.

c) If success is returned, signal success. Otherwise, continue in this loop.
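
A minimal recursive Python sketch that mirrors these steps; the depth limit and the example graph are illustrative assumptions, not part of the algorithm as stated.

    def depth_first_search(successors, state, is_goal, depth_limit=50):
        # Step 1: if the current state is a goal state, return success.
        if is_goal(state):
            return state
        if depth_limit == 0:
            return None                          # treat the cut-off as failure
        for e in successors(state):              # step 2a: generate a successor E
            result = depth_first_search(successors, e, is_goal, depth_limit - 1)   # step 2b
            if result is not None:               # step 2c: success was returned
                return result
        return None                              # no more successors: signal failure

    # Hypothetical example reusing a small implicit graph.
    edges = {1: [2, 3], 2: [4], 3: [5], 4: [6], 5: [], 6: []}
    print(depth_first_search(lambda s: edges.get(s, []), 1, lambda s: s == 6))   # 6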

Advantages of DFS

  • DFS requires less memory since only the nodes on the current path are stored.
  • DFS can find a solution without examining much of the search space at all.

Consider a class example for this algorithm.

Informed Search (Heuristic Search)

  • Informed search algorithms use additional information (a heuristic) about the goal state.
  • They are generally more efficient than uninformed search.

Best-First Search

From depth-first search, it takes the advantage that not all competing branches have to be expanded.

From breadth-first search, it takes the advantage of not getting trapped on dead-end paths.

The two are combined by following a single path at a time, but switching paths whenever some competing path looks more promising than the current one.

The figure shows the beginning of a best-first search procedure. Initially, there is only one node, so it is expanded. In doing so, it generates three new nodes. The heuristic function, which in this example is an estimate of the cost of getting to a solution from a given node, is applied to each of these new nodes. Since node D is the most promising, it is expanded next, producing two successor nodes, E and F. The heuristic function is then applied to them. Now another path, the one going through node B, looks more promising, so it is pursued, generating nodes G and H. But when these new nodes are evaluated, they look less promising than another path, so attention returns to the path through D to E. E is then expanded, yielding nodes I and J. At the next step, J will be expanded, since it is the most promising. This process continues until a solution is found, at which point the search stops.

To implement such a graph-search procedure, we will need to use two lists of nodes:

  • OPEN: nodes that have been generated, but have not been examined.
    This is organized as a priority queue.
  • CLOSED: nodes that have already been examined.

Algorithm

  1. Start with OPEN containing just the initial state.
  2. Loop until a goal is found or there are no nodes left in OPEN:
  • Pick the best node in OPEN
  • Generate its successors
  • For each successor:
    • If new → evaluate it, add it to OPEN, record its parent
    • If generated before → change parent, update successors
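
A minimal Python sketch of this loop, using a heap as the OPEN priority queue; the heuristic function and the example below are illustrative assumptions.

    import heapq

    def best_first_search(successors, h, start, is_goal):
        open_heap = [(h(start), start)]           # OPEN: priority queue ordered by h(n)
        closed = set()                            # CLOSED: nodes already examined
        parent = {start: None}                    # record each node's parent
        while open_heap:                          # loop until goal found or OPEN is empty
            _, node = heapq.heappop(open_heap)    # pick the best (most promising) node
            if is_goal(node):
                return node
            if node in closed:
                continue
            closed.add(node)
            for succ in successors(node):         # generate its successors
                if succ not in parent:            # if new: record parent, add to OPEN
                    parent[succ] = node
                    heapq.heappush(open_heap, (h(succ), succ))
        return None

    # Hypothetical example: the heuristic is the numeric distance to the goal value 6.
    edges = {1: [2, 3], 2: [4], 3: [5], 4: [6], 5: [], 6: []}
    print(best_first_search(lambda s: edges.get(s, []), lambda s: abs(6 - s), 1, lambda s: s == 6))   # 6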

Greedy search:

  • h(n) = estimated cost of the cheapest path from node n to a goal state.
  • Neither optimal nor complete

Uniform-cost search:

  • g(n) = cost of the cheapest path from the initial state to node n.
  • Optimal and complete, but very inefficient

A Brief History of AI

  • In 1950, Alan Turing published the paper “Computing Machinery and Intelligence”, which asked “Can machines think?”. In it, he proposed the Turing Test as a measure of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. No machine has convincingly passed that test so far.
  • In 1951, using the Ferranti Mark 1 machine at the University of Manchester, the computer scientist Christopher Strachey wrote a checkers (draughts) program, and Dietrich Prinz wrote a chess program.
  • In 1956, John McCarthy coined the term Artificial Intelligence at the Dartmouth conference, and he is therefore known as the father of AI. According to John McCarthy, AI is the science and engineering of making intelligent machines, especially intelligent computer programs.
  • In 1964, Danny Bobrow’s dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
  • In 1965, Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
  • In 1969, Scientists at Stanford Research Institute developed Shakey, a robot, equipped with locomotion, perception, and problem-solving.
  • In 1997, The Deep Blue Chess Program beat the world chess champion, Garry Kasparov.

Why Artificial Intelligence?

  • With the help of AI, you can create software or devices that solve real-world problems accurately and efficiently, in areas such as healthcare, marketing, and traffic management.
  • With the help of AI, you can create personal virtual assistants, such as Google Assistant, Siri, and Alexa.
  • With the help of AI, you can build robots that can work in environments where human survival would be at risk.
  • AI opens a path to other new technologies, new devices, and new opportunities.

AI Environments

An AI environment refers to the world or context within which an artificial intelligence (AI) agent operates. It encompasses all the external factors that an AI agent interacts with, perceives, and responds to, including objects, entities, events, and conditions that influence the agent’s behavior.

Types of AI Environments

  1. Fully Observable vs. Partially Observable

    • Fully Observable: The agent has complete and accurate information about the environment at all times.
      • Example: Chess, where the AI can see the entire board and knows the location of every piece.
    • Partially Observable: The agent has limited information and must infer missing details or deal with uncertainty.
      • Example: Poker, where the AI doesn’t know the opponents’ cards.
  2. Deterministic vs. Stochastic

    • Deterministic: The outcomes of all actions are predictable and consistent. If an action is taken, the result is always the same.
      • Example: A mathematical puzzle where the outcome is pre-determined by the rules.
    • Stochastic: There is an element of randomness, and actions can lead to different outcomes even under the same conditions.
      • Example: Stock market predictions, where numerous factors introduce uncertainty.
  3. Static vs. Dynamic

    • Static: The environment remains unchanged unless acted upon by the agent. There is no independent evolution of the environment.
      • Example: A game like Sudoku where the environment doesn’t change while the agent is thinking.
    • Dynamic: The environment can change while the agent is deciding what to do.
      • Example: A self-driving car’s environment, where other cars, pedestrians, and traffic signals can change independently of the AI.
  4. Discrete vs. Continuous

    • Discrete: The environment consists of a finite number of distinct states or actions.
      • Example: Board games like Tic-Tac-Toe, where there are clear, finite moves and states.
    • Continuous: The environment has a range of possible states and actions that aren’t countable.
      • Example: Robotic control, where the position of arms and joints can vary continuously.
  5. Episodic vs. Sequential

    • Episodic: Each episode or task is independent of the others. The agent doesn’t need to consider previous or future states while making a decision.
      • Example: Image classification tasks, where each image is evaluated separately.
    • Sequential: The agent’s current actions influence future states or outcomes.
      • Example: Video games, where current decisions affect future game states and player progress.
  6. Single-agent vs. Multi-agent

    • Single-agent: Only one AI agent is interacting with the environment, without the influence of other agents.
      • Example: Solving a maze where only one agent navigates the maze.
    • Multi-agent: Multiple agents operate in the environment, either in cooperation or competition with each other.
      • Example: Autonomous vehicles interacting with other vehicles and pedestrians.

Components of an AI Environment

  1. Sensors: Devices or components that collect data from the environment. These can include cameras, microphones, or virtual inputs in the case of non-physical environments.
  2. Perception: The AI’s ability to interpret sensory data and understand its environment, which could be as simple as reading the state of a game board or as complex as analyzing video feeds.
  3. Actions: The responses or decisions made by the AI in the environment, such as moving a robotic arm, driving a car, or choosing the next move in a game.
  4. Actuators: The mechanisms through which an AI agent influences the environment, such as motors in robots or control signals in software systems.
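
A minimal Python sketch of how these four components fit together in a perceive-decide-act loop; all class and function names here are illustrative assumptions, not part of the notes.

    class SimpleAgent:
        # Illustrative agent skeleton: sensors feed perception, a policy chooses
        # an action, and actuators apply that action to the environment.
        def __init__(self, sensors, policy, actuators):
            self.sensors = sensors          # callables that read the environment
            self.policy = policy            # maps a percept to an action
            self.actuators = actuators      # callable that applies an action

        def step(self, environment):
            raw = [sense(environment) for sense in self.sensors]   # 1. Sensors collect data
            percept = tuple(raw)                                   # 2. Perception interprets it
            action = self.policy(percept)                          # 3. An action is chosen
            self.actuators(environment, action)                    # 4. Actuators change the world
            return action

    # Hypothetical usage: a thermostat-like agent that turns a heater on below 20 degrees.
    env = {"temperature": 18, "heater": False}
    agent = SimpleAgent(
        sensors=[lambda e: e["temperature"]],
        policy=lambda percept: "heat_on" if percept[0] < 20 else "heat_off",
        actuators=lambda e, a: e.update(heater=(a == "heat_on")),
    )
    print(agent.step(env), env)   # heat_on {'temperature': 18, 'heater': True}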

Challenges in AI Environments

  • Uncertainty: Handling incomplete or ambiguous information.
  • Complexity: Dealing with environments that have a large number of possible states or actions.
  • Adaptability: Adjusting to changing conditions or unforeseen events.

Conclusion

The design and understanding of an AI environment are crucial to building effective AI systems. Different environments present unique challenges, and the nature of the environment dictates the algorithms and strategies an AI agent will use to achieve its goals.