
Intelligent agents

3.

| Agent type | Performance measure | Environment | Actuators | Sensors |
| --- | --- | --- | --- | --- |
| Human garbage collector | Safe driving, maximize profits | Roads, police, customers, weather | Accelerator, brake, display, horn | Cameras, radar, GPS, engine sensors |
| Chess engine | Fast; winning (checkmate) is better than a tie, a tie is better than losing; maximize position quality | Another player, chess rules | Chess piece locations on the chess board | Virtual chess board feed |
| Google News | Maximize profits, legality, fact-checking | Customers, police, current events | News feed | Database feed |
| Maze solver | Fast; minimize obstacles hit and impact on other maze users | Maze, maze rules, obstacles | Agent position in the maze | Virtual maze feed |

4. The decision tree provided by the homework:

| Input | Output | Correct? |
| --- | --- | --- |
| -20 | 1 | True |
| 40 | 4 | False |
| 2 | 2 | False |
| 35 | 4 | False |
| 14 | 4 | True |
| 45 | 4 | False |
| 6 | 2 | True |
| 22 | 4 | False |
| 9 | 3 | False |

Is this a better decision tree?

#include <limits.h>

struct State {
	long long min;
	long long max;
	unsigned char response;
};

/*@
   requires LLONG_MIN < environment < LLONG_MAX;
   assigns \nothing;
   ensures \result \in {1,2,4,5,6,7};
   ensures environment == -20 ==> \result == 1;
   ensures environment == 40 ==> \result == 6;
   ensures environment == 2 ==> \result == 1;
   ensures environment == 35 ==> \result == 6;
   ensures environment == 14 ==> \result == 4;
   ensures environment == 45 ==> \result == 7;
   ensures environment == 6 ==> \result == 2;
   ensures environment == 22 ==> \result == 5;
   ensures environment == 9 ==> \result == 2;
*/
int choose_best_decision(int environment) {
	// Ordered, disjoint intervals covering every possible input value.
	struct State states[6] = {
		{.min=LLONG_MIN,.max=2,.response=1},
		{.min=3,.max=9,.response=2},
		{.min=10,.max=14,.response=4},
		{.min=15,.max=22,.response=5},
		{.min=23,.max=40,.response=6},
		{.min=41,.max=LLONG_MAX,.response=7},
	};
	int n = sizeof(states)/sizeof(states[0]);
	//@ assert \exists int j; 0 <= j < n && states[j].min <= environment <= states[j].max;
	int j = 0;
	/*@ loop invariant 0 <= j < n;
	    loop invariant \forall int i; 0 <= i < j ==> !(states[i].min <= environment <= states[i].max);
	    loop assigns j;
	    loop variant n-j;
	 */
	while (j < n - 1 && (states[j].min > environment || environment > states[j].max))
		j++;
	return states[j].response;
}

/*@ 
 assigns \nothing; 
*/
int main() {
	return 0;
}
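The contracts above are ACSL annotations. Assuming the file is saved as agents.c (a file name chosen here purely for illustration), the proof obligations could be checked with Frama-C's WP plugin via frama-c -wp agents.c.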

5.

  a. Our decision tree uses accuracy as a performance measure, that is, the fraction of inputs that are correctly categorized.
  b. Formally, accuracy is
$$\text{Accuracy} = \dfrac{TP+TN}{TP+TN+FP+FN}$$

where TP=True positive; FP=False positive; TN=True negative; FN=False negative.

but our case is multiclass classification, so

$$\text{Accuracy} = \dfrac{\text{correct classifications}}{\text{all classifications}}$$
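A minimal Python sketch of this multiclass measure (my own illustration, not part of the homework):

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the ground truth (multiclass).
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)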

c.

The following is the confusion matrix from the decision tree that was provided by the instructor.

|  | Positive | Negative |
| --- | --- | --- |
| True | 3 | NA |
| False | 7 | NA |

$$\text{Accuracy} = \dfrac{3}{10} = 30\%$$

The following is the confusion matrix from the decision tree that was provided by me.

|  | Positive | Negative |
| --- | --- | --- |
| True | 10 | NA |
| False | 0 | NA |

$$\text{Accuracy} = \dfrac{10}{10} = 100\%$$

d. Our performance measure cannot count true negatives or false negatives, so it can be misleading.

  1. Simple reflex agent
| Agent type | Performance measure | Environment | Sensors | Actuators |
| --- | --- | --- | --- | --- |
| Equation solver | Accuracy and precision of the $x$ values; fast | Floating-point arithmetic | $a_i, n$ | $[x \mid x \in \mathbb{R}]$ |

$$
\begin{aligned}
&\text{solve equation for } x :: \text{let } a_i \in \mathbb{Z},\ n \in \mathbb{N},\ \sum_{i=0}^{n} a_i x^i = 0 \implies [x \mid x \in \mathbb{R}] \\
&\text{solve equation for } x\ id \\
&\quad \mid 1 = [x_1] \\
&\quad \mid 2 = [x_2] \\
&\quad \mid 3 = [x_3] \\
&\quad \ldots \\
&\quad \mid m = [x_m, x_{m+1}] \\
&\quad \mid m+1 = [x_{m+1}, x_{m+2}] \\
&\quad \ldots \\
&\text{where } id = \mathrm{identify}\!\left(a_i \in \mathbb{Z},\ n \in \mathbb{N},\ \sum_{i=0}^{n} a_i x^i = 0\right)
\end{aligned}
$$
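A minimal Python sketch of this table-driven solver (my own illustration; the table entries below are hypothetical placeholders):

# Hypothetical lookup table: one entry per known equation in the database,
# mapping an equation id to its stored solution set.
SOLUTIONS = {
    1: [1.0],
    2: [-2.0, 2.0],
}

def solve_equation(equation_id):
    # Pure table lookup: no computation, only stored answers.
    return SOLUTIONS.get(equation_id)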

Note that it is not practical to build such an agent because it occupies a lot of memory: $O(l \cdot n)$, where $n$ is the number of solutions and $l$ is the number of equations in our database.

It doesn’t solve all cases because our memory is limited.

Another reflex agent could be a one-solution solver from numerical analysis that fits our needs, as sketched below.
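For example, a minimal sketch (my own, under the assumption that one root on a sign-changing interval is enough) using the bisection method:

def one_solution_solver(coeffs, lo, hi, tol=1e-9):
    # Bisection: returns one root of sum(a_i * x**i) = 0 on [lo, hi],
    # assuming the polynomial changes sign on that interval.
    f = lambda x: sum(a * x ** i for i, a in enumerate(coeffs))
    assert f(lo) * f(hi) <= 0, "need a sign change on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Example: x^2 - 2 = 0 has a root near 1.41421356 on [0, 2].
print(one_solution_solver([-2, 0, 1], 0.0, 2.0))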

9.

| State | Dirty | Clean | Action |
| --- | --- | --- | --- |
| A | A | B | Suck |
| B | A | B | Move forward & turn 90 degrees |

The environment provides the percepts ‘Dirty’ and ‘Clean’. Our agent keeps a current state; when it receives a new percept, it transitions to a new state and keeps count of the cells it has visited.

from typing import Literal

class Agent:
   # Model-based reflex agent for a four-cell vacuum world.
   def __init__(self):
      self.current_state = 'A'
      self.cell = 1
      self.cells = 4
      self.states = {
         'A': {
            'dirty': 'A',
            'clean': 'B',
            'response': 'Suck'
         },
         'B': {
            'dirty': 'A',
            'clean': 'B',
            'response': 'Move forward & turn 90 degrees'
         }
      }

   def choose_decision(self, perception: Literal['dirty', 'clean']) -> str:
      # Stop once every cell has been handled.
      if self.cell == self.cells:
         return 'FINISH'
      # Transition on the percept, count the cell, and act for the new state.
      self.current_state = self.states[self.current_state][perception]
      self.cell += 1
      return self.states[self.current_state]['response']
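A short usage example (the percept sequence is of my own choosing):

agent = Agent()
print(agent.choose_decision('dirty'))  # Suck (stays in state A)
print(agent.choose_decision('clean'))  # Move forward & turn 90 degrees
print(agent.choose_decision('clean'))  # Move forward & turn 90 degrees
print(agent.choose_decision('clean'))  # FINISH after all 4 cells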
| Agent type | Performance measure | Environment | Sensors | Actuators |
| --- | --- | --- | --- | --- |
| K-queen problem solver | Find the right queen positions (goal) | Chessboard rules | $k$ | Queen positions |

Agent Program.

Search for solutions with a path-finding algorithm (DFS, A*, BFS, …): generate a new $q_n$ location from a $q_{n-1}$ location, where $q_0$ is free, and test the position against chess queen movements, as in the sketch below.
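A minimal DFS sketch of this generate-and-test program (my own illustration; function names are mine):

def k_queens(k):
    def attacks(placed, row, col):
        # placed[c] is the row of the queen in column c; columns are
        # distinct by construction, so check rows and diagonals only.
        return any(r == row or abs(r - row) == abs(c - col)
                   for c, r in enumerate(placed))

    def dfs(placed):
        col = len(placed)
        if col == k:                      # goal test: all queens placed
            return placed
        for row in range(k):              # generate q_n from q_0..q_{n-1}
            if not attacks(placed, row, col):
                result = dfs(placed + [row])
                if result is not None:
                    return result
        return None

    return dfs([])

print(k_queens(8))  # one solution as a list of rows, one per column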

11.

| Agent type | Performance measure | Environment | Sensors | Actuators |
| --- | --- | --- | --- | --- |
| Eight puzzle solver | Find the right number locations in the puzzle such that they are in order (goal) | Puzzle rules, initial puzzle | Puzzle positions | Number locations |

Agent Program.

Search for solutions with a path-finding algorithm: generate a new $q_n$ configuration from a $q_{n-1}$ configuration, where $q_0$ is the initial puzzle, and test the position against the puzzle goal, as in the sketch below.
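A minimal BFS sketch of this program (my own illustration; states are 9-tuples read row by row, with 0 as the blank):

from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def neighbors(state):
    # Generate q_n from q_{n-1} by sliding the blank up/down/left/right.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:                 # goal test
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(len(solve((1, 2, 3, 4, 5, 6, 0, 7, 8))))  # number of moves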