Intelligent agents
3.
Agent type | Performance measure | Environment | Actuators | Sensors |
Human garbage collector | Safe driving, maximize profits. | Roads, police, customers, weather. | Accelerator, brake, display, horn. | Cameras, radar, GPS, engine sensors. |
Chess engine | Fast play; winning (checkmate) is better than a draw, and a draw is better than losing; maximize position quality. | Another player, chess rules. | Chess-piece placement on the board. | Virtual chess-board feed. |
Google news | Maximize profits, stay legal, pass fact-checkers. | Customers, police, current events. | News feed. | Database feed. |
Maze solver | Fast; minimize collisions with obstacles and impact on other maze users. | Maze, maze rules, obstacles. | Agent position in the maze. | Virtual maze feed. |
4. Decision tree provided by the homework.
Input | Output | Condition |
-20 | 1 | True |
40 | 4 | False |
2 | 2 | False |
35 | 4 | False |
14 | 4 | True |
45 | 4 | False |
6 | 2 | True |
22 | 4 | False |
9 | 3 | False |
Is this a better decision tree?
#include <limits.h>

struct State {
    long long min;
    long long max;
    unsigned char response;
};

/*@
  requires INT_MIN <= environment <= INT_MAX;
  assigns \nothing;
  ensures \result == 1 || \result == 2 || \result == 4 ||
          \result == 5 || \result == 6 || \result == 7;
  ensures environment == -20 ==> \result == 1;
  ensures environment == 40 ==> \result == 6;
  ensures environment == 2 ==> \result == 1;
  ensures environment == 35 ==> \result == 6;
  ensures environment == 14 ==> \result == 4;
  ensures environment == 45 ==> \result == 7;
  ensures environment == 6 ==> \result == 2;
  ensures environment == 22 ==> \result == 5;
  ensures environment == 9 ==> \result == 2;
*/
int choose_best_decision(int environment) {
    /* Each state covers an interval [min, max]; together the intervals
       partition the whole integer range, so a match always exists. */
    struct State states[6] = {
        {.min = LLONG_MIN, .max = 2,         .response = 1},
        {.min = 3,         .max = 9,         .response = 2},
        {.min = 10,        .max = 14,        .response = 4},
        {.min = 15,        .max = 22,        .response = 5},
        {.min = 23,        .max = 40,        .response = 6},
        {.min = 41,        .max = LLONG_MAX, .response = 7},
    };
    int n = sizeof(states) / sizeof(states[0]);
    //@ assert \exists integer k; 0 <= k < n && states[k].min <= environment <= states[k].max;
    int j = 0;
    /*@ loop invariant 0 <= j < n;
        loop invariant \forall integer i; 0 <= i < j ==> !(states[i].min <= environment <= states[i].max);
        loop assigns j;
        loop variant n - j;
    */
    while (j < n - 1 && (environment < states[j].min || environment > states[j].max))
        j++;
    return states[j].response;
}
/*@
  assigns \nothing;
*/
int main() {
    return 0;
}
5.
- Our decision tree uses accuracy as a performance measure, that is, the fraction of inputs that are correctly categorized.
- Formally, accuracy is
accuracy = (TP + TN) / (TP + TN + FP + FN),
where TP = true positives; FP = false positives; TN = true negatives; FN = false negatives,
but our case is multiclass classification, so
accuracy = (number of correctly classified inputs) / (total number of inputs).
c.
The following is the confusion matrix from the decision tree that was provided by the instructor.
| Positive | Negative |
True | 3 | NA |
False | 7 | NA |
The following is the confusion matrix from the decision tree that was provided by me.
| Positive | Negative |
True | 10 | NA |
False | 0 | NA |
d. Our performance measure can't capture true negatives and false negatives, so it can be misleading.
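As a sanity check on part d, the accuracy numbers implied by the two matrices can be computed directly (a minimal sketch; the counts are copied from the tables above, and the unmeasured negative columns are simply omitted):

```python
def accuracy(true_count, false_count):
    """Fraction of correctly categorized inputs."""
    return true_count / (true_count + false_count)

# Counts copied from the two confusion matrices above.
instructor_tree = accuracy(true_count=3, false_count=7)
my_tree = accuracy(true_count=10, false_count=0)

print(instructor_tree)  # 0.3
print(my_tree)          # 1.0
```

Because the negative columns are NA, these numbers reflect only the positives, which is exactly why the measure can mislead.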
- Simple reflex agent
Agent type | Performance measure | Environment | Sensors | Actuators |
Equation solver | Accuracy and precision of x values, speed. | Floating-point arithmetic. |  |  |
Note that it's not practical to build such an agent because it occupies a lot of memory; in fact, memory grows with (number of solutions) × (number of equations) in our database.
It doesn’t solve all cases because our memory is limited.
Another reflex agent could be a one-solution solver from numerical analysis, which fits our needs.
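A minimal sketch of such a table-driven reflex solver, assuming a hypothetical in-memory database that maps each equation's text to its known solutions:

```python
# Hypothetical lookup table: equation text -> list of solutions.
# Memory grows with (number of equations) x (solutions per equation).
solutions_table = {
    "x**2 - 4 = 0": [-2.0, 2.0],
    "x + 1 = 0": [-1.0],
}

def reflex_solve(equation: str):
    """Simple reflex agent: act purely on the current percept (the equation)."""
    return solutions_table.get(equation)  # None when the equation is unknown

print(reflex_solve("x**2 - 4 = 0"))  # [-2.0, 2.0]
print(reflex_solve("x**3 = 8"))      # None: our memory is limited
```

The `None` case illustrates the limitation noted above: the agent cannot solve equations outside its stored table.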
9.
State | Dirty | Clean | Action |
A | A | B | Suck |
B | A | B | Move forward & turn 90 degrees |
The environment provides the percepts 'Dirty' and 'Clean'. The agent keeps a current state; when it receives a new percept, it transitions to a new state and counts the cells it has visited.
class Agent:
    # State-transition table: state -> {percept -> next state, plus response}.
    states = {
        'A': {'dirty': 'A', 'clean': 'B', 'response': 'Suck'},
        'B': {'dirty': 'A', 'clean': 'B', 'response': 'Move forward & turn 90 degrees'},
    }

    def __init__(self):
        self.current_state = 'A'
        self.cell = 1
        self.cells = 4

    def choose_decision(self, perception: str) -> str:
        # perception is either 'dirty' or 'clean'
        if self.cell == self.cells:
            return 'FINISH'
        self.current_state = self.states[self.current_state][perception]
        self.cell += 1
        return self.states[self.current_state]['response']
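A self-contained rerun of the same state table, driven with a hypothetical percept sequence to show the action stream it produces:

```python
# State table copied from above; the percept sequence below is hypothetical.
STATES = {
    'A': {'dirty': 'A', 'clean': 'B', 'response': 'Suck'},
    'B': {'dirty': 'A', 'clean': 'B', 'response': 'Move forward & turn 90 degrees'},
}

def run(percepts, cells=4):
    """Feed percepts to the state machine and collect its actions."""
    state, cell, actions = 'A', 1, []
    for p in percepts:
        if cell == cells:
            actions.append('FINISH')
            break
        state = STATES[state][p]
        cell += 1
        actions.append(STATES[state]['response'])
    return actions

print(run(['dirty', 'clean', 'clean', 'dirty']))
# ['Suck', 'Move forward & turn 90 degrees', 'Move forward & turn 90 degrees', 'FINISH']
```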
Agent type | Performance measure | Environment | Sensors | Actuators |
K-queen problem solver | find the right queen positions (goal). | chessboard rules | k | queen positions |
Agent Program.
Search for solutions with a path-finding algorithm (DFS, A*, BFS, …), generate a new queen location from the current location whose target square is free, and test the position against chess queen movements.
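A minimal DFS/backtracking sketch of this agent program; the column-by-column placement order is an assumption, not specified above:

```python
def solve_queens(k):
    """Place one queen per column via DFS; test each square against queen moves."""
    positions = []  # positions[c] = row of the queen in column c

    def free(row, col):
        # A square is free if no earlier queen shares its row or a diagonal.
        return all(r != row and abs(r - row) != abs(c - col)
                   for c, r in enumerate(positions))

    def dfs(col):
        if col == k:
            return True
        for row in range(k):
            if free(row, col):
                positions.append(row)
                if dfs(col + 1):
                    return True
                positions.pop()  # backtrack
        return False

    return positions if dfs(0) else None

print(solve_queens(4))  # [1, 3, 0, 2]
```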
11.
Agent type | Performance measure | Environment | Sensors | Actuators |
eight puzzle solver | find the right 'number locations' in the puzzle such that they are in order (goal). | puzzle rules, initial puzzle | puzzle positions | number locations |
Agent Program.
Search for solutions with a path-finding algorithm, generate a new tile location from the current location starting from the initial puzzle, and test the position against the puzzle goal.
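A minimal BFS sketch of this agent program, assuming the goal ordering 1–8 with the blank (0) last; it returns the number of moves from the initial puzzle to the goal:

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank square

def neighbors(state):
    """Generate new states by sliding an adjacent tile into the blank square."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """Breadth-first search from the initial puzzle to the goal."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == GOAL:
            return depth
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return None  # unsolvable initial puzzle

print(solve((1, 2, 3, 4, 5, 6, 7, 0, 8)))  # 1
```

BFS guarantees the move count is minimal; A* with a heuristic such as Manhattan distance would explore fewer states.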