Agents can be grouped into five categories based on their degree of perceived intelligence and capability:
Simple reflex agents ignore the history of percepts and act only on the current percept. The agent function is based on condition-action rules, where a condition-action rule maps a state/condition to an action: if the condition is true, the action is taken; otherwise it is not. These agents succeed only when the environment is fully observable. When the environment is only partially observable, a simple reflex agent can fall into infinite loops, which may be avoidable if the agent can randomize its actions. In short, the main problems with simple reflex agents are their lack of memory and their unreliability in partially observable environments.
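As a minimal sketch, consider the classic two-square vacuum world (an assumed example, not described above): the condition-action rules map the current percept directly to an action, with no memory of earlier percepts.

```python
# Simple reflex agent for the two-square vacuum world (illustrative sketch).
# The percept is a (location, status) pair; rules fire on the current
# percept only, with no stored history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    # Condition-action rules: condition -> action.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "MoveRight"
    if location == "B":
        return "MoveLeft"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))  # MoveRight
```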
A model-based agent can handle a partially observable environment by maintaining a model of the world. The agent has to find a rule whose condition matches the current situation of the environment. It keeps track of an internal state, adjusted by each percept, that depends on the percept history; this state is stored inside the agent and describes the part of the environment that cannot currently be seen. Updating the state requires two kinds of knowledge: how the world evolves independently of the agent, and how the agent's own actions affect the world.
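The sketch below illustrates this structure; the class layout and the `transition_model` callback are assumptions for illustration, not a fixed API.

```python
# Model-based reflex agent (illustrative sketch). The agent stores an
# internal state describing the unseen part of the environment and updates
# it from each percept, using a model of how the world evolves and how the
# agent's own actions affect it.

class ModelBasedAgent:
    def __init__(self, rules, transition_model):
        self.state = {}                   # internal description of the world
        self.last_action = None
        self.rules = rules                # list of (condition, action) pairs
        self.transition_model = transition_model

    def update_state(self, percept):
        # Predict the unseen parts of the world from the last action,
        # then overwrite the prediction with what is actually perceived.
        self.state = self.transition_model(self.state, self.last_action)
        self.state.update(percept)

    def act(self, percept):
        self.update_state(percept)
        # Find a rule whose condition matches the current situation.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
```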
Goal-based agents have a particular goal to achieve, so they always try to reduce the distance to that goal and take decisions based on how far they currently are from it. This allows an agent to select, among multiple paths or possibilities, the one that reaches the goal state. The knowledge that supports these decisions is represented explicitly and can be modified, which makes the agent more flexible. Goal-based agents usually involve planning and search, and their behaviour can easily be changed or adapted to the situation.
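As a sketch of this idea, the toy grid agent below picks whichever move most reduces its distance to the goal; the grid, the moves, and the use of Manhattan distance are all invented for illustration.

```python
# Goal-based agent on a grid (illustrative sketch): among the possible
# moves, choose the one that most reduces the distance to the goal state.

GOAL = (3, 3)
MOVES = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

def distance_to_goal(pos):
    # Manhattan distance as the measure of "how far from the goal".
    return abs(pos[0] - GOAL[0]) + abs(pos[1] - GOAL[1])

def goal_based_agent(pos):
    def result(move):
        dx, dy = MOVES[move]
        return (pos[0] + dx, pos[1] + dy)
    return min(MOVES, key=lambda m: distance_to_goal(result(m)))

print(goal_based_agent((0, 0)))  # 'Up' (ties broken by dictionary order)
```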
In utility-based agents, utility describes the state of happiness of the agent. Sometimes merely reaching the destination is not enough: the agent has to achieve the goal in a safer, cheaper, or quicker manner. For that reason, utility-based agents choose the actions that maximize expected utility in order to pick the best path to the goal. A utility function maps a state to a real number that describes the associated degree of happiness.
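A small sketch of this: the candidate routes and the weights in the utility function below are invented for illustration, but they show how a utility function maps each option to a real number and the agent picks the maximum.

```python
# Utility-based agent (illustrative sketch): the utility function maps a
# state (here, a route's attributes) to a real number, and the agent
# chooses the option that maximizes it.

routes = [
    {"name": "highway",  "time": 30, "cost": 5.0, "risk": 0.2},
    {"name": "backroad", "time": 45, "cost": 2.0, "risk": 0.1},
    {"name": "shortcut", "time": 25, "cost": 4.0, "risk": 0.6},
]

def utility(route):
    # Higher is better: penalize time, cost, and risk. The weights are
    # assumptions a real agent would tune to its preferences.
    return -(1.0 * route["time"] + 2.0 * route["cost"] + 100.0 * route["risk"])

best = max(routes, key=utility)
print(best["name"])  # 'backroad' under these weights (safest and cheapest)
```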
Learning agents have the ability to learn from past experience: they start acting with basic knowledge and then adapt automatically by learning from the environment. A learning agent has four basic components (a sketch combining them follows the list):
1. Learning Element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic about how well the agent is performing with respect to a fixed performance standard.
3. Performance Element: responsible for selecting external actions.
4. Problem Generator: responsible for suggesting actions that will lead to new and informative experiences.
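The sketch below ties the four components together in a toy setting where the agent learns which of two actions pays off; the environment, the reward values, and all names are assumptions for illustration.

```python
import random

# Learning agent (illustrative sketch). The performance element picks the
# action currently believed best, the problem generator occasionally
# suggests exploratory actions, the critic scores each outcome against the
# agent's current knowledge, and the learning element improves that
# knowledge from the critic's feedback.

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # the agent's basic knowledge
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        return random.choice(list(self.values))

    def critic(self, action, reward):
        return reward - self.values[action]      # feedback vs. current knowledge

    def learning_element(self, action, reward):
        self.counts[action] += 1
        self.values[action] += self.critic(action, reward) / self.counts[action]

agent = LearningAgent(["left", "right"])
true_rewards = {"left": 0.2, "right": 0.8}       # hidden from the agent
for step in range(100):
    a = agent.problem_generator() if step % 4 == 0 else agent.performance_element()
    agent.learning_element(a, true_rewards[a])

print(agent.performance_element())               # 'right' after learning
```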