Agent Reasoning

Introduction
In order to make decisions, agents have to be able to evaluate their external environment. But computer agents are nothing more than sequences of 0s and 1s, so the first step is to define rules for interpreting external reality. Many models of agent reasoning describe these rules: they define how an agent understands its environment and how it reacts to the inputs it receives. The goal of this article is to describe the most common models and explain when each is the best fit.

Before starting with the descriptions, it is worth noting that some models can be used only for single-agent systems, some only for multi-agent systems, and some for both. In most cases, the description of a model's principles makes clear which kind of system it suits.

Case-Based Reasoning
Case-Based Reasoning is probably the type of reasoning most people imagine first. In general, it is based on a very simple if-then principle: a new problem is solved by reusing the solution of the most similar problem encountered before. Although this may sound primitive, on many occasions it is still the best choice.
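The retrieve-and-reuse idea behind case-based reasoning can be sketched in a few lines of Python. This is a minimal, illustrative example, not a reference implementation: the `CaseBase` class, the feature-overlap similarity measure, and the rover-themed cases are all invented here.

```python
def similarity(a, b):
    """Fraction of features on which two problem descriptions agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

class CaseBase:
    """A tiny case library: retain solved cases, reuse the closest one."""

    def __init__(self):
        self.cases = []  # list of (problem, solution) pairs

    def retain(self, problem, solution):
        self.cases.append((problem, solution))

    def solve(self, problem):
        # Retrieve the most similar past case and reuse its solution.
        best_problem, best_solution = max(
            self.cases, key=lambda case: similarity(case[0], problem)
        )
        return best_solution

cb = CaseBase()
cb.retain({"terrain": "rocky", "light": "day"}, "drive slowly")
cb.retain({"terrain": "flat", "light": "night"}, "wait for daylight")

print(cb.solve({"terrain": "rocky", "light": "dusk"}))  # prints "drive slowly"
```

The unseen situation (rocky terrain at dusk) matches no stored case exactly, but it shares the terrain feature with the first case, so that case's solution is reused, which is exactly the if-then flavour described above.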

CARDS
CARDS is a model of case-based reasoning used for multi-agent systems. The acronym stands for case-based reasoning decision support. It sets out the principles of negotiation when several agents are involved.

BDI Model
The BDI acronym stands for Belief-Desire-Intention. Beliefs represent the information an agent has gathered about itself and its environment (for example, a robot on Mars knows the external temperature, the terrain, etc.). Desires represent the goals the agent would like to achieve (it wants to collect a stone). Intentions are the desires the agent has committed to and is actively pursuing (collecting at least five different kinds of stones on Mars).
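A toy perceive-deliberate-act loop can make the three components concrete. The sketch below assumes a simplified Mars-rover scenario; the class, the percepts, and the threshold of five stones are illustrative choices, not part of any standard BDI implementation.

```python
class BDIAgent:
    """Toy BDI agent: beliefs about the world, desires as candidate goals,
    intentions as the goals it has committed to act on."""

    def __init__(self):
        self.beliefs = {"temperature": -60, "stones_collected": 0}
        self.desires = ["collect_stone", "return_to_base"]  # goals it would like
        self.intentions = []                                # goals it commits to

    def perceive(self, percept):
        # Beliefs: update the agent's picture of itself and its environment.
        self.beliefs.update(percept)

    def deliberate(self):
        # Promote one desire to an intention, based on current beliefs.
        if self.beliefs["stones_collected"] < 5 and "collect_stone" in self.desires:
            self.intentions.append("collect_stone")
        else:
            self.intentions.append("return_to_base")

    def act(self):
        # Execute the oldest committed intention.
        intention = self.intentions.pop(0)
        if intention == "collect_stone":
            self.beliefs["stones_collected"] += 1
        return intention

agent = BDIAgent()
agent.perceive({"temperature": -55})
agent.deliberate()
print(agent.act())  # prints "collect_stone": fewer than 5 stones gathered so far
```

The split matters: perceiving only changes beliefs, deliberating only turns desires into intentions, and acting only executes intentions, so each stage can be reasoned about separately.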

Biased Reasoning
Biased reasoning is closer to the human mind than the models mentioned above: it can give an agent's experience precedence over strictly rational reasoning.
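One simple way to model this preference for experience is to score options rationally and then weight the scores by what the agent has lived through. The sketch below assumes exactly that scheme; the route names, utilities, and bias values are invented for illustration.

```python
# Purely rational scores: route_a looks better on paper.
rational_utility = {"route_a": 0.9, "route_b": 0.7}

# Experience bias: suppose route_a failed for this agent before,
# so its score is heavily discounted (illustrative values).
experience_bias = {"route_a": 0.5, "route_b": 1.0}

def choose(options):
    # Experience multiplies the rational score, so an option that is
    # "worse" on paper can win if the agent's history favours it.
    return max(options, key=lambda o: rational_utility[o] * experience_bias[o])

print(choose(["route_a", "route_b"]))  # prints "route_b": 0.7 beats 0.9 * 0.5
```

This is the essential departure from purely case-based or rule-based models: the ranking is no longer determined by the rational utility alone.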