Agents Reasoning

Revision as of 21:08, 24 January 2018 by Xbilr00 (talk | contribs)


In order for agents to make decisions, they have to be able to evaluate their external environment. But computer agents are, at bottom, nothing more than sequences of 0s and 1s, so the first step is to set rules for evaluating external reality. Many models of agent reasoning describe these rules: they specify how an agent understands its external environment and how it reacts to the inputs it receives. The goal of this article is to describe the most common models and explain when each is best used.

Before I start with the descriptions, it is good to realize that some models can be used only for single-agent systems, some only for multi-agent systems, and some for both. In most cases, the description of a model's principles makes clear which kind of system it can be used for.

Case-Based Reasoning

Case-Based Reasoning is the type of reasoning most people probably imagine first. Generally speaking, it is based on a very simple if-then principle.[1] Although this may sound quite primitive, on many occasions it can still be the best choice.
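To make the principle concrete, here is a minimal illustrative sketch of case-based reasoning (not any specific published model): the agent stores past cases as pairs of problem features and a known solution, and solves a new problem by reusing the solution of the most similar stored case. All names and the example data are hypothetical.

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class CaseBase:
    def __init__(self):
        self.cases = []  # list of (features, solution) pairs

    def retain(self, features, solution):
        """Store a solved case for future reuse."""
        self.cases.append((features, solution))

    def solve(self, features):
        """Retrieve the nearest stored case and reuse its solution."""
        nearest = min(self.cases, key=lambda c: distance(c[0], features))
        return nearest[1]

agent = CaseBase()
agent.retain((0.9, 0.1), "attack")   # hypothetical sensor readings
agent.retain((0.1, 0.9), "retreat")
print(agent.solve((0.8, 0.2)))       # nearest case is the first one -> attack
```

The retrieve-and-reuse loop above is the if-then idea in executable form: "if the situation looks like this past case, then do what worked before."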


CARDS is a model of case-based reasoning used for multi-agent systems. The acronym stands for case-based reasoning decision support. It sets the principles of negotiation when multiple agents are present. [2]

BDI Model

The BDI acronym stands for Belief-Desire-Intention. Beliefs represent the information an agent has about itself and its environment (for example, a robot on Mars knows the external temperature, the terrain, etc.). Desires represent the objectives the agent would like to achieve (it wants to collect a stone). Intentions are the desires the agent has committed to actually pursuing (it is going to collect at least five different kinds of stones on Mars). [3]
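A hedged sketch of a BDI-style deliberation cycle, reusing the Mars rover example from the text; the class, the method names, and the numeric values are all illustrative assumptions, not a real rover API:

```python
class MarsRover:
    def __init__(self):
        self.beliefs = {"temperature": -60, "stones_collected": 0}
        self.desires = ["collect_stone"]   # what the agent would like to do
        self.intentions = []               # desires it has committed to

    def perceive(self, percept):
        """Revise beliefs from new sensor input."""
        self.beliefs.update(percept)

    def deliberate(self):
        """Commit to a desire while the overall goal (5 stones) is unmet."""
        if self.beliefs["stones_collected"] < 5 and "collect_stone" in self.desires:
            self.intentions.append("collect_stone")

    def act(self):
        """Execute one committed intention and update beliefs accordingly."""
        if "collect_stone" in self.intentions:
            self.intentions.remove("collect_stone")
            self.beliefs["stones_collected"] += 1

rover = MarsRover()
while rover.beliefs["stones_collected"] < 5:
    rover.perceive({"temperature": -60})
    rover.deliberate()
    rover.act()
print(rover.beliefs["stones_collected"])  # 5
```

The perceive-deliberate-act loop is the usual way the three components interact: beliefs are revised first, then desires are filtered into intentions, and only intentions drive action.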

Biased Reasoning

Biased reasoning resembles the human mind more closely than the previously mentioned models. It is able to prioritize an agent's experience over strictly rational reasoning. [4]
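One simple way to picture this (an illustrative sketch, not the memory model from [4]): the agent blends a rational utility estimate with a bias term drawn from remembered outcomes, so a strongly remembered experience can override the purely rational choice. The function name, the weight, and the example data are assumptions for illustration.

```python
def biased_choice(options, utility, experience, bias_weight=0.6):
    """Score each option by (1 - w) * rational utility + w * remembered payoff."""
    def score(opt):
        return (1 - bias_weight) * utility[opt] + bias_weight * experience.get(opt, 0.0)
    return max(options, key=score)

utility = {"route_a": 0.9, "route_b": 0.7}       # rational estimates
experience = {"route_a": 0.2, "route_b": 0.95}   # remembered outcomes
print(biased_choice(["route_a", "route_b"], utility, experience))  # route_b
```

With `bias_weight=0.6` the remembered payoff dominates, so the agent picks route_b even though route_a has the higher rational utility; setting the weight to 0 recovers strictly rational choice.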

  1. AHA, David W., Leonard A. BRESLOW and Héctor MUÑOZ-AVILA. Conversational Case-Based Reasoning. Applied Intelligence [online]. 2001, vol. 14, no. 1, pp. 9-32. ISSN 0924-669X.
  2. LI, Jing and Zhaohan SHENG. A multi-agent model for the reasoning of uncertainty information in supply chains. International Journal of Production Research [online]. 2011, 49(19), 5737-5753 [cit. 2018-01-24]. DOI: 10.1080/00207543.2010.524257. ISSN 0020-7543.
  3. DUFF, Simon. Maintenance Goals in Intelligent Systems. Computational Intelligence [online]. [cit. 2018-01-24]. ISSN 0824-7935.
  4. HEUVELINK, Annerieke, Michel C.A. KLEIN and Jan TREUR. An Agent Memory Model Enabling Rational and Biased Reasoning. 2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology [online]. IEEE, 2008, pp. 193-199 [cit. 2018-01-24]. DOI: 10.1109/WIIAT.2008.274. ISBN 978-0-7695-3496-1.