
The Paper topic: Agent Environments

Class: 4IT496 Simulation of Systems (WS 2017/2018)

Author: Bc. David Feldstein

Introduction

In multiagent systems, agents are programmed operating units that exist in certain types of environments. The basic division of agents is into reactive and deliberative agents.

Reactive agents exist in the environment and are influenced by its properties and changes, but they do not create a symbolic representation of the environment. Simply put, they do not try to simulate intelligent decisions or brain work. They do not read the environment and make no logical assumptions; they just react to it. In this case the intelligent behaviour emerges from the system as a whole. There are approaches to simulating intelligent behaviour without understanding the environment, for example the subsumption architecture, where the reactive agent is divided into layers that compete with each other to control the agent.
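One simple way to let such layers compete is sketched below: the layers are ordered by priority and the first layer whose condition matches the percept takes control. This is only an illustration; the robot scenario and all names are assumptions, not taken from the cited sources.

 # Rough sketch of a subsumption-style reactive agent: behaviour layers are checked
 # in priority order and the first matching one controls the agent. No symbolic
 # model of the environment is kept. The robot scenario and all names are assumptions.
 layers = [
     (lambda p: p["obstacle_ahead"], "turn_left"),  # highest priority: avoid obstacles
     (lambda p: p["battery_low"], "seek_charger"),
     (lambda p: True, "wander"),                    # default behaviour
 ]
 
 def react(percept):
     # map the raw percept directly to an action, without reasoning about the world
     for condition, act in layers:
         if condition(percept):
             return act
 
 print(react({"obstacle_ahead": False, "battery_low": False}))  # -> wander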

On the other hand, deliberative agents do try to simulate intelligence as we perceive it in our brains. They gather information from the environment and create a symbolic representation of it, and based on their experience they try to make adequate, intelligent decisions.

This textbook chapter shows the differences between types of agent environments. It describes the possible ways of perceiving an environment and how the environment affects the individual agents interacting with it and with each other.

Interpretation of the environment

Before we describe the environment types, we have to know how an agent gathers and interprets information from the environment. Information is processed through an input function, which can range from a very simple to a very complex task. For example, to get information about the temperature in the environment you only need a cable and a thermodiode. Face recognition, on the other hand, can be a very difficult task, and you cannot always rely on it completely. Interpretation of the environment is a large part of the transduction problem (machine learning).

After getting information from the environment, the agent processes it and saves a certain state of the environment. A simplified description of how the agent works is below:[1]

A deliberative agent in an environment E is defined by {P, A, D, perception, state_change, action}, where

1) E is the set of environment states

2) P is the set of the agent's perceptions

3) D is the set of the agent's internal states

4) A is the set of possible actions

5) perception: E → P (the environment state determines the possible perceptions)

6) state_change: P × D → D (a perception combined with the current internal state produces a new internal state)

7) action: D → A (the internal state determines the action)
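A minimal sketch of this cycle in Python is given below, purely as an illustration: the thermostat scenario and all function and variable names are assumptions, not part of the cited definition.

 # One pass through the deliberative cycle {P, A, D, perception, state_change, action}.
 # The thermostat scenario and every name here are illustrative assumptions.
 def perception(env_state):
     # perception: E -> P, reduce the full environment state to what the agent senses
     return {"temperature": env_state["temperature"]}
 
 def state_change(percept, internal_state):
     # state_change: P x D -> D, fold the new percept into the internal state
     readings = internal_state["readings"] + [percept["temperature"]]
     return {"readings": readings[-5:]}  # keep only the most recent readings
 
 def action(internal_state):
     # action: D -> A, the internal state alone determines the action
     return "heat_on" if internal_state["readings"][-1] < 20.0 else "heat_off"
 
 # one step of the agent
 env_state = {"temperature": 18.5}
 internal_state = {"readings": []}
 percept = perception(env_state)
 internal_state = state_change(percept, internal_state)
 print(action(internal_state))  # -> heat_on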

Environments

There are several different points of view on environments which tell us how an environment can be perceived and help us adapt agents to operate in it. The basic classifications are below:[2]

Accessible vs. Inaccessible

In a fully accessible environment, the agent is certain that it can easily get the full information it needs at any time. This situation typically occurs in software environments. For example, a robot indexing webpages can certainly download all the information that a common visitor can see. It is important that the environment is accessible to the agent at the moment it needs it; theoretical accessibility is not enough.

The question: Can I access complete information about the environment?

Deterministic vs. Non-deterministic

A deterministic environment tells us that we can rely on a certain outcome if we perform an action. For example, on a calculator I can multiply numbers and be certain I get the right result. However, when, let's say, an automatic reaper cuts grass, can I be sure it is always grass that it cuts? Most agents interacting with the real world operate in non-deterministic environments.

The question: Does an action in the environment have a specific effect? Are we certain about the state of the environment afterwards?

Static vs. Dynamic

In static environments the agent is the only entity that can actually change the environment. Again, in the real world this does not happen very often; usually the possibilities for environment changes are endless. Examples of static environments are games and puzzles such as the Tower of Hanoi or chess. An example of a dynamic environment is traffic on the road.

The question: Are there other entities that can change the environment, or is the agent the only one? Does the environment change while the agent is acting?

Discrete vs. Continuous

In a discrete environment you can always choose from a certain number of actions. In a continuous one there are endless possibilities of actions. Discrete environments are usually the ones created by people by stating certain finite rules, such as games and puzzles where the agent can make moves only in a certain way. However, rules can be defined while the possible actions remain infinite, as in a legal system or a programming language.

The question: Is the number of possible actions in the environment finite or infinite?

Episodic vs. Non-episodic

Environments in the real world can often be very complex. This complexity problem can be addressed by creating a sort of sub-environments, dividing the environment into segments (episodes) that are independent of each other. The state in one episode does not influence the state in another one. A typical example is a computer simulation vs. the real world. Whether an environment is episodic or non-episodic often depends on how it is perceived by an external observer.

Dimensional vs. Dimensionless

In a dimensional environment, as its name suggests, the decision process of the agent depends on dimensions (distances, lengths, object sizes). A typical example is the real world vs. the internet.

The question: Does the agent take into account spatial characteristics of the environment?
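To make the classification easier to work with in a simulation, one can record these properties explicitly. The sketch below is only an illustration; the example values for chess and road traffic are assumptions derived from the examples above, not from the cited papers.

 # Illustrative record of an environment's classification along the dimensions above.
 from dataclasses import dataclass
 
 @dataclass
 class EnvironmentProfile:
     accessible: bool     # can the agent obtain complete information at any moment?
     deterministic: bool  # does an action have a guaranteed effect?
     static: bool         # is the agent the only entity changing the environment?
     discrete: bool       # is the number of possible actions finite?
     episodic: bool       # are states in one episode independent of other episodes?
     dimensional: bool    # do distances and object sizes matter to decisions?
 
 chess = EnvironmentProfile(accessible=True, deterministic=True, static=True,
                            discrete=True, episodic=False, dimensional=False)
 road_traffic = EnvironmentProfile(accessible=False, deterministic=False, static=False,
                                    discrete=False, episodic=False, dimensional=True)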

Environmental problems

Environment recognition problem

Sometimes it can be tricky to determine whether the environment is static or dynamic, accessible or inaccessible, etc. Often a specific property can be measured, and we can be sure about it, only to a certain extent.

Environment complexity problem

The more inaccessible, non-deterministic, dynamic and continuous the environment, the more complex and less recognizable it is. This goes hand in hand with the fact that the more complex the environment, the more difficult it is to implement an agent that should operate in it.

Limited time problem

To have the desired effect we usually need to deliver results in a reasonable time. What purpose would it serve if we let, for instance, a chess machine find the best move but it took several years? There are short-term and long-term problems, but there is always a time limit beyond which the solution does not really matter.
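One common way to respect such a limit, sketched below as an assumption rather than a method from the cited sources, is an anytime approach: keep the best solution found so far and stop improving it when the deadline is reached.

 # Illustrative anytime-style loop: keep deepening the search while time remains,
 # so the agent always has some answer when the limit is hit.
 import time
 
 def search_at_depth(depth):
     # placeholder for a real evaluation, e.g. a chess move search at a given depth
     time.sleep(0.01)
     return f"best move found at depth {depth}"
 
 def best_within(time_limit_s):
     deadline = time.monotonic() + time_limit_s
     best, depth = None, 1
     while time.monotonic() < deadline:
         best = search_at_depth(depth)  # improve the answer while time remains
         depth += 1
     return best
 
 print(best_within(0.1))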

References

  1. Communication and Cooperation in multiagent systems [online]. Available at: http://sorry.vse.cz/~berka/docs/4iz430/P09-Kooperace.pdf
  2. Multiagent systems [online]. Available at: http://www.cs.cmu.edu/~softagents/papers/multiagentsystems.PDF