Revision as of 13:59, 22 January 2018

The Paper topic: Agent Environments

Class: 4IT496 System Simulation (WS 2017/2018)

Author: Bc. David Feldstein


In multiagent systems, agents are programmed operating units situated in certain types of environments. The basic division of agents is into reactive and deliberative agents. Reactive agents exist in the environment and are influenced by its properties and changes, but they do not create a symbolic representation of it. Simply put, they do not try to simulate intelligent decision-making or the workings of a brain: they do not read the environment and make no logical assumptions, they just react to it. In this case intelligent behaviour emerges from interactions within the system, which is examined more closely below. Deliberative agents, on the other hand, do try to simulate intelligence as we perceive it in our own minds. They gather information from the environment, build a symbolic representation of it, and, based on their experience, try to make adequate, intelligent decisions.

This textbook chapter describes the differences between types of agent environments. It presents the possible ways an environment can be perceived and how the environment affects each individual agent interacting with it and with other agents.

Interpretation of the environment

Before describing environment types, we need to know how an agent gathers and interprets information from the environment. Information is processed through an input function, which can range from a very simple to a very complex task. For example, to obtain the temperature of the environment you need only a cable and a thermodiode. Face recognition, by contrast, can be a very difficult task, and you cannot always rely on it completely. Interpretation of the environment is a large part of the transduction problem in machine learning.

After receiving information from the environment, the agent processes it and saves a certain state of the environment. A simplified description of how such an agent works is below:[1]

A deliberative agent in an environment (E) is defined by {P, A, D, perception, state_change, action}, where

1) E is the set of environment states

2) P is the set of the agent's perceptions

3) D is the set of the agent's internal (private) states

4) A is the set of possible actions

5) perception: E → P

6) state_change: P × D → D

7) action: D → A
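The tuple above can be sketched as a simple perceive–update–act loop. The concrete domain here (temperature readings and a running average) is an illustrative assumption, not part of the formal model:

```python
# Minimal sketch of the deliberative agent cycle defined above:
# perception: E -> P, state_change: P x D -> D, action: D -> A.
# The temperature-control domain is a made-up illustration.

def perception(env_state):
    # Map an environment state in E to a percept in P (read the temperature).
    return env_state["temperature"]

def state_change(percept, internal_state):
    # Fold the new percept into the agent's internal state D.
    readings = internal_state["readings"] + [percept]
    return {"readings": readings, "avg": sum(readings) / len(readings)}

def action(internal_state):
    # Choose an action from A based only on the internal state.
    return "cool" if internal_state["avg"] > 25 else "idle"

# One pass of the perceive -> update -> act cycle:
env = {"temperature": 30}
state = {"readings": [], "avg": 0.0}
state = state_change(perception(env), state)
print(action(state))  # -> cool
```

Note that the action depends only on D, not on E directly: the agent acts on its internal representation of the environment, which is exactly what distinguishes it from a reactive agent.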

Environments

There are several different points of view on environments that tell us how an environment can be perceived and help us adapt agents to operate in it. The basic distinctions are below:[2]

Accessible vs. Inaccessible

In a fully accessible environment the agent is certain that it can easily obtain all the information it needs at any time. This situation typically occurs in software environments. For example, a robot indexing web pages can certainly download all the information that an ordinary visitor can see. It is important that the environment is accessible to the agent at the given moment; theoretical accessibility is not enough.

The question: Can the agent access complete information about the environment?

Deterministic vs. Non-deterministic

A deterministic environment means that we can rely on a certain outcome when we perform an action. For example, on a calculator I can multiply numbers and I am certain to get the right result. But when, say, an automatic reaper cuts grass, can I be sure that it is always grass being cut? Most agents interacting with the real world operate in non-deterministic environments.

The question: Does an action in the environment have a specific effect? Are we certain about the state of the environment afterwards?
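The contrast can be made concrete with two hypothetical step functions; the calculator and reaper examples below mirror the ones in the text, with made-up probabilities for the non-deterministic case:

```python
import random

# Deterministic: the same action in the same state always yields one outcome.
def step_calculator(state, action):
    # action is a tuple like ("multiply", 6, 7); multiplying always gives 42.
    op, a, b = action
    return a * b if op == "multiply" else state

# Non-deterministic: the same action can have different outcomes.
# The 95/5 split is an illustrative assumption, not measured data.
def step_reaper(state, action):
    return random.choices(["grass cut", "obstacle hit"],
                          weights=[0.95, 0.05])[0]

print(step_calculator(None, ("multiply", 6, 7)))  # -> 42
print(step_reaper(None, "cut"))  # -> "grass cut" or "obstacle hit"
```

An agent in the first environment can plan by simple lookahead; an agent in the second must reason about probabilities or re-sense the environment after every action.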

Static vs. Dynamic

In a static environment the agent is the only entity that can actually change the environment. Again, in the real world this does not happen very often; the possible changes to an environment are usually endless. Examples of static environments are games and puzzles such as the Tower of Hanoi or chess. An example of a dynamic environment is traffic on a road.

The question: Are there other entities that can change the environment, or is the agent the only one? Does the environment change during the agent's action?

Discrete vs. Continuous

In a discrete environment you always choose from a certain finite number of actions. In a continuous one there are endless possibilities. Discrete environments are usually the ones created by people by stating certain finite rules, such as games and puzzles where the agent can make moves only in prescribed ways. However, the rules may be defined while the possible actions remain infinite, as in a legal system or a programming language.

The question: Is the number of possible actions in the environment finite or infinite?

Episodic vs. Non-episodic

Environments in the real world can often be very complex. This complexity problem can be addressed by dividing the environment into segments (episodes) that are independent of each other, creating a kind of sub-environment. The state in one episode does not influence the state in another. A typical example is a computer simulation vs. the real world. Whether an environment is episodic or non-episodic often depends on how it is perceived by an external observer.

Dimensional vs. Dimensionless

In a dimensional environment, as its name suggests, the decision process of an agent depends on dimensions (distances, lengths, object sizes). A typical example is the real world vs. the Internet.

The question: Does the agent take into account spatial characteristics of the environment?

Environment recognition problem

Often, we cannot simply label an environment as either static or dynamic, either accessible or inaccessible, etc. An environment may possess a particular trait only to a certain degree.

Environment complexity problem

The more inaccessible, non-deterministic, dynamic and continuous the environment, the more complex and less recognizable it is.

The more complex the environment, the more difficult it is to design an agent that should work there.

Time dimension of the environment

The agent is often constrained by time. It cannot explore and analyze the situation for years; it has to deliver results in a reasonable time.

This leads to a short-term vs. long-term trade-off, especially in dynamic environments.

Subsumption architecture

A reactive agent architecture developed by Brooks. It rests on two key ideas:

1) Situatedness and embodiment. The agents are physically present in the environment, draw all their information from interaction with it, and directly influence the environment's dynamics.

2) Intelligence and emergence. Intelligence does not exist per se; it emerges from the agents' interactions with the environment and is not present in any single component of the system.

Agents sense the environment, and their percepts directly trigger the appropriate actions. The rules are typically as simple as: situation → action.

Behaviors are arranged into layers: the lower the layer, the more specific the behavior and the higher its priority. The actions are fired concurrently, and each layer has its own sensors and effectors.
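The layered situation → action scheme can be sketched as follows. This is a simplified, sequential sketch (a real subsumption controller runs its layers concurrently), and the concrete behaviors (obstacle avoidance, wandering) are illustrative assumptions, not Brooks' original layers:

```python
# Sketch of a subsumption-style control loop. Layers are ordered by
# priority: the first layer implements the most specific, highest-priority
# behavior. Each layer maps a percept directly to an action
# (situation -> action); the first layer that fires suppresses the rest.

def avoid_obstacle(percept):
    # High-priority layer: react to an imminent obstacle, else stay silent.
    return "turn" if percept.get("obstacle") else None

def wander(percept):
    # Low-priority default layer: always proposes an action.
    return "move forward"

LAYERS = [avoid_obstacle, wander]  # ordered from highest to lowest priority

def control(percept):
    # A higher-priority layer's output subsumes everything below it.
    for layer in LAYERS:
        act = layer(percept)
        if act is not None:
            return act

print(control({"obstacle": True}))  # -> turn
print(control({}))                  # -> move forward
```

Note that no layer consults a world model: each maps the raw percept straight to an action, which is what makes the architecture reactive rather than deliberative.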

References

  1. Communication and Cooperation in multiagent systems [online]. Available at: http://sorry.vse.cz/~berka/docs/4iz430/P09-Kooperace.pdf
  2. Multiagent systems [online]. Available at: http://www.cs.cmu.edu/~softagents/papers/multiagentsystems.PDF