Many organisms in the world function without complex nervous systems, yet they perform feats that seem impossible without deliberate thought. Take Physarum polycephalum (literally "the many-headed slime"), a slime mold that inhabits shady, cool, moist areas such as decaying leaves and logs. It forms when many single-celled organisms merge together into one cooperative body. It has no brain, not even a single neuron, and yet it shows signs of intelligence. It can effectively solve the travelling salesman problem (TSP for short): it connects bits of food with the shortest paths of itself, establishing a network with efficient nutrient transport. It avoids obstacles and harmful substances, it remembers, and it even seems able to share information between its cells. While many mysteries about Physarum remain unsolved, we already know how it manages to solve the travelling salesman problem. Each of Physarum's cells behaves in such a way that, together, they progressively optimize the organism's paths. One Physarum cell alone can determine nothing, yet many cells together can perform wonders. This phenomenon is called emergence.
Emergence is the condition of an entity having properties that its parts do not have; these properties arise from the interaction between those parts. There are two kinds of emergence, strong and weak. Weak emergence describes properties that can be derived from the properties of the individual parts; it can be easily reproduced in a computer simulation. Strong emergence, however, describes entirely new properties that cannot be traced back to the properties of the original parts. An example of a strongly emergent property is human consciousness. There is no apparent way our neurons could produce a sense of consciousness, yet we obviously have one. It is an emergent property of the complex system that is our brain.
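Weak emergence is easy to demonstrate in code. As an illustration (a Python sketch rather than NetLogo; the implementation details here are my own), Conway's Game of Life produces a "glider", a pattern that travels diagonally across the grid even though each cell only follows a fixed local rule about its neighbors:

```python
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded grid of live-cell coordinates."""
    # count the live neighbors of every cell adjacent to a live cell
    counts = Counter((r + dr, c + dc)
                     for r, c in live
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # a cell lives next tick with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# after 4 generations the whole pattern has moved one cell down and one right
```

No individual cell "knows" about movement; travel is a property only of the pattern as a whole, which is exactly what weak emergence means.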
An agent is a simulated or artificial unit that imitates the behavior of a real or theoretical lifeform. It can also be defined as "a singular entity in an environment, where it can perform or interact". Agents perform tasks based on the conditions they were programmed with. They don't need to be aware of their surroundings or even of themselves. They function as automatons: they respond to inputs, which can be sensory information or purely internal information such as time, with predetermined reactions.
Agents have several characteristics:
- Autonomy – agents are proactive: they have goal-oriented modules, they can solve problems without outside help, and they can provide or ask for information
- Reactivity – agents react to events and are capable of perceiving time
- Intentionality – agents pursue long-term goals using planning, forecasting and decision-making
- Social behavior – agents can communicate and cooperate with others; they keep information about others and may form groups
There are several kinds of agents, but for the purposes of this text, two will suffice:
Reactive agents are the simplest kind. A reactive agent can be described as an entity that performs its actions exclusively based on input from the environment; its rationality is a direct result of interaction with the outside world. It cannot plan ahead or make decisions by preference. These agents are usually composed of modules, where each module is activated by a certain condition. Their main feature is reactivity, meaning they are able to react to the environment in order to reach their set goal.

Deliberative agents

Sometimes called intentional agents, these can reach a quality of reasoning close to that of people. Deliberative agents hold a symbolic representation of the environment, meaning a database of facts about the world. With such information, they are capable of evaluating, calculating and planning ahead. They can make decisions about the priority of their tasks; however, they are still predetermined. They have no freedom in decision-making, and they cannot reflect on their past and learn from it. They are, however, capable of having different mental states and attitudes.
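The reactive kind is simple enough to sketch in a few lines. The following Python fragment (all names here are illustrative, not from any library) models a reactive vacuum agent as an ordered list of condition-action modules; the first module whose condition matches the current percept fires:

```python
# a reactive agent as an ordered list of condition -> action modules;
# the first module whose condition matches the percept decides the action
RULES = [
    (lambda p: p["bumped"],       "turn"),     # obstacle hit: turn away
    (lambda p: p["dirty"],        "suck"),     # dirt on the current tile: clean it
    (lambda p: p["battery"] < 10, "dock"),     # low battery: return to base
    (lambda p: True,              "forward"),  # default: keep moving
]

def reactive_agent(percept):
    """Pick an action purely from the current percept; no memory, no planning."""
    for condition, action in RULES:
        if condition(percept):
            return action

print(reactive_agent({"bumped": False, "dirty": True, "battery": 80}))  # suck
```

Note that the agent keeps no state between calls: everything it "knows" is in the current percept, which is precisely what makes it reactive rather than deliberative.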
Creating a multiagent system or simulation is a way of testing a hypothesis, or of developing autonomous robots to perform tasks. The system consists of many connected entities. This connection may originate from actual communication between agents, or between agents and the environment; it may also stem simply from the fact that the agents share the environment and interact with it. There may be signs of order in the system, just as there are in the world and in nature; however, these are merely consequences of the entities' programming and interaction. The system is not globally controlled, and the agents make decisions for themselves. Intelligent behavior is caused either by a sophisticated algorithm or by emergent properties. Multiagent systems share a few characteristics:
- Each agent has incomplete data about the environment or lacks the ability to solve the task at hand on its own
- There is no centralized control of the system
- Data is decentralized
- Calculations are done asynchronously
- Individual agents are predictable; multiagent systems can lead to unpredictable results due to emergence
Why use agent systems

Agent systems are useful in situations where global control becomes too complex or costly. Consider vacuuming your apartment. Controlling the agent's actions directly means vacuuming yourself; even if you controlled the vacuum remotely, nothing would change. It would still cost you time and wouldn't allow vacuuming several places at once. Recently, however, autonomous vacuums (such as the Roomba) have been on the rise. These are basically reactive agents. You can deploy as many of them as you like, and unless they get stuck or deplete their batteries, they work autonomously. Virtual agent systems, such as those in a simulation, are useful for modeling an imaginary environment in which the result is unknown. Such a simulation allows repetition, changes to environment variables and evolution of the agents.
Real life applications
Agent systems are already widespread in today's civilization. They are used whenever centralized systems with predetermined behavior aren't sufficient, for two (not mutually exclusive) reasons. First, it might be impossible, during analysis of the target environment, to foresee all of its possible behavior. Second, there are emergency situations: centralized systems are prone to collapse when faced with unknown conditions, whereas in an agent system it matters far less if a few agents stop working, as the rest can still react. Thanks to this resilience, agent systems can be used to control airspace and land transport alike, including trains and ships. Today's development focus is autonomous vehicles, which are of course agents. An example of publicly functioning virtual agents are Google's bots (also sometimes called spiders or crawlers). These are purely software agents that crawl the whole web, cataloguing new pages and updating old ones. The purpose of their work is to index the internet and so enable the fast operation of our favorite search engine.
Multiagent simulation in NetLogo
NetLogo is an agent-based programming language and integrated modeling environment. It allows simulations to be created in a dialect of the Lisp-based language Logo. Its purpose is mainly educational, although it has been widely used in academic papers as well. As an example, let us find a way to simulate the slime mold Physarum polycephalum in NetLogo. Physarum spreads through its environment looking for food. As it finds several sources of food, it connects them with its cells in an efficient manner: shortest paths between nodes, reuse of existing paths instead of creating new ones, and optimization of existing paths.
This video shows the movement, food search and path optimization of Physarum.
Our starting point is an initial node in the middle of the simulation area, represented by a food agent. Our algae agents are created on this patch. First we need to simulate the spread of the algae. The cells have no vision; they can probably sense light levels and chemical properties of the environment, but we cannot use that here. For the purposes of our simulation, the algae spread randomly.
    to make-step
      let randy random 2
      ifelse randy = 1 [ rt 40 ] [ rt -40 ]
      fd 1
    end
Using random we can make the algae turn in a random direction by a set number of degrees. The exact number doesn't really matter; we just need randomness in the movement. Through experimentation, however, we find that 40° stimulates an outward motion that mimics the radial spread of the algae and also gives the resulting algae area a fractal pattern. Now we need to draw the actual algae. Physarum is composed of single cells merged together; however, having hundreds of agents in the simulation could be expensive, so it is better to draw the algae as patches of a certain color. You might object: but the algae also optimizes its paths, so it removes itself from areas! To allow optimization, we give the patches a variable popularity, which describes how often a patch gets visited. Patches that are seldom visited get erased.
    ; on each step the algae turns randomly, moves, and sets its patch yellow
    to make-step
      let popularity-per-step 50
      let randy random 2
      ifelse randy = 1 [ rt 40 ] [ rt -40 ]
      fd 1
      set pcolor yellow
      ask patch-here [ set popularity popularity + popularity-per-step ]
    end
    ; this procedure is called every tick to lower the popularity of all yellow patches
    to decay-popularity
      ask patches with [ not any? algae-here and pcolor = yellow ] [
        set popularity popularity - (popularity-decay-rate / 10)
        if popularity < 1 [ set pcolor brown + 4 ]
      ]
    end
This gives us a nicely fractal, wildly spreading algae. Now we need something to optimize the pathways; so far the algae spreads and decays randomly. In reality, we believe that the flow of nutrients through the algae is what tells it where to stay and where to withdraw. Therefore, we need a way to let the nutrients flow. Let's create a new kind of agent and call it a collector. Collectors are going to travel between food agents, but only on the yellow patches. Setting the direction to the nearest food agent is easy, but how do we make a collector choose paths consisting only of yellow patches? Let's create a reporter to tell our agent where to go.
    to-report best-way-to [ destination ]
      ; of all the visible route patches, select the ones
      ; that would take me closer to my destination
      let collector-vision-dist 30
      let visible-patches patches in-radius collector-vision-dist
      let visible-routes visible-patches with [ pcolor = 45 ] ; 45 = yellow
      let routes-that-take-me-closer visible-routes with
        [ distance destination < [ distance destination - 1 ] of myself ]
      ifelse any? routes-that-take-me-closer [
        ; from those route patches, choose the one that is the closest to me
        report min-one-of routes-that-take-me-closer [ distance myself ]
      ] [
        ; if there are no nearby routes to my destination, head straight for it
        report destination
      ]
    end

Now we can move the collectors.

    to move-collectors
      let popularity-per-step 1000
      ask collectors [
        ; standing on food is the best time to set the next goal
        if any? foods-on patch-here [
          set goal one-of other foods
        ]
        face best-way-to goal
        jump 1
        ; each step we strongly increase the popularity of the patch
        ask patch-here [ set popularity popularity + popularity-per-step ]
      ]
    end
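To see the greedy rule in best-way-to outside of NetLogo syntax, here is the same logic as a small Python sketch (coordinates, distances and names are illustrative): among the route patches within vision that get the agent at least one step closer to the destination, pick the one nearest the agent, and fall back to the destination itself when no route qualifies.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def best_way_to(me, destination, route_patches, vision=30):
    """Greedy next-waypoint choice, mirroring the NetLogo reporter."""
    visible = [p for p in route_patches if dist(me, p) <= vision]
    # keep only route patches at least one step closer to the destination
    closer = [p for p in visible
              if dist(p, destination) < dist(me, destination) - 1]
    if closer:
        return min(closer, key=lambda p: dist(me, p))
    return destination  # no usable route patch: head straight for the goal

routes = [(1, 0), (3, 1), (6, 2), (9, 9)]
print(best_way_to((0, 0), (10, 3), routes))  # -> (3, 1)
```

Because the rule is purely local, the collector never plans a whole route; the global path only emerges from repeating this choice tick after tick.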
After enough ticks, the algae optimizes nicely and connects all of the food nodes. But wait, that's not quite right, is it? Physarum is able to solve the shortest-path problem, which means not just connecting the nodes, but only making connections when they are actually shorter than the existing ones. To remedy this problem, we need some serious adjustments.
So far the collector just wanders randomly between foods. We need to make it visit all the foods consecutively, taking the shortest path.
We give collectors their own list foodList holding the unvisited foods, their own list goalList holding the foods already visited, and finally listx as a temporary variable, since we need a list that the foods can access.
    to move-collectors
      ask collectors [
        let listx []
        if length goalList = count foods [ set goalList [] ]
        ifelse any? foods-on patch-here [
          set listx foodList
          if length listx = 0 [
            ask foods [ set listx fput self listx ]
          ]
          ; we put visited foods on goalList
          if member? goal listx [
            set goalList fput goal goalList
            ; sorting foods by distance refills the list with all the foods
            set listx sort-on [ distance myself ] foods
          ]
          ; so we need to remove the already visited foods from the sorted list
          let list3 []
          foreach listx [ [x] ->
            if not member? x goalList [ set list3 lput x list3 ]
          ]
          set listx list3
          if length listx = 0 [
            let x min-one-of foods [ distance myself ]
            set listx fput x listx
          ]
          ; now we simply take the first goal on the sorted list
          set goal first listx
          walk-towards-goal
          set foodList listx
        ] [
          walk-towards-goal
        ]
      ]
    end
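All this list juggling implements, in effect, the classic nearest-neighbour heuristic for the TSP: always move to the closest food you have not yet visited. Stripped of the NetLogo bookkeeping, the idea fits in a few lines of Python (a sketch with illustrative names and coordinates):

```python
import math

def nearest_neighbour_tour(start, foods):
    """Visit every food exactly once, always moving to the closest unvisited one."""
    unvisited = list(foods)
    tour, pos = [], start
    while unvisited:
        # greedy step: the nearest food we have not yet visited
        nxt = min(unvisited, key=lambda f: math.dist(pos, f))
        tour.append(nxt)
        unvisited.remove(nxt)
        pos = nxt
    return tour

foods = [(5, 5), (1, 1), (8, 2), (2, 6)]
print(nearest_neighbour_tour((0, 0), foods))  # -> [(1, 1), (2, 6), (5, 5), (8, 2)]
```

The heuristic does not guarantee the optimal tour, and neither does the real Physarum; both simply produce good tours from purely local decisions.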
- Shoham, Yoav; Leyton-Brown, Kevin. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, 2009. 473 p. ISBN 0521899435. [cit. 2019-01-08]