Sensors 2015, 15, 4513-4549; doi:10.3390/s150204513
ISSN 1424-8220
Design and Simulation of Material-Integrated Distributed
Sensor Processing with a Code-Based Agent Platform and
Mobile Multi-Agent Systems
Stefan Bosse
University of Bremen, Dept. of Mathematics & Computer Science, Robert Hooke Str. 5,
28359 Bremen, Germany; E-Mail:; Tel.: +49-421-17845-4103
Academic Editor: Stefano Mariani
Received: 15 October 2014 / Accepted: 9 January 2015 / Published: 16 February 2015
Abstract: Multi-agent systems (MAS) can be used for decentralized and self-organizing
data processing in a distributed system, like a resource-constrained sensor network,
enabling distributed information extraction, for example, based on pattern recognition
and self-organization, by decomposing complex tasks into simpler cooperative agents.
Reliable MAS-based data processing approaches can aid the material-integration of
structural-monitoring applications, with agent processing platforms scaled to the microchip
level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is
implemented with program code storing the control and the data state of an agent, which
is novel. The program code can be modified by the agent itself using code morphing
techniques and is capable of migrating in the network between nodes. The program code is
a self-contained unit (a container) and embeds the agent data, the initialization instructions
and the ATG behavior implementation. The microchip agent processing platform used for
the execution of the agent code is a standalone multi-core stack machine with a zero-operand
instruction format, leading to a small-sized agent program code, low system complexity and
high system performance. The agent processing is token-queue-based, similar to Petri-nets.
The agent platform can be implemented in software, too, offering compatibility at the
operational and code level, supporting agent processing in strongly heterogeneous networks.
In this work, the agent platform embedded in a large-scale distributed sensor network is
simulated at the architectural level by using agent-based simulation techniques.
Keywords: sensor networks; multi-agent system; code morphing; stack machines;
distributed computing; agent-based platform and network simulation
1. Introduction and State-of-the-Art
Structural monitoring of mechanical structures allows deriving from sensor data not just loads, but also
their effects on the structure, its safety and its functioning. A load monitoring system (LM)
can be considered as a subclass of a structural health monitoring (SHM) system, which provides
spatially-resolved information about loads (forces, moments, etc.) applied to a technical structure.
Multi-agent systems (MAS) can be used for a decentralized and self-organizing approach to data
processing in a distributed system, like a sensor network (discussed in [1]), enabling information
extraction, for example, based on pattern recognition [2], decomposing complex tasks into simpler
cooperative agents. MAS-based data processing approaches can aid the material-integration of structural
health monitoring applications, with agent processing platforms scaled to the microchip level, which
offer material-integrated real-time sensor processing. Agent mobility, with agents capable of crossing
different execution platforms in mesh-like networks, and agent interaction by using tuple-space databases
and global signal propagation aid with solving data distribution and synchronization issues in the design
of distributed sensor networks, as already shown in [3,4].
In [5], the agent-based architecture considers sensors as devices used by an upper layer of controller
agents. Agents are organized according to roles related to the different aspects to integrate, mainly sensor
management, communication and data processing. This organization largely isolates and decouples the
data management from changing networks, while encouraging the reuse of solutions.
Currently, there are only very few works related to low-resource agent processing platforms,
especially related to sensor networks. Examples are presented in [6] and [7], but the proposed platform
architectures do not match the constraints and requirements arising in multi-scale and multi-domain
sensor networks. For example, in [8], a Java virtual machine (VM) approach is used, which is not
scalable entirely to the hardware level and, therefore, limited to software-based designs.
The importance of the deployment of virtual machines in heterogeneous and multi-purpose sensor
networks was already pointed out in [9]. In this work, a new operational paradigm for the programming
and design of sensor network applications was addressed, showing the suitability of database-like
communication approaches, which is proposed in a different way in this work using synchronized
tuple-spaces for MAS. The system architecture also uses a stack-based bytecode interpreter with integer
arithmetic, but supporting low-level instructions only (Java VM subset), though the VM can directly
access sensors and network messages. There is no hardware implementation of the VM, degrading the
performance significantly.
Usually, sensor networks are part of and connected to a larger heterogeneous computational
network [5] and can be part of the emerging field of ambient intelligence, supporting intelligent behavior
and information retrieval combined for ubiquitous computing (see [10] for details). The deployment of
agents can overcome interface barriers arising between platforms differing considerably in computational
and communication capabilities. That is why agent specification models and languages must be
independent of the underlying run-time platform. The adaptive and learning behavior of MAS, central
to the agent model, can aid in overcoming technical unreliability and limitations [11].
The capability of agents to migrate between different processing nodes (sensor node, computer, server,
mobile device) extends the interaction domain and increases the capability to adapt to environmental
changes, including the failure of network nodes [10]. Migration is closely related to the agent
behavior, programming and architecture model, which immediately shifts the focus to the agent
processing platform.
This work is based on an earlier data processing architecture described in [12] using virtual stack
machines and mobile program code based on the FORTH instruction set, which can migrate between
different VMs and nodes of a distributed (sensor) network. A code morphing mechanism was used to
enable self-modification of the program code at run-time. Code morphing is the capability of programs
to modify their own code or the program code of other programs. This early approach matched the
agent model only partially and had limited practical use due to the very fine-grained code modification
at the instruction word level. Furthermore, the VM architecture supported only coarse-grained parallelism.
The first considerations to improve the early approach and to match it with a more reasonable agent
model were presented in [13], and this is finally investigated, refined and evaluated in more depth in
this work. The FORTH programming language (PL) combines the advantages of being a low-level
machine and a structured programming language with statements, which can be directly executed by a
VM interpreter [14].
In [15], a register-based virtual machine deployed in wireless sensor networks was proposed, arguing
for the lower code size and higher processing speed of a register machine compared with stack machine
code. However, a stack machine has the advantage of a simpler control and data processing unit
interacting mostly with the top elements of the stacks, speeding up the code processing and simplifying
the hardware design significantly [16]. Hence, processing speed, which means computational latency,
does not depend only on the number of instructions to be processed. Furthermore, there are complex
FORTH control flow instructions, like loops, which compensate for a higher number of instructions
required for data processing. One major advantage of the FORTH instruction set is that most instructions
carry no operand, which eases the code morphing performed by the virtual machine at run-time.
Commonly, programs are closely coupled with the interface of the execution platform (system data
structures and functions). Furthermore, binary programs cannot exchange and share code in a simple
way, and the issues with non-matching dynamic libraries are well known. The FORTH PL avoids this
lack of code exchange and sharing by providing a simple dictionary approach. Programs can store new
function words and retrieve functions by using textual string identifiers, enabling different programs to
exchange and share program code, assuming the functions have no side effects (dependencies). FORTH
programs can directly access the dictionary. This dictionary approach can be used for agents to share
behavioral activities and utility functions. Furthermore, it supports self-organization. An overview of
the FORTH programming language can be found in [17]. FORTH-based stack processors are well suited
for massive parallel and distributed computing systems, referring, e.g., to [18].
The new processing platform architecture is optimized for the agent programming model and
language used in this work. The FORTH programming language was extended with agent-specific
actions (migration, forking, communication), supported entirely at the machine level.
Sections 2 and 3 give an overview of the front-end using the Activity-based and Agent-orientated
Programming Language, AAPL, and the agent behavior model, leading to the Agent Forth and Agent
Machine Language (AFL/AML), which is discussed in detail in Section 5. This section helps to
understand the processing and morphing of agents having their origin in high-level behavior models
with a machine, which can fit entirely on 50 mm² of silicon! The pipelined agent virtual machine (PAVM)
processing architecture and its operational semantics is explained in detail in Section 4, followed by
the design flow and transformation rules in Section 6. A simulation environment is introduced in
Section 7, which performs a network and platform simulation using agent-based simulation techniques
at a fine-grained architectural level. The simulation environment can be used to study the deployment
of MAS in complex network environments. Furthermore, it can be connected to a real-world sensor
network, too. Finally, the suitability of the proposed programming and processing model is demonstrated
with an extended case study. Figure 1 summarizes all parts of this work and their relationships.
Figure 1. Design overview and design flow: from the model, to the programming,
to the machine level, with one unique agent model (DATG, dynamic activity-transition
graph; AAPL, Agent-orientated Programming Language; AFL, Agent Forth Programming
Language; AML, Agent Forth Machine Language; MAS, multi-agent system; PAVM,
pipelined agent virtual machine).
What is novel with respect to other approaches?
- Large-domain reactivity in heterogeneous networks is provided by mobile state-based agents
capable of reconfiguring the agent behavior (activity-transition graph modification) for each
particular agent at run-time, including the inheritance of (modified) agent behavior, which
increases the reliability and autonomy of multi-agent systems.
- Agent interaction offered by a tuple-space database and global signal propagation aids with solving
data distribution and synchronization issues in distributed systems design (machine-to-machine
communication), whereby tuple spaces represent the knowledge of single and multiple agents.
- The common agent programming language, AAPL, and processing architecture, PAVM, enable
the synthesis of standalone parallel hardware implementations or, alternatively, standalone
software implementations and behavioral simulation models, compatible at the operational and
processing level, which enables the design and testing of large-scale heterogeneous systems.
- An agent instantiation is represented by, and the behavior is implemented with, a unified, very
compact code frame consisting of machine instructions with embedded (private) agent data and all
control units, like relocation lookup tables and the transition network section. The code frame can
migrate between nodes, preserving the control and data state of an agent.
- AAPL provides powerful statements for computation, agent control, agent interaction and mobility
with static and limited resources.
- An intermediate and machine language, AFL/AML, is based on the stack machine FORTH
programming language, matches AAPL well and offers direct transformation of the AAPL
behavior model and the AAPL statements to the machine VM level.
- A token-based pipelined multi-core stack VM architecture for the agent processing (PAVM), which
is suitable for hardware microchip implementations on the register-transfer level and system-on-a-chip
architectures, offers optimized computational resources and exceptional speed, requiring less than
1 M gates. There are alternative efficient software implementations of the VM, fully compatible at
the code and operational level.
- The processing platform is a standalone unit, which does not require any operating system
(OS) or boot code for initialization, leading to a low start-up latency, which is well
suited for self-powered devices. All agent-specific actions, like migration or communication, are
implemented at the VM machine level.
- There is improved scaling in large heterogeneous network applications, due to the low host platform
and communication dependencies of the VM and the agent FORTH programming model.
2. Agent Behavior Modeling: The Activity-Based Agent Model and Graphs
The implementation of mobile multi-agent systems for resource-constrained embedded systems with
a particular focus on the microchip level is a complex design challenge. High-level agent programming
and behavior modeling languages can aid with solving this design issue. Activity-based agent models
can aid in realizing multi-agent systems on hardware platforms.
The behavior of an activity-based agent is characterized by an agent state, which is changed by
activities. Activities perform perception, plan actions and execute actions modifying the control
and data state of the agent. Activities and transitions between activities are represented by an
activity-transition graph (ATG). The Activity-Based and Agent-orientated Programming Language,
AAPL (detailed description in [16]), was designed to offer modeling of the agent behavior at the
programming level, defining activities with procedural statements and transitions between activities with
conditional expressions (predicates). Though the imperative programming model is quite simple and
closer to a traditional PL, it can be used as a common source and intermediate representation for different
agent processing platform implementations (hardware, software, simulation) by using a high-level
synthesis approach.
2.1. Agent Classes
The agent behavior, perception, reasoning and the action on the environment are encapsulated in
agent classes, with activities representing the control state of the agent reasoning engine and conditional
transitions connecting and enabling activities. Activities provide a procedural agent processing by a
sequential execution of imperative data processing and control statements. Agents can be instantiated
from a specific class at run-time. A multi-agent system composed of different agent classes enables the
factorization of an overall global task into sub-tasks, with the objective of decomposing the resolution of
a large problem into agents that communicate and cooperate with one another.
The activity-graph based agent model is attractive due to the proximity to the finite-state machine
model, which simplifies the hardware implementation.
Figure 2. Agent behavior programming level with activities and transitions (Activity-Based
and Agent-orientated Programming Language (AAPL) (left)); agent class model and
activity-transition graphs (top); agent instantiation, processing and agent interaction on the
network node level (right) [16].
An activity is started by a transition depending on the evaluation of (private) agent data (conditional
transition) related to a part of the agents’ belief in terms of the belief-desire-intention (BDI) architecture
or started by unconditional transitions (providing sequential composition), shown in Figure 2. Each
agent belongs to a specific parameterizable agent class, AC, specifying local agent data (only visible for
the agent itself), types, signals, activities, signal handlers and transitions.
Definition: There is a multi-agent system (MAS) consisting of a set of individual agents {a_1, a_2, ..}.
There is a set of different agent behaviors, called classes, C = {AC_1, AC_2, ..}. An agent belongs to
one class. In a specific situation, an agent Ag_i is bound to and processed on a network node N_j
(e.g., a microchip, a computer or a virtual simulation node) at a unique spatial location (m,n). There is
a set of different nodes, N = {N_1, N_2, ..}, arranged in a mesh-like network with peer-to-peer neighbor
connectivity (e.g., two-dimensional grid). Each node is capable of processing a number of agents n_AC
belonging to one agent behavior class AC_i and supporting at least a subset C' ⊆ C. An agent (or at least
its state) can migrate to a neighbor node, where it continues working. Each agent class is specified by the
tuple AC = ⟨A, T, F, S, H, V⟩. A is the set of activities (graph nodes); T is the set of transitions connecting
activities (relations, graph edges); F is the set of computational functions; S is the set of signals; H is the
set of signal handlers; and V is the set of body variables used by the agent class.
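The class tuple above can be modeled directly as a record type. The following is a minimal, hypothetical Python sketch (the names `AgentClass` and `explorer` are illustrative and not part of the AAPL/PAVM implementation), showing how the sets A, T, F, S, H and V fit together:

```python
from dataclasses import dataclass

# Hypothetical model of the agent class tuple AC = <A, T, F, S, H, V>.
@dataclass
class AgentClass:
    activities: set    # A: graph nodes
    transitions: set   # T: (activity, predicate, activity) graph edges
    functions: set     # F: computational functions
    signals: set       # S: signals the class can raise or receive
    handlers: set      # H: signal handlers
    variables: set     # V: body variables (private agent data)

explorer = AgentClass(
    activities={"init", "percept", "move"},
    transitions={("init", "true", "percept"), ("percept", "load>0", "move")},
    functions={"dist"}, signals={"WAKEUP"}, handlers={"on_wakeup"},
    variables={"x", "y", "load"},
)
print(len(explorer.activities))  # 3
```

Transitions are represented here as (source, predicate, destination) triples, mirroring the conditional transitions of the ATG.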
2.2. The Dynamic ATG and Sub-Classing
Usually, agents are used to decompose complex tasks into simpler ones. Agents can change their
behavior based on learning and environmental changes or by executing a particular sub-task with only
a sub-set of the original agent behavior. The case study in Section 8 shows one example of a
self-organizing multi-agent system with different agent behaviors and goals forked from one original
root agent. An ATG describes the complete agent behavior. Any sub-graph and part of the ATG can be
assigned to a subclass behavior of an agent. Therefore, modifying the set of activities A and transitions T
of the original ATG introduces several sub-behaviors for implementing algorithms to satisfy a diversity
of different goals. The reconfiguration of activities A' = {A_i, A_j, ..} from the original set A
and the modification or reconfiguration of transitions T' = {T_i, T_j, ..} enable dynamic ATGs and agent
sub-classing at run-time, shown in Figure 3.
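The sub-classing operation above can be sketched as graph restriction: keep a subset of the activities and only those transitions whose endpoints survive. This is a hypothetical Python sketch (function and set names are illustrative, not the AAPL morphing instructions):

```python
# Hypothetical sketch of ATG sub-classing: a sub-class behavior (A', T')
# is the sub-graph of the original ATG induced by the kept activities.
def subclass_atg(activities, transitions, keep):
    """Return (A', T') restricted to `keep`; a transition survives only
    if both its source and destination activities survive."""
    a2 = {a for a in activities if a in keep}
    t2 = {(src, cond, dst) for (src, cond, dst) in transitions
          if src in a2 and dst in a2}
    return a2, t2

A = {"init", "explore", "report", "sleep"}
T = {("init", "true", "explore"),
     ("explore", "found", "report"),
     ("report", "true", "sleep")}

# Fork a child class without the "sleep" behavior
A2, T2 = subclass_atg(A, T, keep={"init", "explore", "report"})
print(sorted(A2))   # ['explore', 'init', 'report']
print(len(T2))      # 2  (the transition into "sleep" is dropped)
```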
Figure 3. Dynamic activity-transition graph (ATG) transformation at run-time by modifying
the set of transitions and activities, creating new agent (sub-)classes from an original
root class.
3. Agent Behavior Programming: The High-Level AAPL
The AAPL (details can be found in [3]) offers statements for parameterized agent instantiation, like
the parameterized creation of new agents and the forking of child agents inheriting the control and data
state of the parent agent.
3.1. Agent Interaction and Coordination
Multi-agent and group interaction are offered with synchronized Linda-like tuple database space
access operations and peer-to-peer interaction using signal propagation carrying simple data delivered
to and processed by the signal handlers of agents. The tuple-space model, first introduced by the
coordination language, Linda [19], is basically a shared memory database used for synchronized data
exchange among a collection of individual agents, which was proposed in [20] and [8] as a suitable MAS
interaction and coordination paradigm. Synchronization is offered by matching producer commitments
of tuples and consumer requests for tuples. If a consumer requests a tuple that is not available, it will be
blocked (waiting) until a producer commits a matching tuple, which is explained later.
A tuple database stores a set of n-ary data tuples, t = (v_1, v_2, .., v_n), an n-dimensional value tuple.
The tuple space is organized and partitioned into sets of n-ary tuple sets TS = {TS_1, TS_2, ..}. A
tuple is identified by its dimension and the data type signature. Commonly, the first data element of
a tuple is treated as a key. Agents can add new tuples (the output operation) and read or remove tuples
(the input operations) based on the tuple pattern and pattern matching, p = (v_1, .., ?, .., v_n), an
n-dimensional tuple with actual and formal parameters. Formal parameters are wildcard placeholders,
which are replaced with values from a matching tuple. The input operations can suspend the agent
processing if there is actually no matching tuple available. After a matching tuple is stored, blocked
agents are resumed and can continue processing. The pattern of tuples matches iff the tuples have
the same arity (equal to the number of elements), all actual values match and all formal parameters
can be satisfied (e.g., the data type of actual values and formal parameters must be equal). Therefore,
tuple databases provide inter-agent synchronization, too. This tuple-space approach can be used to build
distributed data structures, and the atomic tuple operations provide data structure locking. The distributed
tuple spaces represent the knowledge of agents and the history. The scope of a tuple-space is limited in
this work to the node domain.
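The matching rules above (equal arity, equal actual values, wildcards for formal parameters) can be sketched in a few lines. This is a deliberately minimal, hypothetical Python model of a node-local tuple space; a real platform would additionally suspend a consumer until a producer commits a matching tuple, rather than returning None:

```python
# Hypothetical sketch of the Linda-like node tuple space: a pattern
# matches a stored tuple iff both have the same arity and every actual
# value is equal; "?" marks a formal (wildcard) parameter.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):                  # producer commit (output op)
        self.tuples.append(tuple(tup))

    def _match(self, pattern, tup):
        return (len(pattern) == len(tup) and
                all(p == "?" or p == v for p, v in zip(pattern, tup)))

    def inp(self, pattern):              # destructive input operation
        for tup in self.tuples:
            if self._match(pattern, tup):
                self.tuples.remove(tup)  # atomic remove = structure lock
                return tup
        return None                      # a real inp() would suspend here

ts = TupleSpace()
ts.out(("ADC", 3, 4, 1017))                 # key "ADC", position, value
print(ts.inp(("ADC", 3, 4, "?")))           # ('ADC', 3, 4, 1017)
print(ts.inp(("ADC", "?", "?", "?")))       # None (tuple was consumed)
```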
In contrast, signals, which can carry additional scalar data values, can be used for local (in terms of
the node scope) and global (in terms of the network scope) domain agent interaction. In contrast to the
anonymous tuple-space interaction, signals are directly addressed to a specific agent or a group of agents.
The delivery of signals is not reliable if the agents raising and receiving the signal are
not processed on the same node.
3.2. Agent Mobility
Agent mobility is offered by a simple move operation, which migrates the agent to a node in the
neighborhood, assuming mesh-like networks, not necessarily with a static topology. Communication
links are assumed as unreliable, which can be tested by an agent in advance.
3.3. Agent Classes
Agent classes are defined by their parameters, variables, activities and transition definitions, reflecting
the ATG model. Optionally, an agent class can define additional functions for computation and signal
handlers. There are several statements for ATG transformations and composition. Transitions and
activities can be added, removed or changed at run-time.
Appendix A.1 introduces a short notation, which is a one-to-one and isomorphic mapping of the
AAPL. This short notation is used in the following section and is used to describe the agent behavior in
the case-study section.
Figure 4 shows the effects of selected major AAPL statements on the behavior of a mobile multi-agent
system consisting of agents instantiated from different agent behavior classes.
Figure 4. Effects of AAPL statements on the behavior of a multi-agent system.
4. Architecture: Agent Processing Platform
The requirements for the agent processing platform can be summarized as: (1) the suitability
for microchip-level (SoC) implementations; (2) the support of a standalone platform without any
operating system; (3) the efficient parallel processing of a large number of different agents; (4) the
scalability regarding the number of agents processed concurrently; and (5) the capability for the creation,
modification and migration of agents at run-time. The migration of agents requires the transfer of the
data and the control state of the agent between different virtual machines (at different node locations). To
simplify this operation, the agent behavior based on the activity-transition graph model is implemented
with program code, which embeds the (private) agent data, as well as the activities, the transition network
and the current control state. It can be handled as a self-contained execution unit. The execution of
the program by a virtual machine (VM) is handled by a task. The program instruction set consists of
zero-operand instructions, mainly operating on the stacks. The VM platform and the machine instruction
set implement traditional operating system services, too, offering a full operational and autonomous
platform, with a hybrid RISC and CISC architecture approach. No boot code is required at start-up
time. The hardware implementation of the platform is capable of operating after a few clock cycles,
which can be vital in autonomous sensor nodes with local energy supply from energy harvesting. An
ASIC technology platform requires about 500–1000 k gates (16-bit word size) and can be realized with
a single SoC design.
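The core of such a platform is the zero-operand execution loop: instructions carry no operand fields and take all operands from the data stack. The following hypothetical Python sketch (word names and the literal-push convention are illustrative, not the actual αFORTH instruction set) shows the principle:

```python
# Hypothetical sketch of a zero-operand stack machine step loop: every
# instruction pops its operands from the data stack DS and pushes its
# result back, so the code words themselves need no operand fields.
def run(code):
    ds, ip = [], 0                    # data stack, instruction pointer
    while ip < len(code):
        op = code[ip]; ip += 1
        if isinstance(op, int):       # value word: push a literal
            ds.append(op)
        elif op == "dup":
            ds.append(ds[-1])
        elif op == "add":
            b, a = ds.pop(), ds.pop(); ds.append(a + b)
        elif op == "mul":
            b, a = ds.pop(), ds.pop(); ds.append(a * b)
    return ds

print(run([3, "dup", "mul", 4, "add"]))   # [13]  (3*3 + 4)
```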
4.1. Platform Architecture
The virtual machine executing tasks is based on a traditional FORTH processor architecture and
an extended zero-operand word instruction set (αFORTH), discussed in Section 5. Most instructions
directly operate on the data stack DS and the control return stack RS. A code segment CS stores
the program code with embedded data, shown in Figure 5. There is no separate data segment.
Temporary data are stored only on the stacks. The program is mainly organized by a composition of
words (functions). A word is executed by transferring the program control to the entry point in the
CS; arguments and computation results are passed only by the stack(s). There are multiple virtual
machines with each attached to (private) stack and code segments. There is one global code segment
CCS storing global available functions and code templates, which can be accessed by all programs.
A dictionary is used to resolve the CCS code addresses of global functions and templates. This
multi-segment architecture ensures high-speed program execution, and the local CS can be implemented
with (asynchronous) dual-port RAM (the other side is accessed by the agent manager, as discussed
below) and the stacks with simple single-port RAM. The global CCS requires a Mutex scheduler to
resolve competition by different VMs.
The register set of each VM consists of R = {CF, CFS, IP, IR, TP, LP, A, .., F}. The code
segment is partitioned into physical code frames. The current code frame that is processed is stored
in the code frame pointer register (CF). The instruction pointer (IP) is the offset relative to the start
of the current code frame. The instruction word register (IR) holds the current instruction. The
look-up table pointer register LP stores an absolute code address pointing to the actual relocation
LUT in the code frame, and the transition table pointer register TP stores an absolute address
pointing to the currently used transition table (discussed later). The registers A to F are general
purpose registers.
The program code frame (shown on the right of Figure 5) of an agent consists basically of four
parts: (1) a lookup table and embedded agent body variable definitions; (2) word definitions defining
agent activities, signal handlers (procedures without arguments and return values) and generic functions;
(3) bootstrap instructions, which are responsible for setting up the agent in a new environment (i.e.,
after migration or on the first run); and (4) the transition table calling activity words (defined above)
and branching to succeeding activity transition rows, depending on the evaluation of conditional
computations with private data (variables). The transition table section can be modified by the agent
by using special instructions, explained in Section 5.4. Furthermore, new agents can be created by
composing activities and transition tables from existing agent programs, creating subclasses of agent
super classes with a reduced, but optimized, functionality. The program frame (referenced by the
frame pointer CF) is stored in the local code segment of the VM executing the program task (using
the instruction pointer, IP). The code frame loading and modifications of the code are performed by the
virtual machine and the agent task manager only. A migration of the program code between different
VMs requires a copy operation applied to the code frame. Code morphing can be applied to the currently
executed code frame or to any other code frame of the VM, referenced by the shadow code frame
register (CFS).
Figure 5. (Left) The agent processing architecture based on a pipelined stack machine
processor approach. Tasks are execution units of the agent code, which are assigned to a
token passed to the VM by using processing queues. The control state is stored in and
restored from the process table. After execution, the task token is either passed back to the
input processing queue or to another queue of either the agent manager or a different VM;
(Right) The content and format of a code frame.
Each time a program task is executed, the stacks are initially empty. After returning from the current
activity execution, the stacks are left empty, too. This approach enables the sharing of only one data and
return stack by all program tasks executed on the VM to which they are bound! This design significantly
reduces the required hardware resources. In the case of a program task interruption (process blocking)
occurring within an activity word, the stack content is morphed to code instructions, which are stored in
the boot section of the code frame, which is discussed later. After the process resumption, the stacks can
be restored.
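The stack-to-code morphing on suspension can be illustrated as follows. This is a hypothetical Python sketch (the `push` pseudo-instruction stands in for the value words actually emitted into the boot section); it shows only the round-trip property, saving a stack as code and restoring it by re-execution:

```python
# Hypothetical sketch of stack morphing: on task suspension, the data
# stack content is compiled into literal-push instructions stored in the
# code frame's boot section; re-running that boot code after resumption
# rebuilds the identical stack.
def morph_stack_to_code(ds):
    # bottom-to-top order, so re-execution recreates the same ordering
    return [("push", v) for v in ds]

def run_boot(boot):
    ds = []
    for op, v in boot:
        if op == "push":
            ds.append(v)
    return ds

saved = morph_stack_to_code([7, 42, 3])   # agent suspended mid-activity
print(run_boot(saved))                    # [7, 42, 3]
```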
Each VM processor is connected to the agent process manager (PM). The VM and the agent manager
share the same VM code segment and the process table (PT). The process table contains only basic
information about processes required for the process execution. The column entries of a process table
row are explained in Table 1.
Table 1. Process table (PT) row format and description.
4.2. Token-Based Agent Processing
Commonly, the number of agent tasks N_task executed on a node is much larger than the number of
available virtual machines N_VM. Thus, efficient and well-balanced multi-task scheduling is required to get
the proper response times of individual agents. To provide fine-grained granularity of task scheduling, a
token-based pipelined task processing architecture was chosen. A task of an agent program is assigned to
a token holding the task identifier of the agent program to be executed. The token is stored in a queue and
consumed by the virtual machine from the queue. After a (top-level) word is executed, leaving an empty
data and return stack, the token is either passed back to the processing queue or to another queue (e.g., of
the agent manager). Therefore, the return from an agent activity word execution (leaving empty stacks) is
an appropriate task scheduling point for a different task waiting in the VM processing token queue. This
task scheduling policy allows fair and low-latency multi-agent processing with fine-grained scheduling.
Tokens are colored by extending tokens with a type tag. There are generic processing tokens, signal
processing tokens and data tokens, for example, appearing in compounds with signal processing tokens,
which are discussed later.
Each VM interacts with the process and agent task manager. The process manager passes process
tokens of ready processes to the token queue of the appropriate VM. Processes that are suspended (i.e.,
waiting for an event) are passed back to the process manager by transferring the process token from the
current VM to the manager token queue.
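The token flow described above can be sketched as a pair of FIFO queues. This is a hypothetical Python model (the `run_activity` callback abstracts one activity-word execution; all names are illustrative), showing how completed activities become scheduling points and suspended tasks flow back to the manager:

```python
from collections import deque

# Hypothetical sketch of token-based scheduling: each ready task is a
# token in the VM's FIFO queue; after one activity word runs to completion
# (leaving empty stacks), the token is re-queued for fair fine-grained
# scheduling, or handed to the process manager if the task suspended.
def schedule(tokens, run_activity, steps):
    vm_queue, manager_queue = deque(tokens), deque()
    for _ in range(steps):
        if not vm_queue:
            break
        task = vm_queue.popleft()
        if run_activity(task):            # True: task is still ready
            vm_queue.append(task)         # back to the processing queue
        else:
            manager_queue.append(task)    # suspended: to the manager
    return list(vm_queue), list(manager_queue)

# tasks 1 and 3 stay ready; task 2 suspends on its first activity
ready, suspended = schedule([1, 2, 3], lambda t: t != 2, steps=3)
print(ready, suspended)   # [1, 3] [2]
```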
4.3. Instruction Format and Coding
The width of a code word is commonly equal to the data width of the machine. There are four
different instruction code classes: (1) value; (2) short command; (3) long command Class A; and (4) long
command Class B. A value word is coded by setting the most significant bit (MSB) of the code word and
filling the remaining N-1 bits (N: machine word size) with the value argument. To enable the full range
of values (the full data size of N bits), a sign extension word with the MSB set can follow a value word.
A short command has a fixed length of eight bits, independent of the machine word and data width.
Short commands can be packed in one full-sized word, for example two commands in a 16-bit code word.
This feature increases the code processing speed and decreases the length of a code frame significantly.
The long commands provide N-4 (class A) and N-7 (class B) bits for argument values.
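The value-word coding can be made concrete for a 16-bit machine word. This hypothetical Python sketch covers only the MSB-tagged value class and treats everything else as a command word (sign extension and short-command packing are omitted):

```python
# Hypothetical sketch of value-word coding for N = 16: the MSB marks a
# value word; the remaining 15 bits carry the (unsigned) argument.
N = 16
VALUE_FLAG = 1 << (N - 1)          # 0x8000

def encode_value(v):
    assert 0 <= v < VALUE_FLAG, "larger values need a sign-extension word"
    return VALUE_FLAG | v

def decode(word):
    if word & VALUE_FLAG:
        return ("value", word & (VALUE_FLAG - 1))
    return ("command", word)

print(decode(encode_value(1017)))   # ('value', 1017)
print(decode(0x0042))               # ('command', 66)
```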
4.4. Process Scheduling and VM Assignment
The token-based approach enables fine-grained auto-scheduling of multiple agent processes already
executed sequentially on one VM with a FIFO scheduling policy. A new process (not forked or created by
a parent) must be assigned to a selected VM for execution. There are different VM selection algorithms
available: round-robin, load-normalized, memory-normalized and random. The VM selection policy has
a large impact on the probability that a process creation or process forking by a running
process fails, since child agents must be created on the same VM!
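Two of the selection policies named above can be sketched briefly. This is a hypothetical Python illustration (function names are illustrative); round-robin cycles through the VMs regardless of load, while the load-normalized policy picks the VM currently executing the fewest tasks:

```python
import itertools

# Hypothetical sketch of two VM selection policies for new processes.
def make_round_robin(n_vms):
    counter = itertools.count()
    return lambda loads: next(counter) % n_vms   # ignores current load

def load_normalized(loads):
    # pick the VM index with the fewest assigned tasks
    return min(range(len(loads)), key=lambda i: loads[i])

rr = make_round_robin(3)
print([rr(None) for _ in range(4)])     # [0, 1, 2, 0]
print(load_normalized([5, 1, 4]))       # 1
```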
5. Agent FORTH: The Intermediate and the Machine Language
The FORTH programming language corresponds to an intermediate programming language level,
with constructs from high-level languages, like loops or branches, and low-level constructs used in
machine languages, like stack manipulation. The αFORTH (AFL) instruction set I_AFL consists of
a generic FORTH sub-set I_F with common data processing words (summarized in Appendix A.2)
operating on the data and a return stack used for computation, a control flow instruction set I_CF, i.e.,
loops and branches, a special instruction set I_A for agent processing and creation, mobility and agent
behavior modification at run-time based on code morphing and, finally, an agent interaction sub-set I_T
based on the tuple space database access and signals. The AFL language is still a high-level programming
language close to AAPL, which can be used directly to program multi-agent systems. The PAVM agent
processing platform will only support a machine language sub-set (AML) with a small set of special
low-level instructions I_P for process control (so that I_AML ⊂ I_AFL), and with some notational
differences. Several complex and high-level statements of I_AFL are implemented with code sequences
of simpler instructions from the I_AML set, and some of them are introduced in Section 6.
The (current) AML instruction set consists of 92 instructions, most of them being common FORTH
data processing instructions operating immediately on stack values, and 31 complex special instructions
required for agent processing, communication and migration. The AML instruction set is not fixed and
can be extended, which leads to increased resource requirements and control complexity of the VM.
5.1. Program Code Frame
An αFORTH code frame (see Figure 6) starts with a fixed sized boot section immediately followed
by a program look-up relocation table (LUT). The instructions in the boot section are used to:
- set up the LUT offset register LP (always the first instruction),
- enable program parameter loading (passed by the data stack),
- restore stack content after migration or program suspension and
- branch the program flow to the transition table section.
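The frame layout described so far (a fixed-size boot section, followed by the relocation LUT and the word definitions) can be sketched as list concatenation. This hypothetical Python sketch (sizes, padding word and entry names are illustrative, not the actual AML frame format) shows only how the fixed boot-section size makes the LUT offset predictable:

```python
# Hypothetical sketch of a code frame layout: fixed-size boot section,
# then the relocation look-up table (LUT), then the word definitions.
def build_frame(boot, lut, words, boot_size=8):
    assert len(boot) <= boot_size
    boot = boot + ["nop"] * (boot_size - len(boot))   # pad fixed section
    frame = boot + lut + words
    return frame, boot_size               # LUT starts right after boot

frame, lut_offset = build_frame(
    boot=["setlp", "branch-transitions"],
    lut=[("act_main", 10)],               # word name -> frame offset
    words=["act_main:", "1", "2", "add", "exit"],
)
print(lut_offset, frame[lut_offset])      # 8 ('act_main', 10)
```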