Context and attention in activity-based intelligent systems

Complex natural systems are natural systems that display strong emergent behavior. Adaptive agents in a resource-limited environment optimize their behavior by processing only the most relevant information, i.e., by activating the most germane components, learned or evolved from previous experience, thereby responding optimally with the least expenditure of energetic resources. A new class of artificial systems, known as Resource-constrained Complex Intelligent Dynamical Systems (RCIDS), aims to capture this agent behavior and how these agents switch their attention from one object/subject to the next by using activity-aware algorithms in a multi-level hierarchy. In this article, we provide an overview of adaptive systems based on RCIDS. The activity manifested by resource-constrained agents can be captured towards engineering an intelligent system capable of switching attention. We discuss the relationship between Activity, Systems Theory, Emergent Behavior, and Second-Order Cybernetics, and establish that an activity-based intelligent system (ABIS) can be engineered using the Discrete Event Systems (DEVS) formalism through its Levels of Systems Specification. We elaborate on the multi-faceted nature of activity and describe how activity is positioned at a higher level of abstraction than the agent, thereby abstracting away the agent architecture underneath by making it completely transparent, in the same way that Minsky's Society of Mind abstracts away neurons, axons, and synapses to compose a brain-like system of agents and k-lines. We present the theory behind ABIS, the context-agnostic behavior of algorithms implemented within agents, and various issues that need to be addressed to engineer an ABIS.


Introduction
Complex natural systems (CNS) are natural systems that display strong emergent behavior. A strong emergent behavior is a special type of emergent behavior in which the emergent phenomenon has causal properties at lower levels of system specification. It warrants a new nomenclature, and the emergent behavior is irreducible to the behavior of any of its sub-systems. Examples of CNS include the biological cell, the brain, the immune system, communities, etc. These systems can also be classified as open systems, as their behavior cannot be closed under composition and they exchange energy and matter with a dynamic environment. CNS bear self-similar and fractal properties. The complexity in CNS exists at each level and is marked by information boundaries. Information gets transformed as it travels across these levels. Each level is engaged in the cycle of sensing, processing, synthesizing, and actuating information. As Pinker [1] stated, only the "relevant" information crosses these boundaries. Sometimes, emergent behavior reflects a lack of understanding of the system to begin with [14]. Ashby also well acknowledged that as systems grow in complexity, the labour necessary to develop a complete understanding of the system parts is prohibitive. In conclusion, it is difficult to capture the complete holistic behavior of a complex system. Cybernetics gave way to Complex Adaptive Systems in the nineties [15], and emergence was addressed in various other disciplines around the same time.
Indeed, an emergent behavior is an observer phenomenon [3]. Bonabeau and Dessalles [16] suggest that it is also the characteristic feature of the detection of hierarchies. Wolf and Holvoet [17] summarize emergence as primarily dealing with capturing macro-level behavior resulting from the interactions of individual parts of the system. In the literature on Complex Adaptive Systems theory, emergence is classified into two broad categories:
• Weak emergence: This type of emergence occurs when the emergent phenomenon can be traced back to individual parts of the system. The emergent phenomenon does not have causal powers, and its presence does not alter the dynamics of the individual parts. This behavior is repeatable and reproducible.
• Strong emergence: This type of emergence leads to new nomenclature for the behavior's definition, classification, and description. Usually, subject matter experts (SMEs) are best suited to identify such behavior. This behavior has causal powers at lower levels and results in behavior modification of the individual parts of the system.
While weak emergence can be easily engineered in an artificial system (e.g., DEVS-based systems or agent-based systems), engineering strong emergence is difficult. The former is easily captured by Systems Theory's closed-under-composition principle, while the latter makes the system an open system that synthesizes new knowledge and consumes it.

Attention and Computational Intelligence
The human cognitive ability to attend has been widely researched in cognitive and perceptual psychology, neurophysiology, and computational systems, and the core issue has been that of Information Reduction [1,18]. This capacity to attend was computationally implemented as a search-limiting heuristic in the early AI literature. Tsotsos and Rothenstein [18] found that various cognitive architectures implemented information reduction in different ways, mostly to limit the "working memory" component, but still failed to explicitly discuss human capacity, bottlenecks, and resource limits. Computational models of attention can be specialized into four hypotheses: salience map, temporal tagging, emergent attention, and selective routing. Styles [19] suggested that attentional behavior emerges as a result of complex underlying processes in the brain, and Shipp's review [20] concludes that emergent attention is the most likely hypothesis.
Mittal and Zeigler [8] argued that while attention switching occurs naturally, the focus of attention is a deliberate, top-down phenomenon guided by goal-directed behavior. When scalability and measurement of intelligence are viewed from the Computer Science perspective, intelligent processing can be viewed as a subset of universal computation. The notion of intelligence is multifaceted and subjective, and adding intelligence to a system always has a cost, whether in computational time, energy, resources, or knowledge. Given the apparent complexity of such a system at design/compile time or at run-time, how does one focus attention on a specific feature of such a system? Attention is defined as the capacity to direct one's resources (or mind, in psychological terms) preferentially to an object from a set of complex stimuli, thereby reducing their footprint.
We require an architecture in which attention emerges and computational intelligence is engineered by way of computational algorithms [8,21] and/or cognitive/system architectures.

Resource constrained Complex Intelligent Dynamical Systems (RCIDS)
A resource-constrained scalable complex intelligent dynamical system (RCIDS) [8] is defined as follows:
• Resource-constrained environment: In the modeled connectionist system, the network bandwidth and computational resources available to any sensor/agent are finite. The constraints may take the form of energy, time, knowledge, control, etc., available to any processing component.
• Complex: Presence of emergent behavior that is irreducible to any specific component in the system. Attention switching is an emergent phenomenon.
• Intelligent: The capacity to process sensory input from the environment and act on it by processing the information to pursue goal-oriented behavior.
• Dynamical: The behavior is temporal in nature. The system has emergent response and stabilization periods.
• System: The model conforms to systems-theoretical principles.
One of the emergent properties of RCIDS is the capacity to direct attention to the most active component and switch to the next component when the activity distribution changes. While such behavior is natural in CNS, developing it in artificial systems is a challenge. One such prototype model was attempted using the DEVS formalism [8,21], and it validated that attention switching is an emergent property of a resource-constrained dynamical intelligent system.

Properties of an intelligent adaptive system
Summarizing the background, what we are looking for in an artificial intelligent adaptive system is the capacity to:
1. Attend to strong emergent behavior both quantitatively and qualitatively
2. Modify both structure and behavior at multiple levels of system/agent specification
3. Channel resources to focus attention on the relevant "subject" or stay "in-context"
4. Switch attention in a resource-limited environment
5. Situate itself in a new environment (when strong emergent behavior has causal effects)
6. Pursue goal-directed behavior
We shall show in the sections ahead how such capacities are addressed by Activity-based Intelligent Systems.

Activity, Systems Theory, Emergence and Second-order Cybernetics
According to Hu and Zeigler [22], energy is the general concept that represents the physical cost of action in the real world. Information is the general concept that models how systems decide on, manage, and control their actions [3]. As shown in Figure 1(a), information and energy are two key concepts whose interaction is well understood in the following common-sense manner: on one hand, information processing takes energy; on the other hand, getting that energy requires information processing to find and consume energy-bearing resources. The information processing that a system can do is limited by the energy available to it. However, to increase the amount of energy available to it, a system must use its information processes, but these use some of that energy. A system of systems (SoS) is sustainable in the environment if the energy expended by the SoS to meet behavioral requirements is matched by the energy accruing to it by satisfying the requirements [8].
Activity is a measure of system behavior that allows estimating how much energy a behavior needs to consume. Intuitively, the more active a component is, the more energy it requires to maintain its activity. Hu & Zeigler [22] and Zeigler [23] postulated the linear allocation strategy

E_i = a * A_i (1)

where E_i and A_i are, respectively, the energy allocated to and the activity of the component with subscript i, and a is the proportionality factor. Assume that each pattern sensed by the system requires a corresponding distribution of activities among its components to be properly sensed. Then Hu and Zeigler show that the potential to save total energy using the linear allocation strategy is determined by the activity disparity, which is the difference between the maximum and minimum activity of the components, D = max_i A_i - min_i A_i. Achieving the linear allocation condition requires coordination mechanisms such as the attention-focusing capability. Semantically, activity is an abstract concept that samples the information and, based on the developed metrics, associates it with the energy required or consumed. Activity-based systems were recently conceptualized by Zeigler [9], wherein four concepts were defined:
1. Activity Measures: These reflect various metrics defined for an abstract activity
2. Activity Equivalence Classes: These reflect similar activity patterns predicting system behavior
3. Activity Distribution: These reflect allocation of energy/resources for an activity
4. Activity Correlation: These reflect correlation of activity with the system's outcome
Figure 1: Activity concept linking information and energy [22]
These concepts are applied to the Discrete Event Systems (DEVS) Levels of Systems Specification (Table 1), extended from Zeigler [9]. As can easily be seen, activity, being at a higher level of abstraction (as against the granular Event), is applicable to higher levels of system specification (Levels 2, 3, and 4).
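The linear allocation strategy and the activity disparity can be sketched in a few lines of code. The function names, the example activity values, and the peak-provisioning baseline used for comparison are illustrative assumptions, not taken from the cited implementations:

```python
# Sketch of the linear activity-to-energy allocation strategy E_i = a * A_i,
# and the activity disparity D = max_i A_i - min_i A_i that bounds the
# potential energy savings over provisioning every component for peak load.

def linear_allocation(activities, a=1.0):
    """Allocate energy to each component proportionally to its activity."""
    return [a * A for A in activities]

def activity_disparity(activities):
    """Difference between the maximum and minimum component activity."""
    return max(activities) - min(activities)

activities = [5.0, 1.0, 0.5, 3.5]              # measured activities A_i
energies = linear_allocation(activities)        # E_i = a * A_i
uniform = [max(activities)] * len(activities)   # baseline: provision all for peak
saving = sum(uniform) - sum(energies)           # energy saved vs. peak provisioning

print(energies)                        # [5.0, 1.0, 0.5, 3.5]
print(activity_disparity(activities))  # 4.5
print(saving)                          # 10.0
```

When all components are equally active the disparity is zero and nothing is saved; the larger the disparity, the more the proportional strategy gains over uniform provisioning.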
The DEVS formalism is closed under coupling, as it is founded upon Systems Theory, which is closed under composition [10]. To display intelligent behavior (both emergent and adaptive), the systems formalism must have the capacity for variable structure at all levels of systems specification. Mittal [3] summarized various extension mechanisms to the DEVS formalism in Multi-Level-DEVS, Stigmergic-DEVS, and Complex-adaptive-systems-DEVS (Figure 2). To specify adaptive behavior, the following three properties of an open system must be addressed by the Levels of Systems Specification:
1. Adaptation: change in behavior at multiple levels of systems specification. The system "learns" new behavior within an agent, modifies state transitions, and communicates with a new dynamic environment (with a different set of couplings).
2. Causation: The acquired behavior is causal at multiple levels of systems specification.
3. Persistence: The agent's behavior and the system's structure are persistent, i.e., the system has history, and the new behavior is based on behavior acquired over time.
The modeling of the above three properties is in alignment with the notions of weak and strong emergent behavior. Weak emergent behaviors can be traced back to the system's hierarchical structure, constituent component behaviors, and their external interactions. For strong emergent behavior, new knowledge needs to be added. Knowledge to add/remove states, transitions, weights on these transitions, input/output interfaces, interactions, and the number of system components has to be either synthesized internally within the system or provided externally by Subject Matter Experts (SMEs). Many workers in systems theory, beginning with Ashby [14], have expressed similar views with respect to emergent properties, which are rooted in an incomplete understanding of the system [24]. The development of a strong emergent system is an iterative process wherein knowledge, once added, becomes a part of the system, which thereafter displays weak emergent behavior (Figure 3). It is worth arguing that the display of strong emergent behavior by a system model is analogous to a lack of understanding of the real system to begin with. This argument is in congruence with the definitions of weak and strong emergent behavior. In the computational environment, we are certainly trying to close the system under the composition principle so that it does not display any unintended behaviors. Alternatively, the problem of "semantic internal knowledge synthesis" is still an open problem, as there needs to be SME involvement to validate whether the new knowledge of the observed behavior is semantically relevant, is persistent, and has causal properties. If the observed emergent behavior is tagged "intelligent" by the SME, it may so be the case. However, the system "just is"! Activity characterization for the above three properties of an open system can be seen in Table 2. It summarizes that the properties of adaptation and persistence have to be implemented at Levels 2, 3, and 4 (behavior, interface, and structure), and
causality at Levels 3 and 4 (interface and structure) of systems specification. The cyclical developmental methodology in Figure 3 is also supported by the approach given by Foo and Zeigler [24], wherein they attempt to capture the holistic effects. Indeed, emergent behavior is holistic in nature. They define holism as:

Holism = reductionism + computation + higher-order effects (2)

While the computation is strictly the algorithmic complexity of the system, the higher-order effects are the emergent behaviors that are considered in the holistic representation (2) above. In activity-based systems, as laid out in Tables 2 and 3, activity characterization occurs at higher levels of systems specification and aims to capture some of the higher-order effects. However, a more rigorous definition of higher-order effects that accounts for "intuition", "insight", etc. is warranted [23]. The requirement that the engineering of a formal system incorporate the observed behavior as the system is being built (in simulation runs) is also in accordance with the second-order cybernetics philosophy [25], wherein the observer and the observed cannot be separated. A strong emergent phenomenon is an observer phenomenon at a higher level of systems specification, and the observed system must incorporate it to be a true model of a real system. An open system is a second-order cybernetic system. An intelligent system/agent is, consequently, also a second-order cybernetic system.

Activity-based Intelligent Systems
In the previous section, we saw that activity characterization can be associated with the DEVS Levels of Systems Specification. Activity measures, equivalence classes, distributions, and correlations can be employed to capture emergent behavior at different system specification levels. In a sense, the formulation of high-level constructs to abstract the system's behavior towards understanding higher-order effects is very much the objective. Activity modeling happens at a higher level of abstraction than that of the system's low-level behavior (see Tables 2 and 3). Activity characterization is a pattern-identification exercise at higher levels of system specification and, rightly so, with the help of an SME, the behavior can be marked "intelligent".
The fundamental basis of all DEVS systems is the Event, which occurs at Levels 0 and 1 of systems specification. Activity characterization allows the analysis of system behavior at the event (using activity measures), event-stream (using activity equivalence classes and activity distributions), and complex-event-processing (using activity distribution and activity correlation) abstractions. In addition, as the system strives for the closed-under-coupling property for predictable and reproducible behavior, emergent event-streams (validated by an SME) are continuously incorporated into system behavior at multiple system specification levels (Figure 3). An Emergent Event-Stream is defined as an event-stream or event-trajectory that emerges out of system behavior and can be characterized using activity measures, equivalence classes, distributions, and correlations. It is a new pattern that has semantic validity according to the SME and, rightly so, it is a strong emergent behavior if it has a causal nature, which again is determined by the SME. The newly provided SME knowledge is incorporated into the system, and various new interactions are defined that do the job of positive and negative feedbacks in the interaction network and also update the constituent components' behavior to utilize the new knowledge. This is analogous to the system "learning" new knowledge and utilizing it, which is where artificial intelligent systems are weakest in design.
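A minimal illustration of the first two levels of activity characterization follows. The event format, the transition-count metric, and the quantization step used to form equivalence classes are all illustrative assumptions; they are not part of the DEVS formalism itself:

```python
from collections import Counter

# Hypothetical activity measure over a discrete event stream: the activity
# of a component is the number of state-transition events it generates in a
# window (a Level 0/1 measure). Equivalence classes then group components
# whose quantized activity levels match, i.e., components with similar
# activity patterns.

event_stream = [  # (time, component) pairs for observed transition events
    (0.1, "sensorA"), (0.2, "sensorA"), (0.3, "sensorB"),
    (0.4, "sensorA"), (0.9, "sensorC"), (1.1, "sensorB"),
]

def activity_measure(stream):
    """Count transition events per component."""
    return Counter(comp for _, comp in stream)

def equivalence_classes(activity, quantum=2):
    """Group components whose activity falls in the same quantized band."""
    classes = {}
    for comp, count in activity.items():
        classes.setdefault(count // quantum, []).append(comp)
    return classes

act = activity_measure(event_stream)
print(act)                       # Counter({'sensorA': 3, 'sensorB': 2, 'sensorC': 1})
print(equivalence_classes(act))  # {1: ['sensorA', 'sensorB'], 0: ['sensorC']}
```

Distributions and correlations build on these counts: a distribution is the allocation implied by the measured counts, and a correlation relates a component's count to an observed system outcome.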
Taking the same concept to a system-of-systems environment, the emergent streams are detected syntactically and semantically validated by an SME in a pragmatic context. As the system needs to be adaptive, to display intelligent behavior in symbolic terms as well (according to the AI literature), semantic interoperability must exist across the system components that detect this event. However, there is no guarantee that two systems will attach the same semantics to a syntactic event (Figure 4). At this juncture, the involvement of an SME ensures that the semantics exist in a pragmatic context. Computationally, as a syntactic event occurs, activity measures capture the event syntactically, activity distributions ensure semantic validity, and activity equivalence classes and activity correlation place the detected activity in a pragmatic context (Figure 5). The incorporation of new knowledge by the SME at different levels of activity characterization ensures that the activity stays "in-context" and conforms to higher-level system behavior.
ABIS are systems that use activity characterization as a means to incorporate strong emergent behavior in adapting to their dynamic environments. They incorporate both perspectives on intelligent behavior, i.e., emergent as well as symbolic. ABIS have the following properties:
1. They are multi-agent systems.
2. Each agent is an adaptive agent that can sense the environment and act on the perceived environment, thereby changing it.
3. An agent is composed of sensors and actuators.
4. An agent can switch attention and regulate resource allocation to stay within the context.
5. Activity occurs within the agent as well as outside the agent, where the external interaction occurs with the environment and/or other agents.
6. The emergent behavior resulting from direct/indirect multi-agent interaction is causal at the agent level, making the system evolve as a whole.
7. The SME role is critical in the development lifecycle of an ABIS.
8. Computations (both in the agent and in the environment) are dynamical in nature.
9. Strong emergent behavior appears either due to complex agent behavior in a simple environment (e.g., predator-prey pursuit models) or complex agent behavior in a complex environment (e.g., human models in Live, Virtual and Constructive [LVC] environments [27]).

Activity in Context and Attention-focusing in ABIS/RCIDS
Activity is an abstract concept [6]; it is multi-level and multi-resolutional, with the granularity of a formal event. From a simple count of state transitions (in the formal DEVS specification) to the abstract activity characterization discussed in Table 1, activity modeling is a complex endeavor. While the former can be addressed computationally, activity characterization with the help of an SME can take activity detection to the semantic and pragmatic levels and incorporate strong emergent behavior into system refinement. Activity has dynamical behavior and is multi-faceted. The same syntactic event, when incorporated into activity characterization, can be studied under different contexts belonging to different areas of interest. Each context is sampled and studied using an appropriate "currency", which is essentially the activity metric. We define Context Currency as the medium of exchange for information in a particular context, and define context in relation to activity as: the background activity supporting a foreground event.
Context has different connotations with respect to the application field. It is multi-faceted. Table 3 lists some generally accepted, prevalent definitions of context in the respective communities of interest. As can be seen in Table 3, every context has a different currency and can be easily quantified using the activity concept. While only a few of the areas of interest are listed, the list can be expanded further. Through activity characterization, the Context Currency allows us to raise the level of context to the semantic domain. For intelligent goal-pursuing behavior by an agent, the activity (both designed and emergent) needs to be partitioned according to both bottom-up and top-down phenomena [8]. Incorporating both of these phenomena results in a Sensor-Gateway system (Figure 6). It can also be construed as an agent-gateway system, where an agent has the sensory and perceptual apparatus. Focusing attention on one of the gateways is a major issue in fractal systems such as CPS. The implementation is attempted through the RCIDS mentioned in Section 2.3, which display an emergent property of focusing attention on an agent-gateway that detects a "change" in activity. RCIDS work with finite resources, and consequently, resource management is a critical aspect of such systems. Resources may take the form of computational time or any other abstract measure, such as the knowledge partitioning applicable in problems dealing with Big Data and Genetic Algorithms. The distribution is done through feedback loops from other supporting system components, with the most active component receiving the maximum share of resources. As the agent passes through cycles of high and low activity, so does the assignment of the resources allocated to it. RCIDS are equipped with programmable sensor-gateways that can dynamically change their sampling rate and the threshold value at which they report data. A fractal sensor/agent-gateway system is minimally composed of four components:
1. Sensor/Agent: This component is attuned to the appropriate activity currency and has a sampling rate to detect a quantum change.
2. Sampling or Resource Allocation Manager (RAM): This component samples the activity currency in a pragmatic context using multiple computational algorithms.
3. Rate Estimator (RE): This component estimates the dynamical behavior of the Gateway and provides a smoothing function to prevent rapid system oscillations and to ensure that an activity persists "long enough".
4. Data-driven Decision Maker (DDM): This component quantifies the information flow from the Sensor, sets a new Goal (threshold for the sensor), and quantifies the context.
The quantized change in activity is both quantitatively and qualitatively evaluated by various computational algorithms (such as "Winner Take All" [WTA]) implemented in the RAM that sample the activity at that level. For a review of some of these algorithms, see Mittal and Zeigler [8]. The RAM validates the importance of any activity sensed by the sensor. The criterion for deciding that an activity is "important" is based on the sensitivity of the sensor and the RE threshold. Every sensor is provided with an RE to validate the sensor's results. The RE is a realization of context sensitivity guided by task-directed biasing. The RE may or may not be present at intermediate levels in the hierarchy, but it must be at the coarsest level, to deduce and validate what the sensors are witnessing.
The top-down partitioning through activity characterization ensures that it has semantic and pragmatic validity. A bottom-up prototype model was developed using the DEVS formalism [8,21], and the simulation results validated that the system is capable of directing and switching its focus to components that display persistently high activity during simulation and can also withdraw attention from components that are not displaying any activity.
In a self-similar design, there exists a RAM with different currencies at every level to direct focus and attention, and an RE coupled to every sensory element. The communication between the RAM and the RE is guided by activity characterization, which delivers a Quantized Context operating on a specific activity currency. The system also allocates resources and peripheral attention to the ongoing working sensors and does not inhibit or stall their operation in the pursuit of focusing attention on the important one. For different WTA mechanisms, the sensor population is provisioned accordingly, and in no case are resources completely withdrawn from the running sensors, as it is not predictable which sensor might produce important information the next instant. The system lets the other sensors keep working at their default settings, provides the resources for their operation, and intermittently switches when an activity of high importance is encountered and advertised by any sensor.
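The interplay of a smoothing Rate Estimator and a soft winner-take-all allocation can be sketched as follows. This is a minimal illustration, not the cited implementation: the smoothing constant, the resource floor, and the sensor names are all illustrative assumptions.

```python
# Sketch of RAM + RE behavior: a soft winner-take-all that directs most
# resources to the persistently most active sensor while never starving
# the others. Exponential smoothing stands in for the Rate Estimator, so
# attention only switches on activity that persists "long enough".

def smooth(prev, observed, alpha=0.5):
    """Rate Estimator: damp oscillations in raw activity readings."""
    return alpha * observed + (1 - alpha) * prev

def allocate(smoothed, budget=10.0, floor=0.1):
    """Soft WTA: winner gets the surplus, others keep a floor allocation."""
    winner = max(smoothed, key=smoothed.get)
    n = len(smoothed)
    alloc = {s: budget * floor for s in smoothed}   # peripheral attention
    alloc[winner] += budget * (1 - floor * n)       # surplus to the winner
    return winner, alloc

est = {"s1": 0.0, "s2": 0.0}
for raw in [{"s1": 1.0, "s2": 0.2}, {"s1": 0.9, "s2": 0.1}]:
    est = {s: smooth(est[s], raw[s]) for s in est}

winner, alloc = allocate(est)
print(winner)  # 's1' -- the persistently most active sensor wins attention
print(alloc)   # 's2' still receives its floor allocation, never zero
```

The floor term realizes the requirement above that resources are never completely withdrawn from running sensors, while the smoothing step prevents a single noisy spike from triggering an attention switch.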
While activity is at an abstract level, another related concept, activation, can be grounded in both computational and biological hardware. Activation needs an actor that activates or gets activated. Activity is manifested by one component and is detected by another component. Certainly, the actors on both sides of the activity can be merged together, resulting in a complex actor that is capable of causing and detecting its own activity (which aligns well with the observer in second-order cybernetics, as articulated by Mittal [3]). Technically, activation is the act of supplying energy/resources to a component to enable or increase its activity. This act is specifically different from, and independent of, stimulating the component directly with an external input. Sensory-based activation is the detection of activity by a sensor and can be traced back to sensor events. It can be classified into two types:
1. External sensory-based: This activation is due to external sensors (e.g., bodily sensors interfacing with the external environment).
2. Internal sensory-based: This activation is due to internal sensors (e.g., proprioceptive sensors that detect activity and induce further activation).
In addition, knowledge-based activation is the detection of activity by SMEs (in the case of strong emergence) or by the existing knowledge base (in the case of weak emergence). The latter case has been amply researched in cognitive psychology, wherein a semantic network works through the activation equation. The former requires a continuous influx of new knowledge by inferencing/sense-making agents (in an artificial system) or by an SME with a live human performing the sense-making. This very act of correlating new knowledge is incorporated by switching the Context Currency explicitly, thereby employing a new set of activity characterizations for a different facet (Table 3) of the same situation. In plain terms, it brings a new perspective to the existing situation, providing leverage towards better situation understanding.
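The knowledge-base case can be illustrated with a toy spreading-activation step, in the spirit of the activation equations used in cognitive psychology: a node's activation is its decayed prior level plus weighted activation received from its neighbors. The network, the weights, and the decay value are all illustrative assumptions, not taken from a specific published model.

```python
# One step of spreading activation over a toy semantic network:
# A_j <- decay * A_j + sum over links (i -> j) of w_ij * A_i.

network = {  # node -> list of (neighbor, association weight)
    "dog": [("bark", 0.8), ("animal", 0.6)],
    "bark": [("dog", 0.8)],
    "animal": [("dog", 0.6), ("cat", 0.5)],
    "cat": [("animal", 0.5)],
}

def spread(activation, decay=0.7):
    """Apply one synchronous spreading-activation update."""
    new = {n: decay * a for n, a in activation.items()}
    for src, links in network.items():
        for dst, w in links:
            new[dst] += w * activation[src]
    return new

act = {n: 0.0 for n in network}
act["dog"] = 1.0            # external stimulus activates 'dog'
act = spread(act)
print(act["bark"] > act["cat"])   # True: directly linked concepts fire first
```

Iterating the step lets activation reach indirectly linked nodes, which is the mechanism by which an existing knowledge base "detects" activity related to a stimulus.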
In an artificial system, both internal and external sensor units send activity information to the Decision Center (Figure 7), which can send resources and/or directives to internal components (e.g., sensor-gateways or agents). Resource allocation can be reactive, sending resources to newly active agents, or proactive, priming or "encouraging" activity in inactive agents to activate them above their base threshold activity, based on both exteroceptors and knowledge-based context switching. In the latter case, this could be for working on problems without specific instructions but with abstract goals (e.g., global biases and multiple facets): stimulating autonomous problem solving or "thinking". Such agents have to self-organize into collaborative assemblies. This dynamic self-organization may result in an emergent network where the emergent hub (or a group of agents) is indicative of the agents gathering the most attention. This is also supported by Network Science, as discovered by the work of Barabasi [2] and later analyzed as a formal discrete event system by Mittal [3]. The Decision Maker could have the ability to select these subsets of potentially collaborative agents that are likely to address a particular problem, due to correlations of activity and outcome in context that it has previously encountered and memorized. If it is able to select these subsets according to the existing knowledge, then it validates the case for weak emergence, as the activity characterization results in expected emergent behavior. Contrary to this, any involvement of an SME to validate the newly emergent behavior makes this a strong emergent phenomenon, which can be associated with any new facet in the SME's knowledge base. In a more complex environment, more than one SME brings additional facets and experience to bear on the same activity characterizations. This is further elaborated in the theory of the society of mind, brought forward by Minsky [13]. The k-line agent as conceptualized by Minsky is an
agent whose primary job is to turn on other agents in the connectionist network. It is our hypothesis that, in the ABIS architecture, the agent receiving the attention may actually play the role of the k-line agent, where the agents connected to this agent start receiving resources as the k-line agent starts receiving resources. In natural systems, over sustained durations and through biological evolution, such k-lines naturally activate a range of agents connected to various sub-systems, each performing its own job. Activities like walking, supported by the bodily structure, come with a neural apparatus and a network of k-lines that, once activated and solidified in the early years of life, stay strong forever. In contrast, acquiring new declarative knowledge and the associated procedural knowledge is the very act of learning. Such learning, in Minsky's perspective, is the establishment and emergence of new k-line agents that control new neural pathways and networks. In a system based on RCIDS, the presence of k-line agents, when coupled with a rich ontology and an appropriate currency of exchange, can effectively switch attention based on different types of sensory activations. Attention switching caused by external sensory-based activation makes the system reactive and situates the actor in the environment; when caused by internal sensory-based activation, it allows the actor to explore the internal "knowledge" by switching to other potential "knowledge-nodes" that can help it better "sense" the situation. This process is also known as sense-making. The act of switching attention and resources to the appropriate agent results in cascaded effects through the neural landscape and ultimately the mind [13]. ABIS provides a computational mechanism to identify agents that can evolve as k-line agents. Certainly, this hypothesis needs to be tested in a more complex implementation of ABIS and is left for future work.
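The k-line hypothesis above can be sketched as a resource cascade: an agent that receives attention forwards a fraction of its resources to the agents it is connected to, which do the same in turn. The topology, the forwarding fraction, and the cutoff are illustrative assumptions for this sketch only.

```python
# Sketch of the hypothesized k-line cascade in ABIS: resources given to an
# attended agent propagate down its connection network, activating the
# sub-assembly it controls (e.g., 'walk' turning on balance and motor agents).

klines = {  # agent -> agents it turns on when it receives resources
    "walk": ["balance", "leg-motor"],
    "balance": ["inner-ear"],
    "leg-motor": [],
    "inner-ear": [],
}

def cascade(root, resources, fraction=0.5, cutoff=0.1):
    """Distribute resources down the k-line network until shares fade out."""
    received = {root: resources}
    stack = [root]
    while stack:
        agent = stack.pop()
        share = received[agent] * fraction      # portion forwarded onward
        children = klines.get(agent, [])
        if share < cutoff or not children:
            continue
        per_child = share / len(children)
        for child in children:
            received[child] = received.get(child, 0.0) + per_child
            stack.append(child)
    return received

print(cascade("walk", 8.0))
# 'walk' receives 8.0; 'balance' and 'leg-motor' each receive 2.0,
# and 'balance' passes 1.0 on to 'inner-ear'.
```

The cutoff makes the cascade self-limiting, so attention given to one k-line agent primes its assembly without flooding the whole network.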
Zeigler [23] shows that the reduction in energy in actual implemented systems (e.g., hardware) can be measured and compared to the ideal level given by the disparity measure. Likewise, the attention-switching architecture saves energy by not wasting it, in a dynamic fashion, on components that do not need it. The (non-functional) performance of such architectures can be gauged by the disparity measure, a task left open for continued research. Muzy and Zeigler [28] report on dynamic credit assignment as a step towards the identification of high-performing agents based on the correlation of their activity with the overall system outcome. The next step is to condition the correlation process by quantized context so as to be able to identify high-performing agent subsets by context and reactivate them as such contexts re-occur; in other words, to classify subsets of agents as k-lines, indeed a task left for future work.
ABIS operate at a higher abstraction level than any agent-based system; they detect emergent behavior through activity characterization and, being founded on RCIDS, facilitate contextualization at pragmatic levels, in both qualitative and quantitative terms, by switching attention to the facet of the next undertaking. ABIS, through their sensor/agent-gateways, allow the quantized context to influence any low-level behavior by switching attention to a new facet of importance.
We have presented the theory behind ABIS, with more in-depth research left to be reported in future work.

Figure 3: Weak and Strong emergent behavior in design of complex adaptive systems

Figure 5: Activity Characterization and Interoperability levels

Figure 6: Sensor-gateway system model for a single level architecture (adapted from [8])

Figure 7: Sensor-provided activity information enables resource allocation in addition to directives in an ABIS

Table 1: DEVS Systems Specification and Activity characterization

Table 2: Open-system feature and Activity Characterization

Table 3: Context and its currencies