Sensor network prediction based on spatial and temporal GNN

Multi-sensor prediction is a hotspot for research and development in sensor management technology. Thanks to advances in artificial intelligence, researchers have been able to apply neural networks and traditional artificial intelligence approaches to multi-sensor prediction effectively in recent years. In this model, we represent the sensor network as an unweighted graph based on a GNN with spatial and temporal features, combine the temporal modelling of the gated recurrent unit with the spatial modelling of the graph neural network, and use the resulting model to predict sensor features. This tackles the issues of poor sensor network efficiency and sluggish speed without requiring data fusion.


Introduction
Sensor forecasting has received a great deal of attention as a result of the development of intelligent multi-sensor systems. It is an important aspect of sensor planning, sensor management, and sensor control, and a crucial part of advanced sensor management. Sensor forecasting is the act of anticipating the changes in the other sensors of a sensor network, such as beam location, beam aiming, and transmitted waveform, based on the changes in one sensor. Intelligent sensor prediction has fundamentally altered the way the operator interacts with the sensor: instead of constantly changing the sensor's working mode to match the environment, the operator creates an autonomous resource management algorithm that dynamically adjusts the pulse signal emitted by the sensor. Sensor forecasting, however, has always been a difficult task due to its intricate spatial and temporal dependencies.
(1) Spatial dependence. The topology of the sensor network has the greatest influence on the change in sensor parameters. Through information sharing, the status of one sensor in the network affects the status of the others, and through the feedback effect, the status of the other sensors in turn affects this sensor. As illustrated in Figure 2, the sensor properties change under the influence of adjacent sensors. (2) Temporal dependence. The sensor parameters fluctuate dynamically over time, which is primarily expressed in periodicity and trend. As seen in Figure 1, the sensor characteristics of Sensor 1 exhibit a periodic variation over an hour. The sensor parameters, for example, will be influenced by the state of the sensor network at the previous moment or even further back.
The key contributions of this article are: i) a distributed multi-sensor networking technique is proposed; ii) our sensor network can process data at the signal level, allowing for faster data processing; iii) because the programmability difficulties have been resolved, no data fusion is needed.

Related works

Sensors network
Since the 1970s, the majority of research on multi-sensor collaborative prediction has focused on fusion judgment. Researchers at MIT, with the help of the US Naval Laboratory, proposed detecting data-level fusion using Bayesian criteria and then choosing the local optimum. Z. Chair and P. K. Varshney proposed an outstanding data-level fusion prediction solution by applying data fusion to the multi-sensor setting, based on the Bayes detection problem and the binary hypothesis testing problem [1]. R. Niu's counting rule [2] is a unique fusion rule based on the premise that the node sensor's detection probability equals its false alarm probability. To address the problem that a sensor network's detection performance is readily influenced by the environment, this paper proposes a multi-sensor cooperative target identification strategy based on adaptive iterative thresholds. Thanks to developments in computer memory, transmission bandwidth, and data processing capabilities, more observation data can be transported to the fusion center for analysis [3]. With help from the US Air Force Rome Laboratory, researchers investigated the signal-level fusion detection approach of fusing original echo signals and its fundamental problems.
The great majority of signal-level fusion detection is predicated on the assumption of uncorrelated echoes [4,5]. Fishler establishes the self-contained requirements for the nodal sensor used to detect the echo. I. Y. Hoballah [6] proposed the use of distributed Bayesian signal detection. When it comes to multi-sensor integrated collaboration on a single platform [7], the core of the integrated sensor design is the multifunctional phased array radar. The most successful project is InTop (Integrated Topside), which is backed by the US Navy. In order to realize multi-sensor signal-level collaborative detection on a single platform, the project adopts unified resource management, data processing, status control, signal processing, signal presentation, and other technologies [8,9]. InTop's primary objective is to produce a modular, open architecture that is adaptable to advancing technologies and naval warfare requirements. The InTop framework is used to build a sensor system that can process any one or more given narrow/wide beams, can generate, receive, and process multi-frequency complex waveforms, can track multiple targets at the same time, and achieves both alerting and precise tracking [10][11][12][13].

Graph neural network
To begin, we must define GE (graph embedding), GNN (graph neural network), and GCN (graph convolutional network), as well as their distinctions and relationships. Graph embedding (GE) [14][15][16][17][18] is a learning representation method that usually has two levels of meaning. Firstly, low-dimensional vectors are used to represent node features, so that the resulting vectors may be used for vector-space representation and reasoning. For instance, the node representation of the sensor network consists of each sensor's representation vector, which is then used for node classification. Secondly, the whole network structure is represented in a real-valued, dense, low-dimensional vector form, which is then classified. Three approaches to graph embedding are GNNs, DeepWalk, and matrix factorization. The term GNN (graph neural network) refers to neural network models applied to graphs. GNNs [19] may be categorized using a variety of classification approaches and technologies; GAT, GCN, and GLSTM [20][21][22] are currently among the most effective. Graph convolutional networks (GCNs) are a form of neural network that uses graph convolution [23]. With the continuous development and enhancement of graph convolutional neural networks, a variety of new methods have evolved, which makes the status of graph convolution in graph neural networks similar to that of convolution in image processing. For the graph, the following features are defined. A graph is a pair G = (V, E) with N nodes v_i ∈ V and edges (v_i, v_j) ∈ E, where each node has a distinct feature vector x_i. X ∈ R^{N×D} is the matrix of feature vectors for the nodes x_i, where N denotes the total number of nodes and D denotes the dimension of each node's features, commonly known as the feature-vector dimension. As a consequence, each graph convolutional layer may be represented by a nonlinear function of the form H^{(l+1)} = f(H^{(l)}, A), where H^{(0)} = X is the first-layer input and A ∈ R^{N×N} is the adjacency matrix. Various models are used for various issues; the distinction lies in the way the function f is implemented.
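The layer rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name, the toy graph, and the sigmoid choice of nonlinearity are our own assumptions for demonstration.

```python
import numpy as np

def gcn_layer(H, A, theta):
    """One graph-convolution layer: sigma(D^-1/2 (A+I) D^-1/2 H theta)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)                       # add self-connections
    d = A_tilde.sum(axis=1)                       # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization
    return 1.0 / (1.0 + np.exp(-(A_hat @ H @ theta)))  # sigmoid activation

# toy 3-node chain graph with 2-dimensional node features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.ones((3, 2))
theta = np.zeros((2, 2))
H1 = gcn_layer(X, A, theta)   # with zero weights, every entry is sigmoid(0) = 0.5
```

Stacking several such calls, each with its own `theta`, gives the multi-layer nonlinear function described in the text.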

Our work
The fundamental concept is to represent the sensor network as an unweighted graph. We normalize the sensor interconnections and sensor properties before constructing the adjacency and feature matrices. Then, as demonstrated in Figure 2, we feed them into the GCN-GRU to train the model.
This section digs further into the structure and implementation of the algorithm. To capture both spatial and temporal connections at the same time, we employ the temporal graph convolutional network of Zhao et al. In this method [24], the graph convolutional network (GCN) and the gated recurrent unit (GRU) are integrated. The GRU captures temporal dependencies by learning the dynamic changes of the sensor network, and the GCN captures spatial dependencies by learning its intricate topological structure. The objective of the multi-sensor network in this study is to forecast the sensor characteristic at a certain moment based on the network's past detection data. The term "sensor characteristic" in our approach refers to a broad concept that encompasses the center frequency, sampling rate, beamforming, and bandwidth.
Definition 1: multi-sensor network. In this work, we make use of an unweighted graph. We use G = (V, E) to define the sensor network's topological structure and treat each sensor as a node, where V denotes the collection of sensor nodes, V = {v_1, v_2, …, v_N}, N is the number of nodes, and E is the set of edges (v_i, v_j). To represent the relationships between sensors, the adjacency matrix A ∈ R^{N×N} is employed. Only the numbers 0 and 1 appear in the adjacency matrix: the element 0 indicates that there is no link between two sensors, while 1 indicates that there is one. Because the sensors are all linked and the learning process is end-to-end, we do not need to address data fusion, and so we also do not need a collaborative control system.
Definition 2: feature matrix X ∈ R^{N×P}. In the multi-sensor network, we define the characteristics of each node as the intrinsic properties of each sensor, denoted by X ∈ R^{N×P}, where N is the number of nodes and P is the feature dimension of each node, and we define X_t as the bandwidth of each sensor at time t. The properties of each node can be characterized by any sensor parameter, such as center frequency, sampling rate, bandwidth, or beamforming.
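As a concrete illustration of the two definitions, the matrices might be laid out as follows. The 4-sensor graph, the bandwidth values, and the choice of MHz units are all hypothetical, invented for this sketch.

```python
import numpy as np

# hypothetical 4-sensor network: A[i, j] = 1 means sensors i and j are linked
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# feature matrix X in R^{N x P}: one row per sensor, columns are the chosen
# characteristic (here bandwidth, in MHz) sampled at P successive time steps
X = np.array([
    [20.0, 20.0, 40.0],
    [40.0, 20.0, 20.0],
    [20.0, 40.0, 20.0],
    [40.0, 40.0, 40.0],
])

assert np.array_equal(A, A.T)            # undirected, unweighted graph
assert set(np.unique(A)) <= {0.0, 1.0}   # adjacency entries are only 0 or 1
```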
As shown in equation 2, spatio-temporal sensor prediction may be thought of as learning the mapping function f from the multi-sensor network topology G and the feature matrix X, and then calculating the multi-sensor features over the succeeding T moments: [X_{t+1}, …, X_{t+T}] = f(G; (X_{t−n}, …, X_{t−1}, X_t)),
where T denotes the length of the predicted time series and n denotes the length of the historical time series. We now describe how the sensor network and the GCN-GRU model are used to do sensor forecasting. The GCN-GRU model has two parts: the GCN and the GRU. As illustrated in Figure 2, we take the features of n time steps as the input of the network and use the graph convolutional neural network to capture the topology of the multi-sensor network, obtaining the spatial properties. Then we capture the temporal attributes by taking the generated time series containing the spatial attributes as the input to the GRU model, where the dynamic transformation is accomplished by information exchange between the components. At the end of the operation, a fully connected layer generates the results. Given a feature matrix X and an adjacency matrix A, the GCN (graph convolutional neural network) model constructs a filter in the Fourier domain, and the GCN model is created by stacking multiple convolutional layers, as shown in equation 3. By using its first-order neighbourhood, the filter captures spatial properties between nodes in the network.
H^{(l+1)} = σ(D̃^{−1/2} Ã D̃^{−1/2} H^{(l)} θ^{(l)}), where Ã = A + I_N is the adjacency matrix with added self-connections, I_N is the identity matrix, D̃ is the degree matrix with D̃_ii = Σ_j Ã_ij, H^{(l)} is the output of layer l, θ^{(l)} contains the parameters of that layer, and σ(⋅) represents the sigmoid function providing the nonlinearity.
To get the spatial dependence, we employ the 2-layer GCN model [24], which can be expressed as equation 4: f(X, A) = σ(Â ReLU(Â X W_0) W_1), where Â = D̃^{−1/2} Ã D̃^{−1/2} denotes the pre-processing step, W_0 ∈ R^{P×H} is the weight matrix from the input to the hidden layer, P is the length of the feature matrix, H is the number of hidden units, and W_1 ∈ R^{H×T} is the weight matrix between the hidden layer and the output layer. f(X, A) ∈ R^{N×T} represents the output with prediction length T, and ReLU, standing for rectified linear unit, is a frequently used activation function in modern deep neural networks.
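The 2-layer GCN of equation 4 can be sketched directly in NumPy. The random graph, the weight scales, and the dimensions below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def two_layer_gcn(X, A, W0, W1):
    """Equation-4-style sketch: f(X, A) = sigma(A_hat ReLU(A_hat X W0) W1)."""
    A_tilde = A + np.eye(A.shape[0])                 # self-connections
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt        # pre-processing step
    H = np.maximum(A_hat @ X @ W0, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(A_hat @ H @ W1)))   # sigmoid output

rng = np.random.default_rng(0)
N, P, hidden, T = 5, 8, 16, 3          # nodes, history length, hidden units, horizon
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T         # make it symmetric with zero diagonal
X = rng.random((N, P))
W0 = rng.standard_normal((P, hidden)) * 0.1
W1 = rng.standard_normal((hidden, T)) * 0.1
Y_hat = two_layer_gcn(X, A, W0, W1)    # shape (N, T): T-step prediction per sensor
```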
The graph convolution process f(X, A), as defined in equation 4, is then combined with the GRU: u_t = σ(W_u[f(A, X_t), h_{t−1}] + b_u), r_t = σ(W_r[f(A, X_t), h_{t−1}] + b_r), c_t = tanh(W_c[f(A, X_t), (r_t ∗ h_{t−1})] + b_c), h_t = u_t ∗ h_{t−1} + (1 − u_t) ∗ c_t, where ∗ represents point-wise multiplication, and W and b denote the weights and biases of the training process.
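One step of this combined cell can be sketched as follows, under the assumption that the graph-convolved features f(A, X_t) have already been computed; the function name `tgcn_cell` and the toy dimensions are our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tgcn_cell(gc, h_prev, Wu, Wr, Wc, bu, br, bc):
    """One GRU step whose input gc is the graph-convolved feature f(A, X_t).

    u_t = sigmoid([gc, h_{t-1}] Wu + bu)   update gate
    r_t = sigmoid([gc, h_{t-1}] Wr + br)   reset gate
    c_t = tanh([gc, r_t * h_{t-1}] Wc + bc)
    h_t = u_t * h_{t-1} + (1 - u_t) * c_t  (* is point-wise multiplication)
    """
    u = sigmoid(np.concatenate([gc, h_prev], axis=1) @ Wu + bu)
    r = sigmoid(np.concatenate([gc, h_prev], axis=1) @ Wr + br)
    c = np.tanh(np.concatenate([gc, r * h_prev], axis=1) @ Wc + bc)
    return u * h_prev + (1.0 - u) * c

rng = np.random.default_rng(1)
N, F, H = 4, 6, 8                      # sensors, graph-conv features, hidden units
gc = rng.random((N, F))                # stand-in for f(A, X_t)
h0 = np.zeros((N, H))                  # initial hidden state
Wu, Wr, Wc = (rng.standard_normal((F + H, H)) * 0.1 for _ in range(3))
bu = br = bc = np.zeros(H)
h1 = tgcn_cell(gc, h0, Wu, Wr, Wc, bu, br, bc)   # next hidden state, shape (N, H)
```

Iterating this cell over the n historical time steps yields the hidden state that the fully connected output layer maps to the prediction.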
The purpose of the training procedure is to reduce the difference between the real sensor data and the predicted value: loss = ||Y_t − Ŷ_t|| + λ L_reg, where Y_t is the ground-truth value and Ŷ_t is the predicted value. The first term reduces the difference between the actual sensor feature and the predicted value. The second term, L_reg, is an L2 regularization term that helps avoid overfitting, and λ is a hyperparameter.
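The loss above can be written out as a short function. The specific λ value and the helper name are assumptions for illustration only.

```python
import numpy as np

def training_loss(Y, Y_hat, weights, lam=1.5e-3):
    """||Y - Y_hat|| plus an L2 regularization term weighted by lambda."""
    error = np.linalg.norm(Y - Y_hat)               # prediction error term
    l_reg = sum(np.sum(W ** 2) for W in weights)    # L2 penalty over all weights
    return error + lam * l_reg

Y = np.array([1.0, 2.0, 3.0])
Y_hat = np.array([1.0, 2.0, 3.0])
W = [np.ones((2, 2))]
loss = training_loss(Y, Y_hat, W, lam=0.5)   # zero error + 0.5 * 4 = 2.0
```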

Data description
In this section, we test the GCN-GRU model's prediction performance using two simulation programs, Mozi and STK, both of which simulate sensor networks. In the experiments, we use beam aiming as the sensor information without sacrificing generality.
(1) Mozi. The Mozi joint combat deduction system is based mostly on contemporary naval and air combat deduction simulation, which allows for quick scenario building and simulation. Closed-loop "big sample" simulation based on experimental design tools is supported. The experimental data is divided into two parts. The first is a 156×156 adjacency matrix, which specifies the sensors' spatial connections: the values in the matrix reflect the connections between the sensors, with each row representing one sensor. The other is a feature matrix, which describes how each sensor's beam pointing varies over time: each row represents a sensor, and each column records the beam aiming of the sensors at a point in time. We aggregate the beam aiming of each sensor every 15 seconds.
(2) STK. Analytical Graphics, Inc.'s Systems Tool Kit is a multi-physics software tool that allows engineers and scientists to perform complicated assessments of ground, sea, air, and space platforms and share the findings in a single integrated environment. The data format is identical to Mozi's.
In the experiments, the input data was normalized to the interval [0, 1]. In addition, 80% of the data was used as the training set and the remaining 20% as the testing set. We predicted the beam pointing for the next 30 minutes.

Evaluation metrics
To evaluate the prediction performance of the GCN-GRU model, we use five metrics to evaluate the difference between the real sensor information Y_t and the prediction Ŷ_t: (1) Root Mean Squared Error (RMSE); (2) Mean Absolute Error (MAE); (3) Accuracy; (4) Coefficient of Determination (R2); (5) Explained Variance (Var). RMSE and MAE measure the prediction error: the smaller the value, the better the prediction effect. Accuracy measures the prediction precision: the higher the value, the better the prediction effect. R2 and Var serve as correlation coefficients and indicate how well the prediction result matches the actual data: the larger the value, the better the prediction effect.
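Under the common definitions of these metrics (the original formulas were lost in extraction, so the exact forms below are our reconstruction), they can be computed as:

```python
import numpy as np

def metrics(Y, Y_hat):
    """RMSE, MAE, Accuracy, and R2 between truth Y and prediction Y_hat."""
    rmse = np.sqrt(np.mean((Y - Y_hat) ** 2))                       # error magnitude
    mae = np.mean(np.abs(Y - Y_hat))                                # absolute error
    acc = 1.0 - np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)       # precision
    r2 = 1.0 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2)
    return rmse, mae, acc, r2

Y = np.array([1.0, 2.0, 3.0, 4.0])
Y_hat = np.array([1.0, 2.0, 3.0, 4.0])
rmse, mae, acc, r2 = metrics(Y, Y_hat)   # perfect prediction: 0, 0, 1, 1
```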

Experimental results
Capacity for spatial and temporal prediction. We compare the GCN-GRU model with the GCN model and the GRU model to see whether the GCN-GRU model can capture spatial and temporal features from sensor data. Figure 8 shows that the method based on spatio-temporal features yields good predictions, implying that the GCN-GRU model can capture both the spatial and the temporal aspects of sensor data.
To further understand the GCN-GRU model, we choose one sensor and visualize its prediction results on the test set. Figures 11, 12, 13, and 14 depict the visualization results for 15-minute, 30-minute, 45-minute, and 60-minute forecast horizons, respectively. The outcomes are as follows:

Conclusion
In sensor networks, sensor network prediction deals with complicated spatial dependencies and temporal dynamics. On the one hand, we employ the GCN to obtain the spatial dependency by capturing the topological structure of the sensor network. On the other hand, the GRU is used to capture the dynamic variation of sensor features in the network in order to obtain the temporal dependency and, ultimately, to realize sensor prediction tasks.

Fig. 1 .
Fig. 1. Temporal dependence: the sensor parameters of Sensor 1 show a periodic change over 30 minutes.

Fig. 2 .
Fig. 2. Spatial dependence: the key idea is to abstract the sensor network as an unweighted graph.

Fig. 3.
Fig. 3. Overview. We use the historical sensor features as input and the Graph Convolution Network (GCN) and Gated Recurrent Unit (GRU) model to reach the final prediction result.

Fig. 4.
Fig. 4. Changes in RMSE and MAE of the sensor network prediction.

Fig. 5 .
Fig. 5. The accuracy of the sensor network prediction.