Analytical estimates of efficiency of attractor neural networks with inborn connections

The analysis is restricted to features conferred on a neural network by its inborn (not learned) connections. We study attractor neural networks in which, for almost all of the operation time, the activity resides in close vicinity of a relatively small number of attractor states. The number of the latter, M, is proportional to the number of neurons in the network, N, while the total number of states is 2^N. A unified procedure for the growth/fabrication of neural networks whose sets of attractor states have dimensionality d = 0 or d = 1, based on model molecular markers, is studied in detail. The specificity of the network (d = 0 or d = 1) depends on the topology (i.e., the set of distances between elements) which the physical nature of the molecular markers provides to their set. Estimates of the network parameters, and the trade-offs between them, are calculated analytically. The proposed mechanisms reveal simple and efficient ways to implement, in artificial as well as natural neural networks, multiplexity, i.e. the use of the activity of single neurons to represent multiple values of the variables operated on by the neural system. We discuss how neuronal multiplexity provides efficient and reliable ways of performing functional operations in neural systems.


Introduction
The analysis and applications of neural networks are currently on a steep rise. Among the recent achievements in this field are human-like recognition of images, human-level identification of spoken words, and machine translation between multiple languages [1]. A general understanding is emerging that constructing neuromorphic devices might present the easiest way to obtain smart information technologies (deep learning and related methods). These impressive results have increased the visibility of neural network research, both modern and classical, and attracted tens of thousands of scientists to it. In this new research atmosphere, topics of computational neuroscience that have not yet been included in the modern neural network toolkit become particularly important: potentially, they can improve particular aspects of neural network functioning.
In the present work, we complement the computer simulations of [2] with a thorough mathematical analysis, which enables us to make analytical estimates of the network operational parameters.
The structure of the present note is as follows. First, we consider the notion of neural attractors and their types. In this work, we restrict the analysis to stationary attractors, i.e. states of network activity that are stable in time. Our attention is focused on the relations between these states: the distances between the states in the configuration space of the network and the properties of the set of attractor states as a whole. Our contribution to the field is an analysis of preformed, inborn connections between the neurons of the network. Most previous works, following John Hopfield, considered networks in which the connections were established using the Hebb rule [3], i.e. formed depending on the coincidence of the activities of the neurons to be connected. We deal with connections established by a mechanism that does not depend on neural activity. Unlike previous works, which simply postulated the existence of inborn connections with the needed properties [4], we introduce in Section 2 biologically realistic details of the connection-setting mechanism: the method of model molecular markers. The gains obtained with this method are analyzed in detail throughout our work.
Section 3, consisting of four subsections, presents the details of the analysis. The first two subsections are devoted to networks with d = 0, the last two to networks with d = 1.
Subsection 3.1 deals with the evaluation of the number of attractor states for d = 0. The number to find is determined by the condition that every "theoretical" attractor state, i.e. one predetermined by the connection-formation process, is stable. The analysis yields a relation between the number of stable theoretical attractor states and the number of neurons in the network. The analytical estimates are compared to the results of computational experiments [2]. Subsection 3.2 continues the analysis to obtain the trade-off relations between the number of attractor states, the number of neurons in the network, and the number of active neurons in each attractor state. The results are compared with the computational data [2] presented in Figs. 1 and 2.
In Section 3.3 an important property of d = 1 attractors is evaluated. The overall structure of these attractors is described by the "snake-in-a-box" metaphor: the points of the attractor are supposed to form the axis of a snake, which is put into the box. In this case, points on the snake axis that are at a large distance from each other along the axis cannot come closer to each other in the configuration space of the network than some positive value, dubbed the "snake thickness". The dependence of the distance between points of the snake axis in configuration space on the distance between these points along the snake axis is calculated in this section. The calculations are performed for the algorithm of obtaining d = 1 attractors described in Section 2. Again, the results are compared to computational experiments, showing good agreement.
Section 3.4 treats the problem of the upper limit of the length (the size) of a continuous 1-d attractor in the network. The estimate is in accordance with the computational experiments.
As argued in the Conclusion, the most important results of this work are: (1) the formulation of an algorithm for forming neural networks whose specificity is determined by the topology of model molecular markers, and (2) the finding that the number of attractor points in neural networks of biologically realistic size can exceed the number of neurons by 100-1000 times.

Inborn attractors
We consider neural networks that contain N neurons. The behavior of the i-th neuron (i = 1, ..., N) is described by its phase function φ_i(t). There are two constants: the duration of excitation, w, and the duration of refractoriness, r. Usually, w = 1 and r = 0. In computational experiments [2] we have explored methods for making inter-neuronal connections that enable the neural network to have attractor states (states which are transformed into themselves under the rules of neural dynamics specified above). The states of the network are N-dimensional vectors of ones (neuron is excited) and zeros (neuron is quiet). Our approach is based on using model molecular markers, which we first distribute randomly between the neurons. Then connections between the neurons are established, depending on the properties of the markers which the neurons received at the previous stage. The set of all attractor states (SAAS) of the network can be characterized by the pairwise distances between the states and by the properties of the whole set. Two cases are of particular importance: (1) all attractor states are at large distances from each other; we say that such a SAAS has dimensionality d equal to zero (d = 0); (2) each attractor state has two other attractor states close to it, and all states compose a closed chain in which each state has an immediate neighbor, then a neighbor of the neighbor, etc. The distance to neighbors grows linearly with the degree of the neighboring relation up to some limit, beyond which the distances do not systematically increase but fluctuate around some value, which is about the same as the average distance between states in a SAAS with d = 0. If the set of attractor states has these properties, we say that the SAAS of the network has dimensionality d = 1.
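The attractor property above can be stated operationally: a state is an attractor if one step of the network dynamics maps it onto itself. A minimal sketch in Python, assuming the synchronous L-winners dynamics used in the computational experiments (the L most excited neurons fire; breaking ties by neuron index is our simplification):

```python
def l_winners_step(T, x, L):
    """One synchronous step of the L-winners dynamics: the L neurons
    receiving the largest summed excitatory input become active.
    T is the 0/1 connection matrix, x the current 0/1 state vector."""
    N = len(x)
    inputs = [sum(T[i][j] * x[j] for j in range(N)) for i in range(N)]
    # ties are broken by neuron index -- a simplification of ours
    winners = sorted(range(N), key=lambda i: (-inputs[i], i))[:L]
    nxt = [0] * N
    for i in winners:
        nxt[i] = 1
    return nxt


def is_attractor(T, x, L):
    """A state is an attractor state if the dynamics maps it onto itself."""
    return l_winners_step(T, x, L) == x
```

For example, in a network consisting of two disjoint fully connected triples of neurons, the indicator vector of either triple is an attractor state under this dynamics, while a mixed state is not.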
Let M be the total number of molecular markers. To obtain a neural network having a SAAS with d = 0, we divide the markers into n = M/L^2 groups. Inside a group, the distance between markers is zero; between the groups it is "very large", say close to N.
To obtain a neural network having a SAAS with d = 1, we endow the markers with a different metric. We consider all M markers to be placed equidistantly on a ring. The distance between any two markers is defined as the minimum number of other markers located between the selected ones.
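The two metrics can be made concrete as follows (a sketch; the marker labels, the "very large" constant, and the function names are ours):

```python
def group_distance(a, b, group_of, big=10**6):
    """d = 0 metric: zero inside a group, 'very large' between groups.
    group_of maps a marker label to its group index."""
    return 0 if group_of[a] == group_of[b] else big


def ring_distance(a, b, M):
    """d = 1 metric: markers 0..M-1 sit equidistantly on a ring; the
    distance is the minimum number of other markers between the two."""
    if a == b:
        return 0
    arc = abs(a - b) % M
    arc = min(arc, M - arc)
    return arc - 1
```

For M = 10, markers 0 and 1 are at distance 0 (no marker lies between them), and so are markers 0 and 9 across the seam of the ring.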
Once the distances between markers are established, the further procedures leading to the formation of neural networks with the desired SAAS properties do not depend on the type of SAAS. The markers are distributed between the N neurons randomly, subject to only one restriction: the distance between markers inside one neuron should exceed some pre-determined value, Δ. As soon as the markers are distributed between the neurons, the latter are connected with excitatory synapses. The connections are symmetric and are set between two neurons if they have markers whose distance is less than another pre-determined value, δ. Computational experiments have demonstrated that the two types of marker metrics described above yield neural networks with a SAAS with d = 0 and with d = 1, respectively, when the same conditions for marker distribution and connection setting are used in both cases [2].
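The growth procedure can then take the metric as a parameter, everything else being metric-independent. A sketch under our assumptions (a target of k markers per neuron, greedy random placement with rejection; the parameter names delta_big for Δ and delta_small for δ are ours):

```python
import random


def grow_network(N, M, k, delta_big, delta_small, dist):
    """Distribute M markers among N neurons (at most k per neuron) so
    that markers inside one neuron are farther apart than delta_big,
    then symmetrically connect neurons holding markers closer than
    delta_small.  Returns (marker sets, connection matrix)."""
    neurons = [[] for _ in range(N)]
    markers = list(range(M))
    random.shuffle(markers)
    for m in markers:
        # admissible neurons: a free slot and no conflicting marker
        ok = [i for i in range(N)
              if len(neurons[i]) < k
              and all(dist(m, m2) > delta_big for m2 in neurons[i])]
        if ok:  # unplaceable markers are simply dropped in this sketch
            neurons[random.choice(ok)].append(m)
    T = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if any(dist(a, b) < delta_small
                   for a in neurons[i] for b in neurons[j]):
                T[i][j] = T[j][i] = 1
    return neurons, T
```

By construction the resulting matrix T is symmetric with zero diagonal, and no neuron holds two markers at distance ≤ Δ.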

Parameter estimation
In this section we obtain parameter estimates for both types of attractors described in Section 2.

Number of inborn attractor states for d=0
Consider T_ij, the N×N matrix of inter-neuronal connections, with value 1 at positions (i, j) for which an excitatory connection has been formed as described above, and value 0 otherwise. The mean value of the matrix elements is: γ = 1 − (1 − (L/N)^2)^n. (1) We suppose that n is proportional to N^2/L^2, i.e. n = q(N/L)^2. As n → ∞, we have: γ → 1 − e^(−q). (2) Now let the network have at its input one of its theoretical attractor patterns. Then the probability that a "foreign" neuron (one outside the pattern) has i excitatory inputs is binomial, C(L, i) γ^i (1 − γ)^(L−i). So the probability that at least one of the (N − L) foreign neurons will get all L units of excitation is approximately: (N − L) γ^L. (3) Requiring this probability to stay of order one, comparison of (2) and (3) finally yields: q = −ln(1 − (N − L)^(−1/L)). (4) Table 1 gives numerical values of q for a set of N values at L = 20. Although the computational estimates give the constant value q ≈ 1, one can see that the analytical reasoning does not diverge too far from the computational-experiment estimates.
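The comparison can be checked numerically. Under our reading that stability requires (N − L)γ^L ≈ 1 with γ = 1 − e^(−q), the critical q is:

```python
import math


def critical_q(N, L):
    """q solving (N - L) * gamma**L = 1 with gamma = 1 - exp(-q);
    a sketch of the stability condition as reconstructed above."""
    gamma = (N - L) ** (-1.0 / L)
    return -math.log(1.0 - gamma)


for N in (300, 1000, 3000):
    print(N, round(critical_q(N, 20), 2))
```

The resulting values stay close to the computational estimate q ≈ 1 and drift only slowly with N.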

Trade-off relations for attractors with d=0
For neural network applications, it is important to know how the network behaves depending on the values of its parameters. In particular, it is important to know whether the attractor states are stable, i.e. whether after small perturbations the state of the system returns to the original attractor state. (In Figs. 1 and 2, dots show the computational experiments and the broken line is the least-squares regression; L = 20.)
When the network resides in an attractor state S_m, the L neurons which are active get the following input: L − 1, (6) since every pair of neurons within one attractor state is connected. The input to the rest (N − L) of the neurons is approximately: T̄L ≈ γL. (7) Here T̄ is the average value of a matrix element of T. The distinction between the right-hand sides of (6) and (7) enables the neural network to discriminate between attractor and non-attractor states. Thus, we obtain the trade-off relation between the number of attractor states n (n < n_cr), the number of neurons in the network, N, and the number of active neurons in each attractor state, L. Besides, we get analytical reasoning which qualitatively explains the data of the computational experiments displayed in Figs. 1 and 2.
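A minimal numeric sketch of the discrimination margin implied by (6) and (7), under our reading that active neurons receive input L − 1 while foreign neurons receive on average γL:

```python
import math


def discrimination_gap(N, L, n):
    """Difference between the input of an active neuron (L - 1) and the
    mean input of a foreign neuron (gamma * L); gamma via n = q(N/L)**2."""
    q = n * (L / N) ** 2
    gamma = 1.0 - math.exp(-q)
    return (L - 1) - gamma * L
```

The gap shrinks as n grows and eventually becomes negative, which is one way to see why a critical value n_cr exists.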

Distances between attractor states in a d = 1 continuous attractor obtained with the help of model molecular markers
Fig. 3 presents a fragment of the layout of all neurons in the order of the ordinal numbers of the markers which they contain (as each neuron has k markers, each neuron appears k times in this layout). The neurons become excited in the same order as the activity propagates over the attractor. We now consider the distance between an initial state X_0 and the states following it, X_1, X_2, X_3, ..., X_t (t ≪ M).
Fig. 3. Layout of neurons in accordance with their markers' ordinal numbers.
Fig. 4 shows the inner product (X_0, X_t) of the two states plotted against t. In the beginning, the intersection between X_0 and X_t decreases linearly from L to 0. Then it stays equal to 0 up to some t = Δ. For t > Δ, the intersection is random. Its mean value can be obtained as follows. Each neuron of the first state later takes part in (k − 1) other states. That means that for any neuron which is excited in X_0, the probability of being excited in X_t (for large t) is p = (k − 1)L/(M − L), so the average intersection between X_0 and X_t is pL = (k − 1)L^2/(M − L). The distance between two states is connected with their intersection by the relation ρ(X_0, X_t) = 2(L − (X_0, X_t)), whence we get, for t > Δ, the average distance D = 2(L − (k − 1)L^2/(M − L)). For M = 900 we have D ≈ 29, which coincides with the results of the computational experiments [2].
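The asymptotic level (the "snake thickness") can be sketched directly from these formulas. The parameters L = 15 and k = 3 below are our assumption (the text specifies only M = 900), chosen to illustrate the computation:

```python
def asymptotic_overlap(L, k, M):
    """Mean intersection of X_0 with a distant state X_t:
    p * L with p = (k - 1) * L / (M - L)."""
    p = (k - 1) * L / (M - L)
    return p * L


def snake_thickness(L, k, M):
    """Mean distance D = 2 * (L - overlap), using the relation
    rho(X_0, X_t) = 2 * (L - (X_0, X_t))."""
    return 2 * (L - asymptotic_overlap(L, k, M))
```

With the assumed L = 15 and k = 3 and the quoted M = 900, this gives D ≈ 29, the value reported from the computational experiments [2].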

Evaluation of k_c in one-dimensional continuous attractors
As each neuron takes part in k attractor states, each row of the matrix T contains about 2kδ positive (equal to 1) matrix elements, so the probability that a given matrix element is positive is about 2kδ/N. The probability of firing of one excessive neuron, which is not active in X_0, is then (2kδ/N)^L; accordingly, the probability that this does not happen for any of the remaining N − L neurons is [1 − (2kδ/N)^L]^(N−L).
(To study the structure of attractors with d = 1, a modified, artificial neural dynamics was used in [2]: the neuron threshold increases when the neuron stays active for a long time. Due to this modification of the neuron model, the activity of a network with a 1-d "snake-in-a-box" attractor can run for an indefinite time over the ring of attractor states.)
In computational experiments, the critical value k_c was defined as the value of k such that five random networks with a given k show perfect cycles. Thus, for k_c we obtain the equation 5N(2k_c δ/N)^L = 1, and finally, for the critical value of k, we have: k_c = (N/2δ)(5N)^(−1/L). (8) The factor (5N)^(1/L) is of the order of 1 and changes very slowly with N, so (8) yields a practically linear dependence of k_c on N. In particular, at L = 15, N = 300, (5N)^(1/L) ≈ 1.62 and k_c ≈ 7.7 (in experiment, k_c = 5); at L = 15, N = 1200, (5N)^(1/L) ≈ 1.79 and k_c ≈ 28 (in experiment, k_c = 21). Thus, we have obtained an analytical justification of the results of the computational experiments [2].
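Reading (8) as k_c = (N/2δ)(5N)^(−1/L), the near-linear growth with N can be checked numerically. The value δ = 12 below is our assumption, chosen so that the analytical values reported for L = 15 come out:

```python
def critical_k(N, L, delta):
    """k_c = (N / (2 * delta)) * (5 * N) ** (-1 / L), our reading of (8);
    delta is the connection-setting distance from Section 2."""
    return N / (2.0 * delta) * (5.0 * N) ** (-1.0 / L)
```

With the assumed δ = 12, critical_k(300, 15, 12) ≈ 7.7 and critical_k(1200, 15, 12) ≈ 28; their ratio is ≈ 3.65, close to the factor 4 of an exactly linear dependence, which illustrates why the dependence of k_c on N is practically linear.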

Fig. 1 shows the "error" in states as a function of N and n at L = 20, obtained in Monte Carlo computational experiments. The synchronous L-winners dynamics of the network was used [5], and the "error" was taken to be the Hamming distance between the experimentally obtained stable state and the "theoretical" attractor point which served as the initial condition. It can be seen that the error grows with n; for n less than a critical value n_cr, the "theoretical" attractor points are stable.
Fig. 2 shows the dependence of α = n_cr/N on N. The linear empirical approximation shows that α grows with increasing N.

Conclusion
The most important results of this work are: (1) the formulation of an algorithm for forming neural networks whose specificity is determined by the topology of model molecular markers, and (2) the finding that the number of attractor points in neural networks of biologically realistic size can exceed the number of neurons by 100-1000 times. The analytical estimates obtained above agree with the computational experiments of [2].

DOI: 10.1051/ © Owned by the authors, published by EDP Sciences