Web service for solving optimisation problems using swarm intelligence algorithms

In this research a web service was created that gives the user the possibility to apply swarm optimisation algorithms to continuous and discrete extremum problems online. The web service includes facilities for data input and for carrying out numerical experiments related to the study of optimisation problems with swarm intelligence methods. For the numerical experiments the user can tune the following parameters of the swarm intelligence algorithms: problem dimension; number of swarm particles; inertial, cognitive, and social coefficients. The user can also select the algorithm stop criterion and set the number of algorithm iterations. While the optimisation problem is being solved, the iteration process is visualised graphically. A protocol is also formed and can be exported to an .xls file. The algorithms in the web service were checked on well-known test functions and on conditional and non-conditional optimisation problems; some of them are built into the service.


Introduction
Solving real optimisation problems existing in different areas of science, technology, or economics is complicated by the following properties of their mathematical models: complex topology of the search surface, multiple extrema, non-smoothness, and high dimensionality. In this case it is expedient to use methods based on the imitation of natural processes, which are borrowed from nature and realise adaptive randomness of the search. Methods based on modelling the collective behaviour of self-organised population systems belong to this kind; examples are ant colony optimisation (A. Colorni, M. Dorigo, V. Maniezzo (1991)), particle swarm optimisation (PSO) (J. Kennedy, R. Eberhart (1995)), bee colony optimisation (D. Karaboga (2005)), glow-worm swarm optimisation (K.N. Krishnanand, D. Ghose (2005)), the monkey search algorithm (A. Mucherino, O. Seref (2007)), the fish school algorithm (C.J.A. Bastos-Filho, F.B. de Lima Neto, A.J.C.C. Lins, A.I.S. Nascimento, M.P. Lima (2008)), the cuckoo search algorithm (X.-S. Yang, S. Deb (2009)), the bat algorithm (X.-S. Yang (2010)), bacterial optimisation (Niu Ben, Wang Hong (2012)), the grey wolf algorithm (S. Mirjalili (2014)), etc. The primary algorithm among them is particle swarm optimisation (PSO), offered by J. Kennedy and R. Eberhart in [1] and developed by many other researchers (see, for instance, [2]-[6]). It is popular primarily because it can effectively solve a wide range of optimisation problems, including continuous, discrete, binary, and multi-criteria optimisation.

Problem topicality
Swarm intelligence methods are diverse and can be used for solving problems in different domains: telecommunications, aviation, space research, robotics, medical care, biology, etc. That is why swarm intelligence and the methods of researching it are a topical scientific direction. In many universities around the world, students of mathematical and computer science faculties study PSO algorithms in courses on modern optimisation and operations research methods. For solving real optimisation problems with these methods, computer mathematics software such as Matlab R2017a can be used: its Global Optimization Toolbox has the particleswarm function [7], which realises the PSO method. But Matlab is a proprietary desktop application and is not available to everybody. So creating a system that would give users the possibility to use PSO algorithms for solving training and practical optimisation problems online, and that would include theoretical information and facilities for experiments on researching and developing PSO algorithms, is a very topical problem.
The research objectives are: a) substantiating the need for a web service for solving optimisation problems of different types online with swarm intelligence methods; b) analysing the requirements for the main information technologies used to create the web service; c) designing and creating the web resource; d) carrying out numerical experiments to test the implemented methods on test functions.

Problem definition
A web resource called PSO Service should implement the main swarm intelligence algorithms, specifically the canonical PSO algorithm and its adaptive and hybrid variants, for solving continuous and discrete optimisation problems and one- and multi-criteria extremum problems. The web service user should have the possibility to: a) enter the objective function, the constraint functions, and direct constraints on variable values; b) set the parameters for solving the problem, specifically the swarm size, search margins, calculation precision, and the type of relations between swarm particles ("star", "ring", "random"); c) get the problem solution in a form understandable to the user. The user should also be able to see the function value for every iteration. The service should visualise the iterations of the PSO algorithms for two-dimensional optimisation problems. For solving conditional optimisation problems, specifically mathematical programming problems, the penalty function method will be used, with a choice among different penalties and the possibility to set their options. For numerical experiments on the operability and effectiveness of the PSO algorithms, the service should include a set of known test functions. Besides, the service should be multilingual and include general information on PSO algorithms and their use for solving different kinds of optimisation problems.

PSO algorithms realised in PSO Service
Here, the non-conditional global optimisation problem should be solved, formulated as minimisation of the objective function f(x) in the search region D:

f(x) → min, x ∈ D, (1)

where D is a hypercube of dimension d, x is the function vector argument, and x* is the global minimum point of the objective function f(x). A brief overview of the PSO algorithms for solving problems of kind (1) implemented in the PSO Service web service is presented below.

Canonical PSO algorithm for continuous optimisation problems
In canonical PSO the particle swarm is a set of decision points which move in space searching for the global optimum. While moving, the particles try to improve the values found before and exchange information with their neighbours.
The set of swarm particle positions is defined as X = {x_1, x_2, ..., x_s}, where s is the number of particles in the swarm. The position of particle i is the set of its coordinates (x_i1, x_i2, ..., x_id) in the search region D of dimension d, i = 1, ..., s. For optimisation, 20-50 particles are usually enough [4]. The swarm can find the global optimum even when the number of particles is less than the search region dimension d.
At the beginning of the PSO algorithm's work the particle swarm is initialised randomly. If there is no prior information about the function being optimised, the simplest way to generate the initial particle coordinates is the formula

x_ij = rand(x_jmin, x_jmax), j = 1, ..., d,

where x_ij is coordinate j of particle i, and rand(x_jmin, x_jmax) is a random number with uniform distribution on the interval defined by the search region borders for coordinate j.
A set of particle speed vectors V = {v_1, v_2, ..., v_s} is also associated with the swarm. All speeds can be set to zero at the initial stage, but practice shows that the following formula gives better results [3]:

v_ij = (rand(x_jmin, x_jmax) − x_ij) / 2,

where v_ij is speed component j of particle i. This way of defining the initial speeds guarantees that no particle gets out of the search region on the next iteration.
On the following algorithm steps, the speed and position components of the particles are changed according to the formulas (in coordinate form):

v_ij = w·v_ij + c1·r1·(p_ij − x_ij) + c2·r2·(g_j − x_ij), (6)
x_ij = x_ij + v_ij, (7)

where v_i and x_i are the new speed and position of particle i, respectively; p_i is the best previous position of particle i; g is the best solution found by the whole swarm; w is the inertial coefficient; c1 is the cognitive coefficient; c2 is the social coefficient; r1, r2 are random numbers uniformly generated on the interval [0; 1], different for every coordinate.
If during optimisation a particle gets out of the search region, its respective speed component is set to zero and the particle is returned to the nearest border.
The inertial coefficient w defines the influence of the particle's previous speed on its new value. The cognitive coefficient c1 describes the degree of the particle's individual behaviour and its intention to return to the best solution it has found. The social coefficient c2 sets the degree of collective behaviour and the intention to move towards the best solution of the particle's neighbours. The vectors (p_i − x_i) and (g − x_i) define the cognitive and social components of the particle's new speed. The random numbers r1, r2 add randomness to the search. Fig. 1 shows the geometrical interpretation of the correction rule (6)-(7) for the movement direction of swarm particles.
The coefficient values are selected within the ranges recommended in [3].

Fig. 1. Geometrical interpretation of rule (6)-(7).

The swarm remembers the best solutions found by the particles individually and by the whole swarm. On initialisation the initial particle positions are considered the best. On every following iteration, after applying formulas (6)-(7), the individual best position p_i of every particle and the whole-swarm best solution g are updated by the rule: if f(x_i) < f(p_i), then p_i = x_i; if f(p_i) < f(g), then g = p_i.

So, the PSO algorithm scheme is the following:
1. Set the algorithm parameters and initialise the swarm.
2. Find the best individual value p_i for every swarm particle i, i = 1, ..., s, and the best value g for the whole swarm.
3. Find new positions for all swarm particles according to formulas (6)-(7).
4. Check the iteration loop condition. If the condition is true, go to step 5, else go to step 2.
5. Output the results.

The conditions typical for population optimisation algorithms are used to determine when the iteration loop finishes: reaching the maximum number of iterations, reaching acceptable solution precision, or stagnation of the iteration process.
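The scheme above can be sketched compactly in code. The following Python sketch is an illustration only (the service itself is implemented in Java); the sphere objective, swarm size, coefficient values, and seed are example choices for this sketch, not the service's defaults.

```python
import random

def pso(f, bounds, s=30, w=0.72, c1=1.49, c2=1.49, max_iter=200, seed=1):
    """Minimise f over the box `bounds` with canonical PSO (star topology).

    bounds: list of (lo, hi) pairs, one per coordinate; s: swarm size.
    """
    rng = random.Random(seed)
    d = len(bounds)
    # Step 1: initialise positions uniformly in the search region.
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(s)]
    # Initial speeds chosen so no particle leaves the region on step one.
    v = [[(rng.uniform(lo, hi) - x[i][j]) / 2
          for j, (lo, hi) in enumerate(bounds)] for i in range(s)]
    # Step 2: personal bests and the whole-swarm best.
    p = [xi[:] for xi in x]
    fp = [f(xi) for xi in x]
    fg = min(fp)
    g = p[fp.index(fg)][:]
    for _ in range(max_iter):          # step 4: loop condition
        for i in range(s):
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                # Formula (6): inertial + cognitive + social components.
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (p[i][j] - x[i][j])
                           + c2 * r2 * (g[j] - x[i][j]))
                # Formula (7): move the particle.
                x[i][j] += v[i][j]
                # Out of region: zero the speed, return to nearest border.
                if x[i][j] < lo or x[i][j] > hi:
                    x[i][j] = min(max(x[i][j], lo), hi)
                    v[i][j] = 0.0
            fx = f(x[i])
            if fx < fp[i]:             # update p_i and, if needed, g
                p[i], fp[i] = x[i][:], fx
                if fx < fg:
                    g, fg = x[i][:], fx
    return g, fg

best, value = pso(lambda x: sum(t * t for t in x), [(-5, 5)] * 2)
```

On the 2-D sphere function the sketch converges towards the origin; the boundary handling follows the rule stated above.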

Adaptive particle swarm optimisation
The PSO algorithm has several free parameters (w, c1, c2, s) which can have different optimal values for different problems. The way of choosing the vector of free algorithm parameter values is called its adaptation strategy. An adaptive PSO which decreases the probability of premature convergence is the Inertia-Adaptive PSO algorithm (IA-PSO) offered in [8]. In this algorithm an individual value of the inertial coefficient is calculated for every swarm particle by the formula

w_i = w0 · (1 − dist_i / max_dist), w0 = rand(0.5, 1),

where dist_i is the current Euclidean distance from particle i to the best swarm solution g found in the current iteration, and max_dist is the biggest distance from a particle to the best solution found by the swarm in the current iteration, i.e. max_dist = max_i dist_i.
Because of this correction of the inertial coefficient, the attraction between the swarm and a particle increases when the distance between the particle and the global swarm solution g is significant: in this case the parameter w goes down and the particle stops moving further from g. To avoid premature convergence it is necessary to make sure the particles keep enough mobility at late optimisation stages. To reach this goal, the particle position correction formula (7) is modified by the random factor (1 + ρ):

x_ij = (1 + ρ) · (x_ij + v_ij),

where ρ is a random value with uniform distribution on the interval [-0.25, 0.25].
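The per-particle inertia computation can be illustrated as follows. The exact form, w_i = w0 · (1 − dist_i/max_dist) with w0 drawn from U(0.5, 1), is a reading of the partially garbled source formula, so treat this as a sketch rather than the service's exact code.

```python
import math
import random

def ia_pso_inertia(positions, g, rng):
    """Individual inertia weights for IA-PSO as described above.

    positions: current particle positions; g: best swarm solution.
    A particle at distance max_dist from g gets w_i = 0, a particle
    sitting on g gets w_i = w0 in [0.5, 1).
    """
    dists = [math.dist(x, g) for x in positions]
    max_dist = max(dists) or 1.0  # guard against all particles coinciding
    return [rng.uniform(0.5, 1.0) * (1 - di / max_dist) for di in dists]

rng = random.Random(0)
# Three particles at distances 0, 5, and 10 from the swarm best g = (0, 0).
ws = ia_pso_inertia([[0, 0], [3, 4], [6, 8]], [0, 0], rng)
```

The farthest particle gets zero inertia (it stops drifting away from g), while near particles keep most of their previous speed.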

Hybrid PSO algorithm with local search with Nelder-Mead method
Most swarm intelligence algorithms explore the search region D well but are less effective at examining its small areas. In particle swarm optimisation the local search quality can be improved by using lower values of the coefficient w, but this decreases the probability of localising the global solution and causes premature algorithm convergence. That is why it is better to perform the local search with special methods (gradient descent, conjugate gradients, Newton and quasi-Newton methods) [9]. One of the most popular and effective local search methods which does not require evaluating the objective function derivatives is the Nelder-Mead algorithm [10], also called the deformed polyhedron method.
The hybrid particle swarm optimisation process can be divided into two stages: 1) optimisation with the particle swarm algorithm and locating an approximate value of the global optimum; 2) search for a more exact solution with a local search method. In this case the local optimisation method takes the values found during particle swarm optimisation and, after several PSO iterations, continues locating the optimum from the best result found with PSO.
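The two-stage idea can be sketched as follows. For brevity, a simple compass (coordinate pattern) search stands in for Nelder-Mead as the stage-2 local method — both are derivative-free and both shrink their probe when no trial point improves the current value. The starting point plays the role of the best solution returned by stage-1 PSO; the Rosenbrock objective and the start point are example choices.

```python
def local_refine(f, x0, step=0.5, tol=1e-8, max_iter=10000):
    """Stage 2 of the hybrid scheme: derivative-free local refinement of
    the best point found by stage-1 PSO. A compass search stands in for
    Nelder-Mead to keep the sketch short."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for j in range(len(x)):          # probe each coordinate both ways
            for delta in (step, -step):
                trial = x[:]
                trial[j] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2  # shrink the pattern, as Nelder-Mead shrinks its simplex
    return x, fx

# Stage 1 would be PSO; a rough "PSO result" near the optimum is used here.
rosenbrock = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
x_star, f_star = local_refine(rosenbrock, [0.8, 0.6])
```

The local stage only has to descend into the valley already located by the swarm, which is exactly why the hybrid scheme avoids forcing PSO itself into fine-grained local search.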

Particle swarm optimisation for discrete problems
In integer and combinatorial optimisation, a global solution belonging to some discrete set of possible values should be found. The particle swarm optimisation algorithm was originally developed for solving continuous optimisation problems, but it can also be used for integer and binary optimisation problems.
In the simplest case, integer programming problems can be solved with continuous PSO by rounding the result to the nearest integer [11]. So the solution is searched for in the continuous region D, and before evaluating the corresponding objective function value the solution coordinates are recalculated by the formula

x̄_j = [x_j],

where [x] is the operation of extracting the integer part of the number x. This variant of PSO is called Discrete Particle Swarm Optimisation (DPSO).
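The decoding step can be sketched as a thin wrapper: the swarm moves in the continuous region, and the objective is evaluated only at the decoded integer point. Rounding half away from zero is used here as one common reading of "nearest integer"; the paper's own formula uses integer-part extraction.

```python
def to_integer_point(x):
    """Map a continuous PSO position to the integer point at which the
    objective is evaluated (the DPSO decoding described above)."""
    return [int(c + 0.5) if c >= 0 else -int(-c + 0.5) for c in x]

def discrete_objective(f):
    """Wrap a continuous objective so DPSO evaluates it only at integer
    points, while the swarm itself keeps moving in the continuous region D."""
    return lambda x: f(to_integer_point(x))

# Example: an integer sphere objective seen through the DPSO decoding.
sphere_int = discrete_objective(lambda p: sum(t * t for t in p))
```

Any continuous PSO variant from the previous sections can then optimise `sphere_int` unchanged.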

Using PSO for solving mathematical programming problems
Swarm algorithms are intended for solving global optimisation problems on a hypercube. For conditional linear and nonlinear optimisation, in particular mathematical programming problems of the kind

f(x) → min, (11)
g_i(x) ≤ 0, i = 1, ..., m, (12)

the external penalty function method with a power or non-smooth penalty function (at the user's choice) is used. In the external penalty function method, problem (11)-(12) comes down to solving a sequence of problems of the kind

F(x, r_k) = f(x) + r_k · P(x) → min, (13)

where r_k > 0 is the penalty parameter and P(x) is one of the external penalty functions, for example the power penalty P(x) = Σ_i (max{0, g_i(x)})² or the non-smooth penalty P(x) = Σ_i max{0, g_i(x)}.
It is known (see, for instance, [9]) that if the points x(k) are the global minimum points of the functions F(x, r_k), then as r_k → ∞ the sequence x(k) converges to the solution of problem (11)-(12).
In practice, problem (13) is solved with a fixed penalty parameter value r_k, which is a rather big positive number; the solution of problem (11)-(12) is then obtained with some accuracy. In PSO Service the parameter r_k has a default value of 1000, but it can be changed in the parameter input window to increase the accuracy of the solution of problem (11)-(12).
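The penalty construction can be sketched as a wrapper around the objective. The constraint form g_i(x) ≤ 0 and both penalty variants are the standard textbook ones; r_k defaults here to 1000, the value the text gives as the service's default.

```python
def penalised(f, gs, r_k=1000.0, smooth=True):
    """Build F(x, r_k) = f(x) + r_k * P(x) for the problem
    min f(x) subject to g_i(x) <= 0, as in formulas (11)-(13) above.

    smooth=True  -> power penalty     P(x) = sum(max(0, g_i(x))^2)
    smooth=False -> non-smooth penalty P(x) = sum(max(0, g_i(x)))
    """
    def F(x):
        viol = [max(0.0, g(x)) for g in gs]           # constraint violations
        pen = sum(v * v for v in viol) if smooth else sum(viol)
        return f(x) + r_k * pen
    return F

# Example: min x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
F = penalised(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
```

The wrapped function F is then handed to any of the unconstrained PSO algorithms from the previous sections; feasible points are not penalised at all, while infeasible ones pay r_k times the (squared) violation.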

PSO Service web resource development tools
PSO Service has a web interface. The server side is developed in the Java programming language with the Spring framework; the client side uses Thymeleaf templates and Bootstrap for styling. The Hibernate object-relational mapping system is used for working with the database. Table 1 gives the main characteristics of these tools.

PSO Service main function
The PSO Service web resource is bilingual (English and Ukrainian). It allows the user to learn the theory behind swarm algorithm principles and to solve optimisation problems with some of these algorithms.
On the main page (Fig. 2) the user can read information about swarm intelligence optimisation methods (Algorithms), use the algorithms implemented in the service for solving optimisation problems (Solve), get information on working with the service (Help), get the developers' contacts (Contact Us), and authorise or register in the system (Login).
Having learnt about the algorithms, the user can move on to solving an optimisation problem, but first needs to log in, or register if not registered yet (the "Login" function). After login the user can start solving optimisation problems with a selected algorithm: Canonical PSO, Hybrid PSO, Adaptive PSO, Discrete PSO, or Binary PSO. To optimise an objective function which is not among the test functions, the user fills in the form shown in Fig. 3: the function field, using the calculator-like buttons, and the fields "Error", "Dimension of the task", "The size of the swarm", and "Search space". Alternatively, the user can choose one of the predefined functions on the "Test functions" tab (Fig. 4).
For numerical experiments the user sets the swarm optimisation parameters on the "Parameter" tab (Fig. 5): the inertial, cognitive, and social coefficients, the stopping criterion, the number of repetitions, etc. The user can set, clear, or restore the default parameter values.
Having made the necessary settings and sent the form to the server, the user gets the problem solution. The results page has three tabs: "Calculation", "Input parameters", and "Graph/Protocol". The "Calculation" tab contains the main results of the algorithm execution (Fig. 6). The "Input parameters" tab shows the main parameters of the problem and the algorithm (Fig. 7).

Numeric experiment
A set of test functions (Table 2) is used for testing the accuracy of the algorithms implemented in the web service. These are non-linear functions used for assessing the characteristics of optimisation algorithms, specifically accuracy, convergence, etc.
The Griewank function (f_6) is a typical multimodal objective function with many local optima. Because of such topology, search algorithms tend to get stuck in one of the local optima.
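For reference, the standard form of the Griewank function (assumed here to match the service's built-in f_6) can be written down directly:

```python
import math

def griewank(x):
    """Griewank test function:
    f(x) = sum(x_j^2)/4000 - prod(cos(x_j / sqrt(j))) + 1, j = 1..d.
    Global minimum f(0, ..., 0) = 0, surrounded by many regularly
    spaced local optima produced by the cosine product term."""
    s = sum(t * t for t in x) / 4000.0
    p = math.prod(math.cos(t / math.sqrt(j + 1)) for j, t in enumerate(x))
    return s - p + 1.0
```

The shallow quadratic bowl combined with the oscillating product term is what makes swarm algorithms prone to settling in a nearby local optimum rather than the global one.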

Conclusion
1. Particle swarm optimisation is widely used in machine learning, specifically for neural network training and pattern recognition, and in parametric and structural optimisation (shape, size, and topology) in design, biochemistry, biomechanics, etc. Its effectiveness lets it compete with other global optimisation methods, and its low algorithmic complexity contributes to ease of implementation.
2. The most promising research directions in this field are: theoretical research into the reasons for the convergence of particle swarm optimisation and related topics in swarm intelligence and chaos theory; combining different algorithm modifications for solving complex tasks; research on PSO in multi-agent computer systems; and research on the possibilities of including analogues of more complex natural mechanisms in it.
3. In this research, the creation of a web service for solving optimisation problems with swarm intelligence methods is substantiated. The main requirements for such a system are set, and the information technology for its creation is described. The main algorithms and methods implemented in the service are described, and the results of numerical experiments checking them on test functions are given.

For keeping balance between local and global search, the coefficients c1 and c2 are usually set equal. Stagnation analysis in [4] has shown that certain coefficient values provide, in most cases, the best results and search stability. There are also various ways of setting the swarm parameters dynamically, but adaptation needs some additional initial algorithm iterations, which can increase the number of objective function evaluations needed for the optimum search.

Fig. 7. "Input parameters" tab for the Rosenbrock function.

The "Graph/Protocol" tab (Fig. 8) contains a dynamic visualisation of the optimisation process with the selected algorithm (when the problem dimension equals 2) and a diagram of the objective function value change on every iteration. The problem solution protocol in .xls format is also available there.

Table 1 .
Overview of technologies for PSO Service web resource implementation.
All these functions except Schwefel and Rosenbrock have their global minimum at the point (0, ..., 0). For the Schwefel function the global optimum is geometrically remote from the best local minimum, so the search algorithm is inclined to go in the wrong direction; the global solution for this function is at the point (420.9687, ..., 420.9687).

Table 2. Test functions. Here f_min is the minimum function value obtained after n repetitions of the algorithm, and x_min is the vector at which the function takes the value f_min.
4. In the future, new features are planned for the PSO Service software: other collective intelligence algorithms; more convenient input of functional constraints for conditional optimisation; more methods for reducing a constrained problem to a global optimisation problem on a hypercube; extended variants of hybridising PSO with other local extremum search methods; and new embedded test functions.
5. The created PSO Service can be used for teaching students who study swarm intelligence methods; by scientists who research the effectiveness of methods for solving different types of optimisation problems; and by users who apply optimisation methods for decision making in solving real extreme problems.
ITM Web of Conferences 15, 02009 (2017) DOI: 10.1051/itmconf/20171502009 CMES'17