An improved crow search algorithm with multi-strategy disturbance

Abstract. The crow search algorithm is a recent meta-heuristic that simulates the behavior of crows following one another and stealing food. Owing to its simplicity and robustness, it has been applied successfully in many fields. However, like other swarm intelligence optimization algorithms, the crow search algorithm suffers from slow convergence and a tendency to become trapped in local optima. To improve the convergence accuracy and late-stage search ability of the algorithm, a new hybrid variant, the multi-strategy disturbance improved crow search algorithm (MSD-CSA), is proposed on the basis of the traditional crow search algorithm. In MSD-CSA, a sharing mechanism is added to the random-tracking position update of the original algorithm, which reduces search blindness and improves convergence speed. In addition, the global optimal position is perturbed with different magnitudes at different iteration stages, which effectively increases the probability of escaping local optima and maintains the balance between the global and local search abilities of the algorithm. To evaluate the effectiveness of MSD-CSA, it is applied to 20 basic test functions and compared with other intelligent optimization algorithms. Experimental results show that the average convergence and robustness of the proposed algorithm are better than those of the compared algorithms, and its overall performance is good.


Introduction
In recent years, the meta-heuristic algorithm, as an effective evolutionary computing technique, has attracted the attention of many scholars [1]. A meta-heuristic algorithm is inspired by phenomena in the real environment, and its core idea is to balance random behavior and local search during the search process. Common meta-heuristic algorithms include Particle Swarm Optimization (PSO) [2], the Bat Algorithm (BA) [3], the Butterfly Optimization Algorithm (BOA) [4], the Flower Pollination Algorithm (FPA) [5], Teaching-Learning-Based Optimization (TLBO) [9], Pigeon-Inspired Optimization (PIO) [6], the Whale Optimization Algorithm (WOA) [7] and the Grey Wolf Optimizer (GWO) [8]. These algorithms are characterized by simplicity, few parameters and short running time. Therefore, meta-heuristic algorithms show excellent operability and optimization ability on many nonlinear and multimodal practical optimization problems.
The crow search algorithm (CSA) is a meta-heuristic algorithm proposed by Alireza Askarzadeh in 2016 [10]. Based on long-term research on crow habits, the algorithm simulates the social behavior of crows tracking one another and stealing food. This characteristic prevents the diversity of the population from decreasing greatly with the iterations and increases the probability of jumping out of local optima. At present, CSA has been successfully applied to many engineering and function optimization problems, such as chemical engineering and QSAR [11], image processing [12], feature selection [13], neural networks and support vector machines [14], aircraft maintenance inspection [15] and wireless sensor networks [16]. However, like most optimization algorithms, CSA itself suffers from slow convergence and a tendency to fall into local optima. To improve the convergence speed of CSA, many scholars have proposed improved variants. Wu Hao [17] proposed a crow search algorithm incorporating Levy flight (LFCSA) and applied it to finite element model updating. A chaotic sequence can be used to initialize the population so that the individuals are evenly distributed in the solution space and population diversity is increased; on this basis, Liu Xuejing et al. proposed the chaotic binary crow search algorithm (CBCSA) [18] for discrete spaces and used it to solve the {0-1} knapsack problem. By introducing an adaptive step size, Mohammadi et al. proposed a self-adaptive step size crow search algorithm (MCSA) [19] and applied it to non-convex economic load dispatch. As for the second kind of improvement, hybridization with other algorithms, Xiao Ziya [20] combined the crow search algorithm with the sine cosine algorithm and applied the hybrid to pressure vessel design. Arora et al. [21] combined the crow search algorithm with the grey wolf optimizer and used it to solve feature selection problems.
Pratiwi et al. [22] combined the crow search algorithm with the cat swarm algorithm and applied it to the vehicle routing problem with time windows. However, the above algorithms mainly aim at improving global search performance and do not focus on balancing the global and local search performance of the crow search algorithm, so the problem of slow convergence has not been well solved.
In this paper, the basic crow search algorithm is improved in three respects: making full use of the optimal individual, increasing population diversity, and adapting the step size, so that the algorithm better balances exploration and exploitation. The resulting method is called the multi-strategy disturbance improved crow search algorithm (MSD-CSA). In MSD-CSA, the ideas of learning from the optimal particle and of adding disturbances at different iteration stages are introduced to improve the convergence speed and accuracy of the algorithm at the same time. To study the performance of MSD-CSA in depth, 20 commonly used benchmark functions are used for testing, and MSD-CSA is compared with classical and recently proposed meta-heuristic algorithms (the differential evolution algorithm, particle swarm optimization, the bat algorithm, the grey wolf optimizer, the firefly algorithm and the crow search algorithm) to verify the effectiveness of the improved algorithm.
The structure of this paper is as follows. Section 2 briefly introduces the basic crow search algorithm. Section 3 describes the improvement strategies that produce the improved crow search algorithm. Section 4 tests the proposed algorithm on 20 benchmark functions and compares it with 6 algorithms. Finally, the conclusion is given in Section 5.

Crow search algorithm
It has been found that crows have a much larger brain-to-body ratio, and a correspondingly higher IQ, than other birds, so crows are considered very smart birds. By observing crow habits, researchers found that crows watch where other birds hide their food and steal it once the owner leaves. If a crow has stolen, it takes additional precautions, such as moving its own hiding place, to avoid becoming a victim in turn. Crows use their own experience of stealing food from other birds to predict the behavior of other thieves and can find the safest hiding place to keep their stores from being stolen. CSA is a population-based technique that finds the best solution to an optimization problem by simulating this biological behavior. CSA abides by the following four principles:
(1) Crows live in groups.
(2) Crows can remember where they hide their food.
(3) Crows follow each other and steal.
(4) Crows protect their stores from theft with a certain probability.
Based on these four principles, the basic process of CSA is described as follows:
Step 1: Initialize the parameters of CSA, such as the population size (n), the maximum number of iterations (Maxiter), the flight length (fl) and the awareness probability (AP).
Step 2: Initialize the crow individuals and the memory matrix. n crows are generated in the d-dimensional search space, and each crow x_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}) represents a feasible solution of the problem. Since the initial population has no experience, the initial memory matrix is assumed to equal the initial positions.
Step 3: Evaluate the quality of each crow according to the fitness function.
Step 4: Generate a new position for each crow in the d-dimensional search space. Suppose that crow i randomly follows some crow j in order to find the place where crow j hides its food. The position update of crow i then falls into two cases.
Case 1: crow j does not notice that crow i is following it. In this case, crow i approaches the hiding place memorized by crow j:
x_i^(t+1) = x_i^t + r_i × fl_i^t × (m_j^t − x_i^t).
Case 2: crow j notices that crow i is following it, and crow j leads crow i to a random position in the search space.
To sum up, the position update of crow i is
x_i^(t+1) = x_i^t + r_i × fl_i^t × (m_j^t − x_i^t),  if r_j ≥ AP;
x_i^(t+1) = a random position,  otherwise,
where r_i and r_j are random numbers uniformly distributed on [0, 1], AP is the awareness probability, and m_j^t is the hiding position memorized by crow j at iteration t. When AP is smaller, case 1 occurs with higher probability and the algorithm tends toward local search; when AP is larger, case 2 occurs with higher probability and the algorithm tends toward global search. fl_i^t is the flight length of crow i. When fl_i^t < 1, the next position of crow i lies between x_i^t and m_j^t, as shown in Figure 1; when fl_i^t > 1, it lies beyond the line segment between x_i^t and m_j^t, as shown in Figure 2. Hence fl affects the search ability of the algorithm: too large a value biases the search globally and worsens convergence, while too small a value makes it easy to fall into a local optimum.
Step 5: Check whether the new position of each crow is feasible. If it is, the crow moves to the new position; otherwise, the position is not updated.
Step 6: Calculate the fitness value of the new position of each crow.
Step 7: Update the memory of each crow: if the fitness of the new position is better than that of the memorized position, the memory is replaced by the new position.
Step 8: Repeat steps 4-7 until the termination condition is reached.
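The eight steps above can be sketched in Python as follows. This is a minimal illustration rather than the authors' implementation; parameter values such as fl = 2.0 and AP = 0.1 follow common CSA settings and are assumptions here.

```python
import numpy as np

def csa(fitness, dim, lower, upper, n=50, max_iter=1000, fl=2.0, ap=0.1, seed=0):
    """Minimal sketch of the basic crow search algorithm (Steps 1-8)."""
    rng = np.random.default_rng(seed)
    # Steps 1-2: initialize crows; memory starts equal to the positions
    x = rng.uniform(lower, upper, size=(n, dim))
    mem = x.copy()
    # Step 3: evaluate the fitness of the memorized positions
    mem_fit = np.apply_along_axis(fitness, 1, mem)
    for _ in range(max_iter):
        for i in range(n):
            j = rng.integers(n)            # crow i randomly follows crow j
            if rng.random() >= ap:         # case 1: crow j is unaware
                new = x[i] + rng.random() * fl * (mem[j] - x[i])
            else:                          # case 2: crow i is led to a random place
                new = rng.uniform(lower, upper, size=dim)
            # Step 5: accept the move only if it stays inside the bounds
            if np.all((new >= lower) & (new <= upper)):
                x[i] = new
            # Steps 6-7: update memory when the new position is better
            f = fitness(x[i])
            if f < mem_fit[i]:
                mem[i], mem_fit[i] = x[i].copy(), f
    best = np.argmin(mem_fit)
    return mem[best], mem_fit[best]
```

For example, minimizing a 5-dimensional sphere function on [-5, 5] with `csa(lambda v: float(np.sum(v**2)), dim=5, lower=-5.0, upper=5.0)` drives the best memorized fitness toward zero.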

Location update method of joining the sharing mechanism
In the crow search algorithm, crow i randomly tracks a crow j for its position update. This update method maintains good population diversity in the early iterations, but it also makes the search blind, slows down convergence, and prevents the algorithm from converging to the optimal value in a short time.
In 1995, Kennedy proposed particle swarm optimization (PSO) [2], in which individuals cooperatively search for the solution of a problem through information sharing within the group: a particle's velocity refers not only to its own historical best position but also to the best position found by the group, which gives the algorithm fast convergence. Therefore, to strengthen the leadership in the crow search algorithm, this idea is introduced into its position update. Learning from the optimal particle lets the crow population share the best hiding position found so far in the environment, that is, each crow updates itself with reference to this position while still randomly following another individual. Under the leadership of the optimal hiding position, the convergence speed of the algorithm is greatly improved. The new position update in case 1 is
x_i^(t+1) = x_i^t + r_i × fl_i^t × (m_j^t − x_i^t) + sm,
sm = s × r × (m_gbest^t − x_i^t),
where sm is the sharing-mechanism term, m_gbest^t is the global optimal hiding position, r is a random number uniformly distributed on [0, 1], and s is the sharing factor, which determines how strongly the optimal hiding position influences the current crow. With this improvement, crow i takes a step toward the optimal position in every update, which effectively improves the convergence speed of the algorithm.
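The case-1 update with the sharing mechanism can be sketched as follows. This is an illustrative Python fragment; the function name, the sharing factor value s = 1.5 and the use of a single random weight for the sharing term are assumptions, not taken from the paper.

```python
import numpy as np

def shared_update(x_i, mem_j, mem_gbest, fl=2.0, s=1.5, rng=None):
    """Case-1 position update of Section 3.1 with the sharing mechanism.

    Besides randomly following crow j, crow i also moves toward the global
    optimal hiding position mem_gbest, weighted by the sharing factor s
    (s=1.5 is an illustrative value, not the paper's setting).
    """
    rng = rng or np.random.default_rng()
    sm = s * rng.random() * (mem_gbest - x_i)              # sharing term
    return x_i + rng.random() * fl * (mem_j - x_i) + sm
```

Note that when the crow already sits at both the followed memory and the global best, the update leaves it in place, as both difference terms vanish.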

Adding disturbance strategy to update the optimal hiding position
In CSA, a new solution that is not better than the previous generation is not retained in memory. As a result, the crow search algorithm easily falls into local optima in the later iterations, and when applied to complex multimodal functions the drawbacks of premature convergence and low accuracy become more obvious. Moreover, the improved position update formula of Section 3.1 also affects the accuracy of the optimal solution on multimodal functions. In this paper, a multi-strategy disturbance is used to update the optimal hiding position m_gbest^t: the global optimal hiding position is perturbed according to a normal random distribution with adjustable variance to obtain a new global optimal hiding position m'_gbest^t:
m'_gbest^t = m_gbest^t + N(0, G),
where G is the variance of the normal distribution and is updated so that G takes the larger value G1 in the early stage of iteration and the smaller value G2 in the later stage, where G1 and G2 are the radius parameters of the normal disturbance and G1 > G2. In the early iterations the large radius G1 lets the perturbed optimum pull the population out of locally optimal regions; in the later iterations the radius G2 is small, so that the current solution hardly jumps out of the better region already found, which ensures that the population learns only from the global optimal solution and that the algorithm converges well. Combined with the improvement of Section 3.1, strengthening the leadership of the global optimal hiding position while disturbing it with different magnitudes at different stages greatly increases the probability of the algorithm jumping out of local optima, so that the algorithm achieves a balance between global and local search, improving the search accuracy while speeding up convergence.
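The staged disturbance can be sketched as follows. This is a hypothetical Python fragment: the switch point at half the iterations and the radii g1 = 1.0, g2 = 0.01 are assumed values for illustration, since the paper only requires G1 > G2.

```python
import numpy as np

def perturb_gbest(mem_gbest, it, max_iter, g1=1.0, g2=0.01, rng=None):
    """Disturb the global optimal hiding position (sketch of Sec. 3.2).

    Early iterations use the large radius g1 to help the population escape
    local optima; later iterations use the small radius g2 so the solution
    stays near the good region already found. The half-way switch point and
    the values of g1, g2 are assumptions for illustration.
    """
    rng = rng or np.random.default_rng()
    g = g1 if it < max_iter // 2 else g2   # G1 early, G2 late, G1 > G2
    return mem_gbest + g * rng.standard_normal(mem_gbest.shape)
```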

Experimental simulation and result analysis
In order to verify the effectiveness of the proposed algorithm on various optimization problems, this paper selects 20 typical benchmark functions [23] to compare the optimization performance of MSD-CSA with DE [24], PSO [25], FA [26], BA [27], GWO [28] and CSA [10]. Because the population size that gives the best results differs across studies, the number of evaluations is used as the termination condition to ensure a fair comparison: a fixed population size of n = 50 individuals is used for all algorithms, while their other key parameters are set as proposed in the corresponding literature. All algorithms are executed in 30 independent runs, with 1000 iterations per run. All algorithms are implemented in Matlab R2020a and executed on an HP computer (Windows 10, Intel Core i5-6300HQ, 2.3 GHz, 8 GB RAM). The 20 well-known benchmark functions used for validation are listed in Table 1 and fall into three categories: high-dimensional unimodal functions (f1~f7), high-dimensional multimodal functions (f8~f14) and fixed-dimensional multimodal functions (f15~f20). The experimental data are shown in Tables 2~4, where Best, Worst, Mean and Std denote the optimal value, worst value, average value and standard deviation obtained over the 30 independent runs. Algorithm performance is ranked according to the accuracy of the optimal value, and the best results are formatted in bold.
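The per-function statistics reported in Tables 2~4 can be computed with a helper along these lines. This is a sketch in Python; `optimizer` and its signature are placeholders, not the paper's actual Matlab code.

```python
import numpy as np

def run_stats(optimizer, fitness, runs=30, seed0=0):
    """Best/Worst/Mean/Std over independent runs, as in Tables 2-4.

    `optimizer` is any callable (fitness, seed) -> best fitness value;
    the names here are placeholders for illustration only.
    """
    vals = np.array([optimizer(fitness, seed0 + r) for r in range(runs)])
    return {"Best": vals.min(), "Worst": vals.max(),
            "Mean": vals.mean(), "Std": vals.std()}
```

Calling it once per (algorithm, benchmark) pair yields one row of the comparison tables.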
As can be seen from the experimental results in Table 2, except for f6, on which MSD-CSA is not as good as GWO, the optimal value of MSD-CSA ranks first on the remaining functions f1~f5 and f7. At the same time, its worst value, average value and standard deviation are all smaller than those of the other algorithms, indicating that the stability and convergence accuracy of MSD-CSA are very high. This also shows that MSD-CSA has strong optimization ability on high-dimensional unimodal functions. As can be seen from Table 3, except for f9, on which MSD-CSA ranks behind the BA algorithm, the data for the remaining functions f8 and f10~f14 show that every indicator of MSD-CSA is better than those of the other algorithms. This demonstrates the feasibility of MSD-CSA for solving high-dimensional multimodal functions and, compared with the other algorithms, its stronger global search capability.
It can be seen from the experimental results in Table 4 that the optimization effect of MSD-CSA is slightly lower than that of GWO on a few functions. For the remaining four functions, MSD-CSA has the smallest optimal value and standard deviation. In general, the optimization of fixed-dimensional multimodal functions can be well solved by MSD-CSA. Note: '/' means that the algorithm is invalid for the benchmark function.

Conclusions
Through research and analysis of the principle and update formula of the original crow search algorithm, and aiming at its slow convergence and low search accuracy in the later iterations, MSD-CSA is proposed in this paper. MSD-CSA adds a sharing mechanism to the position update formula, attaching the idea of following the global optimal position to the simple random following behavior, so as to achieve diversity and convergence at the same time. Perturbing the global optimal solution with different magnitudes in different iteration periods expands the search range of the population and gives the algorithm a strong ability to jump out of local optima in the later iterations. Combining these two improvements, the algorithm achieves a balance between global and local search. The test results also show that the optimization ability of MSD-CSA is better than that of the original algorithm and the other compared algorithms. In future work, the MSD-CSA algorithm will be applied to engineering problems to further verify its effectiveness.