The particle swarm optimization (PSO) algorithm was developed by Kennedy and Eberhart (1995) as an artificial-intelligence-based optimization method. In PSO, a number of particles are placed in the search space of a problem, and the objective function is evaluated at each particle's position. In other words, the position of each particle is a solution to the problem, which can be evaluated against the objective function. Each particle decides on its next movement in the search space by combining some aspect of the history of its own best (best-fitness) locations with those of some members of the swarm. The next iteration happens when all particles have been moved. Gradually, the swarm moves toward the optimum of the fitness function (Clerc 2006). If the dimension of the search space is d, the current location and velocity of the particle at time t are denoted by the vectors x and v, respectively. Furthermore, the best position found by all particles in the whole space (Gbest) and the best position found by the particle in its previous movements (Pbest) are memorized. With these explanations, the equations of the particles' motion for any dimension d (the dth component of a vector is indicated with the index d) are (Parsopoulos and Vrahatis 2010):

v_d(t+1) = w·v_d(t) + c1·r1·(Pbest_d − x_d(t)) + c2·r2·(Gbest_d − x_d(t))    (3)
x_d(t+1) = x_d(t) + v_d(t+1)    (4)

where v_d(t) and x_d(t) are the previous velocity and location of the particle in the dth dimension, respectively; v_d(t+1) and x_d(t+1) are the new velocity and location of the particle in the same dimension, respectively; w is the inertia weight; r1 and r2 are random numbers generated uniformly in the range [0, 1], which provide randomness in the flight of the swarm; and c1 and c2 are weighting factors (commonly set to 2), also called the cognitive and social parameters, respectively (Shi and Eberhart 1998, Poli et al. 2007). The weight coefficients c1 and c2 control the relative effect of the Pbest and Gbest locations on the velocity of a particle. While lower values of c1 and c2 allow each particle to explore locations far away from the good points already discovered, higher values of these parameters encourage a more intensive search of regions close to previous points (Clerc 2006).
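The update rules above can be sketched in code. The following is a minimal single-objective PSO for minimization; the swarm size, iteration count, bounds, and the sphere test function are illustrative choices, and c1, c2, and w are set to common stable values rather than the classic c1 = c2 = 2, since that setting needs velocity clamping to avoid divergence:

```python
import random

def pso(objective, dim, n_particles=20, iters=100,
        w=0.729, c1=1.494, c2=1.494, lo=-5.0, hi=5.0):
    """Minimal single-objective PSO (minimization)."""
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                 # best position of each particle
    pbest_f = [objective(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # best position of the whole swarm

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equations (3) and (4): velocity update, then position update
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = objective(x[i])
            if f < pbest_f[i]:                  # update Pbest
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:                 # update Gbest
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Example: minimize the sphere function in three dimensions
best, best_f = pso(lambda p: sum(c * c for c in p), dim=3)
```

With these settings the swarm typically contracts quickly around the origin, illustrating how the Pbest and Gbest terms pull each particle toward previously discovered good points.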
MOPSO algorithm

MOPSO algorithms can be divided into two categories (Reyes-Sierra and Coello Coello 2006). The first category consists of PSO variants that consider each objective function separately. In these approaches, each particle is evaluated with only one objective function at a time, and the best positions are determined following the standard single-objective PSO rules, using the corresponding objective function. The main challenge in these PSO variants is the proper manipulation of the information from each objective function, in order to guide particles toward Pareto-optimal solutions. The second category consists of approaches that evaluate all objective functions for each particle and, based on the concept of Pareto optimality, produce nondominated best positions (often called leaders) to guide the particles. The determination of leaders is nontrivial, since they have to be selected from among a plethora of nondominated solutions in the neighbourhood of a particle. This is the main challenge of the second category. Many methods have been used for this purpose (Reyes-Sierra and Coello Coello 2006, Parsopoulos and Vrahatis 2010, Fan et al. 2010).
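For the second category, the core operation is the Pareto-dominance test used to maintain the set of nondominated best positions. A minimal sketch, assuming all objectives are minimized (the function names are illustrative, not from the cited papers):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of objective vectors down to the nondominated set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example with two minimized objectives: (3, 3) and (4, 4) are dominated by (2, 2)
front = nondominated([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
# → [(1, 5), (2, 2), (5, 1)]
```

Leader selection then amounts to choosing, for each particle, one element of this nondominated set to play the role Gbest plays in single-objective PSO.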
In this article, a method proposed by Coello Coello and Lamont (2004) was used because it has less computational complexity and a quicker convergence (Reyes-Sierra and Coello Coello 2006). The following is a brief explanation of the method. First, an initial population is created, the values of the objective functions are calculated, and nondominated solutions are preserved in an external archive. In the archive of nondominated solutions, hypercubes (with the same dimension as the number of objective functions) are created. Figure 1 shows an example of a two-dimensional search space and its division into hypercubes; in a two-dimensional search space, the hypercubes are squares. Then, the following process is repeated until the number of iterations comes to an end and/or the final condition of the algorithm is met.

Figure 1. An example of hypercubes generated in a two-dimensional search space of two objective functions. Each cell shows one hypercube in this space (Coello Coello et al. 2004).

The velocity of any particle in the dth dimension can be calculated by the following equation:

v_d(t+1) = w·v_d(t) + c1·r1·(Pbest_d − x_d(t)) + c2·r2·(rep(h)_d − x_d(t))    (5)

where all of the parameters are the same as in Equation (3), with the exception that rep(h) is the value obtained from the nondominated archive as a leader, as described in the following. By assuming m as the number of available solutions in a hypercube, the probability roulette wheel of Equation (6) is applied to choose a hypercube with the h index. In fact, the aim is to choose a hypercube with fewer particles, to optimize the density of the Pareto front.