
[Repost] Particle Swarm Optimization

Viewed 1392 times | 2011-12-6 02:21

The PSO Algorithm

Although the heart of the PSO algorithm is rather simple, you’ll need to understand it thoroughly in order to modify the code in this article to meet your own needs. PSO is an iterative process. On each iteration in the PSO main processing loop, each particle’s current velocity is first updated based on the particle’s current velocity, the particle’s local information and global swarm information. Then, each particle’s position is updated using the particle’s new velocity. In math terms the two update equations are:

v(t+1) = (w * v(t)) + (c1 * r1 * (p(t) - x(t))) + (c2 * r2 * (g(t) - x(t)))

x(t+1) = x(t) + v(t+1)

Bear with me here; the position update process is actually much simpler than these equations suggest. The first equation updates a particle’s velocity. The term v(t+1) means the velocity at time t+1. Notice that v is in bold, indicating that velocity is a vector value with multiple components such as {1.55, -0.33}, rather than a single scalar value. The new velocity depends on three terms. The first term is w * v(t). The w factor is called the inertia weight and is simply a constant like 0.73 (more on this shortly); v(t) is the current velocity at time t. The second term is c1 * r1 * (p(t) - x(t)). The c1 factor is a constant called the cognitive (or personal or local) weight. The r1 factor is a random variable in the range [0, 1), that is, greater than or equal to 0 and strictly less than 1. The p(t) vector value is the particle’s best position found so far. The x(t) vector value is the particle’s current position. The third term in the velocity update equation is c2 * r2 * (g(t) - x(t)). The c2 factor is a constant called the social (or global) weight. The r2 factor is a random variable in the range [0, 1). The g(t) vector value is the best known position found by any particle in the swarm so far. Once the new velocity, v(t+1), has been determined, it’s used to compute the new particle position x(t+1).

Let’s also assume the constants w = 0.7, c1 = 1.4 and c2 = 1.4; with those values, one pass of the main processing loop looks like the sketch below.
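Here is a minimal Python sketch of the full PSO loop built from the two update equations above, using those constants. The sphere function chosen as the objective, the swarm size, the iteration count and the initialization ranges are my own illustrative choices, not part of the original article.

import random

# Minimal PSO sketch. The objective here is the sphere function
# f(x) = sum(x_i^2), picked purely for illustration; its minimum is 0 at the origin.
def fitness(position):
    return sum(x * x for x in position)

DIM, SWARM_SIZE, MAX_ITER = 2, 10, 100
W, C1, C2 = 0.7, 1.4, 1.4   # inertia, cognitive and social weights from the text

# Initialize particles with random positions and velocities (ranges are arbitrary).
positions  = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
velocities = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(SWARM_SIZE)]
pbest      = [p[:] for p in positions]                 # each particle's best position p(t)
pbest_val  = [fitness(p) for p in positions]
g_idx      = min(range(SWARM_SIZE), key=lambda i: pbest_val[i])
gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]   # swarm's best known position g(t)

for _ in range(MAX_ITER):
    for i in range(SWARM_SIZE):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()  # r1, r2 in [0, 1)
            # v(t+1) = w*v(t) + c1*r1*(p(t) - x(t)) + c2*r2*(g(t) - x(t))
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * r1 * (pbest[i][d] - positions[i][d])
                                + C2 * r2 * (gbest[d] - positions[i][d]))
            # x(t+1) = x(t) + v(t+1)
            positions[i][d] += velocities[i][d]
        val = fitness(positions[i])
        if val < pbest_val[i]:                         # update the particle's best
            pbest[i], pbest_val[i] = positions[i][:], val
            if val < gbest_val:                        # update the swarm's best
                gbest, gbest_val = positions[i][:], val

print("best position:", gbest, "fitness:", gbest_val)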
Extending and Modifying

Now that you’ve seen how to write a basic PSO, let’s discuss how you can extend and modify the code I’ve presented. The example problem I solved is artificial in the sense that there’s no need to use PSO to find an approximate solution because the problem can be solved exactly. Where PSO is really useful is when the numeric problem under investigation is extremely difficult or impossible to solve using standard techniques. Consider the following problem. You want to predict the score of an (American) football game between teams A and B. You have historical data consisting of the previous results of A and B against other teams. You mathematically model the historical rating of a team X in such a way that if the team wins a game, the team’s rating goes up by some fixed value (say 16 points) plus another value that depends on the difference between the teams’ ratings (say 0.04 times the difference if the team X rating is less than the opposing team’s). Furthermore, you model the predicted margin of victory of a team as some function of the difference in team ratings; for example, if team X is rated 1,720 and team Y is rated 1,620, your model predicts a margin of victory for X of 3.5 points. In short, you have a large amount of data and need to determine several numeric values (such as the 16 and the 0.04) that minimize your prediction errors. This data-driven parameter estimation is the type of problem that’s right up PSO’s alley.
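To make that concrete, here is a hedged sketch of what the fitness function for such a rating model might look like. The game-record layout, the starting rating of 1600, the 0.035 margin factor (100 rating points mapping to about 3.5 points of margin, per the text) and the names prediction_error, win_bonus and rating_factor are all hypothetical stand-ins for whatever model you actually fit. A function like this would simply replace the objective in the PSO sketch above, with the swarm searching over the pair (win_bonus, rating_factor) in place of the 16 and the 0.04.

# Hypothetical fitness function for the football-rating example: given candidate
# parameters, replay the historical games and accumulate the squared difference
# between the model's predicted margin and the actual margin. Smaller is better.
def prediction_error(params, games):
    win_bonus, rating_factor = params        # stand-ins for the 16 and the 0.04
    ratings = {}                             # team -> current rating
    total_sq_error = 0.0
    for winner, loser, actual_margin in games:
        rw = ratings.setdefault(winner, 1600.0)
        rl = ratings.setdefault(loser, 1600.0)
        predicted_margin = 0.035 * (rw - rl)        # 100-point gap -> ~3.5 points
        total_sq_error += (predicted_margin - actual_margin) ** 2
        # Winner's rating rises by a fixed bonus, plus an extra amount that
        # depends on the rating difference if the winner was the lower-rated team.
        bump = win_bonus + (rating_factor * (rl - rw) if rw < rl else 0.0)
        ratings[winner] = rw + bump
        ratings[loser] = rl - bump
    return total_sq_error

# Made-up historical results, each a (winner, loser, margin) record.
games = [("A", "B", 7.0), ("B", "C", 3.0), ("A", "C", 10.0)]
print(prediction_error((16.0, 0.04), games))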

PSO is just one of several AI techniques based on the behavior of natural systems. Perhaps the technique closest to PSO algorithms is Genetic Algorithms (GAs). Both techniques are well-suited to difficult numeric problems. GAs have been extensively studied for decades. An advantage of PSOs over GAs is that PSO algorithms are significantly simpler to implement than GAs. It’s not clear at this time whether PSOs are more or less effective than GAs, or roughly equal to them.

The version of PSO I’ve presented here can be modified in many ways. One particularly interesting modification is to use several sub-swarms of particles rather than one global swarm. With such a design, each particle belongs to a sub-swarm and the new velocity of a particle could depend on four terms rather than three: the old velocity, the particle’s best known position, the best known position of any particle in the sub-swarm, and the best known position of any particle. The idea of this sub-swarm design is to reduce the chances of the PSO algorithm getting stuck in a non-optimal solution. To the best of my knowledge such a design has not yet been thoroughly investigated.     
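As a sketch of what that four-term velocity update might look like in the style of the loop above: the sub-swarm weight c3, its value, and the function and parameter names are my own illustrative choices, not a published variant.

import random

def subswarm_velocity(v, x, p_best, sub_best, g_best,
                      w=0.7, c1=1.4, c2=1.4, c3=1.4):
    # Four-term update: inertia, the particle's best known position, the best
    # known position in the particle's sub-swarm, and the global best position.
    new_v = []
    for d in range(len(v)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        new_v.append(w * v[d]
                     + c1 * r1 * (p_best[d] - x[d])
                     + c2 * r2 * (sub_best[d] - x[d])
                     + c3 * r3 * (g_best[d] - x[d]))
    return new_v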


From: http://msdn.microsoft.com/en-us/magazine/hh335067.aspx


