Transcript-first notes for PSO: static swarm, velocity components, informants, per-dimension stochasticity, BPP adaptation, HW#5 observations, and plot behavior.
A particle has a position x, velocity v, personal best x*, informants' best x+, and global best x!. The swarm is decentralized: there is no central leader telling particles where to go; the direction emerges from these memory and communication terms.
| Term | Meaning | Doctor-style interpretation |
|---|---|---|
| alpha*v | Retain old velocity. | "I was already flying this way; continue a bit." |
| beta*(x* - x) | Personal best pull. | "I remember a good place I visited." |
| gamma*(x+ - x) | Informants' best pull. | "My local neighbors found a good place." |
| delta*(x! - x) | Global best pull. | "The whole swarm's best place is there." |
| epsilon | Step size. | "How far I move after the direction is computed." |
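The terms above combine into one velocity update followed by one position step. A minimal sketch, with illustrative coefficient values (the coefficient names and defaults are assumptions, not values from the notes):

```python
import random

def update_particle(x, v, p_best, i_best, g_best,
                    alpha=0.7, beta=1.5, gamma=1.5, delta=0.5, epsilon=1.0):
    """One PSO velocity/position update for a single particle.

    x, v    : current position and velocity (lists of floats)
    p_best  : this particle's personal best (x*)
    i_best  : best among its informants (x+)
    g_best  : best in the whole swarm (x!)
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        # Fresh random factor per dimension lets the velocity vector tilt.
        r_b, r_c, r_d = random.random(), random.random(), random.random()
        vd = (alpha * v[d]
              + beta * r_b * (p_best[d] - x[d])
              + gamma * r_c * (i_best[d] - x[d])
              + delta * r_d * (g_best[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + epsilon * vd)  # epsilon scales the actual move
    return new_x, new_v
```

Note that when all three bests coincide with the current position and the old velocity is zero, every pull vanishes and the particle stays put, which matches the "degenerate / stuck" row later in the notes.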
| Feature | GA | DE | PSO |
|---|---|---|---|
| Population change | Generational replacement. | Trial child can replace its parent. | Static particles persist. |
| Selection | Yes. | Yes, parent vs trial. | No selection. |
| Main movement | Crossover + mutation. | a + alpha*(b-c). | Velocity update. |
| Memory | Mostly population/genetic material. | Current population positions. | Personal best, informants' best, global best. |
| Death | Weak individuals may disappear. | Parent dies only if trial beats it. | No death; bad particles can keep moving. |
alpha: Diversification
Large inertia lets each particle continue its own direction. That keeps particles spread and preserves exploration.
beta: Individual exploitation
Large personal pull makes each particle return to its own best visited place. Too much makes the swarm split into separate hill climbers.
gamma: Local communication
Informants are the particle's local social group. This balances personal experience and global best.
delta: Whole-swarm exploitation
Large global pull sends everyone toward one best-known point. This can become one large hill climber and cause premature convergence.
epsilon: Move length
After velocity is computed, epsilon controls how far the particle actually moves. Too large overshoots; zero means no movement.
| Exam wording | Behavior | Explore / Exploit |
|---|---|---|
| High old-velocity / inertia influence (alpha large) | Particles keep their own previous directions. Swarm stays spread longer. | Exploration / diversification |
| No old-velocity / inertia influence (alpha = 0) | No independent drift. Movement depends only on personal best, informants, and global best. | More guided, less independent drift |
| High personal-best influence (beta large) | Each particle keeps returning toward its own best history. | Separate hill climbers |
| High informants / neighborhood influence (gamma large) | Particle follows its local social group strongly. | Local exploitation, some diversity |
| High global-best influence (delta large) | All particles race toward global best. | One big hill climber / bully |
| No global-best influence (delta = 0) | No global-best pull. Swarm relies on personal and informant memory. | More exploration |
| No movement step (epsilon = 0) | Velocity may be computed, but position does not change. | Frozen |
| Huge movement step (epsilon huge) | Particles overshoot good regions and oscillate. | Too jumpy |
| No influence from old velocity, personal best, informants, or global best | No meaningful direction is produced. | Degenerate / stuck |
The doctor emphasized three memory levels: myself, my neighborhood, and the whole swarm.
| Best type | Question it answers | Risk if dominant |
|---|---|---|
| Personal best x* | Where was I personally successful? | Particles ignore others and become separate local searches. |
| Informants' best x+ | Where did my current local group do well? | Local group may over-focus, but less severe than global collapse. |
| Global best x! | Where is the best-known point in the whole swarm? | Everyone follows the same point and prematurely converges. |
Global topology: every particle treats the whole swarm as its neighborhood. This makes x+ similar to x! and increases exploitation.
Local/random topology: each particle has a smaller informant set, often chosen randomly each iteration. This preserves multiple search directions.
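Resampling small informant groups each iteration can be sketched as follows (a minimal sketch: particles are represented by indices, and the group size `k` is illustrative):

```python
import random

def draw_informants(n_particles, k=3, rng=random):
    """Assign each particle k random informants (as indices), redrawn each iteration.

    A particle counts as its own informant, so its personal best is always
    among the candidates when taking the informants' best x+.
    """
    informants = []
    for i in range(n_particles):
        others = [j for j in range(n_particles) if j != i]
        picks = rng.sample(others, min(k, len(others)))
        informants.append([i] + picks)
    return informants
```

Because the groups are redrawn every iteration, good information still spreads through the swarm over time, just more slowly than with a single global best.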
A common confusion is: if a particle already found its personal best, why is it not still there? Because PSO keeps moving. Other forces can pull the particle away from its own best.
| State | Meaning |
|---|---|
| Current position x | Where the particle is now. |
| Personal best x* | The best place this particle has ever visited, which may be behind it. |
| Velocity v | The direction/speed inherited from previous movement and current pulls. |
The particle may have drifted away because informants or global best pulled it elsewhere. Later, the personal-best term pulls it back toward what it personally knows was good.
One random value for all dimensions: the particle moves along the same direction, only shorter or longer. Less exploration.
One random value per dimension: each dimension gets a different random value, so the vector can tilt sideways. This creates more exploration because the particle is not restricted to the same line.
| Distribution | Effect | Exam wording |
|---|---|---|
| Uniform over a range | Higher variance inside the allowed interval. | More exploration. |
| Gaussian with low variance | Most values stay near the center. | More exploitation / guided movement. |
| Wider variance | More possible directions and step strengths. | More exploration. |
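The variance contrast in the table can be checked empirically (a sketch; the range [0, 1] and sigma = 0.05 are illustrative choices, not values from the notes):

```python
import random
import statistics

random.seed(0)

# Uniform over [0, 1]: theoretical variance 1/12 ~ 0.083.
uniform_draws = [random.uniform(0.0, 1.0) for _ in range(10_000)]

# Narrow Gaussian around 0.5: theoretical variance sigma^2 = 0.0025.
gauss_draws = [random.gauss(0.5, 0.05) for _ in range(10_000)]

var_uniform = statistics.pvariance(uniform_draws)
var_gauss = statistics.pvariance(gauss_draws)
# var_uniform is much larger, i.e. more spread in step strengths -> more exploration.
```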
Random-key encoding: position is a real vector with one key per item. Sort keys to get item order, then decode using First Fit. Velocity is a real vector that changes the keys.
Direct encoding: position is a packing or permutation. Velocity can be interpreted as a list of swaps, item moves, or feature-copy operations. This is more native to BPP but requires repair and careful operator design.
| PSO concept | BPP meaning |
|---|---|
| Particle | One full candidate packing, or one random-key vector that decodes to a full packing. |
| Position | Current packing or current key vector. |
| Velocity | Key changes, or a list of swaps/moves. |
| Personal best | The best packing this particle has ever produced. |
| Informants' best | The best packing among this particle's local group. |
| Global best | The best packing found by the whole swarm. |
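The random-key decoding step described above can be sketched directly (a minimal sketch of sort-by-key followed by First Fit):

```python
def decode_first_fit(keys, sizes, capacity):
    """Decode a random-key vector into a bin packing.

    keys     : one real key per item; sorting the keys gives the item order
    sizes    : item sizes, indexed like keys
    capacity : bin capacity
    Returns a list of bins, each a list of item indices.
    """
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    bins, loads = [], []
    for i in order:
        for b, load in enumerate(loads):
            if load + sizes[i] <= capacity:  # First Fit: first bin with room
                bins[b].append(i)
                loads[b] += sizes[i]
                break
        else:
            bins.append([i])                 # no bin fits: open a new one
            loads.append(sizes[i])
    return bins
```

PSO's velocity update then only moves the real-valued keys; every change to the packing itself goes through this decoder, which is why the encoding can dominate the search behavior.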
| Plot signature | Likely PSO diagnosis | Reason |
|---|---|---|
| Current best is noisy while BSF improves occasionally. | Swarm still explores. | Particles can move away from good locations, but memory keeps BSF. |
| CB quickly approaches BSF and then both plateau. | Global best pull too strong. | The swarm collapses into one large hill climber. |
| CB moves but BSF stays flat. | Particles are moving, but not finding better positions. | Could be too much inertia/drift, too little useful pull, or random-key encoding noise. |
| Many particles stop improving separately. | Personal best pull too strong. | Swarm fragments into separate hill climbers. |
| BSF goes in the wrong direction. | Plot or implementation error. | Best-so-far must be monotonic in the correct direction. |
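The last row can be checked mechanically: the best-so-far curve must never get worse. A sketch of the usual current-best / best-so-far bookkeeping, assuming minimization:

```python
def best_so_far_curve(current_best_per_iteration):
    """Build the BSF curve from the swarm's current-best (CB) value at each
    iteration. Minimization assumed: BSF is non-increasing by construction."""
    bsf = float("inf")
    curve = []
    for cb in current_best_per_iteration:
        bsf = min(bsf, cb)  # memory keeps BSF even when CB jumps around
        curve.append(bsf)
    return curve
```

If a plotted BSF curve ever rises under minimization, the bug is in the plotting or bookkeeping, not in the swarm.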
For PSO, swarm size controls coverage and information sources; iterations let communication and velocity updates actually work.
| Setup | Behavior | Risk |
|---|---|---|
| Large swarm, few iterations | Many starting particles and broad coverage. | Not enough time for personal, informant, and global memory to reshape motion. |
| Small swarm, many iterations | More time to refine current directions. | Few informants and low diversity; swarm may collapse quickly. |
| Balanced | Enough particles to cover different regions and enough iterations for communication. | Usually safer for exam reasoning. |
The important HW#5 story is not exact parameter values. It is the behavior created by random-key encoding and the exploration/exploitation balance.
0.7 was a balanced setting in the HW notes.

| Aspect | DE | PSO |
|---|---|---|
| Core move | d = a + alpha*(b-c), then crossover. | Velocity update from inertia, personal best, informants, global best. |
| Population dynamics | Parent can be replaced by trial. | Same particles persist. |
| Selection | Yes, child vs parent. | No. |
| Memory | Population positions only. | Personal, local, and global best memory. |
| Automatic step adaptation | Yes, through population spread in b-c. | Less direct; controlled by velocity, epsilon, and stochastic coefficients. |
| BPP weakness | Subtraction is not native. | Velocity is not native. |
| HW#5 outcome | Close results because random-key encoding dominated both. | |
PSO keeps a static swarm of particles and moves each particle using velocity influenced by its own memory, informants, and global best.
Do particles die or reproduce? No. They persist and move.
What happens when the global-best pull dominates? All particles chase the same point. The swarm becomes one large hill climber and may prematurely converge.
What does high inertia do? Particles continue their own directions, increasing diversification and exploration.
What does a strong personal-best pull do? Each particle focuses on its own previous best and the swarm can split into separate hill climbers.
Why use informants? They provide local communication, balancing personal search and global-best collapse.
Why randomize per dimension? Per-dimension randomness lets the velocity vector tilt and explore. One random value for all dimensions only scales the same direction.
How does PSO handle BPP? Use random keys and decode to a packing, or redefine velocity as swaps/moves/features between complete packings.
Why were PSO and DE close in HW#5? Because both were limited by random-key encoding.
Is a particle a single item? No. Each particle is a complete candidate solution. With random keys, each item is only one dimension inside that particle.
Why does a particle leave its personal best? Because it is also pulled by inertia, informants, and global best. It remembers its personal best, but it does not stay there permanently.
What plays the role of reward? The fitness improvement. Better positions update personal/global memory; worse positions are not stored as bests.
Particle Swarm Optimization is a static population method: particles do not die or reproduce, they move. Each particle has a position, a velocity, a personal best, an informants' best, and a global best. Its velocity is updated by combining inertia, personal memory, local social memory, and global swarm memory. High inertia promotes exploration; high personal pull creates separate hill climbers; high global pull turns the swarm into one big hill climber and risks premature convergence. For bin packing, a particle is a complete packing or random-key vector, not a single item. In HW#5, PSO and DE behaved similarly because both were limited by random-key encoding.