HW Algorithm Competition Analysis
How the HW algorithms compete on bin packing, which conceptual mechanisms help them navigate the problem, and how item-size distribution changes the winner.
1. The Competition Story
HW1: Greedy beats random local search
FFD/BFD beat FF/BF and assignment-based HC. Sorting items by decreasing size gives the search a strong structure immediately. Random HC starts from poor assignments and gets trapped in nearby local optima.
Exploitation: FFD is not adaptive, but it is a very strong problem-specific exploitative heuristic.
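The FFD mechanism above fits in a few lines. This is a minimal illustrative sketch, not the HW code; the function name and toy instance are my own:

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items largest-first; each item goes into the first bin with room."""
    bins = []  # each bin is a list of item sizes
    for size in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # nothing fits: open a new bin
    return bins

# Toy instance, capacity 10: FFD packs 6 items into 3 full bins
print(len(first_fit_decreasing([7, 6, 5, 4, 3, 2], 10)))  # 3
```

The sort is the whole trick: hard-to-place large items go first, and small items fill the remaining gaps.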
HW2: Representation changes everything
Permutation + First Fit decoding beats direct assignment encoding. The algorithm searches over item order, then First Fit creates a feasible packing. This makes every candidate valid and lets small swaps produce meaningful changes.
HC beats SAHC/HCRR under equal budget. HC gets 1000 sequential improvement opportunities. SAHC spends 10 evaluations per outer step, so it only makes 100 actual moves. HCRR explores more starts but each start is shallow.
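The permutation-plus-decoder idea can be sketched as follows (illustrative names, not the HW implementation): the search state is an item order, First Fit turns any order into a feasible packing, and a swap is therefore always a valid move.

```python
import random

def decode_first_fit(order, sizes, capacity):
    """Decode a permutation of item indices into a bin count via First Fit."""
    loads = []
    for i in order:
        for b in range(len(loads)):
            if loads[b] + sizes[i] <= capacity:
                loads[b] += sizes[i]
                break
        else:
            loads.append(sizes[i])
    return len(loads)

def hill_climb(sizes, capacity, budget=1000, seed=0):
    """Swap-neighborhood HC over the item order; every candidate is feasible."""
    rng = random.Random(seed)
    order = list(range(len(sizes)))
    rng.shuffle(order)
    best = decode_first_fit(order, sizes, capacity)
    for _ in range(budget):
        i, j = rng.sample(range(len(order)), 2)  # random swap neighbor
        order[i], order[j] = order[j], order[i]
        cost = decode_first_fit(order, sizes, capacity)
        if cost <= best:                          # keep non-worsening moves
            best = cost
        else:                                     # revert a worsening swap
            order[i], order[j] = order[j], order[i]
    return best
```

Because the decoder absorbs feasibility, each of the 1000 evaluations is also a real move opportunity, which is exactly the budget argument above.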
HW3: TS becomes the best non-population method
TS beats HC and SA because it combines best-neighbor movement with tabu memory. It can move around plateaus and avoid immediate cycling instead of stopping at the first local optimum.
SA with low initial temperature behaves almost like HC. It occasionally accepts worse moves, but not enough to create a major advantage in your experiments.
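The tabu mechanism can be sketched compactly. This version samples a subset of swap neighbors and uses assumed names and parameters; the HW implementation may evaluate the full neighborhood and use a different tenure:

```python
import random
from collections import deque

def tabu_search(cost_fn, n_items, iterations=200, tenure=10, samples=20, seed=0):
    """cost_fn(order) -> bins used. Best-of-neighbors search with a tabu list."""
    rng = random.Random(seed)
    order = list(range(n_items))
    rng.shuffle(order)
    best_cost = cost_fn(order)
    tabu = deque(maxlen=tenure)            # recently applied swap pairs
    for _ in range(iterations):
        candidates = []
        for _ in range(samples):
            i, j = sorted(rng.sample(range(n_items), 2))
            if (i, j) in tabu:
                continue                   # this swap was used too recently
            order[i], order[j] = order[j], order[i]
            candidates.append((cost_fn(order), (i, j)))
            order[i], order[j] = order[j], order[i]
        if not candidates:
            continue
        cost, (i, j) = min(candidates)     # best neighbor, even if it is worse
        order[i], order[j] = order[j], order[i]
        tabu.append((i, j))                # forbid repeating/undoing this swap
        if cost < best_cost:               # BSF stays monotone
            best_cost = cost
    return best_cost

# Toy demo: First Fit decode as the cost function (capacity 10)
sizes, capacity = [7, 6, 5, 4, 3, 2], 10
def ff_bins(order):
    loads = []
    for i in order:
        for b in range(len(loads)):
            if loads[b] + sizes[i] <= capacity:
                loads[b] += sizes[i]
                break
        else:
            loads.append(sizes[i])
    return len(loads)

print(tabu_search(ff_bins, len(sizes)))
```

The two TS ingredients are visible: the current solution always takes the best available move (even a worsening one, so it can cross plateaus), and the deque-backed tabu list blocks immediate cycling.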
HW4: GA adds population diversity but TS often wins
GA beats HC and SA because a population explores many regions and elitism preserves the best individual. However, TS beats GA on most non-trivial instances because TS invests evaluations into one deep improving trajectory.
GA's OX crossover preserves subsequences, but bin packing quality depends on the whole item order. Crossover can disrupt useful global order patterns.
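OX can be sketched in a few lines (a minimal version with fixed cut points; the HW operator may pick cut points randomly): the child inherits a slice from one parent and fills the remaining positions with the other parent's items in their relative order.

```python
def order_crossover(p1, p2, i, j):
    """OX: copy p1[i:j] into the child, fill the rest with p2's items in order."""
    n = len(p1)
    child = [None] * n
    child[i:j] = p1[i:j]                     # preserved subsequence from p1
    kept = set(p1[i:j])
    fill = [x for x in p2 if x not in kept]  # p2's relative order, minus the slice
    pos = 0
    for k in list(range(j, n)) + list(range(0, i)):
        child[k] = fill[pos]
        pos += 1
    return child

# The slice [3, 4, 5] survives; positions outside follow parent 2's order
print(order_crossover([1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1], 2, 5))  # [2, 1, 3, 4, 5, 6]
```

The example shows both the strength and the weakness noted above: the local subsequence is preserved, but everything outside the cut is rearranged, which can destroy a useful global ordering.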
HW5: DE and PSO are stable but limited by random keys
DE and PSO are close to each other. DE has greedy replacement, so the population best is preserved. PSO has personal/global/informant memory, so it remembers good locations even when current particles move away.
The problem is the random-key encoding: continuous values are sorted into a permutation. Small key changes may cause no ordering change, or suddenly swap items. This weakens fine control on instances where precise order matters.
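Random-key decoding is essentially an argsort (illustrative sketch; the HW5 code may break ties differently): sort the key vector's indices and read off the permutation.

```python
def decode_random_keys(keys):
    """Item order = indices of the keys sorted ascending (argsort)."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

# Small key changes may not change the permutation at all...
print(decode_random_keys([0.9, 0.1, 0.5]))   # [1, 2, 0]
print(decode_random_keys([0.9, 0.1, 0.45]))  # [1, 2, 0] -- same order
# ...while crossing one threshold swaps items abruptly
print(decode_random_keys([0.9, 0.5, 0.45]))  # [2, 1, 0]
```

This is the indirection problem in miniature: entire regions of the continuous space decode to one permutation, and the boundary between regions flips item order discontinuously.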
HW6: AS/ACS win through construction plus pheromone
ACS is the best original HW1-HW6 method overall. It is not just "ants are better." In your implementation, ACS combines largest-first bin seeding, a size heuristic, and pheromone memory over item co-location.
AS beats ACS on Instance 3 because democratic pheromone deposit explores more co-location patterns. ACS is better where greedy construction is already close to optimal.
2. Overall Ranking To Remember
After HW6, the practical ranking from your reports is:
| Group | Where it stands | Exam phrasing |
|---|---|---|
| AS / ACS | Strongest after HW6. AS is best on the differentiating Instance 3; ACS is best among the main/original configurations and matches FFD on many instances. | Constructive search with pheromone memory. AS is more explorative; ACS is more exploitative/elitist. |
| FFD/BFD | Still the baseline. Instant, deterministic, and best or tied on many non-trivial instances. | Sorting large items first is a powerful problem-specific prior. Do not assume metaheuristics beat it. |
| TS | Best non-ACO metaheuristic before HW6. Strong on Instance 3 and Instance 10 compared with GA/DE/PSO. | Best-neighbor movement plus tabu memory gives depth, anti-cycling, and plateau movement. |
| GA / DE / PSO | Mid-tier and close. GA has targeted permutation operators; DE/PSO are stable but indirect through random keys. | Population helps stability and diversity, but representation can dominate the algorithm name. |
| HC / SA / ILS variants | Useful baselines. HC with permutation encoding is surprisingly competitive; low-temperature SA behaves close to HC. | Simple local search can be strong if representation is good, but it lacks the memory/diversity of later methods. |
3. What Conceptually Helps Navigation?
| Conceptual tool | Algorithms using it | How it helps | Failure mode |
|---|---|---|---|
| Sorting / largest-first prior | FFD, BFD, ACS implementation | Packs hard-to-place large items early, leaving small items to fill gaps later. | Can become a strong local optimum; if the best solution needs a non-obvious ordering, pure FFD cannot adapt. |
| Representation | HC/SA/TS/GA use permutations; DE/PSO use random keys; ACO uses construction/co-location | Controls what "nearby solution" means. Permutation + First Fit was much better than direct assignment HC. | A poor encoding hides useful moves. Random keys can waste evaluations on equivalent permutations. |
| Depth of sequential improvement | HC, TS | Each move builds on the previous move. This helped HC beat SAHC/HCRR under equal budget and helped TS beat GA. | Can get trapped if it only accepts improvements and has no memory or restart mechanism. |
| Anti-cycling memory | TS | Prevents immediate reversal and encourages movement to new neighboring regions. | Short-term memory cannot block long cycles; tenure length did not matter much in your case because the swap neighborhood is huge relative to the tabu list. |
| Elitism / preserved best | GA, DE, PSO memory, ACS pheromone on best trail, BSF archive | Prevents losing the best solution discovered so far. Produces stable BSF behavior. | Too much elitism creates a "bully" effect: fast convergence, loss of diversity, premature stagnation. |
| Population diversity | GA, DE, PSO, AS/ACS ants | Searches multiple regions at once and reduces dependence on one starting point. | If the representation is weak, many individuals still explore the wrong space. |
| Pheromone memory | AS, ACS | Stores useful components: in BPP, item pairs or placements that appeared in good packings. | Can over-reinforce early choices. ACS can lock in; AS can wander away from strong greedy solutions. |
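The pheromone-memory row can be sketched as an AS-style co-location update. This is my own minimal formulation, not the HW6 matrix: evaporate everywhere, then deposit on item pairs that shared a bin, weighted by packing quality.

```python
def update_pheromone(tau, packings, rho=0.1):
    """tau[i][j]: learned desirability of items i and j sharing a bin.

    packings: list of (bins, cost) pairs, where bins is a list of
    item-index lists and cost is the number of bins used.
    """
    n = len(tau)
    for i in range(n):                      # evaporation: forget slowly
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for bins, cost in packings:             # democratic (AS-style) deposit
        deposit = 1.0 / cost                # fewer bins -> stronger reinforcement
        for b in bins:
            for i in b:
                for j in b:
                    if i != j:
                        tau[i][j] += deposit
    return tau
```

In ACS, only the best ant would deposit (and often only on its own trail), which is the exploitative/elitist variant discussed above; letting every ant deposit is what makes AS more explorative.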
4. Algorithm-By-Algorithm Competition
| Algorithm | Competes against | Why it wins | Why it loses | CB/BSF behavior to expect |
|---|---|---|---|---|
| FF / BF | Greedy baselines | Very fast. BF reduces wasted space better than FF. | No sorting. Processes items in the given order, so bad early placements hurt later bins. | No iterative CB/BSF curve unless you plot construction progress. |
| FFD / BFD | All metaheuristics | Largest-first ordering is a strong BPP prior. Often best or tied at near-zero cost. | Cannot adapt beyond its fixed sorted order. Loses on Instance 3 where better non-sorted co-locations exist. | Deterministic single result; acts like a strong local optimum for many searches. |
| HC | SAHC, SAHCR, SA, assignment-HC | With permutation encoding, it gets deep sequential improvement. It even beats FFD on Instance 3 in HW2. | Only accepts improvement, so it stops at local optima and depends on starting point. | CB equals BSF if only improving moves are accepted: monotone decreasing with plateaus. |
| SAHC / SAHCR | HC variants | Evaluating multiple neighbors gives broader local choice per outer iteration. | Only 100 actual moves under the equal budget. SAHC and SAHCR became identical because the best of 10 neighbors was usually non-worsening. | Coarse staircase improvements; fewer improvement opportunities than HC. |
| HCRR | HC, SAHC | Restarts explore different basins; helped Instance 9 among HC variants. | Each restart is shallow, so it sacrifices depth. Bad on Instance 3 where long chains matter. | Current curve has restart jumps/sawtooth; BSF remains monotone. |
| SA | HC and TS | Can accept worse moves, so in theory it escapes local optima better than HC. | Your best low-temperature setting behaved almost like HC. It did not add enough useful exploration to beat TS. | CB can worsen/oscillate; BSF stays monotone. |
| TS | HC, SA, GA, DE, PSO | Best-neighbor search gives intensification; tabu memory prevents cycling. Strong direct permutation control. | Computationally heavier than HC. Still cannot match ACO on Instance 3 after pheromone/co-location is added. | Current may worsen to escape; BSF stair-steps down. Tenure length changes little because the neighborhood is huge. |
| ILS-SA / ILS-TS | Standalone SA/TS | Perturbation can jump to new basins; ILS-TS was strong. | Splitting budget into phases can starve the local search. ILS-TS underperformed full TS because each phase was shorter. | Current jumps after perturbations; BSF only improves when a phase beats the previous best. |
| GA | HC, SA, TS, DE/PSO | Population diversity, selection, crossover, mutation, and elitism. Better than HC/SA and stable on many instances. | Usually loses to TS because crossover is less direct than sequential local improvement. OX can disrupt global order patterns. | With elitism, the generation best cannot drop below the preserved individual. Without elitism, the current generation best can worsen while BSF stays monotone. |
| DE | PSO, GA, TS | Greedy replacement gives strong stability. Good on Instances 1, 7, 8, 10 among HW5 methods. | Random-key encoding is indirect. It struggled on Instances 3 and 9, where precise item ordering matters. | Preserved-best behavior: the population best usually equals BSF because worse trials do not replace parents. |
| PSO | DE, GA, TS | Personal best, informant best, and global best guide particles while retaining population diversity. | Also limited by random-key decoding. Current particles may oscillate around useful orders without improving them. | CB may oscillate above BSF. A visible CB-BSF gap is normal because memory keeps the historical best. |
| ACS | FFD, TS, AS | Constructs bins incrementally using largest-first seed, size heuristic, and pheromone. Best original HW1-HW6 method. | Can lock into greedy patterns. On Instance 3, more explorative AS found better solutions. | Iteration-best may fluctuate, BSF is monotone. Fast convergence/plateau suggests strong exploitation. |
| AS | ACS | Democratic update explores more co-location patterns; best on Instance 3 in the addendum. | Can wander away from strong greedy structures; loses to ACS on instances where FFD-like construction is already excellent. | More variable iteration-best than ACS; BSF improves when broad exploration discovers a better basin. |
5. How Item Sizes Change The Winner
| Instance type | Observed examples | Effect on algorithms | Exam conclusion |
|---|---|---|---|
| Very large items | Instances 4 and 5: all items are larger than 50 when capacity is 100. | No two items can share a bin. Every algorithm returns 1000 bins. Search has nothing useful to optimize. | This is not an algorithm victory. It is a structural property of the instance. |
| Very small/easy items | Instance 2: all algorithms reach 126 bins with zero variance. | Many combinations fit. The landscape has a strong attractor, so algorithm differences disappear. | When the instance is too easy, performance curves are not informative. |
| FFD-friendly distributions | Instances 1, 6, 7, 8, 9, 10 in most HW1-HW5 comparisons. | Largest-first packing is already very strong. Random-init metaheuristics spend effort rediscovering or approaching FFD. | Use FFD as baseline or seed. Search should preserve the greedy structure and only refine it. |
| Differentiating medium/mixed instance | Instance 3. | Pure FFD is not best. Direct permutation search and pheromone co-location find better item groupings. AS/ACS/TS/GA beat FFD here. | This is where metaheuristics show real value. It rewards exploration plus memory. |
| Fine-grained small-item control | Instance 9. | FFD and GA/TS do well; DE/PSO random-key control was worse. Small order changes can matter, and random-key decoding can obscure them. | Precise representation matters more than having a population. |
| Complex but FFD-favored | Instance 10. | Search methods show wider gaps and variance, but FFD remains very strong. ACS comes close because it has an FFD-like constructive prior. | A hard instance does not automatically mean metaheuristics beat greedy. It depends on whether the greedy structure is wrong. |
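The very-large-items row can be verified directly with a toy check (capacity 100, hypothetical item sizes): when every item exceeds half the capacity, no two fit together, so any algorithm returns one bin per item.

```python
def first_fit(sizes, capacity):
    """Plain First Fit in the given item order."""
    loads = []
    for s in sizes:
        for b in range(len(loads)):
            if loads[b] + s <= capacity:
                loads[b] += s
                break
        else:
            loads.append(s)
    return len(loads)

items = [51, 60, 75, 99, 52]   # every item is larger than capacity / 2
print(first_fit(items, 100))   # one bin per item: 5
```

No ordering, memory, or population changes this outcome, which is why Instances 4 and 5 say nothing about algorithm quality.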
6. The Big Instance-by-Instance Pattern
| Instance | What it tells you | Best way to explain algorithm behavior |
|---|---|---|
| 1 | FFD-friendly. Metaheuristics improve over random baselines but usually do not beat FFD. | Sorting gives a strong exploitation prior; TS/DE/ACS get close through memory and refinement. |
| 2 | Easy small-item instance. Zero variance. | All methods find the same value; not useful for comparing algorithms. |
| 3 | Main differentiating instance. FFD = 413, HC = 411.7, GA = 409.5, TS = 407.4, ACS = 405.3, AS = 398.6. | FFD ordering is not enough. The useful structure is better item co-location or non-obvious ordering. Exploration plus memory wins. |
| 4 and 5 | All items are too large to pair. Everyone returns 1000. | The instance removes the optimization challenge; do not over-interpret algorithm equality. |
| 6 | FFD is extremely strong. GA/DE/PSO reach around 271, FFD is 269, ACS about 269.5, high-exploitation ACS variants can reach 269. | Exploitation is rewarded. Too much broad exploration can move away from the greedy optimum. |
| 7 | FFD-friendly. TS/GA/DE/PSO close but worse; ACS matches FFD in HW6. | Constructive largest-first bias matters more than random initialization. |
| 8 | FFD-friendly but DE was close and strong among HW5 methods. | Stable population refinement can approach the greedy structure, but direct FFD remains hard to beat. |
| 9 | Small-item/fine-control instance. GA/TS better than DE/PSO; ACS better than AS here. | The best solution needs precise packing control. Random-key indirection and excessive AS exploration can hurt. |
| 10 | Hard for metaheuristics, but still FFD-favored. ACS gets close; TS beats GA/DE/PSO among older methods. | Complexity alone is not enough. The sorted greedy order is still a powerful attractor. |
7. Exam-Style Answer Templates
If asked: "Why did TS beat GA?"
TS invests its budget into a deep sequential trajectory. Each iteration evaluates several neighbors, picks the best one, and uses tabu memory to avoid cycling. GA spreads its budget across a population, which gives diversity but less depth per lineage. For bin packing, small sequential order refinements are very valuable, so TS often wins.
If asked: "Why did DE/PSO not dominate?"
The optimizer is continuous, but bin packing is combinatorial. Random-key encoding translates real vectors into permutations through sorting. That mapping is indirect: many vectors decode to the same order, and tiny key changes can suddenly swap items. So DE/PSO are stable, but less precise than direct permutation operators.
If asked: "Why did ACO/ACS do well?"
ACO builds the solution incrementally. In HW6, ACS started each bin with a largest-first idea, then used a size heuristic and pheromone memory over item co-location. This combines a strong greedy prior with learning. It can match FFD on greedy-friendly instances and improve on Instance 3 where co-location learning matters.
If asked: "Why does instance size distribution matter?"
If all items are larger than half the bin, no algorithm can pair them. If items are very small, many methods reach the same packing. If the sorted decreasing order is already near-optimal, FFD wins. Metaheuristics help most when the best packing needs non-obvious item ordering or co-location, as in Instance 3.
8. Common Mistakes To Avoid
- Do not say metaheuristics always beat greedy. Your HWs show the opposite on many instances.
- Do not rank algorithms without mentioning the instance. Instance 3 rewards exploration; Instances 6 and 9 reward preserving greedy structure.
- Do not confuse population with automatic exploration. DE/PSO had populations, but random-key encoding limited fine control.
- Do not call every monotone curve "elitism." HC is monotone because of greedy acceptance; DE is monotone because of greedy replacement; GA monotonicity depends on elitism; PSO/ACS preserve historical best through memory.
- Do not overfocus on exact parameter values. For the final, the important answer is behavioral: more memory/exploitation can converge faster but risks lock-in; more exploration can escape but may wander away from good greedy structure.
9. One-Minute Final Review
Most important synthesis: The strongest "extra thing" an algorithm had was not just randomness. It was useful structure.
- FFD uses sorting structure.
- Permutation encoding uses First Fit decoding structure.
- TS adds anti-cycling memory to local improvement.
- GA adds population diversity plus elitism.
- DE adds greedy replacement but suffers from random-key indirection.
- PSO adds personal/informant/global best memory but also suffers from random-key indirection.
- AS/ACS add constructive search plus pheromone memory; ACS exploits more, AS explores more.
Best final sentence: "The HW results show that representation and instance structure dominate the algorithm name. The best method is the one whose navigation mechanism matches the item-size distribution."
Source basis: HW1-HW6 report conclusions and ranking sections in knowledge_transfer/hw_reports/, especially the HW2 representation/evaluation-budget discussion, HW3 TS analysis, HW4 GA-vs-TS analysis, HW5 random-key analysis, and HW6 ACS/AS synthesis.