HW Conceptual Exam Question Types

Preparation in the doctor's question style: questions that ask about behavior, rationale, and HW observations instead of exact numerical results.

Main rule: The final is unlikely to ask you to reproduce tables of numbers. It is more likely to ask: What behavior did you observe? Why did it happen? Is it exploration or exploitation? What would you change to fix it?

1. The Answer Formula

Use this structure for almost every HW-based conceptual question:

Observation -> Mechanism -> Exploration/Exploitation -> HW tie -> Fix or conclusion
Step | What to say | Example phrase
Observation | Name the visible behavior, not exact numbers. | "The current best quickly reaches a plateau." "The BSF improves in steps." "The current best oscillates above BSF."
Mechanism | Name what caused it. | "Greedy acceptance." "Tabu memory." "Elitism." "Random-key decoding." "Pheromone reinforcement."
Explore/exploit | Classify the behavior. | "This is mostly exploitation because moves stay near the current solution." "This adds exploration because it jumps to a new basin."
HW tie | Mention the qualitative HW result. | "This matches our HC random restart behavior." "This is what we saw in PSO where CB can move away from BSF."
Fix/conclusion | Say what you would adjust conceptually. | "Use a moderate restart interval." "Increase diversity." "Reduce elitist pressure." "Use a representation closer to the problem."

2. Plot-Reading Questions

What the doctor is testing: Can you infer algorithm behavior from CB/BSF curves, not from memorized algorithm names?
Plot behavior | Conceptual interpretation | Likely mechanisms | Safe exam wording
Smooth monotone decrease | Mostly exploitation. The search improves gradually using nearby moves. | HC, greedy replacement, strong elitism, DE preserved best. | "This is exploitative because there are no large jumps; the algorithm keeps improving locally."
Stepwise decrease with plateaus | Exploitation periods separated by improvements. Can be balanced if the algorithm has time to exploit before moving again. | TS, GA with elitism, ACS/AS iteration best, restart/perturbation methods. | "The plateaus mean the algorithm is exploiting a region; the drops mean it found a better region."
Flat for a long time, then a sudden drop | May suggest a preserved elite or "bully" effect: the best solution is kept, but the population/current search is not effectively exploiting it. | Too much elitism, strong selection pressure, poor diversity. | "This suggests elitism/a preserved best may exist, but the algorithm is not using it well; diversity or selection pressure should be adjusted."
Current best goes up and down irregularly | Explorative or no strict elitism. The current solution can worsen while searching. | SA at high temperature, comma selection, PSO current positions, random restart current solution. | "This cannot be a BSF curve in minimization because BSF cannot worsen. It can be CB/current best."
Fast drop then early plateau | Premature convergence. The algorithm exploited too quickly and lost useful diversity. | Too much greediness, high elitism, high global-best pull, low mutation, ACS high exploitation. | "It is efficient early but may not be effective globally because it stops exploring too soon."
Large persistent oscillations | Too much exploration or unstable movement. | Very high SA temperature, very large DE difference move, very jumpy PSO movement, excessive mutation. | "This explores many regions, but it may waste evaluations because it does not exploit good regions enough."
For the photo-midterm style question:
CB4 is the one that cannot be BSF for minimization because it increases at some points.
CB2 is the behavior to aim for: enough exploration to find new regions, but enough exploitation after each improvement.
CB3 is the strongest "elitism/bully" warning: something good is being preserved, but the search is not exploiting/diversifying well.
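
This monotonicity rule can be checked mechanically: a BSF curve is just the running minimum of a CB curve, so any curve that ever increases cannot be a BSF under minimization. A minimal sketch (function names are illustrative, not from the HW code):

```python
def bsf_from_cb(cb):
    """Best-so-far curve: the running minimum of a current-best curve."""
    out, best = [], float("inf")
    for v in cb:
        best = min(best, v)
        out.append(best)
    return out

def could_be_bsf(curve):
    """A curve can only be a BSF in minimization if it never increases."""
    return all(b <= a for a, b in zip(curve, curve[1:]))
```

For example, the CB trace [5, 7, 4, 6, 3] yields the BSF trace [5, 5, 4, 4, 3]; the CB trace itself fails the `could_be_bsf` check because it rises from 5 to 7.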

3. HW Observation Questions

HIGH HC With Random Restart Interval

Question shape: With the same HC budget, should the restart interval be long, short, or moderate?

Answer: Moderate.

Reason: A long interval wastes time after HC has already plateaued. A very short interval does not give each run enough time to exploit its local region. A moderate interval lets each restart improve locally, then uses the remaining budget to sample another basin.

HW tie: In the HC experiments, runs converged to local optima and different starts could end in different local optima. That supports restarts, but not extremely short restarts.
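
The trade-off can be exercised on a toy landscape. This is a sketch with an assumed 1-D objective and illustrative parameter values, not the HW code:

```python
import random

def hc_with_restarts(objective, random_solution, neighbor,
                     budget, restart_interval, seed=0):
    """First-improvement hill climbing (minimization) that restarts from
    a fresh random solution every `restart_interval` evaluations, under
    a fixed total evaluation budget. BSF is recorded across restarts."""
    rng = random.Random(seed)
    best_f = float("inf")
    evals = 0
    while evals < budget:
        x = random_solution(rng)
        fx = objective(x); evals += 1
        steps = 0
        while evals < budget and steps < restart_interval:
            y = neighbor(x, rng)
            fy = objective(y); evals += 1; steps += 1
            if fy < fx:              # greedy acceptance: exploitation
                x, fx = y, fy
        best_f = min(best_f, fx)     # keep the BSF across basins
    return best_f

# Toy 1-D landscape just to exercise the budget trade-off.
best = hc_with_restarts(
    objective=lambda x: (x - 3.0) ** 2,
    random_solution=lambda rng: rng.uniform(-10, 10),
    neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
    budget=2000, restart_interval=100)
```

Setting `restart_interval` near the budget reproduces the "long interval" case (one basin, wasted plateau time); setting it very small reproduces the "short interval" case (no basin is exploited).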

HIGH Why FFD/BFD Beat Random HC

Question shape: Why did simple deterministic heuristics beat your stochastic metaheuristic?

Answer: FFD/BFD start with a strong problem-specific structure: large items are packed first. Random HC destroys this useful ordering and then only makes local repairs.

Concept: A metaheuristic is not automatically better. If its representation or initialization is poor, it can spend the budget rediscovering what the heuristic already knew.
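
For reference, the heuristic itself is tiny, which is part of why it is hard to beat. A sketch of FFD; the item sizes and capacity used in the test values are illustrative:

```python
def first_fit_decreasing(sizes, capacity):
    """FFD: sort item sizes descending, then place each item into the
    first open bin with enough remaining space; open a new bin if none."""
    space = []                        # remaining capacity per open bin
    for s in sorted(sizes, reverse=True):
        for i in range(len(space)):
            if space[i] >= s:
                space[i] -= s
                break
        else:
            space.append(capacity - s)
    return len(space)
```

BFD differs only in the placement rule: instead of the first bin that fits, it chooses the bin with the least remaining space that still fits.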

HIGH Why Permutation Encoding Helped

Question shape: Why did permutation + First Fit improve HC compared with assignment encoding?

Answer: A permutation always decodes to a feasible packing using First Fit. Small swaps change item order and can create meaningful packing changes. Direct assignment can create many equivalent or weakly local changes.

Concept: Representation shapes the landscape. Good representation gives stronger locality and a smoother search surface.
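
The decode step can be sketched as follows (sizes are illustrative). Every permutation decodes to a feasible packing, and reordering alone changes the bin count, which is why small swaps are meaningful moves:

```python
def decode_first_fit(order, sizes, capacity):
    """Decode a permutation of item indices into a packing with First
    Fit. Any permutation is feasible; the order decides the bin count."""
    space = []                        # remaining capacity per open bin
    for idx in order:
        s = sizes[idx]
        for i in range(len(space)):
            if space[i] >= s:
                space[i] -= s
                break
        else:
            space.append(capacity - s)
    return len(space)
```

With sizes [7, 5, 4, 3, 1] and capacity 10, the identity order packs into 2 bins while the reversed order needs 3, so a single swap in the permutation can be a real packing change.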

HIGH Why HC Beat SAHC Under Equal Budget

Question shape: Why did basic HC outperform steepest-ascent variants when budget was equal?

Answer: Basic HC made many sequential improvement moves. SAHC spent several evaluations per outer iteration, so it had fewer actual accepted moves. For bin packing, depth of sequential refinement was more useful than breadth per step.

Exam phrase: "Depth beat breadth under the same evaluation budget."
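
The budget arithmetic behind this answer is simple: with a shared evaluation budget and a neighborhood of size k, SAHC pays k evaluations per move. The numbers below are illustrative:

```python
budget = 10_000            # shared objective-evaluation budget
neighborhood_size = 50     # neighbors scored per SAHC step

hc_max_moves = budget                         # basic HC: one eval per candidate move
sahc_max_moves = budget // neighborhood_size  # SAHC: whole neighborhood per move
```

Here SAHC gets at most 200 accepted moves against up to 10,000 for basic HC: the "depth beat breadth" phrase in numbers.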

HIGH Why TS Beat HC/SA

Question shape: What did TS add conceptually?

Answer: TS adds memory. It can move to the best neighbor even if the path is not purely improving, while tabu memory prevents immediate reversal. This helps it cross plateaus and avoid cycling.

Concept: TS is exploitation plus controlled exploration through memory.
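
A minimal sketch of the mechanism; the toy bit-flip problem and all names are illustrative, not the HW implementation:

```python
from collections import deque

def tabu_search(objective, start, moves, apply_move, budget, tenure=7):
    """Minimal tabu search (minimization): always take the best non-tabu
    neighbor, even if it is worse than the current solution, and forbid
    recent moves for `tenure` steps to prevent immediate reversal."""
    x = start
    fx = objective(x)
    best_f = fx
    tabu = deque(maxlen=tenure)        # short-term move memory
    evals = 1
    while evals < budget:
        scored = []
        for m in moves(x):
            if m in tabu:
                continue               # forbidden: blocks cycling back
            scored.append((objective(apply_move(x, m)), m))
            evals += 1
        if not scored:
            break
        fy, m = min(scored)
        tabu.append(m)
        x, fx = apply_move(x, m), fy   # accept even a worsening move
        best_f = min(best_f, fx)       # BSF tracked separately
    return best_f

# Toy problem: minimize the number of 1-bits; a move flips one bit.
best = tabu_search(
    objective=sum,
    start=[1] * 8,
    moves=lambda x: range(len(x)),
    apply_move=lambda x, i: x[:i] + [1 - x[i]] + x[i + 1:],
    budget=200)
```

On the toy problem the current solution keeps moving (and worsening) after the optimum is reached, but the separately tracked BSF never worsens; that is the CB-vs-BSF gap the plots show.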

MED Why Tabu Tenure Did Not Change Much

Question shape: Why did different tabu lengths behave similarly?

Answer: The swap neighborhood is very large. A small tabu list is already enough to prevent immediate reversal, and the chance of randomly selecting a forbidden move is low. Therefore, increasing tenure did not strongly affect behavior.

Boundary: Too short can cycle; too long can over-restrict. In our setting, the tested range was not the dominant factor.
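
A quick back-of-the-envelope supports this: with n items, the swap neighborhood has n(n-1)/2 moves, so even a generous tenure forbids only a tiny fraction of it. The n and tenure values below are illustrative:

```python
n = 100                          # items in the permutation
swap_moves = n * (n - 1) // 2    # 4950 distinct swap moves

# Fraction of the neighborhood forbidden at each tested tenure.
blocked = {t: t / swap_moves for t in (5, 20, 50)}
```

Even tenure 50 forbids roughly 1% of the neighborhood, so the tested tenures change the reachable move set very little.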

HIGH Why GA Did Not Beat TS Consistently

Question shape: GA has population, crossover, mutation, elitism. Why can TS still win?

Answer: GA spreads evaluations across many individuals. This gives diversity, but each lineage gets less deep local refinement. TS invests evaluations into one improving trajectory with anti-cycling. For bin packing, sequential order refinement was very effective.

Concept: Population diversity helps, but it does not replace a good neighborhood and deep exploitation.

HIGH Why DE/PSO Were Stable But Not Best

Question shape: Why did DE/PSO not dominate despite being population methods?

Answer: DE/PSO are continuous methods applied through random-key encoding. The key vector must be sorted into a permutation, so the relationship between movement in key space and movement in packing space is indirect.

Concept: Population and memory gave stability, but random-key decoding reduced precise control over item order.
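
Random-key decoding is essentially an argsort, and the sketch below shows the indirectness: a sizeable continuous move that crosses no other key leaves the decoded permutation, and therefore the packing, unchanged. The key values are illustrative:

```python
def keys_to_perm(keys):
    """Random-key decoding: sort item indices by their key values."""
    return sorted(range(len(keys)), key=lambda i: keys[i])
```

For example, `keys_to_perm([0.9, 0.1, 0.5])` and `keys_to_perm([0.6, 0.1, 0.5])` both give [1, 2, 0]: moving the first key from 0.9 to 0.6 is invisible in permutation space, while the smaller move to 0.4 crosses a key and yields [1, 0, 2].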

HIGH Why AS/ACS Improved After HW6

Question shape: What did ACO add that previous methods did not?

Answer: ACO builds the solution incrementally and stores memory at the component level. In bin packing, pheromone can reinforce useful item co-locations, not just a whole permutation.

Concept: ACO is constructive search with memory. ACS is more exploitative/elitist; AS is more democratic/explorative.
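
The contrast is easiest to see in the global pheromone update rules. A minimal sketch; component labels and numbers are illustrative, and ACS's local update is omitted:

```python
def as_update(tau, ants, rho):
    """AS (democratic): evaporate on every component, then let every
    ant deposit on the components it used, weighted by quality."""
    for c in tau:
        tau[c] *= (1.0 - rho)
    for components, quality in ants:
        for c in components:
            tau[c] += quality

def acs_global_update(tau, best_components, best_quality, rho):
    """ACS (elitist): only the best-so-far solution updates, and only
    on its own components."""
    for c in best_components:
        tau[c] = (1.0 - rho) * tau[c] + rho * best_quality

# AS: both ants reinforce their own components.
tau_as = {"a": 1.0, "b": 1.0}
as_update(tau_as, [(["a"], 1.0), (["b"], 0.5)], rho=0.1)

# ACS: only the best solution's component "a" changes.
tau_acs = {"a": 1.0, "b": 1.0}
acs_global_update(tau_acs, ["a"], best_quality=2.0, rho=0.1)
```

In the AS run both components gain pheromone (exploration is preserved); in the ACS run only the best solution's component is reinforced, which is exactly the exploitative/elitist bias described above.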

4. Conceptual Extreme Questions

Question shape | Conceptual answer | Plot expectation
SA temperature becomes zero | No worse moves accepted. It becomes HC-like, strongly exploitative. | CB close to monotone decrease, many plateaus.
SA temperature becomes extremely high | Worse moves accepted almost freely. It becomes random-walk-like, highly explorative. | CB oscillates strongly; BSF improves only when lucky.
GA has no crossover | No communication between individuals. It becomes closer to parallel mutation/local search or self-breeding. | Less combination of good building blocks; possible slower improvement.
GA has too much elitism | The best individual becomes a bully. Diversity collapses and premature convergence becomes likely. | Fast drop then plateau, or CB3-like preserved-best behavior.
GA has no elitism | More diversity, but the best solution can be lost from the current population. | CB can worsen; BSF remains monotone only if separately recorded.
DE difference movement is zero or population has converged | No meaningful differential direction. The search stagnates unless mutation/diversity is reintroduced. | CB and BSF flatten.
DE difference movement is very large | Large jumps. More exploration, but may overshoot useful regions or require repair. | CB may become unstable unless greedy replacement hides bad trials.
PSO follows only global best strongly | Swarm rushes toward one leader. Strong exploitation, high bully/premature convergence risk. | Fast convergence; particles may lose diversity.
PSO has no global-best pull | Particles rely on personal and local/informant memory. More distributed search. | More diversity; slower global convergence.
ACO evaporation is too low | Old pheromone stays too long. Early choices dominate and lock-in risk increases. | Fast plateau; exploitative behavior.
ACO evaporation is too high | Pheromone is forgotten too quickly. The algorithm becomes closer to heuristic/random construction. | More variation; weaker learning from BSF.
ACS greediness is very high | Ants repeatedly choose the currently best component. Strong exploitation. | Rapid convergence, possible premature lock-in.
AS democratic update instead of ACS elitist update | More ants influence pheromone. More exploration and less lock-in, but can wander from greedy-good structure. | More variable CB; BSF may improve later if exploration finds a better basin.
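
The two SA rows follow directly from the Metropolis acceptance rule for a worsening move, p = exp(-Δ/T): as T goes to zero the probability vanishes (HC-like), and as T grows it approaches 1 (random-walk-like). A sketch:

```python
import math

def p_accept_worse(delta, T):
    """Metropolis probability of accepting a worsening move (delta > 0)
    in a minimization SA."""
    return math.exp(-delta / T) if T > 0 else 0.0

# The same worsening move under extreme temperatures:
low = p_accept_worse(1.0, T=0.01)    # near 0: behaves like HC
high = p_accept_worse(1.0, T=1e6)    # near 1: behaves like a random walk
```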

5. Compare-Two-Algorithms Questions

Pair | What to compare | Short exam answer
HC vs SA | Acceptance of worse moves | HC is greedy exploitation. SA can accept worse moves, so it can explore, but at low temperature it behaves like HC.
HC vs HCRR | Depth vs breadth | HC deeply exploits one basin. HCRR samples multiple basins. With a fixed budget, the restart interval must be moderate.
TS vs SA | Memory vs probability | SA escapes by probabilistic worse moves. TS escapes by forced movement plus tabu memory. In our BPP HWs, TS was more effective.
TS vs GA | Sequential refinement vs population recombination | TS uses deep local refinement with anti-cycling. GA uses population diversity and recombination. For BPP, TS often used the budget more effectively.
GA vs DE/PSO | Direct permutation operators vs random-key continuous movement | GA operators act on permutations directly. DE/PSO move in continuous key space and then decode, which can hide or distort useful item-order changes.
DE vs PSO | Greedy replacement vs memory-guided movement | DE competes trial against parent, so preserved-best behavior is strong. PSO moves particles using personal/informant/global memory, so the current best may oscillate above BSF.
AS vs ACS | Democratic exploration vs elitist exploitation | AS lets many ants reinforce pheromone, so it explores more. ACS uses greedier choice and a best-focused update, so it exploits more and can lock in faster.
FFD vs metaheuristics | Problem-specific prior vs adaptive search | FFD is fast and strong because largest-first packing matches many BPP instances. Metaheuristics help when the greedy ordering is not enough, as in the differentiating behavior on Instance 3.

6. Instance-Size Questions

Question shape: How does the item-size distribution affect which algorithm performs well?

Answer: Item size changes the landscape. If items are all larger than half the bin capacity, no two items can be paired, so all algorithms return the same result. If items are very small/easy, many algorithms reach the same packing. If largest-first ordering is already good, FFD/BFD are hard to beat. Metaheuristics matter most when good packing requires non-obvious ordering or co-location.

Instance type | Conceptual answer | Algorithm implication
All items large | No pair can fit in one bin. There is almost no search challenge. | All algorithms tie; do not claim one is better.
Very small/easy items | Many combinations work, so the landscape has a strong attractor. | Algorithm differences disappear; zero variance is expected.
FFD-friendly distribution | Sorting large items first is already close to the best structure. | Greedy and exploitative methods do well; random-init search may lose.
Mixed medium items with non-obvious pairings | A good solution depends on item combinations, not just decreasing order. | TS, GA, AS/ACS can beat FFD if they find better orderings or co-locations.
Fine-control small-item distribution | Precise item ordering matters. | Direct permutation methods can beat random-key methods because the representation has better control.

7. Fix-The-Behavior Questions

Observed behavior | Diagnosis | Conceptual fix
Premature plateau | Too much exploitation or low diversity. | Increase mutation/diversity, reduce elitist pressure, add restart/perturbation, or use less greedy ACS behavior.
Large random oscillations with little BSF improvement | Too much exploration, not enough exploitation. | Lower jump size, lower temperature, increase exploitation/memory, or use stronger local search.
CB is much worse than BSF | Current search moved away from the best found solution; memory preserved BSF. | Use the BSF to guide search more strongly, but avoid making it a bully.
All runs converge to the same value | Either the instance is easy/structurally constrained, or the algorithm has strong convergence pressure. | If quality is good, this is stable. If quality is poor, add diversity or change representation.
Population collapses too quickly | Selection/global-best/pheromone pressure is too strong. | Reduce greedy pressure, maintain diversity, use local informants, or slow reinforcement.
Search wastes time after a plateau | Budget is spent exploiting a region with no improvement. | Use stagnation stopping, restart, perturbation, or adaptive diversification.

8. High-Priority Practice Stems

HIGH Q1

A CB curve decreases smoothly with no sudden jumps, then plateaus. What does this say about the algorithm?

Answer skeleton: Mostly exploitation. It uses nearby moves and preserves improvements. Plateau means local optimum or premature convergence. To fix: add diversification, restart, mutation, or weaker elitism depending on algorithm.

HIGH Q2

A CB curve has sudden upward and downward changes. Can it be BSF?

Answer skeleton: No, not if it increases in a minimization problem. BSF can only improve or stay the same. It can be current best/current solution under exploration or no elitism.

HIGH Q3

If HC reached local optima quickly in your runs, how should you set random restart interval under the same budget?

Answer skeleton: Moderate. Long wastes iterations after plateau; short prevents exploitation. Moderate balances exploiting each basin and exploring new starts.

HIGH Q4

Why did TS outperform GA in your BPP experiments?

Answer skeleton: TS uses direct permutation neighborhood, best-neighbor movement, and tabu memory. GA has diversity, but crossover can disrupt global order and spreads budget across individuals.

HIGH Q5

Why did DE/PSO show stable plots but not the best BPP quality?

Answer skeleton: Their population/memory mechanisms stabilize search, but random-key encoding gives indirect control over permutations. Stable does not mean best if the representation is weak.

HIGH Q6

Why can AS beat ACS on one instance while ACS is better on greedy-friendly instances?

Answer skeleton: AS is more democratic and explorative, so it can discover unusual co-locations. ACS is greedier and more exploitative, so it preserves strong FFD-like structure but may lock in early.

MED Q7

If a population method has no communication between individuals, is it still a strong P-metaheuristic?

Answer skeleton: Not really. It may be multiple independent searches. The key population idea is not just many solutions, but communication, competition, or shared memory.

MED Q8

Why did FFD remain hard to beat even after applying several metaheuristics?

Answer skeleton: FFD encodes a strong bin-packing heuristic: place large items first. Many instances are FFD-friendly, so search methods mostly spend budget approaching the greedy structure.

9. What To Memorize

Use with: HW_Algorithm_Competition_Analysis.html for deeper algorithm comparison and hw_plot_audit/HW_Plot_Behavior_Audit.html for actual plot examples from the reports.
