ICS503, Spring 2026 - Final Sample Question Bank
Practice items in the doctor's exam style. No answer keys included by design.
The question bank is split into two parts. Part A - Experiment Umbrella Questions follow the doctor's preferred style for the first question of the exam: a single scenario about an experiment with several sub-parts (a), (b), (c), (d), each asking how the behaviour would change under a parameter tweak. Part B - Conceptual Questions are shorter standalone items in the doctor's other style (single stem, sometimes with sub-parts), covering coevolution comprehension, problem framing, and the comparison topics he reuses across exams. Mock final exams at the end pull from both parts.
Part A. Experiment umbrella questions
Four cross-algorithm umbrella questions in midterm Q1 style: stem + artifact table + 4 sub-parts. Each is worth 10 points and references multiple algorithms via the artifact table - no full question is dedicated to a single algorithm. Sub-parts test parameter behavior, BPP adaptation, hybridization, and the exploration/exploitation balance.
A.1 Parameter settings analysis
1.
Consider the following parameter settings, each applied to one of the algorithms you used in your mini projects on the bin packing problem.
| # | Algorithm | Parameter(s) tweaked | Setting |
|---|---|---|---|
| S1 | GA | crossover rate, mutation rate, elitism | very low, very low, very high |
| S2 | DE | scale factor F on the difference vector | very large |
| S3 | PSO | inertia, global-best pull | very low, very large |
| S4 | ACS | evaporation rate, pheromone exponent | near zero, very high |
| S5 | TS | tabu tenure | very long |
- For each setting, predict the algorithm's behavior in terms of exploration and exploitation. Indicate whether similar settings were used in your projects, and if so explain how you addressed or mitigated the resulting behavior.
- Which of these settings would most likely cause premature convergence? Which would behave closest to random search? Justify.
- Pick TWO of these algorithms and explain how you adapted the algorithm to the bin packing problem (representation, operators, fitness).
- Which combination of parameter settings would you aim for, and why? Justify in terms of the exploration / exploitation balance.
A.2 Adaptation scenarios
2.
Consider the following adaptations of canonical metaheuristics to the bin packing problem, each from a different mini project.
| # | Algorithm | Representation | Operator design |
|---|---|---|---|
| A1 | DE | real-valued vector, decoded by sorting + First Fit Decreasing | standard DE arithmetic on the vector |
| A2 | PSO | real-valued vector, decoded by sorting + First Fit | standard PSO velocity update on the vector |
| A3 | GA | permutation of items, decoded by First Fit Decreasing | order-preserving crossover + swap mutation |
| A4 | ACS | constructive: ant places items into bins one by one | pheromone on item-bin co-locations; deposit at end of tour |
| A5 | TS | explicit packing (item -> bin assignment); tabu list stores recent swap moves | neighborhood = swap two items between bins |
- For each adaptation, indicate whether it preserves the algorithm's character (i.e., whether the spirit of the algorithm is intact). Justify.
- Which adaptation introduces the most 'translation work' (the most distance from the canonical algorithm)? Which is the most natural fit for BPP? Justify.
- Pick ONE adaptation and explain how a single design choice in it (e.g. the decoding heuristic, the operator, the fitness term) affects the exploration / exploitation balance.
- Suggest ONE alternative adaptation for any algorithm in the table that would make it more suitable for BPP. Justify.
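Rows A1 and A2 hinge on the same decoding idea: the evolved real vector is only a set of priority keys. A minimal sketch of one plausible reading (the keys fix the item order, then First Fit packs them; the function name and exact scheme are illustrative, not taken from the projects):

```python
def decode_ffd(keys, sizes, capacity):
    """Decode a real-valued key vector into a packing.

    Items are sorted by their evolved key (descending), then placed by
    First Fit: each item goes into the first open bin with enough room.
    """
    order = sorted(range(len(sizes)), key=lambda i: keys[i], reverse=True)
    bins = []          # remaining free capacity per open bin
    assignment = {}    # item index -> bin index
    for i in order:
        for b, free in enumerate(bins):
            if sizes[i] <= free:
                bins[b] -= sizes[i]
                assignment[i] = b
                break
        else:                       # no open bin fits: open a new one
            bins.append(capacity - sizes[i])
            assignment[i] = len(bins) - 1
    return assignment, len(bins)
```

Because the DE/PSO arithmetic never touches bins or capacities, feasibility lives entirely in the decoder - which is exactly the character-preservation trade-off the first sub-part asks about.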
A.3 Hybrid pipelines for BPP
3.
Consider the following sequential hybrid pipelines, each combining two of the algorithms used in your mini projects on the bin packing problem.
| # | First | Second | Hand-off rule |
|---|---|---|---|
| H1 | GA | HC | when GA's best-so-far plateaus for k generations |
| H2 | ACS | TS | when ACS's pheromone distribution stabilises |
| H3 | PSO | SA | when the swarm collapses (global-best stable) |
| H4 | HC | DE | when HC reaches a local optimum |
| H5 | TS | GA | when TS's diversification cycle completes |
- For each pipeline, indicate whether the order is sensible (does the first explore, the second exploit?). Justify.
- Which pipeline is most likely to outperform either algorithm alone on BPP? Which is least likely to add value over the standalone algorithm? Justify.
- Pick ONE pipeline and explain how the hand-off should be implemented in detail (what exactly does the first algorithm pass to the second?).
- Suggest a hybrid pipeline NOT in the table that would be appropriate for the bin packing problem. Justify.
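The H1 hand-off rule ("plateaus for k generations") can be made concrete with a small monitor; a sketch, assuming the GA exposes its best-so-far objective value once per generation (the class and parameter names are hypothetical):

```python
class PlateauMonitor:
    """Signals a hand-off when best-so-far has not improved for k generations."""

    def __init__(self, k, eps=1e-9):
        self.k = k          # generations without improvement before hand-off
        self.eps = eps      # minimum change that counts as an improvement
        self.best = None
        self.stale = 0

    def update(self, best_so_far):
        """Feed one generation's best value (minimisation); True = hand off."""
        if self.best is None or best_so_far < self.best - self.eps:
            self.best = best_so_far
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.k
```

On hand-off, the GA would pass its best individual (and ideally a few diverse elites) to HC as starting points - the question's third sub-part asks precisely what this payload should be.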
A.4 Extreme parameter tweaks
4.
Consider the following extreme parameter settings, each pushed away from a balanced default. Each tweak is applied to one of the algorithms used in your mini projects.
| # | Algorithm | Parameter | Extreme setting |
|---|---|---|---|
| T1 | HC with restart | restart interval | very short |
| T2 | SA | initial temperature | far below the typical change in objective value |
| T3 | TS | tabu tenure | one (smallest possible) |
| T4 | GA | mutation rate | approximately 0.5 per gene |
| T5 | ACS | evaporation rate | near zero |
- For each tweak, predict the algorithm's behavior. Indicate which extreme of the dial (full exploration / full exploitation / collapse to a simpler algorithm) the tweak produces.
- Which tweak would most help the algorithm escape a local optimum? Which would most prevent escape? Justify.
- Pick ONE tweak and explain mechanically why the algorithm degenerates into a simpler algorithm (e.g. SA at very low T degenerates into HC).
- Which type of parameter setting would you aim for, and why? Justify in terms of the exploration / exploitation balance.
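For T4, the degeneracy can be checked numerically: with a per-gene flip probability of 0.5 on a bit-string, each offspring gene is uniform regardless of the parent, so the offspring is statistically independent of it. A self-contained illustration (the bit-string setup is illustrative, not from the projects):

```python
import random

def mutate(bits, rate, rng):
    """Flip each bit independently with the given probability."""
    return [b ^ (rng.random() < rate) for b in bits]

# At rate = 0.5 the expected normalised Hamming distance between parent
# and offspring is 1/2 -- the same as between two random strings, so the
# GA's variation step has collapsed into uniform random sampling.
rng = random.Random(0)
n, trials = 100, 2000
parent = [0] * n
avg_dist = sum(sum(mutate(parent, 0.5, rng)) / n for _ in range(trials)) / trials
```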
Part B. Conceptual and comprehension questions
Single stems or short umbrella stems with one to three sub-parts. Coevolution comprehension, problem framing, and comparison topics. Each typically worth 4-7 points in the doctor's pattern.
B.1 Coevolution comprehension (slides EC-05-7, 8, 9)
5.
Recall the multi-goal-state slide from the coevolution lecture. The slide shows four
individuals attempting to reach different goal states; two of them have explicit fitness
values and two are intentionally written as "Fitness = ??". Answer the following:
- Propose how the internal fitness should be assigned to the two unspecified individuals so that the population maintains coverage across all goal states.
- Justify your design in terms of internal vs external fitness, and explain how each of the two notions is used in the algorithm.
- A classmate proposes a simpler design: take the average distance from each individual to every goal and use that as the internal fitness. Explain in your own words why this proposal is dangerous.
6.
Consider a 1-population competitive coevolutionary GA applied to a multi-goal version of
the 8-puzzle problem. Each individual is a length-31 sequence of moves over {U, D, L, R},
and there are three acceptable goal states. Design the multi-goal fitness in the
following steps:
- Explain why solving the three goals as three separate GA runs is wasteful, and identify one concrete inefficiency in that approach.
- Now solve them as a single GA. For each individual, define the external fitness (the absolute progress measure used to update Best). Justify why external fitness in this scenario must be aware of which goal each individual is closest to.
- Design the internal fitness (the breeding signal) so that the population maintains coverage of all three goals. In your design, explain how you would (i) assign each individual to a goal cluster, (ii) compute its score relative to other members of that cluster, (iii) reward unique coverage of an under-served goal, and (iv) penalise redundancy when many individuals crowd the same goal.
- Consider an alternative proposal: keep the "best three individuals per goal" as a way to assign internal fitness. Discuss its merit, and discuss its downside when one of the three individuals reaches its goal.
- Compare your proposal to the simple averaging of distances to all goals. Explain in your own words why averaging is dangerous and what specific information it destroys.
7.
Algorithm 76 in the same slide deck calls AssessInternalFitness and AssessExternalFitness
on every iteration and uses each one in a different line of the algorithm. Answer the
following:
- Where in the algorithm is each fitness function used, and why do the two roles require different definitions?
- The same slide deck shows two bin packing solutions with the same number of bins but very different empty-space variance. Argue which solution the GA should prefer to keep for breeding.
- Explain in your own words why a single fitness function is insufficient in this case, and propose an internal fitness term that would let the GA distinguish the two solutions.
8.
The doctor described three options for measuring external fitness in 1-population
competitive coevolution when no ground truth exists: testing against a guru, testing
against a panel of hand-crafted players, and testing against a sample of past
generations. Answer the following:
- Compare the three options on cost (computation and human effort).
- Compare them on the bias each one carries.
- State and justify which option you would prefer when no ground truth exists, and explain the assumption your choice relies on.
9.
Map the firewall coevolution example onto the 2-population competitive framework. In
your answer:
- Identify what each population represents, what each individual looks like, and how the fitness of an individual in each population is computed.
- Explain why this is structurally identical to a Generative Adversarial Network.
- Two students argue about how to perform crossover inside a firewall ruleset. The first insists that crossover should swap whole rules only. The second argues that splitting inside a rule (mixing source IP from one rule with destination port from another) is permissible because the slots are still apples-to-apples. Compare the two positions on exploration and exploitation, and propose a compromise that uses both.
10.
In a multi-robot soccer team, each role (goalkeeper, defender, midfielder, striker,
playmaker) has its own sub-population evolving in parallel. Answer the following:
- Define the two fitness signals each individual receives, and explain how each one is computed.
- Explain why a player with the highest individual fitness is not necessarily picked for the team. Justify with reference to the trade-off between individual performance and team coordination.
- Explain why exhaustively evaluating every possible team becomes infeasible as the number of sub-populations grows, and describe how cooperative coevolution avoids this combinatorial explosion.
11.
Discuss diversity preservation in population metaheuristics. In your answer:
- Name five mechanisms used to prevent premature convergence and to preserve diversity, and briefly explain how each one produces more exploration.
- Fitness sharing and crowding both rely on a notion of "two individuals being similar". Discuss the three different notions of similarity (phenotype, genotype, behaviour) and give an example of each in the bin packing problem.
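Fitness sharing, named in the first sub-part, can be sketched in a few lines; the distance function is deliberately left abstract, because the second sub-part asks which notion of similarity (phenotype, genotype, behaviour) it should encode. A sketch, assuming maximisation and the standard triangular sharing kernel:

```python
def shared_fitness(raw, distance, sigma, alpha=1.0):
    """Divide each raw fitness by its niche count (standard fitness sharing).

    raw      : list of raw fitness values (maximisation)
    distance : distance(i, j) between individuals i and j
    sigma    : sharing radius; alpha shapes the kernel
    """
    n = len(raw)
    out = []
    for i in range(n):
        niche = 0.0
        for j in range(n):
            d = distance(i, j)
            if d < sigma:
                niche += 1.0 - (d / sigma) ** alpha
        out.append(raw[i] / niche)   # niche >= 1 since distance(i, i) = 0
    return out
```

Crowded individuals split their fitness with their neighbours, so an isolated individual of the same raw quality gets the larger breeding signal - the extra exploration the question asks you to explain.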
B.2 Effectiveness, efficiency, and reproducibility
12.
Recall the midterm question on Meta1 and Meta2: Meta1 reaches a moderate-quality
solution very quickly and then plateaus, while Meta2 improves more slowly but
eventually surpasses Meta1. Answer the following:
- According to the criterion of efficiency and the criterion of effectiveness, which of the two metaheuristics may be considered the best, and why?
- Propose a sequential strategy that uses Meta1 first and then hands off the search to Meta2 to obtain a better overall solution. Justify the hand-off rule.
- Propose an alternative parallel strategy that runs Meta1 and Meta2 simultaneously under the same total budget, and justify when the parallel strategy is preferred over the sequential one.
13.
A classmate claims that an algorithm with very low variance across runs is always
preferable to one with higher variance, because low variance means the algorithm is
more reliable. Answer the following:
- Discuss what very low variance and very high variance each indicate about the algorithm's behaviour.
- Discuss which is preferable for reproducibility and for a single-shot deployment.
- Discuss which is preferable when multiple restarts are allowed and the goal is to find the best possible solution. Justify your answer.
14.
Suppose you are given a fixed budget of fitness evaluations and must choose between a
small population evolved over many generations and a large population evolved over few
generations. Answer the following:
- Compare the two settings on exploration vs exploitation.
- State which setting you would prefer when the landscape is known to have many local optima, and justify your choice.
- Discuss how the choice interacts with the elitism level: should elitism be high or low under each setting? Justify.
B.3 Problem framing (Talbi-style)
15.
You are given a set of boxes of different heights to be piled into k columns such that
the differences between the column heights are minimised. Propose:
- An objective function (specify minimisation or maximisation).
- A representation suitable for a single-solution metaheuristic, and comment on its locality.
- A tweaking operator. Discuss whether this operator may face a plateau problem similar to the one you encountered in bin packing.
16.
In bin packing the natural objective is the number of bins, but the doctor recommended
augmenting this objective with an empty-space term. Answer the following:
- Explain in your own words why the raw count of bins is insufficient as a fitness signal for a single-solution metaheuristic.
- Discuss how the augmentation with an empty-space term changes the landscape the algorithm sees.
- Explain how this design choice mirrors the internal vs external fitness distinction from the coevolution lecture.
17.
A student claims that Hill Climbing with many random restarts is a population
metaheuristic because many solutions are being maintained over time. Answer the
following:
- Discuss whether you agree with the claim, and justify your answer in terms of what truly distinguishes a population metaheuristic from a single-solution one.
- Identify a specific mechanism that population metaheuristics have but random-restart HC does not. Explain the consequence of its absence.
- Describe a small modification to random-restart HC that would push it closer to a true population metaheuristic. Justify your modification.
B.4 8-puzzle GA with coevolution overlay
18.
Consider solving the 8-puzzle problem with a Genetic Algorithm. The hardest instance of the 8-puzzle is known to take 31 moves to solve. Answer the following:
- (a) Propose a chromosome representation and an objective function for the single-goal-state 8-puzzle. Justify which crossover and mutation operators you would (and would not) use, and explain how invalid moves should be handled.
- (b) Now suppose the goal state is no longer unique - there are several acceptable goal states, and the population should attempt to reach all of them. Discuss whether the chromosome needs to change, how the fitness function should be modified, and what diversity mechanism you would add to ensure the population does not collapse onto the easiest goal. Justify your answer with reference to the multi-goal-state coevolution slide.
B.5 Topic-focused short questions
19.
Recall the midterm question on Simulated Annealing with extreme temperatures. Answer
the following:
- SA is run with temperature equal to zero at all times (the corresponding termination test is omitted). Identify which algorithm SA reduces to, and explain the impact on behaviour in terms of exploration and exploitation.
- SA is run with temperature equal to infinity at all times. Identify which algorithm SA reduces to, and explain the impact on behaviour.
- Compare these two extremes with the analogous extremes for two other algorithms in the course: PSO with the global-best pull set to a very large value, and Tabu Search with the tabu list set to size zero.
20.
Schemata are sub-strings of the chromosome with some positions fixed and others left
as wildcards. The schema theorem describes how the GA amplifies fit schemata over
generations. Answer the following without computing any specific probability values:
- Define the order and the defining length of a schema, and give a small example.
- Given two schemata of equal order, one with a short defining length and one with a long defining length, explain qualitatively which is more likely to survive one-point crossover and why.
- Connect the schema theorem to the doctor's warning about uniform 50/50 crossover. Explain why uniform crossover is dangerous from a schema-survival point of view.
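The two quantities in the first sub-part are mechanical to compute (a small sketch; `*` is the wildcard symbol):

```python
def order(schema):
    """Number of fixed (non-wildcard) positions."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """Distance between the outermost fixed positions (0 if order < 2)."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if len(fixed) > 1 else 0

# '1****0': order 2, defining length 5 -- every one-point cut on a
# length-6 string falls between its two fixed bits, so it is disrupted
# almost every time it is crossed.
# '11****': order 2, defining length 1 -- only one of the five cut
# points separates its fixed bits, so it usually survives.
```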
21.
Once Hill Climbing reaches a local optimum, several escape philosophies exist. Answer
the following:
- In Iterated Local Search (ILS), the algorithm perturbs the current local optimum and runs local search again. Discuss the impact on behaviour when the perturbation strength is (i) very weak, (ii) very strong.
- In Variable Neighborhood Search (VNS), the radius of the neighbourhood itself changes between iterations. Compare a fixed-small-radius schedule with a randomly chosen radius each iteration.
- In Guided Local Search (GLS), the search memory is kept inside the objective function itself via penalty terms on features. Explain how this mechanism prevents the algorithm from returning to the same region, and why the best-so-far must be tracked separately.
- In GRASP, the candidate-list size acts as a greediness knob. Compare the behaviour at very small (top one) and very large (top one hundred percent) settings.
22.
The doctor described representation as "the most important design choice." Answer the
following:
- Define strong locality and weak locality in the context of a genotype-to-phenotype mapping.
- Compare the locality of a single-bit mutation under plain binary encoding and under gray code encoding for an integer-valued gene. Use a concrete example to illustrate.
- The doctor showed how moving the 8-queens representation from a 2D board to a length-8 permutation reduces the search space dramatically. Explain how the choice of representation can outperform an algorithm upgrade, and justify your answer with reference to the 8-queens search-space reduction.
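The binary-vs-Gray comparison in the second sub-part can be checked directly. A sketch using the standard reflected Gray code, with 4-bit integers as the concrete example:

```python
def gray_encode(n):
    """Standard reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Inverse of gray_encode."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Plain binary: flipping the top bit of 0111 (=7) gives 1111 (=15) -- a
# one-bit genotype step jumps the phenotype by 8 (weak locality).
# Gray code: 7 encodes as 0100 and 8 as 1100; consecutive integers
# always differ in exactly one bit (strong locality).
```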
23.
A student decides to remove crossover entirely from a GA, keeping only mutation and
selection. Answer the following:
- Identify what the resulting algorithm is called and how it differs from a canonical GA.
- Explain what the GA loses when crossover is removed. Frame your answer in terms of how individuals share information across the population.
- Discuss when removing crossover is a defensible choice in practice. Use the 8-queens permutation example as your case study.
Mock final exams (30 pts each)
Each mock picks one Part A umbrella as Q1 (10 pts) plus four conceptual questions from Part B (5 pts each); Mock final I instead reruns original midterm questions, labelled M# Part C. Practice under a 90-minute clock.
Mock final A
- Q1 Part A (10 pts) - parameter settings analysis.
- Q6 Part B (5 pts) - multi-goal fitness design (8-puzzle, design-from-scratch).
- Q12 Part B (5 pts) - Meta1 + Meta2 hand-off.
- Q15 Part B (5 pts) - boxes-into-columns problem framing.
- Q18(a) Part B (5 pts) - 8-puzzle GA chromosome and operators.
Mock final B
- Q2 Part A (10 pts) - adaptation scenarios (BPP-specific).
- Q7 Part B (5 pts) - Algorithm 76 internal/external fitness + BPP empty-space variance.
- Q9 Part B (5 pts) - firewall mapped onto 2-pop / GAN.
- Q13 Part B (5 pts) - low vs high variance across runs.
- Q16 Part B (5 pts) - bin packing objective vs fitness.
Mock final C
- Q3 Part A (10 pts) - hybrid pipelines for BPP.
- Q8 Part B (5 pts) - external fitness options (guru, panel, past generations).
- Q10 Part B (5 pts) - cooperative soccer team.
- Q11 Part B (5 pts) - diversity preservation mechanisms and similarity types.
- Q18(b) Part B (5 pts) - 8-puzzle multi-goal extension.
Mock final D
- Q4 Part A (10 pts) - extreme parameter tweaks (algorithm degeneracy).
- Q5 Part B (5 pts) - multi-goal-state internal fitness design (the two unspecified individuals).
- Q12 Part B (5 pts) - Meta1 + Meta2 hand-off.
- Q14 Part B (5 pts) - small pop many gens vs large pop few gens.
- Q17 Part B (5 pts) - HC with random restart claim about population metaheuristics.
Mock final E
- Q1 Part A (10 pts) - parameter settings analysis.
- Q7 Part B (5 pts) - Algorithm 76 internal/external fitness + BPP empty-space variance.
- Q10 Part B (5 pts) - cooperative soccer team.
- Q13 Part B (5 pts) - low vs high variance across runs.
- Q18(b) Part B (5 pts) - 8-puzzle multi-goal extension.
Mock final F
- Q2 Part A (10 pts) - adaptation scenarios.
- Q8 Part B (5 pts) - external fitness options.
- Q9 Part B (5 pts) - firewall mapped onto 2-pop / GAN.
- Q11 Part B (5 pts) - diversity preservation.
- Q15 Part B (5 pts) - boxes-into-columns problem framing.
Mock final G - topic-focused
- Q3 Part A (10 pts) - hybrid pipelines for BPP.
- Q19 Part B (5 pts) - SA at T=0 and T=infinity (extreme temperature analysis).
- Q20 Part B (5 pts) - schema theorem.
- Q22 Part B (5 pts) - representation locality and the 8-queens search-space reduction.
- Q6 Part B (5 pts) - multi-goal fitness design.
Mock final H - escape strategies and self-breeding
- Q4 Part A (10 pts) - extreme parameter tweaks.
- Q21 Part B (5 pts) - escape strategies (ILS, VNS, GLS, GRASP).
- Q23 Part B (5 pts) - self-breeding / no-crossover GA.
- Q20 Part B (5 pts) - schema theorem.
- Q18(a) Part B (5 pts) - 8-puzzle GA chromosome and operators.
Mock final I - midterm reruns
- M7 Part C (10 pts) - 8-puzzle GA representation and operators (the original midterm Q7).
- M2 Part C (5 pts) - SA at T=0 and T=infinity (the original midterm Q2).
- M3 Part C (5 pts) - Meta1 + Meta2 efficiency vs effectiveness.
- M5 Part C (5 pts) - small population vs large population under fixed budget.
- M4 Part C (5 pts) - box piling problem.