feat(dispatch): subproblem dispatcher to backend solvers #28

Open
opened 2026-02-20 22:26:56 +00:00 by forbes · 0 comments
Owner

Summary

Implement dispatcher.py — routes each subproblem from the solve plan DAG to the appropriate backend solver, executing in bottom-up topological order.

Context

The dispatcher is the bridge between graph decomposition and numerical solving. It takes the SolvePlan DAG from the decomposer, builds a mini SolveContext for each subproblem, and routes it to either a closed-form pattern solver or the Ondsel backend via the existing SolverRegistry. This is where the pluggable architecture pays off — the decomposition solver is itself a plugin that uses other plugins as backends.

Depends on: #26 (decomposer — produces the SolvePlan), #27 (patterns — analytical solvers for matched subproblems)

Design

Dispatch flow

For each subproblem in execution order (leaves first):

  1. Check for pattern match — if subproblem.pattern is set, use the analytical solver
  2. Build mini SolveContext — extract the subset of parts and constraints for this subproblem from the full context
    • Parts: all part_ids in the subproblem
    • Constraints: all constraint_ids in the subproblem
    • Grounded: shared/articulation parts whose placements are already solved by a dependency
    • Placements: use solved placements from dependencies for shared parts (warm start)
  3. Dispatch to backend — kcsolve.load(self._backend_id).solve(sub_ctx)
  4. Store result — map solved placements back to global part IDs

Shared part placement propagation

When subproblem B depends on subproblem A via shared part P:

  • A is solved first (it's earlier in topological order)
  • P's placement from A's result is used as a fixed/grounded placement in B's context
  • This propagates solved positions through the DAG without re-solving
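The propagation rule reduces to a small dictionary walk when building B's mini-context. A minimal sketch, assuming flat dicts for placements (the real `Transform` type and context builder are not shown here):

```python
# Upstream results: subproblem_id → part_id → solved placement (tuples stand in for Transform).
solved = {"A": {"P": (1.0, 0.0, 0.0), "Q": (0.0, 2.0, 0.0)}}
b_parts = {"P", "R"}   # B shares part P with A
b_deps = ["A"]

grounded, warm_start = {}, {}
for dep_id in b_deps:
    for part_id, placement in solved[dep_id].items():
        if part_id in b_parts:             # shared part: already solved upstream
            grounded[part_id] = placement  # fix it in B's mini-context
            warm_start[part_id] = placement

print(grounded)  # → {'P': (1.0, 0.0, 0.0)}
```

Only P crosses the boundary; Q stays local to A, so B never sees it and nothing is re-solved.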

API

class SubproblemDispatcher:
    def __init__(self, backend_id: str = "ondsel"):
        self._backend_id = backend_id
        self._pattern_matcher = PatternMatcher()

    def execute(self, plan: SolvePlan, ctx: SolveContext) -> DispatchResult

@dataclass
class SubproblemResult:
    subproblem_id: str
    status: SolveStatus
    placements: dict[str, Transform]  # part_id → solved placement
    diagnostics: list[ConstraintDiagnostic]
    method: str  # "pattern:<name>" or "numerical:<backend_id>"

@dataclass
class DispatchResult:
    sub_results: dict[str, SubproblemResult]
    all_succeeded: bool
    first_failure: str | None  # subproblem_id of first failure

Error handling

  • If a subproblem fails, downstream dependents are not executed (they depend on the failed result)
  • The dispatch result captures which subproblem failed and why
  • Partial results (successfully solved subproblems before the failure) are preserved for diagnostics
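The stop-on-failure semantics can be sketched in a few lines. The solver call is stubbed out with an `outcome` table (an assumption for illustration); the point is that a failure poisons everything downstream while earlier results survive:

```python
# Chain A → B → C, expressed as id → dependency ids, in topological order.
subproblems = {"A": [], "B": ["A"], "C": ["B"]}
outcome = {"A": True, "B": False, "C": True}   # stub: B's solve fails

failed, results = set(), {}
for sp_id, deps in subproblems.items():        # dicts preserve insertion (topological) order
    if any(d in failed for d in deps):
        failed.add(sp_id)                      # skip: a dependency failed, so this cannot solve
        continue
    results[sp_id] = outcome[sp_id]            # run the solver (stubbed)
    if not outcome[sp_id]:
        failed.add(sp_id)

print(results)  # → {'A': True, 'B': False}  (C is never executed)
```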

Tasks

  • Implement SubproblemDispatcher class
  • Mini SolveContext builder (part/constraint subsetting from full context)
  • Shared part placement propagation through the DAG
  • Pattern solver dispatch path
  • Backend solver dispatch path via kcsolve.load()
  • Error handling: stop on first failure, preserve partial results
  • Unit tests in tests/decomposition/test_dispatcher.py:
    • Single subproblem → dispatches to backend, returns result
    • Pattern-matched subproblem → uses analytical solver, skips backend
    • Two-subproblem chain → shared part placement propagates correctly
    • Backend failure → downstream subproblems skipped, partial results preserved
    • Mock backend solver for testing without kcsolve dependency
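The mock-backend bullet above might look like the following pytest sketch. The `MockBackend` shape, the dict-based `sub_ctx`, and the test name are all assumptions; the real test would inject the mock through the dispatcher's registry rather than calling it directly:

```python
# Hypothetical mock backend: records each call and returns a trivial solution.
class MockBackend:
    def __init__(self):
        self.calls = []

    def solve(self, sub_ctx):
        self.calls.append(sub_ctx)
        return {
            "status": "ok",
            "placements": {p: (0.0, 0.0, 0.0) for p in sub_ctx["part_ids"]},
        }

def test_single_subproblem_dispatches_to_backend():
    backend = MockBackend()
    sub_ctx = {"part_ids": ["p1", "p2"], "constraint_ids": ["c1"]}
    # The dispatcher would reach this via kcsolve.load(); called directly here.
    result = backend.solve(sub_ctx)
    assert len(backend.calls) == 1
    assert set(result["placements"]) == {"p1", "p2"}
```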

Acceptance criteria

  • Dispatcher executes subproblems in correct topological order
  • Shared part placements propagate correctly between dependent subproblems
  • Pattern-matched subproblems never invoke the numerical backend
  • First failure stops execution of dependent subproblems

Reference: kindred/solver#28