16 Commits

Author SHA1 Message Date
forbes
5ec5c77db4 chore: add .mailmap to normalize git identity 2026-03-03 14:17:30 -06:00
cd7f66f20a Merge pull request 'fix(solver): enforce quaternion continuity on dragged parts during drag (#338)' (#40) from fix/drag-quat-continuity into main
Reviewed-on: #40
2026-02-27 15:39:17 +00:00
eaa5f3b0c0 Merge branch 'main' into fix/drag-quat-continuity 2026-02-27 15:39:01 +00:00
forbes-0023
f85dc047e8 fix(solver): enforce quaternion continuity on dragged parts during drag (#338)
The _enforce_quat_continuity function previously skipped dragged parts,
assuming the GUI directly controls their placement.  However, Newton
re-solves all free params (including the dragged part's) to satisfy
constraints, and can converge to an equivalent but distinct quaternion
branch.  The C++ validateNewPlacements() then sees a >91 degree rotation
and rejects the step.

Two-level fix:

1. Remove the dragged_ids skip — apply continuity to ALL non-grounded
   bodies, including the dragged part.

2. Add rotation angle check beyond simple hemisphere negation: compute
   the relative quaternion angle using the same formula as the C++
   validator (2*acos(w)).  If it exceeds 91 degrees, reset to the
   previous step's quaternion.  This catches branch jumps where the
   solver finds a geometrically different but constraint-satisfying
   orientation (e.g. Cylindrical + Planar with 180-degree ambiguity).

Verified: all 291 solver tests pass.
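A minimal sketch of the two-level check described above (the function name and tuple layout are illustrative, not the actual solver API; quaternions are unit-norm `(w, x, y, z)` tuples):

```python
import math

def enforce_continuity(q_prev, q_new, max_deg=91.0):
    """Two-level continuity check (hypothetical name, not the solver's API)."""
    # Level 1: hemisphere correction. q and -q encode the same rotation,
    # so always pick the representative nearest the previous step.
    dot = sum(a * b for a, b in zip(q_prev, q_new))
    if dot < 0.0:
        q_new = tuple(-c for c in q_new)
        dot = -dot
    # Level 2: relative rotation angle via the validator's formula 2*acos(w).
    # For unit quaternions, the w component of conj(q_prev)*q_new is this dot.
    angle = 2.0 * math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle > max_deg:
        return q_prev  # branch jump: reset to the previous step's quaternion
    return q_new
```

A 180-degree branch jump (e.g. the Cylindrical + Planar ambiguity) passes level 1 untouched but trips the level-2 angle check and gets reset.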
2026-02-27 09:37:10 -06:00
54fec18afb Merge pull request 'test: add console test reproducing planar drag quaternion flip (#338)' (#39) from test/planar-drag-console-test into main
Reviewed-on: #39
2026-02-27 15:30:58 +00:00
forbes-0023
9e07ef8679 test: add console test reproducing planar drag quaternion flip (#338)
Adds console_test_planar_drag.py — a live FreeCAD console test that
reproduces the quaternion branch-jump failure from #338.

Test 2 (realistic geometry) reliably triggers the bug: 10/40 drag
steps rejected by the C++ validateNewPlacements() validator when
the solver converges to an equivalent but distinct quaternion branch
around 240-330 deg axial rotation.

Key findings from the test:
- The failure is NOT simple hemisphere negation (q vs -q)
- The solver finds geometrically valid but quaternion-distinct
  solutions when Cylindrical + Planar constraints have multiple
  satisfying orientations
- _enforce_quat_continuity only catches sign flips, not these
  deeper branch jumps
- The C++ validator uses acos(w) not acos(|w|), so opposite-
  hemisphere quaternions show as ~360 deg rotation
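The last finding can be demonstrated in isolation (standalone toy, not solver code): with the validator's `2*acos(w)` formula, a quaternion and its negation, which encode the identical rotation, measure as a full turn apart.

```python
import math

def rel_angle_deg(a, b):
    # For unit quaternions, the w component of conj(a)*b is their dot product.
    w = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return 2.0 * math.degrees(math.acos(w))  # acos(w), not acos(|w|)

q = (math.cos(math.radians(5)), math.sin(math.radians(5)), 0.0, 0.0)
neg_q = tuple(-c for c in q)  # same rotation, opposite hemisphere

print(rel_angle_deg(q, q))      # 0.0
print(rel_angle_deg(q, neg_q))  # 360.0
```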
2026-02-27 09:30:27 -06:00
6c2ddb6494 Merge pull request 'fix: skip single_equation_pass during drag to prevent stale constraints' (#37) from fix/planar-drag-prepass into main
Reviewed-on: #37
2026-02-25 19:02:49 +00:00
5802d45a7f fix(solver): skip single_equation_pass during drag to prevent stale constraints
single_equation_pass analytically solves variables and bakes their values
as Const() nodes into downstream residual expressions. During drag, the
cached residuals use these stale constants even though part positions have
changed, causing constraints like Planar distance=0 to silently stop
being enforced.

Skip single_equation_pass in the pre_drag() path. Only substitution_pass
(which replaces genuinely grounded parameters) is safe to cache across
drag steps. Newton-Raphson converges in 1-2 iterations from a nearby
initial guess anyway, so the prepass optimization is unnecessary for drag.

Add regression tests covering the bug scenario and the fix.
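An illustrative toy of the staleness (closures standing in for the real Expr/Const machinery): the pre-pass bakes a value once, and the cached residual keeps reading "satisfied" after the geometry moves.

```python
# The pre-pass "solves" one equation at pre_drag time and bakes the
# result into the residual, the way single_equation_pass bakes Const() nodes.
def make_cached_residual(env):
    baked_plane_z = env["plane_z"]  # baked once, at pre_drag time
    return lambda env: env["point_z"] - baked_plane_z

res = make_cached_residual({"plane_z": 5.0, "point_z": 5.0})

# A drag step moves the plane to z=7, but the cached residual still
# compares against the stale 5.0. It reads zero while the point actually
# sits 2 units off the plane, so the constraint is silently unenforced:
print(res({"plane_z": 7.0, "point_z": 5.0}))  # 0.0
```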
2026-02-25 12:57:43 -06:00
9d86bb203e Merge pull request 'fix(solver): prevent orientation flips during interactive drag' (#36) from fix/drag-orientation-stability into main
Reviewed-on: #36
2026-02-25 02:47:26 +00:00
forbes-0023
c2ebcc3169 fix(solver): prevent orientation flips during interactive drag
Add half-space tracking for all compound constraints with branch
ambiguity: Planar, Revolute, Concentric, Cylindrical, Slider, Screw,
Universal, PointInPlane, and LineInPlane.  Previously only
DistancePointPoint, Parallel, Angle, and Perpendicular were tracked,
so the Newton-Raphson solver could converge to the wrong branch for
compound constraints — causing parts to drift through plane
constraints while honoring revolute joints.

Add quaternion continuity enforcement in drag_step(): after solving,
each non-dragged body's quaternion is checked against its pre-step
value and negated if in the opposite hemisphere (standard SLERP
short-arc correction).  This prevents the C++ validateNewPlacements()
from rejecting valid solutions as 'flipped orientation' due to the
quaternion double-cover ambiguity (q and -q encode the same rotation
but measure as ~340° apart).
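The short-arc correction and the "~340°" symptom can be sketched as follows (toy code, not the drag_step() implementation):

```python
import math

def short_arc(q_prev, q_new):
    """Negate q_new when it lies in the opposite hemisphere from q_prev."""
    dot = sum(a * b for a, b in zip(q_prev, q_new))
    return tuple(-c for c in q_new) if dot < 0.0 else q_new

def rel_angle_deg(a, b):
    w = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return 2.0 * math.degrees(math.acos(w))

q_prev = (1.0, 0.0, 0.0, 0.0)
# A genuine 20-degree rotation, returned on the far side of the double cover:
q_new = tuple(-c for c in
              (math.cos(math.radians(10)), math.sin(math.radians(10)), 0.0, 0.0))

print(rel_angle_deg(q_prev, q_new))                     # ~340: rejected as a flip
print(rel_angle_deg(q_prev, short_arc(q_prev, q_new)))  # ~20: the same rotation
```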
2026-02-24 20:46:42 -06:00
e7e4266f3d Merge pull request 'fix(solver): build weight vector after pre-passes to match free param count' (#35) from fix/weight-vector-after-prepass into main
Reviewed-on: #35
2026-02-23 03:19:26 +00:00
forbes-0023
0825578778 fix(solver): build weight vector after pre-passes to match free param count
The weight vector was built before substitution_pass and
single_equation_pass, which can fix variables and reduce the free
parameter count. This caused a shape mismatch in newton_solve when
the Jacobian had fewer columns than the weight vector had entries:

  ValueError: operands could not be broadcast together with shapes
  (55,27) (1,28)

Move build_weight_vector() after both pre-passes so its length
matches the actual free parameters used by the Jacobian.
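The shape mismatch is easy to reproduce in NumPy terms (array contents are dummies; only the shapes matter):

```python
import numpy as np

# Jacobian built from the post-pre-pass free parameters (27 columns),
# weight vector sized from the pre-pre-pass count (28 entries):
J = np.zeros((55, 27))
w = np.ones((1, 28))
try:
    J * w
except ValueError as e:
    print(e)  # operands could not be broadcast together with shapes (55,27) (1,28)

# After the fix, both are sized from the same free-parameter list:
w = np.ones((1, 27))
assert (J * w).shape == (55, 27)
```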
2026-02-22 21:06:21 -06:00
forbes-0023
8e521b4519 fix(solver): use all 3 cross-product components to avoid XY-plane singularity
The parallel-normal constraints (ParallelConstraint, PlanarConstraint,
ConcentricConstraint, RevoluteConstraint, CylindricalConstraint,
SliderConstraint, ScrewConstraint) and point-on-line constraints
previously used only the x and y components of the cross product,
dropping the z component.

This created a singularity when both normal vectors lay in the XY
plane: a yaw rotation produced a cross product entirely along Z,
which was discarded, making the constraint blind to the rotation.

Fix: return all 3 cross-product components. The Jacobian has a
rank deficiency at the solution (3 residuals, rank 2), but the
Newton solver handles this correctly via its pseudoinverse.

Similarly, point_line_perp_components now returns all 3 components
of the displacement cross product to avoid singularity when the
line direction aligns with a coordinate axis.
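The singularity is easy to see numerically: with both normals in the XY plane, all of the misalignment from a yaw rotation lands in the discarded z component.

```python
import numpy as np

# Both unit normals lie in the XY plane; a small yaw perturbs the second.
z_i = np.array([1.0, 0.0, 0.0])
theta = 0.1
z_j = np.array([np.cos(theta), np.sin(theta), 0.0])

c = np.cross(z_i, z_j)
print(c)      # [0, 0, sin(theta)]: the misalignment lives entirely in cz
print(c[:2])  # [0, 0]: dropping cz makes the constraint blind to the yaw
```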
2026-02-22 15:51:59 -06:00
forbes-0023
bfb787157c perf(solver): cache compiled system across drag steps
During interactive drag, the constraint topology is invariant — only the
dragged part's parameter values change between steps. Previously,
drag_step() called solve() which rebuilt everything from scratch each
frame: new ParamTable, new Expr trees, symbolic differentiation, CSE,
and compilation (~150 ms overhead per frame).

Now pre_drag() builds and caches the system, symbolic Jacobian, compiled
evaluator, half-spaces, and weight vector. drag_step() reuses all cached
artifacts, only updating the dragged part's 7 parameter values before
running Newton-Raphson.

Expected ~1.5-2x speedup on drag step latency (eliminating rebuild
overhead, leaving only the irreducible Newton iteration cost).
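A toy model of the split (names hypothetical, costs exaggerated): the expensive build stands in for Expr construction, symbolic differentiation, CSE, and compilation; the returned callable stands in for the compiled evaluator.

```python
import time

def expensive_build():
    time.sleep(0.01)  # stands in for the ~150 ms rebuild
    return lambda pose: [2.0 * p for p in pose]

class DragSession:
    def pre_drag(self):
        self.eval_fn = expensive_build()  # paid once, cached for the drag
    def drag_step(self, pose):
        return self.eval_fn(pose)         # per frame: evaluation only

s = DragSession()
s.pre_drag()
for frame in range(3):
    s.drag_step([1.0] * 7)  # 7 params per body; no rebuild inside the loop
```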
2026-02-21 12:23:32 -06:00
forbes-0023
e0468cd3c1 fix(solver): redirect distance=0 constraint to CoincidentConstraint
DistancePointPointConstraint uses a squared residual (||p_i-p_j||^2 - d^2)
which has a degenerate Jacobian when d=0 and the constraint is satisfied
(all partial derivatives vanish). This made the constraint invisible to
the Newton solver during drag, allowing constrained points to drift apart.

When distance=0, use CoincidentConstraint instead (3 linear residuals:
dx, dy, dz) which always has a well-conditioned Jacobian.
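The degeneracy follows directly from the residual's gradient: for r = ||p_i - p_j||^2 - d^2, the partials with respect to p_i are 2*(p_i - p_j), which vanish exactly where the d=0 constraint is satisfied.

```python
import numpy as np

p_i = np.array([1.0, 2.0, 3.0])
p_j = np.array([1.0, 2.0, 3.0])  # coincident: the d=0 constraint holds

grad = 2.0 * (p_i - p_j)
print(grad)  # [0. 0. 0.]: a zero Jacobian row, invisible to Newton

# The linear residuals (dx, dy, dz) instead have d/dp_i = identity,
# a constant full-rank 3x3 block regardless of the configuration.
```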
2026-02-21 11:46:47 -06:00
forbes-0023
64b1e24467 feat(solver): compile symbolic Jacobian to flat Python for fast evaluation
Add a code generation pipeline that compiles Expr DAGs into flat Python
functions, eliminating recursive tree-walk dispatch in the Newton-Raphson
inner loop.

Key changes:
- Add to_code() method to all 11 Expr node types (expr.py)
- New codegen.py module with CSE (common subexpression elimination),
  sparsity detection, and compile()/exec() compilation pipeline
- Add ParamTable.env_ref() to avoid dict copies per iteration (params.py)
- Newton and BFGS solvers accept pre-built jac_exprs and compiled_eval
  to avoid redundant diff/simplify and enable compiled evaluation
- count_dof() and diagnostics accept pre-built jac_exprs
- solver.py builds symbolic Jacobian once, compiles once, passes to all
  consumers (_monolithic_solve, count_dof, diagnostics)
- Automatic fallback: if codegen fails, tree-walk eval is used

Expected performance impact:
- ~10-20x faster Jacobian evaluation (no recursive dispatch)
- ~2-5x additional from CSE on quaternion-heavy systems
- ~3x fewer entries evaluated via sparsity detection
- Eliminates redundant diff().simplify() in DOF/diagnostics
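The shape of the generated flat function can be illustrated with a toy (the in-place `fn(env, r_vec, J)` signature matches the codegen docstring below; the expression and CSE temp are made up):

```python
import numpy as np

# Hand-written stand-in for codegen output: one residual x^2 - 4 and its
# Jacobian entry 2x, with a CSE temp shared the way _build_cse would emit it.
src = (
    "def _eval(env, r_vec, J):\n"
    "    _c0 = (env['x'] * env['x'])\n"
    "    r_vec[0] = (_c0 - 4.0)\n"
    "    J[0, 0] = (2.0 * env['x'])\n"
)
ns = {}
exec(compile(src, "<demo>", "exec"), ns)

r_vec = np.empty(1)
J = np.zeros((1, 1))
ns["_eval"]({"x": 3.0}, r_vec, J)
print(r_vec[0], J[0, 0])  # 5.0 6.0
```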
2026-02-21 11:22:36 -06:00
18 changed files with 2545 additions and 115 deletions

.mailmap Normal file

@@ -0,0 +1,7 @@
forbes <contact@kindred-systems.com> forbes <joseph.forbes@kindred-systems.com>
forbes <contact@kindred-systems.com> forbes <zoe.forbes@kindred-systems.com>
forbes <contact@kindred-systems.com> forbes-0023 <joseph.forbes@kindred-systems.com>
forbes <contact@kindred-systems.com> forbes-0023 <zoe.forbes@kindred-systems.com>
forbes <contact@kindred-systems.com> josephforbes23 <joseph.forbes@kindred-systems.com>
forbes <contact@kindred-systems.com> Zoe Forbes <forbes@copernicus-9.kindred.internal>
forbes <contact@kindred-systems.com> admin <admin@kindred-systems.com>


@@ -7,7 +7,7 @@ analytic gradient from the Expr DAG's symbolic differentiation.
from __future__ import annotations
import math
from typing import List
from typing import Callable, List
import numpy as np
@@ -29,6 +29,8 @@ def bfgs_solve(
max_iter: int = 200,
tol: float = 1e-10,
weight_vector: "np.ndarray | None" = None,
jac_exprs: "List[List[Expr]] | None" = None,
compiled_eval: "Callable | None" = None,
) -> bool:
"""Solve ``residuals == 0`` by minimizing sum of squared residuals.
@@ -38,6 +40,13 @@ def bfgs_solve(
``sqrt(w)`` so that the objective becomes
``0.5 * sum(w_i * r_i^2)`` — equivalent to weighted least-squares.
Parameters
----------
jac_exprs:
Pre-built symbolic Jacobian (list-of-lists of Expr).
compiled_eval:
Pre-compiled evaluation function from :mod:`codegen`.
Returns True if converged (||r|| < tol).
"""
if not _HAS_SCIPY:
@@ -51,12 +60,19 @@ def bfgs_solve(
return True
# Build symbolic gradient expressions once: d(r_i)/d(x_j)
jac_exprs: List[List[Expr]] = []
for r in residuals:
row = []
for name in free:
row.append(r.diff(name).simplify())
jac_exprs.append(row)
if jac_exprs is None:
jac_exprs = []
for r in residuals:
row = []
for name in free:
row.append(r.diff(name).simplify())
jac_exprs.append(row)
# Try compilation if not provided
if compiled_eval is None:
from .codegen import try_compile_system
compiled_eval = try_compile_system(residuals, jac_exprs, n_res, n_free)
# Pre-compute scaling for weighted minimum-norm
if weight_vector is not None:
@@ -66,6 +82,10 @@ def bfgs_solve(
w_sqrt = None
w_inv_sqrt = None
# Pre-allocate arrays reused across objective calls
r_vals = np.empty(n_res)
J = np.zeros((n_res, n_free))
def objective_and_grad(y_vec):
# Transform back from scaled space if weighted
if w_inv_sqrt is not None:
@@ -78,18 +98,19 @@ def bfgs_solve(
if quat_groups:
_renormalize_quats(params, quat_groups)
env = params.get_env()
if compiled_eval is not None:
J[:] = 0.0
compiled_eval(params.env_ref(), r_vals, J)
else:
env = params.get_env()
for i, r in enumerate(residuals):
r_vals[i] = r.eval(env)
for i in range(n_res):
for j in range(n_free):
J[i, j] = jac_exprs[i][j].eval(env)
# Evaluate residuals
r_vals = np.array([r.eval(env) for r in residuals])
f = 0.5 * np.dot(r_vals, r_vals)
# Evaluate Jacobian
J = np.empty((n_res, n_free))
for i in range(n_res):
for j in range(n_free):
J[i, j] = jac_exprs[i][j].eval(env)
# Gradient of f w.r.t. x = J^T @ r
grad_x = J.T @ r_vals
@@ -126,8 +147,12 @@ def bfgs_solve(
_renormalize_quats(params, quat_groups)
# Check convergence on actual residual norm
env = params.get_env()
r_vals = np.array([r.eval(env) for r in residuals])
if compiled_eval is not None:
compiled_eval(params.env_ref(), r_vals, J)
else:
env = params.get_env()
for i, r in enumerate(residuals):
r_vals[i] = r.eval(env)
return bool(np.linalg.norm(r_vals) < tol)

kindred_solver/codegen.py Normal file

@@ -0,0 +1,308 @@
"""Compile Expr DAGs into flat Python functions for fast evaluation.
The compilation pipeline:
1. Collect all Expr nodes to be evaluated (residuals + Jacobian entries).
2. Identify common subexpressions (CSE) by ``id()`` — the Expr DAG
already shares node objects via ParamTable's Var instances.
3. Generate a single Python function body that computes CSE temps,
then fills ``r_vec`` and ``J`` arrays in-place.
4. Compile with ``compile()`` + ``exec()`` and return the callable.
The generated function signature is::
fn(env: dict[str, float], r_vec: ndarray, J: ndarray) -> None
"""
from __future__ import annotations
import logging
import math
from collections import Counter
from typing import Callable, List
import numpy as np
from .expr import Const, Expr, Var
log = logging.getLogger(__name__)
# Namespace injected into compiled functions.
_CODEGEN_NS = {
"_sin": math.sin,
"_cos": math.cos,
"_sqrt": math.sqrt,
}
# ---------------------------------------------------------------------------
# CSE (Common Subexpression Elimination)
# ---------------------------------------------------------------------------
def _collect_nodes(expr: Expr, counts: Counter, visited: set[int]) -> None:
"""Walk *expr* and count how many times each node ``id()`` appears."""
eid = id(expr)
counts[eid] += 1
if eid in visited:
return
visited.add(eid)
# Recurse into children
if isinstance(expr, (Const, Var)):
return
if hasattr(expr, "child"):
_collect_nodes(expr.child, counts, visited)
elif hasattr(expr, "a"):
_collect_nodes(expr.a, counts, visited)
_collect_nodes(expr.b, counts, visited)
elif hasattr(expr, "base"):
_collect_nodes(expr.base, counts, visited)
_collect_nodes(expr.exp, counts, visited)
def _build_cse(
exprs: list[Expr],
) -> tuple[dict[int, str], list[tuple[str, Expr]]]:
"""Identify shared sub-trees and assign temporary variable names.
Returns:
id_to_temp: mapping from ``id(node)`` to temp variable name
temps_ordered: ``(temp_name, expr)`` pairs in dependency order
"""
counts: Counter = Counter()
visited: set[int] = set()
id_to_expr: dict[int, Expr] = {}
for expr in exprs:
_collect_nodes(expr, counts, visited)
# Map id -> Expr for nodes we visited
for expr in exprs:
_map_ids(expr, id_to_expr)
# Nodes referenced more than once and not trivial (Const/Var) become temps
shared_ids = set()
for eid, cnt in counts.items():
if cnt > 1:
node = id_to_expr.get(eid)
if node is not None and not isinstance(node, (Const, Var)):
shared_ids.add(eid)
if not shared_ids:
return {}, []
# Topological order: a temp must be computed before any temp that uses it.
# Walk each shared node, collect in post-order.
ordered_ids: list[int] = []
order_visited: set[int] = set()
def _topo(expr: Expr) -> None:
eid = id(expr)
if eid in order_visited:
return
order_visited.add(eid)
if isinstance(expr, (Const, Var)):
return
if hasattr(expr, "child"):
_topo(expr.child)
elif hasattr(expr, "a"):
_topo(expr.a)
_topo(expr.b)
elif hasattr(expr, "base"):
_topo(expr.base)
_topo(expr.exp)
if eid in shared_ids:
ordered_ids.append(eid)
for expr in exprs:
_topo(expr)
id_to_temp: dict[int, str] = {}
temps_ordered: list[tuple[str, Expr]] = []
for i, eid in enumerate(ordered_ids):
name = f"_c{i}"
id_to_temp[eid] = name
temps_ordered.append((name, id_to_expr[eid]))
return id_to_temp, temps_ordered
def _map_ids(expr: Expr, mapping: dict[int, Expr]) -> None:
"""Populate id -> Expr mapping for all nodes in *expr*."""
eid = id(expr)
if eid in mapping:
return
mapping[eid] = expr
if isinstance(expr, (Const, Var)):
return
if hasattr(expr, "child"):
_map_ids(expr.child, mapping)
elif hasattr(expr, "a"):
_map_ids(expr.a, mapping)
_map_ids(expr.b, mapping)
elif hasattr(expr, "base"):
_map_ids(expr.base, mapping)
_map_ids(expr.exp, mapping)
def _expr_to_code(expr: Expr, id_to_temp: dict[int, str]) -> str:
"""Emit code for *expr*, substituting temp names for shared nodes."""
eid = id(expr)
temp = id_to_temp.get(eid)
if temp is not None:
return temp
return expr.to_code()
def _expr_to_code_recursive(expr: Expr, id_to_temp: dict[int, str]) -> str:
"""Emit code for *expr*, recursing into children but respecting temps."""
eid = id(expr)
temp = id_to_temp.get(eid)
if temp is not None:
return temp
# For leaf nodes, just use to_code() directly
if isinstance(expr, (Const, Var)):
return expr.to_code()
# For non-leaf nodes, recurse into children with temp substitution
from .expr import Add, Cos, Div, Mul, Neg, Pow, Sin, Sqrt, Sub
if isinstance(expr, Neg):
return f"(-{_expr_to_code_recursive(expr.child, id_to_temp)})"
if isinstance(expr, Sin):
return f"_sin({_expr_to_code_recursive(expr.child, id_to_temp)})"
if isinstance(expr, Cos):
return f"_cos({_expr_to_code_recursive(expr.child, id_to_temp)})"
if isinstance(expr, Sqrt):
return f"_sqrt({_expr_to_code_recursive(expr.child, id_to_temp)})"
if isinstance(expr, Add):
a = _expr_to_code_recursive(expr.a, id_to_temp)
b = _expr_to_code_recursive(expr.b, id_to_temp)
return f"({a} + {b})"
if isinstance(expr, Sub):
a = _expr_to_code_recursive(expr.a, id_to_temp)
b = _expr_to_code_recursive(expr.b, id_to_temp)
return f"({a} - {b})"
if isinstance(expr, Mul):
a = _expr_to_code_recursive(expr.a, id_to_temp)
b = _expr_to_code_recursive(expr.b, id_to_temp)
return f"({a} * {b})"
if isinstance(expr, Div):
a = _expr_to_code_recursive(expr.a, id_to_temp)
b = _expr_to_code_recursive(expr.b, id_to_temp)
return f"({a} / {b})"
if isinstance(expr, Pow):
base = _expr_to_code_recursive(expr.base, id_to_temp)
exp = _expr_to_code_recursive(expr.exp, id_to_temp)
return f"({base} ** {exp})"
# Fallback — should not happen for known node types
return expr.to_code()
# ---------------------------------------------------------------------------
# Sparsity detection
# ---------------------------------------------------------------------------
def _find_nonzero_entries(
jac_exprs: list[list[Expr]],
) -> list[tuple[int, int]]:
"""Return ``(row, col)`` pairs for non-zero Jacobian entries."""
nz = []
for i, row in enumerate(jac_exprs):
for j, expr in enumerate(row):
if isinstance(expr, Const) and expr.value == 0.0:
continue
nz.append((i, j))
return nz
# ---------------------------------------------------------------------------
# Code generation
# ---------------------------------------------------------------------------
def compile_system(
residuals: list[Expr],
jac_exprs: list[list[Expr]],
n_res: int,
n_free: int,
) -> Callable[[dict, np.ndarray, np.ndarray], None]:
"""Compile residuals + Jacobian into a single evaluation function.
Returns a callable ``fn(env, r_vec, J)`` that fills *r_vec* and *J*
in-place. *J* must be pre-zeroed by the caller (only non-zero
entries are written).
"""
# Detect non-zero Jacobian entries
nz_entries = _find_nonzero_entries(jac_exprs)
# Collect all expressions for CSE analysis
all_exprs: list[Expr] = list(residuals)
nz_jac_exprs: list[Expr] = [jac_exprs[i][j] for i, j in nz_entries]
all_exprs.extend(nz_jac_exprs)
# CSE
id_to_temp, temps_ordered = _build_cse(all_exprs)
# Generate function body
lines: list[str] = ["def _eval(env, r_vec, J):"]
# Temporaries — temporarily remove each temp's own id so its RHS
# is expanded rather than self-referencing.
for temp_name, temp_expr in temps_ordered:
eid = id(temp_expr)
saved = id_to_temp.pop(eid)
code = _expr_to_code_recursive(temp_expr, id_to_temp)
id_to_temp[eid] = saved
lines.append(f" {temp_name} = {code}")
# Residuals
for i, r in enumerate(residuals):
code = _expr_to_code_recursive(r, id_to_temp)
lines.append(f" r_vec[{i}] = {code}")
# Jacobian (sparse)
for idx, (i, j) in enumerate(nz_entries):
code = _expr_to_code_recursive(nz_jac_exprs[idx], id_to_temp)
lines.append(f" J[{i}, {j}] = {code}")
source = "\n".join(lines)
# Compile
code_obj = compile(source, "<kindred_codegen>", "exec")
ns = dict(_CODEGEN_NS)
exec(code_obj, ns)
fn = ns["_eval"]
n_temps = len(temps_ordered)
n_nz = len(nz_entries)
n_total = n_res * n_free
log.debug(
"codegen: compiled %d residuals + %d/%d Jacobian entries, %d CSE temps",
n_res,
n_nz,
n_total,
n_temps,
)
return fn
def try_compile_system(
residuals: list[Expr],
jac_exprs: list[list[Expr]],
n_res: int,
n_free: int,
) -> Callable[[dict, np.ndarray, np.ndarray], None] | None:
"""Compile with automatic fallback. Returns ``None`` on failure."""
try:
return compile_system(residuals, jac_exprs, n_res, n_free)
except Exception:
log.debug(
"codegen: compilation failed, falling back to tree-walk eval", exc_info=True
)
return None


@@ -196,8 +196,8 @@ class PointOnLineConstraint(ConstraintBase):
p_i = self.body_i.world_point(*self.marker_i_pos)
p_j = self.body_j.world_point(*self.marker_j_pos)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy = point_line_perp_components(p_i, p_j, z_j)
return [cx, cy]
cx, cy, cz = point_line_perp_components(p_i, p_j, z_j)
return [cx, cy, cz]
class PointInPlaneConstraint(ConstraintBase):
@@ -244,8 +244,9 @@ class ParallelConstraint(ConstraintBase):
"""Parallel axes — 2 DOF removed.
marker Z-axes are parallel: z_i x z_j = 0.
2 residuals from the cross product (only 2 of 3 components are
independent for unit vectors).
3 cross-product residuals (rank 2 at the solution). Using all 3
avoids a singularity when the axes lie in the XY plane, where
dropping cz would leave the constraint blind to yaw rotations.
"""
def __init__(
@@ -264,7 +265,7 @@ class ParallelConstraint(ConstraintBase):
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, cz = cross3(z_i, z_j)
return [cx, cy]
return [cx, cy, cz]
class PerpendicularConstraint(ConstraintBase):
@@ -350,17 +351,17 @@ class ConcentricConstraint(ConstraintBase):
self.distance = distance
def residuals(self) -> List[Expr]:
# Parallel axes (2 residuals)
# Parallel axes (3 cross-product residuals, rank 2 at solution)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
# Point-on-line: marker_i origin on line through marker_j along z_j
p_i = self.body_i.world_point(*self.marker_i_pos)
p_j = self.body_j.world_point(*self.marker_j_pos)
lx, ly = point_line_perp_components(p_i, p_j, z_j)
lx, ly, lz = point_line_perp_components(p_i, p_j, z_j)
return [cx, cy, lx, ly]
return [cx, cy, cz, lx, ly, lz]
class TangentConstraint(ConstraintBase):
@@ -417,10 +418,10 @@ class PlanarConstraint(ConstraintBase):
self.offset = offset
def residuals(self) -> List[Expr]:
# Parallel normals
# Parallel normals (3 cross-product residuals, rank 2 at solution)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
# Point-in-plane
p_i = self.body_i.world_point(*self.marker_i_pos)
@@ -429,7 +430,7 @@ class PlanarConstraint(ConstraintBase):
if self.offset != 0.0:
d = d - Const(self.offset)
return [cx, cy, d]
return [cx, cy, cz, d]
class LineInPlaneConstraint(ConstraintBase):
@@ -527,12 +528,12 @@ class RevoluteConstraint(ConstraintBase):
p_j = self.body_j.world_point(*self.marker_j_pos)
pos = [p_i[0] - p_j[0], p_i[1] - p_j[1], p_i[2] - p_j[2]]
# Parallel Z-axes
# Parallel Z-axes (3 cross-product residuals, rank 2 at solution)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
return pos + [cx, cy]
return pos + [cx, cy, cz]
class CylindricalConstraint(ConstraintBase):
@@ -559,17 +560,17 @@ class CylindricalConstraint(ConstraintBase):
self.marker_j_quat = marker_j_quat
def residuals(self) -> List[Expr]:
# Parallel Z-axes
# Parallel Z-axes (3 cross-product residuals, rank 2 at solution)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
# Point-on-line
p_i = self.body_i.world_point(*self.marker_i_pos)
p_j = self.body_j.world_point(*self.marker_j_pos)
lx, ly = point_line_perp_components(p_i, p_j, z_j)
lx, ly, lz = point_line_perp_components(p_i, p_j, z_j)
return [cx, cy, lx, ly]
return [cx, cy, cz, lx, ly, lz]
class SliderConstraint(ConstraintBase):
@@ -598,22 +599,22 @@ class SliderConstraint(ConstraintBase):
self.marker_j_quat = marker_j_quat
def residuals(self) -> List[Expr]:
# Parallel Z-axes
# Parallel Z-axes (3 cross-product residuals, rank 2 at solution)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
# Point-on-line
p_i = self.body_i.world_point(*self.marker_i_pos)
p_j = self.body_j.world_point(*self.marker_j_pos)
lx, ly = point_line_perp_components(p_i, p_j, z_j)
lx, ly, lz = point_line_perp_components(p_i, p_j, z_j)
# Rotation lock: x_i . y_j = 0
x_i = marker_x_axis(self.body_i, self.marker_i_quat)
y_j = marker_y_axis(self.body_j, self.marker_j_quat)
twist = dot3(x_i, y_j)
return [cx, cy, lx, ly, twist]
return [cx, cy, cz, lx, ly, lz, twist]
class ScrewConstraint(ConstraintBase):
@@ -648,14 +649,14 @@ class ScrewConstraint(ConstraintBase):
self.pitch = pitch
def residuals(self) -> List[Expr]:
# Cylindrical residuals (4)
# Cylindrical residuals (6: 3 parallel + 3 point-on-line)
z_i = marker_z_axis(self.body_i, self.marker_i_quat)
z_j = marker_z_axis(self.body_j, self.marker_j_quat)
cx, cy, _cz = cross3(z_i, z_j)
cx, cy, cz = cross3(z_i, z_j)
p_i = self.body_i.world_point(*self.marker_i_pos)
p_j = self.body_j.world_point(*self.marker_j_pos)
lx, ly = point_line_perp_components(p_i, p_j, z_j)
lx, ly, lz = point_line_perp_components(p_i, p_j, z_j)
# Pitch coupling: axial_disp = pitch * angle / (2*pi)
# Axial displacement
@@ -684,7 +685,7 @@ class ScrewConstraint(ConstraintBase):
# = axial - pitch * qz / pi
coupling = axial - Const(self.pitch / math.pi) * rel[3]
return [cx, cy, lx, ly, coupling]
return [cx, cy, cz, lx, ly, lz, coupling]
class UniversalConstraint(ConstraintBase):


@@ -32,6 +32,7 @@ def per_entity_dof(
params: ParamTable,
bodies: dict[str, RigidBody],
rank_tol: float = 1e-8,
jac_exprs: "list[list[Expr]] | None" = None,
) -> list[EntityDOF]:
"""Compute remaining DOF for each non-grounded body.
@@ -71,9 +72,14 @@ def per_entity_dof(
# Build full Jacobian (for efficiency, compute once)
n_free = len(free)
J_full = np.empty((n_res, n_free))
for i, r in enumerate(residuals):
for j, name in enumerate(free):
J_full[i, j] = r.diff(name).simplify().eval(env)
if jac_exprs is not None:
for i in range(n_res):
for j in range(n_free):
J_full[i, j] = jac_exprs[i][j].eval(env)
else:
for i, r in enumerate(residuals):
for j, name in enumerate(free):
J_full[i, j] = r.diff(name).simplify().eval(env)
result = []
for pid, body in bodies.items():
@@ -217,6 +223,7 @@ def find_overconstrained(
params: ParamTable,
residual_ranges: list[tuple[int, int, int]],
rank_tol: float = 1e-8,
jac_exprs: "list[list[Expr]] | None" = None,
) -> list[ConstraintDiag]:
"""Identify redundant and conflicting constraints.
@@ -243,8 +250,14 @@ def find_overconstrained(
r_vec = np.empty(n_res)
for i, r in enumerate(residuals):
r_vec[i] = r.eval(env)
for j, name in enumerate(free):
J[i, j] = r.diff(name).simplify().eval(env)
if jac_exprs is not None:
for i in range(n_res):
for j in range(n_free):
J[i, j] = jac_exprs[i][j].eval(env)
else:
for i, r in enumerate(residuals):
for j, name in enumerate(free):
J[i, j] = r.diff(name).simplify().eval(env)
# Full rank
sv_full = np.linalg.svd(J, compute_uv=False)


@@ -14,11 +14,15 @@ def count_dof(
residuals: List[Expr],
params: ParamTable,
rank_tol: float = 1e-8,
jac_exprs: "List[List[Expr]] | None" = None,
) -> int:
"""Compute DOF = n_free_params - rank(Jacobian).
Evaluates the Jacobian numerically at the current parameter values
and computes its rank via SVD.
When *jac_exprs* is provided, reuses the pre-built symbolic
Jacobian instead of re-differentiating every residual.
"""
free = params.free_names()
n_free = len(free)
@@ -32,9 +36,14 @@ def count_dof(
env = params.get_env()
J = np.empty((n_res, n_free))
for i, r in enumerate(residuals):
for j, name in enumerate(free):
J[i, j] = r.diff(name).simplify().eval(env)
if jac_exprs is not None:
for i in range(n_res):
for j in range(n_free):
J[i, j] = jac_exprs[i][j].eval(env)
else:
for i, r in enumerate(residuals):
for j, name in enumerate(free):
J[i, j] = r.diff(name).simplify().eval(env)
if J.size == 0:
return n_free


@@ -24,6 +24,16 @@ class Expr:
"""Return the set of variable names in this expression."""
raise NotImplementedError
def to_code(self) -> str:
"""Emit a Python arithmetic expression string.
The returned string, when evaluated with a dict ``env`` mapping
parameter names to floats (and ``_sin``, ``_cos``, ``_sqrt``
bound to their ``math`` equivalents), produces the same result
as ``self.eval(env)``.
"""
raise NotImplementedError
# -- operator overloads --------------------------------------------------
def __add__(self, other):
@@ -90,6 +100,9 @@ class Const(Expr):
def vars(self):
return set()
def to_code(self):
return repr(self.value)
def __repr__(self):
return f"Const({self.value})"
@@ -118,6 +131,9 @@ class Var(Expr):
def vars(self):
return {self.name}
def to_code(self):
return f"env[{self.name!r}]"
def __repr__(self):
return f"Var({self.name!r})"
@@ -154,6 +170,9 @@ class Neg(Expr):
def vars(self):
return self.child.vars()
def to_code(self):
return f"(-{self.child.to_code()})"
def __repr__(self):
return f"Neg({self.child!r})"
@@ -180,6 +199,9 @@ class Sin(Expr):
def vars(self):
return self.child.vars()
def to_code(self):
return f"_sin({self.child.to_code()})"
def __repr__(self):
return f"Sin({self.child!r})"
@@ -206,6 +228,9 @@ class Cos(Expr):
def vars(self):
return self.child.vars()
def to_code(self):
return f"_cos({self.child.to_code()})"
def __repr__(self):
return f"Cos({self.child!r})"
@@ -232,6 +257,9 @@ class Sqrt(Expr):
def vars(self):
return self.child.vars()
def to_code(self):
return f"_sqrt({self.child.to_code()})"
def __repr__(self):
return f"Sqrt({self.child!r})"
@@ -266,6 +294,9 @@ class Add(Expr):
def vars(self):
return self.a.vars() | self.b.vars()
def to_code(self):
return f"({self.a.to_code()} + {self.b.to_code()})"
def __repr__(self):
return f"Add({self.a!r}, {self.b!r})"
@@ -297,6 +328,9 @@ class Sub(Expr):
def vars(self):
return self.a.vars() | self.b.vars()
def to_code(self):
return f"({self.a.to_code()} - {self.b.to_code()})"
def __repr__(self):
return f"Sub({self.a!r}, {self.b!r})"
@@ -337,6 +371,9 @@ class Mul(Expr):
def vars(self):
return self.a.vars() | self.b.vars()
def to_code(self):
return f"({self.a.to_code()} * {self.b.to_code()})"
def __repr__(self):
return f"Mul({self.a!r}, {self.b!r})"
@@ -372,6 +409,9 @@ class Div(Expr):
def vars(self):
return self.a.vars() | self.b.vars()
def to_code(self):
return f"({self.a.to_code()} / {self.b.to_code()})"
def __repr__(self):
return f"Div({self.a!r}, {self.b!r})"
@@ -414,6 +454,9 @@ class Pow(Expr):
def vars(self):
return self.base.vars() | self.exp.vars()
def to_code(self):
return f"({self.base.to_code()} ** {self.exp.to_code()})"
def __repr__(self):
return f"Pow({self.base!r}, {self.exp!r})"


@@ -117,15 +117,15 @@ def point_line_perp_components(
point: Vec3,
line_origin: Vec3,
line_dir: Vec3,
-) -> tuple[Expr, Expr]:
-    """Two independent perpendicular-distance components from point to line.
+) -> tuple[Expr, Expr, Expr]:
+    """Three perpendicular-distance components from point to line.
     The line passes through line_origin along line_dir.
-    Returns the x and y components of (point - line_origin) x line_dir,
-    which are zero when the point lies on the line.
+    Returns all 3 components of (point - line_origin) x line_dir,
+    which are zero when the point lies on the line. Only 2 of 3 are
+    independent, but using all 3 avoids a singularity when the line
+    direction aligns with a coordinate axis.
     """
     d = sub3(point, line_origin)
     cx, cy, cz = cross3(d, line_dir)
-    # All three components of d x line_dir are zero when d is parallel
-    # to line_dir, but only 2 are independent. We return x and y.
-    return cx, cy
+    return cx, cy, cz
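The singularity the third component removes is easy to show numerically. A minimal sketch with plain float tuples (`cross3` reimplemented locally for the demo, not imported from the module): when the line direction is the x-axis, a point offset purely in y produces zero in both of the first two cross components, so the old two-component residual could not see the violation.

```python
def cross3(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

# Line along +x through the origin; the point sits 1 unit off the line in y.
d = (0.0, 1.0, 0.0)          # point - line_origin
line_dir = (1.0, 0.0, 0.0)
cx, cy, cz = cross3(d, line_dir)
# (cx, cy) == (0.0, 0.0): the two-component residual misses the offset.
# cz == -1.0 is the component that catches it.
```

Returning all three components keeps at least two of them informative for any line orientation.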

View File

@@ -3,7 +3,7 @@
from __future__ import annotations
import math
-from typing import List
+from typing import Callable, List
import numpy as np
@@ -19,6 +19,8 @@ def newton_solve(
tol: float = 1e-10,
post_step: "Callable[[ParamTable], None] | None" = None,
weight_vector: "np.ndarray | None" = None,
jac_exprs: "List[List[Expr]] | None" = None,
compiled_eval: "Callable | None" = None,
) -> bool:
"""Solve ``residuals == 0`` by Newton-Raphson.
@@ -43,6 +45,12 @@ def newton_solve(
lstsq step is column-scaled to produce the weighted
minimum-norm solution (prefer small movements in
high-weight parameters).
jac_exprs:
Pre-built symbolic Jacobian (list-of-lists of Expr). When
provided, skips the ``diff().simplify()`` step.
compiled_eval:
Pre-compiled evaluation function from :mod:`codegen`. When
provided, uses flat compiled code instead of tree-walk eval.
Returns True if converged within *max_iter* iterations.
"""
@@ -53,29 +61,41 @@ def newton_solve(
if n_free == 0 or n_res == 0:
return True
-    # Build symbolic Jacobian once (list-of-lists of simplified Expr)
-    jac_exprs: List[List[Expr]] = []
-    for r in residuals:
-        row = []
-        for name in free:
-            row.append(r.diff(name).simplify())
-        jac_exprs.append(row)
+    # Build symbolic Jacobian once (or reuse pre-built)
+    if jac_exprs is None:
+        jac_exprs = []
+        for r in residuals:
+            row = []
+            for name in free:
+                row.append(r.diff(name).simplify())
+            jac_exprs.append(row)
+    # Try compilation if not provided
+    if compiled_eval is None:
+        from .codegen import try_compile_system
+        compiled_eval = try_compile_system(residuals, jac_exprs, n_res, n_free)
+    # Pre-allocate arrays reused across iterations
+    r_vec = np.empty(n_res)
+    J = np.zeros((n_res, n_free))
     for _it in range(max_iter):
-        env = params.get_env()
-        # Evaluate residual vector
-        r_vec = np.array([r.eval(env) for r in residuals])
+        if compiled_eval is not None:
+            J[:] = 0.0
+            compiled_eval(params.env_ref(), r_vec, J)
+        else:
+            env = params.get_env()
+            for i, r in enumerate(residuals):
+                r_vec[i] = r.eval(env)
+            for i in range(n_res):
+                for j in range(n_free):
+                    J[i, j] = jac_exprs[i][j].eval(env)
         r_norm = np.linalg.norm(r_vec)
         if r_norm < tol:
             return True
-        # Evaluate Jacobian matrix
-        J = np.empty((n_res, n_free))
-        for i in range(n_res):
-            for j in range(n_free):
-                J[i, j] = jac_exprs[i][j].eval(env)
# Solve J @ dx = -r (least-squares handles rank-deficient)
if weight_vector is not None:
# Column-scale J by W^{-1/2} for weighted minimum-norm
@@ -100,9 +120,13 @@ def newton_solve(
         _renormalize_quats(params, quat_groups)
     # Check final residual
-    env = params.get_env()
-    r_vec = np.array([r.eval(env) for r in residuals])
-    return np.linalg.norm(r_vec) < tol
+    if compiled_eval is not None:
+        compiled_eval(params.env_ref(), r_vec, J)
+    else:
+        env = params.get_env()
+        for i, r in enumerate(residuals):
+            r_vec[i] = r.eval(env)
+    return bool(np.linalg.norm(r_vec) < tol)
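The `jac_exprs` / `compiled_eval` reuse amounts to hoisting Jacobian construction out of the solve loop. A self-contained sketch of the same Newton/lstsq iteration on a hypothetical two-equation system (analytic `jacobian` standing in for the pre-built symbolic Jacobian; not the solver's Expr trees):

```python
import numpy as np

# Toy system: r1 = x + y - 3, r2 = x*y - 2, roots at (1, 2) and (2, 1).
def residuals(v):
    x, y = v
    return np.array([x + y - 3.0, x * y - 2.0])

def jacobian(v):
    # Built once, analytically -- analogous to passing jac_exprs in.
    x, y = v
    return np.array([[1.0, 1.0], [y, x]])

v = np.array([0.5, 2.5])
for _ in range(50):
    r = residuals(v)
    if np.linalg.norm(r) < 1e-12:
        break
    # lstsq handles rank-deficient J, as in the solver's step.
    dx, *_ = np.linalg.lstsq(jacobian(v), -r, rcond=None)
    v = v + dx
# v converges to the nearby root (1, 2)
```

In the real solver the payoff is that `diff().simplify()` runs once per drag, not once per frame.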
def _renormalize_quats(

View File

@@ -60,6 +60,14 @@ class ParamTable:
"""Return a snapshot of all current values (for Expr.eval)."""
return dict(self._values)
def env_ref(self) -> Dict[str, float]:
"""Return a direct reference to the internal values dict.
Faster than :meth:`get_env` (no copy). Safe when the caller
only reads during evaluation and mutates via :meth:`set_free_vector`.
"""
return self._values
def free_names(self) -> List[str]:
"""Return ordered list of free (non-fixed) parameter names."""
return list(self._free_order)
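The copy-vs-reference trade-off behind `env_ref()` can be shown with a stripped-down stand-in for `ParamTable` (hypothetical class, not the real one):

```python
class TinyTable:
    """Minimal stand-in for ParamTable's value storage."""
    def __init__(self):
        self._values = {"part/tx": 0.0}
    def get_env(self):
        return dict(self._values)   # snapshot: safe, but allocates per call
    def env_ref(self):
        return self._values         # live reference: zero-copy, read-only use

t = TinyTable()
snapshot = t.get_env()
ref = t.env_ref()
t._values["part/tx"] = 5.0
# snapshot["part/tx"] is still 0.0; ref["part/tx"] sees 5.0
```

A compiled evaluator called every Newton iteration only reads the dict, so handing it the live reference skips one dict copy per iteration.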

View File

@@ -21,12 +21,21 @@ import numpy as np
from .constraints import (
AngleConstraint,
ConcentricConstraint,
ConstraintBase,
CylindricalConstraint,
DistancePointPointConstraint,
LineInPlaneConstraint,
ParallelConstraint,
PerpendicularConstraint,
PlanarConstraint,
PointInPlaneConstraint,
RevoluteConstraint,
ScrewConstraint,
SliderConstraint,
UniversalConstraint,
)
-from .geometry import cross3, dot3, marker_z_axis
+from .geometry import cross3, dot3, marker_z_axis, point_plane_distance, sub3
from .params import ParamTable
@@ -107,6 +116,33 @@ def _build_half_space(
if isinstance(obj, PerpendicularConstraint):
return _perpendicular_half_space(obj, constraint_idx, env, params)
if isinstance(obj, PlanarConstraint):
return _planar_half_space(obj, constraint_idx, env, params)
if isinstance(obj, RevoluteConstraint):
return _revolute_half_space(obj, constraint_idx, env, params)
if isinstance(obj, ConcentricConstraint):
return _concentric_half_space(obj, constraint_idx, env, params)
if isinstance(obj, PointInPlaneConstraint):
return _point_in_plane_half_space(obj, constraint_idx, env, params)
if isinstance(obj, LineInPlaneConstraint):
return _line_in_plane_half_space(obj, constraint_idx, env, params)
if isinstance(obj, CylindricalConstraint):
return _axis_direction_half_space(obj, constraint_idx, env)
if isinstance(obj, SliderConstraint):
return _axis_direction_half_space(obj, constraint_idx, env)
if isinstance(obj, ScrewConstraint):
return _axis_direction_half_space(obj, constraint_idx, env)
if isinstance(obj, UniversalConstraint):
return _universal_half_space(obj, constraint_idx, env)
return None
@@ -212,6 +248,312 @@ def _parallel_half_space(
)
def _planar_half_space(
obj: PlanarConstraint,
constraint_idx: int,
env: dict[str, float],
params: ParamTable,
) -> HalfSpace | None:
"""Half-space for Planar: track which side of the plane the point is on
AND which direction the normals face.
A Planar constraint has parallel normals (cross product = 0) plus
point-in-plane (signed distance = 0). Both have branch ambiguity:
the normals can be same-direction or opposite, and the point can
approach the plane from either side. We track the signed distance
from marker_i to the plane defined by marker_j — this captures
the plane-side and is the primary drift mode during drag.
"""
# Point-in-plane signed distance as indicator
p_i = obj.body_i.world_point(*obj.marker_i_pos)
p_j = obj.body_j.world_point(*obj.marker_j_pos)
z_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
d_expr = point_plane_distance(p_i, p_j, z_j)
d_val = d_expr.eval(env)
# Also track normal alignment (same as Parallel half-space)
z_i = marker_z_axis(obj.body_i, obj.marker_i_quat)
dot_expr = dot3(z_i, z_j)
dot_val = dot_expr.eval(env)
normal_ref_sign = math.copysign(1.0, dot_val) if abs(dot_val) > 1e-14 else 1.0
# If offset is zero and distance is near-zero, we still need the normal
# direction indicator to prevent flipping through the plane.
# Use the normal dot product as the primary indicator when the point
# is already on the plane (distance ≈ 0).
if abs(d_val) < 1e-10:
# Point is on the plane — track normal direction instead
def indicator(e: dict[str, float]) -> float:
return dot_expr.eval(e)
ref_sign = normal_ref_sign
else:
# Point is off-plane — track which side
def indicator(e: dict[str, float]) -> float:
return d_expr.eval(e)
ref_sign = math.copysign(1.0, d_val)
# Correction: reflect the moving body's position through the plane
moving_body = obj.body_j if not obj.body_j.grounded else obj.body_i
if moving_body.grounded:
return None
px_name = f"{moving_body.part_id}/tx"
py_name = f"{moving_body.part_id}/ty"
pz_name = f"{moving_body.part_id}/tz"
def correction(p: ParamTable, _val: float) -> None:
e = p.get_env()
# Recompute signed distance and normal direction
d_cur = d_expr.eval(e)
nx = z_j[0].eval(e)
ny = z_j[1].eval(e)
nz = z_j[2].eval(e)
n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
if n_len < 1e-15:
return
nx, ny, nz = nx / n_len, ny / n_len, nz / n_len
# Reflect through plane: move body by -2*d along normal
sign = -1.0 if moving_body is obj.body_j else 1.0
if not p.is_fixed(px_name):
p.set_value(px_name, p.get_value(px_name) + sign * 2.0 * d_cur * nx)
if not p.is_fixed(py_name):
p.set_value(py_name, p.get_value(py_name) + sign * 2.0 * d_cur * ny)
if not p.is_fixed(pz_name):
p.set_value(pz_name, p.get_value(pz_name) + sign * 2.0 * d_cur * nz)
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=indicator,
correction_fn=correction,
)
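The correction's "move by -2*d along the normal" step is a reflection through the plane. A toy check with made-up numbers and a unit normal:

```python
def signed_distance(p, plane_pt, n):
    """Signed distance from point p to the plane through plane_pt with unit normal n."""
    return sum((p[i] - plane_pt[i]) * n[i] for i in range(3))

p = (0.0, 0.0, 3.0)
plane_pt = (0.0, 0.0, 1.0)
n = (0.0, 0.0, 1.0)
d = signed_distance(p, plane_pt, n)                        # +2.0, above the plane
reflected = tuple(p[i] - 2.0 * d * n[i] for i in range(3))
# reflected == (0.0, 0.0, -1.0): same distance from the plane, opposite side
```

Moving the body this way flips the indicator's sign back to the reference side, after which Newton re-converges from a configuration on the correct branch.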
def _revolute_half_space(
obj: RevoluteConstraint,
constraint_idx: int,
env: dict[str, float],
params: ParamTable,
) -> HalfSpace | None:
"""Half-space for Revolute: track hinge axis direction.
A revolute has coincident origins + parallel Z-axes. The parallel
axes can flip direction (same ambiguity as Parallel). Track
dot(z_i, z_j) to prevent the axis from inverting.
"""
z_i = marker_z_axis(obj.body_i, obj.marker_i_quat)
z_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
dot_expr = dot3(z_i, z_j)
ref_val = dot_expr.eval(env)
ref_sign = math.copysign(1.0, ref_val) if abs(ref_val) > 1e-14 else 1.0
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: dot_expr.eval(e),
)
def _concentric_half_space(
obj: ConcentricConstraint,
constraint_idx: int,
env: dict[str, float],
params: ParamTable,
) -> HalfSpace | None:
"""Half-space for Concentric: track axis direction.
Concentric has parallel axes + point-on-line. The parallel axes
can flip direction. Track dot(z_i, z_j).
"""
z_i = marker_z_axis(obj.body_i, obj.marker_i_quat)
z_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
dot_expr = dot3(z_i, z_j)
ref_val = dot_expr.eval(env)
ref_sign = math.copysign(1.0, ref_val) if abs(ref_val) > 1e-14 else 1.0
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: dot_expr.eval(e),
)
def _point_in_plane_half_space(
obj: PointInPlaneConstraint,
constraint_idx: int,
env: dict[str, float],
params: ParamTable,
) -> HalfSpace | None:
"""Half-space for PointInPlane: track which side of the plane.
The signed distance to the plane can be satisfied from either side.
Track which side the initial configuration is on.
"""
p_i = obj.body_i.world_point(*obj.marker_i_pos)
p_j = obj.body_j.world_point(*obj.marker_j_pos)
n_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
d_expr = point_plane_distance(p_i, p_j, n_j)
d_val = d_expr.eval(env)
if abs(d_val) < 1e-10:
return None # already on the plane, no branch to track
ref_sign = math.copysign(1.0, d_val)
moving_body = obj.body_j if not obj.body_j.grounded else obj.body_i
if moving_body.grounded:
return None
px_name = f"{moving_body.part_id}/tx"
py_name = f"{moving_body.part_id}/ty"
pz_name = f"{moving_body.part_id}/tz"
def correction(p: ParamTable, _val: float) -> None:
e = p.get_env()
d_cur = d_expr.eval(e)
nx = n_j[0].eval(e)
ny = n_j[1].eval(e)
nz = n_j[2].eval(e)
n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
if n_len < 1e-15:
return
nx, ny, nz = nx / n_len, ny / n_len, nz / n_len
sign = -1.0 if moving_body is obj.body_j else 1.0
if not p.is_fixed(px_name):
p.set_value(px_name, p.get_value(px_name) + sign * 2.0 * d_cur * nx)
if not p.is_fixed(py_name):
p.set_value(py_name, p.get_value(py_name) + sign * 2.0 * d_cur * ny)
if not p.is_fixed(pz_name):
p.set_value(pz_name, p.get_value(pz_name) + sign * 2.0 * d_cur * nz)
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: d_expr.eval(e),
correction_fn=correction,
)
def _line_in_plane_half_space(
obj: LineInPlaneConstraint,
constraint_idx: int,
env: dict[str, float],
params: ParamTable,
) -> HalfSpace | None:
"""Half-space for LineInPlane: track which side of the plane.
Same plane-side ambiguity as PointInPlane.
"""
p_i = obj.body_i.world_point(*obj.marker_i_pos)
p_j = obj.body_j.world_point(*obj.marker_j_pos)
n_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
d_expr = point_plane_distance(p_i, p_j, n_j)
d_val = d_expr.eval(env)
if abs(d_val) < 1e-10:
return None
ref_sign = math.copysign(1.0, d_val)
moving_body = obj.body_j if not obj.body_j.grounded else obj.body_i
if moving_body.grounded:
return None
px_name = f"{moving_body.part_id}/tx"
py_name = f"{moving_body.part_id}/ty"
pz_name = f"{moving_body.part_id}/tz"
def correction(p: ParamTable, _val: float) -> None:
e = p.get_env()
d_cur = d_expr.eval(e)
nx = n_j[0].eval(e)
ny = n_j[1].eval(e)
nz = n_j[2].eval(e)
n_len = math.sqrt(nx * nx + ny * ny + nz * nz)
if n_len < 1e-15:
return
nx, ny, nz = nx / n_len, ny / n_len, nz / n_len
sign = -1.0 if moving_body is obj.body_j else 1.0
if not p.is_fixed(px_name):
p.set_value(px_name, p.get_value(px_name) + sign * 2.0 * d_cur * nx)
if not p.is_fixed(py_name):
p.set_value(py_name, p.get_value(py_name) + sign * 2.0 * d_cur * ny)
if not p.is_fixed(pz_name):
p.set_value(pz_name, p.get_value(pz_name) + sign * 2.0 * d_cur * nz)
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: d_expr.eval(e),
correction_fn=correction,
)
def _axis_direction_half_space(
obj,
constraint_idx: int,
env: dict[str, float],
) -> HalfSpace | None:
"""Half-space for any constraint with parallel Z-axes (Cylindrical, Slider, Screw).
Tracks dot(z_i, z_j) to prevent axis inversion.
"""
z_i = marker_z_axis(obj.body_i, obj.marker_i_quat)
z_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
dot_expr = dot3(z_i, z_j)
ref_val = dot_expr.eval(env)
ref_sign = math.copysign(1.0, ref_val) if abs(ref_val) > 1e-14 else 1.0
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: dot_expr.eval(e),
)
def _universal_half_space(
obj: UniversalConstraint,
constraint_idx: int,
env: dict[str, float],
) -> HalfSpace | None:
"""Half-space for Universal: track which quadrant of perpendicularity.
Universal has dot(z_i, z_j) = 0 (perpendicular). The cross product
sign distinguishes which "side" of perpendicular.
"""
z_i = marker_z_axis(obj.body_i, obj.marker_i_quat)
z_j = marker_z_axis(obj.body_j, obj.marker_j_quat)
cx, cy, cz = cross3(z_i, z_j)
cx_val = cx.eval(env)
cy_val = cy.eval(env)
cz_val = cz.eval(env)
components = [
(abs(cx_val), cx, cx_val),
(abs(cy_val), cy, cy_val),
(abs(cz_val), cz, cz_val),
]
_, best_expr, best_val = max(components, key=lambda t: t[0])
if abs(best_val) < 1e-14:
return None
ref_sign = math.copysign(1.0, best_val)
return HalfSpace(
constraint_index=constraint_idx,
reference_sign=ref_sign,
indicator_fn=lambda e: best_expr.eval(e),
)
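The indicator selection can be walked through with plain tuples (local `cross3` helper, hypothetical axis values): for perpendicular unit axes the cross product is itself a unit vector, and its dominant component flips sign exactly when one axis reverses, which is the branch jump being guarded.

```python
def cross3(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

z_i = (1.0, 0.0, 0.0)
z_j = (0.0, 1.0, 0.0)                          # perpendicular, as Universal requires
best = max(cross3(z_i, z_j), key=abs)          # dominant component: +1.0
flipped = max(cross3(z_i, (0.0, -1.0, 0.0)), key=abs)   # axis reversed: -1.0
```

Picking the largest-magnitude component keeps the indicator numerically stable; a near-zero component would make the sign meaningless, which is why the function returns `None` below `1e-14`.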
# ============================================================================
# Minimum-movement weighting
# ============================================================================

View File

@@ -4,6 +4,7 @@ expression-based Newton-Raphson solver."""
from __future__ import annotations
import logging
import math
import time
import kcsolve
@@ -91,6 +92,7 @@ class KindredSolver(kcsolve.IKCSolver):
super().__init__()
self._drag_ctx = None
self._drag_parts = None
self._drag_cache = None
self._limits_warned = False
def name(self):
@@ -129,8 +131,7 @@ class KindredSolver(kcsolve.IKCSolver):
for c in ctx.constraints:
if c.limits:
log.warning(
"Joint limits on '%s' ignored "
"(not yet supported by Kindred solver)",
"Joint limits on '%s' ignored (not yet supported by Kindred solver)",
c.id,
)
self._limits_warned = True
@@ -142,8 +143,6 @@ class KindredSolver(kcsolve.IKCSolver):
system.constraint_indices,
system.params,
)
-        weight_vec = build_weight_vector(system.params)
if half_spaces:
post_step_fn = lambda p: apply_half_space_correction(p, half_spaces)
else:
@@ -153,7 +152,12 @@ class KindredSolver(kcsolve.IKCSolver):
residuals = substitution_pass(system.all_residuals, system.params)
residuals = single_equation_pass(residuals, system.params)
# Build weight vector *after* pre-passes so its length matches the
# remaining free parameters (single_equation_pass may fix some).
weight_vec = build_weight_vector(system.params)
# Solve (decomposed for large assemblies, monolithic for small)
jac_exprs = None # may be populated by _monolithic_solve
if n_free_bodies >= _DECOMPOSE_THRESHOLD:
grounded_ids = {pid for pid, b in system.bodies.items() if b.grounded}
clusters = decompose(ctx.constraints, grounded_ids)
@@ -172,7 +176,7 @@ class KindredSolver(kcsolve.IKCSolver):
system.params,
)
else:
-            converged = _monolithic_solve(
+            converged, jac_exprs = _monolithic_solve(
residuals,
system.params,
system.quat_groups,
@@ -185,7 +189,7 @@ class KindredSolver(kcsolve.IKCSolver):
n_free_bodies,
_DECOMPOSE_THRESHOLD,
)
-        converged = _monolithic_solve(
+        converged, jac_exprs = _monolithic_solve(
residuals,
system.params,
system.quat_groups,
@@ -194,7 +198,7 @@ class KindredSolver(kcsolve.IKCSolver):
)
# DOF
-        dof = count_dof(residuals, system.params)
+        dof = count_dof(residuals, system.params, jac_exprs=jac_exprs)
# Build result
result = kcsolve.SolveResult()
@@ -210,6 +214,7 @@ class KindredSolver(kcsolve.IKCSolver):
system.params,
system.residual_ranges,
ctx,
jac_exprs=jac_exprs,
)
result.placements = _extract_placements(system.params, system.bodies)
@@ -242,8 +247,103 @@ class KindredSolver(kcsolve.IKCSolver):
self._drag_ctx = ctx
self._drag_parts = set(drag_parts)
self._drag_step_count = 0
-        result = self.solve(ctx)
-        log.info("pre_drag: initial solve status=%s", result.status)
# Build the system once and cache everything for drag_step() reuse.
t0 = time.perf_counter()
system = _build_system(ctx)
half_spaces = compute_half_spaces(
system.constraint_objs,
system.constraint_indices,
system.params,
)
if half_spaces:
post_step_fn = lambda p: apply_half_space_correction(p, half_spaces)
else:
post_step_fn = None
residuals = substitution_pass(system.all_residuals, system.params)
# NOTE: single_equation_pass is intentionally skipped for drag.
# It permanently fixes variables and removes residuals from the
# list. During drag the dragged part's parameters change each
# frame, which can invalidate those analytic solutions and cause
# constraints (e.g. Planar distance=0) to stop being enforced.
# The substitution pass alone is safe because it only replaces
# genuinely grounded parameters with constants.
# Build weight vector *after* pre-passes so its length matches the
# remaining free parameters (single_equation_pass may fix some).
weight_vec = build_weight_vector(system.params)
# Build symbolic Jacobian + compile once
from .codegen import try_compile_system
free = system.params.free_names()
n_res = len(residuals)
n_free = len(free)
jac_exprs = [[r.diff(name).simplify() for name in free] for r in residuals]
compiled_eval = try_compile_system(residuals, jac_exprs, n_res, n_free)
# Initial solve
converged = newton_solve(
residuals,
system.params,
quat_groups=system.quat_groups,
max_iter=100,
tol=1e-10,
post_step=post_step_fn,
weight_vector=weight_vec,
jac_exprs=jac_exprs,
compiled_eval=compiled_eval,
)
if not converged:
converged = bfgs_solve(
residuals,
system.params,
quat_groups=system.quat_groups,
max_iter=200,
tol=1e-10,
weight_vector=weight_vec,
jac_exprs=jac_exprs,
compiled_eval=compiled_eval,
)
# Cache for drag_step() reuse
cache = _DragCache()
cache.system = system
cache.residuals = residuals
cache.jac_exprs = jac_exprs
cache.compiled_eval = compiled_eval
cache.half_spaces = half_spaces
cache.weight_vec = weight_vec
cache.post_step_fn = post_step_fn
# Snapshot solved quaternions for continuity enforcement in drag_step()
env = system.params.get_env()
cache.pre_step_quats = {}
for body in system.bodies.values():
if not body.grounded:
cache.pre_step_quats[body.part_id] = body.extract_quaternion(env)
self._drag_cache = cache
# Build result
dof = count_dof(residuals, system.params, jac_exprs=jac_exprs)
result = kcsolve.SolveResult()
result.status = (
kcsolve.SolveStatus.Success if converged else kcsolve.SolveStatus.Failed
)
result.dof = dof
result.placements = _extract_placements(system.params, system.bodies)
elapsed = (time.perf_counter() - t0) * 1000
log.info(
"pre_drag: initial solve %s in %.1f ms — dof=%d",
"converged" if converged else "FAILED",
elapsed,
dof,
)
return result
def drag_step(self, drag_placements):
@@ -252,19 +352,89 @@ class KindredSolver(kcsolve.IKCSolver):
log.warning("drag_step: no drag context (pre_drag not called?)")
return kcsolve.SolveResult()
self._drag_step_count = getattr(self, "_drag_step_count", 0) + 1
-        # Update dragged part placements in ctx (for caller consistency)
-        for pr in drag_placements:
-            for part in ctx.parts:
-                if part.id == pr.id:
-                    part.placement = pr.placement
-                    break
-        t0 = time.perf_counter()
-        result = self.solve(ctx)
-        elapsed = (time.perf_counter() - t0) * 1000
-        if result.status != kcsolve.SolveStatus.Success:
-            log.warning(
-                "drag_step #%d: solve %s in %.1f ms",
cache = getattr(self, "_drag_cache", None)
if cache is None:
# Fallback: no cache, do a full solve
log.debug(
"drag_step #%d: no cache, falling back to full solve",
self._drag_step_count,
)
return self.solve(ctx)
t0 = time.perf_counter()
params = cache.system.params
# Update only the dragged part's 7 parameter values
for pr in drag_placements:
pfx = pr.id + "/"
params.set_value(pfx + "tx", pr.placement.position[0])
params.set_value(pfx + "ty", pr.placement.position[1])
params.set_value(pfx + "tz", pr.placement.position[2])
params.set_value(pfx + "qw", pr.placement.quaternion[0])
params.set_value(pfx + "qx", pr.placement.quaternion[1])
params.set_value(pfx + "qy", pr.placement.quaternion[2])
params.set_value(pfx + "qz", pr.placement.quaternion[3])
# Solve with cached artifacts — no rebuild
converged = newton_solve(
cache.residuals,
params,
quat_groups=cache.system.quat_groups,
max_iter=100,
tol=1e-10,
post_step=cache.post_step_fn,
weight_vector=cache.weight_vec,
jac_exprs=cache.jac_exprs,
compiled_eval=cache.compiled_eval,
)
if not converged:
converged = bfgs_solve(
cache.residuals,
params,
quat_groups=cache.system.quat_groups,
max_iter=200,
tol=1e-10,
weight_vector=cache.weight_vec,
jac_exprs=cache.jac_exprs,
compiled_eval=cache.compiled_eval,
)
# Quaternion continuity: ensure solved quaternions stay in the
# same hemisphere as the previous step. q and -q encode the
# same rotation, but the C++ side measures angle between the
# old and new quaternion — if we're in the -q branch, that
# shows up as a ~340° flip and gets rejected.
dragged_ids = self._drag_parts or set()
_enforce_quat_continuity(
params, cache.system.bodies, cache.pre_step_quats, dragged_ids
)
# Update the stored quaternions for the next drag step
env = params.get_env()
for body in cache.system.bodies.values():
if not body.grounded:
cache.pre_step_quats[body.part_id] = body.extract_quaternion(env)
result = kcsolve.SolveResult()
result.status = (
kcsolve.SolveStatus.Success if converged else kcsolve.SolveStatus.Failed
)
result.dof = -1 # skip DOF counting during drag for speed
result.placements = _extract_placements(params, cache.system.bodies)
elapsed = (time.perf_counter() - t0) * 1000
         if not converged:
             log.warning(
                 "drag_step #%d: solve FAILED in %.1f ms",
                 self._drag_step_count,
-                result.status,
                 elapsed,
             )
else:
@@ -281,6 +451,7 @@ class KindredSolver(kcsolve.IKCSolver):
self._drag_ctx = None
self._drag_parts = None
self._drag_step_count = 0
self._drag_cache = None
# ── Diagnostics ─────────────────────────────────────────────────
@@ -298,6 +469,27 @@ class KindredSolver(kcsolve.IKCSolver):
return True
class _DragCache:
"""Cached artifacts from pre_drag() reused across drag_step() calls.
During interactive drag the constraint topology is invariant — only
the dragged part's parameter values change. Caching the built
system, symbolic Jacobian, and compiled evaluator eliminates the
expensive rebuild overhead (~150 ms) on every frame.
"""
__slots__ = (
"system", # _System — owns ParamTable + Expr trees
"residuals", # list[Expr] — after substitution + single-equation pass
"jac_exprs", # list[list[Expr]] — symbolic Jacobian
"compiled_eval", # Callable or None
"half_spaces", # list[HalfSpace]
"weight_vec", # ndarray or None
"post_step_fn", # Callable or None
"pre_step_quats", # dict[str, tuple] — last-accepted quaternions per body
)
class _System:
"""Intermediate representation of a built constraint system."""
@@ -312,6 +504,98 @@ class _System:
)
def _enforce_quat_continuity(
params: ParamTable,
bodies: dict,
pre_step_quats: dict,
dragged_ids: set,
) -> None:
"""Ensure solved quaternions stay close to the previous step.
Two levels of correction, applied to ALL non-grounded bodies
(including dragged parts, whose params Newton re-solves):
1. **Hemisphere check** (cheap): if dot(q_prev, q_solved) < 0, negate
q_solved. This catches the common q-vs-(-q) sign flip.
2. **Rotation angle check**: compute the rotation angle from q_prev
to q_solved using the same formula as the C++ validator
(2*acos(w) of the relative quaternion). If the angle exceeds
the C++ threshold (91°), reset the body's quaternion to q_prev.
This catches deeper branch jumps where the solver converged to a
geometrically different but constraint-satisfying orientation.
The next Newton iteration from the caller will re-converge from
the safer starting point.
This applies to dragged parts too: the GUI sets the dragged part's
params to the mouse-projected placement, then Newton re-solves all
free params (including the dragged part's) to satisfy constraints.
The solver can converge to an equivalent quaternion on the opposite
branch, which the C++ validateNewPlacements() rejects as a >91°
flip.
"""
_MAX_ANGLE = 91.0 * math.pi / 180.0 # match C++ threshold
for body in bodies.values():
if body.grounded:
continue
prev = pre_step_quats.get(body.part_id)
if prev is None:
continue
pfx = body.part_id + "/"
qw = params.get_value(pfx + "qw")
qx = params.get_value(pfx + "qx")
qy = params.get_value(pfx + "qy")
qz = params.get_value(pfx + "qz")
# Level 1: hemisphere check (standard SLERP short-arc correction)
dot = prev[0] * qw + prev[1] * qx + prev[2] * qy + prev[3] * qz
if dot < 0.0:
qw, qx, qy, qz = -qw, -qx, -qy, -qz
params.set_value(pfx + "qw", qw)
params.set_value(pfx + "qx", qx)
params.set_value(pfx + "qy", qy)
params.set_value(pfx + "qz", qz)
# Level 2: rotation angle check (catches branch jumps beyond sign flip)
# Compute relative quaternion: q_rel = q_new * conj(q_prev)
pw, px, py, pz = prev
rel_w = qw * pw + qx * px + qy * py + qz * pz
rel_x = qx * pw - qw * px - qy * pz + qz * py
rel_y = qy * pw - qw * py - qz * px + qx * pz
rel_z = qz * pw - qw * pz - qx * py + qy * px
# Normalize
rel_norm = math.sqrt(
rel_w * rel_w + rel_x * rel_x + rel_y * rel_y + rel_z * rel_z
)
if rel_norm > 1e-15:
rel_w /= rel_norm
rel_w = max(-1.0, min(1.0, rel_w))
# C++ evaluateVector: angle = 2 * acos(w)
if -1.0 < rel_w < 1.0:
angle = 2.0 * math.acos(rel_w)
else:
angle = 0.0
if abs(angle) > _MAX_ANGLE:
# The solver jumped to a different constraint branch.
# Reset to the previous step's quaternion — the caller's
# Newton solve was already complete, so this just ensures
# the output stays near the previous configuration.
log.debug(
"_enforce_quat_continuity: %s jumped %.1f deg, "
"resetting to previous quaternion",
body.part_id,
math.degrees(angle),
)
params.set_value(pfx + "qw", pw)
params.set_value(pfx + "qx", px)
params.set_value(pfx + "qy", py)
params.set_value(pfx + "qz", pz)
def _build_system(ctx):
"""Build the solver's internal representation from a SolveContext.
@@ -419,13 +703,15 @@ def _build_system(ctx):
return system
-def _run_diagnostics(residuals, params, residual_ranges, ctx):
+def _run_diagnostics(residuals, params, residual_ranges, ctx, jac_exprs=None):
"""Run overconstrained detection and return kcsolve diagnostics."""
diagnostics = []
if not hasattr(kcsolve, "ConstraintDiagnostic"):
return diagnostics
-    diags = find_overconstrained(residuals, params, residual_ranges)
+    diags = find_overconstrained(
+        residuals, params, residual_ranges, jac_exprs=jac_exprs
+    )
for d in diags:
cd = kcsolve.ConstraintDiagnostic()
cd.constraint_id = ctx.constraints[d.constraint_index].id
@@ -458,7 +744,23 @@ def _extract_placements(params, bodies):
def _monolithic_solve(
all_residuals, params, quat_groups, post_step=None, weight_vector=None
):
"""Newton-Raphson solve with BFGS fallback on the full system."""
"""Newton-Raphson solve with BFGS fallback on the full system.
Returns ``(converged, jac_exprs)`` so the caller can reuse the
symbolic Jacobian for DOF counting / diagnostics.
"""
from .codegen import try_compile_system
free = params.free_names()
n_res = len(all_residuals)
n_free = len(free)
# Build symbolic Jacobian once
jac_exprs = [[r.diff(name).simplify() for name in free] for r in all_residuals]
# Compile once
compiled_eval = try_compile_system(all_residuals, jac_exprs, n_res, n_free)
t0 = time.perf_counter()
converged = newton_solve(
all_residuals,
@@ -468,6 +770,8 @@ def _monolithic_solve(
tol=1e-10,
post_step=post_step,
weight_vector=weight_vector,
jac_exprs=jac_exprs,
compiled_eval=compiled_eval,
)
nr_ms = (time.perf_counter() - t0) * 1000
if not converged:
@@ -482,6 +786,8 @@ def _monolithic_solve(
max_iter=200,
tol=1e-10,
weight_vector=weight_vector,
jac_exprs=jac_exprs,
compiled_eval=compiled_eval,
)
bfgs_ms = (time.perf_counter() - t1) * 1000
if converged:
@@ -490,7 +796,7 @@ def _monolithic_solve(
log.warning("_monolithic_solve: BFGS also failed (%.1f ms)", bfgs_ms)
else:
log.debug("_monolithic_solve: Newton-Raphson converged (%.1f ms)", nr_ms)
-    return converged
+    return converged, jac_exprs
def _build_constraint(
@@ -514,6 +820,16 @@ def _build_constraint(
if kind == kcsolve.BaseJointKind.DistancePointPoint:
distance = c_params[0] if c_params else 0.0
if distance == 0.0:
# Distance=0 is point coincidence. Use CoincidentConstraint
# (3 linear residuals) instead of the squared form which has
# a degenerate Jacobian when the constraint is satisfied.
return CoincidentConstraint(
body_i,
marker_i_pos,
body_j,
marker_j_pos,
)
return DistancePointPointConstraint(
body_i,
marker_i_pos,

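The reason distance == 0 needs its own constraint type is visible in the gradient. A minimal sketch: the squared-distance residual r = |p - q|^2 has gradient 2*(p - q) with respect to p, which vanishes exactly at the solution p == q, zeroing that Jacobian row for Newton.

```python
def grad_sq_dist(p, q):
    """Gradient of |p - q|^2 with respect to p."""
    return tuple(2.0 * (pi - qi) for pi, qi in zip(p, q))

g = grad_sq_dist((1.0, 2.0, 3.0), (1.0, 2.0, 3.0))
# g == (0.0, 0.0, 0.0): the Jacobian row is rank-deficient at the solution.
# The three linear residuals (px-qx, py-qy, pz-qz) have constant unit
# gradients instead, so the Jacobian stays full rank there.
```

This is the standard argument for preferring linear coincidence residuals over a single squared-distance residual when the target distance is zero.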
View File

@@ -0,0 +1,720 @@
"""
Console test: Cylindrical + Planar drag — reproduces #338.
Paste into the FreeCAD Python console (or run via exec(open(...).read())).
This test builds scenarios that trigger the quaternion hemisphere flip
during drag that causes the C++ validateNewPlacements() to reject every
step with "flipped orientation (360.0 degrees)".
Key insight: the C++ Rotation::evaluateVector() computes
_angle = 2 * acos(w)
using the RAW w component (not |w|). When the solver returns a
quaternion in the opposite hemisphere (w < 0), the relative rotation
relativeRot = newRot * oldRot.inverse()
has w ≈ -1, giving angle ≈ 2*acos(-1) = 2*pi = 360 degrees.
The 91-degree threshold then rejects it.
The solver's _enforce_quat_continuity SHOULD fix this, but it skips
dragged parts. For the non-dragged bar, the fix only works if the
pre_step_quats baseline is correct. This test reproduces the failure
by using realistic non-identity geometry.
"""
import math
import kcsolve
_results = []
def _report(name, passed, detail=""):
status = "PASS" if passed else "FAIL"
msg = f" [{status}] {name}"
if detail:
msg += f" -- {detail}"
print(msg)
_results.append((name, passed))
# ── Quaternion math ──────────────────────────────────────────────────
def _qmul(a, b):
"""Hamilton product (w, x, y, z)."""
aw, ax, ay, az = a
bw, bx, by, bz = b
return (
aw * bw - ax * bx - ay * by - az * bz,
aw * bx + ax * bw + ay * bz - az * by,
aw * by - ax * bz + ay * bw + az * bx,
aw * bz + ax * by - ay * bx + az * bw,
)
def _qconj(q):
"""Conjugate (= inverse for unit quaternion)."""
return (q[0], -q[1], -q[2], -q[3])
def _qnorm(q):
"""Normalize quaternion."""
n = math.sqrt(sum(c * c for c in q))
return tuple(c / n for c in q) if n > 1e-15 else q
def _axis_angle_quat(axis, angle_rad):
"""Quaternion (w, x, y, z) for rotation about a normalized axis."""
ax, ay, az = axis
n = math.sqrt(ax * ax + ay * ay + az * az)
if n < 1e-15:
return (1.0, 0.0, 0.0, 0.0)
ax, ay, az = ax / n, ay / n, az / n
s = math.sin(angle_rad / 2.0)
return (math.cos(angle_rad / 2.0), ax * s, ay * s, az * s)
def _rotation_angle_cpp(q_old, q_new):
"""Rotation angle (degrees) matching C++ validateNewPlacements().
C++ pipeline:
Rotation(x, y, z, w) — stores quat as (x, y, z, w)
evaluateVector(): _angle = 2 * acos(quat[3]) // quat[3] = w
getRawValue(axis, angle) returns _angle
CRITICAL: C++ uses acos(w), NOT acos(|w|).
When w < 0 (opposite hemisphere), acos(w) > pi/2, so angle > pi.
For w ≈ -1 (identity rotation in wrong hemisphere): angle ≈ 2*pi = 360 deg.
Note: the relative rotation quaternion is constructed via
relativeRot = newRot * oldRot.inverse()
which goes through Rotation::operator*() and normalize()+evaluateVector().
"""
q_rel = _qmul(q_new, _qconj(q_old))
q_rel = _qnorm(q_rel)
# q_rel is in (w, x, y, z) order. FreeCAD stores (x, y, z, w), so
# when it constructs a Rotation from (q0=x, q1=y, q2=z, q3=w),
# evaluateVector() reads quat[3] = w.
w = q_rel[0]
w = max(-1.0, min(1.0, w))
# C++ evaluateVector: checks (quat[3] > -1.0) && (quat[3] < 1.0)
# Exact ±1 hits else-branch → angle = 0. In practice the multiply
# + normalize produces values like -0.99999... which still enters
# the acos branch. We replicate C++ exactly here.
if w > -1.0 and w < 1.0:
angle_rad = math.acos(w) * 2.0
else:
angle_rad = 0.0
return math.degrees(angle_rad)
def _rotation_angle_abs(q_old, q_new):
"""Angle using |w| — what a CORRECT validator would use."""
q_rel = _qmul(q_new, _qconj(q_old))
q_rel = _qnorm(q_rel)
w = abs(q_rel[0])
w = min(1.0, w)
return math.degrees(2.0 * math.acos(w))
# ── Context builders ─────────────────────────────────────────────────
def _build_ctx(
ground_pos,
ground_quat,
bar_pos,
bar_quat,
cyl_marker_i_quat,
cyl_marker_j_quat,
cyl_marker_i_pos=(0, 0, 0),
cyl_marker_j_pos=(0, 0, 0),
planar_marker_i_quat=None,
planar_marker_j_quat=None,
planar_marker_i_pos=(0, 0, 0),
planar_marker_j_pos=(0, 0, 0),
planar_offset=0.0,
):
"""Build SolveContext with ground + bar, Cylindrical + Planar joints.
Uses explicit marker quaternions instead of identity, so the
constraint geometry matches realistic assemblies.
"""
if planar_marker_i_quat is None:
planar_marker_i_quat = cyl_marker_i_quat
if planar_marker_j_quat is None:
planar_marker_j_quat = cyl_marker_j_quat
ground = kcsolve.Part()
ground.id = "ground"
ground.placement = kcsolve.Transform()
ground.placement.position = list(ground_pos)
ground.placement.quaternion = list(ground_quat)
ground.grounded = True
bar = kcsolve.Part()
bar.id = "bar"
bar.placement = kcsolve.Transform()
bar.placement.position = list(bar_pos)
bar.placement.quaternion = list(bar_quat)
bar.grounded = False
cyl = kcsolve.Constraint()
cyl.id = "cylindrical"
cyl.part_i = "ground"
cyl.marker_i = kcsolve.Transform()
cyl.marker_i.position = list(cyl_marker_i_pos)
cyl.marker_i.quaternion = list(cyl_marker_i_quat)
cyl.part_j = "bar"
cyl.marker_j = kcsolve.Transform()
cyl.marker_j.position = list(cyl_marker_j_pos)
cyl.marker_j.quaternion = list(cyl_marker_j_quat)
cyl.type = kcsolve.BaseJointKind.Cylindrical
planar = kcsolve.Constraint()
planar.id = "planar"
planar.part_i = "ground"
planar.marker_i = kcsolve.Transform()
planar.marker_i.position = list(planar_marker_i_pos)
planar.marker_i.quaternion = list(planar_marker_i_quat)
planar.part_j = "bar"
planar.marker_j = kcsolve.Transform()
planar.marker_j.position = list(planar_marker_j_pos)
planar.marker_j.quaternion = list(planar_marker_j_quat)
planar.type = kcsolve.BaseJointKind.Planar
planar.params = [planar_offset]
ctx = kcsolve.SolveContext()
ctx.parts = [ground, bar]
ctx.constraints = [cyl, planar]
return ctx
# ── Test 1: Validator function correctness ───────────────────────────
def test_validator_function():
"""Verify our _rotation_angle_cpp matches C++ behavior for hemisphere flips."""
print("\n--- Test 1: Validator function correctness ---")
# Same quaternion → 0 degrees
q = (1.0, 0.0, 0.0, 0.0)
angle = _rotation_angle_cpp(q, q)
_report("same quat → 0 deg", abs(angle) < 0.1, f"{angle:.1f}")
# Near-negated quaternion (tiny perturbation from exact -1 to avoid
# the C++ boundary condition where |w| == 1 → angle = 0).
# In practice the solver never returns EXACTLY -1; it returns
# -0.999999... which enters the acos() branch and gives ~360 deg.
q_near_neg = _qnorm((-0.99999, 0.00001, 0.0, 0.0))
angle = _rotation_angle_cpp(q, q_near_neg)
_report("near-negated quat → ~360 deg", angle > 350.0, f"{angle:.1f}")
# Same test with |w| correction → should be ~0
angle_abs = _rotation_angle_abs(q, q_near_neg)
_report("|w|-corrected → ~0 deg", angle_abs < 1.0, f"{angle_abs:.1f}")
# Non-trivial quaternion vs near-negated version (realistic float noise)
q2 = _qnorm((-0.5181, -0.5181, 0.4812, -0.4812))
# Simulate what happens: solver returns same rotation in opposite hemisphere
q2_neg = _qnorm(tuple(-c + 1e-8 for c in q2))
angle = _rotation_angle_cpp(q2, q2_neg)
_report("real quat near-negated → >180 deg", angle > 180.0, f"{angle:.1f}")
# 10-degree rotation — should be fine
q_small = _axis_angle_quat((0, 0, 1), math.radians(10))
angle = _rotation_angle_cpp(q, q_small)
_report("10 deg rotation → ~10 deg", abs(angle - 10.0) < 1.0, f"{angle:.1f}")
# ── Test 2: Synthetic drag with realistic geometry ───────────────────
def test_drag_realistic():
"""Drag with non-identity markers and non-trivial bar orientation.
This reproduces the real assembly geometry:
- Cylindrical axis is along a diagonal (not global Z)
- Bar starts at a complex orientation far from identity
- Drag includes axial perturbation (rotation about constraint axis)
The solver must re-converge the bar's orientation on each step.
If it lands on the -q hemisphere, the C++ validator rejects.
"""
print("\n--- Test 2: Realistic drag with non-identity geometry ---")
solver = kcsolve.load("kindred")
# Marker quaternion: rotates local Z to point along (1,1,0)/sqrt(2)
# This means the cylindrical axis is diagonal in the XY plane
marker_q = _qnorm(_axis_angle_quat((0, 1, 0), math.radians(45)))
# Bar starts at a complex orientation (from real assembly data)
# This is close to the actual q=(-0.5181, -0.5181, 0.4812, -0.4812)
bar_quat_init = _qnorm((-0.5181, -0.5181, 0.4812, -0.4812))
# Ground at a non-trivial orientation too (real assembly had q=(0.707,0,0,0.707))
ground_quat = _qnorm((0.7071, 0.0, 0.0, 0.7071))
# Positions far from origin (like real assembly)
ground_pos = (100.0, 0.0, 0.0)
bar_pos = (500.0, -500.0, 0.0)
ctx = _build_ctx(
ground_pos=ground_pos,
ground_quat=ground_quat,
bar_pos=bar_pos,
bar_quat=bar_quat_init,
cyl_marker_i_quat=marker_q,
cyl_marker_j_quat=marker_q,
# Planar uses identity markers (XY plane constraint)
planar_marker_i_quat=(1, 0, 0, 0),
planar_marker_j_quat=(1, 0, 0, 0),
)
# ── Save baseline (simulates savePlacementsForUndo) ──
baseline_quat = bar_quat_init
# ── pre_drag ──
drag_result = solver.pre_drag(ctx, ["bar"])
_report(
"drag: pre_drag converged",
drag_result.status == kcsolve.SolveStatus.Success,
f"status={drag_result.status}",
)
# Check pre_drag result against baseline
for pr in drag_result.placements:
if pr.id == "bar":
solved_quat = tuple(pr.placement.quaternion)
angle_cpp = _rotation_angle_cpp(baseline_quat, solved_quat)
angle_abs = _rotation_angle_abs(baseline_quat, solved_quat)
ok = angle_cpp <= 91.0
_report(
"drag: pre_drag passes validator",
ok,
f"C++ angle={angle_cpp:.1f}, |w| angle={angle_abs:.1f}, "
f"q=({solved_quat[0]:+.4f},{solved_quat[1]:+.4f},"
f"{solved_quat[2]:+.4f},{solved_quat[3]:+.4f})",
)
if ok:
baseline_quat = solved_quat
# ── drag steps with axial perturbation ──
n_steps = 40
accepted = 0
rejected = 0
first_reject_step = None
for step in range(1, n_steps + 1):
# Drag the bar along the cylindrical axis with ROTATION perturbation
# Each step: translate along the axis + rotate about it
t = step / n_steps
angle_about_axis = math.radians(step * 15.0) # 15 deg/step, goes past 360
# The cylindrical axis direction (marker Z in ground frame)
# For our 45-deg-rotated marker: axis ≈ (sin45, 0, cos45) in ground-local
# But ground is also rotated. Let's just move along a diagonal.
slide = step * 5.0
drag_pos = [
bar_pos[0] + slide * 0.707,
bar_pos[1] + slide * 0.707,
bar_pos[2],
]
# Build the dragged orientation: start from bar_quat_init,
# apply rotation about the constraint axis
axis_rot = _axis_angle_quat((0.707, 0.707, 0), angle_about_axis)
drag_quat = list(_qnorm(_qmul(axis_rot, bar_quat_init)))
pr = kcsolve.SolveResult.PartResult()
pr.id = "bar"
pr.placement = kcsolve.Transform()
pr.placement.position = drag_pos
pr.placement.quaternion = drag_quat
result = solver.drag_step([pr])
converged = result.status == kcsolve.SolveStatus.Success
bar_quat_out = None
for rpr in result.placements:
if rpr.id == "bar":
bar_quat_out = tuple(rpr.placement.quaternion)
break
if bar_quat_out is None:
_report(f"step {step:2d}", False, "bar not in result")
rejected += 1
continue
# ── Simulate validateNewPlacements() ──
angle_cpp = _rotation_angle_cpp(baseline_quat, bar_quat_out)
angle_abs = _rotation_angle_abs(baseline_quat, bar_quat_out)
validator_ok = angle_cpp <= 91.0
if validator_ok:
baseline_quat = bar_quat_out
accepted += 1
else:
rejected += 1
if first_reject_step is None:
first_reject_step = step
_report(
f"step {step:2d} ({step * 15:3d} deg)",
validator_ok and converged,
f"C++={angle_cpp:.1f} |w|={angle_abs:.1f} "
f"{'ACCEPT' if validator_ok else 'REJECT'} "
f"q=({bar_quat_out[0]:+.4f},{bar_quat_out[1]:+.4f},"
f"{bar_quat_out[2]:+.4f},{bar_quat_out[3]:+.4f})",
)
solver.post_drag()
print(f"\n Summary: accepted={accepted}/{n_steps}, rejected={rejected}/{n_steps}")
_report(
"drag: all steps accepted by C++ validator",
rejected == 0,
f"{rejected} rejected"
+ (f", first at step {first_reject_step}" if first_reject_step else ""),
)
# ── Test 3: Drag with NEGATED initial bar quaternion ─────────────────
def test_drag_negated_init():
"""Start the bar at -q (same rotation, opposite hemisphere from solver
convention) to maximize the chance of hemisphere mismatch.
The C++ side saves the FreeCAD object's current Placement.Rotation
as the baseline. If FreeCAD stores q but the solver internally
prefers -q, the very first solve output can differ in hemisphere.
"""
print("\n--- Test 3: Drag with negated initial quaternion ---")
solver = kcsolve.load("kindred")
# A non-trivial orientation with w < 0
# This is a valid unit quaternion representing a real rotation
bar_quat_neg = _qnorm((-0.5, -0.5, 0.5, -0.5)) # w < 0
# The same rotation in the positive hemisphere
bar_quat_pos = tuple(-c for c in bar_quat_neg) # w > 0
# Identity markers (simplify to isolate the hemisphere issue)
ident = (1.0, 0.0, 0.0, 0.0)
ctx = _build_ctx(
ground_pos=(0, 0, 0),
ground_quat=ident,
bar_pos=(10, 0, 0),
bar_quat=bar_quat_neg, # Start in NEGATIVE hemisphere
cyl_marker_i_quat=ident,
cyl_marker_j_quat=ident,
)
# C++ baseline is saved BEFORE pre_drag — so it uses the w<0 form
baseline_quat = bar_quat_neg
# pre_drag: solver may normalize to positive hemisphere internally
drag_result = solver.pre_drag(ctx, ["bar"])
_report(
"negated: pre_drag converged",
drag_result.status == kcsolve.SolveStatus.Success,
)
for pr in drag_result.placements:
if pr.id == "bar":
solved = tuple(pr.placement.quaternion)
# Did the solver flip to positive hemisphere?
dot = sum(a * b for a, b in zip(baseline_quat, solved))
angle_cpp = _rotation_angle_cpp(baseline_quat, solved)
hemisphere_match = dot >= 0
_report(
"negated: pre_drag hemisphere match",
hemisphere_match,
f"dot={dot:+.4f}, C++ angle={angle_cpp:.1f} deg, "
f"baseline w={baseline_quat[0]:+.4f}, "
f"solved w={solved[0]:+.4f}",
)
validator_ok = angle_cpp <= 91.0
_report(
"negated: pre_drag passes C++ validator",
validator_ok,
f"angle={angle_cpp:.1f} deg (threshold=91)",
)
if validator_ok:
baseline_quat = solved
# Drag steps with small perturbation
n_steps = 20
accepted = 0
rejected = 0
first_reject = None
for step in range(1, n_steps + 1):
angle_rad = math.radians(step * 18.0)
R = 10.0
drag_pos = [R * math.cos(angle_rad), R * math.sin(angle_rad), 0.0]
# Apply the drag rotation in the NEGATIVE hemisphere to match
# how FreeCAD would track the mouse-projected placement
z_rot = _axis_angle_quat((0, 0, 1), angle_rad)
drag_quat = list(_qnorm(_qmul(z_rot, bar_quat_neg)))
pr = kcsolve.SolveResult.PartResult()
pr.id = "bar"
pr.placement = kcsolve.Transform()
pr.placement.position = drag_pos
pr.placement.quaternion = drag_quat
result = solver.drag_step([pr])
for rpr in result.placements:
if rpr.id == "bar":
out_q = tuple(rpr.placement.quaternion)
angle_cpp = _rotation_angle_cpp(baseline_quat, out_q)
ok = angle_cpp <= 91.0
if ok:
baseline_quat = out_q
accepted += 1
else:
rejected += 1
if first_reject is None:
first_reject = step
_report(
f"neg step {step:2d} ({step * 18:3d} deg)",
ok,
f"C++={angle_cpp:.1f} "
f"q=({out_q[0]:+.4f},{out_q[1]:+.4f},"
f"{out_q[2]:+.4f},{out_q[3]:+.4f})",
)
break
solver.post_drag()
print(f"\n Summary: accepted={accepted}/{n_steps}, rejected={rejected}/{n_steps}")
_report(
"negated: all steps accepted",
rejected == 0,
f"{rejected} rejected"
+ (f", first at step {first_reject}" if first_reject else ""),
)
# ── Test 4: Live assembly if available ───────────────────────────────
def test_live_assembly():
"""If a FreeCAD assembly is open, extract its actual geometry and run
the drag simulation with real markers and placements."""
print("\n--- Test 4: Live assembly introspection ---")
try:
import FreeCAD as App
except ImportError:
_report("live: FreeCAD available", False, "not running inside FreeCAD")
return
doc = App.ActiveDocument
if doc is None:
_report("live: document open", False, "no active document")
return
asm = None
for obj in doc.Objects:
if obj.TypeId == "Assembly::AssemblyObject":
asm = obj
break
if asm is None:
_report("live: assembly found", False, "no Assembly object in document")
return
_report("live: assembly found", True, f"'{asm.Name}'")
# Introspect parts
parts = []
joints = []
grounded = []
for obj in asm.Group:
if hasattr(obj, "TypeId"):
if obj.TypeId == "Assembly::JointGroup":
for jobj in obj.Group:
if hasattr(jobj, "Proxy"):
joints.append(jobj)
elif hasattr(obj, "Placement"):
parts.append(obj)
for jobj in joints:
proxy = getattr(jobj, "Proxy", None)
if proxy and type(proxy).__name__ == "GroundedJoint":
ref = getattr(jobj, "ObjectToGround", None)
if ref:
grounded.append(ref.Name)
print(f" Parts: {len(parts)}, Joints: {len(joints)}, Grounded: {grounded}")
# Print each part's placement
for p in parts:
plc = p.Placement
rot = plc.Rotation
q = rot.Q # FreeCAD (x, y, z, w)
q_wxyz = (q[3], q[0], q[1], q[2])
pos = plc.Base
is_gnd = p.Name in grounded
print(
f" {p.Label:40s} pos=({pos.x:.1f}, {pos.y:.1f}, {pos.z:.1f}) "
f"q(wxyz)=({q_wxyz[0]:.4f}, {q_wxyz[1]:.4f}, "
f"{q_wxyz[2]:.4f}, {q_wxyz[3]:.4f}) "
f"{'[GROUNDED]' if is_gnd else ''}"
)
# Print joint details
for jobj in joints:
proxy = getattr(jobj, "Proxy", None)
ptype = type(proxy).__name__ if proxy else "unknown"
kind = getattr(jobj, "JointType", "?")
print(f" Joint: {jobj.Label} type={ptype} kind={kind}")
# Check: does any non-grounded part have w < 0 in its current quaternion?
# That alone would cause the validator to reject on the first solve.
for p in parts:
if p.Name in grounded:
continue
q = p.Placement.Rotation.Q # (x, y, z, w)
w = q[3]
if w < 0:
print(
f"\n ** {p.Label} has w={w:.4f} < 0 in current placement! **"
f"\n If the solver returns w>0, the C++ validator sees ~360 deg flip."
)
_report("live: assembly introspected", True)
# ── Test 5: Direct hemisphere flip reproduction ──────────────────────
def test_hemisphere_flip_direct():
"""Directly reproduce the hemisphere flip by feeding the solver
a dragged placement where the quaternion is in the opposite
hemisphere from what pre_drag returned.
This simulates what happens when:
1. FreeCAD stores Placement with q = (w<0, x, y, z) form
2. Solver normalizes to w>0 during pre_drag
3. Next drag_step gets mouse placement in the w<0 form
4. Solver output may flip back to w<0
"""
print("\n--- Test 5: Direct hemisphere flip ---")
solver = kcsolve.load("kindred")
# Use a quaternion representing 90-deg rotation about Z
# In positive hemisphere: (cos45, 0, 0, sin45) = (0.707, 0, 0, 0.707)
# In negative hemisphere: (-0.707, 0, 0, -0.707)
q_pos = _axis_angle_quat((0, 0, 1), math.radians(90))
q_neg = tuple(-c for c in q_pos)
ident = (1.0, 0.0, 0.0, 0.0)
# Build context with positive-hemisphere quaternion
ctx = _build_ctx(
ground_pos=(0, 0, 0),
ground_quat=ident,
bar_pos=(10, 0, 0),
bar_quat=q_pos,
cyl_marker_i_quat=ident,
cyl_marker_j_quat=ident,
)
# C++ baseline saves q_pos
baseline_quat = q_pos
result = solver.pre_drag(ctx, ["bar"])
_report("flip: pre_drag converged", result.status == kcsolve.SolveStatus.Success)
for pr in result.placements:
if pr.id == "bar":
baseline_quat = tuple(pr.placement.quaternion)
print(
f" pre_drag baseline: ({baseline_quat[0]:+.4f},"
f"{baseline_quat[1]:+.4f},{baseline_quat[2]:+.4f},"
f"{baseline_quat[3]:+.4f})"
)
# Now feed drag steps where we alternate hemispheres in the dragged
# placement to see if the solver output flips
test_drags = [
("same hemisphere", q_pos),
("opposite hemisphere", q_neg),
("back to same", q_pos),
("opposite again", q_neg),
("large rotation pos", _axis_angle_quat((0, 0, 1), math.radians(170))),
(
"large rotation neg",
tuple(-c for c in _axis_angle_quat((0, 0, 1), math.radians(170))),
),
]
for name, drag_q in test_drags:
pr = kcsolve.SolveResult.PartResult()
pr.id = "bar"
pr.placement = kcsolve.Transform()
pr.placement.position = [10.0, 0.0, 0.0]
pr.placement.quaternion = list(drag_q)
result = solver.drag_step([pr])
for rpr in result.placements:
if rpr.id == "bar":
out_q = tuple(rpr.placement.quaternion)
angle_cpp = _rotation_angle_cpp(baseline_quat, out_q)
angle_abs = _rotation_angle_abs(baseline_quat, out_q)
ok = angle_cpp <= 91.0
_report(
f"flip: {name}",
ok,
f"C++={angle_cpp:.1f} |w|={angle_abs:.1f} "
f"in_w={drag_q[0]:+.4f} out_w={out_q[0]:+.4f}",
)
if ok:
baseline_quat = out_q
break
solver.post_drag()
# ── Run all ──────────────────────────────────────────────────────────
def run_all():
print("\n" + "=" * 70)
print(" Console Test: Planar + Cylindrical Drag (#338 / #339)")
print(" Realistic geometry + C++ validator simulation")
print("=" * 70)
test_validator_function()
test_drag_realistic()
test_drag_negated_init()
test_live_assembly()
test_hemisphere_flip_direct()
# Summary
passed = sum(1 for _, p in _results if p)
total = len(_results)
print(f"\n{'=' * 70}")
print(f" {passed}/{total} passed")
if passed < total:
failed = [n for n, p in _results if not p]
print(f" FAILED ({len(failed)}):")
for f in failed:
print(f" - {f}")
print("=" * 70 + "\n")
run_all()
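The two validator models in the test above (raw-w vs |w|) can be exercised outside the test harness. This standalone sketch (helper names are illustrative, not part of kcsolve) shows why a hemisphere-flipped but geometrically near-identical quaternion trips a 91-degree threshold under the raw-w formula while the |w| form reports the true rotation:

```python
import math

def qmul(a, b):
    # Hamilton product, (w, x, y, z) order
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def qconj(q):
    # Conjugate = inverse for a unit quaternion
    return (q[0], -q[1], -q[2], -q[3])

def rel_angle_deg(q_old, q_new, use_abs):
    # Angle of the relative rotation q_new * q_old^-1.
    # use_abs=False mimics the C++ 2*acos(w); use_abs=True is the corrected form.
    w = qmul(q_new, qconj(q_old))[0]
    if use_abs:
        w = abs(w)
    w = max(-1.0, min(1.0, w))
    return math.degrees(2.0 * math.acos(w))

q_old = (1.0, 0.0, 0.0, 0.0)
# Nearly the same rotation, but in the opposite hemisphere (approx. unit norm)
q_flipped = (-0.99999, 0.00447, 0.0, 0.0)

raw = rel_angle_deg(q_old, q_flipped, use_abs=False)   # ~359 deg: rejected by a 91-deg threshold
fixed = rel_angle_deg(q_old, q_flipped, use_abs=True)  # ~0.5 deg: accepted
```

The same pair of quaternions thus reads as either a near-identity step or a full flip depending solely on whether the validator takes |w| before acos.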

tests/test_codegen.py

@@ -0,0 +1,357 @@
"""Tests for the codegen module — CSE, compilation, and compiled evaluation."""
import math
import numpy as np
import pytest
from kindred_solver.codegen import (
_build_cse,
_find_nonzero_entries,
compile_system,
try_compile_system,
)
from kindred_solver.expr import (
ZERO,
Add,
Const,
Cos,
Div,
Mul,
Neg,
Pow,
Sin,
Sqrt,
Sub,
Var,
)
from kindred_solver.newton import newton_solve
from kindred_solver.params import ParamTable
# ---------------------------------------------------------------------------
# to_code() — round-trip correctness for each Expr type
# ---------------------------------------------------------------------------
class TestToCode:
"""Verify that eval(expr.to_code()) == expr.eval(env) for each node."""
NS = {"_sin": math.sin, "_cos": math.cos, "_sqrt": math.sqrt}
def _check(self, expr, env):
code = expr.to_code()
ns = dict(self.NS)
ns["env"] = env
compiled = eval(code, ns)
expected = expr.eval(env)
assert abs(compiled - expected) < 1e-15, (
f"{code} = {compiled}, expected {expected}"
)
def test_const(self):
self._check(Const(3.14), {})
def test_const_negative(self):
self._check(Const(-2.5), {})
def test_const_zero(self):
self._check(Const(0.0), {})
def test_var(self):
self._check(Var("x"), {"x": 7.0})
def test_neg(self):
self._check(Neg(Var("x")), {"x": 3.0})
def test_add(self):
self._check(Add(Var("x"), Const(2.0)), {"x": 5.0})
def test_sub(self):
self._check(Sub(Var("x"), Var("y")), {"x": 5.0, "y": 3.0})
def test_mul(self):
self._check(Mul(Var("x"), Const(3.0)), {"x": 4.0})
def test_div(self):
self._check(Div(Var("x"), Const(2.0)), {"x": 6.0})
def test_pow(self):
self._check(Pow(Var("x"), Const(3.0)), {"x": 2.0})
def test_sin(self):
self._check(Sin(Var("x")), {"x": 1.0})
def test_cos(self):
self._check(Cos(Var("x")), {"x": 1.0})
def test_sqrt(self):
self._check(Sqrt(Var("x")), {"x": 9.0})
def test_nested(self):
"""Complex nested expression."""
x, y = Var("x"), Var("y")
expr = Add(Mul(Sin(x), Cos(y)), Sqrt(Sub(x, Neg(y))))
self._check(expr, {"x": 2.0, "y": 1.0})
# ---------------------------------------------------------------------------
# CSE
# ---------------------------------------------------------------------------
class TestCSE:
def test_no_sharing(self):
"""Distinct expressions produce no CSE temps."""
a = Var("x") + Const(1.0)
b = Var("y") + Const(2.0)
id_to_temp, temps = _build_cse([a, b])
assert len(temps) == 0
def test_shared_subtree(self):
"""Same node object used in two places is extracted."""
x = Var("x")
shared = x * Const(2.0) # single Mul node
a = shared + Const(1.0)
b = shared + Const(3.0)
id_to_temp, temps = _build_cse([a, b])
assert len(temps) >= 1
# The shared Mul node should be a temp
assert id(shared) in id_to_temp
def test_leaf_nodes_not_extracted(self):
"""Const and Var nodes are never extracted as temps."""
x = Var("x")
c = Const(5.0)
a = x + c
b = x + c
id_to_temp, temps = _build_cse([a, b])
for _, expr in temps:
assert not isinstance(expr, (Const, Var))
def test_dependency_order(self):
"""Temps are in dependency order (dependencies first)."""
x = Var("x")
inner = x * Const(2.0)
outer = inner + inner # uses inner twice
wrapper_a = outer * Const(3.0)
wrapper_b = outer * Const(4.0)
id_to_temp, temps = _build_cse([wrapper_a, wrapper_b])
# If both inner and outer are temps, inner must come first
temp_names = [name for name, _ in temps]
temp_ids = [id(expr) for _, expr in temps]
if id(inner) in set(id_to_temp) and id(outer) in set(id_to_temp):
inner_idx = temp_ids.index(id(inner))
outer_idx = temp_ids.index(id(outer))
assert inner_idx < outer_idx
# ---------------------------------------------------------------------------
# Sparsity detection
# ---------------------------------------------------------------------------
class TestSparsity:
def test_zero_entries_skipped(self):
nz = _find_nonzero_entries(
[
[Const(0.0), Var("x"), Const(0.0)],
[Const(1.0), Const(0.0), Var("y")],
]
)
assert nz == [(0, 1), (1, 0), (1, 2)]
def test_all_nonzero(self):
nz = _find_nonzero_entries(
[
[Var("x"), Const(1.0)],
]
)
assert nz == [(0, 0), (0, 1)]
def test_all_zero(self):
nz = _find_nonzero_entries(
[
[Const(0.0), Const(0.0)],
]
)
assert nz == []
# ---------------------------------------------------------------------------
# Full compilation pipeline
# ---------------------------------------------------------------------------
class TestCompileSystem:
def test_simple_linear(self):
"""Compile and evaluate a trivial system: r = x - 3, J = [[1]]."""
x = Var("x")
residuals = [x - Const(3.0)]
jac_exprs = [[Const(1.0)]] # d(x-3)/dx = 1
fn = compile_system(residuals, jac_exprs, 1, 1)
env = {"x": 5.0}
r_vec = np.empty(1)
J = np.zeros((1, 1))
fn(env, r_vec, J)
assert abs(r_vec[0] - 2.0) < 1e-15 # 5 - 3 = 2
assert abs(J[0, 0] - 1.0) < 1e-15
def test_two_variable_system(self):
"""Compile: r0 = x + y - 5, r1 = x - y - 1."""
x, y = Var("x"), Var("y")
residuals = [x + y - Const(5.0), x - y - Const(1.0)]
jac_exprs = [
[Const(1.0), Const(1.0)], # d(r0)/dx, d(r0)/dy
[Const(1.0), Const(-1.0)], # d(r1)/dx, d(r1)/dy
]
fn = compile_system(residuals, jac_exprs, 2, 2)
env = {"x": 3.0, "y": 2.0}
r_vec = np.empty(2)
J = np.zeros((2, 2))
fn(env, r_vec, J)
assert abs(r_vec[0] - 0.0) < 1e-15
assert abs(r_vec[1] - 0.0) < 1e-15
assert abs(J[0, 0] - 1.0) < 1e-15
assert abs(J[0, 1] - 1.0) < 1e-15
assert abs(J[1, 0] - 1.0) < 1e-15
assert abs(J[1, 1] - (-1.0)) < 1e-15
def test_sparse_jacobian(self):
"""Zero Jacobian entries remain zero after compiled evaluation."""
x = Var("x")
y = Var("y")
# r0 depends on x only, r1 depends on y only
residuals = [x - Const(1.0), y - Const(2.0)]
jac_exprs = [
[Const(1.0), Const(0.0)],
[Const(0.0), Const(1.0)],
]
fn = compile_system(residuals, jac_exprs, 2, 2)
env = {"x": 3.0, "y": 4.0}
r_vec = np.empty(2)
J = np.zeros((2, 2))
fn(env, r_vec, J)
assert abs(J[0, 1]) < 1e-15 # should remain zero
assert abs(J[1, 0]) < 1e-15 # should remain zero
assert abs(J[0, 0] - 1.0) < 1e-15
assert abs(J[1, 1] - 1.0) < 1e-15
def test_trig_functions(self):
"""Compiled evaluation handles Sin/Cos/Sqrt."""
x = Var("x")
residuals = [Sin(x), Cos(x), Sqrt(x)]
jac_exprs = [
[Cos(x)],
[Neg(Sin(x))],
[Div(Const(1.0), Mul(Const(2.0), Sqrt(x)))],
]
fn = compile_system(residuals, jac_exprs, 3, 1)
env = {"x": 1.0}
r_vec = np.empty(3)
J = np.zeros((3, 1))
fn(env, r_vec, J)
assert abs(r_vec[0] - math.sin(1.0)) < 1e-15
assert abs(r_vec[1] - math.cos(1.0)) < 1e-15
assert abs(r_vec[2] - math.sqrt(1.0)) < 1e-15
assert abs(J[0, 0] - math.cos(1.0)) < 1e-15
assert abs(J[1, 0] - (-math.sin(1.0))) < 1e-15
assert abs(J[2, 0] - (1.0 / (2.0 * math.sqrt(1.0)))) < 1e-15
def test_matches_tree_walk(self):
"""Compiled eval produces identical results to tree-walk eval."""
pt = ParamTable()
x = pt.add("x", 2.0)
y = pt.add("y", 3.0)
residuals = [x * y - Const(6.0), x * x + y - Const(7.0)]
free = pt.free_names()
jac_exprs = [[r.diff(name).simplify() for name in free] for r in residuals]
fn = compile_system(residuals, jac_exprs, 2, 2)
# Tree-walk eval
env = pt.get_env()
r_tree = np.array([r.eval(env) for r in residuals])
J_tree = np.empty((2, 2))
for i in range(2):
for j in range(2):
J_tree[i, j] = jac_exprs[i][j].eval(env)
# Compiled eval
r_comp = np.empty(2)
J_comp = np.zeros((2, 2))
fn(pt.env_ref(), r_comp, J_comp)
np.testing.assert_allclose(r_comp, r_tree, atol=1e-15)
np.testing.assert_allclose(J_comp, J_tree, atol=1e-15)
class TestTryCompile:
def test_returns_callable(self):
x = Var("x")
fn = try_compile_system([x], [[Const(1.0)]], 1, 1)
assert fn is not None
def test_empty_system(self):
"""An empty system may return None (nothing to compile)."""
fn = try_compile_system([], [], 0, 0)
# Empty system is handled by the solver before codegen is reached,
# so returning None is acceptable.
assert fn is None or callable(fn)
# ---------------------------------------------------------------------------
# Integration: Newton with compiled eval matches tree-walk
# ---------------------------------------------------------------------------
class TestCompiledNewton:
def test_single_linear(self):
"""Solve x - 3 = 0 with compiled eval."""
pt = ParamTable()
x = pt.add("x", 0.0)
residuals = [x - Const(3.0)]
assert newton_solve(residuals, pt) is True
assert abs(pt.get_value("x") - 3.0) < 1e-10
def test_two_variables(self):
"""Solve x + y = 5, x - y = 1 with compiled eval."""
pt = ParamTable()
x = pt.add("x", 0.0)
y = pt.add("y", 0.0)
residuals = [x + y - Const(5.0), x - y - Const(1.0)]
assert newton_solve(residuals, pt) is True
assert abs(pt.get_value("x") - 3.0) < 1e-10
assert abs(pt.get_value("y") - 2.0) < 1e-10
def test_quadratic(self):
"""Solve x^2 - 4 = 0 starting from x=1."""
pt = ParamTable()
x = pt.add("x", 1.0)
residuals = [x * x - Const(4.0)]
assert newton_solve(residuals, pt) is True
assert abs(pt.get_value("x") - 2.0) < 1e-10
def test_nonlinear_system(self):
"""Compiled eval converges for a nonlinear system: xy=6, x+y=5."""
pt = ParamTable()
x = pt.add("x", 2.0)
y = pt.add("y", 3.5)
residuals = [x * y - Const(6.0), x + y - Const(5.0)]
assert newton_solve(residuals, pt, max_iter=100) is True
# Solutions are (2, 3) or (3, 2) — check they satisfy both equations
xv, yv = pt.get_value("x"), pt.get_value("y")
assert abs(xv * yv - 6.0) < 1e-10
assert abs(xv + yv - 5.0) < 1e-10
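The TestCompiledNewton cases above solve tiny systems such as x^2 - 4 = 0 through newton_solve. The iteration they rely on can be sketched independently of the library; the helper below is illustrative only and does not reflect kindred_solver internals:

```python
import math

def newton_1d(f, df, x0, tol=1e-12, max_iter=50):
    # Plain 1-D Newton iteration: x <- x - f(x)/f'(x)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Same problem as test_quadratic: x^2 - 4 = 0 starting from x = 1
root = newton_1d(lambda x: x * x - 4.0, lambda x: 2.0 * x, 1.0)  # converges to 2.0
```

Starting from x = 1 the iterates are 2.5, 2.05, 2.0006, ..., converging quadratically, which is why the library tests can assert a 1e-10 tolerance after a modest iteration cap.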


@@ -65,7 +65,7 @@ class TestPointOnLine:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = PointOnLineConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 2
+assert len(c.residuals()) == 3
class TestPointInPlane:
@@ -135,7 +135,7 @@ class TestParallel:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = ParallelConstraint(b1, ID_QUAT, b2, ID_QUAT)
-assert len(c.residuals()) == 2
+assert len(c.residuals()) == 3
class TestPerpendicular:
@@ -222,7 +222,7 @@ class TestConcentric:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = ConcentricConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 4
+assert len(c.residuals()) == 6
class TestTangent:
@@ -279,7 +279,7 @@ class TestPlanar:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = PlanarConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 3
+assert len(c.residuals()) == 4
class TestLineInPlane:
@@ -334,7 +334,7 @@ class TestRevolute:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = RevoluteConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 5
+assert len(c.residuals()) == 6
class TestCylindrical:
@@ -353,7 +353,7 @@ class TestCylindrical:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = CylindricalConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 4
+assert len(c.residuals()) == 6
class TestSlider:
@@ -375,15 +375,15 @@ class TestSlider:
c = SliderConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
env = pt.get_env()
vals = [r.eval(env) for r in c.residuals()]
-# First 4 should be ~0 (parallel + on-line), but twist residual should be ~1
-assert abs(vals[4]) > 0.5
+# First 6 should be ~0 (parallel + on-line), but twist residual should be ~1
+assert abs(vals[6]) > 0.5
def test_residual_count(self):
pt = ParamTable()
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = SliderConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT)
-assert len(c.residuals()) == 5
+assert len(c.residuals()) == 7
class TestUniversal:
@@ -411,7 +411,7 @@ class TestScrew:
b1 = RigidBody("a", pt, (0, 0, 0), (1, 0, 0, 0))
b2 = RigidBody("b", pt, (0, 0, 0), (1, 0, 0, 0))
c = ScrewConstraint(b1, (0, 0, 0), ID_QUAT, b2, (0, 0, 0), ID_QUAT, pitch=10.0)
-assert len(c.residuals()) == 5
+assert len(c.residuals()) == 7
def test_zero_displacement_zero_rotation(self):
"""Both at origin with identity rotation → all residuals 0."""

tests/test_drag.py

@@ -0,0 +1,253 @@
"""Regression tests for interactive drag.
These tests exercise the drag protocol at the solver-internals level,
verifying that constraints remain enforced across drag steps when the
pre-pass has been applied to cached residuals.
Bug scenario: single_equation_pass runs during pre_drag, analytically
solving variables from upstream constraints and baking their values as
constants into downstream residual expressions. When a drag step
changes those variables, the cached residuals use stale constants and
downstream constraints (e.g. Planar distance=0) stop being enforced.
Fix: skip single_equation_pass in the drag path. Only substitution_pass
(which replaces genuinely grounded parameters) is safe to cache.
"""
import math
import pytest
from kindred_solver.constraints import (
CoincidentConstraint,
PlanarConstraint,
RevoluteConstraint,
)
from kindred_solver.entities import RigidBody
from kindred_solver.newton import newton_solve
from kindred_solver.params import ParamTable
from kindred_solver.prepass import single_equation_pass, substitution_pass
ID_QUAT = (1, 0, 0, 0)
def _build_residuals(bodies, constraint_objs):
"""Build raw residual list + quat groups (no prepass)."""
all_residuals = []
for c in constraint_objs:
all_residuals.extend(c.residuals())
quat_groups = []
for body in bodies:
if not body.grounded:
all_residuals.append(body.quat_norm_residual())
quat_groups.append(body.quat_param_names())
return all_residuals, quat_groups
def _eval_raw_residuals(bodies, constraint_objs, params):
"""Evaluate original constraint residuals at current param values.
Returns the max absolute residual value — the ground truth for
whether constraints are satisfied regardless of prepass state.
"""
raw, _ = _build_residuals(bodies, constraint_objs)
env = params.get_env()
return max(abs(r.eval(env)) for r in raw)
class TestPrepassDragRegression:
"""single_equation_pass bakes stale values that break drag.
Setup: ground --Revolute--> arm --Planar(d=0)--> plate
The Revolute pins arm's origin to ground (fixes arm/tx, arm/ty,
arm/tz via single_equation_pass). The Planar keeps plate coplanar
with arm. After prepass, the Planar residual has arm's position
baked as Const(0.0).
During drag: arm/tz is set to 5.0. Because arm/tz is marked fixed
by prepass, Newton can't correct it, AND the Planar residual still
uses Const(0.0) instead of the live value 5.0. The Revolute
constraint (arm at origin) is silently violated.
"""
def _setup(self):
pt = ParamTable()
ground = RigidBody("g", pt, (0, 0, 0), ID_QUAT, grounded=True)
arm = RigidBody("arm", pt, (10, 0, 0), ID_QUAT)
plate = RigidBody("plate", pt, (10, 5, 0), ID_QUAT)
constraints = [
RevoluteConstraint(ground, (0, 0, 0), ID_QUAT, arm, (0, 0, 0), ID_QUAT),
PlanarConstraint(arm, (0, 0, 0), ID_QUAT, plate, (0, 0, 0), ID_QUAT, offset=0.0),
]
bodies = [ground, arm, plate]
return pt, bodies, constraints
def test_bug_stale_constants_after_single_equation_pass(self):
"""Document the bug: prepass bakes arm/tz=0, drag breaks constraints."""
pt, bodies, constraints = self._setup()
raw_residuals, quat_groups = _build_residuals(bodies, constraints)
# Simulate OLD pre_drag: substitution + single_equation_pass
residuals = substitution_pass(raw_residuals, pt)
residuals = single_equation_pass(residuals, pt)
ok = newton_solve(residuals, pt, quat_groups=quat_groups, max_iter=100, tol=1e-10)
assert ok
# Verify prepass fixed arm's position params
assert pt.is_fixed("arm/tx")
assert pt.is_fixed("arm/ty")
assert pt.is_fixed("arm/tz")
# Simulate drag: move arm up (set_value, as drag_step does)
pt.set_value("arm/tz", 5.0)
pt.set_value("plate/tz", 5.0) # initial guess near drag
ok = newton_solve(residuals, pt, quat_groups=quat_groups, max_iter=100, tol=1e-10)
# Solver "converges" on the stale cached residuals
assert ok
# But the TRUE constraints are violated: arm should be at z=0
# (Revolute pins it to ground) yet it's at z=5
max_err = _eval_raw_residuals(bodies, constraints, pt)
assert max_err > 1.0, (
f"Expected large raw residual violation, got {max_err:.6e}. "
"The bug should cause the Revolute z-residual to be ~5.0"
)
def test_fix_no_single_equation_pass_for_drag(self):
"""With the fix: skip single_equation_pass, constraints hold."""
pt, bodies, constraints = self._setup()
raw_residuals, quat_groups = _build_residuals(bodies, constraints)
# Simulate FIXED pre_drag: substitution only
residuals = substitution_pass(raw_residuals, pt)
ok = newton_solve(residuals, pt, quat_groups=quat_groups, max_iter=100, tol=1e-10)
assert ok
# arm/tz should NOT be fixed
assert not pt.is_fixed("arm/tz")
# Simulate drag: move arm up
pt.set_value("arm/tz", 5.0)
pt.set_value("plate/tz", 5.0)
ok = newton_solve(residuals, pt, quat_groups=quat_groups, max_iter=100, tol=1e-10)
assert ok
# Newton pulls arm back to z=0 (Revolute enforced) and plate follows
max_err = _eval_raw_residuals(bodies, constraints, pt)
assert max_err < 1e-8, f"Raw residual violation {max_err:.6e} — constraints not satisfied"
class TestCoincidentPlanarDragRegression:
"""Coincident upstream + Planar downstream — same bug class.
ground --Coincident--> bracket --Planar(d=0)--> plate
Coincident fixes bracket/tx,ty,tz. After prepass, the Planar
residual has bracket's position baked. Drag moves bracket;
the Planar uses stale constants.
"""
def _setup(self):
pt = ParamTable()
ground = RigidBody("g", pt, (0, 0, 0), ID_QUAT, grounded=True)
bracket = RigidBody("bracket", pt, (0, 0, 0), ID_QUAT)
plate = RigidBody("plate", pt, (10, 5, 0), ID_QUAT)
constraints = [
CoincidentConstraint(ground, (0, 0, 0), bracket, (0, 0, 0)),
PlanarConstraint(bracket, (0, 0, 0), ID_QUAT, plate, (0, 0, 0), ID_QUAT, offset=0.0),
]
bodies = [ground, bracket, plate]
return pt, bodies, constraints
def test_bug_coincident_planar(self):
"""Prepass fixes bracket/tz, Planar uses stale constant during drag."""
pt, bodies, constraints = self._setup()
raw, qg = _build_residuals(bodies, constraints)
residuals = substitution_pass(raw, pt)
residuals = single_equation_pass(residuals, pt)
ok = newton_solve(residuals, pt, quat_groups=qg, max_iter=100, tol=1e-10)
assert ok
assert pt.is_fixed("bracket/tz")
# Drag bracket up
pt.set_value("bracket/tz", 5.0)
pt.set_value("plate/tz", 5.0)
ok = newton_solve(residuals, pt, quat_groups=qg, max_iter=100, tol=1e-10)
assert ok
# True constraints violated
max_err = _eval_raw_residuals(bodies, constraints, pt)
assert max_err > 1.0, f"Expected raw violation from stale prepass, got {max_err:.6e}"
def test_fix_coincident_planar(self):
"""With the fix: constraints satisfied after drag."""
pt, bodies, constraints = self._setup()
raw, qg = _build_residuals(bodies, constraints)
residuals = substitution_pass(raw, pt)
# No single_equation_pass
ok = newton_solve(residuals, pt, quat_groups=qg, max_iter=100, tol=1e-10)
assert ok
assert not pt.is_fixed("bracket/tz")
# Drag bracket up
pt.set_value("bracket/tz", 5.0)
pt.set_value("plate/tz", 5.0)
ok = newton_solve(residuals, pt, quat_groups=qg, max_iter=100, tol=1e-10)
assert ok
max_err = _eval_raw_residuals(bodies, constraints, pt)
assert max_err < 1e-8, f"Raw residual violation {max_err:.6e} — constraints not satisfied"
class TestDragDoesNotBreakStaticSolve:
"""Verify that the static solve path (with single_equation_pass) still works.
The fix only affects pre_drag — the static solve() path continues to
use single_equation_pass for faster convergence.
"""
def test_static_solve_still_uses_prepass(self):
"""Static solve with single_equation_pass converges correctly."""
pt = ParamTable()
ground = RigidBody("g", pt, (0, 0, 0), ID_QUAT, grounded=True)
arm = RigidBody("arm", pt, (10, 0, 0), ID_QUAT)
plate = RigidBody("plate", pt, (10, 5, 8), ID_QUAT)
constraints = [
RevoluteConstraint(ground, (0, 0, 0), ID_QUAT, arm, (0, 0, 0), ID_QUAT),
PlanarConstraint(arm, (0, 0, 0), ID_QUAT, plate, (0, 0, 0), ID_QUAT, offset=0.0),
]
bodies = [ground, arm, plate]
raw, qg = _build_residuals(bodies, constraints)
# Full prepass (static solve path)
residuals = substitution_pass(raw, pt)
residuals = single_equation_pass(residuals, pt)
ok = newton_solve(residuals, pt, quat_groups=qg, max_iter=100, tol=1e-10)
assert ok
# All raw constraints satisfied
max_err = _eval_raw_residuals(bodies, constraints, pt)
assert max_err < 1e-8
# arm at origin (Revolute), plate coplanar (z=0)
env = pt.get_env()
assert abs(env["arm/tx"]) < 1e-8
assert abs(env["arm/ty"]) < 1e-8
assert abs(env["arm/tz"]) < 1e-8
assert abs(env["plate/tz"]) < 1e-8


@@ -173,15 +173,17 @@ class TestPointLinePerp:
pt = (Const(0.0), Const(0.0), Const(5.0))
origin = (Const(0.0), Const(0.0), Const(0.0))
direction = (Const(0.0), Const(0.0), Const(1.0))
-cx, cy = point_line_perp_components(pt, origin, direction)
+cx, cy, cz = point_line_perp_components(pt, origin, direction)
assert abs(cx.eval({})) < 1e-10
assert abs(cy.eval({})) < 1e-10
+assert abs(cz.eval({})) < 1e-10
def test_off_line(self):
pt = (Const(3.0), Const(0.0), Const(0.0))
origin = (Const(0.0), Const(0.0), Const(0.0))
direction = (Const(0.0), Const(0.0), Const(1.0))
-cx, cy = point_line_perp_components(pt, origin, direction)
+cx, cy, cz = point_line_perp_components(pt, origin, direction)
# d = (3,0,0), dir = (0,0,1), d x dir = (0*1-0*0, 0*0-3*1, 3*0-0*0) = (0,-3,0)
assert abs(cx.eval({})) < 1e-10
assert abs(cy.eval({}) - (-3.0)) < 1e-10
+assert abs(cz.eval({})) < 1e-10
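The cross-product arithmetic worked out in the test's comment can be double-checked with numpy:

```python
import numpy as np

# Verify the comment's computation: d = (3,0,0), dir = (0,0,1)
d = np.array([3.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])
c = np.cross(d, direction)
assert np.allclose(c, [0.0, -3.0, 0.0])  # matches (cx, cy, cz) = (0, -3, 0)
```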


@@ -421,8 +421,10 @@ class TestSliderCrank:
bodies = [ground, crank, rod, piston]
dof = _dof(pt, bodies, constraints)
-# 3D slider-crank: planar motion + out-of-plane fold modes
-assert dof == 3
+# With full 3-component cross products, the redundant constraint rows
+# eliminate the out-of-plane fold modes, giving the correct 1 DOF
+# (crank rotation only).
+assert dof == 1
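The arithmetic behind this assertion is DOF = free parameters minus the rank of the constraint Jacobian. A minimal sketch (hypothetical 3x3 Jacobian, not the actual slider-crank system) shows why adding redundant rows is harmless: rank, not row count, drives the answer.

```python
import numpy as np

# DOF = free params - rank(Jacobian). The third row below is redundant
# (the sum of the first two), so it raises the row count but not the rank,
# and removes no degree of freedom.
J = np.array([
    [1.0, 0.0, 0.0],  # constrain x
    [0.0, 1.0, 0.0],  # constrain y
    [1.0, 1.0, 0.0],  # redundant combination of the rows above
])
n_params = 3
dof = n_params - np.linalg.matrix_rank(J)
assert dof == 1  # redundant row adds no constraint
```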
def test_slider_crank_solves(self):
"""Slider-crank converges from displaced state."""