37 Commits

Author SHA1 Message Date
forbes-0023
8e521b4519 fix(solver): use all 3 cross-product components to avoid XY-plane singularity
The parallel-normal constraints (ParallelConstraint, PlanarConstraint,
ConcentricConstraint, RevoluteConstraint, CylindricalConstraint,
SliderConstraint, ScrewConstraint) and point-on-line constraints
previously used only the x and y components of the cross product,
dropping the z component.

This created a singularity when both normal vectors lay in the XY
plane: a yaw rotation produced a cross product entirely along Z,
which was discarded, making the constraint blind to the rotation.

Fix: return all 3 cross-product components. The Jacobian has a
rank deficiency at the solution (3 residuals, rank 2), but the
Newton solver handles this correctly via its pseudoinverse.

Similarly, point_line_perp_components now returns all 3 components
of the displacement cross product to avoid singularity when the
line direction aligns with a coordinate axis.
2026-02-22 15:51:59 -06:00
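As a standalone numpy illustration of the fix above (not the solver's Expr-based residual code): keeping all three cross-product components makes an XY-plane yaw visible.

```python
import numpy as np

def parallel_residual(n_i: np.ndarray, n_j: np.ndarray) -> np.ndarray:
    """All three components of n_i x n_j: zero iff the normals are parallel."""
    return np.cross(n_i, n_j)

# Two normals in the XY plane, yawed apart by 0.3 rad:
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(0.3), np.sin(0.3), 0.0])
r = parallel_residual(a, b)
# r = [0, 0, sin(0.3)]: the entire residual lies along Z, exactly the
# component the old 2-component formulation discarded.
```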
forbes-0023
bfb787157c perf(solver): cache compiled system across drag steps
During interactive drag, the constraint topology is invariant — only the
dragged part's parameter values change between steps. Previously,
drag_step() called solve() which rebuilt everything from scratch each
frame: new ParamTable, new Expr trees, symbolic differentiation, CSE,
and compilation (~150 ms overhead per frame).

Now pre_drag() builds and caches the system, symbolic Jacobian, compiled
evaluator, half-spaces, and weight vector. drag_step() reuses all cached
artifacts, only updating the dragged part's 7 parameter values before
running Newton-Raphson.

Expected ~1.5-2x speedup on drag step latency (eliminating rebuild
overhead, leaving only the irreducible Newton iteration cost).
2026-02-21 12:23:32 -06:00
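The caching pattern described above can be sketched as follows (hypothetical class and method bodies; the real pre_drag()/drag_step() in solver.py cache the compiled evaluator, half-spaces, and weight vector as well):

```python
# Hypothetical sketch: build once per drag, reuse across frames.
class DragSession:
    def __init__(self, build_system):
        self._build = build_system  # expensive: ParamTable, Expr trees,
        self._cache = None          # symbolic diff, CSE, compilation
        self.builds = 0

    def pre_drag(self, ctx):
        # Pay the ~150 ms rebuild cost once, at the start of the drag.
        self._cache = self._build(ctx)
        self.builds += 1

    def drag_step(self, part_id, params7):
        # Topology is invariant during a drag: reuse every cached
        # artifact and overwrite only the dragged part's 7 parameters.
        system = self._cache
        system["params"][part_id] = list(params7)
        return system  # ...then run Newton-Raphson on the cached system

    def post_drag(self):
        self._cache = None

session = DragSession(lambda ctx: {"params": {}})
session.pre_drag(ctx=None)
for _ in range(100):                 # 100 frames, a single build
    session.drag_step("part7", [0.0] * 7)
```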
forbes-0023
e0468cd3c1 fix(solver): redirect distance=0 constraint to CoincidentConstraint
DistancePointPointConstraint uses a squared residual (||p_i-p_j||^2 - d^2)
which has a degenerate Jacobian when d=0 and the constraint is satisfied
(all partial derivatives vanish). This made the constraint invisible to
the Newton solver during drag, allowing constrained points to drift apart.

When distance=0, use CoincidentConstraint instead (3 linear residuals:
dx, dy, dz) which always has a well-conditioned Jacobian.
2026-02-21 11:46:47 -06:00
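Why the squared residual goes blind at d=0, as a standalone numpy sketch:

```python
import numpy as np

def squared_distance_residual(p, q, d):
    return float(np.sum((p - q) ** 2) - d**2)

def residual_gradient_wrt_p(p, q):
    # d/dp of ||p-q||^2 - d^2 is 2(p-q): it vanishes exactly when the
    # d=0 constraint is satisfied (p == q), so every Jacobian entry is 0.
    return 2.0 * (p - q)

p = q = np.zeros(3)
grad = residual_gradient_wrt_p(p, q)   # [0, 0, 0]: invisible to Newton
# The CoincidentConstraint residuals (dx, dy, dz) instead have the
# constant Jacobian [I | -I], which stays well conditioned at the solution.
```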
forbes-0023
64b1e24467 feat(solver): compile symbolic Jacobian to flat Python for fast evaluation
Add a code generation pipeline that compiles Expr DAGs into flat Python
functions, eliminating recursive tree-walk dispatch in the Newton-Raphson
inner loop.

Key changes:
- Add to_code() method to all 11 Expr node types (expr.py)
- New codegen.py module with CSE (common subexpression elimination),
  sparsity detection, and compile()/exec() compilation pipeline
- Add ParamTable.env_ref() to avoid dict copies per iteration (params.py)
- Newton and BFGS solvers accept pre-built jac_exprs and compiled_eval
  to avoid redundant diff/simplify and enable compiled evaluation
- count_dof() and diagnostics accept pre-built jac_exprs
- solver.py builds symbolic Jacobian once, compiles once, passes to all
  consumers (_monolithic_solve, count_dof, diagnostics)
- Automatic fallback: if codegen fails, tree-walk eval is used

Expected performance impact:
- ~10-20x faster Jacobian evaluation (no recursive dispatch)
- ~2-5x additional from CSE on quaternion-heavy systems
- ~3x fewer entries evaluated via sparsity detection
- Eliminates redundant diff().simplify() in DOF/diagnostics
2026-02-21 11:22:36 -06:00
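To make the flat-code idea concrete, here is a toy version of the pipeline (hypothetical tuple-based expressions; the real codegen.py operates on Expr DAGs via to_code(), with sparsity detection and a tree-walk fallback):

```python
# Toy codegen: hash-cons subexpressions (CSE), emit one flat function,
# then compile()/exec() it once so the inner loop has no tree dispatch.
def compile_flat(exprs, var_names):
    lines, seen = [], {}

    def emit(e):
        key = repr(e)
        if key in seen:
            return seen[key]          # CSE: each subexpression emitted once
        if isinstance(e, str):        # variable reference
            return e
        if isinstance(e, (int, float)):
            return repr(e)
        op, args = e[0], [emit(a) for a in e[1:]]
        name = f"t{len(seen)}"
        lines.append(f"{name} = {args[0]} {op} {args[1]}")
        seen[key] = name
        return name

    outs = [emit(e) for e in exprs]
    src = "def f({}):\n    ".format(", ".join(var_names)) + "\n    ".join(
        lines + [f"return [{', '.join(outs)}]"])
    ns = {}
    exec(compile(src, "<codegen>", "exec"), ns)
    return ns["f"]

# (x*y) appears in both outputs but is computed once:
f = compile_flat([("+", ("*", "x", "y"), 1.0),
                  ("*", ("*", "x", "y"), 2.0)], ["x", "y"])
# f(3.0, 4.0) -> [13.0, 24.0]
```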
forbes-0023
d20b38e760 feat(solver): add diagnostic logging throughout solver pipeline
- solver.py: log solve entry (parts/constraints counts), system build
  stats, convergence result with timing, decomposition decisions,
  Newton/BFGS fallback events, and per-constraint diagnostics on failure
- solver.py: log drag lifecycle (pre_drag parts, drag_step timing/status,
  post_drag step count summary)
- decompose.py: log cluster count, per-cluster body/constraint/residual
  stats, and per-cluster convergence failures
- Init.py: add _FreeCADLogHandler routing Python logging.* calls to
  FreeCAD.Console (PrintLog/PrintWarning/PrintError) with kindred_solver
  logger at DEBUG level
2026-02-21 10:07:54 -06:00
318a1c17da Merge pull request 'feat(solver): Phase 4+5 — diagnostics, preferences, assembly integration' (#34) from feat/phase5-assembly-integration into main
Reviewed-on: #34
2026-02-21 05:48:26 +00:00
forbes-0023
adaa0f9a69 test(solver): add in-client console tests for Phase 5 assembly integration
Paste-into-console test script exercising the full pipeline:
- Solver registry and loading
- Preference switching between kindred/ondsel
- Fixed joint placement matching
- Revolute joint DOF reporting
- No-ground error code
- Solve determinism/stability
- Standalone kcsolve API (no FreeCAD Assembly objects)
- Diagnose API for overconstrained detection
2026-02-20 23:34:39 -06:00
forbes-0023
9dad25e947 feat(solver): assembly integration — diagnose, drag protocol, system extraction (phase 5)
- Extract _build_system() from solve() to enable reuse by diagnose()
- Add diagnose(ctx) method: runs find_overconstrained() unconditionally
- Add interactive drag protocol: pre_drag(), drag_step(), post_drag()
- Add _run_diagnostics() and _extract_placements() helpers
- Log warning when joint limits are present (not yet enforced)
- KindredSolver now implements all IKCSolver methods needed for
  full Assembly workbench integration
2026-02-20 23:32:51 -06:00
forbes-0023
b4b8724ff1 feat(solver): diagnostics, half-space preference, and weight vectors (phase 4)
- Add per-entity DOF analysis via Jacobian SVD (diagnostics.py)
- Add overconstrained detection: redundant vs conflicting constraints
- Add half-space tracking to preserve configuration branch (preference.py)
- Add minimum-movement weighting for least-squares solve
- Extend BFGS fallback with weight vector and quaternion renormalization
- Add snapshot/restore and env accessor to ParamTable
- Fix DistancePointPointConstraint sign for half-space tracking
2026-02-20 23:32:45 -06:00
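A minimal sketch of rank-based overconstraint detection (assumed approach; the real diagnostics.py additionally separates redundant from conflicting constraints and maps Jacobian rows back to entities):

```python
import numpy as np

def redundant_count(J: np.ndarray, tol: float = 1e-9) -> int:
    """Number of constraint rows that add no new rank."""
    return J.shape[0] - np.linalg.matrix_rank(J, tol=tol)

# Three constraint rows over three params; the third is row1 + row2:
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
n = redundant_count(J)   # 1 redundant constraint equation
```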
3f5f7905b5 Merge pull request 'feat(solver): graph decomposition for cluster-by-cluster solving (phase 3)' (#33) from feat/phase3-graph-decomposition into main 2026-02-21 04:21:10 +00:00
forbes-0023
92ae57751f feat(solver): graph decomposition for cluster-by-cluster solving (phase 3)
Add a Python decomposition layer using NetworkX that partitions the
constraint graph into biconnected components (rigid clusters), orders
them via a block-cut tree, and solves each cluster independently.
Articulation-point bodies propagate as boundary conditions between
clusters.

New module kindred_solver/decompose.py:
- DOF table mapping BaseJointKind to residual counts
- Constraint graph construction (nx.MultiGraph)
- Biconnected component detection + articulation points
- Block-cut tree solve ordering (root-first from grounded cluster)
- Cluster-by-cluster solver with boundary body fix/unfix cycling
- Pebble game integration for per-cluster rigidity classification

Changes to existing modules:
- params.py: add unfix() for boundary body cycling
- solver.py: extract _monolithic_solve(), add decomposition branch
  for assemblies with >= 8 free bodies

Performance: for k clusters of ~n/k params each, total cost drops
from O(n^3) to O(n^3/k^2).

220 tests passing (up from 207).
2026-02-20 22:19:35 -06:00
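The decomposition idea can be shown with a toy graph (a plain nx.Graph here; the real decompose.py uses an nx.MultiGraph plus the DOF table, grounding, and block-cut-tree ordering):

```python
import networkx as nx

G = nx.Graph()
# Two triangle clusters sharing body C (an articulation point):
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "A"),
                  ("C", "D"), ("D", "E"), ("E", "C")])

clusters = [set(c) for c in nx.biconnected_components(G)]
boundary = set(nx.articulation_points(G))
# clusters: {'A','B','C'} and {'C','D','E'}; boundary: {'C'}.
# C is solved with its first cluster, then held fixed as a boundary
# condition while the second cluster is solved.
```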
forbes-0023
533ca91774 feat(solver): full constraint vocabulary — all 24 BaseJointKind types (phase 2)
Add 18 new constraint classes covering all BaseJointKind types from Types.h:
- Point: PointOnLine (2r), PointInPlane (1r)
- Orientation: Parallel (2r), Perpendicular (1r), Angle (1r)
- Surface: Concentric (4r), Tangent (1r), Planar (3r), LineInPlane (2r)
- Kinematic: Ball (3r), Revolute (5r), Cylindrical (4r), Slider (5r),
  Screw (5r), Universal (4r)
- Mechanical: Gear (1r), RackPinion (1r)
- Stubs: Cam, Slot, DistanceCylSph

New modules:
- geometry.py: marker axis extraction, vector ops (dot3, cross3, sub3),
  geometric primitives (point_plane_distance, point_line_perp_components)
- bfgs.py: L-BFGS-B fallback solver via scipy for when Newton fails

solver.py changes:
- Wire all 20 supported types in _build_constraint()
- BFGS fallback after Newton-Raphson in solve()

183 tests passing (up from 82), including:
- DOF counting for every joint type
- Solve convergence from displaced initial conditions
- Multi-body mechanisms (four-bar linkage, slider-crank, revolute chain)
2026-02-20 21:15:15 -06:00
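A sketch of the geometry.py primitives named above (signatures assumed from the names; the actual helpers may differ):

```python
def dot3(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def sub3(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross3(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_plane_distance(p, plane_pt, plane_normal):
    # Signed distance from p to the plane through plane_pt with unit normal.
    return dot3(sub3(p, plane_pt), plane_normal)

d = point_plane_distance((0.0, 0.0, 3.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# d = 2.0; a PointInPlane-style residual (1r) would drive this scalar to zero.
```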
forbes-0023
98051ba0c9 feat: add Phase 1 constraint solver addon, move prior content to GNN/
- Move existing OndselSolver, GNN ML layer, and tooling into GNN/
  directory for integration in later phases
- Add Create addon scaffold: package.xml, Init.py
- Add expression DAG with eval, symbolic diff, simplification
- Add parameter table with fixed/free variable tracking
- Add quaternion rotation as polynomial Expr trees
- Add RigidBody entity (7 DOF: position + unit quaternion)
- Add constraint classes: Coincident, DistancePointPoint, Fixed
- Add Newton-Raphson solver with symbolic Jacobian + numpy lstsq
- Add pre-solve passes: substitution + single-equation
- Add DOF counting via Jacobian SVD rank
- Add KindredSolver IKCSolver bridge for kcsolve integration
- Add 82 unit tests covering all modules

Registers as 'kindred' solver via kcsolve.register_solver() when
loaded by Create's addon_loader.
2026-02-20 20:35:47 -06:00
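The Newton-Raphson + lstsq loop described above can be sketched in miniature (simplified: numerical residual/Jacobian callables stand in for the symbolic Expr machinery):

```python
import numpy as np

def newton_lstsq(residual, jacobian, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = jacobian(x)
        # lstsq returns the minimum-norm step even when J is rank
        # deficient, which is what lets the solver tolerate redundant
        # constraint rows.
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + dx
    return x

# Pin a 2D point to the unit circle on the line x = y:
res = lambda p: np.array([p[0] ** 2 + p[1] ** 2 - 1.0, p[0] - p[1]])
jac = lambda p: np.array([[2 * p[0], 2 * p[1]], [1.0, -1.0]])
sol = newton_lstsq(res, jac, [2.0, 0.5])
# sol is approximately [0.7071, 0.7071]
```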
forbes
c728bd93f7 Merge remote-tracking branch 'public/main'
Some checks failed
CI / datagen (push) Blocked by required conditions
CI / lint (push) Failing after 2m20s
CI / test (push) Has been cancelled
CI / type-check (push) Has been cancelled
2026-02-03 18:03:54 -06:00
forbes
bbbc5e0137 ci: use venv for PEP 668 compatibility on runner 2026-02-03 17:59:05 -06:00
forbes
40cda51142 ci: install internal CA from IPA instead of skipping SSL verification
Fetches the Kindred CA cert from ipa.kindred.internal and installs it
into the system trust store before checkout. Removes GIT_SSL_NO_VERIFY.
2026-02-03 17:57:53 -06:00
forbes
e45207b7cc ci: skip SSL verification for internal Gitea runner 2026-02-03 17:56:13 -06:00
forbes
537d8c7689 ci: add datagen job, adapt workflow for Gitea runner
- Drop actions/setup-python, use system python3
- Use full Gitea-compatible action URLs
- CPU-only torch via pytorch whl/cpu index
- Add datagen job with cache/checkpoint resume and artifact upload
- Manual dispatch with configurable assembly count and worker count
- Datagen runs on push to main (after tests pass) or manual trigger
2026-02-03 17:52:48 -06:00
93bda28f67 feat(mates): add mate-level ground truth labels
MateLabel and MateAssemblyLabels dataclasses with label_mate_assembly()
that back-attributes joint-level independence to originating mates.
Detects redundant and degenerate mates with pattern membership tracking.

Closes #15
2026-02-03 13:08:23 -06:00
239e45c7f9 feat(mates): add mate-based synthetic assembly generator
SyntheticMateGenerator wraps existing joint generator with reverse
mapping (joint->mates) and configurable noise injection (redundant,
missing, incompatible mates). Batch generation via
generate_mate_training_batch().

Closes #14
2026-02-03 13:05:58 -06:00
118474f892 feat(mates): add mate-to-joint conversion and assembly analysis
convert_mates_to_joints() bridges mate-level constraints to the existing
joint-based analysis pipeline. analyze_mate_assembly() orchestrates the
full pipeline with bidirectional mate-joint traceability.

Closes #13
2026-02-03 13:03:13 -06:00
e8143cf64c feat(mates): add joint pattern recognition
JointPattern enum (9 patterns), PatternMatch dataclass, and
recognize_patterns() function with data-driven pattern rules.
Supports canonical, partial, and ambiguous pattern matching.

Closes #12
2026-02-03 12:59:53 -06:00
9f53fdb154 feat(mates): add mate type definitions and geometry references
MateType enum (8 types), GeometryType enum (5 types), GeometryRef and
Mate dataclasses with validation, serialization, and context-dependent
DOF removal via dof_removed().

Closes #11
2026-02-03 12:55:37 -06:00
5d1988b513 Merge remote-tracking branch 'public/main'
# Conflicts:
#	.gitignore
#	README.md
2026-02-03 10:53:48 -06:00
f29060491e feat(datagen): add dataset generation CLI with sharding and checkpointing
- Add solver/datagen/dataset.py with DatasetConfig, DatasetGenerator,
  ShardSpec/ShardResult dataclasses, parallel shard generation via
  ProcessPoolExecutor, checkpoint/resume support, index and stats output
- Add scripts/generate_synthetic.py CLI entry point with Hydra-first
  and argparse fallback modes
- Add minimal YAML parser (parse_simple_yaml) for config loading
  without PyYAML dependency
- Add progress display with tqdm fallback to print-based ETA
- Update configs/dataset/synthetic.yaml with shard_size, checkpoint_every
- Update solver/datagen/__init__.py with DatasetConfig, DatasetGenerator
  exports
- Add tests/datagen/test_dataset.py with 28 tests covering config,
  YAML parsing, seed derivation, end-to-end generation, resume,
  stats/index structure, determinism, and CLI integration

Closes #10
2026-02-03 08:44:31 -06:00
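Deterministic seed derivation per shard (the tests mention it) might look like this hypothetical sketch; the real DatasetGenerator's scheme may differ:

```python
import hashlib

def shard_seed(base_seed: int, shard_idx: int) -> int:
    # Hash base seed + shard index so parallel and resumed runs
    # regenerate identical shards regardless of worker assignment.
    h = hashlib.sha256(f"{base_seed}:{shard_idx}".encode()).digest()
    return int.from_bytes(h[:8], "little")

assert shard_seed(42, 0) != shard_seed(42, 1)   # shards are independent
assert shard_seed(42, 3) == shard_seed(42, 3)   # ...but reproducible
```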
8a49f8ef40 feat: ground truth labeling pipeline
- Create solver/datagen/labeling.py with label_assembly() function
- Add dataclasses: ConstraintLabel, JointLabel, BodyDofLabel,
  AssemblyLabel, AssemblyLabels
- Per-constraint labels: pebble_independent + jacobian_independent
- Per-joint labels: aggregated independent/redundant/total counts
- Per-body DOF: translational + rotational from nullspace projection
- Assembly label: classification, total_dof, has_degeneracy flag
- AssemblyLabels.to_dict() for JSON-serializable output
- Integrate into generate_training_batch (adds 'labels' field)
- Export AssemblyLabels and label_assembly from datagen package
- Add 25 labeling tests + 1 batch structure test (184 total)

Closes #9
2026-02-02 15:20:02 -06:00
78289494e2 feat: geometric diversity for synthetic assembly generation
- Add AxisStrategy type (cardinal, random, near_parallel)
- Add random body orientations via scipy.spatial.transform.Rotation
- Add parallel axis injection with configurable probability
- Add grounded parameter on all 7 generators (grounded/floating)
- Add axis sampling strategies: cardinal, random, near-parallel
- Update _create_joint with orientation-aware anchor offsets
- Add _resolve_axis helper for parallel axis propagation
- Update generate_training_batch with axis_strategy, parallel_axis_prob,
  grounded_ratio parameters
- Add body_orientations and grounded fields to batch output
- Export AxisStrategy from datagen package
- Add 28 new tests (72 total generator tests, 158 total)

Closes #8
2026-02-02 14:57:49 -06:00
0b5813b5a9 feat: parameterized assembly templates and complexity tiers
Add 4 new topology generators to SyntheticAssemblyGenerator:
- generate_tree_assembly: random spanning tree with configurable branching
- generate_loop_assembly: closed ring producing overconstrained data
- generate_star_assembly: hub-and-spoke topology
- generate_mixed_assembly: tree + loops with configurable edge density

Each accepts joint_types as JointType | list[JointType] for per-joint
type sampling.

Add complexity tiers (simple/medium/complex) with predefined body count
ranges via COMPLEXITY_RANGES dict and ComplexityTier type alias.

Update generate_training_batch with 7-way generator selection,
complexity_tier parameter, and generator_type field in output dicts.

Extract private helpers (_random_position, _random_axis,
_select_joint_type, _create_joint) to reduce duplication.

44 generator tests, 130 total — all passing.

Closes #7
2026-02-02 14:38:05 -06:00
dc742bfc82 test: add unit tests for datagen modules
- test_types.py: JointType enum values/count, dataclass defaults/isolation
- test_pebble_game.py: DOF accounting, rigidity, classification, edge results
- test_jacobian.py: Jacobian shape per joint type, rank, parallel axis degeneracy
- test_analysis.py: demo scenarios (revolute, fixed, triangle, parallel axes)
- test_generator.py: chain/rigid/overconstrained generation, training batch

Bug fixes found during testing:
- JointType enum: duplicate int values caused aliasing (SLIDER=REVOLUTE etc).
  Changed to (ordinal, dof) tuple values with a .dof property.
- pebble_game.py: .value -> .dof for constraint count
- analysis.py: classify from effective DOF (not raw pebble game with virtual
  ground body skew)

105 tests, all passing.

Closes #6
2026-02-02 14:08:22 -06:00
831a10cdb4 feat: port SyntheticAssemblyGenerator to solver/datagen/generator.py
Port chain, rigid, and overconstrained assembly generators plus
the training batch generation from data/synthetic/pebble-game.py.

- Refactored rng.choice on enums/callables to integer indexing (mypy)
- Typed n_bodies_range as tuple[int, int]
- Typed batch return as list[dict[str, Any]]
- Full type annotations (mypy strict)
- Re-exported from solver.datagen.__init__

Closes #5
2026-02-02 13:54:32 -06:00
9a31df4988 feat: port analyze_assembly to solver/datagen/analysis.py
Port the combined pebble game + Jacobian verification entry point from
data/synthetic/pebble-game.py. Ties PebbleGame3D and JacobianVerifier
together with virtual ground body support.

- Optional[int] -> int | None (UP007)
- GROUND_ID constant extracted to module level
- Full type annotations (mypy strict)
- Re-exported from solver.datagen.__init__

Closes #4
2026-02-02 13:52:03 -06:00
455b6318d9 feat: port JacobianVerifier to solver/datagen/jacobian.py
Port the constraint Jacobian builder and numerical rank verifier from
data/synthetic/pebble-game.py. All 11 joint type builders, SVD rank
computation, and incremental dependency detection.

- Full type annotations (mypy strict)
- Ruff lint and format clean
- Re-exported from solver.datagen.__init__

Closes #3
2026-02-02 13:50:16 -06:00
35d4ef736f feat: port PebbleGame3D to solver/datagen/pebble_game.py
Port the (6,6)-pebble game implementation from data/synthetic/pebble-game.py.
Imports shared types from solver.datagen.types. No behavioral changes.

- Full type annotations on all methods (mypy strict)
- Ruff-compliant: ternary, combined if, unpacking
- Re-exported from solver.datagen.__init__

Closes #2
2026-02-02 13:47:36 -06:00
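The naive count the pebble game refines is a plain Gruebler/Kutzbach-style formula; the (6,6)-pebble game additionally localizes which constraints are redundant. A back-of-envelope version:

```python
def gruebler_dof(n_bodies: int, constraints_per_joint: list[int]) -> int:
    """6 DOF per non-ground body minus constraint equations removed."""
    return 6 * (n_bodies - 1) - sum(constraints_per_joint)

door = gruebler_dof(2, [5])       # ground + door, one revolute (5r): 1 DOF
chain = gruebler_dof(3, [5, 5])   # two-link revolute chain: 2 DOF
```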
1b6135129e feat: port shared types to solver/datagen/types.py
Port JointType, RigidBody, Joint, PebbleState, and ConstraintAnalysis
from data/synthetic/pebble-game.py into the solver package.

- Add __all__ export list
- Put typing.Any behind TYPE_CHECKING (ruff TCH003)
- Parameterize list[dict] as list[dict[str, Any]] (mypy strict)
- Re-export all types from solver.datagen.__init__

Closes #1
2026-02-02 13:43:19 -06:00
363b49281b build: phase 0 infrastructure setup
- Project structure: solver/, freecad/, export/, configs/, scripts/, tests/, docs/
- pyproject.toml with dependency groups: core, train, freecad, dev
- Hydra configs: dataset (synthetic, fusion360), model (baseline, gat), training (pretrain, finetune), export (production)
- Dockerfile with CUDA+PyG GPU and CPU-only targets
- docker-compose.yml for train, test, data-gen services
- Makefile with targets: train, test, lint, format, type-check, data-gen, export, check
- Pre-commit hooks: ruff, mypy, conventional commits
- Gitea Actions CI: lint, type-check, test on push/PR
- README with setup and usage instructions
2026-02-02 13:26:38 -06:00
f61d005400 first commit 2026-02-02 13:09:37 -06:00
forbes
e32c9cd793 fix: use previous iteration dxNorm in convergence check
isConvergedToNumericalLimit() compared dxNorms->at(iterNo) to itself
instead of comparing current vs previous iteration. This prevented
the solver from detecting convergence improvement, causing it to
exhaust its iteration limit on assemblies with many constraints.

Fix: read dxNorms->at(iterNo - 1) for the previous iteration's norm.
2026-02-01 21:10:12 -06:00
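A Python stand-in for the C++ indexing fix (the surrounding convergence predicate in isConvergedToNumericalLimit() is simplified away here):

```python
def dx_norm_improvement(dx_norms: list[float], iter_no: int) -> float:
    """How much the Newton step norm shrank since the last iteration."""
    previous = dx_norms[iter_no - 1]   # bug: this read dx_norms[iter_no],
    current = dx_norms[iter_no]        # comparing the value to itself
    return previous - current

improvement = dx_norm_improvement([1.0, 0.1, 0.01], 2)   # 0.09
# The self-comparison always showed zero improvement, so the solver
# never registered progress and ran to its iteration limit.
```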
828 changed files with 18492 additions and 25 deletions

.gitignore (vendored, 77 lines changed)

@@ -1,44 +1,83 @@
# Prerequisites
# C++ compiled objects
*.d
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Dynamic libraries
# C++ libraries
*.so
*.dylib
*.dll
# Fortran module files
*.mod
*.smod
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
# C++ executables
*.exe
*.out
*.app
.vs
# C++ build
build/
cmake-build-debug/
.vs/
x64/
temp/
# OndselSolver test artifacts
*.bak
assembly.asmt
build
cmake-build-debug
.idea
temp/
/testapp/draggingBackhoe.log
/testapp/runPreDragBackhoe.asmt
# Python
__pycache__/
*.py[cod]
*$py.class
*.egg-info/
dist/
*.egg
# Virtual environments
.venv/
venv/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# mypy / ruff / pytest
.mypy_cache/
.ruff_cache/
.pytest_cache/
# Data (large files tracked separately)
data/synthetic/*.pt
data/fusion360/*.json
data/fusion360/*.step
data/processed/*.pt
!data/**/.gitkeep
# Model checkpoints
*.ckpt
*.pth
*.onnx
*.torchscript
# Experiment tracking
wandb/
runs/
# OS
.DS_Store
Thumbs.db
# Environment
.env


@@ -0,0 +1,209 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:
    inputs:
      run_datagen:
        description: "Run dataset generation"
        required: false
        type: boolean
        default: false
      num_assemblies:
        description: "Number of assemblies to generate"
        required: false
        type: string
        default: "100000"
      num_workers:
        description: "Parallel workers for datagen"
        required: false
        type: string
        default: "4"

env:
  PIP_CACHE_DIR: /tmp/pip-cache-solver
  TORCH_INDEX: https://download.pytorch.org/whl/cpu
  VIRTUAL_ENV: /tmp/solver-venv

jobs:
  # ---------------------------------------------------------------------------
  # Lint — fast, no torch required
  # ---------------------------------------------------------------------------
  lint:
    runs-on: ubuntu-latest
    env:
      PATH: /tmp/solver-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    steps:
      - name: Trust internal CA
        run: |
          curl -sk https://ipa.kindred.internal/ipa/config/ca.crt \
            -o /usr/local/share/ca-certificates/kindred-internal.crt
          update-ca-certificates
      - name: Checkout
        uses: https://github.com/actions/checkout@v4
      - name: Set up venv
        run: python3 -m venv $VIRTUAL_ENV
      - name: Install lint tools
        run: pip install --cache-dir $PIP_CACHE_DIR ruff
      - name: Ruff check
        run: ruff check solver/ freecad/ tests/ scripts/
      - name: Ruff format check
        run: ruff format --check solver/ freecad/ tests/ scripts/

  # ---------------------------------------------------------------------------
  # Type check
  # ---------------------------------------------------------------------------
  type-check:
    runs-on: ubuntu-latest
    env:
      PATH: /tmp/solver-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    steps:
      - name: Trust internal CA
        run: |
          curl -sk https://ipa.kindred.internal/ipa/config/ca.crt \
            -o /usr/local/share/ca-certificates/kindred-internal.crt
          update-ca-certificates
      - name: Checkout
        uses: https://github.com/actions/checkout@v4
      - name: Set up venv
        run: python3 -m venv $VIRTUAL_ENV
      - name: Install dependencies
        run: |
          pip install --cache-dir $PIP_CACHE_DIR \
            mypy numpy scipy \
            torch --index-url $TORCH_INDEX
          pip install --cache-dir $PIP_CACHE_DIR torch-geometric
          pip install --cache-dir $PIP_CACHE_DIR -e ".[dev]"
      - name: Mypy
        run: mypy solver/ freecad/

  # ---------------------------------------------------------------------------
  # Tests
  # ---------------------------------------------------------------------------
  test:
    runs-on: ubuntu-latest
    env:
      PATH: /tmp/solver-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    steps:
      - name: Trust internal CA
        run: |
          curl -sk https://ipa.kindred.internal/ipa/config/ca.crt \
            -o /usr/local/share/ca-certificates/kindred-internal.crt
          update-ca-certificates
      - name: Checkout
        uses: https://github.com/actions/checkout@v4
      - name: Set up venv
        run: python3 -m venv $VIRTUAL_ENV
      - name: Install dependencies
        run: |
          pip install --cache-dir $PIP_CACHE_DIR \
            torch --index-url $TORCH_INDEX
          pip install --cache-dir $PIP_CACHE_DIR torch-geometric
          pip install --cache-dir $PIP_CACHE_DIR -e ".[train,dev]"
      - name: Run tests
        run: pytest tests/ freecad/tests/ -v --tb=short

  # ---------------------------------------------------------------------------
  # Dataset generation — manual trigger or on main push
  # ---------------------------------------------------------------------------
  datagen:
    runs-on: ubuntu-latest
    if: >-
      (github.event_name == 'workflow_dispatch' && inputs.run_datagen == true) ||
      (github.event_name == 'push' && github.ref == 'refs/heads/main')
    needs: [test]
    env:
      PATH: /tmp/solver-venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    steps:
      - name: Trust internal CA
        run: |
          curl -sk https://ipa.kindred.internal/ipa/config/ca.crt \
            -o /usr/local/share/ca-certificates/kindred-internal.crt
          update-ca-certificates
      - name: Checkout
        uses: https://github.com/actions/checkout@v4
      - name: Set up venv
        run: python3 -m venv $VIRTUAL_ENV
      - name: Install dependencies
        run: |
          pip install --cache-dir $PIP_CACHE_DIR \
            torch --index-url $TORCH_INDEX
          pip install --cache-dir $PIP_CACHE_DIR torch-geometric
          pip install --cache-dir $PIP_CACHE_DIR -e ".[train]"
      - name: Restore datagen checkpoint
        id: datagen-cache
        uses: https://github.com/actions/cache/restore@v4
        with:
          path: data/synthetic
          key: datagen-${{ github.sha }}
          restore-keys: |
            datagen-
      - name: Generate dataset
        run: |
          NUM=${INPUTS_NUM_ASSEMBLIES:-100000}
          WORKERS=${INPUTS_NUM_WORKERS:-4}
          echo "Generating ${NUM} assemblies with ${WORKERS} workers"
          python3 scripts/generate_synthetic.py \
            --num-assemblies "${NUM}" \
            --num-workers "${WORKERS}" \
            --output-dir data/synthetic
        env:
          INPUTS_NUM_ASSEMBLIES: ${{ inputs.num_assemblies }}
          INPUTS_NUM_WORKERS: ${{ inputs.num_workers }}
      - name: Save datagen checkpoint
        if: always()
        uses: https://github.com/actions/cache/save@v4
        with:
          path: data/synthetic
          key: datagen-${{ github.sha }}
      - name: Upload dataset
        uses: https://github.com/actions/upload-artifact@v3
        with:
          name: synthetic-dataset
          path: |
            data/synthetic/index.json
            data/synthetic/stats.json
            data/synthetic/shards/
          retention-days: 90
      - name: Print summary
        if: always()
        run: |
          echo "=== Dataset Generation Results ==="
          if [ -f data/synthetic/stats.json ]; then
            python3 -c "
          import json
          with open('data/synthetic/stats.json') as f:
              s = json.load(f)
          print(f'Total examples: {s[\"total_examples\"]}')
          print(f'Classification: {json.dumps(s[\"classification_distribution\"], indent=2)}')
          print(f'Rigid: {s[\"rigidity\"][\"rigid_fraction\"]*100:.1f}%')
          print(f'Degeneracy: {s[\"geometric_degeneracy\"][\"fraction_with_degeneracy\"]*100:.1f}%')
          "
          else
            echo "stats.json not found — generation may have failed"
            ls -la data/synthetic/ 2>/dev/null || echo "output dir missing"
          fi


@@ -0,0 +1,23 @@
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.3.4
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0
    hooks:
      - id: mypy
        additional_dependencies:
          - torch>=2.2
          - numpy>=1.26
        args: [--ignore-missing-imports]
  - repo: https://github.com/compilerla/conventional-pre-commit
    rev: v3.1.0
    hooks:
      - id: conventional-pre-commit
        stages: [commit-msg]
        args: [feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert]

GNN/Dockerfile (new file, 61 lines)

@@ -0,0 +1,61 @@
FROM nvidia/cuda:12.4.1-devel-ubuntu22.04 AS base

ENV DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1

# System deps
RUN apt-get update && apt-get install -y --no-install-recommends \
    python3.11 python3.11-venv python3.11-dev python3-pip \
    git wget curl \
    # FreeCAD headless deps
    freecad \
    libgl1-mesa-glx libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.11 1

# Create venv
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Install PyTorch with CUDA
RUN pip install --no-cache-dir \
    torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# Install PyG
RUN pip install --no-cache-dir \
    torch-geometric \
    pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv \
    -f https://data.pyg.org/whl/torch-2.4.0+cu124.html

WORKDIR /workspace

# Install project
COPY pyproject.toml .
RUN pip install --no-cache-dir -e ".[train,dev]" || true
COPY . .
RUN pip install --no-cache-dir -e ".[train,dev]"

# -------------------------------------------------------------------
FROM base AS cpu

# CPU-only variant (for CI and non-GPU environments)
FROM python:3.11-slim AS cpu-only
ENV PYTHONUNBUFFERED=1
RUN apt-get update && apt-get install -y --no-install-recommends \
    git freecad libgl1-mesa-glx libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
COPY pyproject.toml .
RUN pip install --no-cache-dir torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
RUN pip install --no-cache-dir torch-geometric
COPY . .
RUN pip install --no-cache-dir -e ".[train,dev]"
CMD ["pytest", "tests/", "-v"]

GNN/Makefile (new file, 48 lines)

@@ -0,0 +1,48 @@
.PHONY: train test lint data-gen export format type-check install dev clean help

PYTHON ?= python
PYTEST ?= pytest
RUFF ?= ruff
MYPY ?= mypy

help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

install: ## Install core dependencies
	pip install -e .

dev: ## Install all dependencies including dev tools
	pip install -e ".[train,dev]"
	pre-commit install
	pre-commit install --hook-type commit-msg

train: ## Run training (pass CONFIG=path/to/config.yaml)
	$(PYTHON) -m solver.training.train $(if $(CONFIG),--config-path $(CONFIG))

test: ## Run test suite
	$(PYTEST) tests/ freecad/tests/ -v --tb=short

lint: ## Run ruff linter
	$(RUFF) check solver/ freecad/ tests/ scripts/

format: ## Format code with ruff
	$(RUFF) format solver/ freecad/ tests/ scripts/
	$(RUFF) check --fix solver/ freecad/ tests/ scripts/

type-check: ## Run mypy type checker
	$(MYPY) solver/ freecad/

data-gen: ## Generate synthetic dataset (pass CONFIG=path/to/config.yaml)
	$(PYTHON) scripts/generate_synthetic.py $(if $(CONFIG),--config-path $(CONFIG))

export: ## Export trained model for deployment
	$(PYTHON) export/package_model.py $(if $(MODEL),--model $(MODEL))

clean: ## Remove build artifacts and caches
	rm -rf build/ dist/ *.egg-info/
	rm -rf .mypy_cache/ .pytest_cache/ .ruff_cache/
	find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	find . -type f -name "*.pyc" -delete 2>/dev/null || true

check: lint type-check test ## Run all checks (lint, type-check, test)
