solver/configs/training/finetune.yaml
forbes-0023 363b49281b
build: phase 0 infrastructure setup
- Project structure: solver/, freecad/, export/, configs/, scripts/, tests/, docs/
- pyproject.toml with dependency groups: core, train, freecad, dev
- Hydra configs: dataset (synthetic, fusion360), model (baseline, gat), training (pretrain, finetune), export (production)
- Dockerfile with CUDA+PyG GPU and CPU-only targets
- docker-compose.yml for train, test, data-gen services
- Makefile with targets: train, test, lint, format, type-check, data-gen, export, check
- Pre-commit hooks: ruff, mypy, conventional commits
- Gitea Actions CI: lint, type-check, test on push/PR
- README with setup and usage instructions
2026-02-02 13:26:38 -06:00


# Fine-tuning on real data config
phase: finetune
dataset: fusion360
model: baseline
pretrained_checkpoint: checkpoints/pretrain/best_val_loss.ckpt
optimizer:
  name: adamw
  lr: 1e-5
  weight_decay: 1e-4

scheduler:
  name: cosine_annealing
  T_max: 50
  eta_min: 1e-7

training:
  epochs: 50
  batch_size: 32
  gradient_clip: 1.0
  early_stopping_patience: 10
  amp: true
  freeze_encoder: false  # set true for frozen encoder experiment

loss:
  edge_weight: 1.0
  graph_weight: 0.5
  joint_type_weight: 0.3
  dof_weight: 0.2
  redundant_penalty: 2.0

checkpointing:
  save_best_val_loss: true
  save_best_val_accuracy: true
  save_every_n_epochs: 5
  checkpoint_dir: checkpoints/finetune

logging:
  backend: wandb
  project: kindred-solver
  log_every_n_steps: 20

seed: 42
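The `scheduler` section selects cosine annealing with `T_max: 50` and `eta_min: 1e-7`, starting from the optimizer's `lr: 1e-5`. As a sketch of the learning-rate curve this implies, using the closed-form cosine-annealing formula (the same shape PyTorch's `CosineAnnealingLR` follows, ignoring its recursive chained variant):

```python
import math


def cosine_annealing_lr(epoch: int, base_lr: float = 1e-5,
                        eta_min: float = 1e-7, t_max: int = 50) -> float:
    """Closed-form cosine-annealing schedule:
    lr(t) = eta_min + (base_lr - eta_min) * (1 + cos(pi * t / t_max)) / 2.
    Defaults mirror this config's optimizer/scheduler values."""
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / t_max))


# The curve decays from base_lr at epoch 0 to eta_min at epoch t_max,
# with the steepest drop around the midpoint (epoch 25 here).
for epoch in (0, 25, 50):
    print(f"epoch {epoch:2d}: lr = {cosine_annealing_lr(epoch):.2e}")
```

With `epochs: 50` matching `T_max: 50`, the schedule completes exactly one half-cosine over the fine-tuning run, ending at the floor `eta_min`.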
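The `early_stopping_patience: 10` setting halts fine-tuning once validation loss has gone 10 consecutive epochs without improving. A minimal illustrative counter showing that semantics (a sketch only; the actual project most likely relies on its training framework's built-in early-stopping callback):

```python
class EarlyStopping:
    """Patience-based early stopping on validation loss.

    step() is called once per epoch and returns True when training
    should stop, i.e. after `patience` epochs with no new best loss.
    """

    def __init__(self, patience: int = 10) -> None:
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        if val_loss < self.best:
            # New best: record it and reset the stall counter.
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Example: with patience=2, two non-improving epochs in a row trigger a stop.
stopper = EarlyStopping(patience=2)
for loss in (1.0, 0.9, 0.95, 0.96):
    if stopper.step(loss):
        print(f"stopping; best val loss = {stopper.best}")
```

Note the counter resets on any improvement, so training can survive long plateaus as long as the loss eventually ticks down within the patience window.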