2 Commits

Author SHA1 Message Date
Forbes
68c9acea5c feat(sessions): edit session acquire, release, and query endpoints
- Add 023_edit_sessions.sql migration with unique index on (item_id, context_level, object_id) for hard interference
- Add EditSessionRepository with Acquire, Release, ReleaseForWorkstation, GetByID, ListForItem, ListForUser, TouchHeartbeat, ExpireStale, GetConflict
- Add 4 handlers: acquire (POST), release (DELETE), list by item (GET), list by user (GET)
- Acquire auto-computes dependency_cone from DAG forward cone when available
- Hard interference returns 409 with holder info (username, workstation, context_level, object_id, acquired_at)
- Publish edit.session_acquired and edit.session_released via item-scoped SSE
- Add /api/edit-sessions (user scope) and /api/items/{pn}/edit-sessions (item scope) routes

Closes #163
2026-03-01 13:40:18 -06:00
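The commit message above says acquire "auto-computes dependency_cone from DAG forward cone when available" without showing the code. As an illustration only, a forward-cone walk over a DAG adjacency list might look like the following sketch (the `edges` shape and function name are assumptions, not the repository's actual implementation):

```go
package main

import "fmt"

// forwardCone returns the set of nodes reachable from start by
// following forward edges, including start itself. Hypothetical
// sketch of the "DAG forward cone" the commit message mentions.
func forwardCone(edges map[string][]string, start string) map[string]bool {
	cone := map[string]bool{start: true}
	queue := []string{start}
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		for _, next := range edges[node] {
			if !cone[next] {
				cone[next] = true
				queue = append(queue, next)
			}
		}
	}
	return cone
}

func main() {
	edges := map[string][]string{
		"Sketch001": {"Pad001"},
		"Pad001":    {"Fillet001"},
	}
	fmt.Println(len(forwardCone(edges, "Sketch001"))) // 3
}
```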
Forbes
a669327042 Merge branch 'feat/sse-per-connection-filtering' into feat/edit-sessions 2026-03-01 13:37:44 -06:00
16 changed files with 626 additions and 1250 deletions

View File

@@ -1,6 +1,6 @@
# Configuration Reference
**Last Updated:** 2026-03-01
**Last Updated:** 2026-02-06
---
@@ -153,70 +153,6 @@ odoo:
---
## Approval Workflows
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `workflows.directory` | string | `"/etc/silo/workflows"` | Path to directory containing YAML workflow definition files |
Workflow definition files describe multi-stage approval processes using a state machine pattern. Each file defines a workflow with states, transitions, and approver requirements.
```yaml
workflows:
directory: "/etc/silo/workflows"
```
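The workflow definition files themselves are not reproduced in this reference. As an illustration only, a transition check over a parsed definition could be sketched like this — the struct and field names are assumptions, not the documented schema:

```go
package main

import "fmt"

// Workflow is a hypothetical in-memory form of a parsed YAML workflow
// definition: each state maps to the states it may transition into.
// Names are illustrative, not the documented file format.
type Workflow struct {
	Transitions map[string][]string
}

// CanTransition reports whether the definition permits moving
// from one state to another.
func (w Workflow) CanTransition(from, to string) bool {
	for _, next := range w.Transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	eco := Workflow{Transitions: map[string][]string{
		"draft":  {"review"},
		"review": {"approved", "draft"},
	}}
	fmt.Println(eco.CanTransition("draft", "review"))   // true
	fmt.Println(eco.CanTransition("draft", "approved")) // false
}
```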
---
## Solver
| Key | Type | Default | Env Override | Description |
|-----|------|---------|-------------|-------------|
| `solver.default_solver` | string | `""` | `SILO_SOLVER_DEFAULT` | Default solver backend name |
| `solver.max_context_size_mb` | int | `10` | — | Maximum SolveContext payload size in MB |
| `solver.default_timeout` | int | `300` | — | Default solver job timeout in seconds |
| `solver.auto_diagnose_on_commit` | bool | `false` | — | Auto-submit diagnose job on assembly revision commit |
The solver module depends on the `jobs` module being enabled. See [SOLVER.md](SOLVER.md) for the full solver service specification.
```yaml
solver:
default_solver: "ondsel"
max_context_size_mb: 10
default_timeout: 300
auto_diagnose_on_commit: true
```
---
## Modules
Optional module toggles. Each module can be explicitly enabled or disabled. If not listed, the module's built-in default applies. See [MODULES.md](MODULES.md) for the full module system specification.
```yaml
modules:
projects:
enabled: true
audit:
enabled: true
odoo:
enabled: false
freecad:
enabled: true
jobs:
enabled: false
dag:
enabled: false
solver:
enabled: false
sessions:
enabled: true
```
The `auth.enabled` field controls the `auth` module directly (not duplicated under `modules:`). The `sessions` module depends on `auth` and is enabled by default.
---
## Authentication
Authentication has a master toggle and three independent backends. When `auth.enabled` is `false`, all routes are accessible without login and a synthetic admin user (`dev`) is injected into every request.
@@ -335,7 +271,6 @@ All environment variable overrides. These take precedence over values in `config
| `SILO_ADMIN_PASSWORD` | `auth.local.default_admin_password` | Default admin password |
| `SILO_LDAP_BIND_PASSWORD` | `auth.ldap.bind_password` | LDAP service account password |
| `SILO_OIDC_CLIENT_SECRET` | `auth.oidc.client_secret` | OIDC client secret |
| `SILO_SOLVER_DEFAULT` | `solver.default_solver` | Default solver backend name |
Additionally, YAML values can reference environment variables directly using `${VAR_NAME}` syntax, which is expanded at load time via `os.ExpandEnv()`.

View File

@@ -1,6 +1,6 @@
# Silo Gap Analysis
**Date:** 2026-03-01
**Date:** 2026-02-13
**Status:** Analysis Complete (Updated)
---
@@ -130,8 +130,8 @@ FreeCAD workbench maintained in separate [silo-mod](https://git.kindred-systems.
|-----|-------------|--------|--------|
| ~~**No rollback**~~ | ~~Cannot revert to previous revision~~ | ~~Data recovery difficult~~ | **Implemented** |
| ~~**No comparison**~~ | ~~Cannot diff between revisions~~ | ~~Change tracking manual~~ | **Implemented** |
| **No locking** | No concurrent edit protection | Multi-user unsafe | Partial (edit sessions with hard interference detection; full pessimistic locking not yet implemented) |
| ~~**No approval workflow**~~ | ~~No release/sign-off process~~ | ~~Quality control gap~~ | **Implemented** (YAML-configurable ECO workflows, multi-stage review gates, digital signatures) |
| **No locking** | No concurrent edit protection | Multi-user unsafe | Open |
| **No approval workflow** | No release/sign-off process | Quality control gap | Open |
### 3.2 Important Gaps
@@ -355,54 +355,47 @@ These design decisions remain unresolved:
## Appendix A: File Structure
Current structure:
Revision endpoints, status, labels, authentication, audit logging, and file attachments are implemented. Current structure:
```
internal/
api/
approval_handlers.go # Approval/ECO workflow endpoints
audit_handlers.go # Audit/completeness endpoints
auth_handlers.go # Login, tokens, OIDC
bom_handlers.go # Flat BOM, cost roll-up
broker.go # SSE broker with targeted delivery
dag_handlers.go # Dependency DAG endpoints
dependency_handlers.go # .kc dependency resolution
file_handlers.go # Presigned uploads, item files, thumbnails
handlers.go # Items, schemas, projects, revisions, Server struct
job_handlers.go # Job queue endpoints
location_handlers.go # Location hierarchy endpoints
macro_handlers.go # .kc macro endpoints
metadata_handlers.go # .kc metadata endpoints
handlers.go # Items, schemas, projects, revisions
middleware.go # Auth middleware
odoo_handlers.go # Odoo integration endpoints
pack_handlers.go # .kc checkout packing
routes.go # Route registration (~140 endpoints)
runner_handlers.go # Job runner endpoints
routes.go # Route registration (78 endpoints)
search.go # Fuzzy search
session_handlers.go # Edit session acquire/release/query
settings_handlers.go # Admin settings endpoints
solver_handlers.go # Solver service endpoints
sse_handler.go # SSE event stream handler
workstation_handlers.go # Workstation registration
auth/
auth.go # Auth service: local, LDAP, OIDC
db/
edit_sessions.go # Edit session repository
items.go # Item and revision repository
item_files.go # File attachment repository
jobs.go # Job queue repository
projects.go # Project repository
relationships.go # BOM repository
workstations.go # Workstation repository
modules/
modules.go # Module registry (12 modules)
loader.go # Config-to-module state loader
projects.go # Project repository
storage/
storage.go # File storage helpers
migrations/
001_initial.sql # Core schema
...
023_edit_sessions.sql # Edit session tracking (latest)
011_item_files.sql # Item file attachments (latest)
```
Future features would add:
```
internal/
api/
lock_handlers.go # Locking endpoints
db/
locks.go # Lock repository
releases.go # Release repository
migrations/
012_item_locks.sql # Locking table
013_releases.sql # Release management
```
---
@@ -472,28 +465,28 @@ This section compares Silo's capabilities against SOLIDWORKS PDM features. Gaps
| Feature | SOLIDWORKS PDM | Silo Status | Priority | Complexity |
|---------|---------------|-------------|----------|------------|
| Check-in/check-out | Full pessimistic locking | Partial (edit sessions with hard interference) | High | Moderate |
| Check-in/check-out | Full pessimistic locking | None | High | Moderate |
| Version history | Complete with branching | Full (linear) | - | - |
| Revision labels | A, B, C or custom schemes | Full (custom labels) | - | - |
| Rollback/restore | Full | Full | - | - |
| Compare revisions | Visual + metadata diff | Metadata diff only | Medium | Complex |
| Get Latest Revision | One-click retrieval | Partial (API only) | Medium | Simple |
Silo has edit sessions with hard interference detection (unique index on item + context_level + object_id prevents two users from editing the same object simultaneously). Full pessimistic file-level locking is not yet implemented. Visual diff comparison would require FreeCAD integration for CAD file visualization.
Silo lacks pessimistic locking (check-out), which is critical for multi-user CAD environments where file merging is impractical. Visual diff comparison would require FreeCAD integration for CAD file visualization.
### C.2 Workflow Management
| Feature | SOLIDWORKS PDM | Silo Status | Priority | Complexity |
|---------|---------------|-------------|----------|------------|
| Custom workflows | Full visual designer | Full (YAML-defined state machines) | - | - |
| State transitions | Configurable with permissions | Full (configurable transition rules) | - | - |
| Parallel approvals | Multiple approvers required | Full (multi-stage review gates) | - | - |
| Custom workflows | Full visual designer | None | Critical | Complex |
| State transitions | Configurable with permissions | Basic (status field only) | Critical | Complex |
| Parallel approvals | Multiple approvers required | None | High | Complex |
| Automatic transitions | Timer/condition-based | None | Medium | Moderate |
| Email notifications | On state change | None | High | Moderate |
| ECO process | Built-in change management | Full (YAML-configurable ECO workflows) | - | - |
| ECO process | Built-in change management | None | High | Complex |
| Child state conditions | Block parent if children invalid | None | Medium | Moderate |
Workflow management has been significantly addressed. Silo now supports YAML-defined state machine workflows with configurable transitions, multi-stage approval gates, and digital signatures. Remaining gaps: automatic timer-based transitions, email notifications, and child state condition enforcement.
Workflow management is the largest functional gap. SOLIDWORKS PDM offers sophisticated state machines with parallel approvals, automatic transitions, and deep integration with engineering change processes. Silo currently has only a simple status field (draft/review/released/obsolete) with no transition rules or approval processes.
### C.3 User Management & Security
@@ -556,13 +549,13 @@ CAD integration is maintained in separate repositories ([silo-mod](https://git.k
| Feature | SOLIDWORKS PDM | Silo Status | Priority | Complexity |
|---------|---------------|-------------|----------|------------|
| ERP integration | SAP, Dynamics, etc. | Partial (Odoo stubs) | Medium | Complex |
| API access | Full COM/REST API | Full REST API (~140 endpoints) | - | - |
| API access | Full COM/REST API | Full REST API (78 endpoints) | - | - |
| Dispatch scripts | Automation without coding | None | Medium | Moderate |
| Task scheduler | Background processing | Full (job queue with runners) | - | - |
| Task scheduler | Background processing | None | Medium | Moderate |
| Email system | SMTP integration | None | High | Simple |
| Web portal | Browser access | Full (React SPA + auth) | - | - |
Silo has a comprehensive REST API (~140 endpoints) and a full web UI with authentication. Odoo ERP integration has config/sync-log scaffolding but push/pull operations are stubs. Job queue with runner management is fully implemented. Remaining gaps: email notifications, dispatch automation.
Silo has a comprehensive REST API (78 endpoints) and a full web UI with authentication. Odoo ERP integration has config/sync-log scaffolding but push/pull operations are stubs. Remaining gaps: email notifications, task scheduler, dispatch automation.
### C.8 Reporting & Analytics
@@ -593,13 +586,13 @@ File storage works well. Thumbnail generation and file preview would significant
| Category | Feature | SW PDM Standard | SW PDM Pro | Silo Current | Silo Planned |
|----------|---------|-----------------|------------|--------------|--------------|
| **Version Control** | Check-in/out | Yes | Yes | Partial (edit sessions) | Tier 1 |
| **Version Control** | Check-in/out | Yes | Yes | No | Tier 1 |
| | Version history | Yes | Yes | Yes | - |
| | Rollback | Yes | Yes | Yes | - |
| | Revision labels/status | Yes | Yes | Yes | - |
| | Revision comparison | Yes | Yes | Yes (metadata) | - |
| **Workflow** | Custom workflows | Limited | Yes | Yes (YAML state machines) | - |
| | Parallel approval | No | Yes | Yes (multi-stage gates) | - |
| **Workflow** | Custom workflows | Limited | Yes | No | Tier 4 |
| | Parallel approval | No | Yes | No | Tier 4 |
| | Notifications | No | Yes | No | Tier 1 |
| **Security** | User auth | Windows | Windows/LDAP | Yes (local, LDAP, OIDC) | - |
| | Permissions | Basic | Granular | Partial (role-based) | Tier 4 |
@@ -613,7 +606,7 @@ File storage works well. Thumbnail generation and file preview would significant
| **Data** | CSV import/export | Yes | Yes | Yes | - |
| | ODS import/export | No | No | Yes | - |
| | Project management | Yes | Yes | Yes | - |
| **Integration** | API | Limited | Full | Full REST (~140) | - |
| **Integration** | API | Limited | Full | Full REST (78) | - |
| | ERP connectors | No | Yes | Partial (Odoo stubs) | Tier 6 |
| | Web access | No | Yes | Yes (React SPA + auth) | - |
| **Files** | Versioning | Yes | Yes | Yes | - |

View File

@@ -491,7 +491,4 @@ After a successful installation:
| [SPECIFICATION.md](SPECIFICATION.md) | Full design specification and API reference |
| [STATUS.md](STATUS.md) | Implementation status |
| [GAP_ANALYSIS.md](GAP_ANALYSIS.md) | Gap analysis and revision control roadmap |
| [MODULES.md](MODULES.md) | Module system specification |
| [WORKERS.md](WORKERS.md) | Job queue and runner system |
| [SOLVER.md](SOLVER.md) | Assembly solver service |
| [COMPONENT_AUDIT.md](COMPONENT_AUDIT.md) | Component audit tool design |

View File

@@ -1,7 +1,7 @@
# Module System Specification
**Status:** Draft
**Last Updated:** 2026-03-01
**Last Updated:** 2026-02-14
---
@@ -36,8 +36,6 @@ These cannot be disabled. They define what Silo *is*.
| `freecad` | Create Integration | `true` | URI scheme, executable path, client settings |
| `jobs` | Job Queue | `false` | Async compute jobs, runner management |
| `dag` | Dependency DAG | `false` | Feature DAG sync, validation states, interference detection |
| `solver` | Solver | `false` | Assembly constraint solving via server-side runners |
| `sessions` | Sessions | `true` | Workstation registration, edit sessions, and presence tracking |
### 2.3 Module Dependencies
@@ -48,8 +46,6 @@ Some modules require others to function:
| `dag` | `jobs` |
| `jobs` | `auth` (runner tokens) |
| `odoo` | `auth` |
| `solver` | `jobs` |
| `sessions` | `auth` |
When enabling a module, its dependencies are validated. The server rejects enabling `dag` without `jobs`. Disabling a module that others depend on shows a warning listing dependents.
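The validation described above can be sketched as a lookup against a dependency table — a minimal illustration assuming a hypothetical `missingDeps` helper, not the actual loader code:

```go
package main

import "fmt"

// moduleDeps mirrors rows from the dependency table above.
var moduleDeps = map[string][]string{
	"dag":  {"jobs"},
	"jobs": {"auth"},
	"odoo": {"auth"},
}

// missingDeps returns the dependencies of module that are not enabled.
// Sketch of the enable-time validation; not the repository's code.
func missingDeps(module string, enabled map[string]bool) []string {
	var missing []string
	for _, dep := range moduleDeps[module] {
		if !enabled[dep] {
			missing = append(missing, dep)
		}
	}
	return missing
}

func main() {
	// Enabling dag without jobs would be rejected:
	fmt.Println(missingDeps("dag", map[string]bool{"auth": true})) // [jobs]
}
```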
@@ -261,34 +257,6 @@ PUT /api/items/{partNumber}/dag
POST /api/items/{partNumber}/dag/mark-dirty/{nodeKey}
```
### 3.11 `solver`
```
GET /api/solver/jobs
GET /api/solver/jobs/{jobID}
POST /api/solver/jobs
POST /api/solver/jobs/{jobID}/cancel
GET /api/solver/solvers
GET /api/solver/results/{partNumber}
```
### 3.12 `sessions`
```
# Workstation management
GET /api/workstations
POST /api/workstations
DELETE /api/workstations/{workstationID}
# Edit sessions (user-scoped)
GET /api/edit-sessions
# Edit sessions (item-scoped)
GET /api/items/{partNumber}/edit-sessions
POST /api/items/{partNumber}/edit-sessions
DELETE /api/items/{partNumber}/edit-sessions/{sessionID}
```
---
## 4. Disabled Module Behavior
@@ -463,18 +431,6 @@ GET /api/modules
"required": false,
"name": "Dependency DAG",
"depends_on": ["jobs"]
},
"solver": {
"enabled": false,
"required": false,
"name": "Solver",
"depends_on": ["jobs"]
},
"sessions": {
"enabled": true,
"required": false,
"name": "Sessions",
"depends_on": ["auth"]
}
},
"server": {
@@ -562,9 +518,7 @@ Returns full config grouped by module with secrets redacted:
"job_timeout_check": 30,
"default_priority": 100
},
"dag": { "enabled": false },
"solver": { "enabled": false, "default_solver": "ondsel" },
"sessions": { "enabled": true }
"dag": { "enabled": false }
}
```
@@ -678,11 +632,6 @@ modules:
default_priority: 100
dag:
enabled: false
solver:
enabled: false
default_solver: ondsel
sessions:
enabled: true
```
If a module is not listed under `modules:`, its default enabled state from Section 2.2 applies. The `auth.enabled` field continues to control the `auth` module (no duplication under `modules:`).
@@ -783,7 +732,6 @@ These are read-only in the UI (setup-only via YAML/env). The "Test" button is av
- **Per-module permissions** — beyond the current role hierarchy, modules may define fine-grained scopes (e.g., `jobs:admin`, `dag:write`).
- **Location & Inventory module** — when the Location/Inventory API is implemented (tables already exist), it becomes a new optional module.
- **Notifications module** — per ROADMAP.md Tier 1, notifications/subscriptions will be a dedicated module.
- **Soft interference detection** — the `sessions` module currently enforces hard interference (unique index on item + context_level + object_id). Soft interference detection (overlapping dependency cones) is planned as a follow-up.
---

View File

@@ -92,7 +92,7 @@ Everything depends on these. They define what Silo *is*.
| **API Endpoint Registry** | Module discovery, dynamic UI rendering, health checks | Not Started |
| **Web UI Shell** | App launcher, breadcrumbs, view framework, module rendering | Partial |
| **Python Scripting Engine** | Server-side hook execution, module extension point | Not Started |
| **Job Queue Infrastructure** | PostgreSQL-backed async job queue with runner management | Complete |
| **Job Queue Infrastructure** | Redis/NATS shared async service for all compute modules | Not Started |
### Tier 1 -- Core Services
@@ -102,7 +102,7 @@ Broad downstream dependencies. These should be built early because retrofitting
|--------|-------------|------------|--------|
| **Headless Create** | API-driven FreeCAD instance for file manipulation, geometry queries, format conversion, rendering | Core Silo, Job Queue | Not Started |
| **Notifications & Subscriptions** | Per-part watch lists, lifecycle event hooks, webhook delivery | Core Silo, Registry | Not Started |
| **Audit Trail / Compliance** | ITAR, ISO 9001, AS9100 traceability; module-level event journaling | Core Silo | Complete (base) |
| **Audit Trail / Compliance** | ITAR, ISO 9001, AS9100 traceability; module-level event journaling | Core Silo | Partial |
### Tier 2 -- File Intelligence & Collaboration
@@ -132,7 +132,7 @@ Process modules that formalize how engineering work moves through an organizatio
| Module | Description | Depends On | Status |
|--------|-------------|------------|--------|
| **Approval / ECO Workflow** | Engineering change orders, multi-stage review gates, digital signatures | Notifications, Audit Trail, Schemas | Complete |
| **Approval / ECO Workflow** | Engineering change orders, multi-stage review gates, digital signatures | Notifications, Audit Trail, Schemas | Not Started |
| **Shop Floor Drawing Distribution** | Controlled push-to-production drawings; web-based appliance displays on the floor | Headless Create, Approval Workflow | Not Started |
| **Import/Export Bridge** | STEP, IGES, 3MF connectors; SOLIDWORKS migration tooling; ERP adapters | Headless Create | Not Started |
| **Multi-tenant / Org Management** | Org boundaries, role-based permissioning, storage quotas | Core Auth, Audit Trail | Not Started |
@@ -202,15 +202,15 @@ Implement engineering change processes (Tier 4: Approval/ECO Workflow).
| Task | Description | Status |
|------|-------------|--------|
| Workflow designer | YAML-defined state machines | Complete |
| State transitions | Configurable transition rules with permissions | Complete |
| Approval workflows | Single and parallel approver gates | Complete |
| Workflow designer | YAML-defined state machines | Not Started |
| State transitions | Configurable transition rules with permissions | Not Started |
| Approval workflows | Single and parallel approver gates | Not Started |
| Email notifications | SMTP integration for alerts on state changes | Not Started |
**Success metrics:**
- ~~Engineering change process completable in Silo~~ Done (YAML-configured workflows with multi-stage gates)
- Engineering change process completable in Silo
- Email notifications delivered reliably
- ~~Workflow state visible in web UI~~ Available via API
- Workflow state visible in web UI
### Search & Discovery
@@ -240,17 +240,9 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
5. ~~Multi-level BOM API~~ -- recursive expansion with configurable depth
6. ~~BOM export~~ -- CSV and ODS formats
### Recently Completed
7. ~~Workflow engine~~ -- YAML-defined state machines with multi-stage approval gates
8. ~~Job queue~~ -- PostgreSQL-backed async compute with runner management
9. ~~Assembly solver service~~ -- server-side constraint solving with result caching
10. ~~Workstation registration~~ -- device identity and heartbeat tracking
11. ~~Edit sessions~~ -- acquire/release with hard interference detection
### Critical Gaps (Required for Team Use)
1. ~~**Workflow engine**~~ -- Complete (YAML-configured approval workflows)
1. **Workflow engine** -- state machines with transitions and approvals
2. **Check-out locking** -- pessimistic locking for CAD files
### High Priority Gaps (Significant Value)
@@ -283,7 +275,7 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
1. **Module manifest format** -- JSON, TOML, or Python-based? Tradeoffs between simplicity and expressiveness.
2. **.kc thumbnail policy** -- Single canonical thumbnail vs. multi-view renders. Impacts file size and generation cost.
3. ~~**Job queue technology**~~ -- Resolved: PostgreSQL-backed with `SELECT FOR UPDATE SKIP LOCKED` for exactly-once delivery. No external queue dependency.
3. **Job queue technology** -- Redis Streams vs. NATS. Redis is already in the stack; NATS offers better pub/sub semantics for event-driven modules.
4. **Headless Create deployment** -- Sidecar container per Silo instance, or pool of workers behind the job queue?
5. **BIM-MES workbench scope** -- How much of FreeCAD BIM is reusable vs. needs to be purpose-built for inventory/facility modeling?
6. **Offline .kc workflow** -- How much of the `silo/` metadata is authoritative when disconnected? Reconciliation strategy on reconnect.
@@ -295,7 +287,7 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
### Implemented Features (MVP Complete)
#### Core Database System
- PostgreSQL schema with 23 migrations
- PostgreSQL schema with 13 migrations
- UUID-based identifiers throughout
- Soft delete support via `archived_at` timestamps
- Atomic sequence generation for part numbers
@@ -348,7 +340,7 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
- Template generation for import formatting
#### API & Web Interface
- REST API with ~140 endpoints
- REST API with 78 endpoints
- Authentication: local (bcrypt), LDAP/FreeIPA, OIDC/Keycloak
- Role-based access control (admin > editor > viewer)
- API token management (SHA-256 hashed)
@@ -379,7 +371,7 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
| Part number validation | Not started | API accepts but doesn't validate format |
| Location hierarchy CRUD | Schema only | Tables exist, no API endpoints |
| Inventory tracking | Schema only | Tables exist, no API endpoints |
| Unit tests | Partial | 31 Go test files across api, db, modules, ods, partnum, schema packages |
| Unit tests | Partial | 11 Go test files across api, db, ods, partnum, schema packages |
---

View File

@@ -1,912 +0,0 @@
# Solver Service Specification
**Status:** Phase 3b Implemented (server endpoints, job definitions, result cache)
**Last Updated:** 2026-03-01
**Depends on:** KCSolve Phase 1 (PR #297), Phase 2 (PR #298)
**Prerequisite infrastructure:** Job queue, runner system, and SSE broadcasting are fully implemented (see [WORKERS.md](WORKERS.md), migration `015_jobs_runners.sql`, `cmd/silorunner/`).
---
## 1. Overview
The solver service extends Silo's job queue system with assembly constraint solving capabilities. It enables server-side solving of assemblies stored in Silo, with results streamed back to clients in real time via SSE.
This specification describes how the existing KCSolve client-side API (C++ library + pybind11 `kcsolve` module) integrates with Silo's worker infrastructure to provide headless, asynchronous constraint solving.
### 1.1 Goals
1. **Offload solving** -- Move heavy solve operations off the user's machine to server workers.
2. **Batch validation** -- Automatically validate assemblies on commit (e.g. check for over-constrained systems).
3. **Solver selection** -- Allow the server to run different solvers than the client (e.g. a more thorough solver for validation, a fast one for interactive editing).
4. **Standalone execution** -- Solver workers can run without a full FreeCAD installation, using just the `kcsolve` Python module and the `.kc` file.
### 1.2 Non-Goals
- **Interactive drag** -- Real-time drag solving stays client-side (latency-sensitive).
- **Geometry processing** -- Workers don't compute geometry; they receive pre-extracted constraint graphs.
- **Solver development** -- Writing new solver backends is out of scope; this spec covers the transport and execution layer.
---
## 2. Architecture
```
┌──────────────────────────┐
│      Kindred Create      │
│     (FreeCAD client)     │
└────────────┬─────────────┘
             │ 1. POST /api/solver/jobs
             │    (SolveContext JSON)
             │ 4. GET /api/events (SSE)
             │    job.progress, job.completed
┌────────────┴─────────────┐
│       Silo Server        │
│         (silod)          │
│                          │
│      solver module       │
│   REST + SSE + queue     │
└────────────┬─────────────┘
             │ 2. POST /api/runner/claim
             │ 3. POST /api/runner/jobs/{id}/complete
┌────────────┴─────────────┐
│      Solver Runner       │
│      (silorunner)        │
│                          │
│      kcsolve module      │
│      OndselAdapter       │
│      Python solvers      │
└──────────────────────────┘
```
### 2.1 Components
| Component | Role | Deployment |
|-----------|------|------------|
| **Silo server** | Job queue management, REST API, SSE broadcast, result storage | Existing `silod` binary (jobs module, migration 015) |
| **Solver runner** | Claims solver jobs, executes `kcsolve`, reports results | Existing `silorunner` binary (`cmd/silorunner/`) with `solver` tag |
| **kcsolve module** | Python/C++ solver library (Phase 1+2) | Installed on runner nodes |
| **Create client** | Submits jobs, receives results via SSE | Existing FreeCAD client |
### 2.2 Module Registration
The solver service is a Silo module with ID `solver`, gated behind the existing module system:
```yaml
# config.yaml
modules:
solver:
enabled: true
```
It depends on the `jobs` module being enabled. All solver endpoints return `404` with `{"error": "module not enabled"}` when disabled.
---
## 3. Data Model
### 3.1 SolveContext JSON Schema
The `SolveContext` is the input to a solve operation. Currently it exists only as a C++ struct and pybind11 binding with no serialization. Phase 3 adds JSON serialization to enable server transport.
```json
{
"api_version": 1,
"parts": [
{
"id": "Part001",
"placement": {
"position": [0.0, 0.0, 0.0],
"quaternion": [1.0, 0.0, 0.0, 0.0]
},
"mass": 1.0,
"grounded": true
},
{
"id": "Part002",
"placement": {
"position": [100.0, 0.0, 0.0],
"quaternion": [1.0, 0.0, 0.0, 0.0]
},
"mass": 1.0,
"grounded": false
}
],
"constraints": [
{
"id": "Joint001",
"part_i": "Part001",
"marker_i": {
"position": [50.0, 0.0, 0.0],
"quaternion": [1.0, 0.0, 0.0, 0.0]
},
"part_j": "Part002",
"marker_j": {
"position": [0.0, 0.0, 0.0],
"quaternion": [1.0, 0.0, 0.0, 0.0]
},
"type": "Revolute",
"params": [],
"limits": [],
"activated": true
}
],
"motions": [],
"simulation": null,
"bundle_fixed": false
}
```
**Field reference:** See [KCSolve Python API](../reference/kcsolve-python.md) for full field documentation. The JSON schema maps 1:1 to the Python/C++ types.
**Enum serialization:** Enums serialize as strings matching their Python names (e.g. `"Revolute"`, `"Success"`, `"Redundant"`).
**Transform shorthand:** The `placement` and `marker_*` fields use the `Transform` struct: `position` is `[x, y, z]`, `quaternion` is `[w, x, y, z]`.
**Constraint.Limit:**
```json
{
"kind": "RotationMin",
"value": -1.5708,
"tolerance": 1e-9
}
```
**MotionDef:**
```json
{
"kind": "Rotational",
"joint_id": "Joint001",
"marker_i": "",
"marker_j": "",
"rotation_expr": "2*pi*t",
"translation_expr": ""
}
```
**SimulationParams:**
```json
{
"t_start": 0.0,
"t_end": 2.0,
"h_out": 0.04,
"h_min": 1e-9,
"h_max": 1.0,
"error_tol": 1e-6
}
```
### 3.2 SolveResult JSON Schema
```json
{
"status": "Success",
"placements": [
{
"id": "Part002",
"placement": {
"position": [50.0, 0.0, 0.0],
"quaternion": [0.707, 0.0, 0.707, 0.0]
}
}
],
"dof": 1,
"diagnostics": [
{
"constraint_id": "Joint003",
"kind": "Redundant",
"detail": "6 DOF removed by Joint003 are already constrained"
}
],
"num_frames": 0
}
```
### 3.3 Solver Job Record
Solver jobs are stored in the existing `jobs` table. The solver-specific data is in the `args` and `result` JSONB columns.
**Job args (input):**
```json
{
"solver": "ondsel",
"operation": "solve",
"context": { /* SolveContext JSON */ },
"item_part_number": "ASM-001",
"revision_number": 3
}
```
**Operation types:**
| Operation | Description | Requires simulation? |
|-----------|-------------|---------------------|
| `solve` | Static equilibrium solve | No |
| `diagnose` | Constraint analysis only (no placement update) | No |
| `kinematic` | Time-domain kinematic simulation | Yes |
**Job result (output):**
```json
{
"result": { /* SolveResult JSON */ },
"solver_name": "OndselSolver (Lagrangian)",
"solver_version": "1.0",
"solve_time_ms": 127.4
}
```
---
## 4. REST API
All endpoints are prefixed with `/api/solver/` and gated behind `RequireModule("solver")`.
### 4.1 Submit Solve Job
```
POST /api/solver/jobs
Authorization: Bearer silo_...
Content-Type: application/json
{
"solver": "ondsel",
"operation": "solve",
"context": { /* SolveContext */ },
"priority": 50
}
```
**Optional fields:**
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `solver` | string | `""` (default solver) | Solver name from registry |
| `operation` | string | `"solve"` | `solve`, `diagnose`, or `kinematic` |
| `context` | object | required | SolveContext JSON |
| `priority` | int | `50` | Lower = higher priority |
| `item_part_number` | string | `null` | Silo item reference (for result association) |
| `revision_number` | int | `null` | Revision that generated this context |
| `callback_url` | string | `null` | Webhook URL for completion notification |
**Response `201 Created`:**
```json
{
"job_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"created_at": "2026-02-19T18:30:00Z"
}
```
**Error responses:**
| Code | Condition |
|------|-----------|
| `400` | Invalid SolveContext (missing required fields, unknown enum values) |
| `401` | Not authenticated |
| `404` | Module not enabled |
| `422` | Unknown solver name, invalid operation |
### 4.2 Get Job Status
```
GET /api/solver/jobs/{jobID}
```
**Response `200 OK`:**
```json
{
"job_id": "550e8400-...",
"status": "completed",
"operation": "solve",
"solver": "ondsel",
"priority": 50,
"item_part_number": "ASM-001",
"revision_number": 3,
"runner_id": "runner-01",
"runner_name": "solver-worker-01",
"created_at": "2026-02-19T18:30:00Z",
"claimed_at": "2026-02-19T18:30:01Z",
"completed_at": "2026-02-19T18:30:02Z",
"result": {
"result": { /* SolveResult */ },
"solver_name": "OndselSolver (Lagrangian)",
"solve_time_ms": 127.4
}
}
```
### 4.3 List Solver Jobs
```
GET /api/solver/jobs?status=completed&item=ASM-001&limit=20&offset=0
```
**Query parameters:**
| Param | Type | Description |
|-------|------|-------------|
| `status` | string | Filter by status: `pending`, `claimed`, `running`, `completed`, `failed` |
| `item` | string | Filter by item part number |
| `operation` | string | Filter by operation type |
| `solver` | string | Filter by solver name |
| `limit` | int | Page size (default 20, max 100) |
| `offset` | int | Pagination offset |
**Response `200 OK`:**
```json
{
  "jobs": [ /* array of job objects */ ],
  "total": 42,
  "limit": 20,
  "offset": 0
}
```
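Because `limit` is capped at 100, enumerating all jobs means walking `offset` until `total` is exhausted. A sketch of that loop, with `fetch_page` standing in for a call to the list endpoint (the helper is illustrative, not part of any shipped client):

```python
def iter_jobs(fetch_page, page_size=100):
    """Yield every job, advancing offset until total is exhausted.

    fetch_page(limit=..., offset=...) must return the {"jobs", "total",
    "limit", "offset"} envelope documented above.
    """
    offset = 0
    while True:
        page = fetch_page(limit=page_size, offset=offset)
        yield from page["jobs"]
        offset += len(page["jobs"])
        # Stop on an empty page or once we have seen everything.
        if not page["jobs"] or offset >= page["total"]:
            break
```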
### 4.4 Cancel Job
```
POST /api/solver/jobs/{jobID}/cancel
```
Only `pending` and `claimed` jobs can be cancelled. Running jobs must complete or time out.
**Response `200 OK`:**
```json
{
  "job_id": "550e8400-...",
  "status": "cancelled"
}
```
### 4.5 Get Solver Registry
```
GET /api/solver/solvers
```
Returns available solvers on registered runners. Runners report their solver capabilities during heartbeat.
**Response `200 OK`:**
```json
{
  "solvers": [
    {
      "name": "ondsel",
      "display_name": "OndselSolver (Lagrangian)",
      "deterministic": true,
      "supported_joints": [
        "Coincident", "Fixed", "Revolute", "Cylindrical",
        "Slider", "Ball", "Screw", "Gear", "RackPinion",
        "Parallel", "Perpendicular", "Angle", "Planar",
        "Concentric", "PointOnLine", "PointInPlane",
        "LineInPlane", "Tangent", "DistancePointPoint",
        "DistanceCylSph", "Universal"
      ],
      "runner_count": 2
    }
  ],
  "default_solver": "ondsel"
}
```
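A client can use this response to choose a backend before submitting: pick a solver that has at least one online runner and supports every joint type the assembly uses. A sketch of that selection (the fallback-to-default behavior is an assumption, not something the server mandates):

```python
def pick_solver(registry, required_joints):
    """Return the name of a solver that is online and supports all joints.

    registry is the /api/solver/solvers response body; falls back to the
    server's default_solver when no registered solver matches.
    """
    needed = set(required_joints)
    for solver in registry["solvers"]:
        if solver["runner_count"] > 0 and needed <= set(solver["supported_joints"]):
            return solver["name"]
    return registry["default_solver"]
```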
---
## 5. Server-Sent Events
Solver jobs emit events on the existing `/api/events` SSE stream.
### 5.1 Event Types
Solver jobs use the existing `job.*` SSE event prefix (see [WORKERS.md](WORKERS.md)). Clients filter on `definition_name` to identify solver-specific events.
| Event | Payload | When |
|-------|---------|------|
| `job.created` | `{job_id, definition_name, trigger, item_id}` | Job submitted |
| `job.claimed` | `{job_id, runner_id, runner}` | Runner claims work |
| `job.progress` | `{job_id, progress, message}` | Progress update (0-100) |
| `job.completed` | `{job_id, runner_id}` | Job succeeded |
| `job.failed` | `{job_id, runner_id, error}` | Job failed |
### 5.2 Example Stream
```
event: job.created
data: {"job_id":"abc-123","definition_name":"assembly-solve","trigger":"manual","item_id":"uuid-..."}

event: job.claimed
data: {"job_id":"abc-123","runner_id":"r1","runner":"solver-worker-01"}

event: job.progress
data: {"job_id":"abc-123","progress":50,"message":"Building constraint system..."}

event: job.completed
data: {"job_id":"abc-123","runner_id":"r1"}
```
### 5.3 Client Integration
The Create client subscribes to the SSE stream and updates the Assembly workbench UI:
1. **Silo viewport widget** shows job status indicator (pending/running/done/failed)
2. On `job.completed` for a job whose `job.created` event carried a `definition_name` starting with `assembly-`, the client fetches the full result via `GET /api/jobs/{id}` and applies placements
3. On `job.failed`, the client shows the error in the report panel
4. Diagnostic results (redundant/conflicting constraints) surface in the constraint tree
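The filtering in step 2 can be sketched as a small parser over the raw SSE stream. Since the `job.completed` payload carries only `job_id` and `runner_id`, the client remembers which job IDs came from `assembly-*` definitions at `job.created` time (both helpers are illustrative, not part of the Create client):

```python
import json


def parse_sse(lines):
    """Yield (event, payload) pairs from raw SSE lines.

    Events are delimited by a blank line; data lines are decoded as JSON.
    """
    event, data = None, []
    for raw in lines:
        line = raw.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            yield event, json.loads("\n".join(data))
            event, data = None, []


def assembly_completions(events):
    """Yield job_ids of completed assembly-* jobs.

    definition_name only appears on job.created, so completions are
    matched by job_id against the set of watched assembly jobs.
    """
    watched = set()
    for event, payload in events:
        if event == "job.created" and payload.get("definition_name", "").startswith("assembly-"):
            watched.add(payload["job_id"])
        elif event == "job.completed" and payload["job_id"] in watched:
            yield payload["job_id"]
```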
---
## 6. Runner Integration
### 6.1 Runner Requirements
Solver runners are standard `silorunner` instances (see `cmd/silorunner/main.go`) registered with the `solver` tag. The existing runner binary already handles the full job lifecycle (claim, start, progress, complete/fail, log, DAG sync). Solver support requires adding `solver-run`, `solver-diagnose`, and `solver-kinematic` to the runner's command dispatch (currently handles `create-validate`, `create-export`, `create-dag-extract`, `create-thumbnail`).
Additional requirements on the runner host:
- Python 3.11+ with `kcsolve` module installed
- `libKCSolve.so` and solver backend libraries (e.g. `libOndselSolver.so`)
- Network access to the Silo server
No FreeCAD installation is required. The runner operates on pre-extracted `SolveContext` JSON.
### 6.2 Runner Registration
```bash
# Register a solver runner (admin)
curl -X POST https://silo.example.com/api/runners \
  -H "Authorization: Bearer admin_token" \
  -d '{"name":"solver-01","tags":["solver"]}'

# Response includes one-time token
{"id":"uuid","token":"silo_runner_xyz..."}
```
### 6.3 Runner Heartbeat and Capabilities
The existing heartbeat endpoint (`POST /api/runner/heartbeat`) takes no body — it updates `last_heartbeat` on every authenticated request via the `RequireRunnerAuth` middleware. Runners that go 90 seconds without a request are marked offline by the background sweeper.
Solver capabilities are reported via the runner's `metadata` JSONB field, set at registration time:
```bash
curl -X POST https://silo.example.com/api/runners \
  -H "Authorization: Bearer admin_token" \
  -d '{
    "name": "solver-01",
    "tags": ["solver"],
    "metadata": {
      "solvers": ["ondsel"],
      "api_version": 1,
      "python_version": "3.11.11"
    }
  }'
```
> **Future enhancement:** The heartbeat endpoint could be extended to accept an optional body for dynamic capability updates, but currently capabilities are static per registration.
### 6.4 Runner Execution Flow
```python
#!/usr/bin/env python3
"""Solver runner entry point."""
import json
import kcsolve


def execute_solve_job(args: dict) -> dict:
    """Execute a solver job from parsed args."""
    solver_name = args.get("solver", "")
    operation = args.get("operation", "solve")
    ctx_dict = args["context"]

    # Deserialize SolveContext from JSON
    ctx = kcsolve.SolveContext.from_dict(ctx_dict)

    # Load solver
    solver = kcsolve.load(solver_name)
    if solver is None:
        raise ValueError(f"Unknown solver: {solver_name!r}")

    # Execute operation
    if operation == "solve":
        result = solver.solve(ctx)
    elif operation == "diagnose":
        diags = solver.diagnose(ctx)
        result = kcsolve.SolveResult()
        result.diagnostics = diags
    elif operation == "kinematic":
        result = solver.run_kinematic(ctx)
    else:
        raise ValueError(f"Unknown operation: {operation!r}")

    # Serialize result
    return {
        "result": result.to_dict(),
        "solver_name": solver.name(),
        "solver_version": "1.0",
    }
```
### 6.5 Standalone Process Mode
For minimal deployments, the runner can invoke a standalone solver process:
```bash
echo '{"solver":"ondsel","operation":"solve","context":{...}}' | \
python3 -m kcsolve.runner
```
The `kcsolve.runner` module reads JSON from stdin, executes the solve, and writes the result JSON to stdout. Exit code 0 = success, non-zero = failure with error JSON on stderr.
---
## 7. Job Definitions
### 7.1 Manual Solve Job
Triggered by the client when the user requests a server-side solve.
> **Note:** The `compute.type` uses `custom` because the valid types in `internal/jobdef/jobdef.go` are: `validate`, `rebuild`, `diff`, `export`, `custom`. Solver commands are dispatched by the runner based on the `command` field.
```yaml
job:
  name: assembly-solve
  version: 1
  description: "Solve assembly constraints on server"
  trigger:
    type: manual
  scope:
    type: assembly
  compute:
    type: custom
    command: solver-run
  runner:
    tags: [solver]
  timeout: 300
  max_retries: 1
  priority: 50
```
### 7.2 Commit-Time Validation
Automatically validates assembly constraints when a new revision is committed:
```yaml
job:
  name: assembly-validate
  version: 1
  description: "Validate assembly constraints on commit"
  trigger:
    type: revision_created
    filter:
      item_type: assembly
  scope:
    type: assembly
  compute:
    type: custom
    command: solver-diagnose
    args:
      operation: diagnose
  runner:
    tags: [solver]
  timeout: 120
  max_retries: 2
  priority: 75
```
### 7.3 Kinematic Simulation
Server-side kinematic simulation for assemblies with motion definitions:
```yaml
job:
  name: assembly-kinematic
  version: 1
  description: "Run kinematic simulation"
  trigger:
    type: manual
  scope:
    type: assembly
  compute:
    type: custom
    command: solver-kinematic
    args:
      operation: kinematic
  runner:
    tags: [solver]
  timeout: 1800
  max_retries: 0
  priority: 100
```
---
## 8. SolveContext Extraction
When a solver job is triggered by a revision commit (rather than a direct context submission), the server or runner must extract a `SolveContext` from the `.kc` file.
### 8.1 Extraction via Headless Create
For full-fidelity extraction that handles geometry classification:
```bash
create --console -e "
import kcsolve_extract
kcsolve_extract.extract_and_solve('input.kc', 'output.json', solver='ondsel')
"
```
This requires a full Create installation on the runner and uses the Assembly module's existing adapter layer to build `SolveContext` from document objects.
### 8.2 Extraction from .kc Silo Directory
For lightweight extraction without FreeCAD, the constraint graph can be stored in the `.kc` archive's `silo/` directory during commit:
```
silo/solver/context.json # Pre-extracted SolveContext
silo/solver/result.json # Last solve result (if any)
```
The client extracts the `SolveContext` locally before committing the `.kc` file. The server reads it from the archive, avoiding the need for geometry processing on the runner.
**Commit-time packing** (client side):
```python
# In the Assembly workbench commit hook:
ctx = assembly_object.build_solve_context()
kc_archive.write("silo/solver/context.json", ctx.to_json())
```
**Runner-side extraction:**
```python
import zipfile, json

with zipfile.ZipFile("assembly.kc") as zf:
    ctx_json = json.loads(zf.read("silo/solver/context.json"))
```
---
## 9. Database Schema
### 9.1 Migration
The solver module uses the existing `jobs` table. One new table is added for result caching:
```sql
-- Migration: 021_solver_results.sql
CREATE TABLE solver_results (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    item_id         UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
    revision_number INTEGER NOT NULL,
    job_id          UUID REFERENCES jobs(id) ON DELETE SET NULL,
    operation       TEXT NOT NULL,   -- 'solve', 'diagnose', 'kinematic'
    solver_name     TEXT NOT NULL,
    status          TEXT NOT NULL,   -- SolveStatus string
    dof             INTEGER,
    diagnostics     JSONB DEFAULT '[]',
    placements      JSONB DEFAULT '[]',
    num_frames      INTEGER DEFAULT 0,
    solve_time_ms   DOUBLE PRECISION,
    created_at      TIMESTAMPTZ NOT NULL DEFAULT now(),
    UNIQUE(item_id, revision_number, operation)
);

CREATE INDEX idx_solver_results_item ON solver_results(item_id);
CREATE INDEX idx_solver_results_status ON solver_results(status);
```
The `UNIQUE(item_id, revision_number, operation)` constraint means each revision has at most one result per operation type. Re-running overwrites the previous result.
### 9.2 Result Association
When a solver job completes, the server:
1. Stores the full result in the `jobs.result` JSONB column (standard job result)
2. Upserts a row in `solver_results` for quick lookup by item/revision
3. Broadcasts `job.completed` SSE event
---
## 10. Configuration
### 10.1 Server Config
```yaml
# config.yaml
modules:
  solver:
    enabled: true
    default_solver: "ondsel"
    max_context_size_mb: 10        # Reject oversized SolveContext payloads
    default_timeout: 300           # Default job timeout (seconds)
    auto_diagnose_on_commit: true  # Auto-submit diagnose job on revision commit
```
### 10.2 Environment Variables
| Variable | Description |
|----------|-------------|
| `SILO_SOLVER_ENABLED` | Override module enabled state |
| `SILO_SOLVER_DEFAULT` | Default solver name |
### 10.3 Runner Config
```yaml
# runner.yaml
server_url: https://silo.example.com
token: silo_runner_xyz...
tags: [solver]
solver:
  kcsolve_path: /opt/create/lib    # LD_LIBRARY_PATH for kcsolve.so
  python: /opt/create/bin/python3
  max_concurrent: 2                # Parallel job slots per runner
```
---
## 11. Security
### 11.1 Authentication
All solver endpoints use the existing Silo authentication:
- **User endpoints** (`/api/solver/jobs`): Session or API token, requires `viewer` role to read, `editor` role to submit
- **Runner endpoints** (`/api/runner/...`): Runner token authentication (existing)
### 11.2 Input Validation
The server validates SolveContext JSON before queuing:
- Maximum payload size (configurable, default 10 MB)
- Required fields present (`parts`, `constraints`)
- Enum values are valid strings
- Transform arrays have correct length (position: 3, quaternion: 4)
- No duplicate part or constraint IDs
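The checklist above can be sketched as a pure validation pass over the decoded payload. The server implements this in Go; the Python below is illustrative, and the `placement`/`position`/`quaternion` field names are assumptions about the SolveContext shape (enum checks are omitted for brevity):

```python
def validate_solve_context(ctx, raw_size=0, max_bytes=10 * 1024 * 1024):
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    if raw_size > max_bytes:
        errors.append(f"payload is {raw_size} bytes, limit is {max_bytes}")
    # Required fields
    for field in ("parts", "constraints"):
        if field not in ctx:
            errors.append(f"missing required field: {field}")
    # No duplicate part or constraint IDs
    for kind in ("parts", "constraints"):
        seen = set()
        for obj in ctx.get(kind, []):
            oid = obj.get("id")
            if oid in seen:
                errors.append(f"duplicate id: {oid}")
            seen.add(oid)
    # Transform arrays: position has 3 components, quaternion has 4
    for part in ctx.get("parts", []):
        placement = part.get("placement", {})
        if len(placement.get("position", (0, 0, 0))) != 3:
            errors.append(f"part {part.get('id')}: position must have 3 components")
        if len(placement.get("quaternion", (0, 0, 0, 1))) != 4:
            errors.append(f"part {part.get('id')}: quaternion must have 4 components")
    return errors
```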
### 11.3 Runner Isolation
Solver runners execute untrusted constraint data. Mitigations:
- Runners should run in containers or sandboxed environments
- Python solver registration (`kcsolve.register_solver()`) is disabled in runner mode
- Solver execution has a configurable timeout (killed on expiry)
- Result size is bounded (large kinematic simulations are truncated)
---
## 12. Client SDK
### 12.1 Python Client
The existing `silo-client` package is extended with solver methods:
```python
from silo_client import SiloClient
client = SiloClient("https://silo.example.com", token="silo_...")
# Submit a solve job
import kcsolve
ctx = kcsolve.SolveContext()
# ... build context ...
job = client.solver.submit(ctx.to_dict(), solver="ondsel")
print(job.id, job.status) # "pending"
# Poll for completion
result = client.solver.wait(job.id, timeout=60)
print(result.status) # "Success"
# Or use SSE for real-time updates
for event in client.solver.stream(job.id):
    print(event.type, event.data)
# Query results for an item
results = client.solver.results("ASM-001")
```
### 12.2 Create Workbench Integration
The Assembly workbench adds a "Solve on Server" command:
```python
# CommandSolveOnServer.py (sketch)
def activated(self):
    assembly = get_active_assembly()
    ctx = assembly.build_solve_context()

    # Submit to Silo
    from silo_client import get_client
    client = get_client()
    job = client.solver.submit(ctx.to_dict())

    # Subscribe to SSE for updates
    self.watch_job(job.id)

def on_solver_completed(self, job_id, result):
    # Apply placements back to assembly
    assembly = get_active_assembly()
    for pr in result["placements"]:
        assembly.set_part_placement(pr["id"], pr["placement"])
    assembly.recompute()
```
---
## 13. Implementation Plan
### Phase 3a: JSON Serialization
Add `to_dict()` / `from_dict()` methods to all KCSolve types in the pybind11 module.
**Files to modify:**
- `src/Mod/Assembly/Solver/bindings/kcsolve_py.cpp` -- add dict conversion methods
**Verification:** `ctx.to_dict()` round-trips through `SolveContext.from_dict()`.
### Phase 3b: Server Endpoints -- COMPLETE
Add the solver module to the Silo server. This builds on the existing job queue infrastructure (`migration 015_jobs_runners.sql`, `internal/db/jobs.go`, `internal/api/job_handlers.go`, `internal/api/runner_handlers.go`).
**Implemented files:**
- `internal/api/solver_handlers.go` -- REST endpoint handlers (solver-specific convenience layer over existing `/api/jobs`)
- `internal/db/migrations/021_solver_results.sql` -- Database migration for result caching table
- Module registered as `solver` in `internal/modules/modules.go` with `jobs` dependency
### Phase 3c: Runner Support
Add solver command handlers to the existing `silorunner` binary (`cmd/silorunner/main.go`). The runner already implements the full job lifecycle (claim, start, progress, complete/fail). This phase adds `solver-run`, `solver-diagnose`, and `solver-kinematic` to the `executeJob` switch statement.
**Files to modify:**
- `cmd/silorunner/main.go` -- Add solver command dispatch cases
- `src/Mod/Assembly/Solver/bindings/runner.py` -- `kcsolve.runner` Python entry point (invoked by silorunner via subprocess)
### Phase 3d: .kc Context Packing
Pack `SolveContext` into `.kc` archives on commit.
**Files to modify:**
- `mods/silo/freecad/silo_origin.py` -- Hook into commit to pack solver context
### Phase 3e: Client Integration
Add "Solve on Server" command to the Assembly workbench.
**Files to modify:**
- `mods/silo/freecad/` -- Solver client methods
- `src/Mod/Assembly/` -- Server solve command
---
## 14. Open Questions
1. **Context size limits** -- Large assemblies may produce multi-MB SolveContext JSON. Should we compress (gzip) or use a binary format (msgpack)?
2. **Result persistence** -- How long should solver results be retained? Per-revision (overwritten on next commit) or historical (keep all)?
3. **Kinematic frame storage** -- Kinematic simulations can produce thousands of frames. Store all frames in JSONB, or write to a separate file and reference it?
4. **Multi-solver comparison** -- Should the API support running the same context through multiple solvers and comparing results? Useful for Phase 4 (second solver validation).
5. **Webhook notifications** -- The `callback_url` field allows external integrations (e.g. CI). What authentication should the webhook use?
---
## 15. References
- [KCSolve Architecture](../architecture/ondsel-solver.md)
- [KCSolve Python API Reference](../reference/kcsolve-python.md)
- [INTER_SOLVER.md](../../INTER_SOLVER.md) -- Full pluggable solver spec
- [WORKERS.md](WORKERS.md) -- Worker/runner job system
- [SPECIFICATION.md](SPECIFICATION.md) -- Silo server specification
- [MODULES.md](MODULES.md) -- Module system

View File

@@ -1,6 +1,6 @@
# Silo Development Status
**Last Updated:** 2026-03-01
**Last Updated:** 2026-02-08
---
@@ -10,10 +10,10 @@
| Component | Status | Notes |
|-----------|--------|-------|
| PostgreSQL schema | Complete | 23 migrations applied |
| PostgreSQL schema | Complete | 18 migrations applied |
| YAML schema parser | Complete | Supports enum, serial, constant, string segments |
| Part number generator | Complete | Scoped sequences, category-based format |
| API server (`silod`) | Complete | ~140 REST endpoints via chi/v5 |
| API server (`silod`) | Complete | 86 REST endpoints via chi/v5 |
| CLI tool (`silo`) | Complete | Item registration and management |
| Filesystem file storage | Complete | Upload, download, checksums |
| Revision control | Complete | Append-only history, rollback, comparison, status/labels |
@@ -35,11 +35,6 @@
| .kc metadata API | Complete | GET/PUT metadata, lifecycle transitions, tag management |
| .kc dependency API | Complete | List raw deps, resolve UUIDs to part numbers + file availability |
| .kc macro API | Complete | List macros, get source content by filename |
| Approval workflows | Complete | YAML-configurable ECO workflows, multi-stage review gates, digital signatures |
| Solver service | Complete | Server-side assembly constraint solving, result caching, job definitions |
| Workstation registration | Complete | Device identity, heartbeat tracking, per-user workstation management |
| Edit sessions | Complete | Acquire/release locks, hard interference detection, SSE notifications |
| SSE targeted delivery | Complete | Per-item, per-user, per-workstation event filtering |
| Odoo ERP integration | Partial | Config and sync-log CRUD functional; push/pull are stubs |
| Docker Compose | Complete | Dev and production configurations |
| Deployment scripts | Complete | setup-host, deploy, init-db, setup-ipa-nginx |
@@ -57,7 +52,7 @@ FreeCAD workbench and LibreOffice Calc extension are maintained in separate repo
| Inventory API endpoints | Database tables exist, no REST handlers |
| Date segment type | Schema parser placeholder only |
| Part number format validation | API accepts but does not validate format on creation |
| Unit tests | 31 Go test files across api, db, modules, ods, partnum, schema packages |
| Unit tests | 9 Go test files across api, db, ods, partnum, schema packages |
---
@@ -111,8 +106,3 @@ The schema defines 170 category codes across 10 groups:
| 016_dag.sql | Dependency DAG nodes and edges |
| 017_locations.sql | Location hierarchy and inventory tracking |
| 018_kc_metadata.sql | .kc metadata tables (item_metadata, item_dependencies, item_macros, item_approvals, approval_signatures) |
| 019_approval_workflow_name.sql | Approval workflow name column |
| 020_storage_backend_filesystem_default.sql | Storage backend default to filesystem |
| 021_solver_results.sql | Solver result caching table |
| 022_workstations.sql | Workstation registration table |
| 023_edit_sessions.sql | Edit session tracking table with hard interference unique index |

View File

@@ -1,7 +1,7 @@
# Worker System Specification
**Status:** Implemented
**Last Updated:** 2026-03-01
**Status:** Draft
**Last Updated:** 2026-02-13
---

View File

@@ -63,6 +63,7 @@ type Server struct {
workflows map[string]*workflow.Workflow
solverResults *db.SolverResultRepository
workstations *db.WorkstationRepository
editSessions *db.EditSessionRepository
}
// NewServer creates a new API server.
@@ -98,6 +99,7 @@ func NewServer(
itemApprovals := db.NewItemApprovalRepository(database)
solverResults := db.NewSolverResultRepository(database)
workstations := db.NewWorkstationRepository(database)
editSessions := db.NewEditSessionRepository(database)
seqStore := &dbSequenceStore{db: database, schemas: schemas}
partgen := partnum.NewGenerator(schemas, seqStore)
@@ -133,6 +135,7 @@ func NewServer(
workflows: workflows,
solverResults: solverResults,
workstations: workstations,
editSessions: editSessions,
}
}
@@ -860,7 +863,6 @@ type UpdateItemRequest struct {
}
// HandleUpdateItem updates an item's fields and/or creates a new revision.
// Any change to item metadata or properties triggers a new revision for audit trail.
func (s *Server) HandleUpdateItem(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
@@ -901,30 +903,6 @@ func (s *Server) HandleUpdateItem(w http.ResponseWriter, r *http.Request) {
fields.Description = req.Description
}
// Detect which metadata fields actually changed
var metadataChanges []string
if req.PartNumber != "" && req.PartNumber != item.PartNumber {
metadataChanges = append(metadataChanges, "part_number")
}
if req.ItemType != "" && req.ItemType != item.ItemType {
metadataChanges = append(metadataChanges, "item_type")
}
if req.Description != "" && req.Description != item.Description {
metadataChanges = append(metadataChanges, "description")
}
if req.SourcingType != nil && *req.SourcingType != item.SourcingType {
metadataChanges = append(metadataChanges, "sourcing_type")
}
if req.LongDescription != nil {
oldLD := ""
if item.LongDescription != nil {
oldLD = *item.LongDescription
}
if *req.LongDescription != oldLD {
metadataChanges = append(metadataChanges, "long_description")
}
}
// Update the item record (UUID stays the same)
if user := auth.UserFromContext(ctx); user != nil {
fields.UpdatedBy = &user.Username
@@ -935,38 +913,12 @@ func (s *Server) HandleUpdateItem(w http.ResponseWriter, r *http.Request) {
return
}
// Create a revision if anything changed (metadata or properties)
metadataChanged := len(metadataChanges) > 0
propertiesChanged := req.Properties != nil
if metadataChanged || propertiesChanged {
// Determine properties for the new revision
props := req.Properties
if props == nil {
// Carry forward properties from the latest revision
latestRev, err := s.items.GetLatestRevision(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get latest revision")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get latest revision")
return
}
if latestRev != nil {
props = latestRev.Properties
} else {
props = make(map[string]any)
}
}
// Auto-generate comment if not provided and only metadata changed
comment := req.Comment
if comment == "" && metadataChanged {
comment = "updated " + strings.Join(metadataChanges, ", ")
}
// Create new revision if properties provided
if req.Properties != nil {
rev := &db.Revision{
ItemID: item.ID,
Properties: props,
Comment: &comment,
Properties: req.Properties,
Comment: &req.Comment,
}
if user := auth.UserFromContext(ctx); user != nil {
rev.CreatedBy = &user.Username
@@ -977,12 +929,6 @@ func (s *Server) HandleUpdateItem(w http.ResponseWriter, r *http.Request) {
writeError(w, http.StatusInternalServerError, "revision_failed", err.Error())
return
}
s.broker.Publish("revision.created", mustMarshal(map[string]any{
"part_number": fields.PartNumber,
"revision_number": rev.RevisionNumber,
}))
go s.triggerJobs(context.Background(), "revision_created", item.ID, item)
}
// Get updated item (use new part number if changed)
@@ -1058,11 +1004,10 @@ type RevisionDiffResponse struct {
Changed map[string]db.PropertyChange `json:"changed,omitempty"`
}
// UpdateRevisionRequest represents a request to update revision status/labels/comment.
// UpdateRevisionRequest represents a request to update revision status/labels.
type UpdateRevisionRequest struct {
Status *string `json:"status,omitempty"`
Labels []string `json:"labels,omitempty"`
Comment *string `json:"comment,omitempty"`
Status *string `json:"status,omitempty"`
Labels []string `json:"labels,omitempty"`
}
// RollbackRequest represents a request to rollback to a previous revision.
@@ -1165,12 +1110,12 @@ func (s *Server) HandleUpdateRevision(w http.ResponseWriter, r *http.Request) {
}
// Validate that at least one field is being updated
if req.Status == nil && req.Labels == nil && req.Comment == nil {
writeError(w, http.StatusBadRequest, "invalid_request", "Must provide status, labels, or comment to update")
if req.Status == nil && req.Labels == nil {
writeError(w, http.StatusBadRequest, "invalid_request", "Must provide status or labels to update")
return
}
err = s.items.UpdateRevisionStatus(ctx, item.ID, revNum, req.Status, req.Labels, req.Comment)
err = s.items.UpdateRevisionStatus(ctx, item.ID, revNum, req.Status, req.Labels)
if err != nil {
if err.Error() == "revision not found" {
writeError(w, http.StatusNotFound, "not_found", "Revision not found")

View File

@@ -150,31 +150,6 @@ func TestHandleUpdateRevision(t *testing.T) {
}
}
func TestHandleUpdateRevisionComment(t *testing.T) {
s := newTestServer(t)
router := newRevisionRouter(s)
createItemDirect(t, s, "REVCMT-001", "update comment", nil)
body := `{"comment":"updated comment"}`
req := authRequest(httptest.NewRequest("PATCH", "/api/items/REVCMT-001/revisions/1", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var rev RevisionResponse
if err := json.Unmarshal(w.Body.Bytes(), &rev); err != nil {
t.Fatalf("decoding response: %v", err)
}
if rev.Comment == nil || *rev.Comment != "updated comment" {
t.Errorf("comment: got %v, want %q", rev.Comment, "updated comment")
}
}
func TestHandleCompareRevisions(t *testing.T) {
s := newTestServer(t)
router := newRevisionRouter(s)

View File

@@ -79,6 +79,12 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Delete("/{id}", server.HandleDeleteWorkstation)
})
// Edit sessions — current user's active sessions (gated by sessions module)
r.Route("/edit-sessions", func(r chi.Router) {
r.Use(server.RequireModule("sessions"))
r.Get("/", server.HandleListUserEditSessions)
})
// Auth endpoints
r.Get("/auth/me", server.HandleGetCurrentUser)
r.Route("/auth/tokens", func(r chi.Router) {
@@ -206,6 +212,19 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
})
})
// Edit sessions (gated by sessions module)
r.Route("/edit-sessions", func(r chi.Router) {
r.Use(server.RequireModule("sessions"))
r.Get("/", server.HandleListItemEditSessions)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Post("/", server.HandleAcquireEditSession)
r.Delete("/{sessionID}", server.HandleReleaseEditSession)
})
})
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))

View File

@@ -0,0 +1,293 @@
package api
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/jackc/pgx/v5/pgconn"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
)
var validContextLevels = map[string]bool{
"sketch": true,
"partdesign": true,
"assembly": true,
}
type editSessionResponse struct {
ID string `json:"id"`
ItemID string `json:"item_id"`
PartNumber string `json:"part_number,omitempty"`
UserID string `json:"user_id"`
WorkstationID string `json:"workstation_id"`
ContextLevel string `json:"context_level"`
ObjectID *string `json:"object_id"`
DependCone []string `json:"dependency_cone"`
AcquiredAt string `json:"acquired_at"`
LastHeartbeat string `json:"last_heartbeat"`
}
func sessionToResponse(s *db.EditSession, partNumber string) editSessionResponse {
cone := s.DependencyCone
if cone == nil {
cone = []string{}
}
return editSessionResponse{
ID: s.ID,
ItemID: s.ItemID,
PartNumber: partNumber,
UserID: s.UserID,
WorkstationID: s.WorkstationID,
ContextLevel: s.ContextLevel,
ObjectID: s.ObjectID,
DependCone: cone,
AcquiredAt: s.AcquiredAt.UTC().Format("2006-01-02T15:04:05Z"),
LastHeartbeat: s.LastHeartbeat.UTC().Format("2006-01-02T15:04:05Z"),
}
}
// HandleAcquireEditSession acquires an edit session on an item.
// POST /api/items/{partNumber}/edit-sessions
func (s *Server) HandleAcquireEditSession(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
user := auth.UserFromContext(ctx)
if user == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Authentication required")
return
}
var req struct {
WorkstationID string `json:"workstation_id"`
ContextLevel string `json:"context_level"`
ObjectID *string `json:"object_id"`
DependencyCone []string `json:"dependency_cone"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_json", err.Error())
return
}
if req.WorkstationID == "" {
writeError(w, http.StatusBadRequest, "validation_error", "workstation_id is required")
return
}
if !validContextLevels[req.ContextLevel] {
writeError(w, http.StatusBadRequest, "validation_error", "context_level must be sketch, partdesign, or assembly")
return
}
// If no dependency cone provided and DAG module is enabled, attempt to compute it.
depCone := req.DependencyCone
if len(depCone) == 0 && req.ObjectID != nil && s.modules.IsEnabled(modules.DAG) {
node, nodeErr := s.dag.GetNodeByKey(ctx, item.ID, item.CurrentRevision, *req.ObjectID)
if nodeErr == nil && node != nil {
coneNodes, coneErr := s.dag.GetForwardCone(ctx, node.ID)
if coneErr == nil {
depCone = make([]string, len(coneNodes))
for i, n := range coneNodes {
depCone[i] = n.NodeKey
}
}
}
}
session := &db.EditSession{
ItemID: item.ID,
UserID: user.ID,
WorkstationID: req.WorkstationID,
ContextLevel: req.ContextLevel,
ObjectID: req.ObjectID,
DependencyCone: depCone,
}
if err := s.editSessions.Acquire(ctx, session); err != nil {
// Check for unique constraint violation (hard interference).
var pgErr *pgconn.PgError
if errors.As(err, &pgErr) && pgErr.Code == "23505" {
s.writeConflictResponse(w, r, item.ID, req.ContextLevel, req.ObjectID)
return
}
s.logger.Error().Err(err).Msg("failed to acquire edit session")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to acquire edit session")
return
}
s.broker.PublishToItem(item.ID, "edit.session_acquired", mustMarshal(map[string]any{
"session_id": session.ID,
"item_id": item.ID,
"part_number": partNumber,
"user": user.Username,
"workstation": req.WorkstationID,
"context_level": session.ContextLevel,
"object_id": session.ObjectID,
}))
writeJSON(w, http.StatusOK, sessionToResponse(session, partNumber))
}
// writeConflictResponse builds a 409 response with holder info.
func (s *Server) writeConflictResponse(w http.ResponseWriter, r *http.Request, itemID, contextLevel string, objectID *string) {
	ctx := r.Context()
	conflict, err := s.editSessions.GetConflict(ctx, itemID, contextLevel, objectID)
	if err != nil || conflict == nil {
		writeError(w, http.StatusConflict, "hard_interference", "Another user is editing this object")
		return
	}

	// Look up the holder's username and workstation name; fall back to the
	// raw IDs if the lookups fail.
	holderUser := "unknown"
	if u, err := s.auth.GetUserByID(ctx, conflict.UserID); err == nil && u != nil {
		holderUser = u.Username
	}
	holderWS := conflict.WorkstationID
	if ws, err := s.workstations.GetByID(ctx, conflict.WorkstationID); err == nil && ws != nil {
		holderWS = ws.Name
	}

	objDesc := contextLevel
	if objectID != nil {
		objDesc = *objectID
	}
	writeJSON(w, http.StatusConflict, map[string]any{
		"error": "hard_interference",
		"holder": map[string]any{
			"user":          holderUser,
			"workstation":   holderWS,
			"context_level": conflict.ContextLevel,
			"object_id":     conflict.ObjectID,
			"acquired_at":   conflict.AcquiredAt.UTC().Format(time.RFC3339),
		},
		"message": fmt.Sprintf("%s is currently editing %s", holderUser, objDesc),
	})
}
// HandleReleaseEditSession releases an edit session.
// DELETE /api/items/{partNumber}/edit-sessions/{sessionID}
func (s *Server) HandleReleaseEditSession(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")
	sessionID := chi.URLParam(r, "sessionID")

	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get item")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
		return
	}
	if item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
		return
	}

	user := auth.UserFromContext(ctx)
	if user == nil {
		writeError(w, http.StatusUnauthorized, "unauthorized", "Authentication required")
		return
	}

	session, err := s.editSessions.GetByID(ctx, sessionID)
	if err != nil {
		s.logger.Error().Err(err).Str("session_id", sessionID).Msg("failed to get edit session")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get edit session")
		return
	}
	if session == nil {
		writeError(w, http.StatusNotFound, "not_found", "Edit session not found")
		return
	}
	if session.UserID != user.ID && user.Role != auth.RoleAdmin {
		writeError(w, http.StatusForbidden, "forbidden", "You can only release your own edit sessions")
		return
	}

	if err := s.editSessions.Release(ctx, sessionID); err != nil {
		s.logger.Error().Err(err).Str("session_id", sessionID).Msg("failed to release edit session")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to release edit session")
		return
	}

	s.broker.PublishToItem(item.ID, "edit.session_released", mustMarshal(map[string]any{
		"session_id":    session.ID,
		"item_id":       item.ID,
		"part_number":   partNumber,
		"user":          user.Username,
		"context_level": session.ContextLevel,
		"object_id":     session.ObjectID,
	}))

	w.WriteHeader(http.StatusNoContent)
}
// HandleListItemEditSessions lists active edit sessions for an item.
// GET /api/items/{partNumber}/edit-sessions
func (s *Server) HandleListItemEditSessions(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")

	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get item")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
		return
	}
	if item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
		return
	}

	sessions, err := s.editSessions.ListForItem(ctx, item.ID)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to list edit sessions")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list edit sessions")
		return
	}

	out := make([]editSessionResponse, len(sessions))
	for i, sess := range sessions {
		out[i] = sessionToResponse(sess, partNumber)
	}
	writeJSON(w, http.StatusOK, out)
}

// HandleListUserEditSessions lists active edit sessions for the current user.
// GET /api/edit-sessions
func (s *Server) HandleListUserEditSessions(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	user := auth.UserFromContext(ctx)
	if user == nil {
		writeError(w, http.StatusUnauthorized, "unauthorized", "Authentication required")
		return
	}

	sessions, err := s.editSessions.ListForUser(ctx, user.ID)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to list edit sessions")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list edit sessions")
		return
	}

	out := make([]editSessionResponse, len(sessions))
	for i, sess := range sessions {
		out[i] = sessionToResponse(sess, "")
	}
	writeJSON(w, http.StatusOK, out)
}

View File

@@ -0,0 +1,222 @@
package db

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
)

// EditSession represents an active editing context.
type EditSession struct {
	ID             string
	ItemID         string
	UserID         string
	WorkstationID  string
	ContextLevel   string
	ObjectID       *string
	DependencyCone []string
	AcquiredAt     time.Time
	LastHeartbeat  time.Time
}

// EditSessionRepository provides edit session database operations.
type EditSessionRepository struct {
	db *DB
}

// NewEditSessionRepository creates a new edit session repository.
func NewEditSessionRepository(db *DB) *EditSessionRepository {
	return &EditSessionRepository{db: db}
}

// Acquire inserts a new edit session. Returns a unique constraint error
// if another session already holds the same (item_id, context_level, object_id).
func (r *EditSessionRepository) Acquire(ctx context.Context, s *EditSession) error {
	return r.db.pool.QueryRow(ctx, `
		INSERT INTO edit_sessions (item_id, user_id, workstation_id, context_level, object_id, dependency_cone)
		VALUES ($1, $2, $3, $4, $5, $6)
		RETURNING id, acquired_at, last_heartbeat
	`, s.ItemID, s.UserID, s.WorkstationID, s.ContextLevel, s.ObjectID, s.DependencyCone).
		Scan(&s.ID, &s.AcquiredAt, &s.LastHeartbeat)
}
// Release deletes an edit session by ID.
func (r *EditSessionRepository) Release(ctx context.Context, id string) error {
	_, err := r.db.pool.Exec(ctx, `DELETE FROM edit_sessions WHERE id = $1`, id)
	return err
}

// ReleaseForWorkstation deletes all sessions for a workstation, returning
// the released sessions so callers can publish SSE notifications.
func (r *EditSessionRepository) ReleaseForWorkstation(ctx context.Context, workstationID string) ([]EditSession, error) {
	rows, err := r.db.pool.Query(ctx, `
		DELETE FROM edit_sessions
		WHERE workstation_id = $1
		RETURNING id, item_id, user_id, workstation_id, context_level, object_id, dependency_cone, acquired_at, last_heartbeat
	`, workstationID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var sessions []EditSession
	for rows.Next() {
		var s EditSession
		if err := rows.Scan(&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
			&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
			&s.AcquiredAt, &s.LastHeartbeat); err != nil {
			return nil, err
		}
		sessions = append(sessions, s)
	}
	return sessions, rows.Err()
}

// GetByID returns an edit session by its ID.
func (r *EditSessionRepository) GetByID(ctx context.Context, id string) (*EditSession, error) {
	s := &EditSession{}
	err := r.db.pool.QueryRow(ctx, `
		SELECT id, item_id, user_id, workstation_id, context_level, object_id,
		       dependency_cone, acquired_at, last_heartbeat
		FROM edit_sessions
		WHERE id = $1
	`, id).Scan(&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
		&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
		&s.AcquiredAt, &s.LastHeartbeat)
	if err == pgx.ErrNoRows {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return s, nil
}
// ListForItem returns all active edit sessions for an item.
func (r *EditSessionRepository) ListForItem(ctx context.Context, itemID string) ([]*EditSession, error) {
	rows, err := r.db.pool.Query(ctx, `
		SELECT id, item_id, user_id, workstation_id, context_level, object_id,
		       dependency_cone, acquired_at, last_heartbeat
		FROM edit_sessions
		WHERE item_id = $1
		ORDER BY acquired_at
	`, itemID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var sessions []*EditSession
	for rows.Next() {
		s := &EditSession{}
		if err := rows.Scan(&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
			&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
			&s.AcquiredAt, &s.LastHeartbeat); err != nil {
			return nil, err
		}
		sessions = append(sessions, s)
	}
	return sessions, rows.Err()
}

// ListForUser returns all active edit sessions for a user.
func (r *EditSessionRepository) ListForUser(ctx context.Context, userID string) ([]*EditSession, error) {
	rows, err := r.db.pool.Query(ctx, `
		SELECT id, item_id, user_id, workstation_id, context_level, object_id,
		       dependency_cone, acquired_at, last_heartbeat
		FROM edit_sessions
		WHERE user_id = $1
		ORDER BY acquired_at
	`, userID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var sessions []*EditSession
	for rows.Next() {
		s := &EditSession{}
		if err := rows.Scan(&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
			&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
			&s.AcquiredAt, &s.LastHeartbeat); err != nil {
			return nil, err
		}
		sessions = append(sessions, s)
	}
	return sessions, rows.Err()
}
// TouchHeartbeat updates last_heartbeat for all sessions of a workstation.
func (r *EditSessionRepository) TouchHeartbeat(ctx context.Context, workstationID string) error {
	_, err := r.db.pool.Exec(ctx, `
		UPDATE edit_sessions SET last_heartbeat = now() WHERE workstation_id = $1
	`, workstationID)
	return err
}

// ExpireStale deletes sessions whose last_heartbeat is older than the given
// timeout, returning the expired sessions for SSE notification. The Go
// duration string (e.g. "5m0s") is accepted by Postgres interval input.
func (r *EditSessionRepository) ExpireStale(ctx context.Context, timeout time.Duration) ([]EditSession, error) {
	rows, err := r.db.pool.Query(ctx, `
		DELETE FROM edit_sessions
		WHERE last_heartbeat < now() - $1::interval
		RETURNING id, item_id, user_id, workstation_id, context_level, object_id, dependency_cone, acquired_at, last_heartbeat
	`, timeout.String())
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var sessions []EditSession
	for rows.Next() {
		var s EditSession
		if err := rows.Scan(&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
			&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
			&s.AcquiredAt, &s.LastHeartbeat); err != nil {
			return nil, err
		}
		sessions = append(sessions, s)
	}
	return sessions, rows.Err()
}
// GetConflict returns the existing session holding a given (item, context_level, object_id)
// slot, for building 409 conflict responses. IS NOT DISTINCT FROM lets one
// query handle both a concrete object_id and the NULL whole-context case.
func (r *EditSessionRepository) GetConflict(ctx context.Context, itemID, contextLevel string, objectID *string) (*EditSession, error) {
	s := &EditSession{}
	err := r.db.pool.QueryRow(ctx, `
		SELECT id, item_id, user_id, workstation_id, context_level, object_id,
		       dependency_cone, acquired_at, last_heartbeat
		FROM edit_sessions
		WHERE item_id = $1 AND context_level = $2 AND object_id IS NOT DISTINCT FROM $3
	`, itemID, contextLevel, objectID).Scan(
		&s.ID, &s.ItemID, &s.UserID, &s.WorkstationID,
		&s.ContextLevel, &s.ObjectID, &s.DependencyCone,
		&s.AcquiredAt, &s.LastHeartbeat)
	if err == pgx.ErrNoRows {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return s, nil
}

View File

@@ -329,38 +329,6 @@ func (r *ItemRepository) CreateRevision(ctx context.Context, rev *Revision) erro
 	return nil
 }
 
-// GetLatestRevision retrieves the most recent revision for an item.
-func (r *ItemRepository) GetLatestRevision(ctx context.Context, itemID string) (*Revision, error) {
-	rev := &Revision{}
-	var propsJSON []byte
-	err := r.db.pool.QueryRow(ctx, `
-		SELECT id, item_id, revision_number, properties, file_key, file_version,
-		       file_checksum, file_size, COALESCE(file_storage_backend, 'filesystem'),
-		       thumbnail_key, created_at, created_by, comment,
-		       COALESCE(status, 'draft') as status, COALESCE(labels, ARRAY[]::TEXT[]) as labels
-		FROM revisions
-		WHERE item_id = $1
-		ORDER BY revision_number DESC
-		LIMIT 1
-	`, itemID).Scan(
-		&rev.ID, &rev.ItemID, &rev.RevisionNumber, &propsJSON, &rev.FileKey, &rev.FileVersion,
-		&rev.FileChecksum, &rev.FileSize, &rev.FileStorageBackend,
-		&rev.ThumbnailKey, &rev.CreatedAt, &rev.CreatedBy, &rev.Comment,
-		&rev.Status, &rev.Labels,
-	)
-	if err == pgx.ErrNoRows {
-		return nil, nil
-	}
-	if err != nil {
-		return nil, fmt.Errorf("querying latest revision: %w", err)
-	}
-	if err := json.Unmarshal(propsJSON, &rev.Properties); err != nil {
-		return nil, fmt.Errorf("unmarshaling properties: %w", err)
-	}
-	return rev, nil
-}
-
 // GetRevisions retrieves all revisions for an item.
 func (r *ItemRepository) GetRevisions(ctx context.Context, itemID string) ([]*Revision, error) {
 	// Check if status column exists (migration 007 applied)
@@ -491,8 +459,8 @@ func (r *ItemRepository) GetRevision(ctx context.Context, itemID string, revisio
 }
 
 // UpdateRevisionStatus updates the status and/or labels of a revision.
-func (r *ItemRepository) UpdateRevisionStatus(ctx context.Context, itemID string, revisionNumber int, status *string, labels []string, comment *string) error {
-	if status == nil && labels == nil && comment == nil {
+func (r *ItemRepository) UpdateRevisionStatus(ctx context.Context, itemID string, revisionNumber int, status *string, labels []string) error {
+	if status == nil && labels == nil {
 		return nil // Nothing to update
 	}
@@ -524,12 +492,6 @@ func (r *ItemRepository) UpdateRevisionStatus(ctx context.Context, itemID string
 		argNum++
 	}
 
-	if comment != nil {
-		updates = append(updates, fmt.Sprintf("comment = $%d", argNum))
-		args = append(args, *comment)
-		argNum++
-	}
-
 	query += updates[0]
 	for i := 1; i < len(updates); i++ {
 		query += ", " + updates[i]
View File

@@ -105,7 +105,7 @@ func TestRevisionStatusUpdate(t *testing.T) {
 	}
 
 	status := "released"
-	if err := repo.UpdateRevisionStatus(ctx, item.ID, 1, &status, nil, nil); err != nil {
+	if err := repo.UpdateRevisionStatus(ctx, item.ID, 1, &status, nil); err != nil {
 		t.Fatalf("UpdateRevisionStatus: %v", err)
 	}
@@ -129,7 +129,7 @@ func TestRevisionLabelsUpdate(t *testing.T) {
 	}
 
 	labels := []string{"prototype", "urgent"}
-	if err := repo.UpdateRevisionStatus(ctx, item.ID, 1, nil, labels, nil); err != nil {
+	if err := repo.UpdateRevisionStatus(ctx, item.ID, 1, nil, labels); err != nil {
 		t.Fatalf("UpdateRevisionStatus: %v", err)
 	}

View File

@@ -0,0 +1,17 @@
-- 023_edit_sessions.sql — active editing context tracking

CREATE TABLE edit_sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    item_id UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
    user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    workstation_id UUID NOT NULL REFERENCES workstations(id) ON DELETE CASCADE,
    context_level TEXT NOT NULL CHECK (context_level IN ('sketch', 'partdesign', 'assembly')),
    object_id TEXT,
    dependency_cone TEXT[],
    acquired_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    last_heartbeat TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX idx_edit_sessions_item ON edit_sessions(item_id);
CREATE INDEX idx_edit_sessions_user ON edit_sessions(user_id);

-- NULLs are distinct in a plain unique index, so two sessions with a NULL
-- object_id (a whole-context lock) would not conflict. Index the COALESCEd
-- value so the NULL slot is unique as well.
CREATE UNIQUE INDEX idx_edit_sessions_active ON edit_sessions(item_id, context_level, COALESCE(object_id, ''));
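The unique index is what turns a second INSERT into a 23505 and hence a 409. The slot it protects can be stated as a pure predicate (illustrative helper, not part of the codebase); note that by default Postgres treats NULLs as distinct in unique indexes, so a whole-context lock (NULL object_id) only conflicts with another one under `NULLS NOT DISTINCT` (PostgreSQL 15+) or a COALESCE expression index:

```go
package main

// sameSlot reports whether two edit scopes occupy the same interference
// slot, mirroring the unique index on (item_id, context_level, object_id).
// A nil objectID is the whole-context lock; comparing the COALESCEd values
// makes two nil object IDs collide, matching the hard-interference rule.
func sameSlot(itemA, levelA string, objA *string, itemB, levelB string, objB *string) bool {
	if itemA != itemB || levelA != levelB {
		return false
	}
	deref := func(p *string) string {
		if p == nil {
			return ""
		}
		return *p
	}
	return deref(objA) == deref(objB)
}
```

Under this rule a context-level lock does not collide with an object-level lock in the same context; if that stricter semantics is wanted, it has to be enforced in the acquire path, not by the index.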