68 Commits

Author SHA1 Message Date
f8b8eda973 Merge branch 'main' into feat/kc-dependencies 2026-02-19 00:55:40 +00:00
Forbes
cffcf56085 feat(api): item dependency extraction, indexing, and resolve endpoints
- Add Dependency type to internal/kc and extract silo/dependencies.json
  from .kc files on commit
- Create ItemDependencyRepository with ReplaceForRevision, ListByItem,
  and Resolve (LEFT JOIN against items table)
- Add GET /{partNumber}/dependencies and
  GET /{partNumber}/dependencies/resolve endpoints
- Index dependencies in extractKCMetadata with SSE broadcast
- Pack real dependency data into .kc files on checkout
- Update PackInput.Dependencies from []any to []Dependency

Closes #143
2026-02-18 18:53:40 -06:00
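The Resolve endpoint's LEFT JOIN against the items table can be mirrored in memory: a dependency resolves when its target part number exists, and unmatched rows are kept with a nil item ID. A minimal sketch, with illustrative names (the actual repository types differ):

```go
package main

import "fmt"

// Dependency is a stand-in for one CAD-extracted dependency record.
type Dependency struct {
	PartNumber string // target part number referenced by the assembly
}

// ResolvedDependency mirrors one row of the LEFT JOIN result: ItemID is
// nil when no matching item exists (an unresolved dependency).
type ResolvedDependency struct {
	PartNumber string
	ItemID     *int64
}

// resolve joins dependencies against a part-number -> item-ID index,
// keeping unmatched rows just like a LEFT JOIN would.
func resolve(deps []Dependency, items map[string]int64) []ResolvedDependency {
	out := make([]ResolvedDependency, 0, len(deps))
	for _, d := range deps {
		r := ResolvedDependency{PartNumber: d.PartNumber}
		if id, ok := items[d.PartNumber]; ok {
			r.ItemID = &id
		}
		out = append(out, r)
	}
	return out
}

func main() {
	items := map[string]int64{"PN-100": 1}
	rs := resolve([]Dependency{{"PN-100"}, {"PN-999"}}, items)
	fmt.Println(rs[0].ItemID != nil, rs[1].ItemID == nil) // true true
}
```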
1a34455ad5 Merge pull request 'feat(kc): checkout packing + ETag caching (Phase 2)' (#150) from feat/kc-checkout-packing into main
Reviewed-on: #150
2026-02-18 23:06:17 +00:00
Forbes
c216d64702 feat(kc): checkout packing + ETag caching (Phase 2)
Implements issue #142 — .kc checkout pipeline that repacks silo/ entries
with current DB state before serving downloads.

When a client downloads a .kc file via GET /api/items/{pn}/file/{rev},
the server now:
1. Reads the file from storage into memory
2. Checks for silo/ directory (plain .fcstd files bypass packing)
3. Repacks silo/ entries with current item_metadata + revision history
4. Streams the repacked ZIP to the client

New files:
- internal/kc/pack.go: Pack() replaces silo/ entries in ZIP, preserving
  all non-silo entries (FreeCAD files, thumbnails) with original
  compression and timestamps. HasSiloDir() for lightweight detection.
- internal/api/pack_handlers.go: packKCFile server helper, computeETag,
  canSkipRepack lazy optimization.

ETag caching:
- ETag computed from revision_number + metadata.updated_at
- If-None-Match support returns 304 Not Modified before reading storage
- Cache-Control: private, must-revalidate

Lazy packing optimization:
- Skips repack if revision_hash matches and metadata unchanged since upload

Phase 2 packs: manifest.json, metadata.json, history.json,
dependencies.json (empty []). Approvals, macros, jobs deferred to
Phase 3-5.

Closes #142
2026-02-18 17:01:26 -06:00
28f133411e Merge pull request 'feat(kc): commit extraction pipeline + metadata API (Phase 1)' (#149) from feat/kc-extraction-pipeline into main
Reviewed-on: #149
2026-02-18 22:39:59 +00:00
6528df0461 Merge branch 'main' into feat/kc-extraction-pipeline 2026-02-18 22:39:49 +00:00
Forbes
dd010331c0 feat(kc): commit extraction pipeline + metadata API (Phase 1)
Implements issue #141 — .kc server-side metadata integration Phase 1.

When a .kc file is uploaded, the server extracts silo/manifest.json and
silo/metadata.json from the ZIP archive and indexes them into the
item_metadata table. Plain .fcstd files continue to work unchanged.
Extraction is best-effort: failures are logged but do not block the upload.

New packages:
- internal/kc: ZIP extraction library (Extract, Manifest, Metadata types)
- internal/db: ItemMetadataRepository (Get, Upsert, UpdateFields,
  UpdateLifecycle, SetTags)

New API endpoints under /api/items/{partNumber}:
- GET    /metadata           — read indexed metadata (viewer)
- PUT    /metadata           — merge fields into JSONB (editor)
- PATCH  /metadata/lifecycle — transition lifecycle state (editor)
- PATCH  /metadata/tags      — add/remove tags (editor)

SSE events: metadata.updated, metadata.lifecycle, metadata.tags

Lifecycle transitions (Phase 1): draft→review→released→obsolete,
review→draft (reject).

Closes #141
2026-02-18 16:37:39 -06:00
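The Phase 1 lifecycle graph (draft→review→released→obsolete, plus review→draft on reject) is small enough to encode as a transition table; a sketch of the guard the PATCH /metadata/lifecycle handler presumably applies (the actual validation code is not shown here):

```go
package main

import "fmt"

// allowed encodes the Phase 1 lifecycle graph from the commit message:
// draft -> review -> released -> obsolete, plus review -> draft (reject).
var allowed = map[string][]string{
	"draft":    {"review"},
	"review":   {"released", "draft"},
	"released": {"obsolete"},
}

// canTransition reports whether moving between two lifecycle states is
// permitted. Obsolete has no outgoing edges, so it is terminal.
func canTransition(from, to string) bool {
	for _, next := range allowed[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("draft", "review"))   // true
	fmt.Println(canTransition("review", "draft"))   // true (reject)
	fmt.Println(canTransition("draft", "released")) // false: cannot skip review
}
```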
628cd1d252 Merge pull request 'feat(db): .kc metadata database migration' (#148) from feat/kc-metadata-migration into main
Reviewed-on: #148
2026-02-18 21:05:15 +00:00
Forbes
8d777e83bb feat(db): .kc metadata database migration (#140)
Add migration 018_kc_metadata.sql with all tables needed for .kc
server-side metadata indexing:

- item_metadata: indexed manifest + metadata fields from silo/
  directory (tags, lifecycle_state, fields JSONB, manifest info)
- item_dependencies: CAD-extracted assembly dependencies
  (complements existing relationships table)
- item_approvals + approval_signatures: ECO workflow state
- item_macros: registered macros from silo/macros/

Also adds docs/KC_SERVER.md specification document.

Closes #140
2026-02-18 15:04:03 -06:00
d96ba8d394 Merge pull request 'docs: replace MinIO with filesystem storage throughout' (#139) from update-silo-fs-docs into main
Reviewed-on: #139
2026-02-18 20:46:51 +00:00
Forbes
56c76940ed docs: replace MinIO with filesystem storage throughout
Remove all MinIO/S3 references from documentation and deployment
configs. Silo now uses local filesystem storage exclusively.

Updated files:
- docs/CONFIGURATION.md: storage section now documents filesystem backend
- docs/DEPLOYMENT.md: architecture diagram, external services, troubleshooting
- docs/INSTALL.md: remove MinIO setup, update architecture diagrams
- docs/SPECIFICATION.md: architecture, technology stack, file storage strategy
- docs/STATUS.md: storage backend status
- docs/GAP_ANALYSIS.md: file handling references
- docs/ROADMAP.md: file storage appendix entries
- deployments/config.prod.yaml: filesystem backend config
- deployments/systemd/silod.env.example: remove MinIO credential vars
2026-02-18 14:45:00 -06:00
9dabaf5796 Merge pull request 'feat(scripts): remote migrate-storage script for MinIO to filesystem migration' (#138) from feat-remote-migrate-storage into main
Reviewed-on: #138
2026-02-18 20:33:38 +00:00
Forbes
3bb335397c feat(scripts): remote migrate-storage script for MinIO to filesystem migration
Adds scripts/migrate-storage.sh that follows the same deploy.sh pattern:
cross-compiles the migrate-storage binary locally, uploads it to the
target host via SCP, then runs it over SSH using credentials from
/etc/silo/silod.env.

Usage: ./scripts/migrate-storage.sh <silo-host> <psql-host> <minio-host> [flags...]
2026-02-18 14:29:46 -06:00
344a0cd0a0 Merge pull request 'feat(storage): add MinIO to filesystem migration tool' (#137) from feat/migrate-storage-tool into main
Reviewed-on: #137
2026-02-18 20:16:17 +00:00
forbes
f5b03989ff feat(storage): add MinIO to filesystem migration tool
Standalone binary (cmd/migrate-storage) that downloads all files from
MinIO and writes them to the local filesystem for decommissioning MinIO.

Queries revision files, item file attachments, and item thumbnails from
the database, then downloads each from MinIO preserving the object key
structure as filesystem paths. Supports --dry-run, --verbose, atomic
writes via temp+rename, and idempotent re-runs (skips existing files
with matching size).
2026-02-18 14:12:32 -06:00
8cd92a4025 Merge pull request 'feat(api): direct multipart upload endpoints for filesystem backend' (#136) from feat-direct-upload into main
Reviewed-on: #136
2026-02-17 19:05:39 +00:00
ffa01ebeb7 feat(api): direct multipart upload endpoints for filesystem backend
Add three new endpoints that bypass the MinIO presigned URL flow:
- POST /api/items/{pn}/files/upload — multipart file upload
- POST /api/items/{pn}/thumbnail/upload — multipart thumbnail upload
- GET /api/items/{pn}/files/{fileId}/download — stream file download

Rewrite frontend upload flow: files are held in browser memory on drop
and uploaded directly after item creation via multipart POST. The old
presign+associate endpoints remain for MinIO backward compatibility.

Closes #129
2026-02-17 13:04:44 -06:00
9181673554 Merge pull request 'feat(db): add storage backend metadata columns' (#135) from feat-file-storage-metadata into main
Reviewed-on: #135
2026-02-17 18:32:05 +00:00
8cef4fa55f feat(db): add storage backend metadata columns
Add storage_backend columns to track which backend (minio or filesystem)
holds each file, enabling dual-running during migration.

Migration 017_file_storage_metadata.sql:
- item_files.storage_backend TEXT NOT NULL DEFAULT 'minio'
- revisions.file_storage_backend TEXT NOT NULL DEFAULT 'minio'

DB repository changes:
- Revision struct: add FileStorageBackend field
- ItemFile struct: add StorageBackend field
- All INSERT queries include the new columns
- All SELECT queries read them (COALESCE for pre-migration compat)
- CreateRevisionFromExisting copies the backend from source revision
- Default to 'minio' when field is empty (backward compat)

Existing rows default to 'minio'. New uploads will write 'filesystem'
when the filesystem backend is active.

Closes #128
2026-02-17 12:30:20 -06:00
7a9dd057a5 Merge pull request 'feat(storage): FileStore interface abstraction + filesystem backend' (#134) from feat-storage-interface-filesystem into main
Reviewed-on: #134
2026-02-17 17:55:09 +00:00
9f347e7898 feat(storage): implement filesystem backend
Implement FilesystemStore satisfying the FileStore interface for local
filesystem storage, replacing MinIO for simpler deployments.

- Atomic writes via temp file + os.Rename (no partial files)
- SHA-256 checksum computed on Put via io.MultiWriter
- Get/GetVersion return os.File (GetVersion ignores versionID)
- Delete is idempotent (no error if file missing)
- Copy uses same atomic write pattern
- PresignPut returns ErrPresignNotSupported
- Ping verifies root directory is writable
- Wire NewFilesystemStore in main.go backend switch
- 14 unit tests covering all methods including atomicity

Closes #127
2026-02-17 11:49:42 -06:00
b531617e39 feat(storage): define FileStore interface and refactor to use it
Extract a FileStore interface from the concrete *storage.Storage MinIO
wrapper so the API layer is storage-backend agnostic.

- Define FileStore interface in internal/storage/interface.go
- Add Exists method to MinIO Storage (via StatObject)
- Add compile-time interface satisfaction check
- Change Server.storage and ServerState.storage to FileStore interface
- Update NewServer and NewServerState signatures
- Add Backend and FilesystemConfig fields to StorageConfig
- Add backend selection switch in main.go (minio/filesystem/unknown)
- Update config.example.yaml with backend field

The nil-interface pattern is preserved: when storage is unconfigured,
store remains a true nil FileStore (not a typed nil pointer), so all
existing if s.storage == nil checks continue to work correctly.

Closes #126
2026-02-17 11:49:35 -06:00
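The nil-interface caveat in this commit is a classic Go trap worth seeing concretely: an interface holding a typed nil pointer is not equal to nil. A small demonstration (FileStore here is a trimmed-down stand-in for the real interface):

```go
package main

import "fmt"

// FileStore is a trimmed-down stand-in for the interface in
// internal/storage/interface.go.
type FileStore interface{ Ping() error }

type memStore struct{}

func (*memStore) Ping() error { return nil }

// newStore returns a true nil interface when storage is unconfigured.
// Returning a typed nil like (*memStore)(nil) instead would make
// `store == nil` false at every call site, which is the trap the
// commit's wiring avoids.
func newStore(configured bool) FileStore {
	if !configured {
		return nil // true nil interface value
	}
	return &memStore{}
}

func main() {
	var typed *memStore         // nil pointer
	var iface FileStore = typed // interface holding a nil pointer
	fmt.Println(iface == nil)   // false: typed nil is NOT interface nil
	fmt.Println(newStore(false) == nil) // true: the preserved pattern
}
```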
906277149e Merge pull request 'feat(web): read-write configuration from admin UI' (#124) from feat-admin-config-ui into main
Reviewed-on: #124
2026-02-15 23:12:04 +00:00
Forbes
fc4826f576 feat(web): read-write configuration from admin UI
Convert all module settings from read-only to editable fields in the
admin settings page:

- Core: host, port, base_url (read-only stays read-only)
- Schemas: directory, default (count stays read-only)
- Database: host, port, name, user, password, sslmode (dropdown),
  max_connections
- Storage: endpoint, bucket, use_ssl (checkbox), region
- Auth: local/ldap/oidc sub-sections with enabled checkboxes,
  connection fields, and secret fields (password input for redacted)

New field components: SelectField (dropdown), CheckboxField (toggle).
Redacted fields now render as password inputs with placeholder.
Auth uses nested key handling to send sub-section objects.

Backend already persists overrides and flags restart-required changes.

Closes #117
2026-02-15 13:33:48 -06:00
fbfc955ccc Merge pull request 'feat(modules): SSE settings.changed event broadcast' (#123) from feat-sse-settings-changed into main
Reviewed-on: #123
2026-02-15 19:14:36 +00:00
e0295e7180 Merge branch 'main' into feat-sse-settings-changed 2026-02-15 19:14:26 +00:00
Forbes
7fec219152 feat(modules): SSE settings.changed event broadcast and UI reactions
Add useSSE hook that connects to /api/events with automatic reconnect
and exponential backoff. On settings.changed events:

- Refresh module state so sidebar nav items show/hide immediately
- Show dismissable toast when another admin updates settings

The backend already publishes settings.changed in HandleUpdateModuleSettings.

Closes #101
2026-02-15 13:11:04 -06:00
fa069eb05c Merge pull request 'feat(web): move edit/delete buttons into tab bar on item detail' (#122) from feat-move-edit-delete-buttons into main
Reviewed-on: #122
2026-02-15 19:03:59 +00:00
Forbes
8735c8341b feat(web): move edit/delete buttons into tab bar on item detail
Relocate Edit and Delete buttons from the header row into the tab bar,
grouping them with tab navigation to reduce mouse travel. Adds Pencil
and Trash2 icons for quick visual recognition.

Header now only shows part number, type badge, and close button.

Closes #119
2026-02-15 12:59:40 -06:00
7a172ce34c Merge pull request 'feat(web): favicon, narrow settings, scrollable token list' (#121) from feat-ui-tweaks into main
Reviewed-on: #121
2026-02-15 18:47:03 +00:00
Forbes
da65d4bc1a feat(web): favicon, narrow settings, scrollable token list
- Add kindred-logo.svg as site favicon (#115)
- Narrow settings page to 66% max-width, centered (#116)
- Add max-height and scroll to API token table (#118)

Closes #115, closes #116, closes #118
2026-02-15 12:38:20 -06:00
57d5a786d0 Merge pull request 'feat(web): collapsible left sidebar, remove top nav bar' (#120) from feat-sidebar-nav into main
Reviewed-on: #120
2026-02-15 18:33:09 +00:00
Forbes
42a901f39c feat(web): collapsible left sidebar, remove top nav bar
- Replace top header with left sidebar navigation
- Sidebar shows module-aware nav items filtered by /api/modules
- Collapsible: expanded shows icon+label, collapsed shows icon only
- Toggle with Ctrl+J or collapse button, state persisted in localStorage
- Keyboard navigable: Arrow Up/Down, Enter to navigate, Escape to collapse
- Bottom section: density toggle, user info with role badge, logout
- Add useModules hook for fetching module state
- Add sidebar density variables to theme.css

Closes #113, closes #114
2026-02-15 12:32:52 -06:00
666cc2b23b Merge pull request 'feat(jobs): wire auto-triggering on bom_changed events' (#112) from feat-job-auto-trigger into main
Reviewed-on: #112
2026-02-15 15:44:42 +00:00
Forbes
747bae8354 feat(jobs): wire auto-triggering on bom_changed events, add module guard
- Add IsEnabled("jobs") guard to triggerJobs() to skip when module disabled
- Fire bom_changed trigger from HandleAddBOMEntry, HandleUpdateBOMEntry,
  HandleDeleteBOMEntry (matching existing HandleMergeBOM pattern)
- Add 4 integration tests: revision trigger, BOM trigger, filter mismatch,
  module disabled
- Fix AppShell overflow: hidden -> auto so Settings page scrolls
- Clean old frontend assets in deploy script before extracting

Closes #107
2026-02-15 09:43:05 -06:00
71603bb6d7 Merge pull request 'feat: location hierarchy CRUD API' (#106) from feat-location-crud into main
Reviewed-on: #106
2026-02-15 09:16:52 +00:00
Forbes
4ef912cf4b feat: location hierarchy CRUD API
Add LocationRepository with CRUD operations, hierarchy traversal
(children, subtree by path prefix), and inventory-safe deletion.

Endpoints:
  GET    /api/locations          — list all or ?tree={path} for subtree
  POST   /api/locations          — create (auto-resolves parent_id, depth)
  GET    /api/locations/{path..} — get by hierarchical path
  PUT    /api/locations/{path..} — update name, type, metadata
  DELETE /api/locations/{path..} — delete (rejects if inventory exists)

Uses chi wildcard routes to support multi-segment paths like
/api/locations/lab/shelf-a/bin-3.

Includes 10 handler integration tests covering CRUD, nesting,
validation, duplicates, tree queries, and delete-not-found.

Closes #81
2026-02-15 03:15:54 -06:00
decb32c3e7 Merge pull request 'feat(web): admin settings page — module cards, toggles, config forms' (#105) from feat-admin-settings-api into main
Reviewed-on: #105
2026-02-15 09:09:17 +00:00
Forbes
0be39065ac feat(web): admin settings page with module cards, toggles, config forms
Add admin-only Module Configuration section to the Settings page.
Each module gets a collapsible card with enable/disable toggle,
status badge, module-specific config fields, save and test
connectivity buttons.

- AdminModules: fetches GET /api/modules + GET /api/admin/settings,
  renders Infrastructure and Features groups, restart banner
- ModuleCard: collapsible card with toggle, status badge, field
  layouts per module, save (PUT) and test (POST) actions
- TypeScript types for ModuleInfo, ModulesResponse, admin settings
  API response shapes

Ref: #100
2026-02-15 03:01:33 -06:00
Forbes
101d04ab6f test(api): admin settings handler tests
- TestGetAllSettings — all module keys present, secrets redacted
- TestGetModuleSettings — single module response
- TestGetModuleSettings_Unknown — 404 for unknown module
- TestToggleModule — disable projects, verify registry state
- TestToggleModule_DependencyError — enable dag without jobs, expect 400
- TestToggleRequiredModule — disable core, expect 400
- TestTestConnectivity_Database — ping database, expect success
- TestTestConnectivity_NotTestable — core module, expect 400
2026-02-15 02:51:00 -06:00
Forbes
8167d9c216 feat(api): admin settings API endpoints
Add four admin-only endpoints under /api/admin/settings:

- GET  /                — full config (secrets redacted)
- GET  /{module}        — single module config
- PUT  /{module}        — toggle modules + persist config overrides
- POST /{module}/test   — test external connectivity (database, storage)

PUT publishes a settings.changed SSE event. Config overrides are
persisted for future hot-reload support; changes to database/storage/
server/schemas namespaces return restart_required: true.

Wires SettingsRepository into Server struct.

Closes #99
2026-02-15 02:51:00 -06:00
Forbes
319a739adb feat(db): add SettingsRepository for module state and config overrides
Provides CRUD operations on the module_state and settings_overrides
tables (created in migration 016).

- GetModuleStates / SetModuleState — upsert module enabled/disabled
- GetOverrides / SetOverride / DeleteOverride — JSONB config overrides

Part of #99
2026-02-15 02:51:00 -06:00
e20252a993 Merge pull request 'feat: module system — registry, middleware, and discovery endpoint' (#102) from feat-module-system into main
Reviewed-on: #102
2026-02-14 20:05:42 +00:00
Forbes
138ce16010 fix: remove unreachable code in testutil.findProjectRoot 2026-02-14 14:02:48 -06:00
Forbes
690ad73161 feat(modules): public GET /api/modules discovery endpoint
Add HandleGetModules returning module state, metadata, and
public config (auth providers, Create URI scheme). No auth
required — clients call this pre-login.

Register at /api/modules before the auth middleware.

Ref #97
2026-02-14 14:02:11 -06:00
Forbes
b8abd8859d feat(modules): RequireModule middleware to gate route groups
Add RequireModule middleware that returns 404 with
{"error":"module '<id>' is not enabled"} when a module is disabled.

Wrap route groups:
- projects → RequireModule("projects")
- audit → RequireModule("audit")
- integrations/odoo → RequireModule("odoo")
- jobs, job-definitions, runners → RequireModule("jobs")
- /api/runner (runner-facing) → RequireModule("jobs")
- dag → RequireModule("dag") (extracted into sub-route)

Ref #98
2026-02-14 14:01:32 -06:00
Forbes
4fd4013360 feat(modules): wire registry into server startup
Add modules.Registry and config.Config fields to Server struct.
Create registry in main.go, load state from YAML+DB, log all
module states at startup.

Conditionally start job/runner sweeper goroutines only when the
jobs module is enabled.

Update all 5 test files to pass registry to NewServer.

Ref #95, #96
2026-02-14 14:00:24 -06:00
Forbes
3adc155b14 feat(modules): config loader refactor — YAML → DB → env pipeline
Add ModulesConfig and ModuleToggle types to config.go for explicit
module enable/disable in YAML.

Add LoadState() that merges state from three sources:
1. Backward-compat YAML fields (auth.enabled, odoo.enabled)
2. Explicit modules.* YAML toggles (override compat)
3. Database module_state table (highest precedence)

Validates dependency chain after loading. 5 loader tests.

Ref #95
2026-02-14 13:58:26 -06:00
Forbes
9d8afa5981 feat(modules): module registry with metadata, dependencies, and defaults
In-memory registry for 10 modules (3 required, 7 optional).
SetEnabled validates dependency chains: cannot enable a module
whose dependencies are disabled, cannot disable a module that
others depend on.

9 unit tests covering default state, toggling, dependency
validation, and error cases.

Ref #96
2026-02-14 13:57:32 -06:00
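Both dependency rules from this commit, no enabling a module whose dependencies are off and no disabling a module others still need, fit in a few lines. A minimal sketch of the registry's SetEnabled logic (field and method names are illustrative):

```go
package main

import "fmt"

// registry is a minimal sketch of the in-memory module registry: each
// module lists the modules it depends on.
type registry struct {
	deps    map[string][]string
	enabled map[string]bool
}

// setEnabled enforces both dependency rules: a module cannot be enabled
// while a dependency is disabled, and cannot be disabled while an
// enabled module still depends on it.
func (r *registry) setEnabled(id string, on bool) error {
	if on {
		for _, d := range r.deps[id] {
			if !r.enabled[d] {
				return fmt.Errorf("cannot enable %q: dependency %q is disabled", id, d)
			}
		}
	} else {
		for m, ds := range r.deps {
			for _, d := range ds {
				if d == id && r.enabled[m] {
					return fmt.Errorf("cannot disable %q: %q depends on it", id, m)
				}
			}
		}
	}
	r.enabled[id] = on
	return nil
}

func main() {
	r := &registry{
		deps:    map[string][]string{"dag": {"jobs"}},
		enabled: map[string]bool{},
	}
	fmt.Println(r.setEnabled("dag", true) != nil) // true: jobs is disabled
	r.setEnabled("jobs", true)
	fmt.Println(r.setEnabled("dag", true))        // <nil>
	fmt.Println(r.setEnabled("jobs", false) != nil) // true: dag depends on it
}
```

This matches the dag-without-jobs case exercised later in TestToggleModule_DependencyError.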
Forbes
f91cf2bc6f feat(modules): settings_overrides and module_state migration
Add migration 016 with two tables for the module system:
- settings_overrides: dotted-path config overrides set via admin UI
- module_state: per-module enabled/disabled state

Update testutil.TruncateAll to include new tables.

Ref #94
2026-02-14 13:56:26 -06:00
ef44523ae8 Merge pull request 'fix(web): standardize typography and spacing to style guide' (#93) from fix-web-style-guide into main
Reviewed-on: #93
2026-02-14 19:37:04 +00:00
Forbes
ba92dd363c fix(web): align all spacing values to 4px grid
Standardize all spacing to multiples of 4px (0.25rem):
- 0.15rem (2.4px) → 0.25rem (4px)
- 0.35rem (5.6px) → 0.25rem (4px)
- 0.375rem (6px) → 0.25rem (4px) for borderRadius
- 0.4rem (6.4px) → 0.5rem (8px)
- 0.6rem (9.6px) → 0.5rem (8px)

Updated theme.css density variables, silo-base.css focus ring,
and all TSX component inline styles.

Closes #71
2026-02-14 13:36:22 -06:00
Forbes
c7857fdfc9 fix(web): standardize font sizes to style guide scale
Map fontWeight: 700 → 600 in non-title contexts (LoginPage, FileDropZone).
Align FileDropZone badge padding to 4px grid.

Closes #70
2026-02-14 13:36:07 -06:00
defb3af56f Merge pull request 'feat: dependency DAG and YAML-defined compute jobs' (#92) from feat-dag-workers into main
Reviewed-on: #92
2026-02-14 19:27:18 +00:00
Forbes
6d7a85cfac docs: add DAG client integration contract for silo-mod and runners 2026-02-14 13:24:36 -06:00
Forbes
22c778f8b0 test: add DAG handler, job handler, and runner token tests 2026-02-14 13:23:21 -06:00
Forbes
ad4224aa8f feat: add silorunner binary with job poll/claim/execute lifecycle 2026-02-14 13:21:21 -06:00
Forbes
b6ac5133c3 feat: add auto-trigger hooks for revision and BOM changes 2026-02-14 13:20:15 -06:00
Forbes
2732554cd2 feat: add job, runner, and DAG API handlers with routes 2026-02-14 13:19:02 -06:00
Forbes
df073709ce feat: add DAG API handlers for graph queries and sync 2026-02-14 13:16:19 -06:00
Forbes
0eb891667b feat: add runner authentication middleware and identity context 2026-02-14 13:14:36 -06:00
Forbes
1952dea00c feat: wire job definitions, DAG/job repos, and background sweepers 2026-02-14 13:13:54 -06:00
Forbes
6becfd82d4 feat: add job and runner repository with atomic claim 2026-02-14 13:11:41 -06:00
Forbes
671a0aeefe feat: add DAG repository with graph queries and dirty propagation 2026-02-14 13:09:41 -06:00
Forbes
f60c25983b feat: add YAML job definition parser and example definitions
New package internal/jobdef mirrors the schema package pattern:
- Load/LoadAll/Validate for YAML job definitions
- Supports trigger types: revision_created, bom_changed, manual, schedule
- Supports scope types: item, assembly, project
- Supports compute types: validate, rebuild, diff, export, custom
- Defaults: timeout=600s, max_retries=1, priority=100

Example definitions in jobdefs/:
- assembly-validate.yaml: incremental validation on revision_created
- part-export-step.yaml: STEP export on manual trigger

11 unit tests, all passing.
2026-02-14 13:06:24 -06:00
Forbes
83e0d6821c feat: add database migrations for DAG and worker system
Migration 014: dag_nodes, dag_edges, dag_cross_edges tables for the
feature-level dependency graph with validation state tracking.

Migration 015: runners, job_definitions, jobs, job_log tables for the
async compute job system with PostgreSQL-backed work queue.

Update TruncateAll in testutil to include new tables.
2026-02-14 13:04:41 -06:00
Forbes
9a8b3150ff docs: add DAG and worker system specifications
DAG.md describes the two-tier dependency graph (BOM DAG + feature DAG),
node/edge data model, validation states, dirty propagation, forward/backward
cone queries, DAG sync payload format, and REST API.

WORKERS.md describes the general-purpose async compute job system: YAML job
definitions, job lifecycle (pending→claimed→running→completed/failed),
runner registration and authentication, claim semantics (SELECT FOR UPDATE
SKIP LOCKED), timeout enforcement, SSE events, and REST API.
2026-02-14 13:03:48 -06:00
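The claim semantics WORKERS.md names (SELECT FOR UPDATE SKIP LOCKED) guarantee that concurrent runners never receive the same job. The SQL shape and an in-memory mirror of the same invariant, both sketches rather than the repository's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// jobQueue mirrors the PostgreSQL work queue's claim invariant. The real
// queue does this in SQL, roughly (sketch; actual query not shown in
// these commits):
//
//	UPDATE jobs SET status = 'claimed', runner_id = $1
//	WHERE id = (SELECT id FROM jobs WHERE status = 'pending'
//	            ORDER BY priority, created_at
//	            FOR UPDATE SKIP LOCKED LIMIT 1)
//	RETURNING id;
type jobQueue struct {
	mu      sync.Mutex
	pending []int
	claimed map[int]string // job ID -> runner that claimed it
}

// claim atomically hands out one pending job, or reports none available.
func (q *jobQueue) claim(runner string) (int, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.pending) == 0 {
		return 0, false
	}
	id := q.pending[0]
	q.pending = q.pending[1:]
	q.claimed[id] = runner
	return id, true
}

func main() {
	q := &jobQueue{pending: []int{1, 2, 3}, claimed: map[int]string{}}
	var wg sync.WaitGroup
	// Four runners race for three jobs; each job is claimed exactly once.
	for _, r := range []string{"r1", "r2", "r3", "r4"} {
		wg.Add(1)
		go func(r string) { defer wg.Done(); q.claim(r) }(r)
	}
	wg.Wait()
	fmt.Println(len(q.claimed), len(q.pending)) // 3 0
}
```

SKIP LOCKED is what lets many runners poll the same table without serializing on each other: a claimer simply skips rows another transaction is mid-claim on.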
376fa3db31 Merge pull request 'test: add test coverage for DB, file handlers, CSV/ODS, and API endpoints' (#86) from test-coverage-batch into main
Reviewed-on: #86
2026-02-14 14:50:38 +00:00
110 changed files with 13326 additions and 711 deletions

.gitignore vendored (1 line changed)

@@ -1,6 +1,7 @@
# Binaries
/silo
/silod
/migrate-storage
*.exe
*.dll
*.so

Makefile

@@ -1,7 +1,8 @@
.PHONY: build run test test-integration clean migrate fmt lint \
docker-build docker-up docker-down docker-logs docker-ps \
docker-clean docker-rebuild \
-	web-install web-dev web-build
+	web-install web-dev web-build \
+	migrate-storage
# =============================================================================
# Local Development
@@ -11,6 +12,7 @@
build: web-build
go build -o silo ./cmd/silo
go build -o silod ./cmd/silod
go build -o silorunner ./cmd/silorunner
# Run the API server locally
run:
@@ -30,7 +32,7 @@ test-integration:
# Clean build artifacts
clean:
-	rm -f silo silod
+	rm -f silo silod silorunner
rm -f *.out
rm -rf web/dist
@@ -55,6 +57,13 @@ tidy:
migrate:
./scripts/init-db.sh
# Build and run MinIO → filesystem migration tool
# Usage: make migrate-storage DEST=/opt/silo/data [ARGS="--dry-run --verbose"]
migrate-storage:
go build -o migrate-storage ./cmd/migrate-storage
@echo "Built ./migrate-storage"
@echo "Run: ./migrate-storage -config <config.yaml> -dest <dir> [-dry-run] [-verbose]"
# Connect to database (requires psql)
db-shell:
PGPASSWORD=$${SILO_DB_PASSWORD:-silodev} psql -h $${SILO_DB_HOST:-localhost} -U $${SILO_DB_USER:-silo} -d $${SILO_DB_NAME:-silo}

cmd/migrate-storage/main.go (new file, 288 lines)

@@ -0,0 +1,288 @@
// Command migrate-storage downloads files from MinIO and writes them to the
// local filesystem. It is a one-shot migration tool for moving off MinIO.
//
// Usage:
//
// migrate-storage -config config.yaml -dest /opt/silo/data [-dry-run] [-verbose]
package main
import (
"context"
"flag"
"fmt"
"io"
"os"
"path/filepath"
"time"
"github.com/kindredsystems/silo/internal/config"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/storage"
"github.com/rs/zerolog"
)
// fileEntry represents a single file to migrate.
type fileEntry struct {
key string
versionID string // MinIO version ID; empty if not versioned
size int64 // expected size from DB; 0 if unknown
}
func main() {
configPath := flag.String("config", "config.yaml", "Path to configuration file")
dest := flag.String("dest", "", "Destination root directory (required)")
dryRun := flag.Bool("dry-run", false, "Preview what would be migrated without downloading")
verbose := flag.Bool("verbose", false, "Log every file, not just errors and summary")
flag.Parse()
logger := zerolog.New(os.Stdout).With().Timestamp().Logger()
if *dest == "" {
logger.Fatal().Msg("-dest is required")
}
// Load config (reuses existing config for DB + MinIO credentials).
cfg, err := config.Load(*configPath)
if err != nil {
logger.Fatal().Err(err).Msg("failed to load configuration")
}
ctx := context.Background()
// Connect to PostgreSQL.
database, err := db.Connect(ctx, db.Config{
Host: cfg.Database.Host,
Port: cfg.Database.Port,
Name: cfg.Database.Name,
User: cfg.Database.User,
Password: cfg.Database.Password,
SSLMode: cfg.Database.SSLMode,
MaxConnections: cfg.Database.MaxConnections,
})
if err != nil {
logger.Fatal().Err(err).Msg("failed to connect to database")
}
defer database.Close()
logger.Info().Msg("connected to database")
// Connect to MinIO.
store, err := storage.Connect(ctx, storage.Config{
Endpoint: cfg.Storage.Endpoint,
AccessKey: cfg.Storage.AccessKey,
SecretKey: cfg.Storage.SecretKey,
Bucket: cfg.Storage.Bucket,
UseSSL: cfg.Storage.UseSSL,
Region: cfg.Storage.Region,
})
if err != nil {
logger.Fatal().Err(err).Msg("failed to connect to MinIO")
}
logger.Info().Str("bucket", cfg.Storage.Bucket).Msg("connected to MinIO")
// Collect all file references from the database.
entries, err := collectEntries(ctx, logger, database)
if err != nil {
logger.Fatal().Err(err).Msg("failed to collect file entries from database")
}
logger.Info().Int("total", len(entries)).Msg("file entries found")
if len(entries) == 0 {
logger.Info().Msg("nothing to migrate")
return
}
// Migrate.
var migrated, skipped, failed int
start := time.Now()
for i, e := range entries {
destPath := filepath.Join(*dest, e.key)
// Check if already migrated.
if info, err := os.Stat(destPath); err == nil {
if e.size > 0 && info.Size() == e.size {
if *verbose {
logger.Info().Str("key", e.key).Msg("skipped (already exists)")
}
skipped++
continue
}
// Size mismatch or unknown size — re-download.
}
if *dryRun {
logger.Info().
Str("key", e.key).
Int64("size", e.size).
Str("version", e.versionID).
Msgf("[%d/%d] would migrate", i+1, len(entries))
continue
}
if err := migrateFile(ctx, store, e, destPath); err != nil {
logger.Error().Err(err).Str("key", e.key).Msg("failed to migrate")
failed++
continue
}
migrated++
if *verbose {
logger.Info().
Str("key", e.key).
Int64("size", e.size).
Msgf("[%d/%d] migrated", i+1, len(entries))
} else if (i+1)%50 == 0 {
logger.Info().Msgf("progress: %d/%d", i+1, len(entries))
}
}
elapsed := time.Since(start)
ev := logger.Info().
Int("total", len(entries)).
Int("migrated", migrated).
Int("skipped", skipped).
Int("failed", failed).
Dur("elapsed", elapsed)
if *dryRun {
ev.Msg("dry run complete")
} else {
ev.Msg("migration complete")
}
if failed > 0 {
os.Exit(1)
}
}
// collectEntries queries the database for all file references across the three
// storage domains: revision files, item file attachments, and item thumbnails.
// It deduplicates by key.
func collectEntries(ctx context.Context, logger zerolog.Logger, database *db.DB) ([]fileEntry, error) {
pool := database.Pool()
seen := make(map[string]struct{})
var entries []fileEntry
add := func(key, versionID string, size int64) {
if key == "" {
return
}
if _, ok := seen[key]; ok {
return
}
seen[key] = struct{}{}
entries = append(entries, fileEntry{key: key, versionID: versionID, size: size})
}
// 1. Revision files.
rows, err := pool.Query(ctx,
`SELECT file_key, COALESCE(file_version, ''), COALESCE(file_size, 0)
FROM revisions WHERE file_key IS NOT NULL`)
if err != nil {
return nil, fmt.Errorf("querying revisions: %w", err)
}
for rows.Next() {
var key, version string
var size int64
if err := rows.Scan(&key, &version, &size); err != nil {
rows.Close()
return nil, fmt.Errorf("scanning revision row: %w", err)
}
add(key, version, size)
}
rows.Close()
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("iterating revisions: %w", err)
}
logger.Info().Int("count", len(entries)).Msg("revision files found")
// 2. Item file attachments.
countBefore := len(entries)
rows, err = pool.Query(ctx,
`SELECT object_key, size FROM item_files`)
if err != nil {
return nil, fmt.Errorf("querying item_files: %w", err)
}
for rows.Next() {
var key string
var size int64
if err := rows.Scan(&key, &size); err != nil {
rows.Close()
return nil, fmt.Errorf("scanning item_files row: %w", err)
}
add(key, "", size)
}
rows.Close()
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("iterating item_files: %w", err)
}
logger.Info().Int("count", len(entries)-countBefore).Msg("item file attachments found")
// 3. Item thumbnails.
countBefore = len(entries)
rows, err = pool.Query(ctx,
`SELECT thumbnail_key FROM items WHERE thumbnail_key IS NOT NULL`)
if err != nil {
return nil, fmt.Errorf("querying item thumbnails: %w", err)
}
for rows.Next() {
var key string
if err := rows.Scan(&key); err != nil {
rows.Close()
return nil, fmt.Errorf("scanning thumbnail row: %w", err)
}
add(key, "", 0)
}
rows.Close()
if err := rows.Err(); err != nil {
return nil, fmt.Errorf("iterating thumbnails: %w", err)
}
logger.Info().Int("count", len(entries)-countBefore).Msg("item thumbnails found")
return entries, nil
}
// migrateFile downloads a single file from MinIO and writes it atomically to destPath.
func migrateFile(ctx context.Context, store *storage.Storage, e fileEntry, destPath string) error {
// Ensure parent directory exists.
if err := os.MkdirAll(filepath.Dir(destPath), 0755); err != nil {
return fmt.Errorf("creating directory: %w", err)
}
// Download from MinIO.
var reader io.ReadCloser
var err error
if e.versionID != "" {
reader, err = store.GetVersion(ctx, e.key, e.versionID)
} else {
reader, err = store.Get(ctx, e.key)
}
if err != nil {
return fmt.Errorf("downloading from MinIO: %w", err)
}
defer reader.Close()
// Write to temp file then rename for atomicity.
tmpPath := destPath + ".tmp"
f, err := os.Create(tmpPath)
if err != nil {
return fmt.Errorf("creating temp file: %w", err)
}
if _, err := io.Copy(f, reader); err != nil {
f.Close()
os.Remove(tmpPath)
return fmt.Errorf("writing file: %w", err)
}
if err := f.Close(); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("closing temp file: %w", err)
}
if err := os.Rename(tmpPath, destPath); err != nil {
os.Remove(tmpPath)
return fmt.Errorf("renaming temp file: %w", err)
}
return nil
}


@@ -3,6 +3,7 @@ package main
import (
"context"
"encoding/json"
"flag"
"fmt"
"net/http"
@@ -13,10 +14,13 @@ import (
"github.com/alexedwards/scs/pgxstore"
"github.com/alexedwards/scs/v2"
"github.com/kindredsystems/silo/internal/api"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/config"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/jobdef"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/storage"
"github.com/rs/zerolog"
@@ -61,24 +65,39 @@ func main() {
logger.Info().Msg("connected to database")
// Connect to storage (optional - may be externally managed)
var store *storage.Storage
if cfg.Storage.Endpoint != "" {
store, err = storage.Connect(ctx, storage.Config{
Endpoint: cfg.Storage.Endpoint,
AccessKey: cfg.Storage.AccessKey,
SecretKey: cfg.Storage.SecretKey,
Bucket: cfg.Storage.Bucket,
UseSSL: cfg.Storage.UseSSL,
Region: cfg.Storage.Region,
})
if err != nil {
logger.Warn().Err(err).Msg("failed to connect to storage - file operations disabled")
store = nil
var store storage.FileStore
switch cfg.Storage.Backend {
case "minio", "":
if cfg.Storage.Endpoint != "" {
s, connErr := storage.Connect(ctx, storage.Config{
Endpoint: cfg.Storage.Endpoint,
AccessKey: cfg.Storage.AccessKey,
SecretKey: cfg.Storage.SecretKey,
Bucket: cfg.Storage.Bucket,
UseSSL: cfg.Storage.UseSSL,
Region: cfg.Storage.Region,
})
if connErr != nil {
logger.Warn().Err(connErr).Msg("failed to connect to storage - file operations disabled")
} else {
store = s
logger.Info().Msg("connected to storage")
}
} else {
logger.Info().Msg("connected to storage")
logger.Info().Msg("storage not configured - file operations disabled")
}
} else {
logger.Info().Msg("storage not configured - file operations disabled")
case "filesystem":
if cfg.Storage.Filesystem.RootDir == "" {
logger.Fatal().Msg("storage.filesystem.root_dir is required when backend is \"filesystem\"")
}
s, fsErr := storage.NewFilesystemStore(cfg.Storage.Filesystem.RootDir)
if fsErr != nil {
logger.Fatal().Err(fsErr).Msg("failed to initialize filesystem storage")
}
store = s
logger.Info().Str("root", cfg.Storage.Filesystem.RootDir).Msg("connected to filesystem storage")
default:
logger.Fatal().Str("backend", cfg.Storage.Backend).Msg("unknown storage backend")
}
// Load schemas
@@ -178,6 +197,54 @@ func main() {
}
}
// Load job definitions (optional — directory may not exist yet)
var jobDefs map[string]*jobdef.Definition
if _, err := os.Stat(cfg.Jobs.Directory); err == nil {
jobDefs, err = jobdef.LoadAll(cfg.Jobs.Directory)
if err != nil {
logger.Fatal().Err(err).Str("directory", cfg.Jobs.Directory).Msg("failed to load job definitions")
}
logger.Info().Int("count", len(jobDefs)).Msg("loaded job definitions")
} else {
jobDefs = make(map[string]*jobdef.Definition)
logger.Info().Str("directory", cfg.Jobs.Directory).Msg("job definitions directory not found, skipping")
}
// Upsert job definitions into database
jobRepo := db.NewJobRepository(database)
for _, def := range jobDefs {
defJSON, _ := json.Marshal(def)
var defMap map[string]any
json.Unmarshal(defJSON, &defMap)
rec := &db.JobDefinitionRecord{
Name: def.Name,
Version: def.Version,
TriggerType: def.Trigger.Type,
ScopeType: def.Scope.Type,
ComputeType: def.Compute.Type,
RunnerTags: def.Runner.Tags,
TimeoutSeconds: def.Timeout,
MaxRetries: def.MaxRetries,
Priority: def.Priority,
Definition: defMap,
Enabled: true,
}
if err := jobRepo.UpsertDefinition(ctx, rec); err != nil {
logger.Fatal().Err(err).Str("name", def.Name).Msg("failed to upsert job definition")
}
}
// Initialize module registry
registry := modules.NewRegistry()
if err := modules.LoadState(registry, cfg, database.Pool()); err != nil {
logger.Fatal().Err(err).Msg("failed to load module state")
}
for _, m := range registry.All() {
logger.Info().Str("module", m.ID).Bool("enabled", registry.IsEnabled(m.ID)).
Bool("required", m.Required).Msg("module")
}
// Create SSE broker and server state
broker := api.NewBroker(logger)
serverState := api.NewServerState(logger, store, broker)
@@ -190,9 +257,32 @@ func main() {
// Create API server
server := api.NewServer(logger, database, schemas, cfg.Schemas.Directory, store,
authService, sessionManager, oidcBackend, &cfg.Auth, broker, serverState)
authService, sessionManager, oidcBackend, &cfg.Auth, broker, serverState,
jobDefs, cfg.Jobs.Directory, registry, cfg)
router := api.NewRouter(server, logger)
// Start background sweepers for job/runner timeouts (only when jobs module enabled)
if registry.IsEnabled(modules.Jobs) {
go func() {
ticker := time.NewTicker(time.Duration(cfg.Jobs.JobTimeoutCheck) * time.Second)
defer ticker.Stop()
for range ticker.C {
if n, err := jobRepo.TimeoutExpiredJobs(ctx); err != nil {
logger.Error().Err(err).Msg("job timeout sweep failed")
} else if n > 0 {
logger.Info().Int64("count", n).Msg("timed out expired jobs")
}
if n, err := jobRepo.ExpireStaleRunners(ctx, time.Duration(cfg.Jobs.RunnerTimeout)*time.Second); err != nil {
logger.Error().Err(err).Msg("runner expiry sweep failed")
} else if n > 0 {
logger.Info().Int64("count", n).Msg("expired stale runners")
}
}
}()
logger.Info().Msg("job/runner sweepers started")
}
// Create HTTP server
addr := fmt.Sprintf("%s:%d", cfg.Server.Host, cfg.Server.Port)
httpServer := &http.Server{

cmd/silorunner/main.go (new file, 330 lines)

@@ -0,0 +1,330 @@
// Command silorunner is a compute worker that polls the Silo server for jobs
// and executes them using Headless Create with silo-mod installed.
package main
import (
"bytes"
"encoding/json"
"flag"
"fmt"
"io"
"net/http"
"os"
"os/signal"
"syscall"
"time"
"github.com/rs/zerolog"
"gopkg.in/yaml.v3"
)
// RunnerConfig holds runner configuration.
type RunnerConfig struct {
ServerURL string `yaml:"server_url"`
Token string `yaml:"token"`
Name string `yaml:"name"`
Tags []string `yaml:"tags"`
PollInterval int `yaml:"poll_interval"` // seconds, default 5
CreatePath string `yaml:"create_path"` // path to Headless Create binary
}
func main() {
configPath := flag.String("config", "runner.yaml", "Path to runner config file")
flag.Parse()
logger := zerolog.New(os.Stdout).With().Timestamp().Str("component", "silorunner").Logger()
// Load config
cfg, err := loadConfig(*configPath)
if err != nil {
logger.Fatal().Err(err).Msg("failed to load config")
}
if cfg.ServerURL == "" {
logger.Fatal().Msg("server_url is required")
}
if cfg.Token == "" {
logger.Fatal().Msg("token is required")
}
if cfg.Name == "" {
hostname, _ := os.Hostname()
cfg.Name = "runner-" + hostname
}
if cfg.PollInterval <= 0 {
cfg.PollInterval = 5
}
logger.Info().
Str("server", cfg.ServerURL).
Str("name", cfg.Name).
Strs("tags", cfg.Tags).
Int("poll_interval", cfg.PollInterval).
Msg("starting runner")
client := &http.Client{Timeout: 30 * time.Second}
// Graceful shutdown
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
// Heartbeat goroutine. It intentionally does not select on quit: each
// signal is delivered to a single receiver, so a receive here could
// consume the signal and starve the main poll loop's shutdown path.
// The goroutine simply exits with the process.
go func() {
ticker := time.NewTicker(30 * time.Second)
defer ticker.Stop()
for range ticker.C {
if err := heartbeat(client, cfg); err != nil {
logger.Error().Err(err).Msg("heartbeat failed")
}
}
}()
// Initial heartbeat
if err := heartbeat(client, cfg); err != nil {
logger.Warn().Err(err).Msg("initial heartbeat failed")
}
// Poll loop
ticker := time.NewTicker(time.Duration(cfg.PollInterval) * time.Second)
defer ticker.Stop()
for {
select {
case <-ticker.C:
job, definition, err := claimJob(client, cfg)
if err != nil {
logger.Error().Err(err).Msg("claim failed")
continue
}
if job == nil {
continue
}
jobID, _ := job["id"].(string)
defName, _ := job["definition_name"].(string)
logger.Info().Str("job_id", jobID).Str("definition", defName).Msg("claimed job")
// Start the job
if err := startJob(client, cfg, jobID); err != nil {
logger.Error().Err(err).Str("job_id", jobID).Msg("failed to start job")
continue
}
// Execute the job
executeJob(logger, client, cfg, jobID, job, definition)
case <-quit:
logger.Info().Msg("shutting down")
return
}
}
}
func loadConfig(path string) (*RunnerConfig, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
data = []byte(os.ExpandEnv(string(data)))
var cfg RunnerConfig
if err := yaml.Unmarshal(data, &cfg); err != nil {
return nil, fmt.Errorf("parsing config: %w", err)
}
return &cfg, nil
}
func heartbeat(client *http.Client, cfg *RunnerConfig) error {
req, err := http.NewRequest("POST", cfg.ServerURL+"/api/runner/heartbeat", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+cfg.Token)
resp, err := client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return fmt.Errorf("heartbeat: %d %s", resp.StatusCode, string(body))
}
return nil
}
func claimJob(client *http.Client, cfg *RunnerConfig) (map[string]any, map[string]any, error) {
req, err := http.NewRequest("POST", cfg.ServerURL+"/api/runner/claim", nil)
if err != nil {
return nil, nil, err
}
req.Header.Set("Authorization", "Bearer "+cfg.Token)
resp, err := client.Do(req)
if err != nil {
return nil, nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusNoContent {
return nil, nil, nil // No jobs available
}
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return nil, nil, fmt.Errorf("claim: %d %s", resp.StatusCode, string(body))
}
var result struct {
Job map[string]any `json:"job"`
Definition map[string]any `json:"definition"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, nil, fmt.Errorf("decoding claim response: %w", err)
}
return result.Job, result.Definition, nil
}
func startJob(client *http.Client, cfg *RunnerConfig, jobID string) error {
req, err := http.NewRequest("POST", cfg.ServerURL+"/api/runner/jobs/"+jobID+"/start", nil)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+cfg.Token)
resp, err := client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
body, _ := io.ReadAll(resp.Body)
return fmt.Errorf("start: %d %s", resp.StatusCode, string(body))
}
return nil
}
func reportProgress(client *http.Client, cfg *RunnerConfig, jobID string, progress int, message string) {
body, _ := json.Marshal(map[string]any{
"progress": progress,
"message": message,
})
req, _ := http.NewRequest("PUT", cfg.ServerURL+"/api/runner/jobs/"+jobID+"/progress", bytes.NewReader(body))
req.Header.Set("Authorization", "Bearer "+cfg.Token)
req.Header.Set("Content-Type", "application/json")
resp, err := client.Do(req)
if err != nil {
return
}
resp.Body.Close()
}
func completeJob(client *http.Client, cfg *RunnerConfig, jobID string, result map[string]any) error {
body, _ := json.Marshal(map[string]any{"result": result})
req, err := http.NewRequest("POST", cfg.ServerURL+"/api/runner/jobs/"+jobID+"/complete", bytes.NewReader(body))
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+cfg.Token)
req.Header.Set("Content-Type", "application/json")
resp, err := client.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
respBody, _ := io.ReadAll(resp.Body)
return fmt.Errorf("complete: %d %s", resp.StatusCode, string(respBody))
}
return nil
}
func failJob(client *http.Client, cfg *RunnerConfig, jobID string, errMsg string) {
body, _ := json.Marshal(map[string]string{"error": errMsg})
req, _ := http.NewRequest("POST", cfg.ServerURL+"/api/runner/jobs/"+jobID+"/fail", bytes.NewReader(body))
req.Header.Set("Authorization", "Bearer "+cfg.Token)
req.Header.Set("Content-Type", "application/json")
resp, err := client.Do(req)
if err != nil {
return
}
resp.Body.Close()
}
func appendLog(client *http.Client, cfg *RunnerConfig, jobID, level, message string) {
body, _ := json.Marshal(map[string]string{
"level": level,
"message": message,
})
req, _ := http.NewRequest("POST", cfg.ServerURL+"/api/runner/jobs/"+jobID+"/log", bytes.NewReader(body))
req.Header.Set("Authorization", "Bearer "+cfg.Token)
req.Header.Set("Content-Type", "application/json")
resp, err := client.Do(req)
if err != nil {
return
}
resp.Body.Close()
}
// executeJob dispatches the job based on its compute command.
// For now, this is a stub that demonstrates the lifecycle.
// Real execution will shell out to Headless Create with silo-mod.
func executeJob(logger zerolog.Logger, client *http.Client, cfg *RunnerConfig, jobID string, job, definition map[string]any) {
defName, _ := job["definition_name"].(string)
// Extract compute config from definition
var command string
if definition != nil {
if compute, ok := definition["compute"].(map[string]any); ok {
command, _ = compute["command"].(string)
}
}
appendLog(client, cfg, jobID, "info", fmt.Sprintf("starting execution: %s (command: %s)", defName, command))
reportProgress(client, cfg, jobID, 10, "preparing")
switch command {
case "create-validate", "create-export", "create-dag-extract", "create-thumbnail":
if cfg.CreatePath == "" {
failJob(client, cfg, jobID, "create_path not configured")
return
}
appendLog(client, cfg, jobID, "info", fmt.Sprintf("would execute: %s --console with silo-mod", cfg.CreatePath))
reportProgress(client, cfg, jobID, 50, "executing")
// TODO: Actual Create execution:
// 1. Download item file from Silo API
// 2. Shell out: create --console -e "from silo.runner import <entry>; <entry>(...)"
// 3. Parse output JSON
// 4. Upload results / sync DAG
// For now, complete with a placeholder result.
reportProgress(client, cfg, jobID, 90, "finalizing")
if err := completeJob(client, cfg, jobID, map[string]any{
"status": "placeholder",
"message": "Create execution not yet implemented - runner lifecycle verified",
"command": command,
}); err != nil {
logger.Error().Err(err).Str("job_id", jobID).Msg("failed to complete job")
} else {
logger.Info().Str("job_id", jobID).Msg("job completed (placeholder)")
}
default:
failJob(client, cfg, jobID, fmt.Sprintf("unknown compute command: %s", command))
logger.Warn().Str("job_id", jobID).Str("command", command).Msg("unknown compute command")
}
}
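The TODO in executeJob above will eventually shell out to Headless Create. A minimal sketch of how the invocation's argv could be assembled; `buildCreateArgs` and the `silo.runner` entry-point names are hypothetical placeholders, not part of the current codebase:

```go
package main

import "fmt"

// buildCreateArgs assembles the argv for a headless Create invocation.
// The entry-point module path (silo.runner) and function name are
// hypothetical; the real silo-mod entry points are not defined yet.
func buildCreateArgs(createPath, entry, filePath string) []string {
	script := fmt.Sprintf("from silo.runner import %s; %s(%q)", entry, entry, filePath)
	return []string{createPath, "--console", "-e", script}
}

func main() {
	fmt.Println(buildCreateArgs("/usr/local/bin/create", "validate_item", "/tmp/PN-1001.kc"))
}
```

The runner would pass this argv to exec.Command, capture stdout, and parse the JSON result before calling completeJob.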


@@ -17,12 +17,17 @@ database:
max_connections: 10
storage:
backend: "minio" # "minio" (default) or "filesystem"
# MinIO/S3 settings (used when backend: "minio")
endpoint: "localhost:9000" # Use "minio:9000" for Docker Compose
access_key: "" # Use SILO_MINIO_ACCESS_KEY env var
secret_key: "" # Use SILO_MINIO_SECRET_KEY env var
bucket: "silo-files"
use_ssl: true # Use false for Docker Compose (internal network)
region: "us-east-1"
# Filesystem settings (used when backend: "filesystem")
# filesystem:
# root_dir: "/var/lib/silo/objects"
schemas:
# Directory containing YAML schema files


@@ -10,8 +10,6 @@
#
# Credentials via environment variables (set in /etc/silo/silod.env):
# SILO_DB_PASSWORD
# SILO_MINIO_ACCESS_KEY
# SILO_MINIO_SECRET_KEY
# SILO_SESSION_SECRET
# SILO_ADMIN_PASSWORD
@@ -30,12 +28,9 @@ database:
max_connections: 20
storage:
endpoint: "minio.example.internal:9000"
access_key: "" # Set via SILO_MINIO_ACCESS_KEY
secret_key: "" # Set via SILO_MINIO_SECRET_KEY
bucket: "silo-files"
use_ssl: true
region: "us-east-1"
backend: "filesystem"
filesystem:
root_dir: "/opt/silo/data"
schemas:
directory: "/opt/silo/schemas"


@@ -6,10 +6,6 @@
# Database: silo, User: silo
SILO_DB_PASSWORD=
# MinIO credentials (minio.example.internal)
# User: silouser
SILO_MINIO_ACCESS_KEY=silouser
SILO_MINIO_SECRET_KEY=
# Authentication
# Session secret (required when auth is enabled)


@@ -73,25 +73,27 @@ database:
---
## Storage (MinIO/S3)
## Storage (Filesystem)
| Key | Type | Default | Env Override | Description |
|-----|------|---------|-------------|-------------|
| `storage.endpoint` | string | — | `SILO_MINIO_ENDPOINT` | MinIO/S3 endpoint (`host:port`) |
| `storage.access_key` | string | — | `SILO_MINIO_ACCESS_KEY` | Access key |
| `storage.secret_key` | string | — | `SILO_MINIO_SECRET_KEY` | Secret key |
| `storage.bucket` | string | — | — | S3 bucket name (created automatically if missing) |
| `storage.use_ssl` | bool | `false` | — | Use HTTPS for MinIO connections |
| `storage.region` | string | `"us-east-1"` | — | S3 region |
Files are stored on the local filesystem under a configurable root directory.
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| `storage.backend` | string | `"filesystem"` | Storage backend (`filesystem`) |
| `storage.filesystem.root_dir` | string | — | Root directory for file storage (required) |
```yaml
storage:
endpoint: "localhost:9000"
access_key: "" # use SILO_MINIO_ACCESS_KEY env var
secret_key: "" # use SILO_MINIO_SECRET_KEY env var
bucket: "silo-files"
use_ssl: false
region: "us-east-1"
backend: "filesystem"
filesystem:
root_dir: "/opt/silo/data"
```
Ensure the directory exists and is writable by the `silo` user:
```bash
sudo mkdir -p /opt/silo/data
sudo chown silo:silo /opt/silo/data
```
---
@@ -264,9 +266,6 @@ All environment variable overrides. These take precedence over values in `config
| `SILO_DB_NAME` | `database.name` | PostgreSQL database name |
| `SILO_DB_USER` | `database.user` | PostgreSQL user |
| `SILO_DB_PASSWORD` | `database.password` | PostgreSQL password |
| `SILO_MINIO_ENDPOINT` | `storage.endpoint` | MinIO endpoint |
| `SILO_MINIO_ACCESS_KEY` | `storage.access_key` | MinIO access key |
| `SILO_MINIO_SECRET_KEY` | `storage.secret_key` | MinIO secret key |
| `SILO_SESSION_SECRET` | `auth.session_secret` | Session cookie signing secret |
| `SILO_ADMIN_USERNAME` | `auth.local.default_admin_username` | Default admin username |
| `SILO_ADMIN_PASSWORD` | `auth.local.default_admin_password` | Default admin password |
@@ -296,11 +295,9 @@ database:
sslmode: "disable"
storage:
endpoint: "localhost:9000"
access_key: "minioadmin"
secret_key: "minioadmin"
bucket: "silo-files"
use_ssl: false
backend: "filesystem"
filesystem:
root_dir: "./data"
schemas:
directory: "./schemas"

docs/DAG.md (new file, 246 lines)

@@ -0,0 +1,246 @@
# Dependency DAG Specification
**Status:** Draft
**Last Updated:** 2026-02-13
---
## 1. Purpose
The Dependency DAG is a server-side graph that tracks how features, constraints, and assembly relationships depend on each other. It enables three capabilities described in [MULTI_USER_EDITS.md](MULTI_USER_EDITS.md):
1. **Interference detection** -- comparing dependency cones of concurrent edit sessions to classify conflicts as none, soft, or hard before the user encounters them.
2. **Incremental validation** -- marking changed nodes dirty and propagating only through the affected subgraph, using input-hash memoization to stop early when inputs haven't changed.
3. **Structured merge safety** -- walking the DAG to determine whether concurrent edits share upstream dependencies, deciding if auto-merge is safe or manual review is required.
---
## 2. Two-Tier Model
Silo maintains two levels of dependency graph:
### 2.1 BOM DAG (existing)
The assembly-to-part relationship graph already stored in the `relationships` table. Each row represents a parent item containing a child item with a quantity and relationship type (`component`, `alternate`, `reference`). This graph is queried via `GetBOM`, `GetExpandedBOM`, `GetWhereUsed`, and `HasCycle` in `internal/db/relationships.go`.
The BOM DAG is **not modified** by this specification. It continues to serve its existing purpose.
### 2.2 Feature DAG (new)
A finer-grained graph stored in `dag_nodes` and `dag_edges` tables. Each node represents a feature within a single item's revision -- a sketch, pad, fillet, pocket, constraint, body, or part-level container. Edges represent "depends on" relationships: if Pad003 depends on Sketch001, an edge runs from Sketch001 to Pad003.
The feature DAG is populated by clients (silo-mod) when users save, or by runners after compute jobs. Silo stores and queries it but does not generate it -- the Create client has access to the feature tree and is the authoritative source.
### 2.3 Cross-Item Edges
Assembly constraints often reference geometry on child parts (e.g., "mate Face6 of PartA to Face2 of PartB"). These cross-item dependencies are stored in `dag_cross_edges`, linking a node in one item to a node in another. Each cross-edge optionally references the `relationships` row that establishes the BOM connection.
---
## 3. Data Model
### 3.1 dag_nodes
| Column | Type | Description |
|--------|------|-------------|
| `id` | UUID | Primary key |
| `item_id` | UUID | FK to `items.id` |
| `revision_number` | INTEGER | Revision this DAG snapshot belongs to |
| `node_key` | TEXT | Feature name from Create (e.g., `Sketch001`, `Pad003`, `Body`) |
| `node_type` | TEXT | One of: `sketch`, `pad`, `pocket`, `fillet`, `chamfer`, `constraint`, `body`, `part`, `datum`, `mirror`, `pattern`, `boolean` |
| `properties_hash` | TEXT | SHA-256 of the node's parametric inputs (sketch coordinates, fillet radius, constraint values). Used for memoization -- if the hash hasn't changed, validation can skip this node. |
| `validation_state` | TEXT | One of: `clean`, `dirty`, `validating`, `failed` |
| `validation_msg` | TEXT | Error message when `validation_state = 'failed'` |
| `metadata` | JSONB | Type-specific data (sketch coords, feature params, constraint definitions) |
| `created_at` | TIMESTAMPTZ | Row creation time |
| `updated_at` | TIMESTAMPTZ | Last state change |
**Uniqueness:** `(item_id, revision_number, node_key)` -- one node per feature per revision.
### 3.2 dag_edges
| Column | Type | Description |
|--------|------|-------------|
| `id` | UUID | Primary key |
| `source_node_id` | UUID | FK to `dag_nodes.id` -- the upstream node |
| `target_node_id` | UUID | FK to `dag_nodes.id` -- the downstream node that depends on source |
| `edge_type` | TEXT | `depends_on` (default), `references`, `constrains` |
| `metadata` | JSONB | Optional edge metadata |
**Direction convention:** An edge from A to B means "B depends on A". A is upstream, B is downstream. Forward-cone traversal from A walks edges where A is the source.
**Uniqueness:** `(source_node_id, target_node_id, edge_type)`.
**Constraint:** `source_node_id != target_node_id` (no self-edges).
### 3.3 dag_cross_edges
| Column | Type | Description |
|--------|------|-------------|
| `id` | UUID | Primary key |
| `source_node_id` | UUID | FK to `dag_nodes.id` -- node in item A |
| `target_node_id` | UUID | FK to `dag_nodes.id` -- node in item B |
| `relationship_id` | UUID | FK to `relationships.id` (nullable) -- the BOM entry connecting the two items |
| `edge_type` | TEXT | `assembly_ref` (default) |
| `metadata` | JSONB | Reference details (face ID, edge ID, etc.) |
**Uniqueness:** `(source_node_id, target_node_id)`.
---
## 4. Validation States
Each node has a `validation_state` that tracks whether its computed geometry is current:
| State | Meaning |
|-------|---------|
| `clean` | Node's geometry matches its `properties_hash`. No recompute needed. |
| `dirty` | An upstream change has propagated to this node. Recompute required. |
| `validating` | A compute job is currently revalidating this node. |
| `failed` | Recompute failed. `validation_msg` contains the error. |
### 4.1 State Transitions
```
clean → dirty (upstream change detected, or MarkDirty called)
dirty → validating (compute job claims this node)
validating → clean (recompute succeeded, properties_hash updated)
validating → failed (recompute produced an error)
failed → dirty (upstream change detected, retry possible)
dirty → clean (properties_hash matches previous -- memoization shortcut)
```
### 4.2 Dirty Propagation
When a node is marked dirty, all downstream nodes in its forward cone are also marked dirty. This is done atomically in a single recursive CTE:
```sql
WITH RECURSIVE forward_cone AS (
SELECT $1::uuid AS node_id
UNION
SELECT e.target_node_id
FROM dag_edges e
JOIN forward_cone fc ON fc.node_id = e.source_node_id
)
UPDATE dag_nodes SET validation_state = 'dirty', updated_at = now()
WHERE id IN (SELECT node_id FROM forward_cone)
AND validation_state = 'clean';
```
### 4.3 Memoization
Before marking a node dirty, the system can compare the new `properties_hash` against the stored value. If they match, the change did not affect this node's inputs, and propagation stops. This is the memoization boundary described in MULTI_USER_EDITS.md Section 5.2.
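In Go terms, the hash comparison that seeds dirty propagation might look like the following sketch; `changedNodes` is illustrative, not the server's actual API:

```go
package main

import (
	"fmt"
	"sort"
)

// changedNodes compares stored properties_hash values against a freshly
// synced set and returns the node keys whose inputs changed -- the roots
// from which dirty state propagates. New nodes (no stored hash) count as
// changed; nodes with an unchanged hash are the memoization boundary.
func changedNodes(stored, synced map[string]string) []string {
	var changed []string
	for key, newHash := range synced {
		if oldHash, ok := stored[key]; !ok || oldHash != newHash {
			changed = append(changed, key)
		}
	}
	sort.Strings(changed) // deterministic order for logging
	return changed
}

func main() {
	stored := map[string]string{"Sketch001": "a1", "Pad003": "b2"}
	synced := map[string]string{"Sketch001": "a1", "Pad003": "c3"}
	fmt.Println(changedNodes(stored, synced))
}
```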
---
## 5. Graph Queries
### 5.1 Forward Cone
Returns all nodes downstream of a given node -- everything that would be affected if the source node changes. Used for interference detection: if two users' forward cones overlap, there is potential interference.
```sql
WITH RECURSIVE forward_cone AS (
SELECT target_node_id AS node_id
FROM dag_edges WHERE source_node_id = $1
UNION
SELECT e.target_node_id
FROM dag_edges e
JOIN forward_cone fc ON fc.node_id = e.source_node_id
)
SELECT n.* FROM dag_nodes n JOIN forward_cone fc ON n.id = fc.node_id;
```
### 5.2 Backward Cone
Returns all nodes upstream of a given node -- everything the target node depends on.
### 5.3 Dirty Subgraph
Returns all nodes for a given item where `validation_state != 'clean'`, along with their edges. This is the input to an incremental validation job.
### 5.4 Cycle Detection
Before adding an edge, check that it would not create a cycle. Uses the same recursive ancestor-walk pattern as `HasCycle` in `internal/db/relationships.go`.
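The same check, sketched in-memory in Go (the server runs it in SQL); `wouldCreateCycle` and the adjacency representation are illustrative:

```go
package main

import "fmt"

// wouldCreateCycle reports whether adding an edge source->target would
// create a cycle, by checking whether source is already reachable from
// target through existing edges. edges maps a node to its downstream
// dependents, matching the DAG's direction convention.
func wouldCreateCycle(edges map[string][]string, source, target string) bool {
	if source == target {
		return true // self-edges are forbidden outright
	}
	seen := map[string]bool{}
	stack := []string{target}
	for len(stack) > 0 {
		n := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if n == source {
			return true
		}
		if seen[n] {
			continue
		}
		seen[n] = true
		stack = append(stack, edges[n]...)
	}
	return false
}

func main() {
	edges := map[string][]string{"Sketch001": {"Pad001"}, "Pad001": {"Fillet001"}}
	fmt.Println(wouldCreateCycle(edges, "Fillet001", "Sketch001"))
}
```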
---
## 6. DAG Sync
Clients push the full feature DAG to Silo via `PUT /api/items/{partNumber}/dag`. The sync payload is a JSON document:
```json
{
"revision": 3,
"nodes": [
{
"key": "Sketch001",
"type": "sketch",
"properties_hash": "a1b2c3...",
"metadata": {
"coordinates": [[0, 0], [10, 0], [10, 5]],
"constraints": ["horizontal", "vertical"]
}
},
{
"key": "Pad003",
"type": "pad",
"properties_hash": "d4e5f6...",
"metadata": {
"length": 15.0,
"direction": [0, 0, 1]
}
}
],
"edges": [
{
"source": "Sketch001",
"target": "Pad003",
"type": "depends_on"
}
]
}
```
The server processes this within a single transaction:
1. Upsert all nodes (matched by `item_id + revision_number + node_key`).
2. Replace all edges for this item/revision.
3. Compare new `properties_hash` values against stored values to detect changes.
4. Mark changed nodes and their forward cones dirty.
5. Publish `dag.updated` SSE event.
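Before step 1 runs, the payload can be sanity-checked. A hedged Go sketch of such a pre-flight pass, enforcing the no-self-edge constraint from Section 3.2 and rejecting dangling edge references; the types and function name are illustrative:

```go
package main

import "fmt"

type syncNode struct{ Key, Type, Hash string }
type syncEdge struct{ Source, Target string }

// validateSync checks payload integrity before the transaction begins:
// every edge must reference a node present in the same payload, and no
// edge may point at itself.
func validateSync(nodes []syncNode, edges []syncEdge) error {
	known := map[string]bool{}
	for _, n := range nodes {
		known[n.Key] = true
	}
	for _, e := range edges {
		if e.Source == e.Target {
			return fmt.Errorf("self-edge on %q", e.Source)
		}
		if !known[e.Source] || !known[e.Target] {
			return fmt.Errorf("edge %s->%s references unknown node", e.Source, e.Target)
		}
	}
	return nil
}

func main() {
	nodes := []syncNode{{Key: "Sketch001"}, {Key: "Pad003"}}
	edges := []syncEdge{{Source: "Sketch001", Target: "Pad003"}}
	fmt.Println(validateSync(nodes, edges))
}
```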
---
## 7. Interference Detection
When a user registers an edit context (MULTI_USER_EDITS.md Section 3.1), the server:
1. Looks up the node(s) being edited by `node_key` within the item's current revision.
2. Computes the forward cone for those nodes.
3. Compares the cone against all active edit sessions' cones.
4. Classifies interference:
- **No overlap** → no interference, fully concurrent.
- **Overlap, different objects** → soft interference, visual indicator via SSE.
- **Same object, same edit type** → hard interference, edit blocked.
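The classification above can be sketched as a pure function; names are illustrative, and the real server also compares edit types for the hard case, which this sketch omits:

```go
package main

import "fmt"

// classifyInterference compares two edit sessions, each described by the
// node it edits plus that node's forward cone, and returns "none",
// "soft", or "hard" per the rules in Section 7.
func classifyInterference(nodeA string, coneA map[string]bool, nodeB string, coneB map[string]bool) string {
	if nodeA == nodeB {
		return "hard" // same object (edit-type comparison omitted here)
	}
	// A session's footprint is its edited node plus its forward cone.
	footA := map[string]bool{nodeA: true}
	for n := range coneA {
		footA[n] = true
	}
	if footA[nodeB] {
		return "soft"
	}
	for n := range coneB {
		if footA[n] {
			return "soft"
		}
	}
	return "none"
}

func main() {
	fmt.Println(classifyInterference("Sketch001", map[string]bool{"Pad001": true},
		"Fillet001", map[string]bool{"Pad001": true}))
}
```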
---
## 8. REST API
All endpoints are under `/api/items/{partNumber}` and require authentication.
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/dag` | viewer | Get full feature DAG for current revision |
| `GET` | `/dag/forward-cone/{nodeKey}` | viewer | Get forward dependency cone |
| `GET` | `/dag/dirty` | viewer | Get dirty subgraph |
| `PUT` | `/dag` | editor | Sync full feature tree (from client or runner) |
| `POST` | `/dag/mark-dirty/{nodeKey}` | editor | Manually mark a node and its cone dirty |
---
## 9. References
- [MULTI_USER_EDITS.md](MULTI_USER_EDITS.md) -- Full multi-user editing specification
- [WORKERS.md](WORKERS.md) -- Worker/runner system that executes validation jobs
- [ROADMAP.md](ROADMAP.md) -- Tier 0 Dependency DAG entry


@@ -0,0 +1,395 @@
# DAG Client Integration Contract
**Status:** Draft
**Last Updated:** 2026-02-13
This document describes what silo-mod and Headless Create runners need to implement to integrate with the Silo dependency DAG and worker system.
---
## 1. Overview
The DAG system has two client-side integration points:
1. **silo-mod workbench** (desktop) -- pushes DAG data to Silo on save or revision create.
2. **silorunner + silo-mod** (headless) -- extracts DAGs, validates features, and exports geometry as compute jobs.
Both share the same Python codebase in the silo-mod repository. Desktop users call the code interactively; runners call it headlessly via `create --console`.
---
## 2. DAG Sync Payload
Clients push feature trees to Silo via:
```
PUT /api/items/{partNumber}/dag
Authorization: Bearer <user_token or runner_token>
Content-Type: application/json
```
### 2.1 Request Body
```json
{
"revision_number": 3,
"nodes": [
{
"node_key": "Sketch001",
"node_type": "sketch",
"properties_hash": "a1b2c3d4e5f6...",
"metadata": {
"label": "Base Profile",
"constraint_count": 12
}
},
{
"node_key": "Pad001",
"node_type": "pad",
"properties_hash": "f6e5d4c3b2a1...",
"metadata": {
"label": "Main Extrusion",
"length": 25.0
}
}
],
"edges": [
{
"source_key": "Sketch001",
"target_key": "Pad001",
"edge_type": "depends_on"
}
]
}
```
### 2.2 Field Reference
**Nodes:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `node_key` | string | yes | Unique within item+revision. Use Create's internal object name (e.g. `Sketch001`, `Pad003`). |
| `node_type` | string | yes | One of: `sketch`, `pad`, `pocket`, `fillet`, `chamfer`, `constraint`, `body`, `part`, `datum`. |
| `properties_hash` | string | no | SHA-256 hex digest of the node's parametric inputs. Used for memoization. |
| `validation_state` | string | no | One of: `clean`, `dirty`, `validating`, `failed`. Defaults to `clean`. |
| `metadata` | object | no | Arbitrary key-value pairs for display or debugging. |
**Edges:**
| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `source_key` | string | yes | The node that is depended upon. |
| `target_key` | string | yes | The node that depends on the source. |
| `edge_type` | string | no | One of: `depends_on` (default), `references`, `constrains`. |
**Direction convention:** Edges point from dependency to dependent. If Pad001 depends on Sketch001, the edge is `source_key: "Sketch001"`, `target_key: "Pad001"`.
### 2.3 Response
```json
{
"synced": true,
"node_count": 15,
"edge_count": 14
}
```
---
## 3. Computing properties_hash
The `properties_hash` enables memoization -- if a node's inputs haven't changed since the last validation, it can be skipped. Computing it:
```python
import hashlib
import json
def compute_properties_hash(feature_obj):
    """Hash the parametric inputs of a Create feature."""
    inputs = {}
    if feature_obj.TypeId == "Sketcher::SketchObject":
        # Hash geometry + constraints
        inputs["geometry_count"] = feature_obj.GeometryCount
        inputs["constraint_count"] = feature_obj.ConstraintCount
        # exportBrep() writes to a file; exportBrepToString() returns the data
        inputs["geometry"] = feature_obj.Shape.exportBrepToString()
    elif feature_obj.TypeId == "PartDesign::Pad":
        inputs["length"] = feature_obj.Length.Value
        inputs["type"] = str(feature_obj.Type)
        inputs["reversed"] = feature_obj.Reversed
        inputs["sketch"] = feature_obj.Profile[0].Name
    # ... other feature types
    canonical = json.dumps(inputs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```
The exact inputs per feature type are determined by what parametric values affect the feature's geometry. Include anything that, if changed, would require recomputation.
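On the consuming side, the comparison is a straightforward diff against the previous revision's hashes. A minimal sketch -- `nodes_needing_validation` and the shape of `previous_hashes` are illustrative, assuming the prior hashes have already been fetched:

```python
def nodes_needing_validation(nodes, previous_hashes):
    """Return node_keys whose parametric inputs changed since the last sync.

    previous_hashes maps node_key -> properties_hash from the prior revision.
    New nodes (no previous hash) are included, since they were never validated.
    """
    return [
        n["node_key"]
        for n in nodes
        if previous_hashes.get(n["node_key"]) != n.get("properties_hash")
    ]
```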
---
## 4. Feature Tree Walking
To extract the DAG from a Create document:
```python
import FreeCAD
def extract_dag(doc):
    """Walk a Create document and return nodes + edges."""
    nodes = []
    edges = []
    for obj in doc.Objects:
        # Skip non-feature objects
        if not hasattr(obj, "TypeId"):
            continue
        node_type = classify_type(obj.TypeId)
        if node_type is None:
            continue
        nodes.append({
            "node_key": obj.Name,
            "node_type": node_type,
            "properties_hash": compute_properties_hash(obj),
            "metadata": {
                "label": obj.Label,
                "type_id": obj.TypeId,
            },
        })
        # Walk dependencies via OutList (the objects this one depends on).
        # Note: InList is the reverse direction -- objects that depend on this one.
        for dep in obj.OutList:
            if hasattr(dep, "TypeId") and classify_type(dep.TypeId):
                edges.append({
                    "source_key": dep.Name,
                    "target_key": obj.Name,
                    "edge_type": "depends_on",
                })
    return nodes, edges

def classify_type(type_id):
    """Map Create TypeIds to DAG node types."""
    mapping = {
        "Sketcher::SketchObject": "sketch",
        "PartDesign::Pad": "pad",
        "PartDesign::Pocket": "pocket",
        "PartDesign::Fillet": "fillet",
        "PartDesign::Chamfer": "chamfer",
        "PartDesign::Body": "body",
        "Part::Feature": "part",
        "Sketcher::SketchConstraint": "constraint",
    }
    return mapping.get(type_id)
```
---
## 5. When to Push DAG Data
Push the DAG to Silo in these scenarios:
| Event | Trigger | Who |
|-------|---------|-----|
| User saves in silo-mod | On save callback | Desktop silo-mod workbench |
| User creates a revision | After `POST /api/items/{pn}/revisions` succeeds | Desktop silo-mod workbench |
| Runner extracts DAG | After `create-dag-extract` job completes | silorunner via `PUT /api/runner/jobs/{id}/dag` |
| Runner validates | After `create-validate` job, push updated validation states | silorunner via `PUT /api/runner/jobs/{id}/dag` |
---
## 6. Runner Entry Points
silo-mod must provide these Python entry points for headless invocation:
### 6.1 silo.runner.dag_extract
Extracts the feature DAG from a Create file and writes it as JSON.
```python
# silo/runner.py
import json

import FreeCAD

# extract_dag / classify_type as defined in Section 4

def dag_extract(input_path, output_path):
    """
    Extract feature DAG from a Create file.

    Args:
        input_path: Path to the .kc (Kindred Create) file.
        output_path: Path to write the JSON output.

    Output JSON format:
        {
          "nodes": [...],  // Same format as DAG sync payload
          "edges": [...]
        }
    """
    doc = FreeCAD.openDocument(input_path)
    nodes, edges = extract_dag(doc)
    with open(output_path, "w") as f:
        json.dump({"nodes": nodes, "edges": edges}, f)
    FreeCAD.closeDocument(doc.Name)
```
### 6.2 silo.runner.validate
Rebuilds all features and reports pass/fail per node.
```python
def validate(input_path, output_path):
    """
    Validate a Create file by rebuilding all features.

    Output JSON format:
        {
          "valid": true/false,
          "nodes": [
            {
              "node_key": "Pad001",
              "state": "clean",         // or "failed"
              "message": null,          // error message if failed
              "properties_hash": "..."
            }
          ]
        }
    """
    doc = FreeCAD.openDocument(input_path)
    doc.recompute()
    results = []
    all_valid = True
    for obj in doc.Objects:
        if not hasattr(obj, "TypeId"):
            continue
        node_type = classify_type(obj.TypeId)
        if node_type is None:
            continue
        state = "clean"
        message = None
        if hasattr(obj, "isValid") and not obj.isValid():
            state = "failed"
            message = f"Feature {obj.Label} failed to recompute"
            all_valid = False
        results.append({
            "node_key": obj.Name,
            "state": state,
            "message": message,
            "properties_hash": compute_properties_hash(obj),
        })
    with open(output_path, "w") as f:
        json.dump({"valid": all_valid, "nodes": results}, f)
    FreeCAD.closeDocument(doc.Name)
```
### 6.3 silo.runner.export
Exports geometry to STEP, IGES, or other formats.
```python
import Part

def export(input_path, output_path, format="step"):
    """
    Export a Create file to an external format.

    Args:
        input_path: Path to the .kc file.
        output_path: Path to write the exported file.
        format: Export format ("step", "iges", "stl", "obj").
    """
    if format not in ("step", "iges", "stl", "obj"):
        raise ValueError(f"unsupported export format: {format}")
    doc = FreeCAD.openDocument(input_path)
    shapes = [obj.Shape for obj in doc.Objects if hasattr(obj, "Shape")]
    compound = Part.makeCompound(shapes)
    # Part.export chooses the writer from output_path's extension, so the
    # caller must pass an output path ending in the matching extension.
    Part.export([compound], output_path)
    FreeCAD.closeDocument(doc.Name)
```
---
## 7. Headless Invocation
The `silorunner` binary shells out to Create (with silo-mod installed):
```bash
# DAG extraction
create --console -e "from silo.runner import dag_extract; dag_extract('/tmp/job/part.kc', '/tmp/job/dag.json')"
# Validation
create --console -e "from silo.runner import validate; validate('/tmp/job/part.kc', '/tmp/job/result.json')"
# Export
create --console -e "from silo.runner import export; export('/tmp/job/part.kc', '/tmp/job/output.step', 'step')"
```
**Prerequisites:** The runner host must have:
- Headless Create installed (Kindred's fork of FreeCAD)
- silo-mod installed as a Create addon (so `from silo.runner import ...` works)
- No display server required -- `--console` mode is headless
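A runner can assemble these command lines programmatically. A sketch under stated assumptions -- `build_create_command` and `run_entry_point` are hypothetical helpers, not part of silorunner (which is a Go binary), and the 600-second timeout is an arbitrary default:

```python
import subprocess

def build_create_command(entry_point, *args):
    """Build the argv for invoking a silo.runner entry point headlessly."""
    call = ", ".join(repr(a) for a in args)
    expr = f"from silo.runner import {entry_point}; {entry_point}({call})"
    return ["create", "--console", "-e", expr]

def run_entry_point(entry_point, *args, timeout=600):
    """Run the entry point and capture stdout/stderr for job logs."""
    # timeout is an assumed default; tune it per job definition
    return subprocess.run(build_create_command(entry_point, *args),
                          capture_output=True, text=True, timeout=timeout)
```

Passing arguments through `repr()` keeps paths correctly quoted inside the `-e` expression.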
---
## 8. Validation Result Handling
After a runner completes a `create-validate` job, it should:
1. Read the result JSON.
2. Push updated validation states via `PUT /api/runner/jobs/{jobID}/dag`:
```json
{
"revision_number": 3,
"nodes": [
{"node_key": "Sketch001", "node_type": "sketch", "validation_state": "clean", "properties_hash": "abc..."},
{"node_key": "Pad001", "node_type": "pad", "validation_state": "failed", "properties_hash": "def..."}
],
"edges": [
{"source_key": "Sketch001", "target_key": "Pad001"}
]
}
```
3. Complete the job via `POST /api/runner/jobs/{jobID}/complete` with the summary result.
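Steps 1-2 amount to translating the validate result into the DAG sync node format. An illustrative translation -- the `node_types` lookup is an assumption, since the validate result carries no `node_type` and it must come from a prior extraction pass:

```python
def result_to_dag_nodes(result, node_types):
    """Map a create-validate result JSON to DAG sync payload nodes.

    result: the validate() output ({"valid": ..., "nodes": [...]}).
    node_types: node_key -> node_type mapping from the extraction pass
                (assumed available; "part" is an arbitrary fallback).
    """
    return [
        {
            "node_key": r["node_key"],
            "node_type": node_types.get(r["node_key"], "part"),
            # validate() only emits "clean" or "failed", both valid
            # validation_state values
            "validation_state": r["state"],
            "properties_hash": r["properties_hash"],
        }
        for r in result["nodes"]
    ]
```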
---
## 9. SSE Events
Clients should listen for these events on `GET /api/events`:
| Event | Payload | When |
|-------|---------|------|
| `dag.updated` | `{item_id, part_number, revision_number, node_count, edge_count}` | After any DAG sync |
| `dag.validated` | `{item_id, part_number, valid, failed_count}` | After validation completes |
| `job.created` | `{job_id, definition_name, trigger, item_id}` | Job auto-triggered or manually created |
| `job.claimed` | `{job_id, runner_id, runner}` | Runner claims a job |
| `job.progress` | `{job_id, progress, message}` | Runner reports progress |
| `job.completed` | `{job_id, runner_id}` | Job finishes successfully |
| `job.failed` | `{job_id, runner_id, error}` | Job fails |
| `job.cancelled` | `{job_id, cancelled_by}` | Job cancelled by user |
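A minimal line-oriented SSE parser for consuming these events -- illustrative only: it handles `event:`/`data:` fields and blank-line dispatch, ignores `id:`/`retry:`/comments, and a real client may prefer an SSE library:

```python
import json

def iter_sse_events(lines):
    """Yield (event_name, payload_dict) pairs from SSE text lines."""
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            # Blank line dispatches the accumulated event
            yield event, json.loads("\n".join(data))
            event, data = None, []
```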
---
## 10. Cross-Item Edges
For assembly constraints that reference geometry in child parts (e.g. a mate constraint between two parts), use the `dag_cross_edges` table. These edges bridge the BOM DAG and the feature DAG.
Cross-item edges are **not** included in the standard `PUT /dag` sync. They will be managed through a dedicated endpoint in a future iteration once the assembly constraint model in Create/silo-mod is finalized.
For now, the DAG sync covers intra-item dependencies only. Assembly-level interference detection uses the BOM DAG (`relationships` table) combined with per-item feature DAGs.
@@ -4,7 +4,7 @@
> instructions. This document covers ongoing maintenance and operations for an
> existing deployment.
This guide covers deploying Silo to a dedicated VM using external PostgreSQL and MinIO services.
This guide covers deploying Silo to a dedicated VM using external PostgreSQL and local filesystem storage.
## Table of Contents
@@ -26,28 +26,25 @@ This guide covers deploying Silo to a dedicated VM using external PostgreSQL and
│ │ silod │ │
│ │ (Silo API Server) │ │
│ │ :8080 │ │
│ │ Files: /opt/silo/data │ │
│ └───────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────┐ ┌─────────────────────────────────┐
│ psql.example.internal │ │ minio.example.internal │
│ PostgreSQL 16 │ │ MinIO S3 │
│ :5432 │ │ :9000 (API) │
│ │ │ :9001 (Console) │
└─────────────────────────┘ └─────────────────────────────────┘
┌─────────────────────────┐
│ psql.example.internal │
│ PostgreSQL 16 │
│ :5432 │
└─────────────────────────┘
```
## External Services
The following external services are already configured:
| Service | Host | Database/Bucket | User |
|---------|------|-----------------|------|
| Service | Host | Database | User |
|---------|------|----------|------|
| PostgreSQL | psql.example.internal:5432 | silo | silo |
| MinIO | minio.example.internal:9000 | silo-files | silouser |
Migrations have been applied to the database.
Files are stored on the local filesystem at `/opt/silo/data`. Migrations have been applied to the database.
---
@@ -107,21 +104,15 @@ Fill in the values:
# Database credentials (psql.example.internal)
SILO_DB_PASSWORD=your-database-password
# MinIO credentials (minio.example.internal)
SILO_MINIO_ACCESS_KEY=silouser
SILO_MINIO_SECRET_KEY=your-minio-secret-key
```
### Verify External Services
Before deploying, verify connectivity to external services:
Before deploying, verify connectivity to PostgreSQL:
```bash
# Test PostgreSQL
psql -h psql.example.internal -U silo -d silo -c 'SELECT 1'
# Test MinIO
curl -I http://minio.example.internal:9000/minio/health/live
```
---
@@ -183,6 +174,7 @@ sudo -E /opt/silo/src/scripts/deploy.sh
| File | Purpose |
|------|---------|
| `/opt/silo/bin/silod` | Server binary |
| `/opt/silo/data/` | File storage root |
| `/opt/silo/src/` | Git repository checkout |
| `/etc/silo/config.yaml` | Server configuration |
| `/etc/silo/silod.env` | Environment variables (secrets) |
@@ -242,7 +234,7 @@ sudo journalctl -u silod --since "2024-01-15 10:00:00"
# Basic health check
curl http://localhost:8080/health
# Full readiness check (includes DB and MinIO)
# Full readiness check (includes DB)
curl http://localhost:8080/ready
```
@@ -318,24 +310,6 @@ psql -h psql.example.internal -U silo -d silo -f /opt/silo/src/migrations/008_ne
3. Check `pg_hba.conf` on PostgreSQL server allows connections from this host.
### Connection Refused to MinIO
1. Test network connectivity:
```bash
nc -zv minio.example.internal 9000
```
2. Test with curl:
```bash
curl -I http://minio.example.internal:9000/minio/health/live
```
3. Check SSL settings in config match MinIO setup:
```yaml
storage:
use_ssl: true # or false
```
### Health Check Fails
```bash
@@ -345,7 +319,9 @@ curl -v http://localhost:8080/ready
# If ready fails but health passes, check external services
psql -h psql.example.internal -U silo -d silo -c 'SELECT 1'
curl http://minio.example.internal:9000/minio/health/live
# Check file storage directory
ls -la /opt/silo/data
```
### Build Fails
@@ -460,10 +436,9 @@ sudo systemctl reload nginx
- [ ] `/etc/silo/silod.env` has mode 600 (`chmod 600`)
- [ ] Database password is strong and unique
- [ ] MinIO credentials are specific to silo (not admin)
- [ ] SSL/TLS enabled for PostgreSQL (`sslmode: require`)
- [ ] SSL/TLS enabled for MinIO (`use_ssl: true`) if available
- [ ] HTTPS enabled via nginx reverse proxy
- [ ] File storage directory (`/opt/silo/data`) owned by `silo` user with mode 750
- [ ] Silod listens on localhost only (`host: 127.0.0.1`)
- [ ] Firewall allows only ports 80, 443 (not 8080)
- [ ] Service runs as non-root `silo` user
@@ -76,7 +76,7 @@ See [ROADMAP.md](ROADMAP.md) for the platform roadmap and dependency tier struct
| Append-only revision history | Complete | `internal/db/items.go` |
| Sequential revision numbering | Complete | Database trigger |
| Property snapshots (JSONB) | Complete | `revisions.properties` |
| File versioning (MinIO) | Complete | `internal/storage/` |
| File storage (filesystem) | Complete | `internal/storage/` |
| SHA256 checksums | Complete | Captured on upload |
| Revision comments | Complete | `revisions.comment` |
| User attribution | Complete | `revisions.created_by` |
@@ -93,7 +93,7 @@ CREATE TABLE revisions (
revision_number INTEGER NOT NULL,
properties JSONB NOT NULL DEFAULT '{}',
file_key TEXT,
file_version TEXT, -- MinIO version ID
file_version TEXT, -- storage version ID
file_checksum TEXT, -- SHA256
file_size BIGINT,
thumbnail_key TEXT,
@@ -283,7 +283,7 @@ Effort: Medium | Priority: Low | Risk: Low
**Changes:**
- Add thumbnail generation on file upload
- Store in MinIO at `thumbnails/{part_number}/rev{n}.png`
- Store at `thumbnails/{part_number}/rev{n}.png`
- Expose via `GET /api/items/{pn}/thumbnail/{rev}`
---
@@ -377,7 +377,7 @@ internal/
relationships.go # BOM repository
projects.go # Project repository
storage/
storage.go # MinIO file storage helpers
storage.go # File storage helpers
migrations/
001_initial.sql # Core schema
...
@@ -572,7 +572,7 @@ Reporting capabilities are absent. Basic reports (item counts, revision activity
| Feature | SOLIDWORKS PDM | Silo Status | Priority | Complexity |
|---------|---------------|-------------|----------|------------|
| File versioning | Automatic | Full (MinIO) | - | - |
| File versioning | Automatic | Full (filesystem) | - | - |
| File preview | Thumbnails, 3D preview | None | Medium | Complex |
| File conversion | PDF, DXF generation | None | Medium | Complex |
| Replication | Multi-site sync | None | Low | Complex |
@@ -3,7 +3,7 @@
This guide covers two installation methods:
- **[Option A: Docker Compose](#option-a-docker-compose)** — self-contained stack with all services. Recommended for evaluation, small teams, and environments where Docker is the standard.
- **[Option B: Daemon Install](#option-b-daemon-install-systemd--external-services)** — systemd service with external PostgreSQL, MinIO, and optional LDAP/nginx. Recommended for production deployments integrated with existing infrastructure.
- **[Option B: Daemon Install](#option-b-daemon-install-systemd--external-services)** — systemd service with external PostgreSQL and optional LDAP/nginx. Files are stored on the local filesystem. Recommended for production deployments integrated with existing infrastructure.
Both methods produce the same result: a running Silo server with a web UI, REST API, and authentication.
@@ -48,7 +48,7 @@ Regardless of which method you choose:
## Option A: Docker Compose
A single Docker Compose file runs everything: PostgreSQL, MinIO, OpenLDAP, and Silo. An optional nginx container can be enabled for reverse proxying.
A single Docker Compose file runs everything: PostgreSQL, OpenLDAP, and Silo. Files are stored on the local filesystem. An optional nginx container can be enabled for reverse proxying.
### A.1 Prerequisites
@@ -80,7 +80,6 @@ The setup script generates credentials and configuration files:
It prompts for:
- Server domain (default: `localhost`)
- PostgreSQL password (auto-generated if you press Enter)
- MinIO credentials (auto-generated)
- OpenLDAP admin password and initial user (auto-generated)
- Silo local admin account (fallback when LDAP is unavailable)
@@ -106,7 +105,7 @@ Wait for all services to become healthy:
docker compose -f deployments/docker-compose.allinone.yaml ps
```
You should see `silo-postgres`, `silo-minio`, `silo-openldap`, and `silo-api` all in a healthy state.
You should see `silo-postgres`, `silo-openldap`, and `silo-api` all in a healthy state.
View logs:
@@ -124,7 +123,7 @@ docker compose -f deployments/docker-compose.allinone.yaml logs -f silo
# Health check
curl http://localhost:8080/health
# Readiness check (includes database and storage connectivity)
# Readiness check (includes database connectivity)
curl http://localhost:8080/ready
```
@@ -226,7 +225,7 @@ The Silo container is rebuilt from the updated source. Database migrations in `m
## Option B: Daemon Install (systemd + External Services)
This method runs Silo as a systemd service on a dedicated host, connecting to externally managed PostgreSQL, MinIO, and optionally LDAP services.
This method runs Silo as a systemd service on a dedicated host, connecting to externally managed PostgreSQL and optionally LDAP services. Files are stored on the local filesystem.
### B.1 Architecture Overview
@@ -240,21 +239,22 @@ This method runs Silo as a systemd service on a dedicated host, connecting to ex
│ ┌───────▼────────┐ │
│ │ silod │ │
│ │ (API server) │ │
│  └──┬─────────┬───┘  │
└─────┼─────────┼──────┘
      │         │
┌─────▼────────┐ ┌─────▼───────────┐
│ PostgreSQL 16 │ │   MinIO (S3)    │
│     :5432     │ │   :9000 API     │
└───────────────┘ │  :9001 Console  │
                  └─────────────────┘
│  │ Files: /opt/   │  │
│  │   silo/data    │  │
│  └──────┬─────────┘  │
└─────────┼────────────┘
          │
  ┌───────▼──────┐
  │ PostgreSQL 16│
  │    :5432     │
  └──────────────┘
```
### B.2 Prerequisites
- Linux host (Debian/Ubuntu or RHEL/Fedora/AlmaLinux)
- Root or sudo access
- Network access to your PostgreSQL and MinIO servers
- Network access to your PostgreSQL server
The setup script installs Go and other build dependencies automatically.
@@ -281,26 +281,6 @@ Verify:
psql -h YOUR_PG_HOST -U silo -d silo -c 'SELECT 1'
```
#### MinIO
Install MinIO and create a bucket and service account:
- [MinIO quickstart](https://min.io/docs/minio/linux/index.html)
```bash
# Using the MinIO client (mc):
mc alias set local http://YOUR_MINIO_HOST:9000 minioadmin minioadmin
mc mb local/silo-files
mc admin user add local silouser YOUR_MINIO_SECRET
mc admin policy attach local readwrite --user silouser
```
Verify:
```bash
curl -I http://YOUR_MINIO_HOST:9000/minio/health/live
```
#### LDAP / FreeIPA (Optional)
For LDAP authentication, you need an LDAP server with user and group entries. Options:
@@ -339,10 +319,10 @@ The script:
4. Clones the repository
5. Creates the environment file template
To override the default service hostnames:
To override the default database hostname:
```bash
SILO_DB_HOST=db.example.com SILO_MINIO_HOST=s3.example.com sudo -E bash scripts/setup-host.sh
SILO_DB_HOST=db.example.com sudo -E bash scripts/setup-host.sh
```
### B.5 Configure Credentials
@@ -357,10 +337,6 @@ sudo nano /etc/silo/silod.env
# Database
SILO_DB_PASSWORD=your-database-password
# MinIO
SILO_MINIO_ACCESS_KEY=silouser
SILO_MINIO_SECRET_KEY=your-minio-secret
# Authentication
SILO_SESSION_SECRET=generate-a-long-random-string
SILO_ADMIN_USERNAME=admin
@@ -379,7 +355,7 @@ Review the server configuration:
sudo nano /etc/silo/config.yaml
```
Update `database.host`, `storage.endpoint`, `server.base_url`, and authentication settings for your environment. See [CONFIGURATION.md](CONFIGURATION.md) for all options.
Update `database.host`, `storage.filesystem.root_dir`, `server.base_url`, and authentication settings for your environment. See [CONFIGURATION.md](CONFIGURATION.md) for all options.
### B.6 Deploy
@@ -412,10 +388,10 @@ sudo /opt/silo/src/scripts/deploy.sh --restart-only
sudo /opt/silo/src/scripts/deploy.sh --status
```
To override the target host or database host:
To override the target host:
```bash
SILO_DEPLOY_TARGET=silo.example.com SILO_DB_HOST=db.example.com sudo -E scripts/deploy.sh
SILO_DEPLOY_TARGET=silo.example.com sudo -E scripts/deploy.sh
```
### B.7 Set Up Nginx and TLS
docs/KC_SERVER.md (new file)
@@ -0,0 +1,485 @@
# .kc Server-Side Metadata Integration
**Status:** Draft
**Date:** February 2026
---
## 1. Purpose
When a `.kc` file is committed to Silo, the server extracts and indexes the `silo/` directory contents so that metadata is queryable, diffable, and streamable without downloading the full file. This document specifies the server-side processing pipeline, database storage, API endpoints, and SSE events that support the Create viewport widgets defined in [SILO_VIEWPORT.md](SILO_VIEWPORT.md).
The core principle: **the `.kc` file is the transport format; Silo is the index.** The `silo/` directory entries are extracted into database columns on commit and packed back into the ZIP on checkout. The server never modifies the FreeCAD standard zone (`Document.xml`, `.brp` files, `thumbnails/`).
---
## 2. Commit Pipeline
When a `.kc` file is uploaded via `POST /api/items/{partNumber}/file`, the server runs an extraction pipeline before returning success.
### 2.1 Pipeline Steps
```
Client uploads .kc file
            |
            v
+-----------------------------+
| 1. Store file to disk       |   (existing behavior -- unchanged)
|    items/{pn}/rev{N}.kc     |
+-----------------------------+
            |
            v
+-----------------------------+
| 2. Open ZIP, read silo/     |
|    Parse each entry         |
+-----------------------------+
            |
            v
+-----------------------------+
| 3. Validate manifest.json   |
|    - UUID matches item      |
|    - kc_version supported   |
|    - revision_hash present  |
+-----------------------------+
            |
            v
+-----------------------------+
| 4. Index metadata           |
|    - Upsert item_metadata   |
|    - Upsert dependencies    |
|    - Append history entry   |
|    - Snapshot approvals     |
|    - Register macros        |
|    - Register job defs      |
+-----------------------------+
            |
            v
+-----------------------------+
| 5. Broadcast SSE events     |
|    - revision.created       |
|    - metadata.updated       |
|    - bom.changed (if deps   |
|      differ from previous)  |
+-----------------------------+
            |
            v
Return 201 Created
```
### 2.2 Validation Rules
| Check | Failure response |
|-------|-----------------|
| `silo/manifest.json` missing | `400 Bad Request` -- file is `.fcstd` not `.kc` |
| `manifest.uuid` doesn't match item's UUID | `409 Conflict` -- wrong item |
| `manifest.kc_version` > server's supported version | `422 Unprocessable` -- client newer than server |
| `manifest.revision_hash` matches current head | `200 OK` (no-op, file unchanged) |
| Any `silo/` JSON fails to parse | `422 Unprocessable` with path and parse error |
If validation fails, the blob is still stored (the user uploaded it), but no metadata indexing occurs. The item's revision is created with a `metadata_error` flag so the web UI can surface the problem.
### 2.3 Backward Compatibility
Plain `.fcstd` files (no `silo/` directory) continue to work exactly as today -- stored on disk, revision created, no metadata extraction. The pipeline short-circuits at step 2 when no `silo/` directory is found.
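The short-circuit and the first three validation rules can be sketched as follows. This is illustrative Python -- the server implements the pipeline in Go under `internal/kc`, and the naive string comparison of `kc_version` is an assumption:

```python
import json
import zipfile

def inspect_kc(path_or_file, expected_uuid, supported_version="1.0"):
    """Sketch of the commit-time checks from Section 2.2.

    Returns "fcstd" for plain FreeCAD files (no silo/manifest.json),
    otherwise the parsed manifest. Raises ValueError for the 409/422 cases.
    """
    with zipfile.ZipFile(path_or_file) as zf:
        if "silo/manifest.json" not in zf.namelist():
            return "fcstd"  # plain .fcstd: skip metadata extraction
        manifest = json.loads(zf.read("silo/manifest.json"))
    if manifest["uuid"] != expected_uuid:
        raise ValueError("manifest UUID does not match item (409 Conflict)")
    # String comparison is a simplification; a real server would parse
    # the version into components before comparing.
    if manifest["kc_version"] > supported_version:
        raise ValueError("client kc_version newer than server (422)")
    return manifest
```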
---
## 3. Database Schema
### 3.1 `item_metadata` Table
Stores the indexed contents of `silo/metadata.json` as structured JSONB, searchable and filterable via the existing item query endpoints.
```sql
CREATE TABLE item_metadata (
item_id UUID PRIMARY KEY REFERENCES items(id) ON DELETE CASCADE,
schema_name TEXT,
tags TEXT[] NOT NULL DEFAULT '{}',
lifecycle_state TEXT NOT NULL DEFAULT 'draft',
fields JSONB NOT NULL DEFAULT '{}',
kc_version TEXT,
manifest_uuid UUID,
silo_instance TEXT,
revision_hash TEXT,
updated_at TIMESTAMPTZ DEFAULT now(),
updated_by TEXT
);
CREATE INDEX idx_item_metadata_tags ON item_metadata USING GIN (tags);
CREATE INDEX idx_item_metadata_lifecycle ON item_metadata (lifecycle_state);
CREATE INDEX idx_item_metadata_fields ON item_metadata USING GIN (fields);
```
On commit, the server upserts this row from `silo/manifest.json` and `silo/metadata.json`. The `fields` column contains the schema-driven key-value pairs exactly as they appear in the JSON.
### 3.2 `item_dependencies` Table
Stores the indexed contents of `silo/dependencies.json` -- the CAD-authoritative record of assembly relationships that originate from the model, as distinct from the server-edited BOM.
```sql
CREATE TABLE item_dependencies (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_item_id UUID REFERENCES items(id) ON DELETE CASCADE,
child_uuid UUID NOT NULL,
child_part_number TEXT,
child_revision INTEGER,
quantity DECIMAL,
label TEXT,
relationship TEXT NOT NULL DEFAULT 'component',
revision_number INTEGER NOT NULL,
created_at TIMESTAMPTZ DEFAULT now()
);
CREATE INDEX idx_item_deps_parent ON item_dependencies (parent_item_id);
CREATE INDEX idx_item_deps_child ON item_dependencies (child_uuid);
```
This table complements the existing `relationships` table. The `relationships` table is the server-authoritative BOM (editable via the web UI and API). The `item_dependencies` table is the CAD-authoritative record extracted from the file. BOM merge (per [BOM_MERGE.md](BOM_MERGE.md)) reconciles the two.
### 3.3 `item_approvals` Table
Stores the indexed contents of `silo/approvals.json`. Server-authoritative -- the `.kc` snapshot is a read cache.
```sql
CREATE TABLE item_approvals (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
item_id UUID REFERENCES items(id) ON DELETE CASCADE,
eco_number TEXT,
state TEXT NOT NULL DEFAULT 'draft',
updated_at TIMESTAMPTZ DEFAULT now(),
updated_by TEXT
);
CREATE TABLE approval_signatures (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
approval_id UUID REFERENCES item_approvals(id) ON DELETE CASCADE,
username TEXT NOT NULL,
role TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
signed_at TIMESTAMPTZ,
comment TEXT
);
```
These tables exist independent of `.kc` commits -- approvals are created and managed through the web UI and API. On `.kc` checkout, the current approval state is serialized into `silo/approvals.json` for offline display.
### 3.4 `item_macros` Table
Registers macros from `silo/macros/` for server-side discoverability and the future Macro Store module.
```sql
CREATE TABLE item_macros (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
item_id UUID REFERENCES items(id) ON DELETE CASCADE,
filename TEXT NOT NULL,
trigger TEXT NOT NULL DEFAULT 'manual',
content TEXT NOT NULL,
revision_number INTEGER NOT NULL,
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(item_id, filename)
);
```
---
## 4. API Endpoints
These endpoints serve the viewport widgets in Create. All are under `/api/items/{partNumber}` and follow the existing auth model.
### 4.1 Metadata
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/metadata` | viewer | Get indexed metadata (schema fields, tags, lifecycle) |
| `PUT` | `/metadata` | editor | Update metadata fields from client |
| `PATCH` | `/metadata/lifecycle` | editor | Transition lifecycle state |
| `PATCH` | `/metadata/tags` | editor | Add/remove tags |
**`GET /api/items/{partNumber}/metadata`**
Returns the indexed metadata for viewport display. This is the fast path -- reads from `item_metadata` rather than downloading and parsing the `.kc` ZIP.
```json
{
"schema_name": "mechanical-part-v2",
"lifecycle_state": "draft",
"tags": ["structural", "aluminum"],
"fields": {
"material": "6061-T6",
"finish": "anodized",
"weight_kg": 0.34,
"category": "bracket"
},
"manifest": {
"uuid": "550e8400-e29b-41d4-a716-446655440000",
"silo_instance": "https://silo.example.com",
"revision_hash": "a1b2c3d4e5f6",
"kc_version": "1.0"
},
"updated_at": "2026-02-13T20:30:00Z",
"updated_by": "joseph"
}
```
**`PUT /api/items/{partNumber}/metadata`**
Accepts a partial update of schema fields. The server merges into the existing `fields` JSONB. This is the write-back path for the Metadata Editor widget.
```json
{
"fields": {
"material": "7075-T6",
"weight_kg": 0.31
}
}
```
The server validates field names against the schema descriptor. Unknown fields are rejected with `422`.
**`PATCH /api/items/{partNumber}/metadata/lifecycle`**
Transitions lifecycle state. The server validates the transition is permitted (e.g., `draft` -> `review` is allowed, `released` -> `draft` is not without admin override).
```json
{ "state": "review" }
```
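A transition check might look like the sketch below. Only `draft -> review` (allowed) and `released -> draft` (forbidden without admin override) are stated above; the rest of the transition table is an assumption:

```python
# Illustrative transition table -- only draft->review (allowed) and
# released->draft (forbidden without override) come from this document;
# the remaining entries are assumed.
ALLOWED_TRANSITIONS = {
    "draft": {"review"},
    "review": {"draft", "released"},
    "released": {"obsolete"},
}

def can_transition(current, target, admin_override=False):
    """Return True if the lifecycle transition is permitted."""
    if admin_override:
        return True
    return target in ALLOWED_TRANSITIONS.get(current, set())
```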
### 4.2 Dependencies
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/dependencies` | viewer | Get CAD-extracted dependency list |
| `GET` | `/dependencies/resolve` | viewer | Resolve UUIDs to current part numbers and file status |
**`GET /api/items/{partNumber}/dependencies`**
Returns the raw dependency list from the last `.kc` commit.
**`GET /api/items/{partNumber}/dependencies/resolve`**
Returns the dependency list with each UUID resolved to its current part number, revision, and whether the file exists on disk. This is what the Dependency Table widget calls to populate the status column.
```json
{
"links": [
{
"uuid": "660e8400-...",
"part_number": "KC-BRK-0042",
"label": "Base Plate",
"revision": 2,
"quantity": 1,
"resolved": true,
"file_available": true
},
{
"uuid": "770e8400-...",
"part_number": "KC-HDW-0108",
"label": "M6 SHCS",
"revision": 1,
"quantity": 4,
"resolved": true,
"file_available": true
},
{
"uuid": "880e8400-...",
"part_number": null,
"label": "Cover Panel",
"revision": 1,
"quantity": 1,
"resolved": false,
"file_available": false
}
]
}
```
### 4.3 Approvals
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/approvals` | viewer | Get current approval state |
| `POST` | `/approvals` | editor | Create ECO / start approval workflow |
| `POST` | `/approvals/{id}/sign` | editor | Sign (approve/reject) |
These endpoints power the Approvals Viewer widget. The viewer is read-only in Create -- sign actions happen in the web UI, but the API exists for both.
### 4.4 Macros
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/macros` | viewer | List registered macros |
| `GET` | `/macros/{filename}` | viewer | Get macro source |
Read-only server-side. Macros are authored in Create and committed inside the `.kc`. The server indexes them for discoverability in the future Macro Store.
### 4.5 Existing Endpoints (unchanged)
The viewport widgets also consume these existing endpoints:
| Widget | Endpoint | Purpose |
|--------|----------|---------|
| History Viewer | `GET /api/items/{pn}/revisions` | Full revision list |
| History Viewer | `GET /api/items/{pn}/revisions/compare` | Property diff |
| Job Viewer | `GET /api/jobs?item={pn}&definition={name}&limit=1` | Last job run |
| Job Viewer | `POST /api/jobs` | Trigger job |
| Job Viewer | `GET /api/jobs/{id}/logs` | Job log |
| Manifest Viewer | `GET /api/items/{pn}` | Item details (UUID, etc.) |
No changes needed to these -- they already exist and return the data the widgets need.
---
## 5. Checkout Pipeline
When a client downloads a `.kc` via `GET /api/items/{partNumber}/file`, the server packs current server-side state into the `silo/` directory before serving the file. This ensures the client always gets the latest metadata, even if it was edited via the web UI since the last commit.
### 5.1 Pipeline Steps
```
Client requests file download
            |
            v
+-----------------------------+
| 1. Read .kc from disk       |
+-----------------------------+
            |
            v
+-----------------------------+
| 2. Pack silo/ from DB       |
|    - manifest.json (item)   |
|    - metadata.json (index)  |
|    - history.json (revs)    |
|    - approvals.json (ECO)   |
|    - dependencies.json      |
|    - macros/ (index)        |
|    - jobs/ (job defs)       |
+-----------------------------+
            |
            v
+-----------------------------+
| 3. Replace silo/ in ZIP     |
|    Remove old entries       |
|    Write packed entries     |
+-----------------------------+
            |
            v
Stream .kc to client
```
### 5.2 Packing Rules
| `silo/` entry | Source | Notes |
|---------------|--------|-------|
| `manifest.json` | `item_metadata` + `items` table | UUID from item, revision_hash from latest revision |
| `metadata.json` | `item_metadata.fields` + tags + lifecycle | Serialized from indexed columns |
| `history.json` | `revisions` table | Last 20 revisions for this item |
| `approvals.json` | `item_approvals` + `approval_signatures` | Current ECO state, omitted if no active ECO |
| `dependencies.json` | `item_dependencies` | Current revision's dependency list |
| `macros/*.py` | `item_macros` | All registered macros |
| `jobs/*.yaml` | `job_definitions` filtered by item type | Job definitions matching this item's trigger filters |
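The replace-`silo/`-entries step mirrors `Pack()` in `internal/kc/pack.go`. A Python sketch of the same idea -- illustrative only, since the real implementation is Go, and `silo_entries` is assumed to hold already-serialized bytes:

```python
import io
import zipfile

def repack_silo(kc_bytes, silo_entries):
    """Replace all silo/ entries in a .kc ZIP, preserving everything else.

    kc_bytes: the original .kc file as bytes.
    silo_entries: entry name (e.g. "metadata.json") -> serialized bytes.
    Copying via the original ZipInfo keeps each non-silo entry's timestamp
    and compression method intact.
    """
    src = zipfile.ZipFile(io.BytesIO(kc_bytes))
    out = io.BytesIO()
    with zipfile.ZipFile(out, "w") as dst:
        for info in src.infolist():
            if info.filename.startswith("silo/"):
                continue  # dropped: rewritten from DB state below
            dst.writestr(info, src.read(info.filename))
        for name, data in silo_entries.items():
            dst.writestr(f"silo/{name}", data)
    return out.getvalue()
```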
### 5.3 Caching
Packing the `silo/` directory on every download has a cost. To mitigate:
- **ETag header**: The response includes an ETag computed from the revision number + metadata `updated_at`. If the client sends `If-None-Match`, the server can return `304 Not Modified`.
- **Lazy packing**: If the `.kc` blob's `silo/manifest.json` revision_hash matches the current head *and* `item_metadata.updated_at` is older than the blob's upload time, skip repacking entirely -- the blob is already current.
---
## 6. SSE Events
The viewport widgets subscribe to SSE for live updates. These events are broadcast when server-side metadata changes, whether via `.kc` commit, web UI edit, or API call.
| Event | Payload | Trigger |
|-------|---------|---------|
| `metadata.updated` | `{part_number, changed_fields[], lifecycle_state, updated_by}` | Metadata PUT/PATCH |
| `metadata.lifecycle` | `{part_number, from_state, to_state, updated_by}` | Lifecycle transition |
| `metadata.tags` | `{part_number, added[], removed[]}` | Tag add/remove |
| `approval.created` | `{part_number, eco_number, state}` | ECO created |
| `approval.signed` | `{part_number, eco_number, user, role, status}` | Approver action |
| `approval.completed` | `{part_number, eco_number, final_state}` | All approvers acted |
| `dependencies.changed` | `{part_number, added[], removed[], changed[]}` | Dependency diff on commit |
Existing events (`revision.created`, `job.*`, `bom.changed`) continue to work as documented in [SPECIFICATION.md](SPECIFICATION.md) and [WORKERS.md](WORKERS.md).
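For illustration, the `metadata.updated` payload from the table can be assembled and framed as an SSE event like this (the helper name and wiring are assumptions, not the real broker API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// metadataUpdatedPayload builds the metadata.updated payload from the event
// table. Hypothetical helper; Silo's actual broker wiring differs.
func metadataUpdatedPayload(pn string, changed []string, state, user string) map[string]any {
	return map[string]any{
		"part_number":     pn,
		"changed_fields":  changed,
		"lifecycle_state": state,
		"updated_by":      user,
	}
}

func main() {
	payload := metadataUpdatedPayload("KC-00042", []string{"material"}, "draft", "forbes")
	b, _ := json.Marshal(payload)
	// An SSE frame as a browser EventSource would receive it.
	fmt.Printf("event: metadata.updated\ndata: %s\n\n", b)
}
```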
### 6.1 Widget Subscription Map
| Viewport widget | Subscribes to |
|-----------------|---------------|
| Manifest Viewer | -- (read-only, no live updates) |
| Metadata Editor | `metadata.updated`, `metadata.lifecycle`, `metadata.tags` |
| History Viewer | `revision.created` |
| Approvals Viewer | `approval.created`, `approval.signed`, `approval.completed` |
| Dependency Table | `dependencies.changed` |
| Job Viewer | `job.created`, `job.progress`, `job.completed`, `job.failed` |
| Macro Editor | -- (local-only until committed) |
---
## 7. Web UI Integration
The Silo web UI also benefits from indexed metadata. These are additions to existing pages, not new pages.
### 7.1 Items Page
The item detail panel gains a **Metadata** tab (alongside Main, Properties, Revisions, BOM, Where Used) showing the schema-driven form from `GET /api/items/{pn}/metadata`. Editable for editors.
### 7.2 Items List
New filterable columns: `lifecycle_state`, `tags`. The existing search endpoint gains metadata-aware filtering:
```
GET /api/items?lifecycle=released&tag=aluminum
GET /api/items/search?q=bracket&lifecycle=draft
```
### 7.3 Approvals Page
A new page accessible from the top navigation (visible when a future `approvals` module is enabled). Lists all active ECOs with their approval progress.
---
## 8. Migration
### 8.1 Database Migration
A single migration adds the `item_metadata`, `item_dependencies`, `item_approvals`, `approval_signatures`, and `item_macros` tables. Existing items have no metadata rows -- they're created on first `.kc` commit or via `PUT /api/items/{pn}/metadata`.
### 8.2 Backfill
For items that already have `.kc` files stored on disk (committed before this feature), an admin endpoint re-runs the extraction pipeline:
```
POST /api/admin/reindex-metadata
```
This iterates all items with `.kc` files, opens each ZIP, and indexes the `silo/` contents. Idempotent -- safe to run multiple times.
---
## 9. Implementation Order
| Phase | Server work | Supports client phase |
|-------|------------|----------------------|
| 1 | `item_metadata` table + `GET/PUT /metadata` + commit extraction | SILO_VIEWPORT Phase 1-2 (Manifest, Metadata) |
| 2 | Pack `silo/` on checkout + ETag caching | SILO_VIEWPORT Phase 1-3 |
| 3 | `item_dependencies` table + `/dependencies/resolve` | SILO_VIEWPORT Phase 5 (Dependency Table) |
| 4 | `item_macros` table + `/macros` endpoints | SILO_VIEWPORT Phase 6 (Macro Editor) |
| 5 | `item_approvals` tables + `/approvals` endpoints | SILO_VIEWPORT Phase 7 (Approvals Viewer) |
| 6 | SSE events for metadata/approvals/dependencies | SILO_VIEWPORT Phase 8 (Live integration) |
| 7 | Web UI metadata tab + list filters | Independent of client |
Phases 1-2 are prerequisites for the viewport to work with live data. Phases 3-6 can be built in parallel with client widget development. Phase 7 is web-UI-only and independent.
---
## 10. References
- [SILO_VIEWPORT.md](SILO_VIEWPORT.md) -- Client-side viewport widget specification
- [KC_SPECIFICATION.md](KC_SPECIFICATION.md) -- .kc file format specification
- [SPECIFICATION.md](SPECIFICATION.md) -- Silo server API reference
- [BOM_MERGE.md](BOM_MERGE.md) -- BOM merge rules (dependency reconciliation)
- [WORKERS.md](WORKERS.md) -- Job queue (job viewer data source)
- [MODULES.md](MODULES.md) -- Module system (approval module gating)
- [ROADMAP.md](ROADMAP.md) -- Platform roadmap tiers

@@ -170,11 +170,11 @@ Complete MVP and stabilize core functionality.
| Task | Description | Status |
|------|-------------|--------|
| Unit test suite | Core API, database, partnum, file, CSV/ODS handler tests | Complete (137 tests) |
| Date segment type | Implement `date` segment with strftime-style formatting | Complete (#79) |
| Part number validation | Validate format against schema on creation | Complete (#80) |
| Location CRUD API | Expose location hierarchy via REST | Not Started (#81) |
| Inventory API | Expose inventory operations via REST | Not Started (#82) |
| Unit test suite | Core API, database, partnum, file, CSV/ODS handler tests | Partial (~40%) |
| Date segment type | Implement `date` segment with strftime-style formatting | Not Started |
| Part number validation | Validate format against schema on creation | Not Started |
| Location CRUD API | Expose location hierarchy via REST | Not Started |
| Inventory API | Expose inventory operations via REST | Not Started |
**Success metrics:**
- All existing tests pass
@@ -187,9 +187,9 @@ Enable team collaboration (feeds into Tier 1 and Tier 4).
| Task | Description | Status |
|------|-------------|--------|
| Check-out locking | Pessimistic locks with timeout | Not Started (#87) |
| User/group management | Create, assign, manage users and groups | Not Started (#88) |
| Folder permissions | Read/write/delete per folder/project hierarchy | Not Started (#89) |
| Check-out locking | Pessimistic locks with timeout | Not Started |
| User/group management | Create, assign, manage users and groups | Not Started |
| Folder permissions | Read/write/delete per folder hierarchy | Not Started |
**Success metrics:**
- 5+ concurrent users supported
@@ -218,8 +218,8 @@ Improve findability and navigation (Tier 0 Web UI Shell).
| Task | Description | Status |
|------|-------------|--------|
| Advanced search UI | Web interface with filters and operators | Not Started (#90) |
| Saved searches | User-defined query favorites | Not Started (#91) |
| Advanced search UI | Web interface with filters and operators | Not Started |
| Saved searches | User-defined query favorites | Not Started |
**Success metrics:**
- Search returns results in <2 seconds
@@ -313,7 +313,7 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
- Rollback functionality
#### File Management
- MinIO integration with versioning
- Filesystem-based file storage
- File upload/download via REST API
- SHA256 checksums for integrity
- Storage path: `items/{partNumber}/rev{N}.FCStd`
@@ -367,18 +367,18 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
| Feature | Status | Notes |
|---------|--------|-------|
| Odoo ERP integration | Partial | Config and sync-log CRUD functional; push/pull sync operations are stubs |
| Date segment type | Complete | strftime-style formatting via Go time layout (#79) |
| Part number validation | Complete | Validates against schema on creation (#80) |
| Location hierarchy CRUD | Schema only | Tables exist, no API endpoints (#81) |
| Inventory tracking | Schema only | Tables exist, no API endpoints (#82) |
| Unit tests | Complete | 137 tests across 20 files covering api, db, ods, partnum, schema packages |
| Date segment type | Not started | Schema parser placeholder exists |
| Part number validation | Not started | API accepts but doesn't validate format |
| Location hierarchy CRUD | Schema only | Tables exist, no API endpoints |
| Inventory tracking | Schema only | Tables exist, no API endpoints |
| Unit tests | Partial | 11 Go test files across api, db, ods, partnum, schema packages |
---
## Appendix B: Phase 1 Detailed Tasks
### 1.1 MinIO Integration -- COMPLETE
- [x] MinIO service configured in Docker Compose
### 1.1 File Storage -- COMPLETE
- [x] Filesystem storage backend
- [x] File upload via REST API
- [x] File download via REST API (latest and by revision)
- [x] SHA256 checksums on upload
@@ -400,21 +400,18 @@ For full SOLIDWORKS PDM comparison tables, see [GAP_ANALYSIS.md Appendix C](GAP_
- [x] BOM ODS export
- [x] ODS item export/import/template
### 1.4 Unit Test Suite -- COMPLETE
- [x] Database connection and transaction tests
- [x] Item CRUD operation tests (including edge cases: duplicate keys, pagination, search)
- [x] Revision creation, retrieval, compare, rollback tests
- [x] Part number generation tests (including date segments, validation)
- [x] File upload/download handler tests
- [x] CSV import/export tests (dry-run, commit, BOM export)
- [x] ODS import/export tests (export, template, project sheet)
- [x] API endpoint tests (revisions, schemas, audit, auth tokens)
- [x] Item file CRUD tests
- [x] BOM handler tests (get, flat, cost, add, delete)
### 1.4 Unit Test Suite
- [ ] Database connection and transaction tests
- [ ] Item CRUD operation tests
- [ ] Revision creation and retrieval tests
- [ ] Part number generation tests
- [ ] File upload/download tests
- [ ] CSV import/export tests
- [ ] API endpoint tests
### 1.5 Missing Segment Types -- COMPLETE
- [x] Implement date segment type
- [x] Add strftime-style format support
### 1.5 Missing Segment Types
- [ ] Implement date segment type
- [ ] Add strftime-style format support
### 1.6 Location & Inventory APIs
- [ ] `GET /api/locations` - List locations

@@ -49,9 +49,9 @@ Silo treats **part numbering schemas as configuration, not code**. Multiple numb
┌───────────────┴───────────────┐
▼ ▼
┌─────────────────────────┐ ┌─────────────────────────────┐
│ PostgreSQL │ │ MinIO
│ PostgreSQL │ │ Local Filesystem
│ (psql.example.internal)│ │ - File storage │
│ - Item metadata │ │ - Versioned objects
│ - Item metadata │ │ - Revision files
│ - Relationships │ │ - Thumbnails │
│ - Revision history │ │ │
│ - Auth / Sessions │ │ │
@@ -64,7 +64,7 @@ Silo treats **part numbering schemas as configuration, not code**. Multiple numb
| Component | Technology | Notes |
|-----------|------------|-------|
| Database | PostgreSQL 16 | Existing instance at psql.example.internal |
| File Storage | MinIO | S3-compatible, versioning enabled |
| File Storage | Local filesystem | Files stored under configurable root directory |
| CLI & API Server | Go (1.24) | chi/v5 router, pgx/v5 driver, zerolog |
| Authentication | Multi-backend | Local (bcrypt), LDAP/FreeIPA, OIDC/Keycloak |
| Sessions | PostgreSQL pgxstore | alexedwards/scs, 24h lifetime |
@@ -83,7 +83,7 @@ An **item** is the fundamental entity. Items have:
- **Properties** (key-value pairs, schema-defined and custom)
- **Relationships** to other items
- **Revisions** (append-only history)
- **Files** (optional, stored in MinIO)
- **Files** (optional, stored on the local filesystem)
- **Location** (optional physical inventory location)
### 3.2 Database Schema (Conceptual)
@@ -115,7 +115,7 @@ CREATE TABLE revisions (
item_id UUID REFERENCES items(id) NOT NULL,
revision_number INTEGER NOT NULL,
properties JSONB NOT NULL, -- all properties at this revision
file_version TEXT, -- MinIO version ID if applicable
file_version TEXT, -- storage version ID if applicable
created_at TIMESTAMPTZ DEFAULT now(),
created_by TEXT, -- user identifier (future: LDAP DN)
comment TEXT,
@@ -345,7 +345,7 @@ CAD workbench and spreadsheet extension implementations are maintained in separa
### 5.1 File Storage Strategy
Files are stored as whole objects in MinIO with versioning enabled. Storage path convention: `items/{partNumber}/rev{N}.ext`. SHA-256 checksums are captured on upload for integrity verification.
Files are stored on the local filesystem under a configurable root directory. Storage path convention: `items/{partNumber}/rev{N}.ext`. SHA-256 checksums are captured on upload for integrity verification.
Future option: exploded storage (unpack ZIP-based CAD archives for better diffing).
@@ -439,7 +439,7 @@ Revisions are created explicitly by user action (not automatic):
### 7.3 Revision vs. File Version
- **Revision**: Silo metadata revision (tracked in PostgreSQL)
- **File Version**: MinIO object version (automatic on upload)
- **File Version**: File on disk corresponding to a revision
A single Silo revision may span multiple file uploads during editing. Only committed revisions create formal revision records.
@@ -603,7 +603,7 @@ See [AUTH.md](AUTH.md) for full architecture details and [AUTH_USER_GUIDE.md](AU
```
# Health (no auth)
GET /health # Basic health check
GET /ready # Readiness (DB + MinIO)
GET /ready # Readiness (DB)
# Auth (no auth required)
GET /login # Login page
@@ -624,8 +624,8 @@ GET /api/auth/tokens # List user's API to
POST /api/auth/tokens # Create API token
DELETE /api/auth/tokens/{id} # Revoke API token
# Presigned Uploads (editor)
POST /api/uploads/presign # Get presigned MinIO upload URL [editor]
# Direct Uploads (editor)
POST /api/uploads/presign # Get upload URL [editor]
# Schemas (read: viewer, write: editor)
GET /api/schemas # List all schemas
@@ -744,7 +744,7 @@ POST /api/inventory/{partNumber}/move
- [x] Part number generation engine
- [x] CLI tool (`cmd/silo`)
- [x] API server (`cmd/silod`) with 78 endpoints
- [x] MinIO integration for file storage with versioning
- [x] Filesystem-based file storage
- [x] BOM relationships (component, alternate, reference)
- [x] Multi-level BOM (recursive expansion with configurable depth)
- [x] Where-used queries (reverse parent lookup)

@@ -15,7 +15,7 @@
| Part number generator | Complete | Scoped sequences, category-based format |
| API server (`silod`) | Complete | 78 REST endpoints via chi/v5 |
| CLI tool (`silo`) | Complete | Item registration and management |
| MinIO file storage | Complete | Upload, download, versioning, checksums |
| Filesystem file storage | Complete | Upload, download, checksums |
| Revision control | Complete | Append-only history, rollback, comparison, status/labels |
| Project management | Complete | CRUD, many-to-many item tagging |
| CSV import/export | Complete | Dry-run validation, template generation |
@@ -29,7 +29,7 @@
| CSRF protection | Complete | nosurf on web forms |
| Fuzzy search | Complete | sahilm/fuzzy library |
| Web UI | Complete | React SPA (Vite + TypeScript), 6 pages, Catppuccin Mocha theme |
| File attachments | Complete | Presigned uploads, item file association, thumbnails |
| File attachments | Complete | Direct uploads, item file association, thumbnails |
| Odoo ERP integration | Partial | Config and sync-log CRUD functional; push/pull are stubs |
| Docker Compose | Complete | Dev and production configurations |
| Deployment scripts | Complete | setup-host, deploy, init-db, setup-ipa-nginx |
@@ -56,7 +56,7 @@ FreeCAD workbench and LibreOffice Calc extension are maintained in separate repo
| Service | Host | Status |
|---------|------|--------|
| PostgreSQL | psql.example.internal:5432 | Running |
| MinIO | localhost:9000 (API) / :9001 (console) | Configured |
| File Storage | /opt/silo/data (filesystem) | Configured |
| Silo API | localhost:8080 | Builds successfully |
---

docs/WORKERS.md (new file)
@@ -0,0 +1,364 @@
# Worker System Specification
**Status:** Draft
**Last Updated:** 2026-02-13
---
## 1. Purpose
The worker system provides async compute job execution for Silo. Jobs are defined as YAML files, managed by the Silo server, and executed by external runner processes. The system is general-purpose -- while DAG validation is the first use case, it supports any compute workload: geometry export, thumbnail rendering, FEA/CFD batch jobs, report generation, and data migration.
---
## 2. Architecture
```
YAML Job Definitions (files on disk, version-controllable)
                |
                v
Silo Server (parser, scheduler, state machine, REST API, SSE events)
                |
                v
Runners (silorunner binary, polls via REST, executes Headless Create)
```
**Three layers:**
1. **Job definitions** -- YAML files in a configurable directory (default `/etc/silo/jobdefs`). Each file defines a job type: what triggers it, what it operates on, what computation to perform, and what runner capabilities are required. These are the source of truth and can be version-controlled alongside other Silo config.
2. **Silo server** -- Parses YAML definitions on startup and upserts them into the `job_definitions` table. Creates job instances when triggers fire (revision created, BOM changed, manual). Manages job lifecycle, enforces timeouts, and broadcasts status via SSE.
3. **Runners** -- Separate `silorunner` processes that authenticate with Silo via API tokens, poll for available jobs, claim them atomically, execute the compute, and report results. A runner host must have Headless Create and silo-mod installed for geometry jobs.
---
## 3. Job Definition Format
Job definitions are YAML files with the following structure:
```yaml
job:
  name: assembly-validate
  version: 1
  description: "Validate assembly by rebuilding its dependency subgraph"
  trigger:
    type: revision_created        # revision_created, bom_changed, manual, schedule
    filter:
      item_type: assembly         # only trigger for assemblies
  scope:
    type: assembly                # item, assembly, project
  compute:
    type: validate                # validate, rebuild, diff, export, custom
    command: create-validate      # runner-side command identifier
    args:                         # passed to runner as JSON
      rebuild_mode: incremental
      check_interference: true
  runner:
    tags: [create]                # required runner capabilities
    timeout: 900                  # seconds before job is marked failed (default 600)
    max_retries: 2                # retry count on failure (default 1)
    priority: 50                  # lower = higher priority (default 100)
```
### 3.1 Trigger Types
| Type | Description |
|------|-------------|
| `revision_created` | Fires when a new revision is created on an item matching the filter |
| `bom_changed` | Fires when a BOM merge completes |
| `manual` | Only triggered via `POST /api/jobs` |
| `schedule` | Future: cron-like scheduling (not yet implemented) |
### 3.2 Trigger Filters
The `filter` map supports key-value matching against item properties:
| Key | Description |
|-----|-------------|
| `item_type` | Match item type: `part`, `assembly`, `drawing`, etc. |
| `schema` | Match schema name |
All filter keys must match for the trigger to fire. An empty filter matches all items.
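A sketch of that matching rule (the real server-side implementation may differ):

```go
package main

import "fmt"

// filterMatches implements the rule above: every key in the trigger filter
// must equal the corresponding item property; an empty (or nil) filter
// matches all items.
func filterMatches(filter, itemProps map[string]string) bool {
	for k, want := range filter {
		if itemProps[k] != want {
			return false
		}
	}
	return true
}

func main() {
	item := map[string]string{"item_type": "assembly", "schema": "default"}
	fmt.Println(filterMatches(map[string]string{"item_type": "assembly"}, item)) // → true
	fmt.Println(filterMatches(map[string]string{"item_type": "part"}, item))     // → false
	fmt.Println(filterMatches(nil, item))                                        // → true
}
```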
### 3.3 Scope Types
| Type | Description |
|------|-------------|
| `item` | Job operates on a single item |
| `assembly` | Job operates on an assembly and its BOM tree |
| `project` | Job operates on all items in a project |
### 3.4 Compute Commands
The `command` field identifies what the runner should execute. Built-in commands:
| Command | Description |
|---------|-------------|
| `create-validate` | Open file in Headless Create, rebuild features, report validation results |
| `create-export` | Open file, export to specified format (STEP, IGES, 3MF) |
| `create-dag-extract` | Open file, extract feature DAG, output as JSON |
| `create-thumbnail` | Open file, render thumbnail image |
Custom commands can be added by extending silo-mod's `silo.runner` module.
---
## 4. Job Lifecycle
```
pending → claimed → running → completed
                            → failed
                            → cancelled
```
| State | Description |
|-------|-------------|
| `pending` | Job created, waiting for a runner to claim it |
| `claimed` | Runner has claimed the job. `expires_at` is set. |
| `running` | Runner has started execution (reported via progress update) |
| `completed` | Runner reported success. `result` JSONB contains output. |
| `failed` | Runner reported failure, timeout expired, or max retries exceeded |
| `cancelled` | Admin cancelled the job before completion |
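The transitions in the diagram can be encoded as a small validity check; this is a sketch of the rule, not the server's actual state machine:

```go
package main

import "fmt"

// validTransitions encodes the lifecycle diagram above: terminal states
// (completed, failed, cancelled) have no outgoing edges.
var validTransitions = map[string][]string{
	"pending": {"claimed", "cancelled"},
	"claimed": {"running", "failed", "cancelled"},
	"running": {"completed", "failed", "cancelled"},
}

func canTransition(from, to string) bool {
	for _, next := range validTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("claimed", "running"))   // → true
	fmt.Println(canTransition("completed", "running")) // → false
}
```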
### 4.1 Claim Semantics
Runners claim jobs via `POST /api/runner/claim`. The server uses PostgreSQL's `SELECT ... FOR UPDATE SKIP LOCKED` so that each pending job is claimed by exactly one runner, without blocking when many runners poll concurrently:
```sql
WITH claimable AS (
  SELECT id FROM jobs
  WHERE status = 'pending'
    AND runner_tags <@ $2::text[]
  ORDER BY priority ASC, created_at ASC
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs SET
  status = 'claimed',
  runner_id = $1,
  claimed_at = now(),
  expires_at = now() + (timeout_seconds || ' seconds')::interval
FROM claimable
WHERE jobs.id = claimable.id
RETURNING jobs.*;
```
The `runner_tags <@ $2::text[]` condition ensures the runner has all tags required by the job. A runner with tags `["create", "linux", "gpu"]` can claim a job requiring `["create"]`, but not one requiring `["create", "windows"]`.
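The subset check the SQL expresses with `<@` looks like this in plain Go (sketch only):

```go
package main

import "fmt"

// hasAllTags mirrors the SQL condition runner_tags <@ $2::text[]: a runner
// may claim a job only when the job's required tags are a subset of the
// runner's declared tags.
func hasAllTags(required, runnerTags []string) bool {
	have := make(map[string]bool, len(runnerTags))
	for _, t := range runnerTags {
		have[t] = true
	}
	for _, t := range required {
		if !have[t] {
			return false
		}
	}
	return true
}

func main() {
	runner := []string{"create", "linux", "gpu"}
	fmt.Println(hasAllTags([]string{"create"}, runner))            // → true
	fmt.Println(hasAllTags([]string{"create", "windows"}, runner)) // → false
}
```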
### 4.2 Timeout Enforcement
A background sweeper runs every 30 seconds (configurable via `jobs.job_timeout_check`) and marks expired jobs as failed:
```sql
UPDATE jobs SET status = 'failed', error_message = 'job timed out'
WHERE status IN ('claimed', 'running')
AND expires_at < now();
```
### 4.3 Retry
When a job fails and `retry_count < max_retries`, a new job is created with the same definition and scope, with `retry_count` incremented.
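A sketch of that rule, using a trimmed-down `Job` struct (the real jobs table carries more columns):

```go
package main

import "fmt"

// Job carries only the fields the retry rule needs.
type Job struct {
	Definition string
	ItemID     string
	RetryCount int
	MaxRetries int
	Status     string
}

// nextRetry implements the retry rule above: when retry_count < max_retries,
// a fresh pending job is queued with retry_count incremented. The boolean
// reports whether a retry was created.
func nextRetry(failed Job) (Job, bool) {
	if failed.RetryCount >= failed.MaxRetries {
		return Job{}, false
	}
	retry := failed
	retry.RetryCount++
	retry.Status = "pending"
	return retry, true
}

func main() {
	j := Job{Definition: "assembly-validate", MaxRetries: 2, Status: "failed"}
	r, ok := nextRetry(j)
	fmt.Println(ok, r.RetryCount, r.Status) // → true 1 pending
}
```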
---
## 5. Runners
### 5.1 Registration
Runners are registered via `POST /api/runners` (admin only). The server generates a token (shown once) and stores the SHA-256 hash in the `runners` table. This follows the same pattern as API tokens in `internal/auth/token.go`.
### 5.2 Authentication
Runners authenticate via `Authorization: Bearer silo_runner_<token>`. A dedicated `RequireRunnerAuth` middleware validates the token against the `runners` table and injects a `RunnerIdentity` into the request context.
### 5.3 Heartbeat
Runners send `POST /api/runner/heartbeat` every 30 seconds. The server updates `last_heartbeat` and sets `status = 'online'`. A background sweeper marks runners as `offline` if their heartbeat is older than `runner_timeout` seconds (default 90).
### 5.4 Tags
Each runner declares capability tags (e.g., `["create", "linux", "gpu"]`). Jobs require specific tags via the `runner.tags` field in their YAML definition. A runner can only claim jobs whose required tags are a subset of the runner's tags.
### 5.5 Runner Config
The `silorunner` binary reads its config from a YAML file:
```yaml
server_url: "https://silo.example.com"
token: "silo_runner_abc123..."
name: "worker-01"
tags: ["create", "linux"]
poll_interval: 5 # seconds between claim attempts
create_path: "/usr/bin/create" # path to Headless Create binary (with silo-mod installed)
```
Or via environment variables: `SILO_RUNNER_SERVER_URL`, `SILO_RUNNER_TOKEN`, etc.
### 5.6 Deployment
Runner prerequisites:
- `silorunner` binary (built from `cmd/silorunner/`)
- Headless Create (Kindred's fork of FreeCAD) with silo-mod workbench installed
- Network access to Silo server API
Runners can be deployed as:
- Bare metal processes alongside Create installations
- Docker containers with Create pre-installed
- Scaled horizontally by registering multiple runners with different names
---
## 6. Job Log
Each job has an append-only log stored in the `job_log` table. Runners append entries via `POST /api/runner/jobs/{jobID}/log`:
```json
{
  "level": "info",
  "message": "Rebuilding Pad003...",
  "metadata": {"node_key": "Pad003", "progress_pct": 45}
}
```
Log levels: `debug`, `info`, `warn`, `error`.
---
## 7. SSE Events
All job lifecycle transitions are broadcast via Silo's SSE broker. Clients subscribe to `/api/events` and receive:
| Event Type | Payload | When |
|------------|---------|------|
| `job.created` | `{id, definition_name, item_id, status, priority}` | Job created |
| `job.claimed` | `{id, runner_id, runner_name}` | Runner claims job |
| `job.progress` | `{id, progress, progress_message}` | Runner reports progress (0-100) |
| `job.completed` | `{id, result_summary, duration_seconds}` | Job completed successfully |
| `job.failed` | `{id, error_message}` | Job failed |
| `job.cancelled` | `{id, cancelled_by}` | Admin cancelled job |
| `runner.online` | `{id, name, tags}` | Runner heartbeat (first after offline) |
| `runner.offline` | `{id, name}` | Runner heartbeat timeout |
---
## 8. REST API
### 8.1 Job Endpoints (user-facing, require auth)
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/jobs` | viewer | List jobs (filterable by status, item, definition) |
| `GET` | `/api/jobs/{jobID}` | viewer | Get job details |
| `GET` | `/api/jobs/{jobID}/logs` | viewer | Get job log entries |
| `POST` | `/api/jobs` | editor | Manually trigger a job |
| `POST` | `/api/jobs/{jobID}/cancel` | editor | Cancel a pending/running job |
### 8.2 Job Definition Endpoints
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/job-definitions` | viewer | List loaded definitions |
| `GET` | `/api/job-definitions/{name}` | viewer | Get specific definition |
| `POST` | `/api/job-definitions/reload` | admin | Re-read YAML from disk |
### 8.3 Runner Management Endpoints (admin)
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/runners` | admin | List registered runners |
| `POST` | `/api/runners` | admin | Register runner (returns token) |
| `DELETE` | `/api/runners/{runnerID}` | admin | Delete runner |
### 8.4 Runner-Facing Endpoints (runner token auth)
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `POST` | `/api/runner/heartbeat` | runner | Send heartbeat |
| `POST` | `/api/runner/claim` | runner | Claim next available job |
| `PUT` | `/api/runner/jobs/{jobID}/progress` | runner | Report progress |
| `POST` | `/api/runner/jobs/{jobID}/complete` | runner | Report completion with result |
| `POST` | `/api/runner/jobs/{jobID}/fail` | runner | Report failure |
| `POST` | `/api/runner/jobs/{jobID}/log` | runner | Append log entry |
| `PUT` | `/api/runner/jobs/{jobID}/dag` | runner | Sync DAG results after compute |
---
## 9. Configuration
Add to `config.yaml`:
```yaml
jobs:
  directory: /etc/silo/jobdefs   # path to YAML job definitions
  runner_timeout: 90             # seconds before marking runner offline
  job_timeout_check: 30          # seconds between timeout sweeps
  default_priority: 100          # default job priority
```
---
## 10. Example Job Definitions
### Assembly Validation
```yaml
job:
  name: assembly-validate
  version: 1
  description: "Validate assembly by rebuilding its dependency subgraph"
  trigger:
    type: revision_created
    filter:
      item_type: assembly
  scope:
    type: assembly
  compute:
    type: validate
    command: create-validate
    args:
      rebuild_mode: incremental
      check_interference: true
  runner:
    tags: [create]
    timeout: 900
    max_retries: 2
    priority: 50
```
### STEP Export
```yaml
job:
  name: part-export-step
  version: 1
  description: "Export a part to STEP format"
  trigger:
    type: manual
  scope:
    type: item
  compute:
    type: export
    command: create-export
    args:
      format: step
      output_key_template: "exports/{part_number}_rev{revision}.step"
  runner:
    tags: [create]
    timeout: 300
    max_retries: 1
    priority: 100
```
---
## 11. References
- [DAG.md](DAG.md) -- Dependency DAG specification
- [MULTI_USER_EDITS.md](MULTI_USER_EDITS.md) -- Multi-user editing specification
- [ROADMAP.md](ROADMAP.md) -- Tier 0 Job Queue Infrastructure, Tier 1 Headless Create

@@ -11,6 +11,7 @@ import (
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
@@ -38,6 +39,10 @@ func newAuthTestServer(t *testing.T) *Server {
nil, // authConfig
broker,
state,
nil, // jobDefs
"", // jobDefsDir
modules.NewRegistry(), // modules
nil, // cfg
)
}

@@ -1,6 +1,7 @@
package api
import (
"context"
"encoding/csv"
"encoding/json"
"fmt"
@@ -284,6 +285,8 @@ func (s *Server) HandleAddBOMEntry(w http.ResponseWriter, r *http.Request) {
}
writeJSON(w, http.StatusCreated, entry)
go s.triggerJobs(context.Background(), "bom_changed", parent.ID, parent)
}
// HandleUpdateBOMEntry updates an existing BOM relationship.
@@ -352,6 +355,8 @@ func (s *Server) HandleUpdateBOMEntry(w http.ResponseWriter, r *http.Request) {
return
}
go s.triggerJobs(context.Background(), "bom_changed", parent.ID, parent)
// Reload and return updated entry
entries, err := s.relationships.GetBOM(ctx, parent.ID)
if err == nil {
@@ -418,6 +423,8 @@ func (s *Server) HandleDeleteBOMEntry(w http.ResponseWriter, r *http.Request) {
Msg("BOM entry removed")
w.WriteHeader(http.StatusNoContent)
go s.triggerJobs(context.Background(), "bom_changed", parent.ID, parent)
}
// Helper functions
@@ -1219,6 +1226,9 @@ func (s *Server) HandleMergeBOM(w http.ResponseWriter, r *http.Request) {
"unreferenced": len(diff.Removed),
}))
// Trigger auto-jobs (e.g. assembly validation)
go s.triggerJobs(context.Background(), "bom_changed", parent.ID, parent)
writeJSON(w, http.StatusOK, resp)
}

@@ -11,6 +11,7 @@ import (
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
@@ -35,6 +36,10 @@ func newTestServer(t *testing.T) *Server {
nil, // authConfig (nil = dev mode)
broker,
state,
nil, // jobDefs
"", // jobDefsDir
modules.NewRegistry(), // modules
nil, // cfg
)
}

@@ -13,6 +13,7 @@ import (
"testing"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
@@ -64,6 +65,10 @@ func newTestServerWithSchemas(t *testing.T) *Server {
nil, // authConfig
broker,
state,
nil, // jobDefs
"", // jobDefsDir
modules.NewRegistry(), // modules
nil, // cfg
)
}

@@ -0,0 +1,271 @@
package api

import (
	"encoding/json"
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/kindredsystems/silo/internal/db"
)

// dagSyncRequest is the payload for PUT /api/items/{partNumber}/dag.
type dagSyncRequest struct {
	RevisionNumber int           `json:"revision_number"`
	Nodes          []dagSyncNode `json:"nodes"`
	Edges          []dagSyncEdge `json:"edges"`
}

type dagSyncNode struct {
	NodeKey         string         `json:"node_key"`
	NodeType        string         `json:"node_type"`
	PropertiesHash  *string        `json:"properties_hash,omitempty"`
	ValidationState string         `json:"validation_state,omitempty"`
	Metadata        map[string]any `json:"metadata,omitempty"`
}

type dagSyncEdge struct {
	SourceKey string         `json:"source_key"`
	TargetKey string         `json:"target_key"`
	EdgeType  string         `json:"edge_type,omitempty"`
	Metadata  map[string]any `json:"metadata,omitempty"`
}

// HandleGetDAG returns the feature DAG for an item's current revision.
func (s *Server) HandleGetDAG(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")
	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil || item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
		return
	}
	nodes, err := s.dag.GetNodes(ctx, item.ID, item.CurrentRevision)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get DAG nodes")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get DAG")
		return
	}
	edges, err := s.dag.GetEdges(ctx, item.ID, item.CurrentRevision)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get DAG edges")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get DAG edges")
		return
	}
	writeJSON(w, http.StatusOK, map[string]any{
		"item_id":         item.ID,
		"part_number":     item.PartNumber,
		"revision_number": item.CurrentRevision,
		"nodes":           nodes,
		"edges":           edges,
	})
}

// HandleGetForwardCone returns all downstream dependents of a node.
func (s *Server) HandleGetForwardCone(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")
	nodeKey := chi.URLParam(r, "nodeKey")
	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil || item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
		return
	}
	node, err := s.dag.GetNodeByKey(ctx, item.ID, item.CurrentRevision, nodeKey)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get DAG node")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get node")
		return
	}
	if node == nil {
		writeError(w, http.StatusNotFound, "not_found", "Node not found")
		return
	}
	cone, err := s.dag.GetForwardCone(ctx, node.ID)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get forward cone")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get forward cone")
		return
	}
	writeJSON(w, http.StatusOK, map[string]any{
		"root_node": node,
		"cone":      cone,
	})
}

// HandleGetDirtySubgraph returns all non-clean nodes for an item.
func (s *Server) HandleGetDirtySubgraph(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")
	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil || item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
		return
	}
	nodes, err := s.dag.GetDirtySubgraph(ctx, item.ID)
	if err != nil {
		s.logger.Error().Err(err).Msg("failed to get dirty subgraph")
		writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get dirty subgraph")
		return
	}
	writeJSON(w, http.StatusOK, map[string]any{
		"item_id": item.ID,
		"nodes":   nodes,
	})
}

// HandleSyncDAG accepts a full feature tree from a client or runner.
func (s *Server) HandleSyncDAG(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()
	partNumber := chi.URLParam(r, "partNumber")
	item, err := s.items.GetByPartNumber(ctx, partNumber)
	if err != nil || item == nil {
		writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
var req dagSyncRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.RevisionNumber == 0 {
req.RevisionNumber = item.CurrentRevision
}
// Convert request nodes to DB nodes
nodes := make([]db.DAGNode, len(req.Nodes))
for i, n := range req.Nodes {
state := n.ValidationState
if state == "" {
state = "clean"
}
nodes[i] = db.DAGNode{
NodeKey: n.NodeKey,
NodeType: n.NodeType,
PropertiesHash: n.PropertiesHash,
ValidationState: state,
Metadata: n.Metadata,
}
}
// Sync nodes first to get IDs
if err := s.dag.SyncFeatureTree(ctx, item.ID, req.RevisionNumber, nodes, nil); err != nil {
s.logger.Error().Err(err).Msg("failed to sync DAG nodes")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to sync DAG")
return
}
// Build key→ID map from synced nodes
keyToID := make(map[string]string, len(nodes))
for _, n := range nodes {
keyToID[n.NodeKey] = n.ID
}
// Convert request edges, resolving keys to IDs
edges := make([]db.DAGEdge, len(req.Edges))
for i, e := range req.Edges {
sourceID, ok := keyToID[e.SourceKey]
if !ok {
writeError(w, http.StatusBadRequest, "invalid_edge",
"Unknown source_key: "+e.SourceKey)
return
}
targetID, ok := keyToID[e.TargetKey]
if !ok {
writeError(w, http.StatusBadRequest, "invalid_edge",
"Unknown target_key: "+e.TargetKey)
return
}
edgeType := e.EdgeType
if edgeType == "" {
edgeType = "depends_on"
}
edges[i] = db.DAGEdge{
SourceNodeID: sourceID,
TargetNodeID: targetID,
EdgeType: edgeType,
Metadata: e.Metadata,
}
}
// Replace edges for this revision: delete the old set, then insert the new one.
if len(edges) > 0 {
if err := s.dag.DeleteEdgesForItem(ctx, item.ID, req.RevisionNumber); err != nil {
s.logger.Error().Err(err).Msg("failed to delete old edges")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to sync DAG edges")
return
}
for i := range edges {
if err := s.dag.CreateEdge(ctx, &edges[i]); err != nil {
s.logger.Error().Err(err).Msg("failed to create edge")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to create edge")
return
}
}
}
// Publish SSE event
s.broker.Publish("dag.updated", mustMarshal(map[string]any{
"item_id": item.ID,
"part_number": item.PartNumber,
"revision_number": req.RevisionNumber,
"node_count": len(req.Nodes),
"edge_count": len(req.Edges),
}))
writeJSON(w, http.StatusOK, map[string]any{
"synced": true,
"node_count": len(req.Nodes),
"edge_count": len(req.Edges),
})
}
// HandleMarkDirty marks a node and all its downstream dependents as dirty.
func (s *Server) HandleMarkDirty(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
nodeKey := chi.URLParam(r, "nodeKey")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil || item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
node, err := s.dag.GetNodeByKey(ctx, item.ID, item.CurrentRevision, nodeKey)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get DAG node")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get node")
return
}
if node == nil {
writeError(w, http.StatusNotFound, "not_found", "Node not found")
return
}
affected, err := s.dag.MarkDirty(ctx, node.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to mark dirty")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to mark dirty")
return
}
writeJSON(w, http.StatusOK, map[string]any{
"node_key": nodeKey,
"nodes_affected": affected,
})
}
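The repository methods behind these handlers (GetForwardCone, MarkDirty) are not part of this diff; the forward propagation they imply can be sketched as a breadth-first walk over source-to-dependent edges. A hypothetical standalone version, not the DAGRepository code:

```go
package main

import "fmt"

// markDirty walks forward edges (source -> dependents) from start and
// returns the set of affected node keys, including start itself.
// Illustrative sketch only; the real implementation runs in SQL.
func markDirty(edges map[string][]string, start string) map[string]bool {
	affected := map[string]bool{start: true}
	queue := []string{start}
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		for _, dep := range edges[node] {
			if !affected[dep] {
				affected[dep] = true
				queue = append(queue, dep)
			}
		}
	}
	return affected
}

func main() {
	// X -> Y -> Z, mirroring the handler test fixture.
	edges := map[string][]string{"X": {"Y"}, "Y": {"Z"}}
	fmt.Println(len(markDirty(edges, "X"))) // 3 nodes affected
}
```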

View File

@@ -0,0 +1,249 @@
package api
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
)
func newDAGTestServer(t *testing.T) *Server {
t.Helper()
pool := testutil.MustConnectTestPool(t)
database := db.NewFromPool(pool)
broker := NewBroker(zerolog.Nop())
state := NewServerState(zerolog.Nop(), nil, broker)
return NewServer(
zerolog.Nop(),
database,
map[string]*schema.Schema{},
"",
nil, nil, nil, nil, nil,
broker, state,
nil, "",
modules.NewRegistry(), nil,
)
}
func newDAGRouter(s *Server) http.Handler {
r := chi.NewRouter()
r.Route("/api/items/{partNumber}", func(r chi.Router) {
r.Get("/dag", s.HandleGetDAG)
r.Get("/dag/forward-cone/{nodeKey}", s.HandleGetForwardCone)
r.Get("/dag/dirty", s.HandleGetDirtySubgraph)
r.Put("/dag", s.HandleSyncDAG)
r.Post("/dag/mark-dirty/{nodeKey}", s.HandleMarkDirty)
})
return r
}
func TestHandleGetDAG_Empty(t *testing.T) {
s := newDAGTestServer(t)
r := newDAGRouter(s)
// Create an item
item := &db.Item{PartNumber: "DAG-TEST-001", ItemType: "part", Description: "DAG test"}
if err := s.items.Create(context.Background(), item, nil); err != nil {
t.Fatalf("creating item: %v", err)
}
req := httptest.NewRequest("GET", "/api/items/DAG-TEST-001/dag", nil)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]any
json.Unmarshal(w.Body.Bytes(), &resp)
if resp["part_number"] != "DAG-TEST-001" {
t.Errorf("expected part_number DAG-TEST-001, got %v", resp["part_number"])
}
}
func TestHandleSyncDAG(t *testing.T) {
s := newDAGTestServer(t)
r := newDAGRouter(s)
// Create an item with a revision
item := &db.Item{PartNumber: "DAG-SYNC-001", ItemType: "part", Description: "sync test"}
if err := s.items.Create(context.Background(), item, nil); err != nil {
t.Fatalf("creating item: %v", err)
}
// Sync a feature tree
body := `{
"nodes": [
{"node_key": "Sketch001", "node_type": "sketch"},
{"node_key": "Pad001", "node_type": "pad"},
{"node_key": "Fillet001", "node_type": "fillet"}
],
"edges": [
{"source_key": "Sketch001", "target_key": "Pad001", "edge_type": "depends_on"},
{"source_key": "Pad001", "target_key": "Fillet001", "edge_type": "depends_on"}
]
}`
req := httptest.NewRequest("PUT", "/api/items/DAG-SYNC-001/dag", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]any
json.Unmarshal(w.Body.Bytes(), &resp)
if resp["node_count"] != float64(3) {
t.Errorf("expected 3 nodes, got %v", resp["node_count"])
}
if resp["edge_count"] != float64(2) {
t.Errorf("expected 2 edges, got %v", resp["edge_count"])
}
// Verify we can read the DAG back
req2 := httptest.NewRequest("GET", "/api/items/DAG-SYNC-001/dag", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("GET dag: expected 200, got %d", w2.Code)
}
var dagResp map[string]any
json.Unmarshal(w2.Body.Bytes(), &dagResp)
nodes, ok := dagResp["nodes"].([]any)
if !ok || len(nodes) != 3 {
t.Errorf("expected 3 nodes in GET, got %v", dagResp["nodes"])
}
}
func TestHandleForwardCone(t *testing.T) {
s := newDAGTestServer(t)
r := newDAGRouter(s)
item := &db.Item{PartNumber: "DAG-CONE-001", ItemType: "part", Description: "cone test"}
if err := s.items.Create(context.Background(), item, nil); err != nil {
t.Fatalf("creating item: %v", err)
}
// Sync a linear chain: A -> B -> C
body := `{
"nodes": [
{"node_key": "A", "node_type": "sketch"},
{"node_key": "B", "node_type": "pad"},
{"node_key": "C", "node_type": "fillet"}
],
"edges": [
{"source_key": "A", "target_key": "B"},
{"source_key": "B", "target_key": "C"}
]
}`
req := httptest.NewRequest("PUT", "/api/items/DAG-CONE-001/dag", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("sync: %d %s", w.Code, w.Body.String())
}
// Forward cone from A should include B and C
req2 := httptest.NewRequest("GET", "/api/items/DAG-CONE-001/dag/forward-cone/A", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("forward-cone: %d %s", w2.Code, w2.Body.String())
}
var resp map[string]any
json.Unmarshal(w2.Body.Bytes(), &resp)
cone, ok := resp["cone"].([]any)
if !ok || len(cone) != 2 {
t.Errorf("expected 2 nodes in forward cone, got %v", resp["cone"])
}
}
func TestHandleMarkDirty(t *testing.T) {
s := newDAGTestServer(t)
r := newDAGRouter(s)
item := &db.Item{PartNumber: "DAG-DIRTY-001", ItemType: "part", Description: "dirty test"}
if err := s.items.Create(context.Background(), item, nil); err != nil {
t.Fatalf("creating item: %v", err)
}
// Sync: X -> Y -> Z
body := `{
"nodes": [
{"node_key": "X", "node_type": "sketch"},
{"node_key": "Y", "node_type": "pad"},
{"node_key": "Z", "node_type": "fillet"}
],
"edges": [
{"source_key": "X", "target_key": "Y"},
{"source_key": "Y", "target_key": "Z"}
]
}`
req := httptest.NewRequest("PUT", "/api/items/DAG-DIRTY-001/dag", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("sync: %d %s", w.Code, w.Body.String())
}
// Mark X dirty — should propagate to Y and Z
req2 := httptest.NewRequest("POST", "/api/items/DAG-DIRTY-001/dag/mark-dirty/X", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("mark-dirty: %d %s", w2.Code, w2.Body.String())
}
var resp map[string]any
json.Unmarshal(w2.Body.Bytes(), &resp)
affected, ok := resp["nodes_affected"].(float64)
if !ok || affected != 3 {
t.Errorf("expected 3 nodes affected, got %v", resp["nodes_affected"])
}
// Verify dirty subgraph
req3 := httptest.NewRequest("GET", "/api/items/DAG-DIRTY-001/dag/dirty", nil)
w3 := httptest.NewRecorder()
r.ServeHTTP(w3, req3)
if w3.Code != http.StatusOK {
t.Fatalf("dirty: %d %s", w3.Code, w3.Body.String())
}
var dirtyResp map[string]any
json.Unmarshal(w3.Body.Bytes(), &dirtyResp)
dirtyNodes, ok := dirtyResp["nodes"].([]any)
if !ok || len(dirtyNodes) != 3 {
t.Errorf("expected 3 dirty nodes, got %v", dirtyResp["nodes"])
}
}
func TestHandleGetDAG_NotFound(t *testing.T) {
s := newDAGTestServer(t)
r := newDAGRouter(s)
req := httptest.NewRequest("GET", "/api/items/NONEXISTENT-999/dag", nil)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Errorf("expected 404, got %d", w.Code)
}
}

View File

@@ -0,0 +1,125 @@
package api
import (
"net/http"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/storage"
)
// DependencyResponse is the JSON representation for GET /dependencies.
type DependencyResponse struct {
UUID string `json:"uuid"`
PartNumber *string `json:"part_number"`
Revision *int `json:"revision"`
Quantity *float64 `json:"quantity"`
Label *string `json:"label"`
Relationship string `json:"relationship"`
}
// ResolvedDependencyResponse is the JSON representation for GET /dependencies/resolve.
type ResolvedDependencyResponse struct {
UUID string `json:"uuid"`
PartNumber *string `json:"part_number"`
Label *string `json:"label"`
Revision *int `json:"revision"`
Quantity *float64 `json:"quantity"`
Resolved bool `json:"resolved"`
FileAvailable bool `json:"file_available"`
}
// HandleGetDependencies returns the raw dependency list for an item.
// GET /api/items/{partNumber}/dependencies
func (s *Server) HandleGetDependencies(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
deps, err := s.deps.ListByItem(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to list dependencies")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list dependencies")
return
}
resp := make([]DependencyResponse, len(deps))
for i, d := range deps {
resp[i] = DependencyResponse{
UUID: d.ChildUUID,
PartNumber: d.ChildPartNumber,
Revision: d.ChildRevision,
Quantity: d.Quantity,
Label: d.Label,
Relationship: d.Relationship,
}
}
writeJSON(w, http.StatusOK, resp)
}
// HandleResolveDependencies returns dependencies with UUIDs resolved to part numbers
// and file availability status.
// GET /api/items/{partNumber}/dependencies/resolve
func (s *Server) HandleResolveDependencies(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
deps, err := s.deps.Resolve(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to resolve dependencies")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to resolve dependencies")
return
}
resp := make([]ResolvedDependencyResponse, len(deps))
for i, d := range deps {
// Use resolved part number if available, fall back to .kc-provided value.
pn := d.ChildPartNumber
rev := d.ChildRevision
if d.Resolved {
pn = d.ResolvedPartNumber
rev = d.ResolvedRevision
}
fileAvailable := false
if d.Resolved && pn != nil && rev != nil && s.storage != nil {
key := storage.FileKey(*pn, *rev)
if exists, err := s.storage.Exists(ctx, key); err == nil {
fileAvailable = exists
}
}
resp[i] = ResolvedDependencyResponse{
UUID: d.ChildUUID,
PartNumber: pn,
Label: d.Label,
Revision: rev,
Quantity: d.Quantity,
Resolved: d.Resolved,
FileAvailable: fileAvailable,
}
}
writeJSON(w, http.StatusOK, resp)
}

View File

@@ -3,7 +3,9 @@ package api
import (
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
@@ -314,3 +316,188 @@ func (s *Server) HandleSetItemThumbnail(w http.ResponseWriter, r *http.Request)
w.WriteHeader(http.StatusNoContent)
}
// storageBackend returns the configured storage backend name, defaulting to "minio".
func (s *Server) storageBackend() string {
if s.cfg != nil && s.cfg.Storage.Backend != "" {
return s.cfg.Storage.Backend
}
return "minio"
}
// HandleUploadItemFile accepts a multipart file upload and stores it as an item attachment.
func (s *Server) HandleUploadItemFile(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
if s.storage == nil {
writeError(w, http.StatusServiceUnavailable, "storage_unavailable", "File storage not configured")
return
}
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
// Parse multipart form (max 500MB)
if err := r.ParseMultipartForm(500 << 20); err != nil {
writeError(w, http.StatusBadRequest, "invalid_form", err.Error())
return
}
file, header, err := r.FormFile("file")
if err != nil {
writeError(w, http.StatusBadRequest, "missing_file", "File is required")
return
}
defer file.Close()
contentType := header.Header.Get("Content-Type")
if contentType == "" {
contentType = "application/octet-stream"
}
// Generate permanent key
fileID := uuid.New().String()
permanentKey := fmt.Sprintf("items/%s/files/%s/%s", item.ID, fileID, header.Filename)
// Write directly to storage
result, err := s.storage.Put(ctx, permanentKey, file, header.Size, contentType)
if err != nil {
s.logger.Error().Err(err).Msg("failed to upload file")
writeError(w, http.StatusInternalServerError, "upload_failed", "Failed to store file")
return
}
// Create DB record
itemFile := &db.ItemFile{
ItemID: item.ID,
Filename: header.Filename,
ContentType: contentType,
Size: result.Size,
ObjectKey: permanentKey,
StorageBackend: s.storageBackend(),
}
if err := s.itemFiles.Create(ctx, itemFile); err != nil {
s.logger.Error().Err(err).Msg("failed to create item file record")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to save file record")
return
}
s.logger.Info().
Str("part_number", partNumber).
Str("file_id", itemFile.ID).
Str("filename", header.Filename).
Int64("size", result.Size).
Msg("file uploaded to item")
writeJSON(w, http.StatusCreated, itemFileToResponse(itemFile))
}
// HandleUploadItemThumbnail accepts a multipart file upload and sets it as the item thumbnail.
func (s *Server) HandleUploadItemThumbnail(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
if s.storage == nil {
writeError(w, http.StatusServiceUnavailable, "storage_unavailable", "File storage not configured")
return
}
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
// Parse multipart form (max 10MB for thumbnails)
if err := r.ParseMultipartForm(10 << 20); err != nil {
writeError(w, http.StatusBadRequest, "invalid_form", err.Error())
return
}
file, header, err := r.FormFile("file")
if err != nil {
writeError(w, http.StatusBadRequest, "missing_file", "File is required")
return
}
defer file.Close()
contentType := header.Header.Get("Content-Type")
if contentType == "" {
contentType = "image/png"
}
thumbnailKey := fmt.Sprintf("items/%s/thumbnail.png", item.ID)
if _, err := s.storage.Put(ctx, thumbnailKey, file, header.Size, contentType); err != nil {
s.logger.Error().Err(err).Msg("failed to upload thumbnail")
writeError(w, http.StatusInternalServerError, "upload_failed", "Failed to store thumbnail")
return
}
if err := s.items.SetThumbnailKey(ctx, item.ID, thumbnailKey); err != nil {
s.logger.Error().Err(err).Msg("failed to update thumbnail key")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to save thumbnail")
return
}
w.WriteHeader(http.StatusNoContent)
}
// HandleDownloadItemFile streams an item file attachment to the client.
func (s *Server) HandleDownloadItemFile(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
fileID := chi.URLParam(r, "fileId")
if s.storage == nil {
writeError(w, http.StatusServiceUnavailable, "storage_unavailable", "File storage not configured")
return
}
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil || item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
file, err := s.itemFiles.Get(ctx, fileID)
if err != nil {
writeError(w, http.StatusNotFound, "not_found", "File not found")
return
}
if file.ItemID != item.ID {
writeError(w, http.StatusNotFound, "not_found", "File not found")
return
}
reader, err := s.storage.Get(ctx, file.ObjectKey)
if err != nil {
s.logger.Error().Err(err).Str("key", file.ObjectKey).Msg("failed to get file")
writeError(w, http.StatusInternalServerError, "download_failed", "Failed to retrieve file")
return
}
defer reader.Close()
w.Header().Set("Content-Type", file.ContentType)
// Strip quotes from the filename so the header stays well-formed.
w.Header().Set("Content-Disposition", fmt.Sprintf(`attachment; filename="%s"`, strings.ReplaceAll(file.Filename, `"`, "")))
if file.Size > 0 {
w.Header().Set("Content-Length", strconv.FormatInt(file.Size, 10))
}
io.Copy(w, reader)
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"os"
"path/filepath"
@@ -18,6 +19,9 @@ import (
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/config"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/jobdef"
"github.com/kindredsystems/silo/internal/kc"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/partnum"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/storage"
@@ -35,7 +39,7 @@ type Server struct {
schemas map[string]*schema.Schema
schemasDir string
partgen *partnum.Generator
storage storage.FileStore
auth *auth.Service
sessions *scs.SessionManager
oidc *auth.OIDCBackend
@@ -43,6 +47,16 @@ type Server struct {
itemFiles *db.ItemFileRepository
broker *Broker
serverState *ServerState
dag *db.DAGRepository
jobs *db.JobRepository
locations *db.LocationRepository
jobDefs map[string]*jobdef.Definition
jobDefsDir string
modules *modules.Registry
cfg *config.Config
settings *db.SettingsRepository
metadata *db.ItemMetadataRepository
deps *db.ItemDependencyRepository
}
// NewServer creates a new API server.
@@ -51,18 +65,28 @@ func NewServer(
database *db.DB,
schemas map[string]*schema.Schema,
schemasDir string,
store storage.FileStore,
authService *auth.Service,
sessionManager *scs.SessionManager,
oidcBackend *auth.OIDCBackend,
authCfg *config.AuthConfig,
broker *Broker,
state *ServerState,
jobDefs map[string]*jobdef.Definition,
jobDefsDir string,
registry *modules.Registry,
cfg *config.Config,
) *Server {
items := db.NewItemRepository(database)
projects := db.NewProjectRepository(database)
relationships := db.NewRelationshipRepository(database)
itemFiles := db.NewItemFileRepository(database)
dag := db.NewDAGRepository(database)
jobs := db.NewJobRepository(database)
settings := db.NewSettingsRepository(database)
locations := db.NewLocationRepository(database)
metadata := db.NewItemMetadataRepository(database)
itemDeps := db.NewItemDependencyRepository(database)
seqStore := &dbSequenceStore{db: database, schemas: schemas}
partgen := partnum.NewGenerator(schemas, seqStore)
@@ -83,6 +107,16 @@ func NewServer(
itemFiles: itemFiles,
broker: broker,
serverState: state,
dag: dag,
jobs: jobs,
locations: locations,
jobDefs: jobDefs,
jobDefsDir: jobDefsDir,
modules: registry,
cfg: cfg,
settings: settings,
metadata: metadata,
deps: itemDeps,
}
}
@@ -153,6 +187,54 @@ func (s *Server) HandleReady(w http.ResponseWriter, r *http.Request) {
})
}
// HandleGetModules returns the public module discovery response.
// No authentication required — clients call this pre-login.
func (s *Server) HandleGetModules(w http.ResponseWriter, r *http.Request) {
mods := make(map[string]any, 10)
for _, m := range s.modules.All() {
entry := map[string]any{
"enabled": s.modules.IsEnabled(m.ID),
"required": m.Required,
"name": m.Name,
}
if m.Version != "" {
entry["version"] = m.Version
}
if len(m.DependsOn) > 0 {
entry["depends_on"] = m.DependsOn
}
// Public config (non-secret) for specific modules.
switch m.ID {
case "auth":
if s.cfg != nil {
entry["config"] = map[string]any{
"local_enabled": s.cfg.Auth.Local.Enabled,
"ldap_enabled": s.cfg.Auth.LDAP.Enabled,
"oidc_enabled": s.cfg.Auth.OIDC.Enabled,
"oidc_issuer_url": s.cfg.Auth.OIDC.IssuerURL,
}
}
case "freecad":
if s.cfg != nil {
entry["config"] = map[string]any{
"uri_scheme": s.cfg.FreeCAD.URIScheme,
}
}
}
mods[m.ID] = entry
}
writeJSON(w, http.StatusOK, map[string]any{
"modules": mods,
"server": map[string]any{
"version": "0.2",
"read_only": s.serverState.IsReadOnly(),
},
})
}
// Schema handlers
// SchemaResponse represents a schema in API responses.
@@ -1476,6 +1558,9 @@ func (s *Server) HandleCreateRevision(w http.ResponseWriter, r *http.Request) {
"part_number": partNumber,
"revision_number": rev.RevisionNumber,
}))
// Trigger auto-jobs (e.g. validation, export)
go s.triggerJobs(context.Background(), "revision_created", item.ID, item)
}
// HandleUploadFile uploads a file and creates a new revision.
@@ -1575,10 +1660,14 @@ func (s *Server) HandleUploadFile(w http.ResponseWriter, r *http.Request) {
Int64("size", result.Size).
Msg("file uploaded")
// .kc metadata extraction (best-effort)
s.extractKCMetadata(ctx, item, fileKey, rev)
writeJSON(w, http.StatusCreated, revisionToResponse(rev))
}
// HandleDownloadFile downloads the file for a specific revision.
// For .kc files, silo/ entries are repacked with current DB state.
func (s *Server) HandleDownloadFile(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
@@ -1633,18 +1722,23 @@ func (s *Server) HandleDownloadFile(w http.ResponseWriter, r *http.Request) {
return
}
// ETag: computed from revision + metadata freshness.
meta, _ := s.metadata.Get(ctx, item.ID) // nil is ok (plain .fcstd)
etag := computeETag(revision, meta)
if match := r.Header.Get("If-None-Match"); match == etag {
w.Header().Set("ETag", etag)
w.WriteHeader(http.StatusNotModified)
return
}
// Get file from storage
var reader io.ReadCloser
if revision.FileVersion != nil && *revision.FileVersion != "" {
reader, err = s.storage.GetVersion(ctx, *revision.FileKey, *revision.FileVersion)
} else {
reader, err = s.storage.Get(ctx, *revision.FileKey)
}
if err != nil {
s.logger.Error().Err(err).Str("key", *revision.FileKey).Msg("failed to get file")
writeError(w, http.StatusInternalServerError, "download_failed", err.Error())
@@ -1652,28 +1746,37 @@ func (s *Server) HandleDownloadFile(w http.ResponseWriter, r *http.Request) {
}
defer reader.Close()
// Read entire file for potential .kc repacking.
data, err := io.ReadAll(reader)
if err != nil {
s.logger.Error().Err(err).Msg("failed to read file")
writeError(w, http.StatusInternalServerError, "download_failed", "Failed to read file")
return
}
// Repack silo/ entries for .kc files with indexed metadata.
output := data
if meta != nil {
if hasSilo, chkErr := kc.HasSiloDir(data); chkErr == nil && hasSilo {
if !canSkipRepack(revision, meta) {
if packed, packErr := s.packKCFile(ctx, data, item, revision, meta); packErr != nil {
s.logger.Warn().Err(packErr).Str("part_number", partNumber).Msg("kc: packing failed, serving original")
} else {
output = packed
}
}
}
}
// Set response headers
filename := partNumber + "_rev" + strconv.Itoa(revNum) + ".FCStd"
w.Header().Set("Content-Type", "application/octet-stream")
w.Header().Set("Content-Disposition", "attachment; filename=\""+filename+"\"")
w.Header().Set("Content-Length", strconv.Itoa(len(output)))
w.Header().Set("ETag", etag)
w.Header().Set("Cache-Control", "private, must-revalidate")
w.Write(output)
}
// HandleDownloadLatestFile downloads the file for the latest revision.

View File

@@ -0,0 +1,382 @@
package api
import (
"context"
"encoding/json"
"net/http"
"strconv"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
)
// HandleListJobs returns jobs filtered by status and/or item.
func (s *Server) HandleListJobs(w http.ResponseWriter, r *http.Request) {
status := r.URL.Query().Get("status")
itemID := r.URL.Query().Get("item_id")
limit := 50
if v := r.URL.Query().Get("limit"); v != "" {
if n, err := strconv.Atoi(v); err == nil && n > 0 && n <= 200 {
limit = n
}
}
offset := 0
if v := r.URL.Query().Get("offset"); v != "" {
if n, err := strconv.Atoi(v); err == nil && n >= 0 {
offset = n
}
}
jobs, err := s.jobs.ListJobs(r.Context(), status, itemID, limit, offset)
if err != nil {
s.logger.Error().Err(err).Msg("failed to list jobs")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list jobs")
return
}
writeJSON(w, http.StatusOK, jobs)
}
// HandleGetJob returns a single job by ID.
func (s *Server) HandleGetJob(w http.ResponseWriter, r *http.Request) {
jobID := chi.URLParam(r, "jobID")
job, err := s.jobs.GetJob(r.Context(), jobID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get job")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get job")
return
}
if job == nil {
writeError(w, http.StatusNotFound, "not_found", "Job not found")
return
}
writeJSON(w, http.StatusOK, job)
}
// HandleGetJobLogs returns log entries for a job.
func (s *Server) HandleGetJobLogs(w http.ResponseWriter, r *http.Request) {
jobID := chi.URLParam(r, "jobID")
logs, err := s.jobs.GetJobLogs(r.Context(), jobID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get job logs")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get job logs")
return
}
writeJSON(w, http.StatusOK, logs)
}
// HandleCreateJob manually triggers a job.
func (s *Server) HandleCreateJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
user := auth.UserFromContext(ctx)
var req struct {
DefinitionName string `json:"definition_name"`
ItemID *string `json:"item_id,omitempty"`
ProjectID *string `json:"project_id,omitempty"`
ScopeMetadata map[string]any `json:"scope_metadata,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.DefinitionName == "" {
writeError(w, http.StatusBadRequest, "missing_field", "definition_name is required")
return
}
// Look up definition
def, err := s.jobs.GetDefinition(ctx, req.DefinitionName)
if err != nil {
s.logger.Error().Err(err).Msg("failed to look up job definition")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to look up definition")
return
}
if def == nil {
writeError(w, http.StatusNotFound, "not_found", "Job definition not found: "+req.DefinitionName)
return
}
var createdBy *string
if user != nil {
createdBy = &user.Username
}
job := &db.Job{
JobDefinitionID: &def.ID,
DefinitionName: def.Name,
Priority: def.Priority,
ItemID: req.ItemID,
ProjectID: req.ProjectID,
ScopeMetadata: req.ScopeMetadata,
RunnerTags: def.RunnerTags,
TimeoutSeconds: def.TimeoutSeconds,
MaxRetries: def.MaxRetries,
CreatedBy: createdBy,
}
if err := s.jobs.CreateJob(ctx, job); err != nil {
s.logger.Error().Err(err).Msg("failed to create job")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to create job")
return
}
s.broker.Publish("job.created", mustMarshal(map[string]any{
"job_id": job.ID,
"definition_name": job.DefinitionName,
"item_id": job.ItemID,
}))
writeJSON(w, http.StatusCreated, job)
}
// HandleCancelJob cancels a pending or active job.
func (s *Server) HandleCancelJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
jobID := chi.URLParam(r, "jobID")
user := auth.UserFromContext(ctx)
cancelledBy := "system"
if user != nil {
cancelledBy = user.Username
}
if err := s.jobs.CancelJob(ctx, jobID, cancelledBy); err != nil {
writeError(w, http.StatusBadRequest, "cancel_failed", err.Error())
return
}
s.broker.Publish("job.cancelled", mustMarshal(map[string]any{
"job_id": jobID,
"cancelled_by": cancelledBy,
}))
writeJSON(w, http.StatusOK, map[string]string{"status": "cancelled"})
}
// HandleListJobDefinitions returns all loaded job definitions.
func (s *Server) HandleListJobDefinitions(w http.ResponseWriter, r *http.Request) {
defs, err := s.jobs.ListDefinitions(r.Context())
if err != nil {
s.logger.Error().Err(err).Msg("failed to list job definitions")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list definitions")
return
}
writeJSON(w, http.StatusOK, defs)
}
// HandleGetJobDefinition returns a single job definition by name.
func (s *Server) HandleGetJobDefinition(w http.ResponseWriter, r *http.Request) {
name := chi.URLParam(r, "name")
def, err := s.jobs.GetDefinition(r.Context(), name)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get job definition")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get definition")
return
}
if def == nil {
writeError(w, http.StatusNotFound, "not_found", "Job definition not found")
return
}
writeJSON(w, http.StatusOK, def)
}
// HandleReloadJobDefinitions re-reads YAML files from disk and upserts them.
func (s *Server) HandleReloadJobDefinitions(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
if s.jobDefsDir == "" {
writeError(w, http.StatusBadRequest, "no_directory", "Job definitions directory not configured")
return
}
defs, err := loadAndUpsertJobDefs(ctx, s.jobDefsDir, s.jobs)
if err != nil {
s.logger.Error().Err(err).Msg("failed to reload job definitions")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to reload definitions")
return
}
// Update in-memory map
s.jobDefs = defs
writeJSON(w, http.StatusOK, map[string]any{
"reloaded": len(defs),
})
}
// HandleListRunners returns all registered runners (admin).
func (s *Server) HandleListRunners(w http.ResponseWriter, r *http.Request) {
runners, err := s.jobs.ListRunners(r.Context())
if err != nil {
s.logger.Error().Err(err).Msg("failed to list runners")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list runners")
return
}
// Redact token hashes from response
type runnerResponse struct {
ID string `json:"id"`
Name string `json:"name"`
TokenPrefix string `json:"token_prefix"`
Tags []string `json:"tags"`
Status string `json:"status"`
LastHeartbeat *string `json:"last_heartbeat,omitempty"`
LastJobID *string `json:"last_job_id,omitempty"`
Metadata map[string]any `json:"metadata,omitempty"`
CreatedAt string `json:"created_at"`
}
resp := make([]runnerResponse, len(runners))
for i, runner := range runners {
var hb *string
if runner.LastHeartbeat != nil {
formatted := runner.LastHeartbeat.Format("2006-01-02T15:04:05Z07:00")
hb = &formatted
}
resp[i] = runnerResponse{
ID: runner.ID,
Name: runner.Name,
TokenPrefix: runner.TokenPrefix,
Tags: runner.Tags,
Status: runner.Status,
LastHeartbeat: hb,
LastJobID: runner.LastJobID,
Metadata: runner.Metadata,
CreatedAt: runner.CreatedAt.Format("2006-01-02T15:04:05Z07:00"),
}
}
writeJSON(w, http.StatusOK, resp)
}
// HandleRegisterRunner creates a new runner and returns the token (admin).
func (s *Server) HandleRegisterRunner(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req struct {
Name string `json:"name"`
Tags []string `json:"tags"`
Metadata map[string]any `json:"metadata,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.Name == "" {
writeError(w, http.StatusBadRequest, "missing_field", "name is required")
return
}
if len(req.Tags) == 0 {
writeError(w, http.StatusBadRequest, "missing_field", "tags is required (at least one)")
return
}
rawToken, tokenHash, tokenPrefix := generateRunnerToken()
runner := &db.Runner{
Name: req.Name,
TokenHash: tokenHash,
TokenPrefix: tokenPrefix,
Tags: req.Tags,
Metadata: req.Metadata,
}
if err := s.jobs.RegisterRunner(ctx, runner); err != nil {
s.logger.Error().Err(err).Msg("failed to register runner")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to register runner")
return
}
s.broker.Publish("runner.online", mustMarshal(map[string]any{
"runner_id": runner.ID,
"name": runner.Name,
}))
writeJSON(w, http.StatusCreated, map[string]any{
"id": runner.ID,
"name": runner.Name,
"token": rawToken,
"tags": runner.Tags,
})
}
// HandleDeleteRunner removes a runner (admin).
func (s *Server) HandleDeleteRunner(w http.ResponseWriter, r *http.Request) {
runnerID := chi.URLParam(r, "runnerID")
if err := s.jobs.DeleteRunner(r.Context(), runnerID); err != nil {
writeError(w, http.StatusNotFound, "not_found", err.Error())
return
}
w.WriteHeader(http.StatusNoContent)
}
// triggerJobs creates jobs for all enabled definitions matching the trigger type.
// It applies trigger filters (e.g. item_type) before creating each job.
func (s *Server) triggerJobs(ctx context.Context, triggerType string, itemID string, item *db.Item) {
if !s.modules.IsEnabled("jobs") {
return
}
defs, err := s.jobs.GetDefinitionsByTrigger(ctx, triggerType)
if err != nil {
s.logger.Error().Err(err).Str("trigger", triggerType).Msg("failed to get job definitions for trigger")
return
}
for _, def := range defs {
// Apply trigger filter (e.g. item_type == "assembly")
if def.Definition != nil {
if triggerCfg, ok := def.Definition["trigger"].(map[string]any); ok {
if filterCfg, ok := triggerCfg["filter"].(map[string]any); ok {
if reqType, ok := filterCfg["item_type"].(string); ok && item != nil {
if item.ItemType != reqType {
continue
}
}
}
}
}
job := &db.Job{
JobDefinitionID: &def.ID,
DefinitionName: def.Name,
Priority: def.Priority,
ItemID: &itemID,
RunnerTags: def.RunnerTags,
TimeoutSeconds: def.TimeoutSeconds,
MaxRetries: def.MaxRetries,
}
if err := s.jobs.CreateJob(ctx, job); err != nil {
s.logger.Error().Err(err).Str("definition", def.Name).Msg("failed to create triggered job")
continue
}
s.broker.Publish("job.created", mustMarshal(map[string]any{
"job_id": job.ID,
"definition_name": def.Name,
"trigger": triggerType,
"item_id": itemID,
}))
s.logger.Info().
Str("job_id", job.ID).
Str("definition", def.Name).
Str("trigger", triggerType).
Str("item_id", itemID).
Msg("triggered job")
}
}
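The nested type assertions in triggerJobs's filter walk are the fiddliest part of the function. The same logic can be sketched as a standalone helper (the name `matchesItemTypeFilter` is hypothetical, not from the codebase):

```go
package main

import "fmt"

// matchesItemTypeFilter mirrors the trigger-filter walk in triggerJobs:
// definition["trigger"]["filter"]["item_type"] must equal the item's type
// for the job to fire; any missing level means "no filter, always match".
func matchesItemTypeFilter(definition map[string]any, itemType string) bool {
	triggerCfg, ok := definition["trigger"].(map[string]any)
	if !ok {
		return true // no trigger config: nothing to filter on
	}
	filterCfg, ok := triggerCfg["filter"].(map[string]any)
	if !ok {
		return true
	}
	reqType, ok := filterCfg["item_type"].(string)
	if !ok {
		return true
	}
	return itemType == reqType
}

func main() {
	def := map[string]any{
		"trigger": map[string]any{
			"filter": map[string]any{"item_type": "assembly"},
		},
	}
	fmt.Println(matchesItemTypeFilter(def, "assembly"), matchesItemTypeFilter(def, "part"))
	// → true false
}
```

Note that every absent level defaults to "match", which is why a definition with no filter fires for every item of its trigger type.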


@@ -0,0 +1,595 @@
package api
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
)
func newJobTestServer(t *testing.T) *Server {
t.Helper()
pool := testutil.MustConnectTestPool(t)
database := db.NewFromPool(pool)
broker := NewBroker(zerolog.Nop())
state := NewServerState(zerolog.Nop(), nil, broker)
return NewServer(
zerolog.Nop(),
database,
map[string]*schema.Schema{},
"",
nil, nil, nil, nil, nil,
broker, state,
nil, "",
modules.NewRegistry(), nil,
)
}
func newJobRouter(s *Server) http.Handler {
r := chi.NewRouter()
r.Route("/api/jobs", func(r chi.Router) {
r.Get("/", s.HandleListJobs)
r.Get("/{jobID}", s.HandleGetJob)
r.Get("/{jobID}/logs", s.HandleGetJobLogs)
r.Post("/", s.HandleCreateJob)
r.Post("/{jobID}/cancel", s.HandleCancelJob)
})
r.Route("/api/job-definitions", func(r chi.Router) {
r.Get("/", s.HandleListJobDefinitions)
r.Get("/{name}", s.HandleGetJobDefinition)
})
r.Route("/api/runners", func(r chi.Router) {
r.Get("/", s.HandleListRunners)
r.Post("/", s.HandleRegisterRunner)
r.Delete("/{runnerID}", s.HandleDeleteRunner)
})
return r
}
func seedJobDefinition(t *testing.T, s *Server) *db.JobDefinitionRecord {
t.Helper()
rec := &db.JobDefinitionRecord{
Name: "test-validate",
Version: 1,
TriggerType: "manual",
ScopeType: "item",
ComputeType: "validate",
RunnerTags: []string{"create"},
TimeoutSeconds: 300,
MaxRetries: 1,
Priority: 100,
Definition: map[string]any{"compute": map[string]any{"command": "create-validate"}},
Enabled: true,
}
if err := s.jobs.UpsertDefinition(context.Background(), rec); err != nil {
t.Fatalf("seeding job definition: %v", err)
}
return rec
}
func TestHandleListJobDefinitions(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
req := httptest.NewRequest("GET", "/api/job-definitions", nil)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var defs []map[string]any
json.Unmarshal(w.Body.Bytes(), &defs)
if len(defs) == 0 {
t.Error("expected at least one definition")
}
}
func TestHandleGetJobDefinition(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
req := httptest.NewRequest("GET", "/api/job-definitions/test-validate", nil)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
var def map[string]any
json.Unmarshal(w.Body.Bytes(), &def)
if def["name"] != "test-validate" {
t.Errorf("expected name test-validate, got %v", def["name"])
}
}
func TestHandleCreateAndGetJob(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
// Create a job
body := `{"definition_name": "test-validate"}`
req := httptest.NewRequest("POST", "/api/jobs", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create: expected 201, got %d: %s", w.Code, w.Body.String())
}
var job map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &job); err != nil {
t.Fatalf("decoding create response: %v", err)
}
jobID, _ := job["ID"].(string)
if jobID == "" {
t.Fatal("job ID is empty")
}
// Get the job
req2 := httptest.NewRequest("GET", "/api/jobs/"+jobID, nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("get: expected 200, got %d: %s", w2.Code, w2.Body.String())
}
}
func TestHandleCancelJob(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
// Create a job
body := `{"definition_name": "test-validate"}`
req := httptest.NewRequest("POST", "/api/jobs", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
var job map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &job); err != nil {
t.Fatalf("decoding create response: %v", err)
}
jobID, _ := job["ID"].(string)
if jobID == "" {
t.Fatal("job ID is empty")
}
// Cancel the job
req2 := httptest.NewRequest("POST", "/api/jobs/"+jobID+"/cancel", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("cancel: expected 200, got %d: %s", w2.Code, w2.Body.String())
}
}
func TestHandleListJobs(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
// Create a job
body := `{"definition_name": "test-validate"}`
req := httptest.NewRequest("POST", "/api/jobs", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
// List jobs
req2 := httptest.NewRequest("GET", "/api/jobs", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("list: expected 200, got %d: %s", w2.Code, w2.Body.String())
}
var jobs []map[string]any
json.Unmarshal(w2.Body.Bytes(), &jobs)
if len(jobs) == 0 {
t.Error("expected at least one job")
}
}
func TestHandleListJobs_FilterByStatus(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
seedJobDefinition(t, s)
// Create a job
body := `{"definition_name": "test-validate"}`
req := httptest.NewRequest("POST", "/api/jobs", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
// Filter by pending
req2 := httptest.NewRequest("GET", "/api/jobs?status=pending", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("expected 200, got %d", w2.Code)
}
var jobs []map[string]any
json.Unmarshal(w2.Body.Bytes(), &jobs)
if len(jobs) == 0 {
t.Error("expected pending jobs")
}
// Filter by completed (should be empty)
req3 := httptest.NewRequest("GET", "/api/jobs?status=completed", nil)
w3 := httptest.NewRecorder()
r.ServeHTTP(w3, req3)
var completedJobs []map[string]any
json.Unmarshal(w3.Body.Bytes(), &completedJobs)
if len(completedJobs) != 0 {
t.Errorf("expected no completed jobs, got %d", len(completedJobs))
}
}
func TestHandleRegisterAndListRunners(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
// Register a runner
body := `{"name": "test-runner-1", "tags": ["create", "linux"]}`
req := httptest.NewRequest("POST", "/api/runners", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("register: expected 201, got %d: %s", w.Code, w.Body.String())
}
var resp map[string]any
json.Unmarshal(w.Body.Bytes(), &resp)
tok, _ := resp["token"].(string)
if tok == "" {
t.Fatal("expected a non-empty token in response")
}
if !strings.HasPrefix(tok, "silo_runner_") {
t.Errorf("expected token to start with silo_runner_, got %s", tok)
}
// List runners
req2 := httptest.NewRequest("GET", "/api/runners", nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusOK {
t.Fatalf("list: expected 200, got %d", w2.Code)
}
var runners []map[string]any
json.Unmarshal(w2.Body.Bytes(), &runners)
if len(runners) == 0 {
t.Error("expected at least one runner")
}
// Token hash should not be exposed
for _, runner := range runners {
if runner["token_hash"] != nil {
t.Error("token_hash should not be in response")
}
}
}
func TestHandleDeleteRunner(t *testing.T) {
s := newJobTestServer(t)
r := newJobRouter(s)
// Register a runner
body := `{"name": "test-runner-delete", "tags": ["create"]}`
req := httptest.NewRequest("POST", "/api/runners", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
var resp map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("decoding register response: %v", err)
}
runnerID, _ := resp["id"].(string)
if runnerID == "" {
t.Fatal("runner ID is empty")
}
// Delete the runner
req2 := httptest.NewRequest("DELETE", "/api/runners/"+runnerID, nil)
w2 := httptest.NewRecorder()
r.ServeHTTP(w2, req2)
if w2.Code != http.StatusNoContent {
t.Fatalf("delete: expected 204, got %d: %s", w2.Code, w2.Body.String())
}
}
// --- Trigger integration tests ---
// newTriggerRouter builds a router with items, revisions, BOM, and jobs routes
// so that HTTP-based actions can fire triggerJobs in a background goroutine.
func newTriggerRouter(s *Server) http.Handler {
r := chi.NewRouter()
r.Route("/api/items", func(r chi.Router) {
r.Post("/", s.HandleCreateItem)
r.Route("/{partNumber}", func(r chi.Router) {
r.Post("/revisions", s.HandleCreateRevision)
r.Post("/bom", s.HandleAddBOMEntry)
r.Put("/bom/{childPartNumber}", s.HandleUpdateBOMEntry)
r.Delete("/bom/{childPartNumber}", s.HandleDeleteBOMEntry)
})
})
r.Route("/api/jobs", func(r chi.Router) {
r.Get("/", s.HandleListJobs)
})
return r
}
func waitForJobs(t *testing.T, s *Server, itemID string, wantCount int) []*db.Job {
t.Helper()
// triggerJobs runs in a goroutine; poll up to 2 seconds.
for i := 0; i < 20; i++ {
jobs, err := s.jobs.ListJobs(context.Background(), "", itemID, 50, 0)
if err != nil {
t.Fatalf("listing jobs: %v", err)
}
if len(jobs) >= wantCount {
return jobs
}
time.Sleep(100 * time.Millisecond)
}
jobs, _ := s.jobs.ListJobs(context.Background(), "", itemID, 50, 0)
return jobs
}
func TestTriggerJobsOnRevisionCreate(t *testing.T) {
s := newJobTestServer(t)
if err := s.modules.SetEnabled("jobs", true); err != nil {
t.Fatalf("enabling jobs module: %v", err)
}
router := newTriggerRouter(s)
// Create an item.
createItemDirect(t, s, "TRIG-REV-001", "trigger test item", nil)
// Seed a job definition that triggers on revision_created.
def := &db.JobDefinitionRecord{
Name: "rev-trigger-test",
Version: 1,
TriggerType: "revision_created",
ScopeType: "item",
ComputeType: "validate",
RunnerTags: []string{"test"},
TimeoutSeconds: 60,
MaxRetries: 0,
Priority: 100,
Enabled: true,
}
if err := s.jobs.UpsertDefinition(context.Background(), def); err != nil {
t.Fatalf("seeding definition: %v", err)
}
// Create a revision via HTTP (fires triggerJobs in goroutine).
body := `{"properties":{"material":"steel"},"comment":"trigger test"}`
req := authRequest(httptest.NewRequest("POST", "/api/items/TRIG-REV-001/revisions", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create revision: expected 201, got %d: %s", w.Code, w.Body.String())
}
// Get the item ID to filter jobs.
item, _ := s.items.GetByPartNumber(context.Background(), "TRIG-REV-001")
if item == nil {
t.Fatal("item not found after creation")
}
jobs := waitForJobs(t, s, item.ID, 1)
if len(jobs) == 0 {
t.Fatal("expected at least 1 triggered job, got 0")
}
if jobs[0].DefinitionName != "rev-trigger-test" {
t.Errorf("expected definition name rev-trigger-test, got %s", jobs[0].DefinitionName)
}
}
func TestTriggerJobsOnBOMChange(t *testing.T) {
s := newJobTestServer(t)
if err := s.modules.SetEnabled("jobs", true); err != nil {
t.Fatalf("enabling jobs module: %v", err)
}
router := newTriggerRouter(s)
// Create parent and child items.
createItemDirect(t, s, "TRIG-BOM-P", "parent", nil)
createItemDirect(t, s, "TRIG-BOM-C", "child", nil)
// Seed a bom_changed job definition.
def := &db.JobDefinitionRecord{
Name: "bom-trigger-test",
Version: 1,
TriggerType: "bom_changed",
ScopeType: "item",
ComputeType: "validate",
RunnerTags: []string{"test"},
TimeoutSeconds: 60,
MaxRetries: 0,
Priority: 100,
Enabled: true,
}
if err := s.jobs.UpsertDefinition(context.Background(), def); err != nil {
t.Fatalf("seeding definition: %v", err)
}
// Add a BOM entry via HTTP.
body := `{"child_part_number":"TRIG-BOM-C","rel_type":"component","quantity":2}`
req := authRequest(httptest.NewRequest("POST", "/api/items/TRIG-BOM-P/bom", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("add BOM entry: expected 201, got %d: %s", w.Code, w.Body.String())
}
// Get the parent item ID.
parent, _ := s.items.GetByPartNumber(context.Background(), "TRIG-BOM-P")
if parent == nil {
t.Fatal("parent item not found")
}
jobs := waitForJobs(t, s, parent.ID, 1)
if len(jobs) == 0 {
t.Fatal("expected at least 1 triggered job, got 0")
}
if jobs[0].DefinitionName != "bom-trigger-test" {
t.Errorf("expected definition name bom-trigger-test, got %s", jobs[0].DefinitionName)
}
}
func TestTriggerJobsFilterMismatch(t *testing.T) {
s := newJobTestServer(t)
if err := s.modules.SetEnabled("jobs", true); err != nil {
t.Fatalf("enabling jobs module: %v", err)
}
router := newTriggerRouter(s)
// Create a "part" type item (not "assembly").
createItemDirect(t, s, "TRIG-FILT-P", "filter parent", nil)
createItemDirect(t, s, "TRIG-FILT-C", "filter child", nil)
// Seed a definition that only triggers for assembly items.
def := &db.JobDefinitionRecord{
Name: "assembly-only-test",
Version: 1,
TriggerType: "bom_changed",
ScopeType: "item",
ComputeType: "validate",
RunnerTags: []string{"test"},
TimeoutSeconds: 60,
MaxRetries: 0,
Priority: 100,
Enabled: true,
Definition: map[string]any{
"trigger": map[string]any{
"filter": map[string]any{
"item_type": "assembly",
},
},
},
}
if err := s.jobs.UpsertDefinition(context.Background(), def); err != nil {
t.Fatalf("seeding definition: %v", err)
}
// Add a BOM entry on a "part" item (should NOT match assembly filter).
body := `{"child_part_number":"TRIG-FILT-C","rel_type":"component","quantity":1}`
req := authRequest(httptest.NewRequest("POST", "/api/items/TRIG-FILT-P/bom", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("add BOM entry: expected 201, got %d: %s", w.Code, w.Body.String())
}
// Wait briefly, then verify no jobs were created.
parent, _ := s.items.GetByPartNumber(context.Background(), "TRIG-FILT-P")
if parent == nil {
t.Fatal("parent item not found")
}
time.Sleep(500 * time.Millisecond)
jobs, err := s.jobs.ListJobs(context.Background(), "", parent.ID, 50, 0)
if err != nil {
t.Fatalf("listing jobs: %v", err)
}
if len(jobs) != 0 {
t.Errorf("expected 0 jobs (filter mismatch), got %d", len(jobs))
}
}
func TestTriggerJobsModuleDisabled(t *testing.T) {
s := newJobTestServer(t)
// Jobs module is disabled by default in NewRegistry().
router := newTriggerRouter(s)
// Create items.
createItemDirect(t, s, "TRIG-DIS-P", "disabled parent", nil)
createItemDirect(t, s, "TRIG-DIS-C", "disabled child", nil)
// Seed a bom_changed definition (it exists in DB but module is off).
def := &db.JobDefinitionRecord{
Name: "disabled-trigger-test",
Version: 1,
TriggerType: "bom_changed",
ScopeType: "item",
ComputeType: "validate",
RunnerTags: []string{"test"},
TimeoutSeconds: 60,
MaxRetries: 0,
Priority: 100,
Enabled: true,
}
if err := s.jobs.UpsertDefinition(context.Background(), def); err != nil {
t.Fatalf("seeding definition: %v", err)
}
// Add a BOM entry with jobs module disabled.
body := `{"child_part_number":"TRIG-DIS-C","rel_type":"component","quantity":1}`
req := authRequest(httptest.NewRequest("POST", "/api/items/TRIG-DIS-P/bom", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("add BOM entry: expected 201, got %d: %s", w.Code, w.Body.String())
}
// Wait briefly, then verify no jobs were created.
parent, _ := s.items.GetByPartNumber(context.Background(), "TRIG-DIS-P")
if parent == nil {
t.Fatal("parent item not found")
}
time.Sleep(500 * time.Millisecond)
jobs, err := s.jobs.ListJobs(context.Background(), "", parent.ID, 50, 0)
if err != nil {
t.Fatalf("listing jobs: %v", err)
}
if len(jobs) != 0 {
t.Errorf("expected 0 jobs (module disabled), got %d", len(jobs))
}
}
func TestGenerateRunnerToken(t *testing.T) {
raw, hash, prefix := generateRunnerToken()
if !strings.HasPrefix(raw, "silo_runner_") {
t.Errorf("raw token should start with silo_runner_, got %s", raw[:20])
}
if len(hash) != 64 {
t.Errorf("hash should be 64 hex chars, got %d", len(hash))
}
if len(prefix) != 20 {
t.Errorf("prefix should be 20 chars, got %d: %s", len(prefix), prefix)
}
// Two tokens should be different
raw2, _, _ := generateRunnerToken()
if raw == raw2 {
t.Error("two generated tokens should be different")
}
}
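generateRunnerToken itself is not shown in this diff. A plausible sketch consistent with the assertions above (the `silo_runner_` prefix, a 64-hex-char hash, a 20-character prefix) would pair an opaque bearer token with a SHA-256 digest for storage — treat the function body here as an illustration, not the actual implementation:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sketchRunnerToken is a hypothetical stand-in for generateRunnerToken.
// The raw token is returned once to the caller; only the hash is stored,
// and the prefix gives admins a short identifier for display.
func sketchRunnerToken() (raw, hash, prefix string) {
	buf := make([]byte, 24)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	raw = "silo_runner_" + hex.EncodeToString(buf) // opaque bearer token
	sum := sha256.Sum256([]byte(raw))
	hash = hex.EncodeToString(sum[:]) // 64 hex chars, safe to persist
	prefix = raw[:20]                 // 12-char literal + 8 token chars
	return raw, hash, prefix
}

func main() {
	raw, hash, prefix := sketchRunnerToken()
	fmt.Println(len(hash), len(prefix), raw[:12])
	// → 64 20 silo_runner_
}
```

Storing only the hash means a leaked runners table does not leak usable credentials, which is why TestHandleRegisterAndListRunners checks that `token_hash` never appears in API responses.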


@@ -0,0 +1,234 @@
package api
import (
"encoding/json"
"net/http"
"strings"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/db"
)
// LocationResponse is the API representation of a location.
type LocationResponse struct {
ID string `json:"id"`
Path string `json:"path"`
Name string `json:"name"`
ParentID *string `json:"parent_id,omitempty"`
LocationType string `json:"location_type"`
Depth int `json:"depth"`
Metadata map[string]any `json:"metadata,omitempty"`
CreatedAt string `json:"created_at"`
}
// CreateLocationRequest represents a request to create a location.
type CreateLocationRequest struct {
Path string `json:"path"`
Name string `json:"name"`
LocationType string `json:"location_type"`
Metadata map[string]any `json:"metadata,omitempty"`
}
// UpdateLocationRequest represents a request to update a location.
type UpdateLocationRequest struct {
Name string `json:"name"`
LocationType string `json:"location_type"`
Metadata map[string]any `json:"metadata,omitempty"`
}
func locationToResponse(loc *db.Location) LocationResponse {
return LocationResponse{
ID: loc.ID,
Path: loc.Path,
Name: loc.Name,
ParentID: loc.ParentID,
LocationType: loc.LocationType,
Depth: loc.Depth,
Metadata: loc.Metadata,
CreatedAt: loc.CreatedAt.Format("2006-01-02T15:04:05Z07:00"),
}
}
// HandleListLocations lists all locations. If ?tree={path} is set, returns
// only that subtree.
func (s *Server) HandleListLocations(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
treePath := r.URL.Query().Get("tree")
if treePath != "" {
locs, err := s.locations.GetTree(ctx, treePath)
if err != nil {
s.logger.Error().Err(err).Str("tree", treePath).Msg("failed to get location tree")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get location tree")
return
}
writeJSON(w, http.StatusOK, locationsToResponse(locs))
return
}
locs, err := s.locations.List(ctx)
if err != nil {
s.logger.Error().Err(err).Msg("failed to list locations")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to list locations")
return
}
writeJSON(w, http.StatusOK, locationsToResponse(locs))
}
// HandleCreateLocation creates a new location.
func (s *Server) HandleCreateLocation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var req CreateLocationRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_json", err.Error())
return
}
if req.Path == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Path is required")
return
}
if req.Name == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Name is required")
return
}
if req.LocationType == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Location type is required")
return
}
// Normalize: trim slashes
req.Path = strings.Trim(req.Path, "/")
loc := &db.Location{
Path: req.Path,
Name: req.Name,
LocationType: req.LocationType,
Metadata: req.Metadata,
}
if loc.Metadata == nil {
loc.Metadata = map[string]any{}
}
if err := s.locations.Create(ctx, loc); err != nil {
if strings.Contains(err.Error(), "parent location") || strings.Contains(err.Error(), "does not exist") {
writeError(w, http.StatusBadRequest, "invalid_parent", err.Error())
return
}
if strings.Contains(err.Error(), "duplicate key") || strings.Contains(err.Error(), "unique") {
writeError(w, http.StatusConflict, "already_exists", "Location path already exists")
return
}
s.logger.Error().Err(err).Str("path", req.Path).Msg("failed to create location")
writeError(w, http.StatusInternalServerError, "create_failed", err.Error())
return
}
writeJSON(w, http.StatusCreated, locationToResponse(loc))
}
// HandleGetLocation retrieves a location by path. The path is the rest of the
// URL after /api/locations/, which chi captures as a wildcard.
func (s *Server) HandleGetLocation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
path := strings.Trim(chi.URLParam(r, "*"), "/")
if path == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Location path is required")
return
}
loc, err := s.locations.GetByPath(ctx, path)
if err != nil {
s.logger.Error().Err(err).Str("path", path).Msg("failed to get location")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get location")
return
}
if loc == nil {
writeError(w, http.StatusNotFound, "not_found", "Location not found")
return
}
writeJSON(w, http.StatusOK, locationToResponse(loc))
}
// HandleUpdateLocation updates a location by path.
func (s *Server) HandleUpdateLocation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
path := strings.Trim(chi.URLParam(r, "*"), "/")
if path == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Location path is required")
return
}
var req UpdateLocationRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_json", err.Error())
return
}
if req.Name == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Name is required")
return
}
if req.LocationType == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Location type is required")
return
}
meta := req.Metadata
if meta == nil {
meta = map[string]any{}
}
if err := s.locations.Update(ctx, path, req.Name, req.LocationType, meta); err != nil {
if strings.Contains(err.Error(), "not found") {
writeError(w, http.StatusNotFound, "not_found", "Location not found")
return
}
s.logger.Error().Err(err).Str("path", path).Msg("failed to update location")
writeError(w, http.StatusInternalServerError, "update_failed", err.Error())
return
}
loc, err := s.locations.GetByPath(ctx, path)
if err != nil || loc == nil {
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to load location after update")
return
}
writeJSON(w, http.StatusOK, locationToResponse(loc))
}
// HandleDeleteLocation deletes a location by path. Rejects if inventory exists.
func (s *Server) HandleDeleteLocation(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
path := strings.Trim(chi.URLParam(r, "*"), "/")
if path == "" {
writeError(w, http.StatusBadRequest, "invalid_request", "Location path is required")
return
}
if err := s.locations.Delete(ctx, path); err != nil {
if strings.Contains(err.Error(), "inventory record") {
writeError(w, http.StatusConflict, "has_inventory", err.Error())
return
}
if strings.Contains(err.Error(), "not found") {
writeError(w, http.StatusNotFound, "not_found", "Location not found")
return
}
s.logger.Error().Err(err).Str("path", path).Msg("failed to delete location")
writeError(w, http.StatusInternalServerError, "delete_failed", err.Error())
return
}
w.WriteHeader(http.StatusNoContent)
}
func locationsToResponse(locs []*db.Location) []LocationResponse {
result := make([]LocationResponse, len(locs))
for i, l := range locs {
result[i] = locationToResponse(l)
}
return result
}
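The Depth values that the location tests assert (root = 0, child = 1, grandchild = 2) follow directly from the slash-separated path convention these handlers normalize. A minimal sketch, assuming depth is simply the number of separators in the trimmed path — the real computation lives in the db layer, which is not shown in this diff:

```go
package main

import (
	"fmt"
	"strings"
)

// locationDepth sketches the depth convention implied by the tests:
// paths are slash-separated after trimming leading/trailing slashes,
// and depth counts the remaining separators ("lab" -> 0,
// "warehouse/shelf-a" -> 1). Illustrative only.
func locationDepth(path string) int {
	path = strings.Trim(path, "/")
	if path == "" {
		return 0
	}
	return strings.Count(path, "/")
}

func main() {
	fmt.Println(locationDepth("lab"), locationDepth("warehouse/shelf-a/bin-3"))
	// → 0 2
}
```

Trimming before counting matches HandleCreateLocation's normalization, so "/lab/" and "lab" resolve to the same depth.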


@@ -0,0 +1,323 @@
package api
import (
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/go-chi/chi/v5"
)
func newLocationRouter(s *Server) http.Handler {
r := chi.NewRouter()
r.Get("/api/locations", s.HandleListLocations)
r.Post("/api/locations", s.HandleCreateLocation)
r.Get("/api/locations/*", s.HandleGetLocation)
r.Put("/api/locations/*", s.HandleUpdateLocation)
r.Delete("/api/locations/*", s.HandleDeleteLocation)
return r
}
func TestHandleListLocationsEmpty(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
req := httptest.NewRequest("GET", "/api/locations", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var locs []LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &locs); err != nil {
t.Fatalf("decoding response: %v", err)
}
if len(locs) != 0 {
t.Fatalf("expected 0 locations, got %d", len(locs))
}
}
func TestHandleCreateAndGetLocation(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
// Create root location
body := `{"path": "lab", "name": "Lab", "location_type": "building"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create status: got %d, want %d; body: %s", w.Code, http.StatusCreated, w.Body.String())
}
var created LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &created); err != nil {
t.Fatalf("decoding create response: %v", err)
}
if created.Path != "lab" {
t.Errorf("path: got %q, want %q", created.Path, "lab")
}
if created.Name != "Lab" {
t.Errorf("name: got %q, want %q", created.Name, "Lab")
}
if created.Depth != 0 {
t.Errorf("depth: got %d, want 0", created.Depth)
}
if created.ID == "" {
t.Error("expected ID to be set")
}
// Get by path
req = httptest.NewRequest("GET", "/api/locations/lab", nil)
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("get status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var got LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &got); err != nil {
t.Fatalf("decoding get response: %v", err)
}
if got.ID != created.ID {
t.Errorf("ID mismatch: got %q, want %q", got.ID, created.ID)
}
}
func TestHandleCreateNestedLocation(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
// Create root
body := `{"path": "warehouse", "name": "Warehouse", "location_type": "building"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create root: got %d; body: %s", w.Code, w.Body.String())
}
// Create child
body = `{"path": "warehouse/shelf-a", "name": "Shelf A", "location_type": "shelf"}`
req = httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create child: got %d; body: %s", w.Code, w.Body.String())
}
var child LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &child); err != nil {
t.Fatalf("decoding child response: %v", err)
}
if child.Depth != 1 {
t.Errorf("child depth: got %d, want 1", child.Depth)
}
if child.ParentID == nil {
t.Error("expected parent_id to be set")
}
// Create grandchild
body = `{"path": "warehouse/shelf-a/bin-3", "name": "Bin 3", "location_type": "bin"}`
req = httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create grandchild: got %d; body: %s", w.Code, w.Body.String())
}
var gc LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &gc); err != nil {
t.Fatalf("decoding grandchild response: %v", err)
}
if gc.Depth != 2 {
t.Errorf("grandchild depth: got %d, want 2", gc.Depth)
}
// Get nested path
req = httptest.NewRequest("GET", "/api/locations/warehouse/shelf-a/bin-3", nil)
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("get nested: got %d; body: %s", w.Code, w.Body.String())
}
}
func TestHandleCreateLocationMissingParent(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
body := `{"path": "nonexistent/child", "name": "Child", "location_type": "shelf"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusBadRequest {
t.Fatalf("expected 400, got %d; body: %s", w.Code, w.Body.String())
}
}
func TestHandleUpdateLocation(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
// Create
body := `{"path": "office", "name": "Office", "location_type": "room"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create: got %d; body: %s", w.Code, w.Body.String())
}
// Update
body = `{"name": "Main Office", "location_type": "building", "metadata": {"floor": 2}}`
req = httptest.NewRequest("PUT", "/api/locations/office", strings.NewReader(body))
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("update: got %d; body: %s", w.Code, w.Body.String())
}
var updated LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &updated); err != nil {
t.Fatalf("decoding update response: %v", err)
}
if updated.Name != "Main Office" {
t.Errorf("name: got %q, want %q", updated.Name, "Main Office")
}
if updated.LocationType != "building" {
t.Errorf("type: got %q, want %q", updated.LocationType, "building")
}
}
func TestHandleDeleteLocation(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
// Create
body := `{"path": "temp", "name": "Temp", "location_type": "area"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create: got %d; body: %s", w.Code, w.Body.String())
}
// Delete
req = httptest.NewRequest("DELETE", "/api/locations/temp", nil)
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusNoContent {
t.Fatalf("delete: got %d, want %d; body: %s", w.Code, http.StatusNoContent, w.Body.String())
}
// Verify gone
req = httptest.NewRequest("GET", "/api/locations/temp", nil)
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Fatalf("get after delete: got %d, want %d", w.Code, http.StatusNotFound)
}
}
func TestHandleDeleteLocationNotFound(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
req := httptest.NewRequest("DELETE", "/api/locations/doesnotexist", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Fatalf("delete missing: got %d, want %d; body: %s", w.Code, http.StatusNotFound, w.Body.String())
}
}
func TestHandleListLocationsTree(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
// Create hierarchy
for _, loc := range []string{
`{"path": "site", "name": "Site", "location_type": "site"}`,
`{"path": "site/bldg", "name": "Building", "location_type": "building"}`,
`{"path": "site/bldg/room1", "name": "Room 1", "location_type": "room"}`,
`{"path": "other", "name": "Other", "location_type": "site"}`,
} {
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(loc))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("create: got %d; body: %s", w.Code, w.Body.String())
}
}
// List tree under "site"
req := httptest.NewRequest("GET", "/api/locations?tree=site", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("tree: got %d; body: %s", w.Code, w.Body.String())
}
var locs []LocationResponse
if err := json.Unmarshal(w.Body.Bytes(), &locs); err != nil {
t.Fatalf("decoding tree response: %v", err)
}
if len(locs) != 3 {
t.Fatalf("tree count: got %d, want 3 (site + bldg + room1)", len(locs))
}
// Full list should have 4
req = httptest.NewRequest("GET", "/api/locations", nil)
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if err := json.Unmarshal(w.Body.Bytes(), &locs); err != nil {
t.Fatalf("decoding full list response: %v", err)
}
if len(locs) != 4 {
t.Fatalf("full list: got %d, want 4", len(locs))
}
}
func TestHandleCreateLocationDuplicate(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
body := `{"path": "dup", "name": "Dup", "location_type": "area"}`
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusCreated {
t.Fatalf("first create: got %d; body: %s", w.Code, w.Body.String())
}
// Duplicate
req = httptest.NewRequest("POST", "/api/locations", strings.NewReader(body))
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusConflict {
t.Fatalf("duplicate: got %d, want %d; body: %s", w.Code, http.StatusConflict, w.Body.String())
}
}
func TestHandleCreateLocationValidation(t *testing.T) {
s := newTestServer(t)
router := newLocationRouter(s)
tests := []struct {
name string
body string
}{
{"missing path", `{"name": "X", "location_type": "area"}`},
{"missing name", `{"path": "x", "location_type": "area"}`},
{"missing type", `{"path": "x", "name": "X"}`},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
req := httptest.NewRequest("POST", "/api/locations", strings.NewReader(tc.body))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusBadRequest {
t.Fatalf("got %d, want 400; body: %s", w.Code, w.Body.String())
}
})
}
}

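The nesting tests above assert depths of 0, 1, and 2 for one-, two-, and three-segment paths. A minimal sketch of that relationship, assuming the server derives depth from the slash count of the path (the tests only observe the result; the handler's actual implementation is not shown here):

```go
package main

import (
	"fmt"
	"strings"
)

// depthOf returns the depth the tests above expect: a root path such as
// "lab" is depth 0, and each "/" separator adds one level.
// Assumption: the server computes depth from the path like this.
func depthOf(path string) int {
	return strings.Count(path, "/")
}

func main() {
	fmt.Println(depthOf("lab"))                     // 0
	fmt.Println(depthOf("warehouse/shelf-a"))       // 1
	fmt.Println(depthOf("warehouse/shelf-a/bin-3")) // 2
}
```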
View File

@@ -0,0 +1,451 @@
package api
import (
"context"
"encoding/json"
"io"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/kc"
)
// validTransitions defines allowed lifecycle state transitions for Phase 1.
var validTransitions = map[string][]string{
"draft": {"review"},
"review": {"draft", "released"},
"released": {"obsolete"},
"obsolete": {},
}
// MetadataResponse is the JSON representation returned by GET /metadata.
type MetadataResponse struct {
SchemaName *string `json:"schema_name"`
LifecycleState string `json:"lifecycle_state"`
Tags []string `json:"tags"`
Fields map[string]any `json:"fields"`
Manifest *ManifestInfo `json:"manifest,omitempty"`
UpdatedAt string `json:"updated_at"`
UpdatedBy *string `json:"updated_by,omitempty"`
}
// ManifestInfo is the manifest subset included in MetadataResponse.
type ManifestInfo struct {
UUID *string `json:"uuid,omitempty"`
SiloInstance *string `json:"silo_instance,omitempty"`
RevisionHash *string `json:"revision_hash,omitempty"`
KCVersion *string `json:"kc_version,omitempty"`
}
func metadataToResponse(m *db.ItemMetadata) MetadataResponse {
resp := MetadataResponse{
SchemaName: m.SchemaName,
LifecycleState: m.LifecycleState,
Tags: m.Tags,
Fields: m.Fields,
UpdatedAt: m.UpdatedAt.UTC().Format("2006-01-02T15:04:05Z"),
UpdatedBy: m.UpdatedBy,
}
if m.ManifestUUID != nil || m.SiloInstance != nil || m.RevisionHash != nil || m.KCVersion != nil {
resp.Manifest = &ManifestInfo{
UUID: m.ManifestUUID,
SiloInstance: m.SiloInstance,
RevisionHash: m.RevisionHash,
KCVersion: m.KCVersion,
}
}
return resp
}
// HandleGetMetadata returns indexed metadata for an item.
// GET /api/items/{partNumber}/metadata
func (s *Server) HandleGetMetadata(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
meta, err := s.metadata.Get(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get metadata")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get metadata")
return
}
if meta == nil {
writeError(w, http.StatusNotFound, "not_found", "No metadata indexed for this item")
return
}
writeJSON(w, http.StatusOK, metadataToResponse(meta))
}
// HandleUpdateMetadata merges fields into the metadata JSONB.
// PUT /api/items/{partNumber}/metadata
func (s *Server) HandleUpdateMetadata(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
var req struct {
Fields map[string]any `json:"fields"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if len(req.Fields) == 0 {
writeError(w, http.StatusBadRequest, "invalid_body", "Fields must not be empty")
return
}
username := ""
if user := auth.UserFromContext(ctx); user != nil {
username = user.Username
}
if err := s.metadata.UpdateFields(ctx, item.ID, req.Fields, username); err != nil {
s.logger.Error().Err(err).Msg("failed to update metadata fields")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to update metadata")
return
}
meta, err := s.metadata.Get(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to read back metadata")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to read metadata")
return
}
s.broker.Publish("metadata.updated", mustMarshal(map[string]any{
"part_number": partNumber,
"changed_fields": fieldKeys(req.Fields),
"lifecycle_state": meta.LifecycleState,
"updated_by": username,
}))
writeJSON(w, http.StatusOK, metadataToResponse(meta))
}
// HandleUpdateLifecycle transitions the lifecycle state.
// PATCH /api/items/{partNumber}/metadata/lifecycle
func (s *Server) HandleUpdateLifecycle(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
var req struct {
State string `json:"state"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.State == "" {
writeError(w, http.StatusBadRequest, "invalid_body", "State is required")
return
}
meta, err := s.metadata.Get(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get metadata")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get metadata")
return
}
if meta == nil {
writeError(w, http.StatusNotFound, "not_found", "No metadata indexed for this item")
return
}
// Validate transition
allowed := validTransitions[meta.LifecycleState]
valid := false
for _, allowedState := range allowed {
if allowedState == req.State {
valid = true
break
}
}
if !valid {
writeError(w, http.StatusUnprocessableEntity, "invalid_transition",
"Cannot transition from '"+meta.LifecycleState+"' to '"+req.State+"'")
return
}
username := ""
if user := auth.UserFromContext(ctx); user != nil {
username = user.Username
}
fromState := meta.LifecycleState
if err := s.metadata.UpdateLifecycle(ctx, item.ID, req.State, username); err != nil {
s.logger.Error().Err(err).Msg("failed to update lifecycle")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to update lifecycle")
return
}
s.broker.Publish("metadata.lifecycle", mustMarshal(map[string]any{
"part_number": partNumber,
"from_state": fromState,
"to_state": req.State,
"updated_by": username,
}))
writeJSON(w, http.StatusOK, map[string]string{"lifecycle_state": req.State})
}
// HandleUpdateTags adds/removes tags.
// PATCH /api/items/{partNumber}/metadata/tags
func (s *Server) HandleUpdateTags(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
partNumber := chi.URLParam(r, "partNumber")
item, err := s.items.GetByPartNumber(ctx, partNumber)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get item")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get item")
return
}
if item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
var req struct {
Add []string `json:"add"`
Remove []string `json:"remove"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if len(req.Add) == 0 && len(req.Remove) == 0 {
writeError(w, http.StatusBadRequest, "invalid_body", "Must provide 'add' or 'remove'")
return
}
meta, err := s.metadata.Get(ctx, item.ID)
if err != nil {
s.logger.Error().Err(err).Msg("failed to get metadata")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to get metadata")
return
}
if meta == nil {
writeError(w, http.StatusNotFound, "not_found", "No metadata indexed for this item")
return
}
// Compute new tag set: (existing + add) - remove
tagSet := make(map[string]struct{})
for _, t := range meta.Tags {
tagSet[t] = struct{}{}
}
for _, t := range req.Add {
tagSet[t] = struct{}{}
}
removeSet := make(map[string]struct{})
for _, t := range req.Remove {
removeSet[t] = struct{}{}
}
var newTags []string
for t := range tagSet {
if _, removed := removeSet[t]; !removed {
newTags = append(newTags, t)
}
}
if newTags == nil {
newTags = []string{}
}
username := ""
if user := auth.UserFromContext(ctx); user != nil {
username = user.Username
}
if err := s.metadata.SetTags(ctx, item.ID, newTags, username); err != nil {
s.logger.Error().Err(err).Msg("failed to update tags")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to update tags")
return
}
s.broker.Publish("metadata.tags", mustMarshal(map[string]any{
"part_number": partNumber,
"added": req.Add,
"removed": req.Remove,
}))
writeJSON(w, http.StatusOK, map[string]any{"tags": newTags})
}
// extractKCMetadata attempts to extract and index silo/ metadata from an
// uploaded .kc file. Failures are logged but non-fatal for Phase 1.
func (s *Server) extractKCMetadata(ctx context.Context, item *db.Item, fileKey string, rev *db.Revision) {
if s.storage == nil {
return
}
reader, err := s.storage.Get(ctx, fileKey)
if err != nil {
s.logger.Warn().Err(err).Str("file_key", fileKey).Msg("kc: failed to read back file for extraction")
return
}
defer reader.Close()
data, err := io.ReadAll(reader)
if err != nil {
s.logger.Warn().Err(err).Msg("kc: failed to read file bytes")
return
}
result, err := kc.Extract(data)
if err != nil {
s.logger.Warn().Err(err).Str("part_number", item.PartNumber).Msg("kc: extraction failed")
return
}
if result == nil {
return // plain .fcstd, no silo/ directory
}
// Validate manifest UUID matches item
if result.Manifest != nil && result.Manifest.UUID != "" && result.Manifest.UUID != item.ID {
s.logger.Warn().
Str("manifest_uuid", result.Manifest.UUID).
Str("item_id", item.ID).
Msg("kc: manifest UUID does not match item, skipping indexing")
return
}
// Check for no-op (revision_hash unchanged)
if result.Manifest != nil && result.Manifest.RevisionHash != "" {
existing, _ := s.metadata.Get(ctx, item.ID)
if existing != nil && existing.RevisionHash != nil && *existing.RevisionHash == result.Manifest.RevisionHash {
s.logger.Debug().Str("part_number", item.PartNumber).Msg("kc: revision_hash unchanged, skipping")
return
}
}
username := ""
if rev.CreatedBy != nil {
username = *rev.CreatedBy
}
meta := &db.ItemMetadata{
ItemID: item.ID,
LifecycleState: "draft",
Fields: make(map[string]any),
Tags: []string{},
UpdatedBy: strPtr(username),
}
if result.Manifest != nil {
meta.KCVersion = strPtr(result.Manifest.KCVersion)
meta.ManifestUUID = strPtr(result.Manifest.UUID)
meta.SiloInstance = strPtr(result.Manifest.SiloInstance)
meta.RevisionHash = strPtr(result.Manifest.RevisionHash)
}
if result.Metadata != nil {
meta.SchemaName = strPtr(result.Metadata.SchemaName)
if result.Metadata.Tags != nil {
meta.Tags = result.Metadata.Tags
}
if result.Metadata.LifecycleState != "" {
meta.LifecycleState = result.Metadata.LifecycleState
}
if result.Metadata.Fields != nil {
meta.Fields = result.Metadata.Fields
}
}
if err := s.metadata.Upsert(ctx, meta); err != nil {
s.logger.Warn().Err(err).Str("part_number", item.PartNumber).Msg("kc: failed to upsert metadata")
return
}
s.broker.Publish("metadata.updated", mustMarshal(map[string]any{
"part_number": item.PartNumber,
"lifecycle_state": meta.LifecycleState,
"updated_by": username,
}))
// Index dependencies from silo/dependencies.json.
if result.Dependencies != nil {
dbDeps := make([]*db.ItemDependency, len(result.Dependencies))
for i, d := range result.Dependencies {
// Copy fields into locals before taking their addresses; childRev
// also avoids shadowing the rev parameter used after this loop.
pn := d.PartNumber
childRev := d.Revision
qty := d.Quantity
label := d.Label
rel := d.Relationship
if rel == "" {
rel = "component"
}
dbDeps[i] = &db.ItemDependency{
ParentItemID: item.ID,
ChildUUID: d.UUID,
ChildPartNumber: &pn,
ChildRevision: &childRev,
Quantity: &qty,
Label: &label,
Relationship: rel,
}
}
if err := s.deps.ReplaceForRevision(ctx, item.ID, rev.RevisionNumber, dbDeps); err != nil {
s.logger.Warn().Err(err).Str("part_number", item.PartNumber).Msg("kc: failed to index dependencies")
} else {
s.broker.Publish("dependencies.changed", mustMarshal(map[string]any{
"part_number": item.PartNumber,
"count": len(dbDeps),
}))
}
}
s.logger.Info().Str("part_number", item.PartNumber).Msg("kc: metadata indexed successfully")
}
// strPtr returns a pointer to s, or nil if s is empty.
func strPtr(s string) *string {
if s == "" {
return nil
}
return &s
}
// fieldKeys returns the keys from a map.
func fieldKeys(m map[string]any) []string {
keys := make([]string, 0, len(m))
for k := range m {
keys = append(keys, k)
}
return keys
}

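The validTransitions map above defines a small state machine. A self-contained sketch of the check HandleUpdateLifecycle performs, with the same map extracted for illustration:

```go
package main

import "fmt"

// validTransitions mirrors the Phase 1 lifecycle map defined above.
var validTransitions = map[string][]string{
	"draft":    {"review"},
	"review":   {"draft", "released"},
	"released": {"obsolete"},
	"obsolete": {},
}

// canTransition reports whether moving from one state to another is allowed.
func canTransition(from, to string) bool {
	for _, next := range validTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("draft", "review"))   // true
	fmt.Println(canTransition("released", "draft")) // false: released only moves to obsolete
	fmt.Println(canTransition("obsolete", "draft")) // false: terminal state
}
```

A disallowed transition is what earns the 422 invalid_transition response in the handler.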
View File

@@ -2,6 +2,8 @@
package api
import (
"crypto/sha256"
"encoding/hex"
"net/http"
"strings"
"time"
@@ -148,6 +150,55 @@ func (s *Server) RequireWritable(next http.Handler) http.Handler {
})
}
// RequireRunnerAuth extracts and validates a runner token from the
// Authorization header. On success, injects RunnerIdentity into context
// and updates the runner's heartbeat.
func (s *Server) RequireRunnerAuth(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
token := extractBearerToken(r)
if token == "" || !strings.HasPrefix(token, "silo_runner_") {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner token required")
return
}
hash := sha256.Sum256([]byte(token))
tokenHash := hex.EncodeToString(hash[:])
runner, err := s.jobs.GetRunnerByToken(r.Context(), tokenHash)
if err != nil || runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Invalid runner token")
return
}
// Update heartbeat on every authenticated request
_ = s.jobs.Heartbeat(r.Context(), runner.ID)
identity := &auth.RunnerIdentity{
ID: runner.ID,
Name: runner.Name,
Tags: runner.Tags,
}
ctx := auth.ContextWithRunner(r.Context(), identity)
next.ServeHTTP(w, r.WithContext(ctx))
})
}
// RequireModule returns middleware that rejects requests with 404 when
// the named module is not enabled.
func (s *Server) RequireModule(id string) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !s.modules.IsEnabled(id) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusNotFound)
w.Write([]byte(`{"error":"module '` + id + `' is not enabled"}`))
return
}
next.ServeHTTP(w, r)
})
}
}
func extractBearerToken(r *http.Request) string {
h := r.Header.Get("Authorization")
if strings.HasPrefix(h, "Bearer ") {

View File

@@ -0,0 +1,135 @@
package api
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"time"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/kc"
)
// packKCFile gathers DB state and repacks silo/ entries in a .kc file.
func (s *Server) packKCFile(ctx context.Context, data []byte, item *db.Item, rev *db.Revision, meta *db.ItemMetadata) ([]byte, error) {
manifest := &kc.Manifest{
UUID: item.ID,
KCVersion: derefStr(meta.KCVersion, "1.0"),
RevisionHash: derefStr(meta.RevisionHash, ""),
SiloInstance: derefStr(meta.SiloInstance, ""),
}
metadata := &kc.Metadata{
SchemaName: derefStr(meta.SchemaName, ""),
Tags: meta.Tags,
LifecycleState: meta.LifecycleState,
Fields: meta.Fields,
}
// Build history from last 20 revisions.
revisions, err := s.items.GetRevisions(ctx, item.ID)
if err != nil {
return nil, fmt.Errorf("getting revisions: %w", err)
}
limit := 20
if len(revisions) < limit {
limit = len(revisions)
}
history := make([]kc.HistoryEntry, limit)
for i, r := range revisions[:limit] {
labels := r.Labels
if labels == nil {
labels = []string{}
}
history[i] = kc.HistoryEntry{
RevisionNumber: r.RevisionNumber,
CreatedAt: r.CreatedAt.UTC().Format(time.RFC3339),
CreatedBy: r.CreatedBy,
Comment: r.Comment,
Status: r.Status,
Labels: labels,
}
}
// Build dependencies from item_dependencies table.
var deps []kc.Dependency
dbDeps, err := s.deps.ListByItem(ctx, item.ID)
if err != nil {
s.logger.Warn().Err(err).Str("part_number", item.PartNumber).Msg("kc: failed to query dependencies for packing")
} else {
deps = make([]kc.Dependency, len(dbDeps))
for i, d := range dbDeps {
deps[i] = kc.Dependency{
UUID: d.ChildUUID,
PartNumber: derefStr(d.ChildPartNumber, ""),
Revision: derefInt(d.ChildRevision, 0),
Quantity: derefFloat(d.Quantity, 0),
Label: derefStr(d.Label, ""),
Relationship: d.Relationship,
}
}
}
if deps == nil {
deps = []kc.Dependency{}
}
input := &kc.PackInput{
Manifest: manifest,
Metadata: metadata,
History: history,
Dependencies: deps,
}
return kc.Pack(data, input)
}
// computeETag generates a quoted ETag from the revision number and metadata freshness.
func computeETag(rev *db.Revision, meta *db.ItemMetadata) string {
var ts int64
if meta != nil {
ts = meta.UpdatedAt.UnixNano()
} else {
ts = rev.CreatedAt.UnixNano()
}
raw := fmt.Sprintf("%d:%d", rev.RevisionNumber, ts)
h := sha256.Sum256([]byte(raw))
return `"` + hex.EncodeToString(h[:8]) + `"`
}
// canSkipRepack returns true if the stored blob already has up-to-date silo/ data.
func canSkipRepack(rev *db.Revision, meta *db.ItemMetadata) bool {
if meta == nil {
return true // no metadata row = plain .fcstd
}
if meta.RevisionHash != nil && rev.FileChecksum != nil &&
*meta.RevisionHash == *rev.FileChecksum &&
meta.UpdatedAt.Before(rev.CreatedAt) {
return true
}
return false
}
// derefStr returns the value of a *string pointer, or fallback if nil.
func derefStr(p *string, fallback string) string {
if p != nil {
return *p
}
return fallback
}
// derefInt returns the value of a *int pointer, or fallback if nil.
func derefInt(p *int, fallback int) int {
if p != nil {
return *p
}
return fallback
}
// derefFloat returns the value of a *float64 pointer, or fallback if nil.
func derefFloat(p *float64, fallback float64) float64 {
if p != nil {
return *p
}
return fallback
}

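computeETag above hashes the revision number plus a freshness timestamp and keeps the first 8 bytes, quoted. A standalone sketch of the same derivation, useful for checking what a client should expect to send back in If-None-Match:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// etagFor reproduces the handler's scheme: sha256("rev:nanos"), first
// 8 bytes hex-encoded, wrapped in double quotes per RFC 7232.
func etagFor(revisionNumber int, updatedAtNanos int64) string {
	raw := fmt.Sprintf("%d:%d", revisionNumber, updatedAtNanos)
	h := sha256.Sum256([]byte(raw))
	return `"` + hex.EncodeToString(h[:8]) + `"`
}

func main() {
	a := etagFor(3, 1700000000000000000)
	b := etagFor(3, 1700000000000000000)
	fmt.Println(a == b)                               // true: same inputs, same tag
	fmt.Println(len(a))                               // 18: two quotes + 16 hex characters
	fmt.Println(a != etagFor(4, 1700000000000000000)) // true: revision bump changes the tag
}
```

Because metadata.updated_at feeds the hash, a tag edit alone invalidates cached downloads even when the revision number is unchanged.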
View File

@@ -58,6 +58,7 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Get("/auth/callback", server.HandleOIDCCallback)
// Public API endpoints (no auth required)
r.Get("/api/modules", server.HandleGetModules)
r.Get("/api/auth/config", server.HandleAuthConfig)
// API routes (require auth, no CSRF — token auth instead)
@@ -101,6 +102,7 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
// Projects (read: viewer, write: editor)
r.Route("/projects", func(r chi.Router) {
r.Use(server.RequireModule("projects"))
r.Get("/", server.HandleListProjects)
r.Get("/{code}", server.HandleGetProject)
r.Get("/{code}/items", server.HandleGetProjectItems)
@@ -115,6 +117,26 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
})
})
// Locations (read: viewer, write: editor)
r.Route("/locations", func(r chi.Router) {
r.Get("/", server.HandleListLocations)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Post("/", server.HandleCreateLocation)
})
// Wildcard routes for path-based lookup (e.g., /api/locations/lab/shelf-a/bin-3)
r.Get("/*", server.HandleGetLocation)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Put("/*", server.HandleUpdateLocation)
r.Delete("/*", server.HandleDeleteLocation)
})
})
// Items (read: viewer, write: editor)
r.Route("/items", func(r chi.Router) {
r.Get("/", server.HandleListItems)
@@ -140,6 +162,7 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Get("/revisions/compare", server.HandleCompareRevisions)
r.Get("/revisions/{revision}", server.HandleGetRevision)
r.Get("/files", server.HandleListItemFiles)
r.Get("/files/{fileId}/download", server.HandleDownloadItemFile)
r.Get("/file", server.HandleDownloadLatestFile)
r.Get("/file/{revision}", server.HandleDownloadFile)
r.Get("/bom", server.HandleGetBOM)
@@ -149,6 +172,24 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Get("/bom/where-used", server.HandleGetWhereUsed)
r.Get("/bom/export.csv", server.HandleExportBOMCSV)
r.Get("/bom/export.ods", server.HandleExportBOMODS)
r.Get("/metadata", server.HandleGetMetadata)
r.Get("/dependencies", server.HandleGetDependencies)
r.Get("/dependencies/resolve", server.HandleResolveDependencies)
// DAG (gated by dag module)
r.Route("/dag", func(r chi.Router) {
r.Use(server.RequireModule("dag"))
r.Get("/", server.HandleGetDAG)
r.Get("/forward-cone/{nodeKey}", server.HandleGetForwardCone)
r.Get("/dirty", server.HandleGetDirtySubgraph)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Put("/", server.HandleSyncDAG)
r.Post("/mark-dirty/{nodeKey}", server.HandleMarkDirty)
})
})
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
@@ -162,25 +203,32 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Post("/revisions/{revision}/rollback", server.HandleRollbackRevision)
r.Post("/file", server.HandleUploadFile)
r.Post("/files", server.HandleAssociateItemFile)
r.Post("/files/upload", server.HandleUploadItemFile)
r.Delete("/files/{fileId}", server.HandleDeleteItemFile)
r.Put("/thumbnail", server.HandleSetItemThumbnail)
r.Post("/thumbnail/upload", server.HandleUploadItemThumbnail)
r.Post("/bom", server.HandleAddBOMEntry)
r.Post("/bom/import", server.HandleImportBOMCSV)
r.Post("/bom/merge", server.HandleMergeBOM)
r.Put("/bom/{childPartNumber}", server.HandleUpdateBOMEntry)
r.Delete("/bom/{childPartNumber}", server.HandleDeleteBOMEntry)
r.Put("/metadata", server.HandleUpdateMetadata)
r.Patch("/metadata/lifecycle", server.HandleUpdateLifecycle)
r.Patch("/metadata/tags", server.HandleUpdateTags)
})
})
})
// Audit (read-only, viewer role)
r.Route("/audit", func(r chi.Router) {
r.Use(server.RequireModule("audit"))
r.Get("/completeness", server.HandleAuditCompleteness)
r.Get("/completeness/{partNumber}", server.HandleAuditItemDetail)
})
// Integrations (read: viewer, write: editor)
r.Route("/integrations/odoo", func(r chi.Router) {
r.Use(server.RequireModule("odoo"))
r.Get("/config", server.HandleGetOdooConfig)
r.Get("/sync-log", server.HandleGetOdooSyncLog)
@@ -201,12 +249,71 @@ func NewRouter(server *Server, logger zerolog.Logger) http.Handler {
r.Post("/sheets/diff", server.HandleSheetDiff)
})
// Jobs (read: viewer, write: editor)
r.Route("/jobs", func(r chi.Router) {
r.Use(server.RequireModule("jobs"))
r.Get("/", server.HandleListJobs)
r.Get("/{jobID}", server.HandleGetJob)
r.Get("/{jobID}/logs", server.HandleGetJobLogs)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Post("/", server.HandleCreateJob)
r.Post("/{jobID}/cancel", server.HandleCancelJob)
})
})
// Job definitions (read: viewer, reload: admin)
r.Route("/job-definitions", func(r chi.Router) {
r.Use(server.RequireModule("jobs"))
r.Get("/", server.HandleListJobDefinitions)
r.Get("/{name}", server.HandleGetJobDefinition)
r.Group(func(r chi.Router) {
r.Use(server.RequireRole(auth.RoleAdmin))
r.Post("/reload", server.HandleReloadJobDefinitions)
})
})
// Runners (admin)
r.Route("/runners", func(r chi.Router) {
r.Use(server.RequireModule("jobs"))
r.Use(server.RequireRole(auth.RoleAdmin))
r.Get("/", server.HandleListRunners)
r.Post("/", server.HandleRegisterRunner)
r.Delete("/{runnerID}", server.HandleDeleteRunner)
})
// Part number generation (editor)
r.Group(func(r chi.Router) {
r.Use(server.RequireWritable)
r.Use(server.RequireRole(auth.RoleEditor))
r.Post("/generate-part-number", server.HandleGeneratePartNumber)
})
// Admin settings (admin only)
r.Route("/admin/settings", func(r chi.Router) {
r.Use(server.RequireRole(auth.RoleAdmin))
r.Get("/", server.HandleGetAllSettings)
r.Get("/{module}", server.HandleGetModuleSettings)
r.Put("/{module}", server.HandleUpdateModuleSettings)
r.Post("/{module}/test", server.HandleTestModuleConnectivity)
})
})
// Runner-facing API (runner token auth, not user auth)
r.Route("/api/runner", func(r chi.Router) {
r.Use(server.RequireModule("jobs"))
r.Use(server.RequireRunnerAuth)
r.Post("/heartbeat", server.HandleRunnerHeartbeat)
r.Post("/claim", server.HandleRunnerClaim)
r.Post("/jobs/{jobID}/start", server.HandleRunnerStartJob)
r.Put("/jobs/{jobID}/progress", server.HandleRunnerUpdateProgress)
r.Post("/jobs/{jobID}/complete", server.HandleRunnerCompleteJob)
r.Post("/jobs/{jobID}/fail", server.HandleRunnerFailJob)
r.Post("/jobs/{jobID}/log", server.HandleRunnerAppendLog)
r.Put("/jobs/{jobID}/dag", server.HandleRunnerSyncDAG)
})
// React SPA — serve from web/dist at root, fallback to index.html

View File

@@ -0,0 +1,385 @@
package api
import (
"context"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/jobdef"
)
// HandleRunnerHeartbeat updates the runner's heartbeat timestamp.
func (s *Server) HandleRunnerHeartbeat(w http.ResponseWriter, r *http.Request) {
runner := auth.RunnerFromContext(r.Context())
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
// Heartbeat already updated by RequireRunnerAuth middleware
writeJSON(w, http.StatusOK, map[string]string{"status": "ok"})
}
// HandleRunnerClaim claims the next available job matching the runner's tags.
func (s *Server) HandleRunnerClaim(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
job, err := s.jobs.ClaimJob(ctx, runner.ID, runner.Tags)
if err != nil {
s.logger.Error().Err(err).Msg("failed to claim job")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to claim job")
return
}
if job == nil {
w.WriteHeader(http.StatusNoContent) // 204 must not carry a body
return
}
// Look up the full definition to send to the runner
var defPayload map[string]any
if job.JobDefinitionID != nil {
rec, err := s.jobs.GetDefinitionByID(ctx, *job.JobDefinitionID)
if err == nil && rec != nil {
defPayload = rec.Definition
}
}
s.broker.Publish("job.claimed", mustMarshal(map[string]any{
"job_id": job.ID,
"runner_id": runner.ID,
"runner": runner.Name,
}))
writeJSON(w, http.StatusOK, map[string]any{
"job": job,
"definition": defPayload,
})
}
// HandleRunnerStartJob transitions a claimed job to running.
func (s *Server) HandleRunnerStartJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
if err := s.jobs.StartJob(ctx, jobID, runner.ID); err != nil {
writeError(w, http.StatusBadRequest, "start_failed", err.Error())
return
}
writeJSON(w, http.StatusOK, map[string]string{"status": "running"})
}
// HandleRunnerUpdateProgress updates a running job's progress.
func (s *Server) HandleRunnerUpdateProgress(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
var req struct {
Progress int `json:"progress"`
Message string `json:"message,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if err := s.jobs.UpdateProgress(ctx, jobID, runner.ID, req.Progress, req.Message); err != nil {
writeError(w, http.StatusBadRequest, "update_failed", err.Error())
return
}
s.broker.Publish("job.progress", mustMarshal(map[string]any{
"job_id": jobID,
"progress": req.Progress,
"message": req.Message,
}))
writeJSON(w, http.StatusOK, map[string]string{"status": "ok"})
}
// HandleRunnerCompleteJob marks a job as completed.
func (s *Server) HandleRunnerCompleteJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
var req struct {
Result map[string]any `json:"result,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if err := s.jobs.CompleteJob(ctx, jobID, runner.ID, req.Result); err != nil {
writeError(w, http.StatusBadRequest, "complete_failed", err.Error())
return
}
s.broker.Publish("job.completed", mustMarshal(map[string]any{
"job_id": jobID,
"runner_id": runner.ID,
}))
writeJSON(w, http.StatusOK, map[string]string{"status": "completed"})
}
// HandleRunnerFailJob marks a job as failed.
func (s *Server) HandleRunnerFailJob(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
var req struct {
Error string `json:"error"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if err := s.jobs.FailJob(ctx, jobID, runner.ID, req.Error); err != nil {
writeError(w, http.StatusBadRequest, "fail_failed", err.Error())
return
}
s.broker.Publish("job.failed", mustMarshal(map[string]any{
"job_id": jobID,
"runner_id": runner.ID,
"error": req.Error,
}))
writeJSON(w, http.StatusOK, map[string]string{"status": "failed"})
}
// HandleRunnerAppendLog appends a log entry to a job.
func (s *Server) HandleRunnerAppendLog(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
var req struct {
Level string `json:"level"`
Message string `json:"message"`
Metadata map[string]any `json:"metadata,omitempty"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.Level == "" {
req.Level = "info"
}
entry := &db.JobLogEntry{
JobID: jobID,
Level: req.Level,
Message: req.Message,
Metadata: req.Metadata,
}
if err := s.jobs.AppendLog(ctx, entry); err != nil {
s.logger.Error().Err(err).Msg("failed to append job log")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to append log")
return
}
writeJSON(w, http.StatusCreated, entry)
}
// HandleRunnerSyncDAG allows a runner to push DAG results for a job's item.
func (s *Server) HandleRunnerSyncDAG(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
runner := auth.RunnerFromContext(ctx)
if runner == nil {
writeError(w, http.StatusUnauthorized, "unauthorized", "Runner identity required")
return
}
jobID := chi.URLParam(r, "jobID")
// Get the job to find the item
job, err := s.jobs.GetJob(ctx, jobID)
if err != nil || job == nil {
writeError(w, http.StatusNotFound, "not_found", "Job not found")
return
}
if job.ItemID == nil {
writeError(w, http.StatusBadRequest, "no_item", "Job has no associated item")
return
}
// Delegate to the DAG sync handler logic
var req dagSyncRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeError(w, http.StatusBadRequest, "invalid_body", "Invalid JSON body")
return
}
if req.RevisionNumber == 0 {
// Look up current revision
item, err := s.items.GetByID(ctx, *job.ItemID)
if err != nil || item == nil {
writeError(w, http.StatusNotFound, "not_found", "Item not found")
return
}
req.RevisionNumber = item.CurrentRevision
}
// Convert and sync nodes
nodes := make([]db.DAGNode, len(req.Nodes))
for i, n := range req.Nodes {
state := n.ValidationState
if state == "" {
state = "clean"
}
nodes[i] = db.DAGNode{
NodeKey: n.NodeKey,
NodeType: n.NodeType,
PropertiesHash: n.PropertiesHash,
ValidationState: state,
Metadata: n.Metadata,
}
}
if err := s.dag.SyncFeatureTree(ctx, *job.ItemID, req.RevisionNumber, nodes, nil); err != nil {
s.logger.Error().Err(err).Msg("failed to sync DAG from runner")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to sync DAG")
return
}
// Build key→ID map and sync edges
keyToID := make(map[string]string, len(nodes))
for _, n := range nodes {
keyToID[n.NodeKey] = n.ID
}
if len(req.Edges) > 0 {
if err := s.dag.DeleteEdgesForItem(ctx, *job.ItemID, req.RevisionNumber); err != nil {
s.logger.Error().Err(err).Msg("failed to delete old edges")
writeError(w, http.StatusInternalServerError, "internal_error", "Failed to sync DAG edges")
return
}
for _, e := range req.Edges {
sourceID, ok := keyToID[e.SourceKey]
if !ok {
continue
}
targetID, ok := keyToID[e.TargetKey]
if !ok {
continue
}
edgeType := e.EdgeType
if edgeType == "" {
edgeType = "depends_on"
}
edge := &db.DAGEdge{
SourceNodeID: sourceID,
TargetNodeID: targetID,
EdgeType: edgeType,
Metadata: e.Metadata,
}
if err := s.dag.CreateEdge(ctx, edge); err != nil {
s.logger.Error().Err(err).Msg("failed to create edge from runner")
}
}
}
s.broker.Publish("dag.updated", mustMarshal(map[string]any{
"item_id": *job.ItemID,
"job_id": jobID,
"runner": runner.Name,
"node_count": len(req.Nodes),
"edge_count": len(req.Edges),
}))
writeJSON(w, http.StatusOK, map[string]any{
"synced": true,
"node_count": len(req.Nodes),
"edge_count": len(req.Edges),
})
}
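The edge-resolution step above builds a key→ID map and silently drops any edge whose source or target key has no synced node. A standalone sketch of that loop (types simplified and names hypothetical, for illustration only):

```go
package main

import "fmt"

type node struct {
	ID  string
	Key string
}

type edgeReq struct {
	SourceKey, TargetKey string
}

type edge struct {
	SourceNodeID, TargetNodeID string
}

// resolveEdges mirrors the handler's loop: build a key→ID map, then
// translate key-addressed edges into ID-addressed ones, skipping any
// edge whose endpoint key does not correspond to a synced node.
func resolveEdges(nodes []node, reqs []edgeReq) []edge {
	keyToID := make(map[string]string, len(nodes))
	for _, n := range nodes {
		keyToID[n.Key] = n.ID
	}
	var out []edge
	for _, e := range reqs {
		src, ok := keyToID[e.SourceKey]
		if !ok {
			continue
		}
		dst, ok := keyToID[e.TargetKey]
		if !ok {
			continue
		}
		out = append(out, edge{SourceNodeID: src, TargetNodeID: dst})
	}
	return out
}

func main() {
	nodes := []node{{ID: "n1", Key: "Sketch"}, {ID: "n2", Key: "Pad"}}
	reqs := []edgeReq{{"Pad", "Sketch"}, {"Pad", "Fillet"}} // "Fillet" was never synced
	fmt.Println(resolveEdges(nodes, reqs))                  // only the Pad→Sketch edge survives
}
```

Dropping unresolved edges rather than failing the whole sync is a deliberate tolerance: a runner may report edges to features it chose not to index as nodes.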
// generateRunnerToken creates a new runner token. Returns raw token, hash, and prefix.
func generateRunnerToken() (raw, hash, prefix string) {
rawBytes := make([]byte, 32)
if _, err := rand.Read(rawBytes); err != nil {
panic(fmt.Sprintf("generating random bytes: %v", err))
}
raw = "silo_runner_" + hex.EncodeToString(rawBytes)
h := sha256.Sum256([]byte(raw))
hash = hex.EncodeToString(h[:])
prefix = raw[:20] // "silo_runner_" + first 8 hex chars
return
}
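Only the SHA-256 hash of the token is persisted, so verification re-hashes whatever the runner presents and compares. The sketch below mirrors the generator above and adds an assumed verification step (the actual `RequireRunnerAuth` middleware is not shown in this diff):

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// generateRunnerToken mirrors the helper above: 32 random bytes,
// hex-encoded behind a fixed prefix so tokens are recognizable in logs.
func generateRunnerToken() (raw, hash, prefix string) {
	rawBytes := make([]byte, 32)
	if _, err := rand.Read(rawBytes); err != nil {
		panic(err)
	}
	raw = "silo_runner_" + hex.EncodeToString(rawBytes)
	h := sha256.Sum256([]byte(raw))
	hash = hex.EncodeToString(h[:])
	prefix = raw[:20] // "silo_runner_" + first 8 hex chars
	return
}

// verifyToken re-hashes a presented token and compares against the
// stored hash. This is an assumed sketch of what the auth middleware
// would do; a production version should use subtle.ConstantTimeCompare.
func verifyToken(presented, storedHash string) bool {
	h := sha256.Sum256([]byte(presented))
	return hex.EncodeToString(h[:]) == storedHash
}

func main() {
	raw, hash, prefix := generateRunnerToken()
	fmt.Println(len(raw), len(prefix), verifyToken(raw, hash))
}
```

Storing only the hash means a database leak does not expose usable tokens; the 20-character prefix gives admins enough to identify a runner without revealing the secret.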
// loadAndUpsertJobDefs loads YAML definitions from a directory and upserts them into the database.
func loadAndUpsertJobDefs(ctx context.Context, dir string, repo *db.JobRepository) (map[string]*jobdef.Definition, error) {
defs, err := jobdef.LoadAll(dir)
if err != nil {
return nil, fmt.Errorf("loading job definitions: %w", err)
}
for _, def := range defs {
defJSON, err := json.Marshal(def)
if err != nil {
return nil, fmt.Errorf("marshaling definition %s: %w", def.Name, err)
}
var defMap map[string]any
if err := json.Unmarshal(defJSON, &defMap); err != nil {
return nil, fmt.Errorf("decoding definition %s: %w", def.Name, err)
}
rec := &db.JobDefinitionRecord{
Name: def.Name,
Version: def.Version,
TriggerType: def.Trigger.Type,
ScopeType: def.Scope.Type,
ComputeType: def.Compute.Type,
RunnerTags: def.Runner.Tags,
TimeoutSeconds: def.Timeout,
MaxRetries: def.MaxRetries,
Priority: def.Priority,
Definition: defMap,
Enabled: true,
}
if err := repo.UpsertDefinition(ctx, rec); err != nil {
return nil, fmt.Errorf("upserting definition %s: %w", def.Name, err)
}
}
return defs, nil
}
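The marshal/unmarshal round-trip in `loadAndUpsertJobDefs` converts a typed definition into the generic `map[string]any` stored in the JSONB column. One consequence worth remembering: JSON numbers decode into `any` as `float64`, which is why the settings tests later compare against `float64(90)` rather than `90`. A minimal sketch (field names here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toMap round-trips any value through JSON into a generic map, the
// same pattern used to populate a JSONB definition column.
func toMap(v any) (map[string]any, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return nil, err
	}
	var m map[string]any
	if err := json.Unmarshal(b, &m); err != nil {
		return nil, err
	}
	return m, nil
}

func main() {
	def := struct {
		Name     string `json:"name"`
		Priority int    `json:"priority"`
	}{Name: "export-step", Priority: 100}
	m, err := toMap(def)
	if err != nil {
		panic(err)
	}
	// The int field comes back as float64 after the round-trip.
	fmt.Printf("%s %T\n", m["name"], m["priority"])
}
```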


@@ -26,13 +26,13 @@ type ServerState struct {
mu sync.RWMutex
readOnly bool
storageOK bool
storage *storage.Storage
storage storage.FileStore
broker *Broker
done chan struct{}
}
// NewServerState creates a new server state tracker.
func NewServerState(logger zerolog.Logger, store *storage.Storage, broker *Broker) *ServerState {
func NewServerState(logger zerolog.Logger, store storage.FileStore, broker *Broker) *ServerState {
return &ServerState{
logger: logger.With().Str("component", "server-state").Logger(),
storageOK: store != nil, // assume healthy if configured


@@ -0,0 +1,316 @@
package api
import (
"context"
"encoding/json"
"net/http"
"strings"
"time"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
)
// HandleGetAllSettings returns the full config grouped by module with secrets redacted.
func (s *Server) HandleGetAllSettings(w http.ResponseWriter, r *http.Request) {
resp := map[string]any{
"core": s.buildCoreSettings(),
"schemas": s.buildSchemasSettings(),
"storage": s.buildStorageSettings(r.Context()),
"database": s.buildDatabaseSettings(r.Context()),
"auth": s.buildAuthSettings(),
"projects": map[string]any{"enabled": s.modules.IsEnabled("projects")},
"audit": map[string]any{"enabled": s.modules.IsEnabled("audit")},
"odoo": s.buildOdooSettings(),
"freecad": s.buildFreecadSettings(),
"jobs": s.buildJobsSettings(),
"dag": map[string]any{"enabled": s.modules.IsEnabled("dag")},
}
writeJSON(w, http.StatusOK, resp)
}
// HandleGetModuleSettings returns settings for a single module.
func (s *Server) HandleGetModuleSettings(w http.ResponseWriter, r *http.Request) {
module := chi.URLParam(r, "module")
var settings any
switch module {
case "core":
settings = s.buildCoreSettings()
case "schemas":
settings = s.buildSchemasSettings()
case "storage":
settings = s.buildStorageSettings(r.Context())
case "database":
settings = s.buildDatabaseSettings(r.Context())
case "auth":
settings = s.buildAuthSettings()
case "projects":
settings = map[string]any{"enabled": s.modules.IsEnabled("projects")}
case "audit":
settings = map[string]any{"enabled": s.modules.IsEnabled("audit")}
case "odoo":
settings = s.buildOdooSettings()
case "freecad":
settings = s.buildFreecadSettings()
case "jobs":
settings = s.buildJobsSettings()
case "dag":
settings = map[string]any{"enabled": s.modules.IsEnabled("dag")}
default:
writeError(w, http.StatusNotFound, "not_found", "Unknown module: "+module)
return
}
writeJSON(w, http.StatusOK, settings)
}
// HandleUpdateModuleSettings handles module toggle and config overrides.
func (s *Server) HandleUpdateModuleSettings(w http.ResponseWriter, r *http.Request) {
module := chi.URLParam(r, "module")
// Validate module exists
if s.modules.Get(module) == nil {
writeError(w, http.StatusNotFound, "not_found", "Unknown module: "+module)
return
}
var body map[string]any
if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
writeError(w, http.StatusBadRequest, "invalid_json", err.Error())
return
}
user := auth.UserFromContext(r.Context())
username := "system"
if user != nil {
username = user.Username
}
var updated []string
restartRequired := false
// Handle module toggle
if enabledVal, ok := body["enabled"]; ok {
enabled, ok := enabledVal.(bool)
if !ok {
writeError(w, http.StatusBadRequest, "invalid_value", "'enabled' must be a boolean")
return
}
if err := s.modules.SetEnabled(module, enabled); err != nil {
writeError(w, http.StatusBadRequest, "toggle_failed", err.Error())
return
}
if err := s.settings.SetModuleState(r.Context(), module, enabled, username); err != nil {
s.logger.Error().Err(err).Str("module", module).Msg("failed to persist module state")
writeError(w, http.StatusInternalServerError, "persist_failed", "Failed to save module state")
return
}
updated = append(updated, module+".enabled")
}
// Handle config overrides (future use — persisted but not hot-reloaded)
for key, value := range body {
if key == "enabled" {
continue
}
fullKey := module + "." + key
if err := s.settings.SetOverride(r.Context(), fullKey, value, username); err != nil {
s.logger.Error().Err(err).Str("key", fullKey).Msg("failed to persist setting override")
writeError(w, http.StatusInternalServerError, "persist_failed", "Failed to save setting: "+key)
return
}
updated = append(updated, fullKey)
// These namespaces require a restart to take effect
if strings.HasPrefix(fullKey, "database.") ||
strings.HasPrefix(fullKey, "storage.") ||
strings.HasPrefix(fullKey, "server.") ||
strings.HasPrefix(fullKey, "schemas.") {
restartRequired = true
}
}
writeJSON(w, http.StatusOK, map[string]any{
"updated": updated,
"restart_required": restartRequired,
})
// Publish SSE event
s.broker.Publish("settings.changed", mustMarshal(map[string]any{
"module": module,
"changed_keys": updated,
"updated_by": username,
}))
}
// HandleTestModuleConnectivity tests external connectivity for a module.
func (s *Server) HandleTestModuleConnectivity(w http.ResponseWriter, r *http.Request) {
module := chi.URLParam(r, "module")
start := time.Now()
var success bool
var message string
switch module {
case "database":
if err := s.db.Pool().Ping(r.Context()); err != nil {
success = false
message = "Database ping failed: " + err.Error()
} else {
success = true
message = "Database connection OK"
}
case "storage":
if s.storage == nil {
success = false
message = "Storage not configured"
} else if err := s.storage.Ping(r.Context()); err != nil {
success = false
message = "Storage ping failed: " + err.Error()
} else {
success = true
message = "Storage connection OK"
}
case "auth", "odoo":
success = false
message = "Connectivity test not implemented for " + module
default:
writeError(w, http.StatusBadRequest, "not_testable", "No connectivity test available for module: "+module)
return
}
latency := time.Since(start).Milliseconds()
writeJSON(w, http.StatusOK, map[string]any{
"success": success,
"message": message,
"latency_ms": latency,
})
}
// --- build helpers (read config, redact secrets) ---
func redact(s string) string {
if s == "" {
return ""
}
return "****"
}
func (s *Server) buildCoreSettings() map[string]any {
return map[string]any{
"enabled": true,
"host": s.cfg.Server.Host,
"port": s.cfg.Server.Port,
"base_url": s.cfg.Server.BaseURL,
"readonly": s.cfg.Server.ReadOnly,
}
}
func (s *Server) buildSchemasSettings() map[string]any {
return map[string]any{
"enabled": true,
"directory": s.cfg.Schemas.Directory,
"default": s.cfg.Schemas.Default,
"count": len(s.schemas),
}
}
func (s *Server) buildStorageSettings(ctx context.Context) map[string]any {
result := map[string]any{
"enabled": true,
"endpoint": s.cfg.Storage.Endpoint,
"bucket": s.cfg.Storage.Bucket,
"use_ssl": s.cfg.Storage.UseSSL,
"region": s.cfg.Storage.Region,
}
if s.storage != nil {
if err := s.storage.Ping(ctx); err != nil {
result["status"] = "unavailable"
} else {
result["status"] = "ok"
}
} else {
result["status"] = "not_configured"
}
return result
}
func (s *Server) buildDatabaseSettings(ctx context.Context) map[string]any {
result := map[string]any{
"enabled": true,
"host": s.cfg.Database.Host,
"port": s.cfg.Database.Port,
"name": s.cfg.Database.Name,
"user": s.cfg.Database.User,
"password": redact(s.cfg.Database.Password),
"sslmode": s.cfg.Database.SSLMode,
"max_connections": s.cfg.Database.MaxConnections,
}
if err := s.db.Pool().Ping(ctx); err != nil {
result["status"] = "unavailable"
} else {
result["status"] = "ok"
}
return result
}
func (s *Server) buildAuthSettings() map[string]any {
return map[string]any{
"enabled": s.modules.IsEnabled("auth"),
"session_secret": redact(s.cfg.Auth.SessionSecret),
"local": map[string]any{
"enabled": s.cfg.Auth.Local.Enabled,
"default_admin_username": s.cfg.Auth.Local.DefaultAdminUsername,
"default_admin_password": redact(s.cfg.Auth.Local.DefaultAdminPassword),
},
"ldap": map[string]any{
"enabled": s.cfg.Auth.LDAP.Enabled,
"url": s.cfg.Auth.LDAP.URL,
"base_dn": s.cfg.Auth.LDAP.BaseDN,
"bind_dn": s.cfg.Auth.LDAP.BindDN,
"bind_password": redact(s.cfg.Auth.LDAP.BindPassword),
},
"oidc": map[string]any{
"enabled": s.cfg.Auth.OIDC.Enabled,
"issuer_url": s.cfg.Auth.OIDC.IssuerURL,
"client_id": s.cfg.Auth.OIDC.ClientID,
"client_secret": redact(s.cfg.Auth.OIDC.ClientSecret),
"redirect_url": s.cfg.Auth.OIDC.RedirectURL,
},
}
}
func (s *Server) buildOdooSettings() map[string]any {
return map[string]any{
"enabled": s.modules.IsEnabled("odoo"),
"url": s.cfg.Odoo.URL,
"database": s.cfg.Odoo.Database,
"username": s.cfg.Odoo.Username,
"api_key": redact(s.cfg.Odoo.APIKey),
}
}
func (s *Server) buildFreecadSettings() map[string]any {
return map[string]any{
"enabled": s.modules.IsEnabled("freecad"),
"uri_scheme": s.cfg.FreeCAD.URIScheme,
"executable": s.cfg.FreeCAD.Executable,
}
}
func (s *Server) buildJobsSettings() map[string]any {
return map[string]any{
"enabled": s.modules.IsEnabled("jobs"),
"directory": s.cfg.Jobs.Directory,
"runner_timeout": s.cfg.Jobs.RunnerTimeout,
"job_timeout_check": s.cfg.Jobs.JobTimeoutCheck,
"default_priority": s.cfg.Jobs.DefaultPriority,
"definitions_count": len(s.jobDefs),
}
}


@@ -0,0 +1,285 @@
package api
import (
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/go-chi/chi/v5"
"github.com/kindredsystems/silo/internal/auth"
"github.com/kindredsystems/silo/internal/config"
"github.com/kindredsystems/silo/internal/db"
"github.com/kindredsystems/silo/internal/modules"
"github.com/kindredsystems/silo/internal/schema"
"github.com/kindredsystems/silo/internal/testutil"
"github.com/rs/zerolog"
)
func newSettingsTestServer(t *testing.T) *Server {
t.Helper()
pool := testutil.MustConnectTestPool(t)
database := db.NewFromPool(pool)
broker := NewBroker(zerolog.Nop())
state := NewServerState(zerolog.Nop(), nil, broker)
cfg := &config.Config{
Server: config.ServerConfig{Host: "0.0.0.0", Port: 8080},
Database: config.DatabaseConfig{
Host: "localhost", Port: 5432, Name: "silo_test",
User: "silo", Password: "secret", SSLMode: "disable",
MaxConnections: 10,
},
Storage: config.StorageConfig{
Endpoint: "minio:9000", Bucket: "silo", Region: "us-east-1",
AccessKey: "minioadmin", SecretKey: "miniosecret",
},
Schemas: config.SchemasConfig{Directory: "/etc/silo/schemas", Default: "kindred-rd"},
Auth: config.AuthConfig{
SessionSecret: "supersecret",
Local: config.LocalAuth{Enabled: true, DefaultAdminUsername: "admin", DefaultAdminPassword: "changeme"},
LDAP: config.LDAPAuth{Enabled: false, BindPassword: "ldapsecret"},
OIDC: config.OIDCAuth{Enabled: false, ClientSecret: "oidcsecret"},
},
FreeCAD: config.FreeCADConfig{URIScheme: "silo"},
Odoo: config.OdooConfig{URL: "https://odoo.example.com", APIKey: "odoo-api-key"},
Jobs: config.JobsConfig{Directory: "/etc/silo/jobdefs", RunnerTimeout: 90, JobTimeoutCheck: 30, DefaultPriority: 100},
}
return NewServer(
zerolog.Nop(),
database,
map[string]*schema.Schema{"test": {Name: "test"}},
cfg.Schemas.Directory,
nil, // storage
nil, // authService
nil, // sessionManager
nil, // oidcBackend
nil, // authConfig
broker,
state,
nil, // jobDefs
"", // jobDefsDir
modules.NewRegistry(), // modules
cfg,
)
}
func newSettingsRouter(s *Server) http.Handler {
r := chi.NewRouter()
r.Route("/api/admin/settings", func(r chi.Router) {
r.Get("/", s.HandleGetAllSettings)
r.Get("/{module}", s.HandleGetModuleSettings)
r.Put("/{module}", s.HandleUpdateModuleSettings)
r.Post("/{module}/test", s.HandleTestModuleConnectivity)
})
return r
}
func adminSettingsRequest(r *http.Request) *http.Request {
u := &auth.User{
ID: "admin-id",
Username: "testadmin",
Role: auth.RoleAdmin,
}
return r.WithContext(auth.ContextWithUser(r.Context(), u))
}
func viewerSettingsRequest(r *http.Request) *http.Request {
u := &auth.User{
ID: "viewer-id",
Username: "testviewer",
Role: auth.RoleViewer,
}
return r.WithContext(auth.ContextWithUser(r.Context(), u))
}
func TestGetAllSettings(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
req := adminSettingsRequest(httptest.NewRequest("GET", "/api/admin/settings", nil))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var resp map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("decoding: %v", err)
}
// Verify all module keys present
expectedModules := []string{"core", "schemas", "storage", "database", "auth", "projects", "audit", "odoo", "freecad", "jobs", "dag"}
for _, mod := range expectedModules {
if _, ok := resp[mod]; !ok {
t.Errorf("missing module key: %s", mod)
}
}
// Verify secrets are redacted
dbSettings, _ := resp["database"].(map[string]any)
if dbSettings["password"] != "****" {
t.Errorf("database password not redacted: got %v", dbSettings["password"])
}
authSettings, _ := resp["auth"].(map[string]any)
if authSettings["session_secret"] != "****" {
t.Errorf("session_secret not redacted: got %v", authSettings["session_secret"])
}
ldap, _ := authSettings["ldap"].(map[string]any)
if ldap["bind_password"] != "****" {
t.Errorf("ldap bind_password not redacted: got %v", ldap["bind_password"])
}
oidc, _ := authSettings["oidc"].(map[string]any)
if oidc["client_secret"] != "****" {
t.Errorf("oidc client_secret not redacted: got %v", oidc["client_secret"])
}
odoo, _ := resp["odoo"].(map[string]any)
if odoo["api_key"] != "****" {
t.Errorf("odoo api_key not redacted: got %v", odoo["api_key"])
}
}
func TestGetModuleSettings(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
req := adminSettingsRequest(httptest.NewRequest("GET", "/api/admin/settings/jobs", nil))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var resp map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("decoding: %v", err)
}
if resp["directory"] != "/etc/silo/jobdefs" {
t.Errorf("jobs directory: got %v, want /etc/silo/jobdefs", resp["directory"])
}
if resp["runner_timeout"] != float64(90) {
t.Errorf("runner_timeout: got %v, want 90", resp["runner_timeout"])
}
}
func TestGetModuleSettings_Unknown(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
req := adminSettingsRequest(httptest.NewRequest("GET", "/api/admin/settings/nonexistent", nil))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusNotFound {
t.Errorf("status: got %d, want %d", w.Code, http.StatusNotFound)
}
}
func TestToggleModule(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
// Projects is enabled by default; disable it
body := `{"enabled": false}`
req := adminSettingsRequest(httptest.NewRequest("PUT", "/api/admin/settings/projects", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var resp map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("decoding: %v", err)
}
updated, _ := resp["updated"].([]any)
if len(updated) != 1 || updated[0] != "projects.enabled" {
t.Errorf("updated: got %v, want [projects.enabled]", updated)
}
// Verify registry state
if s.modules.IsEnabled("projects") {
t.Error("projects should be disabled after toggle")
}
}
func TestToggleModule_DependencyError(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
// DAG depends on Jobs. Jobs is disabled by default.
// Enabling DAG without Jobs should fail.
body := `{"enabled": true}`
req := adminSettingsRequest(httptest.NewRequest("PUT", "/api/admin/settings/dag", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusBadRequest {
t.Errorf("status: got %d, want %d; body: %s", w.Code, http.StatusBadRequest, w.Body.String())
}
}
func TestToggleRequiredModule(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
body := `{"enabled": false}`
req := adminSettingsRequest(httptest.NewRequest("PUT", "/api/admin/settings/core", strings.NewReader(body)))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusBadRequest {
t.Errorf("status: got %d, want %d; body: %s", w.Code, http.StatusBadRequest, w.Body.String())
}
}
func TestTestConnectivity_Database(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
req := adminSettingsRequest(httptest.NewRequest("POST", "/api/admin/settings/database/test", nil))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("status: got %d, want %d; body: %s", w.Code, http.StatusOK, w.Body.String())
}
var resp map[string]any
if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
t.Fatalf("decoding: %v", err)
}
if resp["success"] != true {
t.Errorf("expected success=true, got %v; message: %v", resp["success"], resp["message"])
}
if resp["latency_ms"] == nil {
t.Error("expected latency_ms in response")
}
}
func TestTestConnectivity_NotTestable(t *testing.T) {
s := newSettingsTestServer(t)
router := newSettingsRouter(s)
req := adminSettingsRequest(httptest.NewRequest("POST", "/api/admin/settings/core/test", nil))
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusBadRequest {
t.Errorf("status: got %d, want %d; body: %s", w.Code, http.StatusBadRequest, w.Body.String())
}
}

internal/auth/runner.go

@@ -0,0 +1,24 @@
package auth
import "context"
const runnerContextKey contextKey = iota + 1
// RunnerIdentity represents an authenticated runner in the request context.
type RunnerIdentity struct {
ID string
Name string
Tags []string
}
// RunnerFromContext extracts the authenticated runner from the request context.
// Returns nil if no runner is present.
func RunnerFromContext(ctx context.Context) *RunnerIdentity {
r, _ := ctx.Value(runnerContextKey).(*RunnerIdentity)
return r
}
// ContextWithRunner returns a new context carrying the given runner identity.
func ContextWithRunner(ctx context.Context, r *RunnerIdentity) context.Context {
return context.WithValue(ctx, runnerContextKey, r)
}


@@ -17,6 +17,26 @@ type Config struct {
FreeCAD FreeCADConfig `yaml:"freecad"`
Odoo OdooConfig `yaml:"odoo"`
Auth AuthConfig `yaml:"auth"`
Jobs JobsConfig `yaml:"jobs"`
Modules ModulesConfig `yaml:"modules"`
}
// ModulesConfig holds explicit enable/disable toggles for optional modules.
// A nil pointer means "use the module's default state".
type ModulesConfig struct {
Auth *ModuleToggle `yaml:"auth"`
Projects *ModuleToggle `yaml:"projects"`
Audit *ModuleToggle `yaml:"audit"`
Odoo *ModuleToggle `yaml:"odoo"`
FreeCAD *ModuleToggle `yaml:"freecad"`
Jobs *ModuleToggle `yaml:"jobs"`
DAG *ModuleToggle `yaml:"dag"`
}
// ModuleToggle holds an optional enabled flag. The pointer allows
// distinguishing "not set" (nil) from "explicitly false".
type ModuleToggle struct {
Enabled *bool `yaml:"enabled"`
}
// AuthConfig holds authentication and authorization settings.
@@ -89,14 +109,21 @@ type DatabaseConfig struct {
MaxConnections int `yaml:"max_connections"`
}
// StorageConfig holds MinIO connection settings.
// StorageConfig holds object storage settings.
type StorageConfig struct {
Endpoint string `yaml:"endpoint"`
AccessKey string `yaml:"access_key"`
SecretKey string `yaml:"secret_key"`
Bucket string `yaml:"bucket"`
UseSSL bool `yaml:"use_ssl"`
Region string `yaml:"region"`
Backend string `yaml:"backend"` // "minio" (default) or "filesystem"
Endpoint string `yaml:"endpoint"`
AccessKey string `yaml:"access_key"`
SecretKey string `yaml:"secret_key"`
Bucket string `yaml:"bucket"`
UseSSL bool `yaml:"use_ssl"`
Region string `yaml:"region"`
Filesystem FilesystemConfig `yaml:"filesystem"`
}
// FilesystemConfig holds local filesystem storage settings.
type FilesystemConfig struct {
RootDir string `yaml:"root_dir"`
}
// SchemasConfig holds schema loading settings.
@@ -111,6 +138,14 @@ type FreeCADConfig struct {
Executable string `yaml:"executable"`
}
// JobsConfig holds worker/runner system settings.
type JobsConfig struct {
Directory string `yaml:"directory"` // default /etc/silo/jobdefs
RunnerTimeout int `yaml:"runner_timeout"` // seconds, default 90
JobTimeoutCheck int `yaml:"job_timeout_check"` // seconds, default 30
DefaultPriority int `yaml:"default_priority"` // default 100
}
// OdooConfig holds Odoo ERP integration settings.
type OdooConfig struct {
Enabled bool `yaml:"enabled"`
@@ -157,6 +192,18 @@ func Load(path string) (*Config, error) {
if cfg.FreeCAD.URIScheme == "" {
cfg.FreeCAD.URIScheme = "silo"
}
if cfg.Jobs.Directory == "" {
cfg.Jobs.Directory = "/etc/silo/jobdefs"
}
if cfg.Jobs.RunnerTimeout == 0 {
cfg.Jobs.RunnerTimeout = 90
}
if cfg.Jobs.JobTimeoutCheck == 0 {
cfg.Jobs.JobTimeoutCheck = 30
}
if cfg.Jobs.DefaultPriority == 0 {
cfg.Jobs.DefaultPriority = 100
}
// Override with environment variables
if v := os.Getenv("SILO_DB_HOST"); v != "" {

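The defaulting in `Load` fills each zero value with a sane setting, so omitted YAML keys just work. The trade-off: an explicit `0` is indistinguishable from "unset", which is exactly why `ModulesConfig` uses `*ModuleToggle` pointers instead. A sketch of the zero-value pattern, mirroring the Jobs defaults above:

```go
package main

import "fmt"

type JobsConfig struct {
	Directory       string
	RunnerTimeout   int
	JobTimeoutCheck int
	DefaultPriority int
}

// applyJobsDefaults mirrors the defaulting in Load: any zero value is
// replaced. An explicit 0 in YAML would also be overwritten here,
// which is the limitation the ModuleToggle pointer pattern avoids.
func applyJobsDefaults(c *JobsConfig) {
	if c.Directory == "" {
		c.Directory = "/etc/silo/jobdefs"
	}
	if c.RunnerTimeout == 0 {
		c.RunnerTimeout = 90
	}
	if c.JobTimeoutCheck == 0 {
		c.JobTimeoutCheck = 30
	}
	if c.DefaultPriority == 0 {
		c.DefaultPriority = 100
	}
}

func main() {
	cfg := JobsConfig{RunnerTimeout: 120} // partially configured
	applyJobsDefaults(&cfg)
	fmt.Println(cfg.Directory, cfg.RunnerTimeout, cfg.JobTimeoutCheck, cfg.DefaultPriority)
}
```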
internal/db/dag.go

@@ -0,0 +1,520 @@
package db
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/jackc/pgx/v5"
)
// DAGNode represents a feature-level node in the dependency graph.
type DAGNode struct {
ID string
ItemID string
RevisionNumber int
NodeKey string
NodeType string
PropertiesHash *string
ValidationState string
ValidationMsg *string
Metadata map[string]any
CreatedAt time.Time
UpdatedAt time.Time
}
// DAGEdge represents a dependency between two nodes.
type DAGEdge struct {
ID string
SourceNodeID string
TargetNodeID string
EdgeType string
Metadata map[string]any
}
// DAGCrossEdge represents a dependency between nodes in different items.
type DAGCrossEdge struct {
ID string
SourceNodeID string
TargetNodeID string
RelationshipID *string
EdgeType string
Metadata map[string]any
}
// DAGRepository provides dependency graph database operations.
type DAGRepository struct {
db *DB
}
// NewDAGRepository creates a new DAG repository.
func NewDAGRepository(db *DB) *DAGRepository {
return &DAGRepository{db: db}
}
// GetNodes returns all DAG nodes for an item at a specific revision.
func (r *DAGRepository) GetNodes(ctx context.Context, itemID string, revisionNumber int) ([]*DAGNode, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg,
metadata, created_at, updated_at
FROM dag_nodes
WHERE item_id = $1 AND revision_number = $2
ORDER BY node_key
`, itemID, revisionNumber)
if err != nil {
return nil, fmt.Errorf("querying DAG nodes: %w", err)
}
defer rows.Close()
return scanDAGNodes(rows)
}
// GetNodeByKey returns a single DAG node by item, revision, and key.
func (r *DAGRepository) GetNodeByKey(ctx context.Context, itemID string, revisionNumber int, nodeKey string) (*DAGNode, error) {
n := &DAGNode{}
var metadataJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg,
metadata, created_at, updated_at
FROM dag_nodes
WHERE item_id = $1 AND revision_number = $2 AND node_key = $3
`, itemID, revisionNumber, nodeKey).Scan(
&n.ID, &n.ItemID, &n.RevisionNumber, &n.NodeKey, &n.NodeType,
&n.PropertiesHash, &n.ValidationState, &n.ValidationMsg,
&metadataJSON, &n.CreatedAt, &n.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying DAG node: %w", err)
}
if metadataJSON != nil {
if err := json.Unmarshal(metadataJSON, &n.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling node metadata: %w", err)
}
}
return n, nil
}
// GetNodeByID returns a single DAG node by its ID.
func (r *DAGRepository) GetNodeByID(ctx context.Context, nodeID string) (*DAGNode, error) {
n := &DAGNode{}
var metadataJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg,
metadata, created_at, updated_at
FROM dag_nodes
WHERE id = $1
`, nodeID).Scan(
&n.ID, &n.ItemID, &n.RevisionNumber, &n.NodeKey, &n.NodeType,
&n.PropertiesHash, &n.ValidationState, &n.ValidationMsg,
&metadataJSON, &n.CreatedAt, &n.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying DAG node by ID: %w", err)
}
if metadataJSON != nil {
if err := json.Unmarshal(metadataJSON, &n.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling node metadata: %w", err)
}
}
return n, nil
}
// UpsertNode inserts or updates a single DAG node.
func (r *DAGRepository) UpsertNode(ctx context.Context, n *DAGNode) error {
metadataJSON, err := json.Marshal(n.Metadata)
if err != nil {
return fmt.Errorf("marshaling metadata: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO dag_nodes (item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg, metadata)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
ON CONFLICT (item_id, revision_number, node_key)
DO UPDATE SET
node_type = EXCLUDED.node_type,
properties_hash = EXCLUDED.properties_hash,
validation_state = EXCLUDED.validation_state,
validation_msg = EXCLUDED.validation_msg,
metadata = EXCLUDED.metadata,
updated_at = now()
RETURNING id, created_at, updated_at
`, n.ItemID, n.RevisionNumber, n.NodeKey, n.NodeType,
n.PropertiesHash, n.ValidationState, n.ValidationMsg, metadataJSON,
).Scan(&n.ID, &n.CreatedAt, &n.UpdatedAt)
if err != nil {
return fmt.Errorf("upserting DAG node: %w", err)
}
return nil
}
// GetEdges returns all edges originating from nodes belonging to an item at a specific revision.
func (r *DAGRepository) GetEdges(ctx context.Context, itemID string, revisionNumber int) ([]*DAGEdge, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT e.id, e.source_node_id, e.target_node_id, e.edge_type, e.metadata
FROM dag_edges e
JOIN dag_nodes src ON src.id = e.source_node_id
WHERE src.item_id = $1 AND src.revision_number = $2
ORDER BY e.source_node_id, e.target_node_id
`, itemID, revisionNumber)
if err != nil {
return nil, fmt.Errorf("querying DAG edges: %w", err)
}
defer rows.Close()
var edges []*DAGEdge
for rows.Next() {
e := &DAGEdge{}
var metadataJSON []byte
if err := rows.Scan(&e.ID, &e.SourceNodeID, &e.TargetNodeID, &e.EdgeType, &metadataJSON); err != nil {
return nil, fmt.Errorf("scanning DAG edge: %w", err)
}
if metadataJSON != nil {
if err := json.Unmarshal(metadataJSON, &e.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling edge metadata: %w", err)
}
}
edges = append(edges, e)
}
return edges, rows.Err()
}
// CreateEdge inserts a new edge between two nodes.
func (r *DAGRepository) CreateEdge(ctx context.Context, e *DAGEdge) error {
if e.EdgeType == "" {
e.EdgeType = "depends_on"
}
metadataJSON, err := json.Marshal(e.Metadata)
if err != nil {
return fmt.Errorf("marshaling edge metadata: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO dag_edges (source_node_id, target_node_id, edge_type, metadata)
VALUES ($1, $2, $3, $4)
ON CONFLICT (source_node_id, target_node_id, edge_type) DO NOTHING
RETURNING id
`, e.SourceNodeID, e.TargetNodeID, e.EdgeType, metadataJSON).Scan(&e.ID)
if err == pgx.ErrNoRows {
// Edge already exists, not an error
return nil
}
if err != nil {
return fmt.Errorf("creating DAG edge: %w", err)
}
return nil
}
// DeleteEdgesForItem removes all edges originating from nodes belonging to an item/revision.
func (r *DAGRepository) DeleteEdgesForItem(ctx context.Context, itemID string, revisionNumber int) error {
_, err := r.db.pool.Exec(ctx, `
DELETE FROM dag_edges
WHERE source_node_id IN (
SELECT id FROM dag_nodes WHERE item_id = $1 AND revision_number = $2
)
`, itemID, revisionNumber)
if err != nil {
return fmt.Errorf("deleting edges for item: %w", err)
}
return nil
}
// GetForwardCone returns all downstream dependent nodes reachable from the
// given node via edges. This is the key query for interference detection.
func (r *DAGRepository) GetForwardCone(ctx context.Context, nodeID string) ([]*DAGNode, error) {
rows, err := r.db.pool.Query(ctx, `
WITH RECURSIVE forward_cone AS (
SELECT target_node_id AS node_id
FROM dag_edges
WHERE source_node_id = $1
UNION
SELECT e.target_node_id
FROM dag_edges e
JOIN forward_cone fc ON fc.node_id = e.source_node_id
)
SELECT n.id, n.item_id, n.revision_number, n.node_key, n.node_type,
n.properties_hash, n.validation_state, n.validation_msg,
n.metadata, n.created_at, n.updated_at
FROM dag_nodes n
JOIN forward_cone fc ON n.id = fc.node_id
ORDER BY n.node_key
`, nodeID)
if err != nil {
return nil, fmt.Errorf("querying forward cone: %w", err)
}
defer rows.Close()
return scanDAGNodes(rows)
}
// GetBackwardCone returns all upstream dependency nodes that the given
// node depends on.
func (r *DAGRepository) GetBackwardCone(ctx context.Context, nodeID string) ([]*DAGNode, error) {
rows, err := r.db.pool.Query(ctx, `
WITH RECURSIVE backward_cone AS (
SELECT source_node_id AS node_id
FROM dag_edges
WHERE target_node_id = $1
UNION
SELECT e.source_node_id
FROM dag_edges e
JOIN backward_cone bc ON bc.node_id = e.target_node_id
)
SELECT n.id, n.item_id, n.revision_number, n.node_key, n.node_type,
n.properties_hash, n.validation_state, n.validation_msg,
n.metadata, n.created_at, n.updated_at
FROM dag_nodes n
JOIN backward_cone bc ON n.id = bc.node_id
ORDER BY n.node_key
`, nodeID)
if err != nil {
return nil, fmt.Errorf("querying backward cone: %w", err)
}
defer rows.Close()
return scanDAGNodes(rows)
}
// GetDirtySubgraph returns all non-clean nodes for an item.
func (r *DAGRepository) GetDirtySubgraph(ctx context.Context, itemID string) ([]*DAGNode, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg,
metadata, created_at, updated_at
FROM dag_nodes
WHERE item_id = $1 AND validation_state != 'clean'
ORDER BY node_key
`, itemID)
if err != nil {
return nil, fmt.Errorf("querying dirty subgraph: %w", err)
}
defer rows.Close()
return scanDAGNodes(rows)
}
// MarkDirty marks a node and all its downstream dependents as dirty.
func (r *DAGRepository) MarkDirty(ctx context.Context, nodeID string) (int64, error) {
result, err := r.db.pool.Exec(ctx, `
WITH RECURSIVE forward_cone AS (
SELECT $1::uuid AS node_id
UNION
SELECT e.target_node_id
FROM dag_edges e
JOIN forward_cone fc ON fc.node_id = e.source_node_id
)
UPDATE dag_nodes SET validation_state = 'dirty', updated_at = now()
WHERE id IN (SELECT node_id FROM forward_cone)
AND validation_state = 'clean'
`, nodeID)
if err != nil {
return 0, fmt.Errorf("marking dirty: %w", err)
}
return result.RowsAffected(), nil
}
// MarkValidating sets a node's state to 'validating'.
func (r *DAGRepository) MarkValidating(ctx context.Context, nodeID string) error {
_, err := r.db.pool.Exec(ctx, `
UPDATE dag_nodes SET validation_state = 'validating', updated_at = now()
WHERE id = $1
`, nodeID)
if err != nil {
return fmt.Errorf("marking validating: %w", err)
}
return nil
}
// MarkClean sets a node's state to 'clean' and updates its properties hash.
func (r *DAGRepository) MarkClean(ctx context.Context, nodeID string, propertiesHash string) error {
_, err := r.db.pool.Exec(ctx, `
UPDATE dag_nodes
SET validation_state = 'clean',
properties_hash = $2,
validation_msg = NULL,
updated_at = now()
WHERE id = $1
`, nodeID, propertiesHash)
if err != nil {
return fmt.Errorf("marking clean: %w", err)
}
return nil
}
// MarkFailed sets a node's state to 'failed' with an error message.
func (r *DAGRepository) MarkFailed(ctx context.Context, nodeID string, message string) error {
_, err := r.db.pool.Exec(ctx, `
UPDATE dag_nodes
SET validation_state = 'failed',
validation_msg = $2,
updated_at = now()
WHERE id = $1
`, nodeID, message)
if err != nil {
return fmt.Errorf("marking failed: %w", err)
}
return nil
}
// HasCycle checks whether adding an edge from sourceID to targetID would
// create a cycle. It walks upward from sourceID to see if targetID is
// already an ancestor.
func (r *DAGRepository) HasCycle(ctx context.Context, sourceID, targetID string) (bool, error) {
if sourceID == targetID {
return true, nil
}
var hasCycle bool
err := r.db.pool.QueryRow(ctx, `
WITH RECURSIVE ancestors AS (
SELECT source_node_id AS node_id
FROM dag_edges
WHERE target_node_id = $1
UNION
SELECT e.source_node_id
FROM dag_edges e
JOIN ancestors a ON a.node_id = e.target_node_id
)
SELECT EXISTS (
SELECT 1 FROM ancestors WHERE node_id = $2
)
`, sourceID, targetID).Scan(&hasCycle)
if err != nil {
return false, fmt.Errorf("checking for cycle: %w", err)
}
return hasCycle, nil
}
// SyncFeatureTree replaces the entire feature DAG for an item/revision
// within a single transaction. It upserts nodes (preserving each existing
// node's validation state) and replaces all edges.
func (r *DAGRepository) SyncFeatureTree(ctx context.Context, itemID string, revisionNumber int, nodes []DAGNode, edges []DAGEdge) error {
tx, err := r.db.pool.Begin(ctx)
if err != nil {
return fmt.Errorf("beginning transaction: %w", err)
}
defer tx.Rollback(ctx)
// Upsert all nodes
for i := range nodes {
n := &nodes[i]
n.ItemID = itemID
n.RevisionNumber = revisionNumber
if n.ValidationState == "" {
n.ValidationState = "clean"
}
metadataJSON, err := json.Marshal(n.Metadata)
if err != nil {
return fmt.Errorf("marshaling node metadata: %w", err)
}
err = tx.QueryRow(ctx, `
INSERT INTO dag_nodes (item_id, revision_number, node_key, node_type,
properties_hash, validation_state, validation_msg, metadata)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
ON CONFLICT (item_id, revision_number, node_key)
DO UPDATE SET
node_type = EXCLUDED.node_type,
properties_hash = EXCLUDED.properties_hash,
metadata = EXCLUDED.metadata,
updated_at = now()
RETURNING id, created_at, updated_at
`, n.ItemID, n.RevisionNumber, n.NodeKey, n.NodeType,
n.PropertiesHash, n.ValidationState, n.ValidationMsg, metadataJSON,
).Scan(&n.ID, &n.CreatedAt, &n.UpdatedAt)
if err != nil {
return fmt.Errorf("upserting node %s: %w", n.NodeKey, err)
}
}
// Build key→ID map for edge resolution
keyToID := make(map[string]string, len(nodes))
for _, n := range nodes {
keyToID[n.NodeKey] = n.ID
}
// Delete existing edges for this item/revision
_, err = tx.Exec(ctx, `
DELETE FROM dag_edges
WHERE source_node_id IN (
SELECT id FROM dag_nodes WHERE item_id = $1 AND revision_number = $2
)
`, itemID, revisionNumber)
if err != nil {
return fmt.Errorf("deleting old edges: %w", err)
}
// Insert new edges
for i := range edges {
e := &edges[i]
if e.EdgeType == "" {
e.EdgeType = "depends_on"
}
// Resolve source/target from node keys when callers supply keys
// instead of IDs (keyToID maps keys upserted above to their IDs)
sourceID := e.SourceNodeID
if id, ok := keyToID[sourceID]; ok {
sourceID = id
}
targetID := e.TargetNodeID
if id, ok := keyToID[targetID]; ok {
targetID = id
}
if sourceID == "" {
return fmt.Errorf("edge %d: source_node_id or a known node key is required", i)
}
if targetID == "" {
return fmt.Errorf("edge %d: target_node_id or a known node key is required", i)
}
metadataJSON, err := json.Marshal(e.Metadata)
if err != nil {
return fmt.Errorf("marshaling edge metadata: %w", err)
}
err = tx.QueryRow(ctx, `
INSERT INTO dag_edges (source_node_id, target_node_id, edge_type, metadata)
VALUES ($1, $2, $3, $4)
RETURNING id
`, sourceID, targetID, e.EdgeType, metadataJSON).Scan(&e.ID)
if err != nil {
return fmt.Errorf("creating edge: %w", err)
}
}
return tx.Commit(ctx)
}
// DeleteNodesForItem removes all DAG nodes (and cascades to edges) for an item/revision.
func (r *DAGRepository) DeleteNodesForItem(ctx context.Context, itemID string, revisionNumber int) error {
_, err := r.db.pool.Exec(ctx, `
DELETE FROM dag_nodes WHERE item_id = $1 AND revision_number = $2
`, itemID, revisionNumber)
if err != nil {
return fmt.Errorf("deleting nodes for item: %w", err)
}
return nil
}
func scanDAGNodes(rows pgx.Rows) ([]*DAGNode, error) {
var nodes []*DAGNode
for rows.Next() {
n := &DAGNode{}
var metadataJSON []byte
err := rows.Scan(
&n.ID, &n.ItemID, &n.RevisionNumber, &n.NodeKey, &n.NodeType,
&n.PropertiesHash, &n.ValidationState, &n.ValidationMsg,
&metadataJSON, &n.CreatedAt, &n.UpdatedAt,
)
if err != nil {
return nil, fmt.Errorf("scanning DAG node: %w", err)
}
if metadataJSON != nil {
if err := json.Unmarshal(metadataJSON, &n.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling node metadata: %w", err)
}
}
nodes = append(nodes, n)
}
return nodes, rows.Err()
}


@@ -0,0 +1,127 @@
package db
import (
"context"
"fmt"
"time"
"github.com/jackc/pgx/v5"
)
// ItemDependency represents a row in the item_dependencies table.
type ItemDependency struct {
ID string
ParentItemID string
ChildUUID string
ChildPartNumber *string
ChildRevision *int
Quantity *float64
Label *string
Relationship string
RevisionNumber int
CreatedAt time.Time
}
// ResolvedDependency extends ItemDependency with resolution info from a LEFT JOIN.
type ResolvedDependency struct {
ItemDependency
ResolvedPartNumber *string
ResolvedRevision *int
Resolved bool
}
// ItemDependencyRepository provides item_dependencies database operations.
type ItemDependencyRepository struct {
db *DB
}
// NewItemDependencyRepository creates a new item dependency repository.
func NewItemDependencyRepository(db *DB) *ItemDependencyRepository {
return &ItemDependencyRepository{db: db}
}
// ReplaceForRevision atomically replaces all dependencies for an item's revision.
// Deletes existing rows for the parent item and inserts the new set.
func (r *ItemDependencyRepository) ReplaceForRevision(ctx context.Context, parentItemID string, revisionNumber int, deps []*ItemDependency) error {
return r.db.Tx(ctx, func(tx pgx.Tx) error {
_, err := tx.Exec(ctx, `DELETE FROM item_dependencies WHERE parent_item_id = $1`, parentItemID)
if err != nil {
return fmt.Errorf("deleting old dependencies: %w", err)
}
for _, d := range deps {
_, err := tx.Exec(ctx, `
INSERT INTO item_dependencies
(parent_item_id, child_uuid, child_part_number, child_revision,
quantity, label, relationship, revision_number)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8)
`, parentItemID, d.ChildUUID, d.ChildPartNumber, d.ChildRevision,
d.Quantity, d.Label, d.Relationship, revisionNumber)
if err != nil {
return fmt.Errorf("inserting dependency: %w", err)
}
}
return nil
})
}
// ListByItem returns all dependencies for an item.
func (r *ItemDependencyRepository) ListByItem(ctx context.Context, parentItemID string) ([]*ItemDependency, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, parent_item_id, child_uuid, child_part_number, child_revision,
quantity, label, relationship, revision_number, created_at
FROM item_dependencies
WHERE parent_item_id = $1
ORDER BY label NULLS LAST
`, parentItemID)
if err != nil {
return nil, fmt.Errorf("listing dependencies: %w", err)
}
defer rows.Close()
var deps []*ItemDependency
for rows.Next() {
d := &ItemDependency{}
if err := rows.Scan(
&d.ID, &d.ParentItemID, &d.ChildUUID, &d.ChildPartNumber, &d.ChildRevision,
&d.Quantity, &d.Label, &d.Relationship, &d.RevisionNumber, &d.CreatedAt,
); err != nil {
return nil, fmt.Errorf("scanning dependency: %w", err)
}
deps = append(deps, d)
}
return deps, rows.Err()
}
// Resolve returns dependencies with child UUIDs resolved against the items table.
// Unresolvable UUIDs (external or deleted items) have Resolved=false.
func (r *ItemDependencyRepository) Resolve(ctx context.Context, parentItemID string) ([]*ResolvedDependency, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT d.id, d.parent_item_id, d.child_uuid, d.child_part_number, d.child_revision,
d.quantity, d.label, d.relationship, d.revision_number, d.created_at,
i.part_number, i.current_revision
FROM item_dependencies d
LEFT JOIN items i ON i.id = d.child_uuid AND i.archived_at IS NULL
WHERE d.parent_item_id = $1
ORDER BY d.label NULLS LAST
`, parentItemID)
if err != nil {
return nil, fmt.Errorf("resolving dependencies: %w", err)
}
defer rows.Close()
var deps []*ResolvedDependency
for rows.Next() {
d := &ResolvedDependency{}
if err := rows.Scan(
&d.ID, &d.ParentItemID, &d.ChildUUID, &d.ChildPartNumber, &d.ChildRevision,
&d.Quantity, &d.Label, &d.Relationship, &d.RevisionNumber, &d.CreatedAt,
&d.ResolvedPartNumber, &d.ResolvedRevision,
); err != nil {
return nil, fmt.Errorf("scanning resolved dependency: %w", err)
}
d.Resolved = d.ResolvedPartNumber != nil
deps = append(deps, d)
}
return deps, rows.Err()
}


@@ -8,13 +8,14 @@ import (
// ItemFile represents a file attachment on an item.
type ItemFile struct {
ID string
ItemID string
Filename string
ContentType string
Size int64
ObjectKey string
CreatedAt time.Time
ID string
ItemID string
Filename string
ContentType string
Size int64
ObjectKey string
StorageBackend string // "minio" or "filesystem"
CreatedAt time.Time
}
// ItemFileRepository provides item_files database operations.
@@ -29,11 +30,14 @@ func NewItemFileRepository(db *DB) *ItemFileRepository {
// Create inserts a new item file record.
func (r *ItemFileRepository) Create(ctx context.Context, f *ItemFile) error {
if f.StorageBackend == "" {
f.StorageBackend = "minio"
}
err := r.db.pool.QueryRow(ctx,
`INSERT INTO item_files (item_id, filename, content_type, size, object_key)
VALUES ($1, $2, $3, $4, $5)
`INSERT INTO item_files (item_id, filename, content_type, size, object_key, storage_backend)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id, created_at`,
f.ItemID, f.Filename, f.ContentType, f.Size, f.ObjectKey,
f.ItemID, f.Filename, f.ContentType, f.Size, f.ObjectKey, f.StorageBackend,
).Scan(&f.ID, &f.CreatedAt)
if err != nil {
return fmt.Errorf("creating item file: %w", err)
@@ -44,7 +48,8 @@ func (r *ItemFileRepository) Create(ctx context.Context, f *ItemFile) error {
// ListByItem returns all file attachments for an item.
func (r *ItemFileRepository) ListByItem(ctx context.Context, itemID string) ([]*ItemFile, error) {
rows, err := r.db.pool.Query(ctx,
`SELECT id, item_id, filename, content_type, size, object_key, created_at
`SELECT id, item_id, filename, content_type, size, object_key,
COALESCE(storage_backend, 'minio'), created_at
FROM item_files WHERE item_id = $1 ORDER BY created_at`,
itemID,
)
@@ -56,7 +61,7 @@ func (r *ItemFileRepository) ListByItem(ctx context.Context, itemID string) ([]*
var files []*ItemFile
for rows.Next() {
f := &ItemFile{}
if err := rows.Scan(&f.ID, &f.ItemID, &f.Filename, &f.ContentType, &f.Size, &f.ObjectKey, &f.CreatedAt); err != nil {
if err := rows.Scan(&f.ID, &f.ItemID, &f.Filename, &f.ContentType, &f.Size, &f.ObjectKey, &f.StorageBackend, &f.CreatedAt); err != nil {
return nil, fmt.Errorf("scanning item file: %w", err)
}
files = append(files, f)
@@ -68,10 +73,11 @@ func (r *ItemFileRepository) ListByItem(ctx context.Context, itemID string) ([]*
func (r *ItemFileRepository) Get(ctx context.Context, id string) (*ItemFile, error) {
f := &ItemFile{}
err := r.db.pool.QueryRow(ctx,
`SELECT id, item_id, filename, content_type, size, object_key, created_at
`SELECT id, item_id, filename, content_type, size, object_key,
COALESCE(storage_backend, 'minio'), created_at
FROM item_files WHERE id = $1`,
id,
).Scan(&f.ID, &f.ItemID, &f.Filename, &f.ContentType, &f.Size, &f.ObjectKey, &f.CreatedAt)
).Scan(&f.ID, &f.ItemID, &f.Filename, &f.ContentType, &f.Size, &f.ObjectKey, &f.StorageBackend, &f.CreatedAt)
if err != nil {
return nil, fmt.Errorf("getting item file: %w", err)
}


@@ -0,0 +1,161 @@
package db
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/jackc/pgx/v5"
)
// ItemMetadata represents a row in the item_metadata table.
type ItemMetadata struct {
ItemID string
SchemaName *string
Tags []string
LifecycleState string
Fields map[string]any
KCVersion *string
ManifestUUID *string
SiloInstance *string
RevisionHash *string
UpdatedAt time.Time
UpdatedBy *string
}
// ItemMetadataRepository provides item_metadata database operations.
type ItemMetadataRepository struct {
db *DB
}
// NewItemMetadataRepository creates a new item metadata repository.
func NewItemMetadataRepository(db *DB) *ItemMetadataRepository {
return &ItemMetadataRepository{db: db}
}
// Get returns metadata for an item, or nil if none exists.
func (r *ItemMetadataRepository) Get(ctx context.Context, itemID string) (*ItemMetadata, error) {
m := &ItemMetadata{}
var fieldsJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT item_id, schema_name, tags, lifecycle_state, fields,
kc_version, manifest_uuid, silo_instance, revision_hash,
updated_at, updated_by
FROM item_metadata
WHERE item_id = $1
`, itemID).Scan(
&m.ItemID, &m.SchemaName, &m.Tags, &m.LifecycleState, &fieldsJSON,
&m.KCVersion, &m.ManifestUUID, &m.SiloInstance, &m.RevisionHash,
&m.UpdatedAt, &m.UpdatedBy,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("getting item metadata: %w", err)
}
if fieldsJSON != nil {
if err := json.Unmarshal(fieldsJSON, &m.Fields); err != nil {
return nil, fmt.Errorf("unmarshaling fields: %w", err)
}
}
if m.Fields == nil {
m.Fields = make(map[string]any)
}
if m.Tags == nil {
m.Tags = []string{}
}
return m, nil
}
// Upsert inserts or updates the metadata row for an item.
// Used by the commit extraction pipeline.
func (r *ItemMetadataRepository) Upsert(ctx context.Context, m *ItemMetadata) error {
fieldsJSON, err := json.Marshal(m.Fields)
if err != nil {
return fmt.Errorf("marshaling fields: %w", err)
}
_, err = r.db.pool.Exec(ctx, `
INSERT INTO item_metadata
(item_id, schema_name, tags, lifecycle_state, fields,
kc_version, manifest_uuid, silo_instance, revision_hash,
updated_at, updated_by)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, now(), $10)
ON CONFLICT (item_id) DO UPDATE SET
schema_name = EXCLUDED.schema_name,
tags = EXCLUDED.tags,
lifecycle_state = EXCLUDED.lifecycle_state,
fields = EXCLUDED.fields,
kc_version = EXCLUDED.kc_version,
manifest_uuid = EXCLUDED.manifest_uuid,
silo_instance = EXCLUDED.silo_instance,
revision_hash = EXCLUDED.revision_hash,
updated_at = now(),
updated_by = EXCLUDED.updated_by
`, m.ItemID, m.SchemaName, m.Tags, m.LifecycleState, fieldsJSON,
m.KCVersion, m.ManifestUUID, m.SiloInstance, m.RevisionHash,
m.UpdatedBy)
if err != nil {
return fmt.Errorf("upserting item metadata: %w", err)
}
return nil
}
// UpdateFields merges the given fields into the existing JSONB fields column.
func (r *ItemMetadataRepository) UpdateFields(ctx context.Context, itemID string, fields map[string]any, updatedBy string) error {
fieldsJSON, err := json.Marshal(fields)
if err != nil {
return fmt.Errorf("marshaling fields: %w", err)
}
tag, err := r.db.pool.Exec(ctx, `
UPDATE item_metadata
SET fields = fields || $2::jsonb,
updated_at = now(),
updated_by = $3
WHERE item_id = $1
`, itemID, fieldsJSON, updatedBy)
if err != nil {
return fmt.Errorf("updating metadata fields: %w", err)
}
if tag.RowsAffected() == 0 {
return fmt.Errorf("item metadata not found")
}
return nil
}
// UpdateLifecycle sets the lifecycle_state column.
func (r *ItemMetadataRepository) UpdateLifecycle(ctx context.Context, itemID, state, updatedBy string) error {
tag, err := r.db.pool.Exec(ctx, `
UPDATE item_metadata
SET lifecycle_state = $2,
updated_at = now(),
updated_by = $3
WHERE item_id = $1
`, itemID, state, updatedBy)
if err != nil {
return fmt.Errorf("updating lifecycle state: %w", err)
}
if tag.RowsAffected() == 0 {
return fmt.Errorf("item metadata not found")
}
return nil
}
// SetTags replaces the tags array.
func (r *ItemMetadataRepository) SetTags(ctx context.Context, itemID string, tags []string, updatedBy string) error {
tag, err := r.db.pool.Exec(ctx, `
UPDATE item_metadata
SET tags = $2,
updated_at = now(),
updated_by = $3
WHERE item_id = $1
`, itemID, tags, updatedBy)
if err != nil {
return fmt.Errorf("updating tags: %w", err)
}
if tag.RowsAffected() == 0 {
return fmt.Errorf("item metadata not found")
}
return nil
}


@@ -35,11 +35,12 @@ type Revision struct {
ItemID string
RevisionNumber int
Properties map[string]any
FileKey *string
FileVersion *string
FileChecksum *string
FileSize *int64
ThumbnailKey *string
FileKey *string
FileVersion *string
FileChecksum *string
FileSize *int64
FileStorageBackend string // "minio" or "filesystem"
ThumbnailKey *string
CreatedAt time.Time
CreatedBy *string
Comment *string
@@ -306,16 +307,20 @@ func (r *ItemRepository) CreateRevision(ctx context.Context, rev *Revision) erro
return fmt.Errorf("marshaling properties: %w", err)
}
if rev.FileStorageBackend == "" {
rev.FileStorageBackend = "minio"
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO revisions (
item_id, revision_number, properties, file_key, file_version,
file_checksum, file_size, thumbnail_key, created_by, comment
file_checksum, file_size, file_storage_backend, thumbnail_key, created_by, comment
)
SELECT $1, current_revision + 1, $2, $3, $4, $5, $6, $7, $8, $9
SELECT $1, current_revision + 1, $2, $3, $4, $5, $6, $7, $8, $9, $10
FROM items WHERE id = $1
RETURNING id, revision_number, created_at
`, rev.ItemID, propsJSON, rev.FileKey, rev.FileVersion,
rev.FileChecksum, rev.FileSize, rev.ThumbnailKey, rev.CreatedBy, rev.Comment,
rev.FileChecksum, rev.FileSize, rev.FileStorageBackend, rev.ThumbnailKey, rev.CreatedBy, rev.Comment,
).Scan(&rev.ID, &rev.RevisionNumber, &rev.CreatedAt)
if err != nil {
return fmt.Errorf("inserting revision: %w", err)
@@ -342,7 +347,8 @@ func (r *ItemRepository) GetRevisions(ctx context.Context, itemID string) ([]*Re
if hasStatusColumn {
rows, err = r.db.pool.Query(ctx, `
SELECT id, item_id, revision_number, properties, file_key, file_version,
file_checksum, file_size, thumbnail_key, created_at, created_by, comment,
file_checksum, file_size, COALESCE(file_storage_backend, 'minio'),
thumbnail_key, created_at, created_by, comment,
COALESCE(status, 'draft') as status, COALESCE(labels, ARRAY[]::TEXT[]) as labels
FROM revisions
WHERE item_id = $1
@@ -369,7 +375,8 @@ func (r *ItemRepository) GetRevisions(ctx context.Context, itemID string) ([]*Re
if hasStatusColumn {
err = rows.Scan(
&rev.ID, &rev.ItemID, &rev.RevisionNumber, &propsJSON, &rev.FileKey, &rev.FileVersion,
&rev.FileChecksum, &rev.FileSize, &rev.ThumbnailKey, &rev.CreatedAt, &rev.CreatedBy, &rev.Comment,
&rev.FileChecksum, &rev.FileSize, &rev.FileStorageBackend,
&rev.ThumbnailKey, &rev.CreatedAt, &rev.CreatedBy, &rev.Comment,
&rev.Status, &rev.Labels,
)
} else {
@@ -379,6 +386,7 @@ func (r *ItemRepository) GetRevisions(ctx context.Context, itemID string) ([]*Re
)
rev.Status = "draft"
rev.Labels = []string{}
rev.FileStorageBackend = "minio"
}
if err != nil {
return nil, fmt.Errorf("scanning revision: %w", err)
@@ -412,13 +420,15 @@ func (r *ItemRepository) GetRevision(ctx context.Context, itemID string, revisio
if hasStatusColumn {
err = r.db.pool.QueryRow(ctx, `
SELECT id, item_id, revision_number, properties, file_key, file_version,
file_checksum, file_size, thumbnail_key, created_at, created_by, comment,
file_checksum, file_size, COALESCE(file_storage_backend, 'minio'),
thumbnail_key, created_at, created_by, comment,
COALESCE(status, 'draft') as status, COALESCE(labels, ARRAY[]::TEXT[]) as labels
FROM revisions
WHERE item_id = $1 AND revision_number = $2
`, itemID, revisionNumber).Scan(
&rev.ID, &rev.ItemID, &rev.RevisionNumber, &propsJSON, &rev.FileKey, &rev.FileVersion,
&rev.FileChecksum, &rev.FileSize, &rev.ThumbnailKey, &rev.CreatedAt, &rev.CreatedBy, &rev.Comment,
&rev.FileChecksum, &rev.FileSize, &rev.FileStorageBackend,
&rev.ThumbnailKey, &rev.CreatedAt, &rev.CreatedBy, &rev.Comment,
&rev.Status, &rev.Labels,
)
} else {
@@ -433,6 +443,7 @@ func (r *ItemRepository) GetRevision(ctx context.Context, itemID string, revisio
)
rev.Status = "draft"
rev.Labels = []string{}
rev.FileStorageBackend = "minio"
}
if err == pgx.ErrNoRows {
@@ -606,15 +617,16 @@ func (r *ItemRepository) CreateRevisionFromExisting(ctx context.Context, itemID
// Create new revision with copied properties (and optionally file reference)
newRev := &Revision{
ItemID: itemID,
Properties: source.Properties,
FileKey: source.FileKey,
FileVersion: source.FileVersion,
FileChecksum: source.FileChecksum,
FileSize: source.FileSize,
ThumbnailKey: source.ThumbnailKey,
CreatedBy: createdBy,
Comment: &comment,
ItemID: itemID,
Properties: source.Properties,
FileKey: source.FileKey,
FileVersion: source.FileVersion,
FileChecksum: source.FileChecksum,
FileSize: source.FileSize,
FileStorageBackend: source.FileStorageBackend,
ThumbnailKey: source.ThumbnailKey,
CreatedBy: createdBy,
Comment: &comment,
}
// Insert the new revision
@@ -626,13 +638,13 @@ func (r *ItemRepository) CreateRevisionFromExisting(ctx context.Context, itemID
err = r.db.pool.QueryRow(ctx, `
INSERT INTO revisions (
item_id, revision_number, properties, file_key, file_version,
file_checksum, file_size, thumbnail_key, created_by, comment, status
file_checksum, file_size, file_storage_backend, thumbnail_key, created_by, comment, status
)
SELECT $1, current_revision + 1, $2, $3, $4, $5, $6, $7, $8, $9, 'draft'
SELECT $1, current_revision + 1, $2, $3, $4, $5, $6, $7, $8, $9, $10, 'draft'
FROM items WHERE id = $1
RETURNING id, revision_number, created_at
`, newRev.ItemID, propsJSON, newRev.FileKey, newRev.FileVersion,
newRev.FileChecksum, newRev.FileSize, newRev.ThumbnailKey, newRev.CreatedBy, newRev.Comment,
newRev.FileChecksum, newRev.FileSize, newRev.FileStorageBackend, newRev.ThumbnailKey, newRev.CreatedBy, newRev.Comment,
).Scan(&newRev.ID, &newRev.RevisionNumber, &newRev.CreatedAt)
if err != nil {
return nil, fmt.Errorf("inserting revision: %w", err)

internal/db/jobs.go Normal file

@@ -0,0 +1,759 @@
package db
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/jackc/pgx/v5"
)
// Runner represents a registered compute worker.
type Runner struct {
ID string
Name string
TokenHash string
TokenPrefix string
Tags []string
Status string
LastHeartbeat *time.Time
LastJobID *string
Metadata map[string]any
CreatedAt time.Time
UpdatedAt time.Time
}
// JobDefinitionRecord is a job definition stored in the database.
type JobDefinitionRecord struct {
ID string
Name string
Version int
TriggerType string
ScopeType string
ComputeType string
RunnerTags []string
TimeoutSeconds int
MaxRetries int
Priority int
Definition map[string]any
Enabled bool
CreatedAt time.Time
UpdatedAt time.Time
}
// Job represents a single compute job instance.
type Job struct {
ID string
JobDefinitionID *string
DefinitionName string
Status string
Priority int
ItemID *string
ProjectID *string
ScopeMetadata map[string]any
RunnerID *string
RunnerTags []string
CreatedAt time.Time
ClaimedAt *time.Time
StartedAt *time.Time
CompletedAt *time.Time
TimeoutSeconds int
ExpiresAt *time.Time
Progress int
ProgressMessage *string
Result map[string]any
ErrorMessage *string
RetryCount int
MaxRetries int
CreatedBy *string
CancelledBy *string
}
// JobLogEntry is a single log line for a job.
type JobLogEntry struct {
ID string
JobID string
Timestamp time.Time
Level string
Message string
Metadata map[string]any
}
// JobRepository provides job and runner database operations.
type JobRepository struct {
db *DB
}
// NewJobRepository creates a new job repository.
func NewJobRepository(db *DB) *JobRepository {
return &JobRepository{db: db}
}
// ---------------------------------------------------------------------------
// Job Definitions
// ---------------------------------------------------------------------------
// UpsertDefinition inserts or updates a job definition record.
func (r *JobRepository) UpsertDefinition(ctx context.Context, d *JobDefinitionRecord) error {
defJSON, err := json.Marshal(d.Definition)
if err != nil {
return fmt.Errorf("marshaling definition: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO job_definitions (name, version, trigger_type, scope_type, compute_type,
runner_tags, timeout_seconds, max_retries, priority,
definition, enabled)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
ON CONFLICT (name) DO UPDATE SET
version = EXCLUDED.version,
trigger_type = EXCLUDED.trigger_type,
scope_type = EXCLUDED.scope_type,
compute_type = EXCLUDED.compute_type,
runner_tags = EXCLUDED.runner_tags,
timeout_seconds = EXCLUDED.timeout_seconds,
max_retries = EXCLUDED.max_retries,
priority = EXCLUDED.priority,
definition = EXCLUDED.definition,
enabled = EXCLUDED.enabled,
updated_at = now()
RETURNING id, created_at, updated_at
`, d.Name, d.Version, d.TriggerType, d.ScopeType, d.ComputeType,
d.RunnerTags, d.TimeoutSeconds, d.MaxRetries, d.Priority,
defJSON, d.Enabled,
).Scan(&d.ID, &d.CreatedAt, &d.UpdatedAt)
if err != nil {
return fmt.Errorf("upserting job definition: %w", err)
}
return nil
}
// GetDefinition returns a job definition by name.
func (r *JobRepository) GetDefinition(ctx context.Context, name string) (*JobDefinitionRecord, error) {
d := &JobDefinitionRecord{}
var defJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, name, version, trigger_type, scope_type, compute_type,
runner_tags, timeout_seconds, max_retries, priority,
definition, enabled, created_at, updated_at
FROM job_definitions WHERE name = $1
`, name).Scan(
&d.ID, &d.Name, &d.Version, &d.TriggerType, &d.ScopeType, &d.ComputeType,
&d.RunnerTags, &d.TimeoutSeconds, &d.MaxRetries, &d.Priority,
&defJSON, &d.Enabled, &d.CreatedAt, &d.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying job definition: %w", err)
}
if defJSON != nil {
if err := json.Unmarshal(defJSON, &d.Definition); err != nil {
return nil, fmt.Errorf("unmarshaling definition: %w", err)
}
}
return d, nil
}
// ListDefinitions returns all job definitions.
func (r *JobRepository) ListDefinitions(ctx context.Context) ([]*JobDefinitionRecord, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, name, version, trigger_type, scope_type, compute_type,
runner_tags, timeout_seconds, max_retries, priority,
definition, enabled, created_at, updated_at
FROM job_definitions ORDER BY name
`)
if err != nil {
return nil, fmt.Errorf("querying job definitions: %w", err)
}
defer rows.Close()
return scanJobDefinitions(rows)
}
// GetDefinitionsByTrigger returns all enabled definitions matching a trigger type.
func (r *JobRepository) GetDefinitionsByTrigger(ctx context.Context, triggerType string) ([]*JobDefinitionRecord, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, name, version, trigger_type, scope_type, compute_type,
runner_tags, timeout_seconds, max_retries, priority,
definition, enabled, created_at, updated_at
FROM job_definitions
WHERE trigger_type = $1 AND enabled = true
ORDER BY priority ASC, name
`, triggerType)
if err != nil {
return nil, fmt.Errorf("querying definitions by trigger: %w", err)
}
defer rows.Close()
return scanJobDefinitions(rows)
}
// GetDefinitionByID returns a job definition by ID.
func (r *JobRepository) GetDefinitionByID(ctx context.Context, id string) (*JobDefinitionRecord, error) {
d := &JobDefinitionRecord{}
var defJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, name, version, trigger_type, scope_type, compute_type,
runner_tags, timeout_seconds, max_retries, priority,
definition, enabled, created_at, updated_at
FROM job_definitions WHERE id = $1
`, id).Scan(
&d.ID, &d.Name, &d.Version, &d.TriggerType, &d.ScopeType, &d.ComputeType,
&d.RunnerTags, &d.TimeoutSeconds, &d.MaxRetries, &d.Priority,
&defJSON, &d.Enabled, &d.CreatedAt, &d.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying job definition by ID: %w", err)
}
if defJSON != nil {
if err := json.Unmarshal(defJSON, &d.Definition); err != nil {
return nil, fmt.Errorf("unmarshaling definition: %w", err)
}
}
return d, nil
}
// ---------------------------------------------------------------------------
// Jobs
// ---------------------------------------------------------------------------
// CreateJob inserts a new job.
func (r *JobRepository) CreateJob(ctx context.Context, j *Job) error {
scopeJSON, err := json.Marshal(j.ScopeMetadata)
if err != nil {
return fmt.Errorf("marshaling scope metadata: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO jobs (job_definition_id, definition_name, status, priority,
item_id, project_id, scope_metadata,
runner_tags, timeout_seconds, max_retries, created_by)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11)
RETURNING id, created_at
`, j.JobDefinitionID, j.DefinitionName, "pending", j.Priority,
j.ItemID, j.ProjectID, scopeJSON,
j.RunnerTags, j.TimeoutSeconds, j.MaxRetries, j.CreatedBy,
).Scan(&j.ID, &j.CreatedAt)
if err != nil {
return fmt.Errorf("creating job: %w", err)
}
j.Status = "pending"
return nil
}
// GetJob returns a job by ID.
func (r *JobRepository) GetJob(ctx context.Context, jobID string) (*Job, error) {
j := &Job{}
var scopeJSON, resultJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, job_definition_id, definition_name, status, priority,
item_id, project_id, scope_metadata, runner_id, runner_tags,
created_at, claimed_at, started_at, completed_at,
timeout_seconds, expires_at, progress, progress_message,
result, error_message, retry_count, max_retries,
created_by, cancelled_by
FROM jobs WHERE id = $1
`, jobID).Scan(
&j.ID, &j.JobDefinitionID, &j.DefinitionName, &j.Status, &j.Priority,
&j.ItemID, &j.ProjectID, &scopeJSON, &j.RunnerID, &j.RunnerTags,
&j.CreatedAt, &j.ClaimedAt, &j.StartedAt, &j.CompletedAt,
&j.TimeoutSeconds, &j.ExpiresAt, &j.Progress, &j.ProgressMessage,
&resultJSON, &j.ErrorMessage, &j.RetryCount, &j.MaxRetries,
&j.CreatedBy, &j.CancelledBy,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying job: %w", err)
}
if scopeJSON != nil {
if err := json.Unmarshal(scopeJSON, &j.ScopeMetadata); err != nil {
return nil, fmt.Errorf("unmarshaling scope metadata: %w", err)
}
}
if resultJSON != nil {
if err := json.Unmarshal(resultJSON, &j.Result); err != nil {
return nil, fmt.Errorf("unmarshaling result: %w", err)
}
}
return j, nil
}
// ListJobs returns jobs matching optional filters.
func (r *JobRepository) ListJobs(ctx context.Context, status, itemID string, limit, offset int) ([]*Job, error) {
query := `
SELECT id, job_definition_id, definition_name, status, priority,
item_id, project_id, scope_metadata, runner_id, runner_tags,
created_at, claimed_at, started_at, completed_at,
timeout_seconds, expires_at, progress, progress_message,
result, error_message, retry_count, max_retries,
created_by, cancelled_by
FROM jobs WHERE 1=1`
args := []any{}
argN := 1
if status != "" {
query += fmt.Sprintf(" AND status = $%d", argN)
args = append(args, status)
argN++
}
if itemID != "" {
query += fmt.Sprintf(" AND item_id = $%d", argN)
args = append(args, itemID)
argN++
}
query += " ORDER BY created_at DESC"
if limit > 0 {
query += fmt.Sprintf(" LIMIT $%d", argN)
args = append(args, limit)
argN++
}
if offset > 0 {
query += fmt.Sprintf(" OFFSET $%d", argN)
args = append(args, offset)
}
rows, err := r.db.pool.Query(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("querying jobs: %w", err)
}
defer rows.Close()
return scanJobs(rows)
}
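ListJobs assembles its SQL incrementally, bumping the `$n` placeholder index once per optional filter so placeholder numbers always line up with positions in the args slice. The pattern in isolation (a minimal sketch with illustrative names, not part of the repository API):

```go
package main

import (
	"fmt"
	"strings"
)

// buildJobsQuery mirrors the ListJobs filter pattern: each optional
// condition appends " AND col = $n" and records its argument, keeping
// placeholder numbers in sync with the args slice.
func buildJobsQuery(status, itemID string, limit int) (string, []any) {
	var sb strings.Builder
	sb.WriteString("SELECT id FROM jobs WHERE 1=1")
	args := []any{}
	argN := 1
	if status != "" {
		fmt.Fprintf(&sb, " AND status = $%d", argN)
		args = append(args, status)
		argN++
	}
	if itemID != "" {
		fmt.Fprintf(&sb, " AND item_id = $%d", argN)
		args = append(args, itemID)
		argN++
	}
	sb.WriteString(" ORDER BY created_at DESC")
	if limit > 0 {
		fmt.Fprintf(&sb, " LIMIT $%d", argN)
		args = append(args, limit)
	}
	return sb.String(), args
}

func main() {
	q, args := buildJobsQuery("running", "", 10)
	fmt.Println(q)        // ... AND status = $1 ... LIMIT $2
	fmt.Println(len(args)) // 2
}
```

The `WHERE 1=1` base clause lets every filter append as ` AND ...` without tracking whether it is the first condition.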
// ClaimJob atomically claims the next available job matching the runner's tags.
// Uses FOR UPDATE SKIP LOCKED so concurrent runners never claim the same job.
func (r *JobRepository) ClaimJob(ctx context.Context, runnerID string, tags []string) (*Job, error) {
j := &Job{}
var scopeJSON, resultJSON []byte
err := r.db.pool.QueryRow(ctx, `
WITH claimable AS (
SELECT id FROM jobs
WHERE status = 'pending' AND runner_tags <@ $2::text[]
ORDER BY priority ASC, created_at ASC
LIMIT 1
FOR UPDATE SKIP LOCKED
)
UPDATE jobs SET
status = 'claimed',
runner_id = $1,
claimed_at = now(),
expires_at = now() + (timeout_seconds || ' seconds')::interval
FROM claimable
WHERE jobs.id = claimable.id
RETURNING jobs.id, jobs.job_definition_id, jobs.definition_name, jobs.status,
jobs.priority, jobs.item_id, jobs.project_id, jobs.scope_metadata,
jobs.runner_id, jobs.runner_tags, jobs.created_at, jobs.claimed_at,
jobs.started_at, jobs.completed_at, jobs.timeout_seconds, jobs.expires_at,
jobs.progress, jobs.progress_message, jobs.result, jobs.error_message,
jobs.retry_count, jobs.max_retries, jobs.created_by, jobs.cancelled_by
`, runnerID, tags).Scan(
&j.ID, &j.JobDefinitionID, &j.DefinitionName, &j.Status,
&j.Priority, &j.ItemID, &j.ProjectID, &scopeJSON,
&j.RunnerID, &j.RunnerTags, &j.CreatedAt, &j.ClaimedAt,
&j.StartedAt, &j.CompletedAt, &j.TimeoutSeconds, &j.ExpiresAt,
&j.Progress, &j.ProgressMessage, &resultJSON, &j.ErrorMessage,
&j.RetryCount, &j.MaxRetries, &j.CreatedBy, &j.CancelledBy,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("claiming job: %w", err)
}
if scopeJSON != nil {
if err := json.Unmarshal(scopeJSON, &j.ScopeMetadata); err != nil {
return nil, fmt.Errorf("unmarshaling scope metadata: %w", err)
}
}
if resultJSON != nil {
if err := json.Unmarshal(resultJSON, &j.Result); err != nil {
return nil, fmt.Errorf("unmarshaling result: %w", err)
}
}
return j, nil
}
// StartJob transitions a claimed job to running.
func (r *JobRepository) StartJob(ctx context.Context, jobID, runnerID string) error {
result, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET status = 'running', started_at = now()
WHERE id = $1 AND runner_id = $2 AND status = 'claimed'
`, jobID, runnerID)
if err != nil {
return fmt.Errorf("starting job: %w", err)
}
if result.RowsAffected() == 0 {
return fmt.Errorf("job %s not claimable by runner %s or not in claimed state", jobID, runnerID)
}
return nil
}
// UpdateProgress updates a running job's progress.
func (r *JobRepository) UpdateProgress(ctx context.Context, jobID, runnerID string, progress int, message string) error {
var msg *string
if message != "" {
msg = &message
}
result, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET progress = $3, progress_message = $4
WHERE id = $1 AND runner_id = $2 AND status IN ('claimed', 'running')
`, jobID, runnerID, progress, msg)
if err != nil {
return fmt.Errorf("updating progress: %w", err)
}
if result.RowsAffected() == 0 {
return fmt.Errorf("job %s not owned by runner %s or not active", jobID, runnerID)
}
return nil
}
// CompleteJob marks a job as completed with optional result data.
func (r *JobRepository) CompleteJob(ctx context.Context, jobID, runnerID string, resultData map[string]any) error {
var resultJSON []byte
var err error
if resultData != nil {
resultJSON, err = json.Marshal(resultData)
if err != nil {
return fmt.Errorf("marshaling result: %w", err)
}
}
res, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET
status = 'completed',
progress = 100,
result = $3,
completed_at = now()
WHERE id = $1 AND runner_id = $2 AND status IN ('claimed', 'running')
`, jobID, runnerID, resultJSON)
if err != nil {
return fmt.Errorf("completing job: %w", err)
}
if res.RowsAffected() == 0 {
return fmt.Errorf("job %s not owned by runner %s or not active", jobID, runnerID)
}
return nil
}
// FailJob marks a job as failed with an error message.
func (r *JobRepository) FailJob(ctx context.Context, jobID, runnerID string, errMsg string) error {
res, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET
status = 'failed',
error_message = $3,
completed_at = now()
WHERE id = $1 AND runner_id = $2 AND status IN ('claimed', 'running')
`, jobID, runnerID, errMsg)
if err != nil {
return fmt.Errorf("failing job: %w", err)
}
if res.RowsAffected() == 0 {
return fmt.Errorf("job %s not owned by runner %s or not active", jobID, runnerID)
}
return nil
}
// CancelJob cancels a pending or active job.
func (r *JobRepository) CancelJob(ctx context.Context, jobID string, cancelledBy string) error {
res, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET
status = 'cancelled',
cancelled_by = $2,
completed_at = now()
WHERE id = $1 AND status IN ('pending', 'claimed', 'running')
`, jobID, cancelledBy)
if err != nil {
return fmt.Errorf("cancelling job: %w", err)
}
if res.RowsAffected() == 0 {
return fmt.Errorf("job %s not cancellable", jobID)
}
return nil
}
// TimeoutExpiredJobs marks expired claimed/running jobs as failed.
// Returns the number of jobs timed out.
func (r *JobRepository) TimeoutExpiredJobs(ctx context.Context) (int64, error) {
result, err := r.db.pool.Exec(ctx, `
UPDATE jobs SET
status = 'failed',
error_message = 'job timed out',
completed_at = now()
WHERE status IN ('claimed', 'running')
AND expires_at IS NOT NULL
AND expires_at < now()
`)
if err != nil {
return 0, fmt.Errorf("timing out expired jobs: %w", err)
}
return result.RowsAffected(), nil
}
// ---------------------------------------------------------------------------
// Job Log
// ---------------------------------------------------------------------------
// AppendLog adds a log entry to a job.
func (r *JobRepository) AppendLog(ctx context.Context, entry *JobLogEntry) error {
metaJSON, err := json.Marshal(entry.Metadata)
if err != nil {
return fmt.Errorf("marshaling log metadata: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO job_log (job_id, level, message, metadata)
VALUES ($1, $2, $3, $4)
RETURNING id, timestamp
`, entry.JobID, entry.Level, entry.Message, metaJSON,
).Scan(&entry.ID, &entry.Timestamp)
if err != nil {
return fmt.Errorf("appending job log: %w", err)
}
return nil
}
// GetJobLogs returns all log entries for a job.
func (r *JobRepository) GetJobLogs(ctx context.Context, jobID string) ([]*JobLogEntry, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, job_id, timestamp, level, message, metadata
FROM job_log WHERE job_id = $1 ORDER BY timestamp ASC
`, jobID)
if err != nil {
return nil, fmt.Errorf("querying job logs: %w", err)
}
defer rows.Close()
var entries []*JobLogEntry
for rows.Next() {
e := &JobLogEntry{}
var metaJSON []byte
if err := rows.Scan(&e.ID, &e.JobID, &e.Timestamp, &e.Level, &e.Message, &metaJSON); err != nil {
return nil, fmt.Errorf("scanning job log: %w", err)
}
if metaJSON != nil {
if err := json.Unmarshal(metaJSON, &e.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling log metadata: %w", err)
}
}
entries = append(entries, e)
}
return entries, rows.Err()
}
// ---------------------------------------------------------------------------
// Runners
// ---------------------------------------------------------------------------
// RegisterRunner creates a new runner record.
func (r *JobRepository) RegisterRunner(ctx context.Context, runner *Runner) error {
metaJSON, err := json.Marshal(runner.Metadata)
if err != nil {
return fmt.Errorf("marshaling runner metadata: %w", err)
}
err = r.db.pool.QueryRow(ctx, `
INSERT INTO runners (name, token_hash, token_prefix, tags, status, metadata)
VALUES ($1, $2, $3, $4, 'offline', $5)
RETURNING id, created_at, updated_at
`, runner.Name, runner.TokenHash, runner.TokenPrefix, runner.Tags, metaJSON,
).Scan(&runner.ID, &runner.CreatedAt, &runner.UpdatedAt)
if err != nil {
return fmt.Errorf("registering runner: %w", err)
}
runner.Status = "offline"
return nil
}
// GetRunnerByToken looks up a runner by token hash.
func (r *JobRepository) GetRunnerByToken(ctx context.Context, tokenHash string) (*Runner, error) {
runner := &Runner{}
var metaJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, name, token_hash, token_prefix, tags, status,
last_heartbeat, last_job_id, metadata, created_at, updated_at
FROM runners WHERE token_hash = $1
`, tokenHash).Scan(
&runner.ID, &runner.Name, &runner.TokenHash, &runner.TokenPrefix,
&runner.Tags, &runner.Status, &runner.LastHeartbeat, &runner.LastJobID,
&metaJSON, &runner.CreatedAt, &runner.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying runner by token: %w", err)
}
if metaJSON != nil {
if err := json.Unmarshal(metaJSON, &runner.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling runner metadata: %w", err)
}
}
return runner, nil
}
// GetRunner returns a runner by ID.
func (r *JobRepository) GetRunner(ctx context.Context, runnerID string) (*Runner, error) {
runner := &Runner{}
var metaJSON []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, name, token_hash, token_prefix, tags, status,
last_heartbeat, last_job_id, metadata, created_at, updated_at
FROM runners WHERE id = $1
`, runnerID).Scan(
&runner.ID, &runner.Name, &runner.TokenHash, &runner.TokenPrefix,
&runner.Tags, &runner.Status, &runner.LastHeartbeat, &runner.LastJobID,
&metaJSON, &runner.CreatedAt, &runner.UpdatedAt,
)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, fmt.Errorf("querying runner: %w", err)
}
if metaJSON != nil {
if err := json.Unmarshal(metaJSON, &runner.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling runner metadata: %w", err)
}
}
return runner, nil
}
// Heartbeat updates a runner's heartbeat timestamp and sets status to online.
func (r *JobRepository) Heartbeat(ctx context.Context, runnerID string) error {
res, err := r.db.pool.Exec(ctx, `
UPDATE runners SET
status = 'online',
last_heartbeat = now(),
updated_at = now()
WHERE id = $1
`, runnerID)
if err != nil {
return fmt.Errorf("updating heartbeat: %w", err)
}
if res.RowsAffected() == 0 {
return fmt.Errorf("runner %s not found", runnerID)
}
return nil
}
// ListRunners returns all registered runners.
func (r *JobRepository) ListRunners(ctx context.Context) ([]*Runner, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, name, token_hash, token_prefix, tags, status,
last_heartbeat, last_job_id, metadata, created_at, updated_at
FROM runners ORDER BY name
`)
if err != nil {
return nil, fmt.Errorf("querying runners: %w", err)
}
defer rows.Close()
var runners []*Runner
for rows.Next() {
runner := &Runner{}
var metaJSON []byte
if err := rows.Scan(
&runner.ID, &runner.Name, &runner.TokenHash, &runner.TokenPrefix,
&runner.Tags, &runner.Status, &runner.LastHeartbeat, &runner.LastJobID,
&metaJSON, &runner.CreatedAt, &runner.UpdatedAt,
); err != nil {
return nil, fmt.Errorf("scanning runner: %w", err)
}
if metaJSON != nil {
if err := json.Unmarshal(metaJSON, &runner.Metadata); err != nil {
return nil, fmt.Errorf("unmarshaling runner metadata: %w", err)
}
}
runners = append(runners, runner)
}
return runners, rows.Err()
}
// DeleteRunner removes a runner by ID.
func (r *JobRepository) DeleteRunner(ctx context.Context, runnerID string) error {
res, err := r.db.pool.Exec(ctx, `DELETE FROM runners WHERE id = $1`, runnerID)
if err != nil {
return fmt.Errorf("deleting runner: %w", err)
}
if res.RowsAffected() == 0 {
return fmt.Errorf("runner %s not found", runnerID)
}
return nil
}
// ExpireStaleRunners marks runners with no recent heartbeat as offline.
func (r *JobRepository) ExpireStaleRunners(ctx context.Context, timeout time.Duration) (int64, error) {
result, err := r.db.pool.Exec(ctx, `
UPDATE runners SET status = 'offline', updated_at = now()
WHERE status = 'online'
AND last_heartbeat < now() - $1::interval
`, timeout.String())
if err != nil {
return 0, fmt.Errorf("expiring stale runners: %w", err)
}
return result.RowsAffected(), nil
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
func scanJobDefinitions(rows pgx.Rows) ([]*JobDefinitionRecord, error) {
var defs []*JobDefinitionRecord
for rows.Next() {
d := &JobDefinitionRecord{}
var defJSON []byte
if err := rows.Scan(
&d.ID, &d.Name, &d.Version, &d.TriggerType, &d.ScopeType, &d.ComputeType,
&d.RunnerTags, &d.TimeoutSeconds, &d.MaxRetries, &d.Priority,
&defJSON, &d.Enabled, &d.CreatedAt, &d.UpdatedAt,
); err != nil {
return nil, fmt.Errorf("scanning job definition: %w", err)
}
if defJSON != nil {
if err := json.Unmarshal(defJSON, &d.Definition); err != nil {
return nil, fmt.Errorf("unmarshaling definition: %w", err)
}
}
defs = append(defs, d)
}
return defs, rows.Err()
}
func scanJobs(rows pgx.Rows) ([]*Job, error) {
var jobs []*Job
for rows.Next() {
j := &Job{}
var scopeJSON, resultJSON []byte
if err := rows.Scan(
&j.ID, &j.JobDefinitionID, &j.DefinitionName, &j.Status, &j.Priority,
&j.ItemID, &j.ProjectID, &scopeJSON, &j.RunnerID, &j.RunnerTags,
&j.CreatedAt, &j.ClaimedAt, &j.StartedAt, &j.CompletedAt,
&j.TimeoutSeconds, &j.ExpiresAt, &j.Progress, &j.ProgressMessage,
&resultJSON, &j.ErrorMessage, &j.RetryCount, &j.MaxRetries,
&j.CreatedBy, &j.CancelledBy,
); err != nil {
return nil, fmt.Errorf("scanning job: %w", err)
}
if scopeJSON != nil {
if err := json.Unmarshal(scopeJSON, &j.ScopeMetadata); err != nil {
return nil, fmt.Errorf("unmarshaling scope metadata: %w", err)
}
}
if resultJSON != nil {
if err := json.Unmarshal(resultJSON, &j.Result); err != nil {
return nil, fmt.Errorf("unmarshaling result: %w", err)
}
}
jobs = append(jobs, j)
}
return jobs, rows.Err()
}

internal/db/locations.go Normal file

@@ -0,0 +1,230 @@
package db
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/jackc/pgx/v5"
)
// Location represents a location in the hierarchy.
type Location struct {
ID string
Path string
Name string
ParentID *string
LocationType string
Depth int
Metadata map[string]any
CreatedAt time.Time
}
// LocationRepository provides location database operations.
type LocationRepository struct {
db *DB
}
// NewLocationRepository creates a new location repository.
func NewLocationRepository(db *DB) *LocationRepository {
return &LocationRepository{db: db}
}
// List returns all locations ordered by path.
func (r *LocationRepository) List(ctx context.Context) ([]*Location, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, path, name, parent_id, location_type, depth, metadata, created_at
FROM locations
ORDER BY path
`)
if err != nil {
return nil, err
}
defer rows.Close()
return scanLocations(rows)
}
// GetByPath returns a location by its path.
func (r *LocationRepository) GetByPath(ctx context.Context, path string) (*Location, error) {
loc := &Location{}
var meta []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, path, name, parent_id, location_type, depth, metadata, created_at
FROM locations
WHERE path = $1
`, path).Scan(&loc.ID, &loc.Path, &loc.Name, &loc.ParentID, &loc.LocationType, &loc.Depth, &meta, &loc.CreatedAt)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
if meta != nil {
	if err := json.Unmarshal(meta, &loc.Metadata); err != nil {
		return nil, fmt.Errorf("unmarshaling location metadata: %w", err)
	}
}
return loc, nil
}
// GetByID returns a location by its ID.
func (r *LocationRepository) GetByID(ctx context.Context, id string) (*Location, error) {
loc := &Location{}
var meta []byte
err := r.db.pool.QueryRow(ctx, `
SELECT id, path, name, parent_id, location_type, depth, metadata, created_at
FROM locations
WHERE id = $1
`, id).Scan(&loc.ID, &loc.Path, &loc.Name, &loc.ParentID, &loc.LocationType, &loc.Depth, &meta, &loc.CreatedAt)
if err == pgx.ErrNoRows {
return nil, nil
}
if err != nil {
return nil, err
}
if meta != nil {
	if err := json.Unmarshal(meta, &loc.Metadata); err != nil {
		return nil, fmt.Errorf("unmarshaling location metadata: %w", err)
	}
}
return loc, nil
}
// GetChildren returns direct children of a location.
func (r *LocationRepository) GetChildren(ctx context.Context, parentID string) ([]*Location, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, path, name, parent_id, location_type, depth, metadata, created_at
FROM locations
WHERE parent_id = $1
ORDER BY path
`, parentID)
if err != nil {
return nil, err
}
defer rows.Close()
return scanLocations(rows)
}
// GetTree returns a location and all its descendants (by path prefix).
func (r *LocationRepository) GetTree(ctx context.Context, rootPath string) ([]*Location, error) {
rows, err := r.db.pool.Query(ctx, `
SELECT id, path, name, parent_id, location_type, depth, metadata, created_at
FROM locations
WHERE path = $1 OR path LIKE $2
ORDER BY path
`, rootPath, rootPath+"/%")
if err != nil {
return nil, err
}
defer rows.Close()
return scanLocations(rows)
}
// Create inserts a new location. Depth is derived from the path, and ParentID is resolved from the path when not explicitly set.
func (r *LocationRepository) Create(ctx context.Context, loc *Location) error {
// Auto-calculate depth from path segments
loc.Depth = strings.Count(loc.Path, "/")
// Resolve parent_id from path if not explicitly set
if loc.ParentID == nil && loc.Depth > 0 {
parentPath := loc.Path[:strings.LastIndex(loc.Path, "/")]
parent, err := r.GetByPath(ctx, parentPath)
if err != nil {
return fmt.Errorf("looking up parent %q: %w", parentPath, err)
}
if parent == nil {
return fmt.Errorf("parent location %q does not exist", parentPath)
}
loc.ParentID = &parent.ID
}
meta, err := json.Marshal(loc.Metadata)
if err != nil {
return fmt.Errorf("marshaling metadata: %w", err)
}
return r.db.pool.QueryRow(ctx, `
INSERT INTO locations (path, name, parent_id, location_type, depth, metadata)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id, created_at
`, loc.Path, loc.Name, loc.ParentID, loc.LocationType, loc.Depth, meta).Scan(&loc.ID, &loc.CreatedAt)
}
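Create derives both depth and parent path purely from the location path: depth is the number of "/" separators, and the parent path is everything before the last one. That derivation in isolation (an illustrative helper, not a repository method):

```go
package main

import (
	"fmt"
	"strings"
)

// pathInfo mirrors LocationRepository.Create: depth counts the "/"
// separators, and the parent path is the prefix before the last "/".
// Root-level locations (depth 0) have no parent.
func pathInfo(path string) (depth int, parentPath string, hasParent bool) {
	depth = strings.Count(path, "/")
	if depth == 0 {
		return depth, "", false
	}
	return depth, path[:strings.LastIndex(path, "/")], true
}

func main() {
	d, p, ok := pathInfo("warehouse/shelf-a/bin-3")
	fmt.Println(d, p, ok) // 2 warehouse/shelf-a true
}
```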
// Update updates a location's name, type, and metadata.
func (r *LocationRepository) Update(ctx context.Context, path string, name, locationType string, metadata map[string]any) error {
meta, err := json.Marshal(metadata)
if err != nil {
return fmt.Errorf("marshaling metadata: %w", err)
}
tag, err := r.db.pool.Exec(ctx, `
UPDATE locations
SET name = $2, location_type = $3, metadata = $4
WHERE path = $1
`, path, name, locationType, meta)
if err != nil {
return err
}
if tag.RowsAffected() == 0 {
return fmt.Errorf("location %q not found", path)
}
return nil
}
// Delete removes a location and all of its descendants. Returns an error if inventory rows reference the location itself.
func (r *LocationRepository) Delete(ctx context.Context, path string) error {
// Check for inventory references
var count int
err := r.db.pool.QueryRow(ctx, `
SELECT COUNT(*) FROM inventory
WHERE location_id = (SELECT id FROM locations WHERE path = $1)
`, path).Scan(&count)
if err != nil {
return err
}
if count > 0 {
return fmt.Errorf("cannot delete location %q: %d inventory record(s) exist", path, count)
}
// Delete all descendants first (cascade by path prefix)
_, err = r.db.pool.Exec(ctx, `
DELETE FROM locations
WHERE path LIKE $1
`, path+"/%")
if err != nil {
return err
}
tag, err := r.db.pool.Exec(ctx, `DELETE FROM locations WHERE path = $1`, path)
if err != nil {
return err
}
if tag.RowsAffected() == 0 {
return fmt.Errorf("location %q not found", path)
}
return nil
}
// HasInventory reports whether a location or any of its descendants has inventory records.
func (r *LocationRepository) HasInventory(ctx context.Context, path string) (bool, error) {
var count int
err := r.db.pool.QueryRow(ctx, `
SELECT COUNT(*) FROM inventory i
JOIN locations l ON l.id = i.location_id
WHERE l.path = $1 OR l.path LIKE $2
`, path, path+"/%").Scan(&count)
return count > 0, err
}
func scanLocations(rows pgx.Rows) ([]*Location, error) {
var locs []*Location
for rows.Next() {
loc := &Location{}
var meta []byte
if err := rows.Scan(&loc.ID, &loc.Path, &loc.Name, &loc.ParentID, &loc.LocationType, &loc.Depth, &meta, &loc.CreatedAt); err != nil {
return nil, err
}
if meta != nil {
	if err := json.Unmarshal(meta, &loc.Metadata); err != nil {
		return nil, fmt.Errorf("unmarshaling location metadata: %w", err)
	}
}
locs = append(locs, loc)
}
return locs, rows.Err()
}
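GetTree returns a flat, path-ordered slice, and `ORDER BY path` guarantees a parent sorts before its descendants because a parent path is always a strict prefix of its children's paths. That makes nesting a single pass with a lookup map. A sketch under that assumption (the `node` type is illustrative, not a repository type):

```go
package main

import (
	"fmt"
	"strings"
)

// node is an illustrative tree shape, not a repository type.
type node struct {
	Path     string
	Children []*node
}

// buildTree nests a path-ordered flat list (as GetTree returns it).
// Since every parent precedes its children in path order, a parent
// is always in the map by the time a child looks it up.
func buildTree(paths []string) []*node {
	byPath := map[string]*node{}
	var roots []*node
	for _, p := range paths {
		n := &node{Path: p}
		byPath[p] = n
		if i := strings.LastIndex(p, "/"); i >= 0 {
			if parent, ok := byPath[p[:i]]; ok {
				parent.Children = append(parent.Children, n)
				continue
			}
		}
		roots = append(roots, n)
	}
	return roots
}

func main() {
	roots := buildTree([]string{"warehouse", "warehouse/shelf-a", "warehouse/shelf-a/bin-1"})
	fmt.Println(len(roots), len(roots[0].Children)) // 1 1
}
```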

internal/db/settings.go Normal file

@@ -0,0 +1,105 @@
package db
import (
"context"
"encoding/json"
"fmt"
)
// SettingsRepository provides access to module_state and settings_overrides tables.
type SettingsRepository struct {
db *DB
}
// NewSettingsRepository creates a new SettingsRepository.
func NewSettingsRepository(db *DB) *SettingsRepository {
return &SettingsRepository{db: db}
}
// GetModuleStates returns all module enabled/disabled states from the database.
func (r *SettingsRepository) GetModuleStates(ctx context.Context) (map[string]bool, error) {
rows, err := r.db.pool.Query(ctx,
`SELECT module_id, enabled FROM module_state`)
if err != nil {
return nil, fmt.Errorf("querying module states: %w", err)
}
defer rows.Close()
states := make(map[string]bool)
for rows.Next() {
var id string
var enabled bool
if err := rows.Scan(&id, &enabled); err != nil {
return nil, fmt.Errorf("scanning module state: %w", err)
}
states[id] = enabled
}
return states, rows.Err()
}
// SetModuleState persists a module's enabled state. Uses upsert semantics.
func (r *SettingsRepository) SetModuleState(ctx context.Context, moduleID string, enabled bool, updatedBy string) error {
_, err := r.db.pool.Exec(ctx,
`INSERT INTO module_state (module_id, enabled, updated_by, updated_at)
VALUES ($1, $2, $3, now())
ON CONFLICT (module_id) DO UPDATE
SET enabled = EXCLUDED.enabled,
updated_by = EXCLUDED.updated_by,
updated_at = now()`,
moduleID, enabled, updatedBy)
if err != nil {
return fmt.Errorf("setting module state: %w", err)
}
return nil
}
// GetOverrides returns all settings overrides from the database.
func (r *SettingsRepository) GetOverrides(ctx context.Context) (map[string]json.RawMessage, error) {
rows, err := r.db.pool.Query(ctx,
`SELECT key, value FROM settings_overrides`)
if err != nil {
return nil, fmt.Errorf("querying settings overrides: %w", err)
}
defer rows.Close()
overrides := make(map[string]json.RawMessage)
for rows.Next() {
var key string
var value json.RawMessage
if err := rows.Scan(&key, &value); err != nil {
return nil, fmt.Errorf("scanning settings override: %w", err)
}
overrides[key] = value
}
return overrides, rows.Err()
}
// SetOverride persists a settings override. Uses upsert semantics.
func (r *SettingsRepository) SetOverride(ctx context.Context, key string, value any, updatedBy string) error {
jsonVal, err := json.Marshal(value)
if err != nil {
return fmt.Errorf("marshaling override value: %w", err)
}
_, err = r.db.pool.Exec(ctx,
`INSERT INTO settings_overrides (key, value, updated_by, updated_at)
VALUES ($1, $2, $3, now())
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value,
updated_by = EXCLUDED.updated_by,
updated_at = now()`,
key, jsonVal, updatedBy)
if err != nil {
return fmt.Errorf("setting override: %w", err)
}
return nil
}
// DeleteOverride removes a settings override.
func (r *SettingsRepository) DeleteOverride(ctx context.Context, key string) error {
_, err := r.db.pool.Exec(ctx,
`DELETE FROM settings_overrides WHERE key = $1`, key)
if err != nil {
return fmt.Errorf("deleting override: %w", err)
}
return nil
}

internal/jobdef/jobdef.go Normal file

@@ -0,0 +1,166 @@
// Package jobdef handles YAML job definition parsing and validation.
package jobdef
import (
"fmt"
"os"
"path/filepath"
"strings"
"gopkg.in/yaml.v3"
)
// Definition represents a compute job definition loaded from YAML.
type Definition struct {
Name string `yaml:"name" json:"name"`
Version int `yaml:"version" json:"version"`
Description string `yaml:"description" json:"description"`
Trigger TriggerConfig `yaml:"trigger" json:"trigger"`
Scope ScopeConfig `yaml:"scope" json:"scope"`
Compute ComputeConfig `yaml:"compute" json:"compute"`
Runner RunnerConfig `yaml:"runner" json:"runner"`
Timeout int `yaml:"timeout" json:"timeout"`
MaxRetries int `yaml:"max_retries" json:"max_retries"`
Priority int `yaml:"priority" json:"priority"`
}
// TriggerConfig describes when a job is created.
type TriggerConfig struct {
Type string `yaml:"type" json:"type"`
Filter map[string]string `yaml:"filter,omitempty" json:"filter,omitempty"`
}
// ScopeConfig describes what a job operates on.
type ScopeConfig struct {
Type string `yaml:"type" json:"type"`
}
// ComputeConfig describes the computation to perform.
type ComputeConfig struct {
Type string `yaml:"type" json:"type"`
Command string `yaml:"command" json:"command"`
Args map[string]any `yaml:"args,omitempty" json:"args,omitempty"`
}
// RunnerConfig describes runner requirements.
type RunnerConfig struct {
Tags []string `yaml:"tags" json:"tags"`
}
// DefinitionFile wraps a definition for YAML parsing.
type DefinitionFile struct {
Job Definition `yaml:"job"`
}
var validTriggerTypes = map[string]bool{
"revision_created": true,
"bom_changed": true,
"manual": true,
"schedule": true,
}
var validScopeTypes = map[string]bool{
"item": true,
"assembly": true,
"project": true,
}
var validComputeTypes = map[string]bool{
"validate": true,
"rebuild": true,
"diff": true,
"export": true,
"custom": true,
}
// Load reads a job definition from a YAML file.
func Load(path string) (*Definition, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("reading job definition file: %w", err)
}
var df DefinitionFile
if err := yaml.Unmarshal(data, &df); err != nil {
return nil, fmt.Errorf("parsing job definition YAML: %w", err)
}
def := &df.Job
// Apply defaults
if def.Timeout <= 0 {
def.Timeout = 600
}
if def.MaxRetries <= 0 {
def.MaxRetries = 1
}
if def.Priority <= 0 {
def.Priority = 100
}
if def.Version <= 0 {
def.Version = 1
}
if err := def.Validate(); err != nil {
return nil, fmt.Errorf("validating %s: %w", path, err)
}
return def, nil
}
// LoadAll reads all job definitions from a directory.
func LoadAll(dir string) (map[string]*Definition, error) {
defs := make(map[string]*Definition)
entries, err := os.ReadDir(dir)
if err != nil {
return nil, fmt.Errorf("reading job definitions directory: %w", err)
}
for _, entry := range entries {
if entry.IsDir() {
continue
}
if !strings.HasSuffix(entry.Name(), ".yaml") && !strings.HasSuffix(entry.Name(), ".yml") {
continue
}
path := filepath.Join(dir, entry.Name())
def, err := Load(path)
if err != nil {
return nil, fmt.Errorf("loading %s: %w", entry.Name(), err)
}
defs[def.Name] = def
}
return defs, nil
}
// Validate checks that the definition is well-formed.
func (d *Definition) Validate() error {
if d.Name == "" {
return fmt.Errorf("job definition name is required")
}
if d.Trigger.Type == "" {
return fmt.Errorf("trigger type is required")
}
if !validTriggerTypes[d.Trigger.Type] {
return fmt.Errorf("invalid trigger type %q", d.Trigger.Type)
}
if d.Scope.Type == "" {
return fmt.Errorf("scope type is required")
}
if !validScopeTypes[d.Scope.Type] {
return fmt.Errorf("invalid scope type %q", d.Scope.Type)
}
if d.Compute.Type == "" {
return fmt.Errorf("compute type is required")
}
if !validComputeTypes[d.Compute.Type] {
return fmt.Errorf("invalid compute type %q", d.Compute.Type)
}
if d.Compute.Command == "" {
return fmt.Errorf("compute command is required")
}
return nil
}
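A definition that passes the validation above looks like the following (a hypothetical sketch: field names follow the struct tags in `Definition`, but the command and tag values are illustrative, not real runner commands):

```yaml
job:
  name: export-step
  version: 1
  description: "Export STEP files when a revision is created"
  trigger:
    type: revision_created
    filter:
      item_type: part        # optional; matched by the scheduler
  scope:
    type: item
  compute:
    type: export
    command: create-export   # illustrative command name
    args:
      format: step
  runner:
    tags: [create]
  timeout: 300               # seconds; defaults to 600 when omitted
```

Omitted `timeout`, `max_retries`, `priority`, and `version` fall back to the defaults applied in `Load` (600, 1, 100, and 1 respectively).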


@@ -0,0 +1,328 @@
package jobdef
import (
"os"
"path/filepath"
"testing"
)
func TestLoadValid(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: test-job
version: 1
description: "A test job"
trigger:
type: manual
scope:
type: item
compute:
type: validate
command: create-validate
runner:
tags: [create]
timeout: 300
max_retries: 2
priority: 50
`
path := filepath.Join(dir, "test-job.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
def, err := Load(path)
if err != nil {
t.Fatalf("Load: %v", err)
}
if def.Name != "test-job" {
t.Errorf("name = %q, want %q", def.Name, "test-job")
}
if def.Version != 1 {
t.Errorf("version = %d, want 1", def.Version)
}
if def.Trigger.Type != "manual" {
t.Errorf("trigger type = %q, want %q", def.Trigger.Type, "manual")
}
if def.Scope.Type != "item" {
t.Errorf("scope type = %q, want %q", def.Scope.Type, "item")
}
if def.Compute.Type != "validate" {
t.Errorf("compute type = %q, want %q", def.Compute.Type, "validate")
}
if def.Compute.Command != "create-validate" {
t.Errorf("compute command = %q, want %q", def.Compute.Command, "create-validate")
}
if len(def.Runner.Tags) != 1 || def.Runner.Tags[0] != "create" {
t.Errorf("runner tags = %v, want [create]", def.Runner.Tags)
}
if def.Timeout != 300 {
t.Errorf("timeout = %d, want 300", def.Timeout)
}
if def.MaxRetries != 2 {
t.Errorf("max_retries = %d, want 2", def.MaxRetries)
}
if def.Priority != 50 {
t.Errorf("priority = %d, want 50", def.Priority)
}
}
func TestLoadDefaults(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: minimal
trigger:
type: manual
scope:
type: item
compute:
type: custom
command: do-something
`
path := filepath.Join(dir, "minimal.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
def, err := Load(path)
if err != nil {
t.Fatalf("Load: %v", err)
}
if def.Timeout != 600 {
t.Errorf("default timeout = %d, want 600", def.Timeout)
}
if def.MaxRetries != 1 {
t.Errorf("default max_retries = %d, want 1", def.MaxRetries)
}
if def.Priority != 100 {
t.Errorf("default priority = %d, want 100", def.Priority)
}
if def.Version != 1 {
t.Errorf("default version = %d, want 1", def.Version)
}
}
func TestLoadInvalidTriggerType(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: bad-trigger
trigger:
type: invalid_trigger
scope:
type: item
compute:
type: validate
command: create-validate
`
path := filepath.Join(dir, "bad.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
_, err := Load(path)
if err == nil {
t.Fatal("expected error for invalid trigger type")
}
}
func TestLoadMissingName(t *testing.T) {
dir := t.TempDir()
content := `
job:
trigger:
type: manual
scope:
type: item
compute:
type: validate
command: create-validate
`
path := filepath.Join(dir, "no-name.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
_, err := Load(path)
if err == nil {
t.Fatal("expected error for missing name")
}
}
func TestLoadMissingCommand(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: no-command
trigger:
type: manual
scope:
type: item
compute:
type: validate
`
path := filepath.Join(dir, "no-cmd.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
_, err := Load(path)
if err == nil {
t.Fatal("expected error for missing command")
}
}
func TestLoadAllDirectory(t *testing.T) {
dir := t.TempDir()
job1 := `
job:
name: job-one
trigger:
type: manual
scope:
type: item
compute:
type: validate
command: create-validate
`
job2 := `
job:
name: job-two
trigger:
type: revision_created
scope:
type: assembly
compute:
type: export
command: create-export
`
if err := os.WriteFile(filepath.Join(dir, "one.yaml"), []byte(job1), 0644); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(dir, "two.yml"), []byte(job2), 0644); err != nil {
t.Fatal(err)
}
// Non-YAML file should be ignored
if err := os.WriteFile(filepath.Join(dir, "readme.txt"), []byte("ignore me"), 0644); err != nil {
t.Fatal(err)
}
defs, err := LoadAll(dir)
if err != nil {
t.Fatalf("LoadAll: %v", err)
}
if len(defs) != 2 {
t.Fatalf("loaded %d definitions, want 2", len(defs))
}
if _, ok := defs["job-one"]; !ok {
t.Error("job-one not found")
}
if _, ok := defs["job-two"]; !ok {
t.Error("job-two not found")
}
}
func TestLoadAllEmptyDirectory(t *testing.T) {
dir := t.TempDir()
defs, err := LoadAll(dir)
if err != nil {
t.Fatalf("LoadAll: %v", err)
}
if len(defs) != 0 {
t.Errorf("loaded %d definitions from empty dir, want 0", len(defs))
}
}
func TestLoadWithFilter(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: filtered-job
trigger:
type: revision_created
filter:
item_type: assembly
scope:
type: assembly
compute:
type: validate
command: create-validate
`
path := filepath.Join(dir, "filtered.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
def, err := Load(path)
if err != nil {
t.Fatalf("Load: %v", err)
}
if def.Trigger.Filter["item_type"] != "assembly" {
t.Errorf("filter item_type = %q, want %q", def.Trigger.Filter["item_type"], "assembly")
}
}
func TestLoadWithArgs(t *testing.T) {
dir := t.TempDir()
content := `
job:
name: args-job
trigger:
type: manual
scope:
type: item
compute:
type: export
command: create-export
args:
format: step
include_mesh: true
`
path := filepath.Join(dir, "args.yaml")
if err := os.WriteFile(path, []byte(content), 0644); err != nil {
t.Fatalf("writing test file: %v", err)
}
def, err := Load(path)
if err != nil {
t.Fatalf("Load: %v", err)
}
if def.Compute.Args["format"] != "step" {
t.Errorf("args format = %v, want %q", def.Compute.Args["format"], "step")
}
if def.Compute.Args["include_mesh"] != true {
t.Errorf("args include_mesh = %v, want true", def.Compute.Args["include_mesh"])
}
}
func TestValidateInvalidScopeType(t *testing.T) {
d := &Definition{
Name: "test",
Trigger: TriggerConfig{Type: "manual"},
Scope: ScopeConfig{Type: "galaxy"},
Compute: ComputeConfig{Type: "validate", Command: "create-validate"},
}
if err := d.Validate(); err == nil {
t.Fatal("expected error for invalid scope type")
}
}
func TestValidateInvalidComputeType(t *testing.T) {
d := &Definition{
Name: "test",
Trigger: TriggerConfig{Type: "manual"},
Scope: ScopeConfig{Type: "item"},
Compute: ComputeConfig{Type: "teleport", Command: "beam-up"},
}
if err := d.Validate(); err == nil {
t.Fatal("expected error for invalid compute type")
}
}

internal/kc/kc.go

@@ -0,0 +1,148 @@
// Package kc extracts and parses the silo/ metadata directory from .kc files.
//
// A .kc file is a ZIP archive (superset of .fcstd) that contains a silo/
// directory with JSON metadata entries. This package handles extraction and
// packing — no database or HTTP dependencies.
package kc
import (
"archive/zip"
"bytes"
"encoding/json"
"fmt"
"io"
"strings"
)
// Manifest represents the contents of silo/manifest.json.
type Manifest struct {
UUID string `json:"uuid"`
KCVersion string `json:"kc_version"`
RevisionHash string `json:"revision_hash"`
SiloInstance string `json:"silo_instance"`
}
// Metadata represents the contents of silo/metadata.json.
type Metadata struct {
SchemaName string `json:"schema_name"`
Tags []string `json:"tags"`
LifecycleState string `json:"lifecycle_state"`
Fields map[string]any `json:"fields"`
}
// Dependency represents one entry in silo/dependencies.json.
type Dependency struct {
UUID string `json:"uuid"`
PartNumber string `json:"part_number"`
Revision int `json:"revision"`
Quantity float64 `json:"quantity"`
Label string `json:"label"`
Relationship string `json:"relationship"`
}
// ExtractResult holds the parsed silo/ directory contents from a .kc file.
type ExtractResult struct {
Manifest *Manifest
Metadata *Metadata
Dependencies []Dependency
}
// HistoryEntry represents one entry in silo/history.json.
type HistoryEntry struct {
RevisionNumber int `json:"revision_number"`
CreatedAt string `json:"created_at"`
CreatedBy *string `json:"created_by,omitempty"`
Comment *string `json:"comment,omitempty"`
Status string `json:"status"`
Labels []string `json:"labels"`
}
// PackInput holds all the data needed to repack silo/ entries in a .kc file.
// Each field is optional — nil/empty means the entry is omitted from the ZIP.
type PackInput struct {
Manifest *Manifest
Metadata *Metadata
History []HistoryEntry
Dependencies []Dependency
}
// Extract opens a ZIP archive from data and parses the silo/ directory.
// Returns nil, nil if no silo/ directory is found (plain .fcstd file).
// Returns nil, error if silo/ entries exist but fail to parse.
func Extract(data []byte) (*ExtractResult, error) {
r, err := zip.NewReader(bytes.NewReader(data), int64(len(data)))
if err != nil {
return nil, fmt.Errorf("kc: open zip: %w", err)
}
var manifestFile, metadataFile, dependenciesFile *zip.File
hasSiloDir := false
for _, f := range r.File {
if strings.HasPrefix(f.Name, "silo/") {
hasSiloDir = true
}
switch f.Name {
case "silo/manifest.json":
manifestFile = f
case "silo/metadata.json":
metadataFile = f
case "silo/dependencies.json":
dependenciesFile = f
}
}
if !hasSiloDir {
return nil, nil // plain .fcstd, no extraction
}
result := &ExtractResult{}
if manifestFile != nil {
m, err := readJSON[Manifest](manifestFile)
if err != nil {
return nil, fmt.Errorf("kc: parse manifest.json: %w", err)
}
result.Manifest = m
}
if metadataFile != nil {
m, err := readJSON[Metadata](metadataFile)
if err != nil {
return nil, fmt.Errorf("kc: parse metadata.json: %w", err)
}
result.Metadata = m
}
if dependenciesFile != nil {
deps, err := readJSON[[]Dependency](dependenciesFile)
if err != nil {
return nil, fmt.Errorf("kc: parse dependencies.json: %w", err)
}
if deps != nil {
result.Dependencies = *deps
}
}
return result, nil
}
// readJSON opens a zip.File and decodes its contents as JSON into T.
func readJSON[T any](f *zip.File) (*T, error) {
rc, err := f.Open()
if err != nil {
return nil, err
}
defer rc.Close()
data, err := io.ReadAll(rc)
if err != nil {
return nil, err
}
var v T
if err := json.Unmarshal(data, &v); err != nil {
return nil, err
}
return &v, nil
}

internal/kc/kc_test.go

@@ -0,0 +1,188 @@
package kc
import (
"archive/zip"
"bytes"
"encoding/json"
"testing"
)
// buildZip creates a ZIP archive in memory from a map of filename → content.
func buildZip(t *testing.T, files map[string][]byte) []byte {
t.Helper()
var buf bytes.Buffer
w := zip.NewWriter(&buf)
for name, content := range files {
f, err := w.Create(name)
if err != nil {
t.Fatalf("creating zip entry %s: %v", name, err)
}
if _, err := f.Write(content); err != nil {
t.Fatalf("writing zip entry %s: %v", name, err)
}
}
if err := w.Close(); err != nil {
t.Fatalf("closing zip: %v", err)
}
return buf.Bytes()
}
func mustJSON(t *testing.T, v any) []byte {
t.Helper()
data, err := json.Marshal(v)
if err != nil {
t.Fatalf("marshaling JSON: %v", err)
}
return data
}
func TestExtract_PlainFCStd(t *testing.T) {
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"thumbnails/a.png": []byte("png"),
})
result, err := Extract(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if result != nil {
t.Fatalf("expected nil result for plain .fcstd, got %+v", result)
}
}
func TestExtract_ValidKC(t *testing.T) {
manifest := Manifest{
UUID: "550e8400-e29b-41d4-a716-446655440000",
KCVersion: "1.0",
RevisionHash: "abc123",
SiloInstance: "https://silo.example.com",
}
metadata := Metadata{
SchemaName: "mechanical-part-v2",
Tags: []string{"structural", "aluminum"},
LifecycleState: "draft",
Fields: map[string]any{
"material": "6061-T6",
"weight_kg": 0.34,
},
}
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"silo/manifest.json": mustJSON(t, manifest),
"silo/metadata.json": mustJSON(t, metadata),
})
result, err := Extract(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if result == nil {
t.Fatal("expected non-nil result")
}
if result.Manifest == nil {
t.Fatal("expected manifest")
}
if result.Manifest.UUID != manifest.UUID {
t.Errorf("manifest UUID = %q, want %q", result.Manifest.UUID, manifest.UUID)
}
if result.Manifest.KCVersion != manifest.KCVersion {
t.Errorf("manifest KCVersion = %q, want %q", result.Manifest.KCVersion, manifest.KCVersion)
}
if result.Manifest.RevisionHash != manifest.RevisionHash {
t.Errorf("manifest RevisionHash = %q, want %q", result.Manifest.RevisionHash, manifest.RevisionHash)
}
if result.Manifest.SiloInstance != manifest.SiloInstance {
t.Errorf("manifest SiloInstance = %q, want %q", result.Manifest.SiloInstance, manifest.SiloInstance)
}
if result.Metadata == nil {
t.Fatal("expected metadata")
}
if result.Metadata.SchemaName != metadata.SchemaName {
t.Errorf("metadata SchemaName = %q, want %q", result.Metadata.SchemaName, metadata.SchemaName)
}
if result.Metadata.LifecycleState != metadata.LifecycleState {
t.Errorf("metadata LifecycleState = %q, want %q", result.Metadata.LifecycleState, metadata.LifecycleState)
}
if len(result.Metadata.Tags) != 2 {
t.Errorf("metadata Tags len = %d, want 2", len(result.Metadata.Tags))
}
if result.Metadata.Fields["material"] != "6061-T6" {
t.Errorf("metadata Fields[material] = %v, want 6061-T6", result.Metadata.Fields["material"])
}
}
func TestExtract_ManifestOnly(t *testing.T) {
manifest := Manifest{
UUID: "550e8400-e29b-41d4-a716-446655440000",
KCVersion: "1.0",
}
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"silo/manifest.json": mustJSON(t, manifest),
})
result, err := Extract(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if result == nil {
t.Fatal("expected non-nil result")
}
if result.Manifest == nil {
t.Fatal("expected manifest")
}
if result.Metadata != nil {
t.Errorf("expected nil metadata, got %+v", result.Metadata)
}
}
func TestExtract_InvalidJSON(t *testing.T) {
data := buildZip(t, map[string][]byte{
"silo/manifest.json": []byte("{not valid json"),
})
result, err := Extract(data)
if err == nil {
t.Fatal("expected error for invalid JSON")
}
if result != nil {
t.Errorf("expected nil result on error, got %+v", result)
}
}
func TestExtract_NotAZip(t *testing.T) {
result, err := Extract([]byte("this is not a zip file"))
if err == nil {
t.Fatal("expected error for non-ZIP data")
}
if result != nil {
t.Errorf("expected nil result on error, got %+v", result)
}
}
func TestExtract_EmptySiloDir(t *testing.T) {
// silo/ directory entry exists but no manifest or metadata files
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"silo/": {},
})
result, err := Extract(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if result == nil {
t.Fatal("expected non-nil result for silo/ dir")
}
if result.Manifest != nil {
t.Errorf("expected nil manifest, got %+v", result.Manifest)
}
if result.Metadata != nil {
t.Errorf("expected nil metadata, got %+v", result.Metadata)
}
}

internal/kc/pack.go

@@ -0,0 +1,131 @@
package kc
import (
"archive/zip"
"bytes"
"encoding/json"
"fmt"
"io"
"strings"
)
// HasSiloDir opens a ZIP archive and returns true if any entry starts with "silo/".
// This is a lightweight check used to short-circuit before gathering DB data.
func HasSiloDir(data []byte) (bool, error) {
r, err := zip.NewReader(bytes.NewReader(data), int64(len(data)))
if err != nil {
return false, fmt.Errorf("kc: open zip: %w", err)
}
for _, f := range r.File {
if strings.HasPrefix(f.Name, "silo/") {
return true, nil
}
}
return false, nil
}
// Pack takes original ZIP file bytes and a PackInput, and returns new ZIP bytes
// with all silo/ entries replaced by the data from input. Non-silo entries
// (FreeCAD Document.xml, thumbnails, etc.) are copied verbatim with their
// original compression method and timestamps preserved.
//
// If the original ZIP contains no silo/ directory, the original bytes are
// returned unchanged (plain .fcstd pass-through).
func Pack(original []byte, input *PackInput) ([]byte, error) {
r, err := zip.NewReader(bytes.NewReader(original), int64(len(original)))
if err != nil {
return nil, fmt.Errorf("kc: open zip: %w", err)
}
// Check whether the archive contains any silo/ entries.
hasSilo := false
for _, f := range r.File {
if strings.HasPrefix(f.Name, "silo/") {
hasSilo = true
break
}
}
if !hasSilo {
return original, nil // plain .fcstd, no repacking needed
}
var buf bytes.Buffer
zw := zip.NewWriter(&buf)
// Copy all non-silo entries verbatim.
for _, f := range r.File {
if strings.HasPrefix(f.Name, "silo/") {
continue
}
if err := copyZipEntry(zw, f); err != nil {
return nil, fmt.Errorf("kc: copying entry %s: %w", f.Name, err)
}
}
// Write new silo/ entries from PackInput.
if input.Manifest != nil {
if err := writeJSONEntry(zw, "silo/manifest.json", input.Manifest); err != nil {
return nil, fmt.Errorf("kc: writing manifest.json: %w", err)
}
}
if input.Metadata != nil {
if err := writeJSONEntry(zw, "silo/metadata.json", input.Metadata); err != nil {
return nil, fmt.Errorf("kc: writing metadata.json: %w", err)
}
}
if input.History != nil {
if err := writeJSONEntry(zw, "silo/history.json", input.History); err != nil {
return nil, fmt.Errorf("kc: writing history.json: %w", err)
}
}
if input.Dependencies != nil {
if err := writeJSONEntry(zw, "silo/dependencies.json", input.Dependencies); err != nil {
return nil, fmt.Errorf("kc: writing dependencies.json: %w", err)
}
}
if err := zw.Close(); err != nil {
return nil, fmt.Errorf("kc: closing zip writer: %w", err)
}
return buf.Bytes(), nil
}
// copyZipEntry copies a single entry from the original ZIP to the new writer,
// preserving the file header (compression method, timestamps, etc.).
func copyZipEntry(zw *zip.Writer, f *zip.File) error {
header := f.FileHeader
w, err := zw.CreateHeader(&header)
if err != nil {
return err
}
rc, err := f.Open()
if err != nil {
return err
}
defer rc.Close()
_, err = io.Copy(w, rc)
return err
}
// writeJSONEntry writes a new silo/ entry as JSON with Deflate compression.
func writeJSONEntry(zw *zip.Writer, name string, v any) error {
data, err := json.MarshalIndent(v, "", " ")
if err != nil {
return err
}
header := &zip.FileHeader{
Name: name,
Method: zip.Deflate,
}
w, err := zw.CreateHeader(header)
if err != nil {
return err
}
_, err = w.Write(data)
return err
}

internal/kc/pack_test.go

@@ -0,0 +1,229 @@
package kc
import (
"archive/zip"
"bytes"
"io"
"testing"
)
func TestHasSiloDir_PlainFCStd(t *testing.T) {
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
})
has, err := HasSiloDir(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if has {
t.Fatal("expected false for plain .fcstd")
}
}
func TestHasSiloDir_KC(t *testing.T) {
data := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"silo/manifest.json": []byte("{}"),
})
has, err := HasSiloDir(data)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !has {
t.Fatal("expected true for .kc with silo/ dir")
}
}
func TestHasSiloDir_NotAZip(t *testing.T) {
_, err := HasSiloDir([]byte("not a zip"))
if err == nil {
t.Fatal("expected error for non-ZIP data")
}
}
func TestPack_PlainFCStd_Passthrough(t *testing.T) {
original := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"thumbnails/a.png": []byte("png-data"),
})
result, err := Pack(original, &PackInput{
Manifest: &Manifest{UUID: "test"},
})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !bytes.Equal(result, original) {
t.Fatal("expected original bytes returned unchanged for plain .fcstd")
}
}
func TestPack_RoundTrip(t *testing.T) {
// Build a .kc with old silo/ data
oldManifest := Manifest{UUID: "old-uuid", KCVersion: "0.9", RevisionHash: "old-hash"}
oldMetadata := Metadata{SchemaName: "old-schema", Tags: []string{"old"}, LifecycleState: "draft"}
original := buildZip(t, map[string][]byte{
"Document.xml": []byte("<freecad/>"),
"thumbnails/t.png": []byte("thumb-data"),
"silo/manifest.json": mustJSON(t, oldManifest),
"silo/metadata.json": mustJSON(t, oldMetadata),
})
// Pack with new data
newManifest := &Manifest{UUID: "new-uuid", KCVersion: "1.0", RevisionHash: "new-hash", SiloInstance: "https://silo.test"}
newMetadata := &Metadata{SchemaName: "mechanical-part-v2", Tags: []string{"aluminum", "structural"}, LifecycleState: "review", Fields: map[string]any{"material": "7075-T6"}}
comment := "initial commit"
history := []HistoryEntry{
{RevisionNumber: 1, CreatedAt: "2026-01-01T00:00:00Z", Comment: &comment, Status: "draft", Labels: []string{}},
}
packed, err := Pack(original, &PackInput{
Manifest: newManifest,
Metadata: newMetadata,
History: history,
Dependencies: []Dependency{},
})
if err != nil {
t.Fatalf("Pack error: %v", err)
}
// Extract and verify new silo/ data
result, err := Extract(packed)
if err != nil {
t.Fatalf("Extract error: %v", err)
}
if result == nil {
t.Fatal("expected non-nil extract result")
}
if result.Manifest.UUID != "new-uuid" {
t.Errorf("manifest UUID = %q, want %q", result.Manifest.UUID, "new-uuid")
}
if result.Manifest.KCVersion != "1.0" {
t.Errorf("manifest KCVersion = %q, want %q", result.Manifest.KCVersion, "1.0")
}
if result.Manifest.SiloInstance != "https://silo.test" {
t.Errorf("manifest SiloInstance = %q, want %q", result.Manifest.SiloInstance, "https://silo.test")
}
if result.Metadata.SchemaName != "mechanical-part-v2" {
t.Errorf("metadata SchemaName = %q, want %q", result.Metadata.SchemaName, "mechanical-part-v2")
}
if result.Metadata.LifecycleState != "review" {
t.Errorf("metadata LifecycleState = %q, want %q", result.Metadata.LifecycleState, "review")
}
if len(result.Metadata.Tags) != 2 {
t.Errorf("metadata Tags len = %d, want 2", len(result.Metadata.Tags))
}
if result.Metadata.Fields["material"] != "7075-T6" {
t.Errorf("metadata Fields[material] = %v, want 7075-T6", result.Metadata.Fields["material"])
}
// Verify non-silo entries are preserved
r, err := zip.NewReader(bytes.NewReader(packed), int64(len(packed)))
if err != nil {
t.Fatalf("opening packed ZIP: %v", err)
}
entryMap := make(map[string]bool)
for _, f := range r.File {
entryMap[f.Name] = true
}
if !entryMap["Document.xml"] {
t.Error("Document.xml missing from packed ZIP")
}
if !entryMap["thumbnails/t.png"] {
t.Error("thumbnails/t.png missing from packed ZIP")
}
// Verify non-silo content is byte-identical
for _, f := range r.File {
if f.Name == "Document.xml" {
content := readZipEntry(t, f)
if string(content) != "<freecad/>" {
t.Errorf("Document.xml content = %q, want %q", content, "<freecad/>")
}
}
if f.Name == "thumbnails/t.png" {
content := readZipEntry(t, f)
if string(content) != "thumb-data" {
t.Errorf("thumbnails/t.png content = %q, want %q", content, "thumb-data")
}
}
}
}
func TestPack_NilFields(t *testing.T) {
original := buildZip(t, map[string][]byte{
"Document.xml": []byte("<xml/>"),
"silo/manifest.json": []byte(`{"uuid":"x"}`),
})
// Pack with only manifest, nil metadata/history/deps
packed, err := Pack(original, &PackInput{
Manifest: &Manifest{UUID: "updated"},
})
if err != nil {
t.Fatalf("Pack error: %v", err)
}
// Extract — should have manifest but no metadata
result, err := Extract(packed)
if err != nil {
t.Fatalf("Extract error: %v", err)
}
if result.Manifest == nil || result.Manifest.UUID != "updated" {
t.Errorf("manifest UUID = %v, want updated", result.Manifest)
}
if result.Metadata != nil {
t.Errorf("expected nil metadata, got %+v", result.Metadata)
}
// Verify no old silo/ entries leaked through
r, _ := zip.NewReader(bytes.NewReader(packed), int64(len(packed)))
for _, f := range r.File {
if f.Name == "silo/metadata.json" {
t.Error("old silo/metadata.json should have been removed")
}
}
}
func TestPack_EmptyDependencies(t *testing.T) {
original := buildZip(t, map[string][]byte{
"silo/manifest.json": []byte(`{"uuid":"x"}`),
})
packed, err := Pack(original, &PackInput{
Manifest: &Manifest{UUID: "x"},
Dependencies: []Dependency{},
})
if err != nil {
t.Fatalf("Pack error: %v", err)
}
// Verify dependencies.json exists and is []
r, _ := zip.NewReader(bytes.NewReader(packed), int64(len(packed)))
for _, f := range r.File {
if f.Name == "silo/dependencies.json" {
content := readZipEntry(t, f)
if string(content) != "[]" {
t.Errorf("dependencies.json = %q, want %q", content, "[]")
}
return
}
}
t.Error("silo/dependencies.json not found in packed ZIP")
}
// readZipEntry reads the full contents of a zip.File.
func readZipEntry(t *testing.T, f *zip.File) []byte {
t.Helper()
rc, err := f.Open()
if err != nil {
t.Fatalf("opening zip entry %s: %v", f.Name, err)
}
defer rc.Close()
data, err := io.ReadAll(rc)
if err != nil {
t.Fatalf("reading zip entry %s: %v", f.Name, err)
}
return data
}


@@ -0,0 +1,84 @@
package modules
import (
"context"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/kindredsystems/silo/internal/config"
)
// LoadState applies module state from config YAML and database overrides.
//
// Precedence (highest wins):
// 1. Database module_state table
// 2. YAML modules.* toggles
// 3. Backward-compat YAML fields (auth.enabled, odoo.enabled)
// 4. Module defaults (set by NewRegistry)
func LoadState(r *Registry, cfg *config.Config, pool *pgxpool.Pool) error {
// Step 1: Apply backward-compat top-level YAML fields.
// auth.enabled and odoo.enabled existed before the modules section.
// Only apply if the new modules.* section doesn't override them.
if cfg.Modules.Auth == nil {
r.setEnabledUnchecked(Auth, cfg.Auth.Enabled)
}
if cfg.Modules.Odoo == nil {
r.setEnabledUnchecked(Odoo, cfg.Odoo.Enabled)
}
// Step 2: Apply explicit modules.* YAML toggles (override defaults + compat).
applyToggle(r, Auth, cfg.Modules.Auth)
applyToggle(r, Projects, cfg.Modules.Projects)
applyToggle(r, Audit, cfg.Modules.Audit)
applyToggle(r, Odoo, cfg.Modules.Odoo)
applyToggle(r, FreeCAD, cfg.Modules.FreeCAD)
applyToggle(r, Jobs, cfg.Modules.Jobs)
applyToggle(r, DAG, cfg.Modules.DAG)
// Step 3: Apply database overrides (highest precedence).
if pool != nil {
if err := loadFromDB(r, pool); err != nil {
return err
}
}
// Step 4: Validate the final state.
return r.ValidateDependencies()
}
// applyToggle sets a module's state from a YAML ModuleToggle if present.
func applyToggle(r *Registry, id string, toggle *config.ModuleToggle) {
if toggle == nil || toggle.Enabled == nil {
return
}
r.setEnabledUnchecked(id, *toggle.Enabled)
}
// setEnabledUnchecked sets module state without dependency validation.
// Used during loading when the full state is being assembled incrementally.
func (r *Registry) setEnabledUnchecked(id string, enabled bool) {
r.mu.Lock()
defer r.mu.Unlock()
if m, ok := r.modules[id]; ok && !m.Required {
m.enabled = enabled
}
}
// loadFromDB reads module_state rows and applies them to the registry.
func loadFromDB(r *Registry, pool *pgxpool.Pool) error {
rows, err := pool.Query(context.Background(),
`SELECT module_id, enabled FROM module_state`)
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var id string
var enabled bool
if err := rows.Scan(&id, &enabled); err != nil {
return err
}
r.setEnabledUnchecked(id, enabled)
}
return rows.Err()
}


@@ -0,0 +1,88 @@
package modules
import (
"testing"
"github.com/kindredsystems/silo/internal/config"
)
func boolPtr(v bool) *bool { return &v }
func TestLoadState_DefaultsOnly(t *testing.T) {
r := NewRegistry()
cfg := &config.Config{}
if err := LoadState(r, cfg, nil); err != nil {
t.Fatalf("LoadState: %v", err)
}
// Auth defaults to true from registry, but cfg.Auth.Enabled is false
// (zero value) and backward-compat applies, so auth ends up disabled.
if r.IsEnabled(Auth) {
t.Error("auth should be disabled (cfg.Auth.Enabled is false by default)")
}
}
func TestLoadState_BackwardCompat(t *testing.T) {
r := NewRegistry()
cfg := &config.Config{}
cfg.Auth.Enabled = true
cfg.Odoo.Enabled = true
if err := LoadState(r, cfg, nil); err != nil {
t.Fatalf("LoadState: %v", err)
}
if !r.IsEnabled(Auth) {
t.Error("auth should be enabled via cfg.Auth.Enabled")
}
if !r.IsEnabled(Odoo) {
t.Error("odoo should be enabled via cfg.Odoo.Enabled")
}
}
func TestLoadState_YAMLModulesOverrideCompat(t *testing.T) {
r := NewRegistry()
cfg := &config.Config{}
cfg.Auth.Enabled = true // compat says enabled
cfg.Modules.Auth = &config.ModuleToggle{Enabled: boolPtr(false)} // explicit says disabled
if err := LoadState(r, cfg, nil); err != nil {
t.Fatalf("LoadState: %v", err)
}
if r.IsEnabled(Auth) {
t.Error("modules.auth.enabled=false should override auth.enabled=true")
}
}
func TestLoadState_EnableJobsAndDAG(t *testing.T) {
r := NewRegistry()
cfg := &config.Config{}
cfg.Auth.Enabled = true
cfg.Modules.Jobs = &config.ModuleToggle{Enabled: boolPtr(true)}
cfg.Modules.DAG = &config.ModuleToggle{Enabled: boolPtr(true)}
if err := LoadState(r, cfg, nil); err != nil {
t.Fatalf("LoadState: %v", err)
}
if !r.IsEnabled(Jobs) {
t.Error("jobs should be enabled")
}
if !r.IsEnabled(DAG) {
t.Error("dag should be enabled")
}
}
func TestLoadState_InvalidDependency(t *testing.T) {
r := NewRegistry()
cfg := &config.Config{}
// Auth disabled (default), but enable jobs which depends on auth.
cfg.Modules.Jobs = &config.ModuleToggle{Enabled: boolPtr(true)}
err := LoadState(r, cfg, nil)
if err == nil {
t.Error("should fail: jobs enabled but auth disabled")
}
}

internal/modules/modules.go Normal file

@@ -0,0 +1,163 @@
// Package modules provides the module registry for Silo.
// Each module groups API endpoints, UI views, and configuration.
// Modules can be required (always on) or optional (admin-toggleable).
package modules
import (
"fmt"
"sort"
"sync"
)
// Module IDs.
const (
Core = "core"
Schemas = "schemas"
Storage = "storage"
Auth = "auth"
Projects = "projects"
Audit = "audit"
Odoo = "odoo"
FreeCAD = "freecad"
Jobs = "jobs"
DAG = "dag"
)
// ModuleInfo describes a module's metadata.
type ModuleInfo struct {
ID string
Name string
Description string
Required bool // cannot be disabled
DefaultEnabled bool // initial state for optional modules
DependsOn []string // module IDs this module requires
Version string
}
// moduleState pairs a module's ModuleInfo with its runtime enabled state.
type moduleState struct {
ModuleInfo
enabled bool
}
// Registry holds all module definitions and their enabled state.
type Registry struct {
mu sync.RWMutex
modules map[string]*moduleState
}
// builtinModules defines the complete set of Silo modules.
var builtinModules = []ModuleInfo{
{ID: Core, Name: "Core PDM", Description: "Items, revisions, files, BOM, search, import/export", Required: true, Version: "0.2"},
{ID: Schemas, Name: "Schemas", Description: "Part numbering schema parsing and segment management", Required: true},
{ID: Storage, Name: "Storage", Description: "MinIO/S3 file storage, presigned uploads", Required: true},
{ID: Auth, Name: "Authentication", Description: "Local, LDAP, OIDC authentication and RBAC", DefaultEnabled: true},
{ID: Projects, Name: "Projects", Description: "Project management and item tagging", DefaultEnabled: true},
{ID: Audit, Name: "Audit", Description: "Audit logging, completeness scoring", DefaultEnabled: true},
{ID: Odoo, Name: "Odoo ERP", Description: "Odoo integration (config, sync-log, push/pull)", DependsOn: []string{Auth}},
{ID: FreeCAD, Name: "Create Integration", Description: "URI scheme, executable path, client settings", DefaultEnabled: true},
{ID: Jobs, Name: "Job Queue", Description: "Async compute jobs, runner management", DependsOn: []string{Auth}},
{ID: DAG, Name: "Dependency DAG", Description: "Feature DAG sync, validation states, interference detection", DependsOn: []string{Jobs}},
}
// NewRegistry creates a registry with all builtin modules set to their default state.
func NewRegistry() *Registry {
r := &Registry{modules: make(map[string]*moduleState, len(builtinModules))}
for _, m := range builtinModules {
enabled := m.Required || m.DefaultEnabled
r.modules[m.ID] = &moduleState{ModuleInfo: m, enabled: enabled}
}
return r
}
// IsEnabled returns whether a module is currently enabled.
func (r *Registry) IsEnabled(id string) bool {
r.mu.RLock()
defer r.mu.RUnlock()
if m, ok := r.modules[id]; ok {
return m.enabled
}
return false
}
// SetEnabled changes a module's enabled state with dependency validation.
func (r *Registry) SetEnabled(id string, enabled bool) error {
r.mu.Lock()
defer r.mu.Unlock()
m, ok := r.modules[id]
if !ok {
return fmt.Errorf("unknown module %q", id)
}
if m.Required {
return fmt.Errorf("module %q is required and cannot be disabled", id)
}
if enabled {
// Check that all dependencies are enabled.
for _, dep := range m.DependsOn {
if dm, ok := r.modules[dep]; ok && !dm.enabled {
return fmt.Errorf("cannot enable %q: dependency %q is disabled", id, dep)
}
}
} else {
// Check that no enabled module depends on this one.
for _, other := range r.modules {
if !other.enabled || other.ID == id {
continue
}
for _, dep := range other.DependsOn {
if dep == id {
return fmt.Errorf("cannot disable %q: module %q depends on it", id, other.ID)
}
}
}
}
m.enabled = enabled
return nil
}
// All returns info for every module, sorted by ID.
func (r *Registry) All() []ModuleInfo {
r.mu.RLock()
defer r.mu.RUnlock()
out := make([]ModuleInfo, 0, len(r.modules))
for _, m := range r.modules {
out = append(out, m.ModuleInfo)
}
sort.Slice(out, func(i, j int) bool { return out[i].ID < out[j].ID })
return out
}
// Get returns info for a single module, or nil if not found.
func (r *Registry) Get(id string) *ModuleInfo {
r.mu.RLock()
defer r.mu.RUnlock()
if m, ok := r.modules[id]; ok {
info := m.ModuleInfo
return &info
}
return nil
}
// ValidateDependencies checks that every enabled module's dependencies
// are also enabled. Returns the first violation found.
func (r *Registry) ValidateDependencies() error {
r.mu.RLock()
defer r.mu.RUnlock()
for _, m := range r.modules {
if !m.enabled {
continue
}
for _, dep := range m.DependsOn {
if dm, ok := r.modules[dep]; ok && !dm.enabled {
return fmt.Errorf("module %q is enabled but its dependency %q is disabled", m.ID, dep)
}
}
}
return nil
}


@@ -0,0 +1,169 @@
package modules
import (
"testing"
)
func TestNewRegistry_DefaultState(t *testing.T) {
r := NewRegistry()
// Required modules are always enabled.
for _, id := range []string{Core, Schemas, Storage} {
if !r.IsEnabled(id) {
t.Errorf("required module %q should be enabled by default", id)
}
}
// Optional modules with DefaultEnabled=true.
for _, id := range []string{Auth, Projects, Audit, FreeCAD} {
if !r.IsEnabled(id) {
t.Errorf("module %q should be enabled by default", id)
}
}
// Optional modules with DefaultEnabled=false.
for _, id := range []string{Odoo, Jobs, DAG} {
if r.IsEnabled(id) {
t.Errorf("module %q should be disabled by default", id)
}
}
}
func TestSetEnabled_BasicToggle(t *testing.T) {
r := NewRegistry()
// Disable an optional module with no dependents.
if err := r.SetEnabled(Projects, false); err != nil {
t.Fatalf("disabling projects: %v", err)
}
if r.IsEnabled(Projects) {
t.Error("projects should be disabled after SetEnabled(false)")
}
// Re-enable it.
if err := r.SetEnabled(Projects, true); err != nil {
t.Fatalf("enabling projects: %v", err)
}
if !r.IsEnabled(Projects) {
t.Error("projects should be enabled after SetEnabled(true)")
}
}
func TestCannotDisableRequired(t *testing.T) {
r := NewRegistry()
for _, id := range []string{Core, Schemas, Storage} {
if err := r.SetEnabled(id, false); err == nil {
t.Errorf("disabling required module %q should return error", id)
}
}
}
func TestDependencyChain_EnableWithoutDep(t *testing.T) {
r := NewRegistry()
// Jobs depends on Auth. Auth is enabled by default, so enabling jobs works.
if err := r.SetEnabled(Jobs, true); err != nil {
t.Fatalf("enabling jobs (auth enabled): %v", err)
}
// DAG depends on Jobs. Jobs is now enabled, so enabling dag works.
if err := r.SetEnabled(DAG, true); err != nil {
t.Fatalf("enabling dag (jobs enabled): %v", err)
}
// Now try with deps disabled. Start fresh.
r2 := NewRegistry()
// DAG depends on Jobs, which is disabled by default.
if err := r2.SetEnabled(DAG, true); err == nil {
t.Error("enabling dag without jobs should fail")
}
}
func TestDisableDependedOn(t *testing.T) {
r := NewRegistry()
// Enable the full chain: auth (already on) → jobs → dag.
if err := r.SetEnabled(Jobs, true); err != nil {
t.Fatal(err)
}
if err := r.SetEnabled(DAG, true); err != nil {
t.Fatal(err)
}
// Cannot disable jobs while dag depends on it.
if err := r.SetEnabled(Jobs, false); err == nil {
t.Error("disabling jobs while dag is enabled should fail")
}
// Disable dag first, then jobs should work.
if err := r.SetEnabled(DAG, false); err != nil {
t.Fatal(err)
}
if err := r.SetEnabled(Jobs, false); err != nil {
t.Fatalf("disabling jobs after dag disabled: %v", err)
}
}
func TestCannotDisableAuthWhileJobsEnabled(t *testing.T) {
r := NewRegistry()
if err := r.SetEnabled(Jobs, true); err != nil {
t.Fatal(err)
}
// Auth is depended on by jobs.
if err := r.SetEnabled(Auth, false); err == nil {
t.Error("disabling auth while jobs is enabled should fail")
}
}
func TestUnknownModule(t *testing.T) {
r := NewRegistry()
if r.IsEnabled("nonexistent") {
t.Error("unknown module should not be enabled")
}
if err := r.SetEnabled("nonexistent", true); err == nil {
t.Error("setting unknown module should return error")
}
if r.Get("nonexistent") != nil {
t.Error("getting unknown module should return nil")
}
}
func TestAll_ReturnsAllModules(t *testing.T) {
r := NewRegistry()
all := r.All()
if len(all) != 10 {
t.Errorf("expected 10 modules, got %d", len(all))
}
// Should be sorted by ID.
for i := 1; i < len(all); i++ {
if all[i].ID < all[i-1].ID {
t.Errorf("modules not sorted: %s before %s", all[i-1].ID, all[i].ID)
}
}
}
func TestValidateDependencies(t *testing.T) {
r := NewRegistry()
// Default state should be valid.
if err := r.ValidateDependencies(); err != nil {
t.Fatalf("default state should be valid: %v", err)
}
// Force an invalid state by directly mutating (bypassing SetEnabled).
r.mu.Lock()
r.modules[Jobs].enabled = true
r.modules[Auth].enabled = false
r.mu.Unlock()
if err := r.ValidateDependencies(); err == nil {
t.Error("should detect jobs enabled without auth")
}
}


@@ -0,0 +1,177 @@
package storage
import (
"context"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"io"
"net/url"
"os"
"path/filepath"
"time"
)
// ErrPresignNotSupported is returned when presigned URLs are requested from a
// backend that does not support them.
var ErrPresignNotSupported = errors.New("presigned URLs not supported by filesystem backend")
// Compile-time check: *FilesystemStore implements FileStore.
var _ FileStore = (*FilesystemStore)(nil)
// FilesystemStore stores objects as files under a root directory.
type FilesystemStore struct {
root string // absolute path
}
// NewFilesystemStore creates a new filesystem-backed store rooted at root.
// The directory is created if it does not exist.
func NewFilesystemStore(root string) (*FilesystemStore, error) {
abs, err := filepath.Abs(root)
if err != nil {
return nil, fmt.Errorf("resolving root path: %w", err)
}
if err := os.MkdirAll(abs, 0o755); err != nil {
return nil, fmt.Errorf("creating root directory: %w", err)
}
return &FilesystemStore{root: abs}, nil
}
// path returns the absolute filesystem path for a storage key.
func (fs *FilesystemStore) path(key string) string {
return filepath.Join(fs.root, filepath.FromSlash(key))
}
// Put writes reader to the file at key using atomic rename.
// SHA-256 checksum is computed during write and returned in PutResult.
func (fs *FilesystemStore) Put(_ context.Context, key string, reader io.Reader, _ int64, _ string) (*PutResult, error) {
dest := fs.path(key)
if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
return nil, fmt.Errorf("creating directories: %w", err)
}
// Write to a temp file in the same directory so os.Rename is atomic.
tmp, err := os.CreateTemp(filepath.Dir(dest), ".silo-tmp-*")
if err != nil {
return nil, fmt.Errorf("creating temp file: %w", err)
}
tmpPath := tmp.Name()
defer func() {
// Clean up temp file on any failure path.
tmp.Close()
os.Remove(tmpPath)
}()
h := sha256.New()
w := io.MultiWriter(tmp, h)
n, err := io.Copy(w, reader)
if err != nil {
return nil, fmt.Errorf("writing file: %w", err)
}
if err := tmp.Close(); err != nil {
return nil, fmt.Errorf("closing temp file: %w", err)
}
if err := os.Rename(tmpPath, dest); err != nil {
return nil, fmt.Errorf("renaming temp file: %w", err)
}
return &PutResult{
Key: key,
Size: n,
Checksum: hex.EncodeToString(h.Sum(nil)),
}, nil
}
// Get opens the file at key for reading.
func (fs *FilesystemStore) Get(_ context.Context, key string) (io.ReadCloser, error) {
f, err := os.Open(fs.path(key))
if err != nil {
return nil, fmt.Errorf("opening file: %w", err)
}
return f, nil
}
// GetVersion delegates to Get — filesystem storage has no versioning.
func (fs *FilesystemStore) GetVersion(ctx context.Context, key string, _ string) (io.ReadCloser, error) {
return fs.Get(ctx, key)
}
// Delete removes the file at key. No error if already absent.
func (fs *FilesystemStore) Delete(_ context.Context, key string) error {
err := os.Remove(fs.path(key))
if err != nil && !errors.Is(err, os.ErrNotExist) {
return fmt.Errorf("removing file: %w", err)
}
return nil
}
// Exists reports whether the file at key exists.
func (fs *FilesystemStore) Exists(_ context.Context, key string) (bool, error) {
_, err := os.Stat(fs.path(key))
if err == nil {
return true, nil
}
if errors.Is(err, os.ErrNotExist) {
return false, nil
}
return false, fmt.Errorf("checking file: %w", err)
}
// Copy duplicates a file from srcKey to dstKey using atomic rename.
func (fs *FilesystemStore) Copy(_ context.Context, srcKey, dstKey string) error {
srcPath := fs.path(srcKey)
dstPath := fs.path(dstKey)
src, err := os.Open(srcPath)
if err != nil {
return fmt.Errorf("opening source: %w", err)
}
defer src.Close()
if err := os.MkdirAll(filepath.Dir(dstPath), 0o755); err != nil {
return fmt.Errorf("creating directories: %w", err)
}
tmp, err := os.CreateTemp(filepath.Dir(dstPath), ".silo-tmp-*")
if err != nil {
return fmt.Errorf("creating temp file: %w", err)
}
tmpPath := tmp.Name()
defer func() {
tmp.Close()
os.Remove(tmpPath)
}()
if _, err := io.Copy(tmp, src); err != nil {
return fmt.Errorf("copying file: %w", err)
}
if err := tmp.Close(); err != nil {
return fmt.Errorf("closing temp file: %w", err)
}
if err := os.Rename(tmpPath, dstPath); err != nil {
return fmt.Errorf("renaming temp file: %w", err)
}
return nil
}
// PresignPut is not supported by the filesystem backend.
func (fs *FilesystemStore) PresignPut(_ context.Context, _ string, _ time.Duration) (*url.URL, error) {
return nil, ErrPresignNotSupported
}
// Ping verifies the root directory is accessible and writable.
func (fs *FilesystemStore) Ping(_ context.Context) error {
tmp, err := os.CreateTemp(fs.root, ".silo-ping-*")
if err != nil {
return fmt.Errorf("storage ping failed: %w", err)
}
name := tmp.Name()
tmp.Close()
os.Remove(name)
return nil
}


@@ -0,0 +1,277 @@
package storage
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"io"
"os"
"path/filepath"
"strings"
"testing"
)
func newTestStore(t *testing.T) *FilesystemStore {
t.Helper()
fs, err := NewFilesystemStore(t.TempDir())
if err != nil {
t.Fatalf("NewFilesystemStore: %v", err)
}
return fs
}
func TestNewFilesystemStore(t *testing.T) {
dir := t.TempDir()
sub := filepath.Join(dir, "a", "b")
fs, err := NewFilesystemStore(sub)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !filepath.IsAbs(fs.root) {
t.Errorf("root is not absolute: %s", fs.root)
}
info, err := os.Stat(sub)
if err != nil {
t.Fatalf("root dir missing: %v", err)
}
if !info.IsDir() {
t.Error("root is not a directory")
}
}
func TestPut(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
data := []byte("hello world")
h := sha256.Sum256(data)
wantChecksum := hex.EncodeToString(h[:])
result, err := fs.Put(ctx, "items/P001/rev1.FCStd", bytes.NewReader(data), int64(len(data)), "application/octet-stream")
if err != nil {
t.Fatalf("Put: %v", err)
}
if result.Key != "items/P001/rev1.FCStd" {
t.Errorf("Key = %q, want %q", result.Key, "items/P001/rev1.FCStd")
}
if result.Size != int64(len(data)) {
t.Errorf("Size = %d, want %d", result.Size, len(data))
}
if result.Checksum != wantChecksum {
t.Errorf("Checksum = %q, want %q", result.Checksum, wantChecksum)
}
// Verify file on disk.
got, err := os.ReadFile(fs.path("items/P001/rev1.FCStd"))
if err != nil {
t.Fatalf("reading file: %v", err)
}
if !bytes.Equal(got, data) {
t.Error("file content mismatch")
}
}
func TestPutAtomicity(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
key := "test/atomic.bin"
// Write an initial file.
if _, err := fs.Put(ctx, key, strings.NewReader("original"), 8, ""); err != nil {
t.Fatalf("initial Put: %v", err)
}
// Write with a reader that fails partway through.
failing := io.MultiReader(strings.NewReader("partial"), &errReader{})
_, err := fs.Put(ctx, key, failing, 100, "")
if err == nil {
t.Fatal("expected error from failing reader")
}
// Original file should still be intact.
got, err := os.ReadFile(fs.path(key))
if err != nil {
t.Fatalf("reading file after failed put: %v", err)
}
if string(got) != "original" {
t.Errorf("file content = %q, want %q", got, "original")
}
}
type errReader struct{}
func (e *errReader) Read([]byte) (int, error) {
return 0, io.ErrUnexpectedEOF
}
func TestGet(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
data := []byte("test content")
if _, err := fs.Put(ctx, "f.txt", bytes.NewReader(data), int64(len(data)), ""); err != nil {
t.Fatalf("Put: %v", err)
}
rc, err := fs.Get(ctx, "f.txt")
if err != nil {
t.Fatalf("Get: %v", err)
}
defer rc.Close()
got, err := io.ReadAll(rc)
if err != nil {
t.Fatalf("ReadAll: %v", err)
}
if !bytes.Equal(got, data) {
t.Error("content mismatch")
}
}
func TestGetMissing(t *testing.T) {
fs := newTestStore(t)
_, err := fs.Get(context.Background(), "no/such/file")
if err == nil {
t.Fatal("expected error for missing file")
}
}
func TestGetVersion(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
data := []byte("versioned")
if _, err := fs.Put(ctx, "v.txt", bytes.NewReader(data), int64(len(data)), ""); err != nil {
t.Fatalf("Put: %v", err)
}
// GetVersion ignores versionID, returns same file.
rc, err := fs.GetVersion(ctx, "v.txt", "ignored-version-id")
if err != nil {
t.Fatalf("GetVersion: %v", err)
}
defer rc.Close()
got, err := io.ReadAll(rc)
if err != nil {
t.Fatalf("ReadAll: %v", err)
}
if !bytes.Equal(got, data) {
t.Error("content mismatch")
}
}
func TestDelete(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
if _, err := fs.Put(ctx, "del.txt", strings.NewReader("x"), 1, ""); err != nil {
t.Fatalf("Put: %v", err)
}
if err := fs.Delete(ctx, "del.txt"); err != nil {
t.Fatalf("Delete: %v", err)
}
if _, err := os.Stat(fs.path("del.txt")); !os.IsNotExist(err) {
t.Error("file still exists after delete")
}
}
func TestDeleteMissing(t *testing.T) {
fs := newTestStore(t)
if err := fs.Delete(context.Background(), "no/such/file"); err != nil {
t.Fatalf("Delete missing file should not error: %v", err)
}
}
func TestExists(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
ok, err := fs.Exists(ctx, "nope")
if err != nil {
t.Fatalf("Exists: %v", err)
}
if ok {
t.Error("Exists returned true for missing file")
}
if _, err := fs.Put(ctx, "yes.txt", strings.NewReader("y"), 1, ""); err != nil {
t.Fatalf("Put: %v", err)
}
ok, err = fs.Exists(ctx, "yes.txt")
if err != nil {
t.Fatalf("Exists: %v", err)
}
if !ok {
t.Error("Exists returned false for existing file")
}
}
func TestCopy(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
data := []byte("copy me")
if _, err := fs.Put(ctx, "src.bin", bytes.NewReader(data), int64(len(data)), ""); err != nil {
t.Fatalf("Put: %v", err)
}
if err := fs.Copy(ctx, "src.bin", "deep/nested/dst.bin"); err != nil {
t.Fatalf("Copy: %v", err)
}
got, err := os.ReadFile(fs.path("deep/nested/dst.bin"))
if err != nil {
t.Fatalf("reading copied file: %v", err)
}
if !bytes.Equal(got, data) {
t.Error("copied content mismatch")
}
// Source should still exist.
if _, err := os.Stat(fs.path("src.bin")); err != nil {
t.Error("source file missing after copy")
}
}
func TestPresignPut(t *testing.T) {
fs := newTestStore(t)
_, err := fs.PresignPut(context.Background(), "key", 0) // expiry is irrelevant: the backend rejects presigning outright
if err != ErrPresignNotSupported {
t.Errorf("PresignPut error = %v, want ErrPresignNotSupported", err)
}
}
func TestPing(t *testing.T) {
fs := newTestStore(t)
if err := fs.Ping(context.Background()); err != nil {
t.Fatalf("Ping: %v", err)
}
}
func TestPingBadRoot(t *testing.T) {
fs := &FilesystemStore{root: "/nonexistent/path/that/should/not/exist"}
if err := fs.Ping(context.Background()); err == nil {
t.Fatal("expected Ping to fail with invalid root")
}
}
func TestPutOverwrite(t *testing.T) {
fs := newTestStore(t)
ctx := context.Background()
if _, err := fs.Put(ctx, "ow.txt", strings.NewReader("first"), 5, ""); err != nil {
t.Fatalf("Put: %v", err)
}
if _, err := fs.Put(ctx, "ow.txt", strings.NewReader("second"), 6, ""); err != nil {
t.Fatalf("Put overwrite: %v", err)
}
got, _ := os.ReadFile(fs.path("ow.txt"))
if string(got) != "second" {
t.Errorf("content = %q, want %q", got, "second")
}
}


@@ -0,0 +1,21 @@
// Package storage defines the FileStore interface and backend implementations.
package storage
import (
"context"
"io"
"net/url"
"time"
)
// FileStore is the interface for file storage backends.
type FileStore interface {
Put(ctx context.Context, key string, reader io.Reader, size int64, contentType string) (*PutResult, error)
Get(ctx context.Context, key string) (io.ReadCloser, error)
GetVersion(ctx context.Context, key string, versionID string) (io.ReadCloser, error)
Delete(ctx context.Context, key string) error
Exists(ctx context.Context, key string) (bool, error)
Copy(ctx context.Context, srcKey, dstKey string) error
PresignPut(ctx context.Context, key string, expiry time.Duration) (*url.URL, error)
Ping(ctx context.Context) error
}


@@ -1,4 +1,3 @@
// Package storage provides MinIO file storage operations.
package storage
import (
@@ -22,6 +21,9 @@ type Config struct {
Region string
}
// Compile-time check: *Storage implements FileStore.
var _ FileStore = (*Storage)(nil)
// Storage wraps MinIO client operations.
type Storage struct {
client *minio.Client
@@ -112,6 +114,19 @@ func (s *Storage) Delete(ctx context.Context, key string) error {
return nil
}
// Exists checks if an object exists in storage.
func (s *Storage) Exists(ctx context.Context, key string) (bool, error) {
_, err := s.client.StatObject(ctx, s.bucket, key, minio.StatObjectOptions{})
if err != nil {
resp := minio.ToErrorResponse(err)
if resp.Code == "NoSuchKey" {
return false, nil
}
return false, fmt.Errorf("checking object existence: %w", err)
}
return true, nil
}
// Ping checks if the storage backend is reachable by verifying the bucket exists.
func (s *Storage) Ping(ctx context.Context) error {
_, err := s.client.BucketExists(ctx, s.bucket)


@@ -3,7 +3,6 @@ package testutil
import (
"context"
"fmt"
"os"
"path/filepath"
"sort"
@@ -80,9 +79,13 @@ func TruncateAll(t *testing.T, pool *pgxpool.Pool) {
_, err := pool.Exec(context.Background(), `
TRUNCATE
item_metadata, item_dependencies, approval_signatures, item_approvals, item_macros,
settings_overrides, module_state,
job_log, jobs, job_definitions, runners,
dag_cross_edges, dag_edges, dag_nodes,
audit_log, sync_log, api_tokens, sessions, item_files,
item_projects, relationships, revisions, inventory, items,
projects, sequences_by_name, users, property_migrations
locations, projects, sequences_by_name, users, property_migrations
CASCADE
`)
if err != nil {
@@ -109,6 +112,4 @@ func findProjectRoot(t *testing.T) string {
}
dir = parent
}
panic(fmt.Sprintf("unreachable"))
}


@@ -0,0 +1,26 @@
job:
name: assembly-validate
version: 1
description: "Validate assembly by rebuilding its dependency subgraph"
trigger:
type: revision_created
filter:
item_type: assembly
scope:
type: assembly
compute:
type: validate
command: create-validate
args:
rebuild_mode: incremental
check_interference: true
runner:
tags: [create]
timeout: 900
max_retries: 2
priority: 50


@@ -0,0 +1,24 @@
job:
name: part-export-step
version: 1
description: "Export a part to STEP format"
trigger:
type: manual
scope:
type: item
compute:
type: export
command: create-export
args:
format: step
output_key_template: "exports/{part_number}_rev{revision}.step"
runner:
tags: [create]
timeout: 300
max_retries: 1
priority: 100


@@ -0,0 +1,67 @@
-- Dependency DAG: feature-level nodes and edges within items.
-- Migration: 014_dag_nodes_edges
-- Date: 2026-02
BEGIN;
--------------------------------------------------------------------------------
-- DAG Nodes (feature-level nodes within an item's revision)
--------------------------------------------------------------------------------
CREATE TABLE dag_nodes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
item_id UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
revision_number INTEGER NOT NULL,
node_key TEXT NOT NULL,
node_type TEXT NOT NULL,
properties_hash TEXT,
validation_state TEXT NOT NULL DEFAULT 'clean',
validation_msg TEXT,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(item_id, revision_number, node_key)
);
CREATE INDEX idx_dag_nodes_item ON dag_nodes(item_id);
CREATE INDEX idx_dag_nodes_item_rev ON dag_nodes(item_id, revision_number);
CREATE INDEX idx_dag_nodes_state ON dag_nodes(validation_state)
WHERE validation_state != 'clean';
CREATE INDEX idx_dag_nodes_type ON dag_nodes(node_type);
--------------------------------------------------------------------------------
-- DAG Edges (dependencies between nodes within a single item)
-- Direction: source → target means "target depends on source"
--------------------------------------------------------------------------------
CREATE TABLE dag_edges (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
source_node_id UUID NOT NULL REFERENCES dag_nodes(id) ON DELETE CASCADE,
target_node_id UUID NOT NULL REFERENCES dag_nodes(id) ON DELETE CASCADE,
edge_type TEXT NOT NULL DEFAULT 'depends_on',
metadata JSONB DEFAULT '{}',
UNIQUE(source_node_id, target_node_id, edge_type),
CONSTRAINT no_self_edge CHECK (source_node_id != target_node_id)
);
CREATE INDEX idx_dag_edges_source ON dag_edges(source_node_id);
CREATE INDEX idx_dag_edges_target ON dag_edges(target_node_id);
--------------------------------------------------------------------------------
-- Cross-item DAG edges (linking feature nodes across BOM boundaries)
--------------------------------------------------------------------------------
CREATE TABLE dag_cross_edges (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
source_node_id UUID NOT NULL REFERENCES dag_nodes(id) ON DELETE CASCADE,
target_node_id UUID NOT NULL REFERENCES dag_nodes(id) ON DELETE CASCADE,
relationship_id UUID REFERENCES relationships(id) ON DELETE SET NULL,
edge_type TEXT NOT NULL DEFAULT 'assembly_ref',
metadata JSONB DEFAULT '{}',
UNIQUE(source_node_id, target_node_id)
);
CREATE INDEX idx_dag_cross_source ON dag_cross_edges(source_node_id);
CREATE INDEX idx_dag_cross_target ON dag_cross_edges(target_node_id);
COMMIT;
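With the edge direction fixed as "source → target means target depends on source", downstream impact of a dirty node is a straightforward recursive walk. A hypothetical query against these tables (the query itself is an illustration, not shipped in this migration):

```sql
-- Find every node downstream of a dirty node, following depends_on edges.
-- UNION deduplicates, so the walk terminates even under repeated paths.
WITH RECURSIVE affected AS (
    SELECT n.id FROM dag_nodes n
    WHERE n.validation_state = 'dirty'
  UNION
    SELECT e.target_node_id
    FROM dag_edges e
    JOIN affected a ON e.source_node_id = a.id
)
SELECT DISTINCT id FROM affected;
```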


@@ -0,0 +1,109 @@
-- Worker system: runners, job definitions, jobs, and job log.
-- Migration: 015_jobs_runners
-- Date: 2026-02
BEGIN;
--------------------------------------------------------------------------------
-- Runners (registered compute workers)
--------------------------------------------------------------------------------
CREATE TABLE runners (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT UNIQUE NOT NULL,
token_hash TEXT NOT NULL,
token_prefix TEXT NOT NULL,
tags TEXT[] NOT NULL DEFAULT '{}',
status TEXT NOT NULL DEFAULT 'offline',
last_heartbeat TIMESTAMPTZ,
last_job_id UUID,
metadata JSONB DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_runners_status ON runners(status);
CREATE INDEX idx_runners_token ON runners(token_hash);
--------------------------------------------------------------------------------
-- Job Definitions (parsed from YAML, stored for reference and FK)
--------------------------------------------------------------------------------
CREATE TABLE job_definitions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT UNIQUE NOT NULL,
version INTEGER NOT NULL DEFAULT 1,
trigger_type TEXT NOT NULL,
scope_type TEXT NOT NULL,
compute_type TEXT NOT NULL,
runner_tags TEXT[] NOT NULL DEFAULT '{}',
timeout_seconds INTEGER NOT NULL DEFAULT 600,
max_retries INTEGER NOT NULL DEFAULT 1,
priority INTEGER NOT NULL DEFAULT 100,
definition JSONB NOT NULL,
enabled BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_job_defs_trigger ON job_definitions(trigger_type);
CREATE INDEX idx_job_defs_enabled ON job_definitions(enabled) WHERE enabled = true;
--------------------------------------------------------------------------------
-- Jobs (individual compute job instances)
--------------------------------------------------------------------------------
CREATE TYPE job_status AS ENUM (
'pending', 'claimed', 'running', 'completed', 'failed', 'cancelled'
);
CREATE TABLE jobs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_definition_id UUID REFERENCES job_definitions(id) ON DELETE SET NULL,
definition_name TEXT NOT NULL,
status job_status NOT NULL DEFAULT 'pending',
priority INTEGER NOT NULL DEFAULT 100,
item_id UUID REFERENCES items(id) ON DELETE CASCADE,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
scope_metadata JSONB DEFAULT '{}',
runner_id UUID REFERENCES runners(id) ON DELETE SET NULL,
runner_tags TEXT[] NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
claimed_at TIMESTAMPTZ,
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
timeout_seconds INTEGER NOT NULL DEFAULT 600,
expires_at TIMESTAMPTZ,
progress INTEGER DEFAULT 0,
progress_message TEXT,
result JSONB,
error_message TEXT,
retry_count INTEGER NOT NULL DEFAULT 0,
max_retries INTEGER NOT NULL DEFAULT 1,
created_by TEXT,
cancelled_by TEXT
);
CREATE INDEX idx_jobs_status ON jobs(status);
CREATE INDEX idx_jobs_pending ON jobs(status, priority, created_at)
WHERE status = 'pending';
CREATE INDEX idx_jobs_item ON jobs(item_id);
CREATE INDEX idx_jobs_runner ON jobs(runner_id);
CREATE INDEX idx_jobs_definition ON jobs(job_definition_id);
--------------------------------------------------------------------------------
-- Job Log (append-only progress entries)
--------------------------------------------------------------------------------
CREATE TABLE job_log (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
job_id UUID NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
timestamp TIMESTAMPTZ NOT NULL DEFAULT now(),
level TEXT NOT NULL DEFAULT 'info',
message TEXT NOT NULL,
metadata JSONB DEFAULT '{}'
);
CREATE INDEX idx_job_log_job ON job_log(job_id, timestamp);
COMMIT;
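The partial index idx_jobs_pending on (status, priority, created_at) is shaped for a claim query. A hypothetical runner claim against this schema (illustrative only; $1 is the runner id, $2 the runner's tag array):

```sql
-- Claim the highest-priority pending job whose required tags are a subset
-- of the runner's tags, skipping rows other runners have already locked.
UPDATE jobs SET
    status = 'claimed',
    runner_id = $1,
    claimed_at = now()
WHERE id = (
    SELECT id FROM jobs
    WHERE status = 'pending' AND runner_tags <@ $2
    ORDER BY priority, created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, definition_name;
```

FOR UPDATE SKIP LOCKED lets many runners poll concurrently without serializing on the same row.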


@@ -0,0 +1,15 @@
-- 016_module_system.sql — settings overrides and module state persistence
CREATE TABLE IF NOT EXISTS settings_overrides (
key TEXT PRIMARY KEY,
value JSONB NOT NULL,
updated_by TEXT NOT NULL,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE IF NOT EXISTS module_state (
module_id TEXT PRIMARY KEY,
enabled BOOLEAN NOT NULL,
updated_by TEXT NOT NULL,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
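With module_id as the primary key, persisting a toggle is a single upsert; a hypothetical write path for SetEnabled (illustrative, not part of this migration):

```sql
-- One row per module; last write wins on toggle.
INSERT INTO module_state (module_id, enabled, updated_by)
VALUES ($1, $2, $3)
ON CONFLICT (module_id) DO UPDATE
SET enabled    = EXCLUDED.enabled,
    updated_by = EXCLUDED.updated_by,
    updated_at = now();
```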


@@ -0,0 +1,7 @@
-- Track which storage backend holds each attached file.
ALTER TABLE item_files
ADD COLUMN IF NOT EXISTS storage_backend TEXT NOT NULL DEFAULT 'minio';
-- Track which storage backend holds each revision file.
ALTER TABLE revisions
ADD COLUMN IF NOT EXISTS file_storage_backend TEXT NOT NULL DEFAULT 'minio';


@@ -0,0 +1,110 @@
-- Migration 018: .kc Server-Side Metadata Tables
--
-- Adds tables for indexing the silo/ directory contents from .kc files.
-- See docs/KC_SERVER.md for the full specification.
--
-- Tables:
-- item_metadata - indexed manifest + metadata fields (Section 3.1)
-- item_dependencies - CAD-extracted assembly dependencies (Section 3.2)
-- item_approvals - ECO workflow state (Section 3.3)
-- approval_signatures - individual approval/rejection records (Section 3.3)
-- item_macros - registered macros from silo/macros/ (Section 3.4)
BEGIN;
--------------------------------------------------------------------------------
-- item_metadata: indexed silo/manifest.json + silo/metadata.json
--------------------------------------------------------------------------------
CREATE TABLE item_metadata (
item_id UUID PRIMARY KEY REFERENCES items(id) ON DELETE CASCADE,
schema_name TEXT,
tags TEXT[] NOT NULL DEFAULT '{}',
lifecycle_state TEXT NOT NULL DEFAULT 'draft',
fields JSONB NOT NULL DEFAULT '{}',
kc_version TEXT,
manifest_uuid UUID,
silo_instance TEXT,
revision_hash TEXT,
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_by TEXT
);
CREATE INDEX idx_item_metadata_tags ON item_metadata USING GIN (tags);
CREATE INDEX idx_item_metadata_lifecycle ON item_metadata (lifecycle_state);
CREATE INDEX idx_item_metadata_fields ON item_metadata USING GIN (fields);
--------------------------------------------------------------------------------
-- item_dependencies: indexed silo/dependencies.json
--
-- Complements the existing `relationships` table.
-- relationships = server-authoritative BOM (web UI / API editable)
-- item_dependencies = CAD-authoritative record (extracted from .kc)
-- BOM merge reconciles the two (see docs/BOM_MERGE.md).
--------------------------------------------------------------------------------
CREATE TABLE item_dependencies (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_item_id UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
child_uuid UUID NOT NULL,
child_part_number TEXT,
child_revision INTEGER,
quantity DECIMAL,
label TEXT,
relationship TEXT NOT NULL DEFAULT 'component',
revision_number INTEGER NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE INDEX idx_item_deps_parent ON item_dependencies (parent_item_id);
CREATE INDEX idx_item_deps_child ON item_dependencies (child_uuid);
--------------------------------------------------------------------------------
-- item_approvals + approval_signatures: ECO workflow
--
-- Server-authoritative. The .kc silo/approvals.json is a read cache
-- packed on checkout for offline display in Create.
--------------------------------------------------------------------------------
CREATE TABLE item_approvals (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
item_id UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
eco_number TEXT,
state TEXT NOT NULL DEFAULT 'draft',
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_by TEXT
);
CREATE INDEX idx_item_approvals_item ON item_approvals (item_id);
CREATE INDEX idx_item_approvals_state ON item_approvals (state);
CREATE TABLE approval_signatures (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
approval_id UUID NOT NULL REFERENCES item_approvals(id) ON DELETE CASCADE,
username TEXT NOT NULL,
role TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
signed_at TIMESTAMPTZ,
comment TEXT
);
CREATE INDEX idx_approval_sigs_approval ON approval_signatures (approval_id);
--------------------------------------------------------------------------------
-- item_macros: registered macros from silo/macros/
--------------------------------------------------------------------------------
CREATE TABLE item_macros (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
item_id UUID NOT NULL REFERENCES items(id) ON DELETE CASCADE,
filename TEXT NOT NULL,
trigger TEXT NOT NULL DEFAULT 'manual',
content TEXT NOT NULL,
revision_number INTEGER NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE(item_id, filename)
);
CREATE INDEX idx_item_macros_item ON item_macros (item_id);
COMMIT;
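The comment above notes that BOM merge reconciles the server-authoritative `relationships` rows with the CAD-authoritative `item_dependencies` rows. A hedged sketch of one plausible reconciliation — row shape and "server edits win" precedence are assumptions here; docs/BOM_MERGE.md holds the real rules:

```typescript
// Sketch of a BOM merge: CAD-extracted rows seed the list, then
// server-edited rows override per child UUID. Shapes are assumed.
interface BomRow {
  childUuid: string;
  quantity: number;
  source: "cad" | "server";
}

export function mergeBom(cad: BomRow[], server: BomRow[]): BomRow[] {
  const byChild = new Map<string, BomRow>();
  for (const row of cad) byChild.set(row.childUuid, row); // CAD baseline
  for (const row of server) byChild.set(row.childUuid, row); // server edits win
  return [...byChild.values()];
}
```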


@@ -77,6 +77,9 @@ if systemctl is-active --quiet silod 2>/dev/null; then
sudo systemctl stop silod
fi
# Clean old frontend assets before extracting
sudo rm -rf "$DEPLOY_DIR/web/dist/assets"
# Extract
echo " Extracting..."
sudo tar -xzf /tmp/silo-deploy.tar.gz -C "$DEPLOY_DIR"

scripts/migrate-storage.sh Executable file

@@ -0,0 +1,108 @@
#!/bin/bash
# Migrate storage from MinIO to filesystem on a remote Silo host.
#
# Builds the migrate-storage binary locally, uploads it to the target host,
# then runs it over SSH using credentials from /etc/silo/silod.env.
#
# Usage: ./scripts/migrate-storage.sh <silo-host> <psql-host> <minio-host> [flags...]
#
# Examples:
# ./scripts/migrate-storage.sh silo.kindred.internal psql.kindred.internal minio.kindred.internal -dry-run -verbose
# ./scripts/migrate-storage.sh silo.kindred.internal psql.kindred.internal minio.kindred.internal
set -euo pipefail
if [ $# -lt 3 ]; then
echo "Usage: $0 <silo-host> <psql-host> <minio-host> [flags...]"
echo " flags are passed to migrate-storage (e.g. -dry-run -verbose)"
exit 1
fi
TARGET="$1"
DB_HOST="$2"
MINIO_HOST="$3"
shift 3
EXTRA_FLAGS="$*"
DEST_DIR="/opt/silo/data"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="${SCRIPT_DIR}/.."
echo "=== Migrate Storage: MinIO -> Filesystem ==="
echo " Target: ${TARGET}"
echo " DB host: ${DB_HOST}"
echo " MinIO: ${MINIO_HOST}"
echo " Dest: ${DEST_DIR}"
[ -n "$EXTRA_FLAGS" ] && echo " Flags: ${EXTRA_FLAGS}"
echo ""
# --- Build locally ---
echo "[1/3] Building migrate-storage binary..."
cd "$PROJECT_DIR"
GOOS=linux GOARCH=amd64 go build -o migrate-storage ./cmd/migrate-storage
echo " Built: $(du -h migrate-storage | cut -f1)"
# --- Upload ---
echo "[2/3] Uploading to ${TARGET}..."
scp migrate-storage "${TARGET}:/tmp/migrate-storage"
rm -f migrate-storage
# --- Run remotely ---
echo "[3/3] Running migration on ${TARGET}..."
ssh "$TARGET" DB_HOST="$DB_HOST" MINIO_HOST="$MINIO_HOST" DEST_DIR="$DEST_DIR" EXTRA_FLAGS="$EXTRA_FLAGS" bash -s <<'REMOTE'
set -euo pipefail
CONFIG_DIR="/etc/silo"
# Source credentials
if [ ! -f "$CONFIG_DIR/silod.env" ]; then
echo "ERROR: $CONFIG_DIR/silod.env not found on $(hostname)"
exit 1
fi
set -a
source "$CONFIG_DIR/silod.env"
set +a
# Ensure destination directory exists
sudo mkdir -p "$DEST_DIR"
sudo chown silo:silo "$DEST_DIR" 2>/dev/null || true
chmod +x /tmp/migrate-storage
# Write temporary config with the provided hosts
cat > /tmp/silo-migrate.yaml <<EOF
database:
host: "${DB_HOST}"
port: 5432
name: "silo"
user: "silo"
password: "${SILO_DB_PASSWORD}"
sslmode: "require"
max_connections: 5
storage:
endpoint: "${MINIO_HOST}:9000"
access_key: "${SILO_MINIO_ACCESS_KEY}"
secret_key: "${SILO_MINIO_SECRET_KEY}"
bucket: "silo"
use_ssl: false
region: "us-east-1"
EOF
chmod 600 /tmp/silo-migrate.yaml
echo " Config written to /tmp/silo-migrate.yaml"
echo " Starting migration..."
echo ""
# Run the migration
/tmp/migrate-storage -config /tmp/silo-migrate.yaml -dest "$DEST_DIR" $EXTRA_FLAGS
# Clean up
rm -f /tmp/silo-migrate.yaml /tmp/migrate-storage
echo ""
echo " Cleaned up temp files."
REMOTE
echo ""
echo "=== Migration complete ==="
echo " Files written to ${TARGET}:${DEST_DIR}"


@@ -1,12 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Silo</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="icon" type="image/svg+xml" href="/favicon.svg" />
<title>Silo</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

web/public/favicon.svg Normal file

@@ -0,0 +1,106 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="1028"
height="1028"
viewBox="0 0 271.99167 271.99167"
version="1.1"
id="svg1"
inkscape:version="1.4.2 (2aeb623e1d, 2025-05-12)"
sodipodi:docname="kindred-logo.svg"
inkscape:export-filename="../3290ed6b/kindred-logo-blue-baack.png"
inkscape:export-xdpi="96"
inkscape:export-ydpi="96"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<sodipodi:namedview
id="namedview1"
pagecolor="#ffffff"
bordercolor="#000000"
borderopacity="0.25"
inkscape:showpageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
inkscape:deskcolor="#d1d1d1"
inkscape:document-units="mm"
showgrid="true"
inkscape:zoom="1.036062"
inkscape:cx="397.6596"
inkscape:cy="478.25323"
inkscape:window-width="2494"
inkscape:window-height="1371"
inkscape:window-x="1146"
inkscape:window-y="1112"
inkscape:window-maximized="1"
inkscape:current-layer="layer1"
inkscape:export-bgcolor="#79c0c500">
<inkscape:grid
type="axonomgrid"
id="grid6"
units="mm"
originx="0"
originy="0"
spacingx="0.99999998"
spacingy="1"
empcolor="#0099e5"
empopacity="0.30196078"
color="#0099e5"
opacity="0.14901961"
empspacing="5"
dotted="false"
gridanglex="30"
gridanglez="30"
enabled="true"
visible="true" />
</sodipodi:namedview>
<defs
id="defs1">
<inkscape:perspective
sodipodi:type="inkscape:persp3d"
inkscape:vp_x="0 : 123.49166 : 1"
inkscape:vp_y="0 : 999.99998 : 0"
inkscape:vp_z="210.00001 : 123.49166 : 1"
inkscape:persp3d-origin="105 : 73.991665 : 1"
id="perspective1" />
</defs>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1">
<path
sodipodi:type="star"
style="fill:#7c4a82;fill-opacity:1;stroke:#12101c;stroke-width:5;stroke-linejoin:miter;stroke-dasharray:none;stroke-opacity:1"
id="path6-81-5"
inkscape:flatsided="true"
sodipodi:sides="6"
sodipodi:cx="61.574867"
sodipodi:cy="103.99491"
sodipodi:r1="25.000006"
sodipodi:r2="22.404818"
sodipodi:arg1="-1.5707963"
sodipodi:arg2="-1.0471974"
inkscape:rounded="0.77946499"
inkscape:randomized="0"
d="m 61.574868,78.994905 c 19.486629,10e-7 11.907325,-4.375912 21.65064,12.500004 9.743314,16.875911 9.743314,8.12409 -1e-6,25.000001 -9.743315,16.87592 -2.164011,12.50001 -21.65064,12.50001 -19.486629,0 -11.907326,4.37591 -21.65064,-12.50001 -9.743314,-16.875912 -9.743314,-8.12409 0,-25.000002 9.743315,-16.875916 2.164012,-12.500003 21.650641,-12.500003 z"
transform="matrix(1.9704344,0,0,1.8525167,-28.510585,-40.025402)" />
<path
sodipodi:type="star"
style="fill:#ff9701;fill-opacity:1;stroke:#12101c;stroke-width:5;stroke-linejoin:miter;stroke-dasharray:none;stroke-opacity:1"
id="path6-81-5-6"
inkscape:flatsided="true"
sodipodi:sides="6"
sodipodi:cx="61.574867"
sodipodi:cy="103.99491"
sodipodi:r1="25.000006"
sodipodi:r2="22.404818"
sodipodi:arg1="-1.5707963"
sodipodi:arg2="-1.0471974"
inkscape:rounded="0.77946499"
inkscape:randomized="0"
d="m 61.574868,78.994905 c 19.486629,10e-7 11.907325,-4.375912 21.65064,12.500004 9.743314,16.875921 9.743314,8.12409 -1e-6,25.000001 -9.743315,16.87592 -2.164011,12.50001 -21.65064,12.50001 -19.48663,0 -11.907326,4.37591 -21.65064,-12.50001 -9.743314,-16.875913 -9.743315,-8.12409 10e-7,-25.000002 9.743315,-16.875916 2.164011,-12.500003 21.65064,-12.500003 z"
transform="matrix(1.9704344,0,0,1.8525167,56.811738,-86.338327)" />
</g>
</svg>



@@ -352,6 +352,35 @@ export interface UpdateSchemaValueRequest {
description: string;
}
// Admin settings — module discovery
export interface ModuleInfo {
enabled: boolean;
required: boolean;
name: string;
version?: string;
depends_on?: string[];
config?: Record<string, unknown>;
}
export interface ModulesResponse {
modules: Record<string, ModuleInfo>;
server: { version: string; read_only: boolean };
}
// Admin settings — config management
export type AdminSettingsResponse = Record<string, Record<string, unknown>>;
export interface UpdateSettingsResponse {
updated: string[];
restart_required: boolean;
}
export interface TestConnectivityResponse {
success: boolean;
message: string;
latency_ms: number;
}
// Revision comparison
export interface RevisionComparison {
from: number;

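`ModuleInfo.enabled` is what gates nav entries client-side. A minimal, self-contained sketch of that filtering (mirrors the Sidebar component's logic; the local `ModuleInfo` and `NavItem` shapes are trimmed copies, not imports):

```typescript
// Trimmed local shapes; ModuleInfo mirrors web/src/api/types.ts.
interface ModuleInfo {
  enabled: boolean;
  required: boolean;
  name: string;
}
interface NavItem {
  moduleId: string | null; // null = always visible (e.g. Settings)
  path: string;
  label: string;
}

export function visibleNav(
  items: NavItem[],
  modules: Record<string, ModuleInfo>,
): NavItem[] {
  // An entry shows when it is module-agnostic or its module is enabled;
  // an unknown module id is treated as disabled.
  return items.filter(
    (item) => item.moduleId === null || modules[item.moduleId]?.enabled,
  );
}
```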

@@ -1,24 +1,67 @@
import { NavLink, Outlet } from "react-router-dom";
import { useCallback, useEffect, useState } from "react";
import { Outlet } from "react-router-dom";
import { useAuth } from "../hooks/useAuth";
import { useDensity } from "../hooks/useDensity";
const navLinks = [
{ to: "/", label: "Items" },
{ to: "/projects", label: "Projects" },
{ to: "/schemas", label: "Schemas" },
{ to: "/audit", label: "Audit" },
{ to: "/settings", label: "Settings" },
];
const roleBadgeStyle: Record<string, React.CSSProperties> = {
admin: { background: "rgba(203,166,247,0.2)", color: "var(--ctp-mauve)" },
editor: { background: "rgba(137,180,250,0.2)", color: "var(--ctp-blue)" },
viewer: { background: "rgba(148,226,213,0.2)", color: "var(--ctp-teal)" },
};
import { useModules } from "../hooks/useModules";
import { useSSE } from "../hooks/useSSE";
import { Sidebar } from "./Sidebar";
export function AppShell() {
const { user, loading, logout } = useAuth();
const [density, toggleDensity] = useDensity();
const { modules, refresh: refreshModules } = useModules();
const { on } = useSSE();
const [toast, setToast] = useState<string | null>(null);
// Listen for settings.changed SSE events
useEffect(() => {
return on("settings.changed", (raw) => {
try {
const data = JSON.parse(raw) as {
module: string;
changed_keys: string[];
updated_by: string;
};
refreshModules();
if (data.updated_by !== user?.username) {
setToast(`Settings updated by ${data.updated_by}`);
}
} catch {
// ignore malformed events
}
});
}, [on, refreshModules, user?.username]);
// Auto-dismiss toast
useEffect(() => {
if (!toast) return;
const timer = setTimeout(() => setToast(null), 5000);
return () => clearTimeout(timer);
}, [toast]);
const [sidebarOpen, setSidebarOpen] = useState(() => {
return localStorage.getItem("silo-sidebar") !== "closed";
});
const toggleSidebar = useCallback(() => {
setSidebarOpen((prev) => {
const next = !prev;
localStorage.setItem("silo-sidebar", next ? "open" : "closed");
return next;
});
}, []);
// Ctrl+J to toggle sidebar
useEffect(() => {
const handler = (e: KeyboardEvent) => {
if (e.ctrlKey && e.key === "j") {
e.preventDefault();
toggleSidebar();
}
};
window.addEventListener("keydown", handler);
return () => window.removeEventListener("keydown", handler);
}, [toggleSidebar]);
if (loading) {
return (
@@ -36,119 +79,40 @@ export function AppShell() {
}
return (
<div style={{ display: "flex", flexDirection: "column", height: "100vh" }}>
<header
style={{
backgroundColor: "var(--ctp-mantle)",
borderBottom: "1px solid var(--ctp-surface0)",
padding: "var(--d-header-py) var(--d-header-px)",
display: "flex",
alignItems: "center",
justifyContent: "space-between",
flexShrink: 0,
}}
>
<h1
style={{
fontSize: "var(--d-header-logo)",
fontWeight: 600,
color: "var(--ctp-mauve)",
}}
>
Silo
</h1>
<nav style={{ display: "flex", gap: "var(--d-nav-gap)" }}>
{navLinks.map((link) => (
<NavLink
key={link.to}
to={link.to}
end={link.to === "/"}
style={({ isActive }) => ({
color: isActive ? "var(--ctp-mauve)" : "var(--ctp-subtext1)",
backgroundColor: isActive
? "var(--ctp-surface1)"
: "transparent",
fontWeight: 500,
padding: "var(--d-nav-py) var(--d-nav-px)",
borderRadius: "var(--d-nav-radius)",
textDecoration: "none",
transition: "all 0.15s ease",
})}
>
{link.label}
</NavLink>
))}
</nav>
{user && (
<div
style={{
display: "flex",
alignItems: "center",
gap: "var(--d-user-gap)",
}}
>
<span
style={{
color: "var(--ctp-subtext1)",
fontSize: "var(--d-user-font)",
}}
>
{user.display_name}
</span>
<span
style={{
display: "inline-block",
padding: "0.15rem 0.5rem",
borderRadius: "1rem",
fontSize: "0.75rem",
fontWeight: 600,
...roleBadgeStyle[user.role],
}}
>
{user.role}
</span>
<button
onClick={toggleDensity}
title={`Switch to ${density === "comfortable" ? "compact" : "comfortable"} view`}
style={{
padding: "0.25rem 0.5rem",
fontSize: "var(--font-sm)",
borderRadius: "0.375rem",
cursor: "pointer",
border: "1px solid var(--ctp-surface1)",
background: "var(--ctp-surface0)",
color: "var(--ctp-subtext1)",
fontFamily: "'JetBrains Mono', monospace",
letterSpacing: "0.05em",
}}
>
{density === "comfortable" ? "COM" : "CMP"}
</button>
<button
onClick={logout}
style={{
padding: "0.35rem 0.75rem",
fontSize: "var(--font-table)",
borderRadius: "0.4rem",
cursor: "pointer",
border: "none",
background: "var(--ctp-surface1)",
color: "var(--ctp-subtext1)",
}}
>
Logout
</button>
</div>
)}
</header>
<main
style={{ flex: 1, padding: "1rem 1rem 0 1rem", overflow: "hidden" }}
>
<div style={{ display: "flex", height: "100vh" }}>
<Sidebar
open={sidebarOpen}
onToggle={toggleSidebar}
modules={modules}
user={user}
density={density}
onToggleDensity={toggleDensity}
onLogout={logout}
/>
<main style={{ flex: 1, overflow: "auto", padding: "1rem" }}>
<Outlet />
</main>
{toast && (
<div
style={{
position: "fixed",
bottom: "1rem",
right: "1rem",
padding: "0.5rem 1rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-text)",
borderRadius: "0.5rem",
fontSize: "var(--font-body)",
border: "1px solid var(--ctp-surface2)",
boxShadow: "0 2px 8px rgba(0,0,0,0.3)",
zIndex: 1000,
cursor: "pointer",
}}
onClick={() => setToast(null)}
>
{toast}
</div>
)}
</div>
);
}


@@ -80,7 +80,7 @@ export function ContextMenu({ x, y, items, onClose }: ContextMenuProps) {
alignItems: "center",
gap: "0.5rem",
width: "100%",
padding: "0.35rem 0.75rem",
padding: "0.25rem 0.75rem",
background: "none",
border: "none",
color: item.disabled ? "var(--ctp-overlay0)" : "var(--ctp-text)",


@@ -1,4 +1,4 @@
import type { ReactNode } from 'react';
import type { ReactNode } from "react";
interface PageFooterProps {
stats?: ReactNode;
@@ -8,32 +8,40 @@ interface PageFooterProps {
onPageChange?: (page: number) => void;
}
export function PageFooter({ stats, page, pageSize, itemCount, onPageChange }: PageFooterProps) {
export function PageFooter({
stats,
page,
pageSize,
itemCount,
onPageChange,
}: PageFooterProps) {
const hasPagination = page !== undefined && onPageChange !== undefined;
return (
<div style={{
position: 'fixed',
bottom: 0,
left: 0,
right: 0,
height: 'var(--d-footer-h)',
backgroundColor: 'var(--ctp-surface0)',
borderTop: '1px solid var(--ctp-surface1)',
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
padding: '0 var(--d-footer-px)',
fontSize: 'var(--d-footer-font)',
color: 'var(--ctp-subtext0)',
zIndex: 100,
}}>
<div style={{ display: 'flex', gap: '1.5rem', alignItems: 'center' }}>
<div
style={{
position: "fixed",
bottom: 0,
left: 0,
right: 0,
height: "var(--d-footer-h)",
backgroundColor: "var(--ctp-surface0)",
borderTop: "1px solid var(--ctp-surface1)",
display: "flex",
alignItems: "center",
justifyContent: "space-between",
padding: "0 var(--d-footer-px)",
fontSize: "var(--d-footer-font)",
color: "var(--ctp-subtext0)",
zIndex: 100,
}}
>
<div style={{ display: "flex", gap: "1.5rem", alignItems: "center" }}>
{stats}
</div>
{hasPagination && (
<div style={{ display: 'flex', gap: '0.5rem', alignItems: 'center' }}>
<div style={{ display: "flex", gap: "0.5rem", alignItems: "center" }}>
<button
onClick={() => onPageChange(Math.max(1, page - 1))}
disabled={page <= 1}
@@ -47,7 +55,11 @@ export function PageFooter({ stats, page, pageSize, itemCount, onPageChange }: P
</span>
<button
onClick={() => onPageChange(page + 1)}
disabled={pageSize !== undefined && itemCount !== undefined && itemCount < pageSize}
disabled={
pageSize !== undefined &&
itemCount !== undefined &&
itemCount < pageSize
}
style={pageBtnStyle}
>
Next
@@ -59,11 +71,11 @@ export function PageFooter({ stats, page, pageSize, itemCount, onPageChange }: P
}
const pageBtnStyle: React.CSSProperties = {
padding: '0.15rem 0.4rem',
fontSize: 'inherit',
border: 'none',
borderRadius: '0.25rem',
backgroundColor: 'var(--ctp-surface1)',
color: 'var(--ctp-text)',
cursor: 'pointer',
padding: "0.25rem 0.5rem",
fontSize: "inherit",
border: "none",
borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-text)",
cursor: "pointer",
};
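The Next-button disable condition above infers the last page from a short result set, since the API returns no total count. As a pure helper (a sketch; the component inlines this expression):

```typescript
// Without a total count, the only last-page signal is a partially
// filled page: itemCount < pageSize. Missing inputs leave Next enabled.
export function nextDisabled(
  pageSize: number | undefined,
  itemCount: number | undefined,
): boolean {
  return (
    pageSize !== undefined && itemCount !== undefined && itemCount < pageSize
  );
}
```

One tradeoff of this heuristic: when the final page is exactly full, Next stays enabled and the following click fetches an empty page.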


@@ -0,0 +1,335 @@
import { useEffect, useRef, useState, useCallback } from "react";
import { NavLink, useNavigate } from "react-router-dom";
import {
Package,
FolderKanban,
FileCode2,
ClipboardCheck,
Settings2,
ChevronLeft,
ChevronRight,
LogOut,
} from "lucide-react";
import type { ModuleInfo } from "../api/types";
interface NavItem {
moduleId: string | null;
path: string;
label: string;
icon: React.ComponentType<{ size?: number }>;
}
const allNavItems: NavItem[] = [
{ moduleId: "core", path: "/", label: "Items", icon: Package },
{
moduleId: "projects",
path: "/projects",
label: "Projects",
icon: FolderKanban,
},
{ moduleId: "schemas", path: "/schemas", label: "Schemas", icon: FileCode2 },
{ moduleId: "audit", path: "/audit", label: "Audit", icon: ClipboardCheck },
{ moduleId: null, path: "/settings", label: "Settings", icon: Settings2 },
];
interface SidebarProps {
open: boolean;
onToggle: () => void;
modules: Record<string, ModuleInfo>;
user: { display_name: string; role: string } | null;
density: string;
onToggleDensity: () => void;
onLogout: () => void;
}
const roleBadgeStyle: Record<string, React.CSSProperties> = {
admin: { background: "rgba(203,166,247,0.2)", color: "var(--ctp-mauve)" },
editor: { background: "rgba(137,180,250,0.2)", color: "var(--ctp-blue)" },
viewer: { background: "rgba(148,226,213,0.2)", color: "var(--ctp-teal)" },
};
export function Sidebar({
open,
onToggle,
modules,
user,
density,
onToggleDensity,
onLogout,
}: SidebarProps) {
const navigate = useNavigate();
const [focusIndex, setFocusIndex] = useState(-1);
const navRefs = useRef<(HTMLAnchorElement | null)[]>([]);
const visibleItems = allNavItems.filter(
(item) => item.moduleId === null || modules[item.moduleId]?.enabled,
);
// Focus the item at focusIndex when it changes
useEffect(() => {
if (focusIndex >= 0 && focusIndex < navRefs.current.length) {
navRefs.current[focusIndex]?.focus();
}
}, [focusIndex]);
// Reset focus when sidebar closes
useEffect(() => {
if (!open) setFocusIndex(-1);
}, [open]);
const handleKeyDown = useCallback(
(e: React.KeyboardEvent) => {
if (!open) return;
switch (e.key) {
case "ArrowDown":
e.preventDefault();
setFocusIndex((i) => (i + 1) % visibleItems.length);
break;
case "ArrowUp":
e.preventDefault();
setFocusIndex(
(i) => (i - 1 + visibleItems.length) % visibleItems.length,
);
break;
case "Enter": {
const target = visibleItems[focusIndex];
if (focusIndex >= 0 && target) {
e.preventDefault();
navigate(target.path);
}
break;
}
case "Escape":
e.preventDefault();
onToggle();
break;
}
},
[open, focusIndex, visibleItems, navigate, onToggle],
);
return (
<nav
onKeyDown={handleKeyDown}
style={{
width: open ? "var(--d-sidebar-w)" : "var(--d-sidebar-collapsed)",
minWidth: open ? "var(--d-sidebar-w)" : "var(--d-sidebar-collapsed)",
height: "100vh",
backgroundColor: "var(--ctp-mantle)",
borderRight: "1px solid var(--ctp-surface0)",
display: "flex",
flexDirection: "column",
transition: "width 0.2s ease, min-width 0.2s ease",
overflow: "hidden",
flexShrink: 0,
}}
>
{/* Logo */}
<div
style={{
padding: open ? "0.75rem 1rem" : "0.75rem 0",
display: "flex",
alignItems: "center",
justifyContent: open ? "flex-start" : "center",
borderBottom: "1px solid var(--ctp-surface0)",
minHeight: 44,
}}
>
<span
style={{
fontSize: "1.25rem",
fontWeight: 700,
color: "var(--ctp-mauve)",
whiteSpace: "nowrap",
}}
>
{open ? "Silo" : "S"}
</span>
</div>
{/* Nav items */}
<div
style={{
flex: 1,
padding: "0.5rem 0.5rem",
display: "flex",
flexDirection: "column",
gap: "2px",
}}
>
{visibleItems.map((item, i) => (
<NavLink
key={item.path}
to={item.path}
end={item.path === "/"}
ref={(el) => {
navRefs.current[i] = el;
}}
title={open ? undefined : item.label}
style={({ isActive }) => ({
display: "flex",
alignItems: "center",
gap: "0.75rem",
padding: "var(--d-nav-py) var(--d-nav-px)",
borderRadius: "var(--d-nav-radius)",
textDecoration: "none",
color: isActive ? "var(--ctp-mauve)" : "var(--ctp-subtext1)",
backgroundColor: isActive ? "var(--ctp-surface1)" : "transparent",
fontWeight: 500,
fontSize: "var(--font-body)",
whiteSpace: "nowrap",
transition: "background-color 0.15s ease, color 0.15s ease",
outline: focusIndex === i ? "1px solid var(--ctp-mauve)" : "none",
outlineOffset: -1,
justifyContent: open ? "flex-start" : "center",
})}
onMouseEnter={(e) => {
const target = e.currentTarget;
if (
!target.style.backgroundColor ||
target.style.backgroundColor === "transparent"
) {
target.style.backgroundColor = "var(--ctp-surface0)";
}
}}
onMouseLeave={(e) => {
const target = e.currentTarget;
// Let NavLink's isActive styling handle active items
const isActive = target.getAttribute("aria-current") === "page";
if (!isActive) {
target.style.backgroundColor = "transparent";
}
}}
>
<item.icon size={16} />
{open && <span>{item.label}</span>}
</NavLink>
))}
</div>
{/* Bottom section */}
<div
style={{
borderTop: "1px solid var(--ctp-surface0)",
padding: "0.5rem",
display: "flex",
flexDirection: "column",
gap: "4px",
}}
>
{/* Toggle sidebar */}
<button
onClick={onToggle}
title={open ? "Collapse sidebar (Ctrl+J)" : "Expand sidebar (Ctrl+J)"}
style={{
...btnStyle,
justifyContent: open ? "flex-start" : "center",
}}
>
{open ? <ChevronLeft size={16} /> : <ChevronRight size={16} />}
{open && <span>Collapse</span>}
</button>
{/* Density toggle */}
<button
onClick={onToggleDensity}
title={`Switch to ${density === "comfortable" ? "compact" : "comfortable"} view`}
style={{
...btnStyle,
justifyContent: open ? "flex-start" : "center",
fontFamily: "'JetBrains Mono', monospace",
letterSpacing: "0.05em",
}}
>
<span
style={{
width: 16,
textAlign: "center",
fontSize: "var(--font-sm)",
}}
>
{density === "comfortable" ? "CO" : "CP"}
</span>
{open && (
<span>{density === "comfortable" ? "Comfortable" : "Compact"}</span>
)}
</button>
{/* User */}
{user && (
<div
style={{
display: "flex",
alignItems: "center",
gap: "0.5rem",
padding: "0.375rem var(--d-nav-px)",
justifyContent: open ? "flex-start" : "center",
}}
>
<span
style={{
display: "inline-flex",
alignItems: "center",
justifyContent: "center",
width: 20,
height: 20,
borderRadius: "50%",
fontSize: "var(--font-xs)",
fontWeight: 600,
flexShrink: 0,
...roleBadgeStyle[user.role],
}}
>
{user.role.charAt(0).toUpperCase()}
</span>
{open && (
<span
style={{
color: "var(--ctp-subtext1)",
fontSize: "var(--font-body)",
overflow: "hidden",
textOverflow: "ellipsis",
whiteSpace: "nowrap",
}}
>
{user.display_name}
</span>
)}
</div>
)}
{/* Logout */}
<button
onClick={onLogout}
title="Logout"
style={{
...btnStyle,
justifyContent: open ? "flex-start" : "center",
color: "var(--ctp-overlay1)",
}}
>
<LogOut size={16} />
{open && <span>Logout</span>}
</button>
</div>
</nav>
);
}
const btnStyle: React.CSSProperties = {
display: "flex",
alignItems: "center",
gap: "0.75rem",
padding: "var(--d-nav-py) var(--d-nav-px)",
borderRadius: "var(--d-nav-radius)",
border: "none",
background: "transparent",
color: "var(--ctp-subtext1)",
fontSize: "var(--font-body)",
fontWeight: 500,
cursor: "pointer",
whiteSpace: "nowrap",
width: "100%",
textAlign: "left",
};


@@ -124,7 +124,7 @@ export function TagInput({
padding: "0.25rem 0.5rem",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
cursor: "text",
minHeight: "1.8rem",
}}
@@ -137,7 +137,7 @@ export function TagInput({
display: "inline-flex",
alignItems: "center",
gap: "0.25rem",
padding: "0.15rem 0.5rem",
padding: "0.25rem 0.5rem",
borderRadius: "1rem",
backgroundColor: "rgba(203,166,247,0.15)",
color: "var(--ctp-mauve)",
@@ -187,7 +187,7 @@ export function TagInput({
background: "transparent",
color: "var(--ctp-text)",
fontSize: "var(--font-body)",
padding: "0.15rem 0",
padding: "0.25rem 0",
}}
/>
</div>
@@ -202,7 +202,7 @@ export function TagInput({
marginTop: "0.25rem",
backgroundColor: "var(--ctp-surface0)",
border: "1px solid var(--ctp-surface1)",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
maxHeight: "160px",
overflowY: "auto",
}}


@@ -218,7 +218,7 @@ export function AuditDetailPanel({
<span
style={{
display: "inline-block",
padding: "0.15rem 0.5rem",
padding: "0.25rem 0.5rem",
borderRadius: "1rem",
fontSize: "0.75rem",
fontWeight: 600,
@@ -477,10 +477,10 @@ function FieldRow({
placeholder="---"
style={{
flex: 1,
padding: "0.25rem 0.4rem",
padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
border: "1px solid var(--ctp-surface1)",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface0)",
color: "var(--ctp-text)",
outline: "none",
@@ -495,7 +495,7 @@ const closeBtnStyle: React.CSSProperties = {
padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
border: "none",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-subtext1)",
cursor: "pointer",


@@ -70,7 +70,7 @@ export function AuditSummaryBar({
style={{
display: "flex",
gap: "1.5rem",
marginTop: "0.4rem",
marginTop: "0.5rem",
fontSize: "var(--font-table)",
color: "var(--ctp-subtext0)",
}}


@@ -103,7 +103,7 @@ export function AuditTable({
<span
style={{
display: "inline-block",
padding: "0.15rem 0.5rem",
padding: "0.25rem 0.5rem",
borderRadius: "1rem",
fontSize: "0.75rem",
fontWeight: 600,


@@ -97,7 +97,7 @@ export function AuditToolbar({
const selectStyle: React.CSSProperties = {
padding: "var(--d-input-py) var(--d-input-px)",
fontSize: "var(--d-input-font)",
borderRadius: "0.4rem",
borderRadius: "0.5rem",
border: "1px solid var(--ctp-surface1)",
backgroundColor: "var(--ctp-surface0)",
color: "var(--ctp-text)",
@@ -106,7 +106,7 @@ const selectStyle: React.CSSProperties = {
const btnStyle: React.CSSProperties = {
padding: "var(--d-input-py) var(--d-input-px)",
fontSize: "var(--d-input-font)",
borderRadius: "0.4rem",
borderRadius: "0.5rem",
border: "none",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-subtext1)",


@@ -118,11 +118,11 @@ export function BOMTab({ partNumber, isEditor }: BOMTabProps) {
};
const inputStyle: React.CSSProperties = {
padding: "0.25rem 0.4rem",
padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
color: "var(--ctp-text)",
width: "100%",
};
@@ -240,7 +240,7 @@ export function BOMTab({ partNumber, isEditor }: BOMTabProps) {
...toolBtnStyle,
display: "inline-flex",
alignItems: "center",
gap: "0.35rem",
gap: "0.25rem",
}}
>
<Download size={14} /> Export CSV
@@ -256,7 +256,7 @@ export function BOMTab({ partNumber, isEditor }: BOMTabProps) {
...toolBtnStyle,
display: "inline-flex",
alignItems: "center",
gap: "0.35rem",
gap: "0.25rem",
}}
>
<Plus size={14} /> Add
@@ -267,9 +267,9 @@ export function BOMTab({ partNumber, isEditor }: BOMTabProps) {
{isEditor && assemblyCount > 0 && (
<div
style={{
padding: "0.35rem 0.5rem",
padding: "0.25rem 0.5rem",
marginBottom: "0.5rem",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
backgroundColor: "rgba(148,226,213,0.1)",
border: "1px solid rgba(148,226,213,0.3)",
fontSize: "0.75rem",
@@ -438,7 +438,7 @@ const toolBtnStyle: React.CSSProperties = {
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-text)",
cursor: "pointer",
@@ -451,16 +451,16 @@ const actionBtnStyle: React.CSSProperties = {
cursor: "pointer",
fontSize: "0.75rem",
fontWeight: 500,
padding: "0.15rem 0.25rem",
borderRadius: "0.375rem",
padding: "0.25rem 0.25rem",
borderRadius: "0.25rem",
};
const saveBtnStyle: React.CSSProperties = {
padding: "0.25rem 0.4rem",
padding: "0.25rem 0.5rem",
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
borderRadius: "0.375rem",
borderRadius: "0.25rem",
backgroundColor: "var(--ctp-green)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -468,7 +468,7 @@ const saveBtnStyle: React.CSSProperties = {
};
const sourceBadgeBase: React.CSSProperties = {
padding: "0.15rem 0.4rem",
padding: "0.25rem 0.5rem",
borderRadius: "1rem",
fontSize: "var(--font-sm)",
fontWeight: 500,
@@ -487,11 +487,11 @@ const manualBadge: React.CSSProperties = {
};
const cancelBtnStyle: React.CSSProperties = {
- padding: "0.25rem 0.4rem",
+ padding: "0.25rem 0.5rem",
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-subtext1)",
cursor: "pointer",


@@ -62,7 +62,7 @@ export function CategoryPicker({
<div
style={{
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
backgroundColor: "var(--ctp-base)",
overflow: "hidden",
}}
@@ -74,7 +74,7 @@ export function CategoryPicker({
display: "flex",
flexWrap: "wrap",
gap: "0.25rem",
- padding: "0.4rem 0.5rem",
+ padding: "0.5rem 0.5rem",
borderBottom: "1px solid var(--ctp-surface1)",
backgroundColor: "var(--ctp-mantle)",
}}
@@ -99,7 +99,7 @@ export function CategoryPicker({
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
cursor: "pointer",
backgroundColor: isActive
? "rgba(203,166,247,0.2)"
@@ -133,7 +133,7 @@ export function CategoryPicker({
disabled={isMultiStage && !selectedDomain}
style={{
width: "100%",
- padding: "0.4rem 0.5rem",
+ padding: "0.5rem 0.5rem",
fontSize: "var(--font-table)",
border: "none",
borderBottom: "1px solid var(--ctp-surface1)",


@@ -1,5 +1,5 @@
import { useState, useCallback } from "react";
- import { get, post, put } from "../../api/client";
+ import { get, post } from "../../api/client";
import type {
Project,
FormFieldDescriptor,
@@ -95,34 +95,9 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
[],
);
- const handleFilesAdded = useCallback(
- (files: PendingAttachment[]) => {
- const startIdx = attachments.length;
- setAttachments((prev) => [...prev, ...files]);
- files.forEach((f, i) => {
- const idx = startIdx + i;
- setAttachments((prev) =>
- prev.map((a, j) =>
- j === idx ? { ...a, uploadStatus: "uploading" } : a,
- ),
- );
- upload(f.file, (progress) => {
- setAttachments((prev) =>
- prev.map((a, j) =>
- j === idx ? { ...a, uploadProgress: progress } : a,
- ),
- );
- }).then((result) => {
- setAttachments((prev) =>
- prev.map((a, j) => (j === idx ? result : a)),
- );
- });
- });
- },
- [attachments.length, upload],
- );
+ const handleFilesAdded = useCallback((files: PendingAttachment[]) => {
+ setAttachments((prev) => [...prev, ...files]);
+ }, []);
const handleFileRemoved = useCallback((index: number) => {
setAttachments((prev) => prev.filter((_, i) => i !== index));
@@ -136,24 +111,15 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
const file = input.files?.[0];
if (!file) return;
- const pending: PendingAttachment = {
+ setThumbnailFile({
file,
objectKey: "",
uploadProgress: 0,
- uploadStatus: "uploading",
- };
- setThumbnailFile(pending);
- upload(file, (progress) => {
- setThumbnailFile((prev) =>
- prev ? { ...prev, uploadProgress: progress } : null,
- );
- }).then((result) => {
- setThumbnailFile(result);
+ uploadStatus: "pending",
});
};
input.click();
- }, [upload]);
+ }, []);
const handleSubmit = async () => {
if (!category) {
@@ -188,33 +154,24 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
});
const pn = result.part_number;
const encodedPN = encodeURIComponent(pn);
- // Associate uploaded attachments.
- const completed = attachments.filter(
- (a) => a.uploadStatus === "complete" && a.objectKey,
- );
- for (const att of completed) {
+ // Upload attachments via direct multipart POST.
+ for (const att of attachments) {
try {
- await post(`/api/items/${encodeURIComponent(pn)}/files`, {
- object_key: att.objectKey,
- filename: att.file.name,
- content_type: att.file.type || "application/octet-stream",
- size: att.file.size,
- });
+ await upload(att.file, `/api/items/${encodedPN}/files/upload`);
} catch {
- // File association failure is non-blocking.
+ // File upload failure is non-blocking.
}
}
- // Set thumbnail.
- if (
- thumbnailFile?.uploadStatus === "complete" &&
- thumbnailFile.objectKey
- ) {
+ // Upload thumbnail via direct multipart POST.
+ if (thumbnailFile) {
try {
- await put(`/api/items/${encodeURIComponent(pn)}/thumbnail`, {
- object_key: thumbnailFile.objectKey,
- });
+ await upload(
+ thumbnailFile.file,
+ `/api/items/${encodedPN}/thumbnail/upload`,
+ );
} catch {
// Thumbnail failure is non-blocking.
}
@@ -382,7 +339,7 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
onClick={handleThumbnailSelect}
style={{
aspectRatio: "4/3",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
border: "1px dashed var(--ctp-surface1)",
display: "flex",
alignItems: "center",
@@ -392,21 +349,12 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
backgroundColor: "var(--ctp-mantle)",
}}
>
- {thumbnailFile?.uploadStatus === "complete" ? (
+ {thumbnailFile ? (
<img
src={URL.createObjectURL(thumbnailFile.file)}
alt="Thumbnail preview"
style={{ width: "100%", height: "100%", objectFit: "cover" }}
/>
- ) : thumbnailFile?.uploadStatus === "uploading" ? (
- <span
- style={{
- fontSize: "var(--font-table)",
- color: "var(--ctp-subtext0)",
- }}
- >
- Uploading... {thumbnailFile.uploadProgress}%
- </span>
) : (
<span
style={{
@@ -414,7 +362,7 @@ export function CreateItemPane({ onCreated, onCancel }: CreateItemPaneProps) {
color: "var(--ctp-subtext0)",
}}
>
- Click to upload
+ Click to select
</span>
)}
</div>
@@ -619,7 +567,7 @@ function SidebarSection({
textTransform: "uppercase",
letterSpacing: "0.05em",
color: "var(--ctp-subtext0)",
- marginBottom: "0.4rem",
+ marginBottom: "0.5rem",
}}
>
{title}
@@ -636,7 +584,7 @@ function MetaRow({ label, value }: { label: string; value: string }) {
display: "flex",
justifyContent: "space-between",
fontSize: "var(--font-table)",
- padding: "0.15rem 0",
+ padding: "0.25rem 0",
}}
>
<span style={{ color: "var(--ctp-subtext0)" }}>{label}</span>
@@ -686,7 +634,7 @@ const actionBtnStyle: React.CSSProperties = {
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-crust)",
cursor: "pointer",
};
@@ -698,17 +646,17 @@ const cancelBtnStyle: React.CSSProperties = {
color: "var(--ctp-subtext1)",
fontSize: "0.75rem",
fontWeight: 500,
- padding: "0.25rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
};
const inputStyle: React.CSSProperties = {
width: "100%",
- padding: "0.35rem 0.5rem",
+ padding: "0.25rem 0.5rem",
fontSize: "var(--font-body)",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
boxSizing: "border-box",
};
@@ -723,7 +671,7 @@ const errorStyle: React.CSSProperties = {
color: "var(--ctp-red)",
backgroundColor: "rgba(243,139,168,0.1)",
padding: "0.5rem",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
marginBottom: "0.5rem",
fontSize: "var(--font-body)",
};


@@ -73,7 +73,7 @@ export function DeleteItemPane({
color: "var(--ctp-red)",
backgroundColor: "rgba(243,139,168,0.1)",
padding: "0.5rem 1rem",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
fontSize: "var(--font-body)",
width: "100%",
textAlign: "center",
@@ -125,7 +125,7 @@ export function DeleteItemPane({
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-text)",
cursor: "pointer",
@@ -141,7 +141,7 @@ export function DeleteItemPane({
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-red)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -163,6 +163,6 @@ const headerBtnStyle: React.CSSProperties = {
color: "var(--ctp-subtext1)",
fontSize: "0.75rem",
fontWeight: 500,
- padding: "0.25rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
};


@@ -93,7 +93,7 @@ export function EditItemPane({
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-blue)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -114,7 +114,7 @@ export function EditItemPane({
color: "var(--ctp-red)",
backgroundColor: "rgba(243,139,168,0.1)",
padding: "0.5rem",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
marginBottom: "0.5rem",
fontSize: "var(--font-body)",
}}
@@ -208,11 +208,11 @@ function FormGroup({
const inputStyle: React.CSSProperties = {
width: "100%",
- padding: "0.35rem 0.5rem",
+ padding: "0.25rem 0.5rem",
fontSize: "var(--font-body)",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
};
@@ -223,6 +223,6 @@ const headerBtnStyle: React.CSSProperties = {
color: "var(--ctp-subtext1)",
fontSize: "0.75rem",
fontWeight: 500,
- padding: "0.25rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
};


@@ -143,8 +143,8 @@ function FileRow({
display: "flex",
alignItems: "center",
gap: "0.5rem",
- padding: "0.25rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
position: "relative",
}}
>
@@ -153,14 +153,14 @@ function FileRow({
style={{
width: 28,
height: 28,
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: color,
opacity: 0.8,
display: "flex",
alignItems: "center",
justifyContent: "center",
fontSize: "var(--font-xs)",
- fontWeight: 700,
+ fontWeight: 600,
color: "var(--ctp-crust)",
flexShrink: 0,
}}
@@ -239,7 +239,7 @@ function FileRow({
cursor: "pointer",
fontSize: "var(--font-table)",
color: hovered ? "var(--ctp-red)" : "var(--ctp-overlay0)",
- padding: "0 0.2rem",
+ padding: "0 0.25rem",
flexShrink: 0,
transition: "all 0.15s ease",
}}


@@ -90,7 +90,7 @@ export function ImportItemsPane({
color: "var(--ctp-red)",
backgroundColor: "rgba(243,139,168,0.1)",
padding: "0.5rem",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
marginBottom: "0.5rem",
fontSize: "var(--font-body)",
}}
@@ -164,7 +164,7 @@ export function ImportItemsPane({
style={{
display: "flex",
alignItems: "center",
- gap: "0.4rem",
+ gap: "0.5rem",
fontSize: "var(--font-body)",
color: "var(--ctp-subtext1)",
marginBottom: "0.75rem",
@@ -187,11 +187,11 @@ export function ImportItemsPane({
onClick={() => void doImport(true)}
disabled={!file || importing}
style={{
- padding: "0.4rem 0.75rem",
+ padding: "0.5rem 0.75rem",
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-yellow)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -205,11 +205,11 @@ export function ImportItemsPane({
onClick={() => void doImport(false)}
disabled={importing || (result?.error_count ?? 0) > 0}
style={{
padding: "0.4rem 0.75rem",
padding: "0.5rem 0.75rem",
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-green)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -227,7 +227,7 @@ export function ImportItemsPane({
style={{
padding: "0.5rem",
backgroundColor: "var(--ctp-surface0)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
fontSize: "var(--font-table)",
}}
>
@@ -262,7 +262,7 @@ export function ImportItemsPane({
style={{
color: "var(--ctp-red)",
fontSize: "0.75rem",
- padding: "0.15rem 0",
+ padding: "0.25rem 0",
}}
>
Row {err.row}
@@ -296,6 +296,6 @@ const headerBtnStyle: React.CSSProperties = {
color: "var(--ctp-subtext1)",
fontSize: "0.75rem",
fontWeight: 500,
- padding: "0.25rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
};


@@ -1,5 +1,5 @@
import { useState, useEffect } from "react";
- import { X } from "lucide-react";
+ import { X, Pencil, Trash2 } from "lucide-react";
import { get } from "../../api/client";
import type { Item } from "../../api/types";
import { MainTab } from "./MainTab";
@@ -103,7 +103,7 @@ export function ItemDetail({
</span>
<span
style={{
- padding: "0.15rem 0.5rem",
+ padding: "0.25rem 0.5rem",
borderRadius: "1rem",
fontSize: "var(--font-sm)",
fontWeight: 500,
@@ -114,22 +114,6 @@ export function ItemDetail({
{item.item_type}
</span>
<span style={{ flex: 1 }} />
- {isEditor && (
- <>
- <button
- onClick={() => onEdit(item.part_number)}
- style={headerBtnStyle}
- >
- Edit
- </button>
- <button
- onClick={() => onDelete(item.part_number)}
- style={{ ...headerBtnStyle, color: "var(--ctp-red)" }}
- >
- Delete
- </button>
- </>
- )}
<button
onClick={onClose}
style={{
@@ -142,11 +126,11 @@ export function ItemDetail({
</button>
</div>
- {/* Tabs */}
+ {/* Tabs + actions */}
<div
style={{
display: "flex",
- gap: "0",
+ alignItems: "center",
borderBottom: "1px solid var(--ctp-surface1)",
backgroundColor: "var(--ctp-mantle)",
flexShrink: 0,
@@ -157,7 +141,7 @@ export function ItemDetail({
key={tab.key}
onClick={() => setActiveTab(tab.key)}
style={{
- padding: "0.4rem 0.75rem",
+ padding: "0.5rem 0.75rem",
fontSize: "var(--font-table)",
border: "none",
borderBottom:
@@ -175,6 +159,33 @@ export function ItemDetail({
{tab.label}
</button>
))}
+ <span style={{ flex: 1 }} />
+ {isEditor && (
+ <div
+ style={{ display: "flex", gap: "0.25rem", paddingRight: "0.5rem" }}
+ >
+ <button
+ onClick={() => onEdit(item.part_number)}
+ style={{
+ ...tabActionBtnStyle,
+ color: "var(--ctp-subtext1)",
+ }}
+ title="Edit item"
+ >
+ <Pencil size={13} /> Edit
+ </button>
+ <button
+ onClick={() => onDelete(item.part_number)}
+ style={{
+ ...tabActionBtnStyle,
+ color: "var(--ctp-red)",
+ }}
+ title="Delete item"
+ >
+ <Trash2 size={13} /> Delete
+ </button>
+ </div>
+ )}
</div>
{/* Tab Content */}
@@ -205,5 +216,17 @@ const headerBtnStyle: React.CSSProperties = {
cursor: "pointer",
color: "var(--ctp-subtext1)",
fontSize: "var(--font-table)",
- padding: "0.25rem 0.4rem",
+ padding: "0.25rem 0.5rem",
};
+ const tabActionBtnStyle: React.CSSProperties = {
+ display: "inline-flex",
+ alignItems: "center",
+ gap: "0.25rem",
+ background: "none",
+ border: "none",
+ cursor: "pointer",
+ fontSize: "var(--font-table)",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
+ };


@@ -268,7 +268,7 @@ export function ItemTable({
<td key={col.key} style={tdStyle}>
<span
style={{
- padding: "0.15rem 0.5rem",
+ padding: "0.25rem 0.5rem",
borderRadius: "1rem",
fontSize: "0.75rem",
fontWeight: 500,
@@ -398,6 +398,6 @@ const actionBtnStyle: React.CSSProperties = {
cursor: "pointer",
fontSize: "0.75rem",
fontWeight: 500,
- padding: "0.15rem 0.4rem",
- borderRadius: "0.375rem",
+ padding: "0.25rem 0.5rem",
+ borderRadius: "0.25rem",
};


@@ -41,7 +41,7 @@ export function ItemsToolbar({
fontSize: "0.75rem",
fontWeight: 500,
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
cursor: "pointer",
backgroundColor:
filters.searchScope === scope
@@ -81,7 +81,7 @@ export function ItemsToolbar({
padding: "var(--d-input-py) var(--d-input-px)",
backgroundColor: "var(--ctp-surface0)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
color: "var(--ctp-text)",
fontSize: "var(--d-input-font)",
}}
@@ -144,7 +144,7 @@ export function ItemsToolbar({
...toolBtnStyle,
display: "inline-flex",
alignItems: "center",
- gap: "0.35rem",
+ gap: "0.25rem",
}}
title="Export CSV"
>
@@ -159,7 +159,7 @@ export function ItemsToolbar({
...toolBtnStyle,
display: "inline-flex",
alignItems: "center",
- gap: "0.35rem",
+ gap: "0.25rem",
}}
title="Import CSV"
>
@@ -177,7 +177,7 @@ export function ItemsToolbar({
color: "var(--ctp-crust)",
display: "inline-flex",
alignItems: "center",
- gap: "0.35rem",
+ gap: "0.25rem",
}}
>
<Plus size={14} /> New
@@ -191,7 +191,7 @@ const selectStyle: React.CSSProperties = {
padding: "var(--d-input-py) var(--d-input-px)",
backgroundColor: "var(--ctp-surface0)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
color: "var(--ctp-text)",
fontSize: "var(--d-input-font)",
};
@@ -200,7 +200,7 @@ const toolBtnStyle: React.CSSProperties = {
padding: "var(--d-input-py) var(--d-input-px)",
backgroundColor: "var(--ctp-surface1)",
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
fontSize: "0.75rem",
fontWeight: 500,


@@ -134,7 +134,7 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
marginTop: "0.75rem",
padding: "0.5rem",
backgroundColor: "var(--ctp-surface0)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
fontSize: "var(--font-body)",
}}
>
@@ -177,7 +177,7 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
display: "inline-flex",
alignItems: "center",
gap: "0.25rem",
- padding: "0.15rem 0.5rem",
+ padding: "0.25rem 0.5rem",
borderRadius: "1rem",
backgroundColor: "rgba(203,166,247,0.15)",
color: "var(--ctp-mauve)",
@@ -208,11 +208,11 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
value={addProject}
onChange={(e) => setAddProject(e.target.value)}
style={{
- padding: "0.15rem 0.25rem",
+ padding: "0.25rem 0.25rem",
fontSize: "0.75rem",
backgroundColor: "var(--ctp-surface0)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
}}
>
@@ -229,12 +229,12 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
<button
onClick={() => void handleAddProject()}
style={{
- padding: "0.15rem 0.4rem",
+ padding: "0.25rem 0.5rem",
fontSize: "var(--font-sm)",
border: "none",
backgroundColor: "var(--ctp-mauve)",
color: "var(--ctp-crust)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
cursor: "pointer",
}}
>
@@ -253,7 +253,7 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
marginTop: "0.75rem",
padding: "0.5rem",
backgroundColor: "var(--ctp-surface0)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
}}
>
<div
@@ -298,7 +298,7 @@ export function MainTab({ item, onReload, isEditor }: MainTabProps) {
border: "none",
backgroundColor: "var(--ctp-surface1)",
color: "var(--ctp-text)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
cursor: "pointer",
}}
>


@@ -125,11 +125,11 @@ export function PropertiesTab({
};
const inputStyle: React.CSSProperties = {
- padding: "0.25rem 0.4rem",
+ padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
};
@@ -165,7 +165,7 @@ export function PropertiesTab({
padding: "0.25rem 0.75rem",
fontSize: "var(--font-table)",
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-mauve)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -250,7 +250,7 @@ export function PropertiesTab({
marginTop: "0.25rem",
display: "inline-flex",
alignItems: "center",
- gap: "0.35rem",
+ gap: "0.25rem",
}}
>
<Plus size={14} /> Add Property
@@ -274,7 +274,7 @@ export function PropertiesTab({
fontSize: "var(--font-table)",
backgroundColor: "var(--ctp-base)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
color: "var(--ctp-text)",
resize: "vertical",
}}
@@ -300,7 +300,7 @@ const tabBtn: React.CSSProperties = {
padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-surface0)",
color: "var(--ctp-subtext1)",
cursor: "pointer",


@@ -97,11 +97,11 @@ export function RevisionsTab({ partNumber, isEditor }: RevisionsTabProps) {
);
const selectStyle: React.CSSProperties = {
- padding: "0.25rem 0.4rem",
+ padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
backgroundColor: "var(--ctp-surface0)",
border: "1px solid var(--ctp-surface1)",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
color: "var(--ctp-text)",
};
@@ -147,7 +147,7 @@ export function RevisionsTab({ partNumber, isEditor }: RevisionsTabProps) {
padding: "0.25rem 0.5rem",
fontSize: "var(--font-table)",
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "var(--ctp-mauve)",
color: "var(--ctp-crust)",
cursor: "pointer",
@@ -164,7 +164,7 @@ export function RevisionsTab({ partNumber, isEditor }: RevisionsTabProps) {
style={{
padding: "0.5rem",
backgroundColor: "var(--ctp-surface0)",
- borderRadius: "0.4rem",
+ borderRadius: "0.5rem",
fontSize: "var(--font-table)",
marginBottom: "0.75rem",
fontFamily: "'JetBrains Mono', monospace",
@@ -250,10 +250,10 @@ export function RevisionsTab({ partNumber, isEditor }: RevisionsTabProps) {
)
}
style={{
- padding: "0.15rem 0.25rem",
+ padding: "0.25rem 0.25rem",
fontSize: "0.75rem",
border: "none",
- borderRadius: "0.375rem",
+ borderRadius: "0.25rem",
backgroundColor: "transparent",
color: statusColors[rev.status] ?? "var(--ctp-text)",
cursor: "pointer",


@@ -0,0 +1,180 @@
import { useEffect, useState } from "react";
import { get } from "../../api/client";
import type {
ModuleInfo,
ModulesResponse,
AdminSettingsResponse,
UpdateSettingsResponse,
} from "../../api/types";
import { ModuleCard } from "./ModuleCard";
const infraModules = ["core", "schemas", "database", "storage"];
const featureModules = [
"auth",
"projects",
"audit",
"freecad",
"odoo",
"jobs",
"dag",
];
export function AdminModules() {
const [modules, setModules] = useState<Record<string, ModuleInfo> | null>(
null,
);
const [settings, setSettings] = useState<AdminSettingsResponse | null>(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
const [restartRequired, setRestartRequired] = useState(false);
useEffect(() => {
Promise.all([
get<ModulesResponse>("/api/modules"),
get<AdminSettingsResponse>("/api/admin/settings"),
])
.then(([modsResp, settingsResp]) => {
setModules(modsResp.modules);
setSettings(settingsResp);
})
.catch((e) =>
setError(e instanceof Error ? e.message : "Failed to load settings"),
)
.finally(() => setLoading(false));
}, []);
const handleSaved = (moduleId: string, result: UpdateSettingsResponse) => {
if (result.restart_required) setRestartRequired(true);
// Refresh the single module's settings
get<Record<string, unknown>>(`/api/admin/settings/${moduleId}`)
.then((updated) =>
setSettings((prev) => (prev ? { ...prev, [moduleId]: updated } : prev)),
)
.catch(() => {});
};
const handleToggled = (moduleId: string, enabled: boolean) => {
setModules((prev) => {
if (!prev || !prev[moduleId]) return prev;
const updated: Record<string, ModuleInfo> = {
...prev,
[moduleId]: { ...prev[moduleId], enabled },
};
return updated;
});
};
if (loading) {
return (
<div style={sectionStyle}>
<h3 style={sectionTitleStyle}>Module Configuration</h3>
<p style={{ color: "var(--ctp-overlay0)" }}>Loading modules...</p>
</div>
);
}
if (error) {
return (
<div style={sectionStyle}>
<h3 style={sectionTitleStyle}>Module Configuration</h3>
<p style={{ color: "var(--ctp-red)", fontSize: "var(--font-body)" }}>
{error}
</p>
</div>
);
}
if (!modules || !settings) return null;
const renderGroup = (title: string, ids: string[]) => {
const available = ids.filter((id) => modules[id]);
if (available.length === 0) return null;
return (
<div style={{ marginBottom: "1.25rem" }}>
<div style={groupTitleStyle}>{title}</div>
{available.map((id) => {
const meta = modules[id];
if (!meta) return null;
return (
<ModuleCard
key={id}
moduleId={id}
meta={meta}
settings={settings[id] ?? {}}
allModules={modules}
onSaved={handleSaved}
onToggled={handleToggled}
/>
);
})}
</div>
);
};
return (
<div style={sectionStyle}>
<h3 style={sectionTitleStyle}>Module Configuration</h3>
{restartRequired && (
<div style={restartBannerStyle}>
<span style={{ fontWeight: 600 }}>Restart required</span>
<span>Some changes require a server restart to take effect.</span>
<button
onClick={() => setRestartRequired(false)}
style={dismissBtnStyle}
>
Dismiss
</button>
</div>
)}
{renderGroup("Infrastructure", infraModules)}
{renderGroup("Features", featureModules)}
</div>
);
}
// --- Styles ---
const sectionStyle: React.CSSProperties = {
marginTop: "0.5rem",
};
const sectionTitleStyle: React.CSSProperties = {
marginBottom: "1rem",
fontSize: "var(--font-title)",
};
const groupTitleStyle: React.CSSProperties = {
fontSize: "0.7rem",
fontWeight: 600,
textTransform: "uppercase",
letterSpacing: "0.08em",
color: "var(--ctp-overlay1)",
marginBottom: "0.5rem",
};
const restartBannerStyle: React.CSSProperties = {
display: "flex",
gap: "0.75rem",
alignItems: "center",
padding: "0.75rem 1rem",
marginBottom: "1rem",
borderRadius: "0.75rem",
background: "rgba(249, 226, 175, 0.1)",
border: "1px solid rgba(249, 226, 175, 0.3)",
color: "var(--ctp-yellow)",
fontSize: "var(--font-body)",
};
const dismissBtnStyle: React.CSSProperties = {
marginLeft: "auto",
padding: "0.25rem 0.5rem",
borderRadius: "0.25rem",
border: "none",
background: "rgba(249, 226, 175, 0.15)",
color: "var(--ctp-yellow)",
cursor: "pointer",
fontSize: "0.7rem",
fontWeight: 500,
};

Some files were not shown because too many files have changed in this diff.