# RoboTorq Development Workflow
This skill implements the proven 14-step Power Workflow used to achieve 95% completion of the RoboTorq currency refactor. It guides AI agents through architecture-first development, incremental commits, and comprehensive testing.
## Project Context
RoboTorq is a NATS-based distributed system where robotic labor creates measurable value through physics. It is NOT a cryptocurrency—it's a message-passing system with no blockchain, no mining, and no gas fees.
**Core Equation**: `1 RoboTorq = 1 kWh × 3600 tokens/sec × 1 hour`
**Architecture**: Services communicate via NATS pub/sub with real-time message flows.
## Workflow Steps
### Phase 1: Planning & Context (Steps 1-3)
#### Step 1: Branch Checkout
- Always create feature branches following the pattern `feature/*`
- Never commit directly to `main`
- Check for an existing feature branch before creating a new one
- Pull the latest changes if continuing existing work

```bash
git checkout -b feature/your-feature-name
# or
git checkout feature/existing-feature
git pull origin feature/existing-feature
```
#### Step 2: Architecture Review
- Read relevant architecture documentation BEFORE writing code
- Study `VAULT_IMPLEMENTATION_PLAN.md` for overall SPRINT architecture
- Review service-specific docs: `MINT_ARCHITECTURE.md`, `REFINERY_ARCHITECTURE.md`, `TRUST_ARCHITECTURE.md`
- Check `PORT_MAPPINGS.md` BEFORE making any port changes
- Understand component responsibilities, data flows, and key data structures

**Focus on**:
- What does each component do?
- How do services communicate?
- What are the key data structures?
- What configuration patterns exist?

#### Step 3: Code Pattern Analysis
- Study existing implementations to match project patterns
- Look for constructor patterns (`New*()`)
- Identify interface definitions
- Understand the error handling approach
- Match the logging style (structured JSON logs with `slog`)
- Follow the metrics instrumentation patterns (Prometheus)

**Key RoboTorq Patterns**:
- **Dependency Injection**: Pass components via constructors, no globals
- **Graceful Shutdown**: Respect `context.Context`, flush pending work
- **Structured Logging**: Use `slog` with contextual fields
- **Prometheus Metrics**: Instrument every operation

### Phase 2: Implementation (Steps 4-6)
#### Step 4: Create TODOs in Code
- Mark implementation points BEFORE building
- Use the naming convention `TODO(feature-name):` for planned work
- Create a roadmap with TODOs
- Document intent and implementation steps
- Enable incremental commits

```go
// TODO(currency-refactor): Replace dual queues with single unit queue
// - Remove: jouleQueue, roboQueue
// - Add: unitQueue []JouleTorqUnit
// - Update: processOre() to extract units
// - Fix: assembleIngot() to consume units
// - Test: Multi-contract unit aggregation
```
#### Step 5: Execute TODOs (Implementation)
- Tackle ONE TODO at a time
- Test each increment as you go
- Implement small, verifiable changes
- Run unit tests after each function implementation
- Do not batch multiple TODOs without testing

**Incremental testing**:
```bash
go test ./internal/component/ -run TestSpecificFunction -v
```
#### Step 6: Create Tests (TDD Approach)
- Write tests ALONGSIDE implementation, not after
- Create unit tests for each new function
- Test concurrent behavior for goroutine-safe code
- Target 95%+ code coverage for all new code
- Include integration tests for complete flows

```bash
go test ./... -cover
```
**Test types**:
- Unit tests: Individual function behavior
- Concurrency tests: Thread safety validation
- Integration tests: Service-to-service flows
- E2E tests: Complete pipeline validation

### Phase 3: Validation (Steps 7-9)
#### Step 7: Step-by-Step Commits
- Commit after EACH working increment
- Do not wait until the end of the day
- Use the Conventional Commits format
- Include test coverage information
- Reference relevant documentation

**Commit format**:
```
<type>(<scope>): <subject>

<body>

<footer>
```
**Types**: `feat`, `fix`, `test`, `refactor`, `docs`, `chore`
**Scopes**: `mint`, `refinery`, `trust`, `distodam`, `digger`, `wallet`, `printer`
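A commit in this format might look like the following (the scope, body, and coverage figure are illustrative, drawn from the TODO example in Step 4):

```
feat(refinery): replace dual queues with single unit queue

Remove jouleQueue and roboQueue; processOre() now emits
JouleTorqUnit values into a single unitQueue.

Coverage: 96% for the affected package. See VAULT_IMPLEMENTATION_PLAN.md.
```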
#### Step 8: Conduct E2E Tests
- Validate the COMPLETE pipeline after changes
- Start all services with `docker-compose up -d`
- Run headless Digger tests
- Verify logs for expected behavior
- Confirm metrics and health endpoints

**E2E test validates**:
- Service startup and health
- Message flow through NATS
- Data transformation correctness
- Metrics instrumentation
- Error handling

#### Step 9: Update Documentation
- Sync documentation with implementation BEFORE merging
- Update architecture docs with component changes
- Reflect breaking changes in the README
- Document new configuration options
- Commit docs with related code changes

**Update locations**:
- Service `ARCHITECTURE.md` files
- `README.md` appendices
- Configuration examples
- Data structure definitions

### Phase 4: Integration (Steps 10-14)
#### Step 10: Update Build and CI
- Update the `Dockerfile` if dependencies changed
- Modify `docker-compose.yaml` for new environment variables
- Verify the local build succeeds
- Run tests in the container
- Check service health endpoints

```bash
docker build -t service:test .
docker run --rm service:test go test ./... -v
docker-compose up -d service
curl http://localhost:PORT/health
```
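A healthcheck stanza of the kind this step verifies might look like the following in `docker-compose.yaml` (the service name is illustrative; `PORT` is a placeholder, as elsewhere in this document):

```yaml
services:
  mint:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:PORT/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```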
#### Step 11: Push to GitHub for Testing
- Ensure all tests pass locally first
- Run E2E tests before pushing
- Push the feature branch to trigger CI
- Monitor GitHub Actions for failures

```bash
go test ./... -v
./test-digger-e2e.ps1 # or .sh
git push origin feature/your-feature
```
#### Step 12: Create Pull Request
- Write a clear PR description explaining the changes
- Reference the architecture docs you updated
- List breaking changes explicitly
- Include test coverage metrics
- Link related issues or planning docs

**PR checklist**:
- [ ] All tests passing
- [ ] Documentation updated
- [ ] E2E tests validated
- [ ] No port conflicts (checked `PORT_MAPPINGS.md`)
- [ ] Breaking changes documented

#### Step 13: Address Review Feedback
- Respond to all reviewer comments
- Make requested changes in new commits
- Re-run tests after each change
- Update docs if the approach changes
- Request re-review when ready

#### Step 14: Merge and Deploy
- Squash commits if requested
- Merge to the target branch (typically `main`)
- Monitor deployment logs
- Verify production health
- Update project status/planning docs

## Key Patterns and Conventions
### Dependency Injection
```go
func NewMintEngine(logger *slog.Logger, metrics *Metrics) *MintEngine
```
### Graceful Shutdown
```go
func (s *Service) Start(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			s.drainBuffer()
			return
		// ... other message-handling cases
		}
	}
}
```
### Structured Logging
```go
slog.Info("ingot assembled",
	"ingot_id", ingot.ID,
	"units", len(ingot.Units),
	"joules", ingot.JouleTorqTotal)
```
### Metrics Instrumentation
```go
type Metrics struct {
	IngotsReceivedTotal prometheus.Counter
	ProcessingLatency   prometheus.Histogram
}

// Register once at startup, then increment in the hot path:
m.IngotsReceivedTotal.Inc()
```
## Critical Resources
- `VAULT_IMPLEMENTATION_PLAN.md` - Overall architecture
- `BRANCHING.md` - Git workflow
- `PORT_MAPPINGS.md` - **MANDATORY before port changes**
- `src/vault/VAULT_ARCHITECTURE.md` - Documentation template
- Service-specific `ARCHITECTURE.md` files

## Success Metrics
- 95%+ test coverage for new code
- E2E tests pass consistently
- Documentation reflects implementation
- Commits follow the Conventional Commits format
- CI passes on all feature branches
- No port conflicts or service collisions

## Notes
- This workflow achieved 95% completion of the currency refactor in one sprint
- The architecture-first approach prevents rework
- Incremental commits enable easy rollback
- The TDD approach catches issues early
- Documentation updates prevent drift