Expert guidance for developing JAM Auto's job scheduling system: a pnpm monorepo with a Next.js customer portal, a Node.js scheduler service, and a Python optimizer built on Google OR-Tools, managing technician assignments and routes for automotive services. The skill guides AI agents through the project memory system, development commands, the job scheduling lifecycle, and common debugging workflows.
**CRITICAL**: Always consult the project memory system before making changes:
1. **Start with** `.cursor/rules/project_memory.mdc` (master index)
- Current project state and recent changes
- Links to all specialized memory files
- Quick reference for common tasks
2. **Core memory files**:
- `project_context.mdc` - Architecture, conventions, tech stack
- `recent_decisions.mdc` - Implementation choice log
- `pattern_library.mdc` - Reusable code patterns
- `known_issues.mdc` - Common problems and solutions
- `implementation_history.mdc` - Feature timeline
3. **Feature-specific rules**:
- **Scheduler**: `sched-*.mdc` files
- **Optimizer**: `optimizer-*.mdc` files
- **Testing**: `integration-testing-*.mdc`, `scenario-*.mdc`
- **Database**: `database_schema.mdc`
- **Workflow**: `dev_workflow.mdc`
4. **When making changes**:
- Check relevant rule files first
- Follow patterns from `pattern_library.mdc`
- Update memory files with new patterns/decisions
- Refer to `self_improve.mdc` for memory system maintenance
**Monorepo Structure**: pnpm workspaces containing the web portal, scheduler service, and optimizer as separate packages.
**Data Flow**:
1. Frontend → Supabase (direct RLS queries)
2. Supabase webhooks → Edge Function → Scheduler
3. Scheduler → External APIs + Optimizer
4. Optimizer → Supabase (job updates)
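Steps 2–3 of the data flow can be sketched as a minimal payload handoff. This is an illustrative sketch only: the `/run-replan` endpoint is the scheduler's documented trigger, but the port, field names, and `build_replan_request` helper are assumptions, not the real Edge Function.

```python
import json

# Hypothetical sketch of the Edge Function -> Scheduler handoff.
# The /run-replan path is the scheduler's documented trigger endpoint;
# the port and payload field names below are illustrative assumptions.
SCHEDULER_URL = "http://localhost:3001"  # assumed scheduler port

def build_replan_request(webhook_event: dict) -> dict:
    """Translate a Supabase webhook event into a /run-replan request."""
    return {
        "url": f"{SCHEDULER_URL}/run-replan",
        "body": json.dumps({
            "trigger": "db_webhook",
            "table": webhook_event.get("table"),
            "record_id": (webhook_event.get("record") or {}).get("id"),
        }),
    }

req = build_replan_request({"table": "jobs", "record": {"id": 42}})
print(req["url"])  # http://localhost:3001/run-replan
```

In the real system this request is issued by a Supabase Edge Function; the sketch only shows the shape of the handoff, not transport or auth.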
**Development commands**:
```bash
pnpm install               # Install all workspace dependencies
pnpm run dev               # Web app (port 3000)
pnpm run dev:scheduler     # Scheduler service
pnpm run dev:optimiser     # Optimizer (port 8080)
pnpm run build:web         # Build web app
pnpm run build:scheduler   # Build scheduler
```
**Unit tests**:
```bash
pnpm run test # All tests
pnpm run test:web # Web tests
pnpm run test:scheduler # Scheduler tests
pnpm test -- path/to/test.test.ts # Single file
pnpm test -- --watch # Watch mode
```
**Integration tests** (CRITICAL ORDER):
```bash
# 1. Seed baseline data first
pnpm run db:seed:staging
# 2. Seed the specific scenario with the correct tech count
pnpm db:seed:staging -- --action scenario --name SCENARIO_NAME --techs N \
  --baseline-metadata tests/integration/.baseline-metadata.json \
  --output-metadata tests/integration/.current-scenario-metadata.json
# 3. Run the test, capturing start/end timestamps (GNU date; %3N = milliseconds)
START_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")
pnpm jest tests/integration/scheduler/SCENARIO_NAME.test.ts
END_TIME=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")
# 4. Extract scheduler logs covering exactly the test window
docker logs test_scheduler --since "$START_TIME" --until "$END_TIME" > "debug/SCENARIO_NAME_scheduler.log" 2>&1
```
**Note**: Use MCP Supabase tool to inspect staging database (nvjdgldtvuhowarpulyl) during debugging.
**E2E tests**:
```bash
pnpm run test:e2e --generate
pnpm run e2e:run # Interactive runner
```
**Database and simulation**:
```bash
pnpm run db:generate-types              # Regenerate Supabase types after schema changes
pnpm run db:seed:staging                # Seed staging data
pnpm run db:clean:staging               # Clean staging data
cd simulation && docker-compose up -d   # Start local simulation environment
```
**Code quality**:
```bash
pnpm run lint # Lint all code
pnpm run format # Format code
pnpm run type-check # Type checking (web)
```
**Job scheduling lifecycle**:
1. **Creation**: Orders → Jobs with priority (insurance > commercial > residential)
2. **Trigger**: Database webhooks or Cloud Scheduler → `/run-replan`
3. **Orchestration** (`runFullReplan`):
- Fetch relevant jobs (queued, en_route, in_progress, fixed_time)
- Calculate technician availability (hours - exceptions - locked jobs)
- Bundle jobs by order_id (except fixed_time)
- Validate equipment eligibility
- Prepare optimization payload per day
- Call optimizer service
- Update job assignments
4. **Optimization**: Python OR-Tools VRP solver
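The bundling, priority, and availability steps above can be sketched as follows. This is a minimal illustration in Python (the scheduler itself is Node.js); the function names and job fields are assumptions, not the real implementation.

```python
from collections import defaultdict

# Priority tiers from step 1: insurance > commercial > residential.
PRIORITY = {"insurance": 0, "commercial": 1, "residential": 2}

def technician_availability(working_hours: float,
                            exception_hours: float,
                            locked_job_hours: float) -> float:
    """Availability = working hours - exceptions - time held by locked jobs."""
    return max(0.0, working_hours - exception_hours - locked_job_hours)

def bundle_jobs(jobs: list[dict]) -> list[list[dict]]:
    """Bundle jobs by order_id; fixed_time jobs are kept as singletons."""
    bundles: dict[int, list[dict]] = defaultdict(list)
    singles: list[list[dict]] = []
    for job in jobs:
        if job["status"] == "fixed_time":
            singles.append([job])
        else:
            bundles[job["order_id"]].append(job)
    # Higher-priority bundles are offered to the optimizer first.
    ordered = sorted(bundles.values(),
                     key=lambda b: min(PRIORITY[j["segment"]] for j in b))
    return ordered + singles

jobs = [
    {"id": 1, "order_id": 10, "segment": "residential", "status": "queued"},
    {"id": 2, "order_id": 10, "segment": "residential", "status": "queued"},
    {"id": 3, "order_id": 11, "segment": "insurance",   "status": "queued"},
    {"id": 4, "order_id": 12, "segment": "commercial",  "status": "fixed_time"},
]
print([[j["id"] for j in b] for b in bundle_jobs(jobs)])  # [[3], [1, 2], [4]]
```

The bundles produced here would feed the per-day optimization payload; the actual VRP solving happens in the Python OR-Tools service.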
**Key development rules**:
1. **Run all commands from the root directory**
2. **Frontend queries Supabase directly** (no backend API layer)
3. **Scheduler is webhook-triggered**, not user-initiated
4. **Use simulation environment** for backend testing
5. **Integration tests require staging database** with test data
6. **Always check memory files** before implementing features
7. **Follow Task Master workflow** (see `dev_workflow.mdc`)
**Adding a new scheduler feature**:
1. Check `.cursor/rules/sched-*.mdc` for orchestration patterns
2. Review `pattern_library.mdc` for code examples
3. Check `database_schema.mdc` for table structure
4. Implement following established patterns
5. Update `recent_decisions.mdc` with new choices
6. Add integration test scenario
7. Update relevant memory files
**Debugging an integration test**:
1. Seed baseline data
2. Seed specific scenario with correct tech count
3. Run test with timestamp-coordinated logging
4. Use MCP Supabase tool to inspect staging DB
5. Review scheduler logs in `debug/` directory
6. Check `known_issues.mdc` for similar problems
**Running full development cycle**:
```bash
pnpm install
cd simulation && docker-compose up -d
pnpm run dev                # Web app; run dev:scheduler and dev:optimiser in separate terminals
pnpm run lint
pnpm run type-check
pnpm run test
pnpm run test:integration
pnpm run db:generate-types # after schema changes
```