# AI-Enabled LLC Job Matching Platform
Development rules and guidelines for building an anonymous, capability-first job matching platform with integrated AI decision-making, XDMIQ credentialing, and Cursor multi-agent coordination.
## Project Overview
This platform enables anonymous job matching for LLC owners based on capabilities rather than credentials. It integrates XDMIQ scoring, maintains zero-knowledge identity architecture, and implements human-in-the-loop AI decisions with state recovery support.
**Tech Stack:**
- Backend: FastAPI with Pydantic models, SQLAlchemy ORM
- Frontend: Next.js 14+ with TypeScript, React Server Components
- AI: OpenAI API with fallbacks
- Tools: Warp (terminal), Claude Code (AI assistance), Cursor (IDE with agents)

**Scale Target:** 1B+ users
## Core Principles
When working on this project, always adhere to these foundational principles:
1. **Human-in-the-loop is paramount** — AI assists, humans decide. Every AI decision must queue for human review.
2. **Anonymous identity preserved** — Never expose real identity. All auth providers link to anonymous profiles.
3. **Capability over credentials** — Match on skills and abilities, not résumés or titles.
4. **State recovery required** — Create checkpoints for all AI decisions to support rollback and testing.
5. **Consistency with existing patterns** — Reference the codebase before creating new patterns.
## Code Style Guidelines

### Backend (FastAPI)
- Use Pydantic models for all request/response schemas
- Leverage SQLAlchemy ORM for database operations
- Include comprehensive error handling with proper HTTP status codes
- Maintain anonymous identity in all database models and API responses
- Create state checkpoints for AI-driven operations

### Frontend (Next.js 14+)
- Use TypeScript strictly
- Prefer React Server Components where possible
- Use client components only when interactivity is required
- Maintain anonymous user context throughout the UI
- Handle loading and error states explicitly

### General
- Write self-documenting code with clear variable names
- Add comments only when logic is non-obvious
- Log all AI decisions for the human review queue
- Design for horizontal scalability
- Test state recovery paths

## Authentication System
Implement multi-provider authentication while preserving anonymity:
**Supported Providers:**
- Facebook
- LinkedIn
- Google
- Microsoft
- Apple
- Email
- SMS/VoIP

**Key Requirements:**
- All providers link to a single anonymous identity per user
- Never store or expose real names, emails, or phone numbers in public-facing features
- Use anonymous identifiers (e.g., `user_<uuid>`) throughout the system
- Authentication tokens must not leak identity information

## AI Integration Guidelines
### Human-in-the-Loop Architecture
1. **Every AI decision creates a review item** — Store decision, context, and timestamp
2. **Humans can approve, reject, or modify** — AI learns from human feedback
3. **Create decision checkpoints** — Allow rollback to any previous state
4. **Log all AI reasoning** — Store prompt, response, confidence score
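The four steps above can be sketched as a minimal review queue (field names and statuses are assumptions, not the production schema):

```python
import datetime
import uuid
from dataclasses import dataclass

@dataclass
class ReviewItem:
    """One AI decision queued for human review."""
    decision_id: str
    prompt: str
    response: str
    confidence: float
    created_at: datetime.datetime
    status: str = "pending"  # pending | approved | rejected | modified

review_queue: list[ReviewItem] = []

def queue_ai_decision(prompt: str, response: str, confidence: float) -> ReviewItem:
    """Every AI decision creates a review item with full context and timestamp."""
    item = ReviewItem(
        decision_id=str(uuid.uuid4()),
        prompt=prompt,
        response=response,
        confidence=confidence,
        created_at=datetime.datetime.now(datetime.timezone.utc),
    )
    review_queue.append(item)
    return item

def review(item: ReviewItem, verdict: str) -> None:
    """Humans approve, reject, or modify; nothing goes live while pending."""
    assert verdict in {"approved", "rejected", "modified"}
    item.status = verdict

item = queue_ai_decision("Match candidate user_abc to role R1?", "match", 0.82)
review(item, "approved")
```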
### OpenAI API Usage
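A sketch of the retry-with-fallback behavior listed below; the provider callables are hypothetical stand-ins for thin wrappers around the OpenAI SDK and an alternative vendor's SDK:

```python
import random
import time

def call_with_retry(providers, max_retries=3, base_delay=0.5):
    """Try each provider in order; retry transient failures with
    exponential backoff plus jitter before falling back."""
    last_error = None
    for provider in providers:
        for attempt in range(max_retries):
            try:
                return provider()
            except Exception as exc:  # narrow to transient error types in real code
                last_error = exc
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))
    raise RuntimeError("all providers failed") from last_error

attempts = {"n": 0}

def flaky_primary():
    """Stand-in for an API call that fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("transient")
    return "completion"

result = call_with_retry([flaky_primary], base_delay=0.01)
```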
- Use the OpenAI API with fallback to alternative providers
- Implement retry logic with exponential backoff
- Cache responses when appropriate (non-user-specific decisions)
- Monitor API costs and set rate limits per user tier

### State Recovery Support
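A minimal sketch of the checkpointing requirements listed below (an in-memory list stands in for a durable store; field names are assumptions):

```python
import copy
import datetime
import uuid

checkpoints: list[dict] = []

def save_checkpoint(state: dict, input_data: dict, ai_response: str) -> str:
    """Snapshot state before an AI operation so it can be rolled back or replayed."""
    decision_id = str(uuid.uuid4())
    checkpoints.append({
        "decision_id": decision_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_data": copy.deepcopy(input_data),
        "ai_response": ai_response,
        "state": copy.deepcopy(state),
    })
    return decision_id

def rollback(decision_id: str) -> dict:
    """Restore the state captured just before the given decision."""
    for cp in checkpoints:
        if cp["decision_id"] == decision_id:
            return copy.deepcopy(cp["state"])
    raise KeyError(decision_id)

state = {"matches": []}
decision_id = save_checkpoint(state, {"candidate": "user_abc"}, "match")
state["matches"].append("user_abc")   # AI-driven change
restored = rollback(decision_id)      # back to the pre-decision state
```

Replay mode falls out of the same structure: iterate the checkpoint list and re-apply each stored `input_data` against a test model.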
- Save state before every AI operation
- Include: input data, AI response, timestamp, decision ID
- Support a rollback command to restore previous state
- Enable replay mode for testing AI decision paths

## XDMIQ Integration
XDMIQ measures decision-making intelligence through preference-based questions.
### Question Format
- **Prompt:** "Which do you prefer?" (present two options)
- **Follow-up:** "Why?" (capture reasoning)
- Never use direct skill assessments or test-like questions

### Scoring
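The components listed below might be combined as a weighted sum; the weights here are illustrative placeholders, not the production calibration:

```python
def xdmiq_score(consistency: float, depth: float, alignment: float,
                weights=(0.4, 0.3, 0.3)) -> int:
    """Combine the three components (each in [0, 1]) into a 0-100 score.
    Weights are hypothetical and would be tuned against real outcomes."""
    w_c, w_d, w_a = weights
    raw = w_c * consistency + w_d * depth + w_a * alignment
    return round(100 * max(0.0, min(1.0, raw)))  # clamp, then scale to 0-100

score = xdmiq_score(consistency=0.9, depth=0.6, alignment=0.8)
```

Incremental updates then reduce to recomputing the component averages as each new answer arrives and calling the same function.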
Calculate the XDMIQ score (0-100) based on:

- Consistency of preferences
- Depth of reasoning
- Alignment with role requirements

Store scores in the user profile (anonymous), and update them incrementally as users answer more questions.

### Matching Algorithm Integration
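The weighting described below can be sketched as a simple blend; the 0.2 weight is an assumption to illustrate the tiebreaker role, not a tuned value:

```python
def match_score(capability_fit: float, xdmiq: int, xdmiq_weight: float = 0.2) -> float:
    """Blend capability fit (0-1) with the XDMIQ score (0-100).
    Capability dominates; XDMIQ differentiates similarly skilled candidates."""
    return (1 - xdmiq_weight) * capability_fit + xdmiq_weight * (xdmiq / 100)

# Two similarly skilled candidates: XDMIQ breaks the tie
a = match_score(capability_fit=0.90, xdmiq=80)
b = match_score(capability_fit=0.90, xdmiq=55)
```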
- Weight the XDMIQ score alongside capability matches
- Higher XDMIQ = better decision-making fit for the role
- Use XDMIQ to differentiate between similarly skilled candidates

## Cursor Agents System
This project uses a **15-agent coordination system** defined in `.cursor/agents/`. Each agent specializes in a domain and can coordinate with others automatically.
### Agent Registry Location
- **Registry:** `.cursor/agents/agents-registry.json`
- **Agent Configs:** `.cursor/agents/<agent-name>.json`
- **Agent Scripts:** `E:\agents\` directory

### Available Agents by Category
**C-Suite (Strategic Decision-Making):**
- `@fractional-ceo-agent` — Strategic vision, prioritization, business decisions
- `@fractional-cfo-agent` — Financial analysis, budgeting, cost optimization
- `@fractional-coo-agent` — Operational efficiency, process improvement

**Development (Code & Architecture):**

- `@principal-dev-agent` — Architecture review, code quality, technical decisions
- `@dev-sync-agent` — Development coordination, dependency management
- `@review-agent` — Code review, PR analysis

**Project Management:**

- `@git-project-manager-agent` — Git/GitHub operations, branch strategy
- `@project-manager-agent` — Sprint planning, task tracking
- `@program-manager-agent` — Multi-project coordination
- `@product-agent` — Feature prioritization, user stories

**Operations:**

- `@change-management-agent` — Change control, deployment planning
- `@infrastructure-agent` — Infrastructure, DevOps, scaling

**Specialized:**

- `@historian-agent` — Documentation, decision history
- `@data-governance-agent` — Compliance, data privacy, security audits
- `@link-validation-agent` — Link checking, external dependency validation

### When to Invoke Agents
Use agents to delegate specialized tasks:
| Task | Agent |
|------|-------|
| "Should we prioritize feature X or Y?" | `@fractional-ceo-agent` |
| "What's the cost impact of this change?" | `@fractional-cfo-agent` |
| "How can we optimize this workflow?" | `@fractional-coo-agent` |
| "Review this architecture design" | `@principal-dev-agent` |
| "Create a PR for this feature" | `@git-project-manager-agent` |
| "Plan next sprint" | `@project-manager-agent` |
| "Is this GDPR compliant?" | `@data-governance-agent` |
| "Document this decision" | `@historian-agent` |
### Agent Usage Syntax
```
@agent-name [command or question]
```
**Examples:**
- `@fractional-ceo-agent Should we pivot to focus on mobile-first?`
- `@principal-dev-agent Review the auth flow architecture`
- `@data-governance-agent Audit anonymous identity implementation`

### Agent Verification
At the start of each session, verify agents are loaded:
- "What agents are available?"
- "Show me the agent registry"
- "List all agents"

## Claude Code Hooks Integration
This project uses `.claude-code/hooks.json` for contextual AI assistance. Reference hooks to:
- Understand project context before generating code
- Align with existing patterns and conventions
- Trigger pre/post-commit checks
- Maintain consistency across the codebase

## Testing & Quality Assurance
### Required Testing
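One of the required tests below, the state-recovery replay check, might look like this minimal round-trip (the checkpoint is sketched inline so the test stays self-contained):

```python
import copy

def test_state_recovery_roundtrip():
    """State captured before an AI decision must be restorable afterwards."""
    state = {"matches": ["user_a"]}
    checkpoint = copy.deepcopy(state)   # snapshot before the AI operation
    state["matches"].append("user_b")   # AI-driven mutation
    state = copy.deepcopy(checkpoint)   # rollback
    assert state == {"matches": ["user_a"]}

test_state_recovery_roundtrip()
```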
- Unit tests for all business logic (FastAPI routes, utility functions)
- Integration tests for authentication flows
- End-to-end tests for critical user journeys (sign-up → match → apply)
- AI decision replay tests (verify state recovery works)

### Quality Checks
- Anonymous identity never leaks in API responses
- All AI decisions logged and reviewable
- State checkpoints created before AI operations
- Human review queue never bypassed

## Deployment & Scalability
### Scalability Considerations
- Design for horizontal scaling (stateless services)
- Use connection pooling for the database
- Cache frequently accessed data (e.g., XDMIQ questions)
- Rate limit AI API calls per user tier

### Monitoring
- Track AI decision accuracy (human approval rate)
- Monitor authentication provider uptime
- Alert on anonymous identity leaks (any exposure of real identity data)
- Log all state recovery events

## Example Workflow: Adding a New Feature
1. **Strategic review:** `@fractional-ceo-agent Is this feature aligned with our roadmap?`
2. **Financial check:** `@fractional-cfo-agent What's the cost impact?`
3. **Architecture design:** `@principal-dev-agent Design the implementation approach`
4. **Implementation:** Write code following existing patterns (reference codebase)
5. **AI integration:** If feature uses AI, add human review queue + state checkpoints
6. **Testing:** Write tests covering state recovery paths
7. **Documentation:** `@historian-agent Document this feature and decision rationale`
8. **Deployment:** `@change-management-agent Plan rollout strategy`
## Important Reminders
- **Never hardcode API keys** — Use environment variables
- **Always preserve anonymity** — Audit all API responses for identity leaks
- **Human review is mandatory** — No AI decision goes live without human approval
- **State recovery is non-negotiable** — Every AI operation must support rollback
- **Consult agents before major decisions** — Leverage the 15-agent system for specialized guidance

---
**For questions about specific patterns, reference the existing codebase first. For strategic decisions, consult the appropriate agent. For AI integration, always implement human-in-the-loop with state recovery.**