Comprehensive guidance for Claude Code when working on ContextLinc, a next-generation context engineering platform with an 11-layer architecture, multi-modal processing, and a three-tier memory system that transforms simple chatbots into intelligent assistants through advanced context management.
ContextLinc is a comprehensive context engineering platform implementing the **11-layer Context Window Architecture (CWA)** to build sophisticated AI agents. This is a BRAINSAIT LTD healthcare technology project currently in the specification and documentation phase.
**Core Philosophy**: Most agent failures are context failures, not model failures. ContextLinc provides the right information, in the right format, at the right time through dynamic context systems.
When implementing features, always consider the complete context stack:
1. **Instructions Layer** - AI constitution, persona, goals, ethical boundaries
2. **User Info Layer** - Personalization data, preferences, account details
3. **Knowledge Layer** - Retrieved documents and domain expertise (RAG)
4. **Task/Goal State** - Multi-step task management and workflow tracking
5. **Memory Layer** - Three-tier memory system (short/medium/long-term)
6. **Tools Layer** - External tool definitions and capabilities
7. **Examples Layer** - Few-shot learning examples and demonstrations
8. **Context Layer** - Current conversation state and immediate context
9. **Constraints Layer** - Operational limits and safety guidelines
10. **Output Format** - Response structure and formatting requirements
11. **User Query** - The immediate input triggering generation
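The layered stack above can be sketched as an ordered assembly step. The following is a minimal TypeScript sketch, not a ContextLinc API; the `ContextLayer` interface and `assembleContextWindow` function are illustrative names:

```typescript
// Minimal sketch of assembling the 11-layer Context Window Architecture
// into a single prompt. Layer names follow the list above; the interface
// and assembly function are illustrative, not part of any ContextLinc API.
interface ContextLayer {
  name: string;
  content: string;
}

function assembleContextWindow(layers: ContextLayer[]): string {
  // Layers are concatenated in architectural order: stable instructions
  // first, the immediate user query last, so the model sees persistent
  // context before the triggering input.
  return layers
    .filter((layer) => layer.content.trim().length > 0) // skip empty layers
    .map((layer) => `## ${layer.name}\n${layer.content}`)
    .join("\n\n");
}

const contextWindow = assembleContextWindow([
  { name: "Instructions", content: "You are a helpful healthcare assistant." },
  { name: "User Info", content: "Locale: en-GB" },
  { name: "Knowledge", content: "" }, // empty layers are dropped
  { name: "User Query", content: "Summarise my last visit." },
]);
console.log(contextWindow);
```

The key design point is ordering: layers 1-10 form the stable-to-volatile gradient, and layer 11 (the user query) always closes the window.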
```
contextlinc/
├── apps/
│   ├── web/                  # Next.js PWA
│   ├── mobile/               # React Native (Phase 2)
│   └── api/                  # Node.js API services
├── packages/
│   ├── ui/                   # Shared UI components
│   ├── context-engine/       # Core context engineering logic
│   ├── multi-modal/          # File processing pipeline
│   ├── memory-system/        # Three-tier memory architecture
│   └── shared/               # Shared utilities
├── services/
│   ├── context-service/      # Context management microservice
│   ├── inference-service/    # AI model inference
│   ├── file-processor/       # Multi-modal file processing
│   ├── memory-service/       # Memory management
│   └── auth-service/         # Authentication
└── infrastructure/
    ├── docker/               # Container configurations
    ├── kubernetes/           # K8s deployment manifests
    └── terraform/            # Infrastructure as code
```
```bash
npx create-turbo@latest contextlinc --package-manager pnpm
cd contextlinc
pnpm install
pnpm db:setup
pnpm db:migrate
pnpm dev
```
```bash
pnpm dev              # Start all development servers
pnpm build            # Build all packages
pnpm test             # Run all tests
pnpm lint             # Lint all packages
pnpm db:migrate       # Run database migrations
pnpm db:seed          # Seed database with test data
pnpm deploy:staging   # Deploy to staging
pnpm deploy:prod      # Deploy to production
```
When implementing file processing, follow this pipeline:
```
Input → Format Detection → Preprocessing → AI Analysis →
Metadata Extraction → Vector Generation → Context Integration → Storage
```
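The pipeline above maps naturally onto a chain of typed stages. In this sketch each stage is a stub standing in for the real implementation (format detection, embedding generation, and so on); the `Stage` type and stage names are illustrative assumptions:

```typescript
// Sketch of the file-processing pipeline as a chain of typed stages.
// Each stage is a stub standing in for the real implementation
// (e.g. format detection, embedding generation); names are illustrative.
interface ProcessedFile {
  raw: string;
  format?: string;
  text?: string;
  metadata?: Record<string, string>;
  embedding?: number[];
}

type Stage = (file: ProcessedFile) => ProcessedFile;

const detectFormat: Stage = (f) => ({
  ...f,
  format: f.raw.startsWith("%PDF") ? "pdf" : "text",
});
const preprocess: Stage = (f) => ({ ...f, text: f.raw.trim() });
const extractMetadata: Stage = (f) => ({
  ...f,
  metadata: { length: String(f.text?.length ?? 0) },
});
const generateVector: Stage = (f) => ({ ...f, embedding: [0.1, 0.2, 0.3] }); // placeholder embedding

// Run the stages left to right, threading the file through each one.
function runPipeline(raw: string, stages: Stage[]): ProcessedFile {
  return stages.reduce((file, stage) => stage(file), { raw } as ProcessedFile);
}

const result = runPipeline("Patient discharge summary", [
  detectFormat,
  preprocess,
  extractMetadata,
  generateVector,
]);
```

Modelling stages as pure functions keeps each step independently testable and lets new formats plug in as additional stages.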
**Supported Formats**:
Design all features to support three memory tiers:
1. **Short-term Memory**
- Current conversation context within LLM context window
- Immediate reference for ongoing interaction
- Automatic pruning based on token limits
2. **Medium-term Memory**
- Session-based continuity across conversations
- Stored in Redis with TTL
- Maintains user context within active sessions
3. **Long-term Memory**
- Persistent semantic memory using pgvector
- Cross-session knowledge retention
- Relevance-based retrieval using embeddings
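The tier behaviours above can be seen in isolation with in-memory stand-ins. In the real system short-term memory lives in the LLM context window, medium-term in Redis with a TTL, and long-term in pgvector; the structures below are simulations for illustration only:

```typescript
// Sketch of two of the three memory tiers using in-memory stand-ins:
// short-term token-budget pruning and a TTL-based session store
// (a Redis analogue). Names and shapes are illustrative assumptions.
interface Turn {
  text: string;
  tokens: number;
}

// Short-term: keep the newest turns that fit the token budget,
// pruning the oldest first.
function pruneShortTerm(turns: Turn[], maxTokens: number): Turn[] {
  const kept: Turn[] = [];
  let total = 0;
  // Walk from newest to oldest, keeping turns while the budget allows.
  for (let i = turns.length - 1; i >= 0; i--) {
    if (total + turns[i].tokens > maxTokens) break;
    total += turns[i].tokens;
    kept.unshift(turns[i]);
  }
  return kept;
}

// Medium-term: session entries with an expiry timestamp (TTL analogue).
const sessionStore = new Map<string, { value: string; expiresAt: number }>();

function sessionGet(key: string, now: number): string | undefined {
  const entry = sessionStore.get(key);
  if (!entry || entry.expiresAt <= now) return undefined; // expired
  return entry.value;
}

const pruned = pruneShortTerm(
  [
    { text: "oldest turn", tokens: 60 },
    { text: "middle turn", tokens: 30 },
    { text: "newest turn", tokens: 20 },
  ],
  55,
);

sessionStore.set("user:42", { value: "active-session", expiresAt: 1000 });
```

With a 55-token budget, only the two newest turns survive pruning, and the session entry becomes invisible once its expiry passes.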
All implementations must aim for the performance, security, and reliability targets defined in the project specifications.
**Status**: Documentation and specification phase. This repository currently contains specifications and planning documents; implementation has not yet begun.
When starting implementation:
1. **Read All Documentation**: Review `contextlinc-claude-instructions.md` for complete technical specifications
2. **Follow Architecture**: Adhere strictly to the 11-layer CWA pattern
3. **Start with Foundation**: Implement Phase 1 features before advancing
4. **Test Thoroughly**: Write tests before implementation (TDD approach)
5. **Document Decisions**: Create ADRs for architectural choices
6. **Performance First**: Keep performance targets in mind from day one
7. **Security by Design**: Implement security controls from the start, not as an afterthought
For architectural decisions or implementation questions, consult the technical specifications in `contextlinc-claude-instructions.md`.
When asked to implement a feature:
```
User: "Add document upload to the web app"
Expected Response:
1. Confirm understanding of 11-layer architecture impact
2. Design multi-modal processing pipeline integration
3. Implement with Apache Tika for document parsing
4. Generate embeddings using Voyage Multimodal-3
5. Store vectors in pgvector with metadata
6. Update context layer to include document references
7. Write comprehensive tests
8. Document API endpoints and data flow
```
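Steps 5 and 6 of the expected response above can be sketched with an in-memory vector store. An array stands in for pgvector, and the embeddings are hand-written placeholders rather than Voyage Multimodal-3 output; `StoredDoc`, `cosine`, and `topK` are illustrative names:

```typescript
// Sketch of storing document vectors with metadata and retrieving the
// most relevant ones by cosine similarity. An in-memory array stands in
// for pgvector; embeddings are hand-written placeholders.
interface StoredDoc {
  id: string;
  embedding: number[];
  metadata: Record<string, string>;
}

const vectorStore: StoredDoc[] = [];

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k stored documents most similar to the query embedding.
function topK(query: number[], k: number): StoredDoc[] {
  return [...vectorStore]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

vectorStore.push(
  { id: "doc-1", embedding: [1, 0, 0], metadata: { title: "Intake form" } },
  { id: "doc-2", embedding: [0, 1, 0], metadata: { title: "Lab results" } },
);

const hits = topK([0.9, 0.1, 0], 1);
```

In production the `topK` query would be a pgvector cosine-distance query rather than an in-process sort, but the relevance ordering is the same.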
---
**Project Context**: BRAINSAIT LTD healthcare technology innovation focused on transforming AI agent interactions through sophisticated context engineering. This represents a paradigm shift from traditional chatbots to intelligent assistants with sophisticated context awareness.