Oros Multi-LLM Blockchain Chat Development
Development guidelines for Oros (formerly "kavachat"), a multi-LLM chat application that bridges user prompts to AI for blockchain task interactions.
Project Overview
Oros is an evolving codebase for a multi-LLM chat application focused on blockchain interactions. The current architecture includes:
1. **Golang Proxy** - Stateless server between frontend and LLM that routes chat requests, handles prompt engineering, streams responses, and prevents code injection
2. **React Frontend** - Chat interface with wallet connection and real-time streaming replies
3. **Comprehensive Testing** - Unit tests (Go/TypeScript), integration tests, and Docker-based e2e checks
**Marketing tagline:** "Oros: bring deAI to any dApp"
Future Vision
Multi-LLM support (community open models + GPT, deModels)Ephemeral/partial memory for conversation contextMulti-chain interactions and bridgingAI agents proposing on-chain transactionsAdditional interfaces (Telegram, mobile, etc.)Development Instructions
1. Architecture Principles
**Maintain Stateless Proxy Design:**
Keep the Go proxy straightforward and statelessDefer complex session logic to future expansionsMaintain minimal dependencies for fast iterationDocument critical behaviors in tests**Code Organization:**
Separate concerns between proxy logic, UI, and integrationUse clear function signatures and type definitionsKeep components modular for future multi-LLM expansion2. Testing Requirements
**Comprehensive Test Coverage:**
Write unit tests for all critical Go proxy functionsCreate TypeScript tests for React componentsBuild integration tests verifying LLM response correctnessUse Docker environment for end-to-end validationDocument expected behaviors through test casesEnsure tests pass before any merge**Test Focus Areas:**
Prompt engineering and injection preventionStreaming response handlingWallet connection flowsError handling and resilienceLLM integration points3. Workflow & PRs
**Branch Strategy:**
Create short-lived feature branches off main/devUse descriptive branch names (e.g., `feature/multi-llm-support`)**Pull Request Process:**
1. Open PR with succinct description
2. Reference relevant issue or user story
3. Tag appropriate reviewers (product lead, lead engineer, domain experts)
4. Ensure all tests pass
5. Address review feedback inline
6. Merge with squash commit when approved
**Commit Standards:**
Write clear, atomic commitsUse conventional commit format when possibleKeep commit history clean and reviewable4. Incremental Development
**Feature Implementation:**
Prioritize small, frequent PRs over large changesUse feature flags or environment configs for new functionalityEnsure backward compatibility during transitionsTest in isolation before integration**Configuration Management:**
Use environment variables for LLM endpointsMake new features configurableDocument all config options5. Code Quality
**Linting & Standards:**
Run linters before committingFollow Go and TypeScript/React best practicesMaintain consistent code style across codebase**CI/CD Pipeline (GitHub Actions):**
Linting checksUnit and integration test executionBuild validation for Go and ReactDocker build for e2e tests6. Documentation
**Code Documentation:**
Add inline comments for complex logicDocument all exported functions/componentsMaintain clear API endpoint documentationUpdate .cursorrules as architecture evolves**Architectural Changes:**
Propose major changes in `docs/architecture` or ADRsDiscuss with team before implementingDocument decisions and trade-offs7. User Experience Focus
**Frontend Development:**
Prioritize smooth chat interactionsEnsure responsive streaming updatesHandle loading and error states gracefullyDesign for future blockchain transaction flows**Developer Experience:**
Keep setup instructions clearMaintain readable function signaturesProvide helpful error messagesDocument integration patterns8. Communication
**Channels:**
**Slack:** Day-to-day engineering and urgent matters**GitHub:** Issues, PRs, bug tracking, code feedback**Stand-ups:** Regular or async team updates**Feedback:**
Provide inline PR comments for code suggestionsEscalate architectural discussions to team docsKeep conversations focused and actionable9. Versioning
Follow semantic versioning (currently v0.x.x in early stage)Frequent minor/patch releases expectedDocument breaking changes clearly10. Future Considerations
**Plan for Evolution:**
Structure code to accommodate multi-LLM switchingDesign APIs with multi-chain support in mindConsider memory/session patterns for future implementationKeep mobile/alternative interface expansion viableKey Constraints
**Stateless Backend:** No session storage in current proxy**Early Stage:** Architecture will evolve; maintain flexibility**Security:** Always validate inputs and prevent code injection**Performance:** Optimize for streaming and real-time responsesCommon Questions
**Q: Are conversations stored in the backend?**
A: Currently no. The proxy is stateless. Any memory is ephemeral in the UI or planned for future enhancement.
**Q: What's the main development goal right now?**
A: Solidify core functionality (chat proxy + tests + minimal UI) to enable easy expansion to multi-LLM and multi-chain usage.
**Q: How do I add a new LLM provider?**
A: Design for pluggability. Add provider logic behind feature flags, ensure tests cover new integration, and document configuration requirements.
Example Workflow
```bash
1. Create feature branch
git checkout -b feature/streaming-optimization
2. Implement changes with tests
... develop ...
3. Run tests locally
go test ./...
npm test
4. Lint code
golangci-lint run
npm run lint
5. Build and verify
docker-compose up --build
6. Open PR
git push origin feature/streaming-optimization
Create PR on GitHub with description
```
Notes
This is living documentation - update as the project evolvesKeep the team informed of architectural changesPrioritize clarity and maintainability over clevernessTest-driven development is encouragedSecurity is paramount when handling blockchain interactions