Development guidelines for WitsV3, an async-first LLM orchestration system built around a CLI-first approach, the ReAct (Reason-Act-Observe) pattern, and a tool registry architecture.
Before starting any work:
1. **Read `PLANNING.md`** to understand WitsV3's LLM wrapper architecture and goals
2. **Check `TASK.md`** for current tasks and add new tasks with dates if not listed
3. Understand that WitsV3 is an **async-first LLM orchestration system** with:
- CLI-first approach
- ReAct pattern (Reason-Act-Observe)
- Tool registry for extensibility
4. **Core components structure**:
- `agents/` - Agent implementations (BaseAgent, Orchestrators, Control Center)
- `core/` - Core functionality (config, LLM interface, memory, schemas)
- `tools/` - Tool implementations extending BaseTool
When writing code:
1. **Use Python 3.10+** with type hints throughout all code
2. **All agent/tool methods must be async** - the system is fully asynchronous
3. **Follow PEP8** and format code with black
4. **Use Pydantic models** for data validation (reference `core/schemas.py`)
5. **Import conventions**:
- Use relative imports within packages
- Use absolute imports for cross-package references
6. **Handle async operations properly**:
- Use `await` for all I/O operations
- Use `AsyncGenerator` for streaming responses
- Never use synchronous I/O operations
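The async rules above can be sketched as a minimal streaming helper. The names here (`stream_tokens`, `main`) are illustrative only, not part of the WitsV3 API:

```python
import asyncio
from typing import AsyncGenerator


async def stream_tokens(prompt: str) -> AsyncGenerator[str, None]:
    """Yield chunks as they become available instead of blocking on a full response."""
    for token in prompt.split():
        # Reason: stand-in for awaiting real async I/O (e.g., an HTTP chunk)
        await asyncio.sleep(0)
        yield token


async def main() -> list[str]:
    # Consume the stream with `async for`, never by blocking on a full result
    return [chunk async for chunk in stream_tokens("hello async world")]


print(asyncio.run(main()))  # → ['hello', 'async', 'world']
```

Returning an `AsyncGenerator` lets callers render partial output immediately, which is the same shape agents use for `StreamData`.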
Maintain clean, modular code:
1. **Maximum 500 lines per file** - split into modules when approaching this limit
2. **Maintain existing directory structure**:
- `agents/` - All agent implementations must extend BaseAgent
- `core/` - Core system functionality only
- `tools/` - All tool implementations must extend BaseTool
3. **Adding new tools**:
- Create in `tools/` directory
- Extend BaseTool class
- Register with `get_llm_description()` method
4. **Adding new agents**:
- Create in `agents/` directory
- Extend BaseAgent class
- Implement required async methods
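As a sketch of the agent contract, the snippet below stubs out `BaseAgent` so it runs standalone; the real base class and its required method names live in `agents/`, so treat `run` and the event dicts as illustrative assumptions:

```python
import asyncio
from typing import Any, AsyncGenerator


class BaseAgent:
    """Stand-in stub for agents' BaseAgent so this sketch is self-contained."""


class EchoAgent(BaseAgent):
    """Minimal agent showing the async-generator contract new agents implement."""

    async def run(self, goal: str) -> AsyncGenerator[dict[str, Any], None]:
        # Reason: agents stream structured events rather than returning one blob
        yield {"type": "thinking", "content": f"Considering goal: {goal}"}
        yield {"type": "action", "content": "echo", "args": {"text": goal}}


async def collect(goal: str) -> list[dict[str, Any]]:
    return [event async for event in EchoAgent().run(goal)]


print(len(asyncio.run(collect("say hi"))))  # → 2
```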
Ensure comprehensive testing:
1. **Create pytest tests** in `/tests` directory mirroring source structure
2. **Test async functions** using pytest-asyncio
3. **Mock external services** (Ollama, file system) in tests
4. **Minimum coverage per feature**:
- Happy path test
- Error handling test
- Edge case test
5. **Run existing test functions** at bottom of modules when updating code
6. Test with **Ollama running** for integration tests
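Mocking the Ollama client keeps unit tests fast and offline. A sketch using `unittest.mock.AsyncMock` (the `generate` method name is an assumption, not the real client API):

```python
import asyncio
from unittest.mock import AsyncMock

import pytest


@pytest.mark.asyncio
async def test_generate_with_mocked_ollama():
    # AsyncMock makes awaited calls return canned values without a live server
    client = AsyncMock()
    client.generate.return_value = "mocked completion"

    result = await client.generate(model="llama3", prompt="hi")

    assert result == "mocked completion"
    client.generate.assert_awaited_once()


# Runs cleanly with no Ollama server present
asyncio.run(test_generate_with_mocked_ollama())
```

Reserve real-server calls for the integration tests mentioned above; everything else should pass with Ollama stopped.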
Follow these core patterns:
1. **StreamData for agent communication**:
- Use `stream_thinking()` for reasoning steps
- Use `stream_action()` for actions
- Follow existing streaming patterns
2. **Tool Registry pattern**:
- All tools must register with `get_llm_description()`
- Tools should provide clear descriptions for LLM consumption
3. **Memory Manager integration**:
- Use `store_memory()` for persistence
- Use `search_memory()` for retrieval
- Ensure MemorySegment serialization works
4. **Config-driven approach**:
- Use WitsV3Config for all settings
- Never hardcode configuration values
- Reference `config.yaml` for settings
5. **Logging pattern**:
- Use `self.logger` in all classes
- Log important operations and errors
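The config-driven approach can be sketched with a Pydantic model; the field names below are illustrative stand-ins, since the real `WitsV3Config` fields live in `core/config.py`:

```python
from pydantic import BaseModel


class OllamaSettings(BaseModel):
    url: str = "http://localhost:11434"
    default_model: str = "llama3"


class WitsV3ConfigSketch(BaseModel):
    """Illustrative stand-in for the real WitsV3Config."""

    ollama: OllamaSettings = OllamaSettings()
    log_level: str = "INFO"


# In WitsV3 these values would be parsed from config.yaml at startup,
# never hardcoded at the call site.
config = WitsV3ConfigSketch()
print(config.ollama.default_model)  # → llama3
```

Validation happens once at load time, so a typo in `config.yaml` fails loudly instead of surfacing deep inside an agent.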
When working with LLM functionality:
1. **Ollama is the primary LLM** - ensure compatibility
2. **Support streaming responses** via AsyncGenerator
3. **Parse LLM responses robustly** using ResponseParser
4. **Handle tool calls** through proper ToolCall/ToolResult schemas
5. **Handle connection failures gracefully** with appropriate error messages
6. **Unicode handling** is important - ensure clean text output
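One way to handle connection failures gracefully is retry with exponential backoff, then a clear error. This is a self-contained sketch; `LLMUnavailableError` and `generate_with_retries` are hypothetical names, not WitsV3 APIs:

```python
import asyncio
from typing import Awaitable, Callable


class LLMUnavailableError(RuntimeError):
    """Raised when the backend stays unreachable after all retries."""


async def generate_with_retries(
    call: Callable[[], Awaitable[str]],
    retries: int = 3,
    base_delay: float = 0.01,
) -> str:
    for attempt in range(1, retries + 1):
        try:
            return await call()
        except ConnectionError as exc:
            if attempt == retries:
                # Reason: surface an actionable message instead of a raw traceback
                raise LLMUnavailableError(
                    f"Ollama unreachable after {retries} attempts; is the server running?"
                ) from exc
            await asyncio.sleep(base_delay * 2**attempt)  # exponential backoff
    raise AssertionError("unreachable")


async def demo() -> str:
    attempts = 0

    async def flaky() -> str:
        nonlocal attempts
        attempts += 1
        if attempts < 3:
            raise ConnectionError("connection refused")
        return "ok"

    return await generate_with_retries(flaky)


print(asyncio.run(demo()))  # → ok
```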
Maintain clear documentation:
1. **Update README.md** when adding features or changing setup procedures
2. **Use Google-style docstrings** for all functions:
```python
async def method_name(self, param: str) -> AsyncGenerator[StreamData, None]:
    """
    Brief description of method.

    Args:
        param: Description of parameter

    Yields:
        StreamData objects for streaming responses
    """
```
3. **Comment non-obvious logic** with `# Reason:` explanations
4. Document architectural decisions in code comments
Track work properly:
1. **Update TASK.md** after completing tasks
2. **Add discovered work** to TASK.md under "Discovered During Work"
3. **Log progress** in task descriptions with dates
4. **Commit messages**:
- Summarize actual changes (e.g., "Remove historical scripts and ignore log directory")
- Follow repository commit style guidelines
- Include detailed descriptions in PR descriptions
**Never do these things:**
- Use synchronous I/O anywhere in agent or tool code
- Hardcode configuration values; read them through WitsV3Config / `config.yaml`
- Let a single file exceed 500 lines
- Add tools or agents outside `tools/` and `agents/`, or skip extending BaseTool/BaseAgent
- Ship a feature without happy-path, error-handling, and edge-case tests

**Always do these things:**
- Use Python 3.10+ type hints and format with black
- Make agent and tool methods async, streaming via `AsyncGenerator`
- Register tools through `get_llm_description()`
- Validate data with Pydantic models from `core/schemas.py`
- Update TASK.md after completing work (and README.md when setup or features change)
Before submitting code, check your work against these reference examples. A complete tool following WitsV3 patterns:
```python
import asyncio
import logging
from typing import AsyncGenerator

from tools.base import BaseTool


class ExampleTool(BaseTool):
    """Example tool following WitsV3 patterns."""

    def __init__(self):
        super().__init__()
        self.logger = logging.getLogger(__name__)

    async def execute(self, param: str) -> AsyncGenerator[str, None]:
        """
        Execute the tool operation.

        Args:
            param: Input parameter

        Yields:
            Results as strings
        """
        if param is None:
            # Reason: fail fast on invalid input so callers get a clear error
            raise ValueError("param must not be None")
        self.logger.info(f"Executing with param: {param}")
        # Reason: Async operation for I/O
        result = await self._async_operation(param)
        yield result

    async def _async_operation(self, param: str) -> str:
        # Placeholder for real async I/O (e.g., an Ollama call)
        await asyncio.sleep(0)
        return f"processed: {param}"

    def get_llm_description(self) -> dict:
        """Tool description for LLM registry."""
        return {
            "name": "example_tool",
            "description": "Example tool demonstrating WitsV3 patterns",
            "parameters": {"param": "string"},
        }
```
Matching pytest coverage for the example tool:

```python
import pytest

from tools.example_tool import ExampleTool


@pytest.mark.asyncio
async def test_example_tool_happy_path():
    """Test normal operation."""
    tool = ExampleTool()
    result = []
    async for chunk in tool.execute("test"):
        result.append(chunk)
    assert len(result) > 0


@pytest.mark.asyncio
async def test_example_tool_error_handling():
    """Test error handling."""
    tool = ExampleTool()
    with pytest.raises(ValueError):
        async for _ in tool.execute(None):
            pass
```