Development guide for prototype-termai, a Terminal AI Assistant that provides real-time AI-powered assistance for terminal operations using local LLM models through Ollama. The system watches terminal activity and offers contextual help, error solutions, and command suggestions in a non-intrusive sidebar.
**Current Status**: ~85% complete. All core features are functional and the application is ready for use; the `config/` and `utils/` modules remain partially implemented (see the layout below).
```
src/termai/
├── terminal/ - Terminal emulation (TerminalEmulator, OutputBuffer, CommandHistory, TerminalManager)
├── ui/ - TUI components (TerminalAIApp, TerminalWidget, AISidebar)
├── ai/ - AI integration (8 modules: OllamaClient, ContextManager, RealtimeAnalyzer, etc.)
├── config/ - Configuration management (partial implementation)
└── utils/ - Core utilities (logger - partial implementation)
tests/ - Test files (terminal, ui, ollama, integration)
docs/ - Documentation including plan/ and status.md
examples/ - Usage examples (basic_usage.py)
```
When working with this codebase, start by verifying the development environment:
1. Check that Python 3.12+ is installed
2. Ensure uv package manager is available
3. Verify Ollama is running (`ollama serve`)
4. Check that at least one model is installed (`ollama list`)
5. Review the `.env` file for configuration:
```
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3.2:1b
OLLAMA_TIMEOUT=30
APP_LOG_LEVEL=INFO
AI_MAX_CONTEXT_LENGTH=20
AI_RESPONSE_MAX_TOKENS=500
```
Before modifying code:
1. **Review module structure** in `src/termai/` to understand component boundaries
2. **Check implementation status** in `docs/status.md` for current state
3. **Read relevant checkpoint documentation** in `docs/plan/checkpoints/` for completed features
4. **Examine existing tests** in `tests/` to understand expected behavior
When implementing changes:
1. **Run existing tests first** to establish baseline:
```bash
uv run pytest tests/ -v
```
2. **Make focused, incremental changes** following the architecture patterns:
- Terminal operations go in `terminal/` module
- UI components go in `ui/` module
- AI logic goes in `ai/` module
- Keep modules loosely coupled via the event bus (a minimal sketch of this pattern appears after this list)
3. **Test changes immediately** after implementation:
```bash
# Run specific test module
uv run pytest tests/test_[module].py -v
# Or run the application
uv run termai --debug --log-level=DEBUG
```
4. **Run code quality checks**:
```bash
uv run pre-commit run --all-files
uv run ruff check .
uv run mypy .
```
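The event-bus coupling mentioned in step 2 can be illustrated with a minimal publish/subscribe sketch. The project's actual bus API is not documented in this guide, so the names below (`EventBus`, `subscribe`, `publish`, the `command_completed` event) are assumptions, not the real interface:
```python
# Hypothetical publish/subscribe event bus; the project's real bus API may differ.
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[event].append(handler)

    def publish(self, event: str, payload: Any) -> None:
        for handler in self._subscribers[event]:
            handler(payload)


# The point of the pattern: the terminal module publishes, the AI module
# subscribes, and neither imports the other directly.
bus = EventBus()
bus.subscribe("command_completed", lambda ctx: print(f"analyze: {ctx}"))
bus.publish("command_completed", {"command": "ls", "exit_code": 0})
```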
#### Adding New AI Triggers
1. Open `src/termai/ai/triggers.py`
2. Add the trigger in `_init_default_triggers()` (sketched below)
3. Set priority (1-10, higher = more important)
4. Define pattern and analysis type
5. Test with `uv run pytest tests/test_ollama.py`
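A hypothetical sketch of steps 2-4; the real `Trigger` class in `src/termai/ai/triggers.py` may use different field names, so treat everything below as illustrative:
```python
# Illustrative trigger definition; field names are assumptions, not the real API.
import re
from dataclasses import dataclass


@dataclass
class Trigger:
    pattern: str        # regex matched against command output
    analysis_type: str  # e.g. "error", "suggestion"
    priority: int       # 1-10, higher = more important

    def matches(self, output: str) -> bool:
        return re.search(self.pattern, output, re.IGNORECASE) is not None


# A new trigger that fires on permission errors:
permission_trigger = Trigger(
    pattern=r"permission denied",
    analysis_type="error",
    priority=8,
)
assert permission_trigger.matches("bash: /etc/shadow: Permission denied")
```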
#### Modifying Terminal Integration
1. Locate the AI hooks in `src/termai/terminal/manager.py`, in the `_end_command()` method
2. Command context structure (sketched after this list): command → directory → exit_code → output → error → duration
3. Integration flows through `TerminalManager.ai_analyzer`
4. Test with `uv run python tests/test_terminal.py`
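Based on the field order in step 2, the context payload likely resembles the dictionary below; the exact key names used in `_end_command()` are assumptions:
```python
# Hypothetical shape of the command context assembled in _end_command().
command_context = {
    "command": "git push origin main",
    "directory": "/home/user/project",
    "exit_code": 1,
    "output": "",
    "error": "fatal: could not read from remote repository",
    "duration": 2.4,  # seconds
}
```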
#### Updating UI Components
1. The main layout is in `src/termai/ui/app.py` (65% terminal / 35% AI sidebar split)
2. Terminal display in `src/termai/ui/terminal_widget.py`
3. AI sidebar in `src/termai/ui/ai_sidebar.py`
4. Keyboard shortcuts (sketched after this list): Ctrl+L (clear), Ctrl+A (AI toggle), F1 (help)
5. Test with `uv run python tests/test_ui.py`
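The shortcuts in step 4 map to key bindings. Assuming the TUI is built on Textual (suggested by the widget-based structure), the bindings might look like this sketch; the real action names in `app.py` may differ:
```python
# Sketch of the keyboard shortcuts as Textual bindings; action names are assumptions.
from textual.app import App


class TerminalAIApp(App):
    BINDINGS = [
        ("ctrl+l", "clear", "Clear terminal"),
        ("ctrl+a", "toggle_ai", "Toggle AI sidebar"),
        ("f1", "help", "Show help"),
    ]

    def action_clear(self) -> None: ...      # clears the terminal widget
    def action_toggle_ai(self) -> None: ...  # shows/hides the AI sidebar
    def action_help(self) -> None: ...       # opens the help screen
```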
#### AI Response Flow
When modifying AI processing:
1. **Entry point**: `ContextManager.process_command()` receives command completion
2. **Trigger evaluation**: Patterns matched in `ai/triggers.py`
3. **Async processing**: `RealtimeAnalyzer` queues and processes requests
4. **Response caching**: a 30-40% hit rate reduces redundant LLM calls (sketched below)
5. **UI update**: Callbacks notify `AISidebar` for display
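A minimal sketch of the caching idea in step 4, assuming a key derived from the command, exit code, and error text; the real cache in the `ai/` module may use a different key and an eviction policy, and `run_llm_analysis` is a stand-in for the actual Ollama call:
```python
# Illustrative response cache; key derivation and eviction are assumptions.
import hashlib

_cache: dict[str, str] = {}


def run_llm_analysis(command: str, error: str) -> str:
    # Placeholder for the real Ollama call.
    return f"analysis of {command!r}: {error}"


def cache_key(command: str, exit_code: int, error: str) -> str:
    raw = f"{command}|{exit_code}|{error}"
    return hashlib.sha256(raw.encode()).hexdigest()


def get_or_analyze(command: str, exit_code: int, error: str) -> str:
    key = cache_key(command, exit_code, error)
    if key in _cache:  # cache hit: skip the redundant LLM call
        return _cache[key]
    response = run_llm_analysis(command, error)
    _cache[key] = response
    return response
```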
Always test at multiple levels:
1. **Unit tests**: Test individual components in isolation
```bash
uv run pytest tests/test_terminal.py -v
uv run pytest tests/test_ui.py -v
uv run pytest tests/test_ollama.py -v
```
2. **Integration tests**: Verify component interactions
```bash
uv run pytest tests/test_integration.py -v
```
3. **Manual testing**: Run the application
```bash
uv run termai --debug
# Try various commands to test AI responses
```
4. **Performance testing**: Profile if needed
```bash
uv run python -m cProfile -o profile.stats src/termai/main.py
```
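To inspect the resulting `profile.stats`, the standard library's `pstats` module works directly:
```python
# Print the 20 functions with the highest cumulative time.
import pstats

stats = pstats.Stats("profile.stats")
stats.sort_stats("cumulative").print_stats(20)
```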
#### Adding a New Feature
1. Read `docs/status.md` to understand current implementation state
2. Identify which module the feature belongs to
3. Check if related tests exist in `tests/`
4. Implement feature following existing patterns
5. Add tests for new functionality
6. Update `docs/status.md` with progress
7. Run full test suite before committing
#### Fixing a Bug
1. Check `docs/plan/troubleshooting.md` for known issues
2. Write a failing test that reproduces the bug (example sketch below)
3. Fix the bug in the appropriate module
4. Verify the test passes
5. Run regression tests to ensure no breakage
6. Document the fix if it's a common issue
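For step 2, a minimal failing test might look like the sketch below; the class under test, its import path, and the expected behavior are all illustrative, not taken from the real codebase:
```python
# Hypothetical regression test for a reported bug.
from termai.terminal import CommandHistory  # assumed import path


def test_history_deduplicates_consecutive_commands():
    history = CommandHistory()
    history.add("ls -la")
    history.add("ls -la")  # the reported bug: duplicate consecutive entries
    assert history.entries == ["ls -la"]  # assumed attribute
```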
#### Performance Optimization
1. Profile the application to identify bottlenecks
2. Check performance characteristics:
- Startup time: ~2-3 seconds target
- Command response: < 50ms target
- AI analysis: 1-2 seconds average
- Memory usage: ~150-200MB target
3. Optimize hot paths (likely in `OutputBuffer` or `RealtimeAnalyzer`)
4. Measure performance before and after the change (a simple timing harness is sketched below)
5. Update documentation with new benchmarks
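For step 4, a simple stdlib timing harness is enough for before/after comparisons; `hot_path()` is a placeholder for whatever function you are optimizing:
```python
# Measure the median runtime of a hot path over repeated runs.
import statistics
import time


def hot_path() -> None:
    sum(range(100_000))  # placeholder workload


samples = []
for _ in range(50):
    start = time.perf_counter()
    hot_path()
    samples.append(time.perf_counter() - start)

print(f"median: {statistics.median(samples) * 1000:.2f} ms")
```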
Before submitting changes:
1. Run `uv run pre-commit run --all-files` and confirm all hooks pass
2. Ensure `uv run ruff check .` shows no errors
3. Verify `uv run mypy .` type checking passes
4. Confirm all tests pass: `uv run pytest tests/ -v`
5. Test application manually: `uv run termai`
When making significant changes:
1. Update `docs/status.md` with implementation progress
2. Add troubleshooting entries to `docs/plan/troubleshooting.md` if needed
3. Update checkpoint validation in `docs/plan/checkpoints/` if completing features
4. Keep CLAUDE.md updated with new patterns or architectural decisions
For performance issues:
1. Check startup time (should be ~2-3 seconds)
2. Verify command response time (< 50ms)
3. Monitor AI analysis time (1-2 seconds average)
4. Profile memory usage (~150-200MB expected)
5. Review response cache hit rate (30-40% target)
For debugging:
1. Enable debug logging: `uv run termai --debug --log-level=DEBUG`
2. Check logs for async processing issues
3. Verify Ollama server is running and accessible
4. Test with different models if needed
5. Use `tests/test_integration.py` to isolate issues
#### Running the Application
```bash
uv run termai
uv run termai --debug --log-level=DEBUG
uv run python -m termai
uv run python src/termai/main.py
```
#### Running Tests
```bash
uv run pytest tests/ -v
uv run pytest tests/test_terminal.py::test_command_execution -v
uv run pytest tests/ --cov=src/termai --cov-report=html
```
#### Typical Development Workflow
```bash
uv run pytest tests/test_[module].py -v
uv run ruff check .
uv run mypy .
uv run termai --debug
git add .
git commit -m "description of changes"
```