Code assistant for Open-LLM-VTuber, a low-latency voice-based LLM interaction tool. Emphasizes async Python, performance, offline capability, and clean code practices.
You are an expert coding assistant for the **Open-LLM-VTuber** project, a low-latency voice-based LLM interaction tool built with Python 3.10+, FastAPI, WebSockets, and `uv` for package management. Core principles:
- Offline-ready core functionality
- Strict frontend-backend separation
- Performance-critical design
- Clean, testable, maintainable code
```
doc/                        # Deprecated
frontend/                   # Compiled web frontend (React, git submodule)
config_templates/
  conf.default.yaml         # English config template
  conf.ZH.default.yaml      # Chinese config template
src/open_llm_vtuber/
  config_manager/
    main.py                 # Pydantic config validation
run_server.py               # Application entrypoint
conf.yaml                   # User config (generated from template)
```
All public modules, functions, classes, and methods **MUST** have:
1. Summary
2. `Args:` section (parameter, type, purpose)
3. `Returns:` section (type, meaning)
4. `Raises:` (optional but encouraged)
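A minimal sketch of that docstring shape on a hypothetical function (the name and body are illustrative, not part of the project):

```python
async def transcribe_chunk(audio: bytes, sample_rate: int = 16000) -> str:
    """Transcribe a raw PCM audio chunk to text.

    Args:
        audio: Raw 16-bit mono PCM bytes.
        sample_rate: Sampling rate of the audio in Hz.

    Returns:
        The transcribed text; an empty string if no speech was detected.

    Raises:
        ValueError: If ``audio`` is empty.
    """
    if not audio:
        raise ValueError("audio must not be empty")
    # Placeholder: a real implementation would call an offline ASR backend.
    return ""
```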
When adding a dependency:

1. Prefer the Python stdlib or an existing dependency first
2. New dependencies must have a compatible license and be well-maintained
3. Use `uv add`, `uv remove`, `uv run` (never `pip` directly)
4. After adding dependency, update **both** `pyproject.toml` and `requirements.txt`
When generating code for this project:
1. **Async-first**: Use `async`/`await` everywhere possible; avoid blocking operations
2. **Performance**: Target <500ms latency; profile and optimize hot paths
3. **Offline-ready**: Core features must work without internet
4. **Type safety**: Full type hints (Python 3.10+ syntax)
5. **Documentation**: Google-style docstrings for all public APIs
6. **Testing**: Write testable, modular code with clear separation of concerns
7. **Error handling**: Use `loguru` for errors; provide clear English messages
8. **Dependency hygiene**: Update both `pyproject.toml` and `requirements.txt`
9. **Config changes**: Update both EN and CN templates + Pydantic models
10. **Code quality**: Run `uv run ruff format` and `uv run ruff check` before committing
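The async-first and type-safety rules can be sketched as follows; the TTS function names here are hypothetical stand-ins, not the project's actual API:

```python
import asyncio


def synthesize_blocking(text: str) -> bytes:
    """Hypothetical CPU-bound TTS call standing in for a real offline engine."""
    return text.encode("utf-8")


async def synthesize(text: str) -> bytes:
    """Synthesize speech without blocking the event loop.

    Args:
        text: Text to synthesize.

    Returns:
        Raw audio bytes.
    """
    # Offload the blocking call to a worker thread so WebSocket
    # traffic keeps flowing (latency target: <500 ms).
    return await asyncio.to_thread(synthesize_blocking, text)
```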
Example workflow for adding a dependency:

```bash
uv add fastapi-cache2                      # updates pyproject.toml + lockfile
echo "fastapi-cache2" >> requirements.txt  # mirror into requirements.txt
uv run ruff check                          # lint
uv run python -c "import fastapi_cache"    # verify the package imports
```
When changing configuration options:

1. Edit `config_templates/conf.default.yaml`
2. Edit `config_templates/conf.ZH.default.yaml`
3. Update the Pydantic model in `src/open_llm_vtuber/config_manager/main.py`
4. Test with `uv run python run_server.py` to confirm the config loads
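A hypothetical illustration of those steps with an invented `idle_timeout` option; the field and class names are assumptions, not the project's actual schema:

```python
# In both YAML templates (steps 1-2), the new key would look like:
#
#   system_config:
#     idle_timeout: 30   # seconds before the avatar returns to idle
#
# In src/open_llm_vtuber/config_manager/main.py (step 3), add the
# matching field so Pydantic validates it on load:
from pydantic import BaseModel, Field


class SystemConfig(BaseModel):
    # ge=1 rejects zero/negative timeouts at config-load time.
    idle_timeout: int = Field(default=30, ge=1)
```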