An AI agent system (Skwaq, from the Lushootseed word for Raven) for researching software vulnerabilities. This skill guides implementation according to the project's specification documents, maintains status tracking, and enforces strict code quality standards.
1. **Check current status**:
- Read `/Specifications/status.md` to understand current milestone
- Read `/Specifications/VulnerabilityAssessmentCopilot.md` for project context
- Read `/Specifications/ImplementationPlan.md` for implementation steps
2. **Update prompt history** (MANDATORY):
- Open `/Specifications/prompt-history.md`
- Add new entry: "## Prompt N (YYYY-MM-DD)"
- Record user's exact request
- After completing work, add 3-7 bullet points summarizing your actions
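A prompt-history entry following the steps above might look like the following sketch (the heading format comes from the instructions; the request and bullet wording are hypothetical):

```markdown
## Prompt 12 (2025-01-15)

**Request:** "Implement the knowledge ingestion module"

**Actions:**
- Read status.md and ImplementationPlan.md to locate the current step
- Implemented the ingestion module skeleton
- Added unit tests and ran the full suite with coverage
- Updated status.md with progress
```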
**Modularity & Design**:
**Code Quality Standards**:
**Project Structure**:
**Error Handling**:
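This heading has no accompanying detail in this copy. As an illustrative sketch only (the class and function names are hypothetical, not from the Skwaq codebase), error handling in a project like this typically means raising specific domain exceptions with context rather than returning sentinel values or swallowing failures:

```python
class IngestionError(Exception):
    """Raised when a knowledge source cannot be ingested."""


def load_source(path: str) -> str:
    """Read a knowledge source file, wrapping I/O failures in a domain error."""
    try:
        with open(path, encoding="utf-8") as f:
            return f.read()
    except OSError as exc:
        # Re-raise with context instead of returning None or hiding the failure
        raise IngestionError(f"Cannot read knowledge source: {path}") from exc
```

Callers can then catch `IngestionError` at the agent boundary and record the blocker, rather than handling raw `OSError` everywhere.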
**Before Moving to Next Milestone**:
1. Run full test suite: `pytest tests/ --cov=skwaq`
2. Ensure ALL tests pass (no failures anywhere in project)
3. If tests fail:
- STOP immediately
- Think carefully about the code being tested
- Consider necessary setup conditions
- Carefully construct/fix the test to validate functionality
- Fix either the code OR the test
- Re-run ALL tests
4. Only proceed when all tests pass
**Test Writing**:
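As a sketch of the test style implied by the workflow below (the function under test is hypothetical and does not come from the Skwaq codebase), each test should exercise one behavior with assertions that make failures easy to localize:

```python
# Hypothetical unit under test; in a real step this would live under skwaq/.
def classify_severity(cvss_score: float) -> str:
    """Map a CVSS base score to a severity label."""
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError(f"CVSS score out of range: {cvss_score}")
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"


def test_classify_severity_boundaries():
    # One assertion per boundary value keeps failure output readable
    assert classify_severity(10.0) == "critical"
    assert classify_severity(9.0) == "critical"
    assert classify_severity(7.0) == "high"
    assert classify_severity(4.0) == "medium"
    assert classify_severity(0.0) == "low"
```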
**For Each Implementation Step**:
1. **Read the plan**: Check `/Specifications/ImplementationPlan.md` for current step
2. **Implement**: Write code following guidelines above
3. **Run linters**: `pre-commit run --all-files`
4. **Run tests**: `pytest tests/ --cov=skwaq`
5. **Fix failures**: Do NOT proceed if any test fails
6. **Commit**: Use git commit after each completed step
7. **Update status**: Modify `/Specifications/status.md` with progress
8. **Update prompt history**: Add entry to `/Specifications/prompt-history.md`
**Commands**:
```bash
# Run the full test suite with coverage
pytest tests/ --cov=skwaq

# Run a single test
pytest tests/path/to/test_file.py::TestClass::test_function

# Run all pre-commit hooks (linters, formatters)
pre-commit run --all-files

# Format code
black .

# Type-check
mypy .

# Run the full local CI pipeline
./scripts/ci/run-local-ci.sh
```
**Autogen Usage**:
**Azure & GitHub Access**:
**Database**:
1. **Update status**: Record the blocker in `/Specifications/status.md`
2. **Update prompt history**: Document the issue
3. **Ask for clarification**: Do not guess or assume requirements
4. **Do not proceed**: Wait for user guidance before continuing
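A blocker recorded in `status.md` under this protocol might look like the following sketch (the layout and wording are illustrative, not a format the project prescribes):

```markdown
## Current Status

**Milestone:** Knowledge ingestion (see ImplementationPlan.md)

**Blocker:** The plan references an external service endpoint that is not
specified anywhere in the specification documents. Waiting for user
guidance before continuing.
```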
**Always maintain `/Specifications/status.md`**:
**Always maintain `/Specifications/prompt-history.md`**:
**Do**:
**Don't**:
**Starting work**:
1. User: "Implement the knowledge ingestion module"
2. You: Read status.md, check ImplementationPlan.md, update prompt-history.md
3. You: Implement following code guidelines
4. You: Run tests, fix any failures
5. You: Commit, update status.md and prompt-history.md
**Handling test failures**:
1. Test fails
2. STOP immediately (do not continue implementation)
3. Analyze the test and the code being tested
4. Determine root cause
5. Fix code OR fix test
6. Re-run ALL tests
7. Only proceed when all tests pass
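To make the "fix code OR fix test" decision in step 5 concrete, here is a hypothetical failure (the names are illustrative, not from the project): a test expects an inclusive upper bound, but the code used an exclusive one. Root-cause analysis shows the specification requires inclusivity, so the fix belongs in the code, and the test stays as the statement of intended behavior:

```python
def in_scope(port: int, low: int = 1, high: int = 10) -> bool:
    """Return True if a port falls within the inclusive audit scope."""
    # Before the fix this read `low <= port < high`, which made
    # test_in_scope_includes_upper_bound fail for port == high.
    return low <= port <= high


def test_in_scope_includes_upper_bound():
    assert in_scope(10) is True
    assert in_scope(11) is False
```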