AutoScore Conversion Development Assistant
An AI coding assistant for Python-based projects using FastAPI, Pydantic, and SQLAlchemy. Enforces modular architecture, comprehensive testing, and task-driven development workflows.
What This Skill Does
This skill guides AI agents to follow best practices for a specific Python project architecture:
- Maintains strict modularity (max 500 lines per file)
- Enforces comprehensive Pytest testing for all features
- Tracks tasks systematically in `TASK.md`
- Follows consistent code organization patterns
- Prevents scope creep and over-engineering
- Ensures documentation stays current

Instructions for AI Agent
1. Project Awareness & Context
**At the start of every conversation:**
- Read the root `README.md` to understand intro, tech stack, and directory structure
- Check for `PLANNING.md` to understand architecture, goals, style, and constraints
- Review `TASK.md` before starting work — add new tasks if yours isn't listed (include brief description + today's date)
- Use consistent naming conventions, file structure, and architecture patterns from planning docs
- For more context, find the nearest README or markdown file (repo may contain subprojects)

2. Code Structure & Modularity
**File Size Rule:**
- **Never create files longer than 500 lines of code**
- When approaching this limit, refactor into modules or helper files

**Modular Organization:**
For agent-based code, use this structure:
- `agent.py` - Main agent definition and execution logic
- `tools.py` - Tool functions used by the agent
- `prompts.py` - System prompts (declared standalone, not inside objects/functions)

**Import Style:**
- Use clear, consistent imports
- Prefer relative imports within packages

3. Task Completion
- **Mark completed tasks in `TASK.md` immediately** after finishing
- Add newly discovered sub-tasks or TODOs under a "Discovered During Work" section
- **Commit and push regularly** to prevent losing work, especially at session end

4. Style & Conventions
**Language & Standards:**
- Use Python as the primary language
- Follow the PEP8 style guide
- Use type hints throughout
- Use `pydantic` for data validation
- Use `FastAPI` for APIs
- Use `SQLAlchemy` or `SQLModel` as the ORM

**Naming & Documentation:**
- Use descriptive names so code is self-documenting
- Minimize comments/docstrings — reserve for non-obvious logic only
- For complex logic, add inline `# Reason:` comments explaining **why**, not what
- Declare prompts standalone, not within objects or functions

5. Documentation & Explainability
- Update docs when adding features, changing dependencies, or modifying setup steps
- Comment non-obvious code for mid-level developer comprehension
- Always explain your thought process and rules
- During planning, if existing code could be modified, compare reuse vs. rebuild and explain your choice

6. AI Behavior Rules
**Never:**
- Assume missing context — ask questions if uncertain
- Hallucinate libraries or functions — only use verified Python packages
- Reference file paths or modules before confirming they exist
- Delete or overwrite code unless explicitly instructed or part of `TASK.md`
- Remove TODO comments from code — tell user separately if you think they should be removed

**Always:**
- Confirm file paths and module names exist before using them
- Ask for clarification when uncertain

7. Scope Control & Communication
**Implement ONLY what is requested:**
- No extra functionality unless explicitly asked
- If you think enhancements are needed, **ask for clarification first**

**Reason:** Prevents over-engineering, code bloat, and merge conflicts with other maintainers.
**Communication Style:**
- Be concise — if it fits on a post-it, keep it that way
- Avoid lengthy reports when unnecessary

8. Testing & Reliability
**Pytest Requirements:**
- Create Pytest tests for all new features (functions, classes, routes)
- Place tests in a `/tests` folder mirroring the main app structure (see `tests/README.md`)

**Minimum Test Coverage Per Feature:**
1. One test for expected use
2. One edge case test
3. One failure case test
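For illustration, the three-test minimum might look like this for a hypothetical `slugify` helper. The helper itself is inlined so the example is self-contained; it is not an existing project function:

```python
# tests/utils/test_slugify.py — illustrative sketch only.
import pytest


def slugify(title: str) -> str:
    """Hypothetical helper, inlined here to keep the example runnable."""
    if not title.strip():
        raise ValueError("title must not be empty")
    return "-".join(title.lower().split())


def test_slugify_expected_use():
    # 1. Expected use: a normal title becomes a lowercase slug
    assert slugify("Hello World") == "hello-world"


def test_slugify_edge_case_extra_whitespace():
    # 2. Edge case: repeated and surrounding whitespace is collapsed
    assert slugify("  Hello   World  ") == "hello-world"


def test_slugify_failure_case_empty_title():
    # 3. Failure case: blank input raises a clear error
    with pytest.raises(ValueError):
        slugify("   ")
```

In a real feature, `slugify` would be imported from the app module rather than defined in the test file.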
**Test Maintenance:**
- After updating logic, check if existing tests need updates — do it immediately
- Use mocks when objects require unavailable resources (like API keys)
- See `testing_guide.md` for detailed guidance

9. Iterating on Failures
If you fail to accomplish the instructions:
1. Think of alternate solutions and try multiple approaches
2. Use the Perplexity search tool (if available) to obtain external information
3. Document attempts and failures — report back to user
4. **Remember:** Failure is okay as long as you try intelligently
5. **Critical:** Commit and push changes regularly to prevent work loss
Example Usage
**Starting a new feature:**
1. Read `README.md`, `PLANNING.md`, and `TASK.md`
2. Add task to `TASK.md`: "Add user authentication endpoint - 2026-02-03"
3. Create modular files: `auth/agent.py`, `auth/tools.py`, `auth/prompts.py`
4. Implement with type hints, Pydantic validation, PEP8 compliance
5. Create tests: `tests/auth/test_auth_agent.py` (expected, edge, failure cases)
6. Update docs if new dependencies or setup steps added
7. Mark task complete in `TASK.md`
8. Commit and push
Constraints
- Max 500 lines per file (hard limit)
- Only verified Python packages (no hallucinated imports)
- All new features require Pytest tests
- No code deletion unless explicitly instructed
- No scope expansion without user approval
- Regular commits required (especially before session end)
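As a sketch, the hard 500-line limit could be checked with a small standalone script. This helper is hypothetical and not part of the project:

```python
"""Flag Python files exceeding the project's 500-line hard limit.

Illustrative helper only; the threshold comes from the constraints above.
"""
from pathlib import Path

MAX_LINES = 500  # hard limit from the project constraints


def oversized_files(root: str = ".") -> list[tuple[str, int]]:
    """Return (path, line_count) for every .py file over the limit."""
    results = []
    for path in Path(root).rglob("*.py"):
        with path.open(encoding="utf-8") as handle:
            count = sum(1 for _ in handle)
        if count > MAX_LINES:
            results.append((str(path), count))
    return results


if __name__ == "__main__":
    for file_path, line_count in oversized_files():
        print(f"{file_path}: {line_count} lines (limit {MAX_LINES})")
```

A check like this could run in CI or a pre-commit hook so the limit is enforced mechanically rather than by review alone.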