An LLM-powered linting tool that detects semantic issues in system prompts that cause hallucinations and unpredictable behavior. Based on A.G.I.L.E. principles from the IntentHub framework.
This skill helps you work with the Prompt Semantic Linter (PSL) codebase, a three-stage architecture that:
1. Parses natural language prompts into structured Intermediate Representation (IR)
2. Validates IR against linting rules to detect semantic issues
3. Evaluates prompts against multiple models to measure didactic effectiveness
PSL uses a three-stage pipeline:
**Stage 1: Semantic Parser** (`backend/psl/parser.py`)
**Stage 2: Validator** (`backend/psl/validator.py`)
**Stage 3: Didactic Evaluator** (`backend/psl/evaluation.py`)
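The three stages above can be sketched as a tiny end-to-end flow. Everything below is a hypothetical, simplified stand-in for the real PSL classes (the real parser calls an LLM; here the parse is faked for illustration):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified stand-ins for PSL's real IR and LintError types.
@dataclass
class IR:
    computed_fields: List[str] = field(default_factory=list)
    definitions: List[str] = field(default_factory=list)

@dataclass
class LintError:
    rule: str
    message: str

def parse(prompt: str) -> IR:
    # Stage 1: the real SemanticParser calls an LLM; this fake treats
    # any *_ratio token as a computed field, purely for illustration.
    fields = [w for w in prompt.split() if w.endswith("_ratio")]
    return IR(computed_fields=fields)

def validate(ir: IR) -> List[LintError]:
    # Stage 2: flag computed fields that were never defined.
    return [
        LintError("no-undefined-computed-fields", f"'{f}' has no definition")
        for f in ir.computed_fields
        if f not in ir.definitions
    ]

errors = validate(parse("Report pod_density_ratio per node"))
```

Stage 3 (the didactic evaluator) would then run the prompt against multiple models and score the results; it is omitted here because it requires live API calls.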
1. **Initial Setup**
- Run `make install` to install both backend and frontend dependencies
- Create `backend/.env` with API keys: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GOOGLE_API_KEY`
- Run `make healthcheck` to verify provider connectivity
2. **Running the Application**
- Use `make dev` to run both backend (port 8000) and frontend (port 5173) in parallel
- Backend only: `cd backend && python main.py`
- Frontend only: `cd frontend && npm run dev`
3. **Testing**
- Use `make test` (validates environment) or `cd backend && pytest tests/ -v`
- Tests use real API calls, not mocks
- Environment variables must be loaded via `tests/conftest.py`
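Since tests hit real provider APIs, `tests/conftest.py` has to export the keys from `backend/.env` before any adapter is imported. A minimal sketch of how that loading might look, using only the standard library (the real conftest may differ, e.g. by using python-dotenv):

```python
import os
from pathlib import Path

def load_env(path: Path) -> dict:
    """Parse a simple KEY=VALUE .env file and export it into os.environ."""
    loaded = {}
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            # Skip blanks and comments; split on the first '=' only.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                loaded[key.strip()] = value.strip()
    os.environ.update(loaded)
    return loaded
```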
**IR Schema** (`backend/psl/ir.py`)
**LLM Adapter Layer** (`backend/psl/adapters/`)
**Rules System** (`backend/psl/rules/`)
**FastAPI Backend** (`backend/psl/api.py`)
**Frontend** (`frontend/src/App.tsx`)
1. Create rule class in `backend/psl/rules/`:
```python
from typing import List

from .base import Rule, LintError
from ..ir import IR


class MyNewRule(Rule):
    @property
    def name(self) -> str:
        return "my-rule-name"

    @property
    def description(self) -> str:
        return "What this rule checks for"

    def check(self, ir: IR) -> List[LintError]:
        errors: List[LintError] = []
        # Analyze ir and append LintError instances
        return errors
```
2. Register in `backend/psl/validator.py`:
```python
from .rules.my_module import MyNewRule


class Validator:
    def __init__(self):
        self.rules = [
            NoUndefinedComputedFieldsRule(),
            MyNewRule(),  # Add here
        ]
```
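Once registered, a rule only needs to return `LintError` instances from `check`; the validator aggregates findings across all rules. A self-contained sketch of that aggregation (the `FakeRule` class and `lint` method name are illustrative stand-ins, not PSL's actual API):

```python
from typing import List

class FakeRule:
    """Stand-in for a Rule subclass; returns a fixed finding for illustration."""
    name = "fake-rule"

    def check(self, ir) -> List[str]:
        return [f"{self.name}: example finding"]

class Validator:
    def __init__(self):
        # In PSL this list holds real Rule instances, registered as shown above.
        self.rules = [FakeRule()]

    def lint(self, ir) -> List[str]:
        # Run every registered rule and collect all findings.
        errors: List[str] = []
        for rule in self.rules:
            errors.extend(rule.check(ir))
        return errors

findings = Validator().lint(ir=None)
```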
When changing IR models in `backend/psl/ir.py`:
1. Update the Pydantic model
2. Update `SemanticParser.SYSTEM_PROMPT` in `backend/psl/parser.py`
3. Update dependent rules
4. Update frontend TypeScript types in `frontend/src/api/client.ts`
1. Edit `backend/psl/adapters/models.yaml`
2. Add model to appropriate provider section
3. Verify with `make list-models` or `make healthcheck`
```bash
# Verify the backend is up
curl http://localhost:8000/health

# Lint a prompt for semantic issues
curl -X POST http://localhost:8000/lint \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Calculate metrics without definitions", "model": "gpt-4"}'

# Execute a prompt against a model with supplied context
curl -X POST http://localhost:8000/execute \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Extract data", "context": "YAML data here", "model": "gpt-4"}'
```
The **Kubernetes Metrics Example** demonstrates PSL's value:
**Bad Prompt**: Asks for `pod_density_ratio`, `mesh_coherence_index` without definitions → LLM hallucinates
**Fixed Prompt**: Provides explicit formulas and fallback strategies → Prevents hallucinations
Find examples in `backend/psl/examples/bad/` and `backend/psl/examples/good/`.
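To make the contrast concrete, here is a hypothetical prompt pair in the spirit of those example directories (the exact wording of the shipped examples may differ):

```python
# A prompt that names computed metrics without defining them:
# the model has no formula to apply, so it tends to hallucinate values.
bad_prompt = "For each pod, report pod_density_ratio and mesh_coherence_index."

# The fixed prompt supplies an explicit formula and a fallback strategy,
# so the model has nothing left to invent.
fixed_prompt = """For each pod, report:
- pod_density_ratio: running pods on the node / the node's pod capacity
- mesh_coherence_index: if sidecar metrics are unavailable, output null
"""
```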