Expert assistant for DSPy framework projects with marimo notebook support. Handles LLM prompt optimization, model persistence, Azure OpenAI/OpenAI configuration, and marimo reactive patterns.
This skill provides guidance for DSPy framework projects that demonstrate automated prompt tuning through data-driven optimization. DSPy eliminates manual prompt engineering by using algorithms like MIPROv2 to optimize prompts based on training data.
1. **DSPy Module Development**: Create and modify DSPy Signatures and Modules following framework patterns
2. **marimo Notebook Support**: Handle reactive notebook patterns, UI elements, and button triggers correctly
3. **Model Persistence**: Implement versioned model saving/loading with timestamp tracking
4. **LM Configuration**: Set up Azure OpenAI or OpenAI providers with environment-based configuration
5. **Optimization Workflows**: Guide through before/after tuning demonstrations
When setting up a new DSPy project:
1. Install dependencies: `uv sync`
2. Create environment configuration: `cp .env.example .env`
3. Configure provider credentials in `.env` (Azure OpenAI or OpenAI)
4. Verify setup with quality checks: `uv run ruff check .` and `uv run mypy .`
Execute marimo notebooks from the project root or the notebook directory:
1. **Baseline**: Run unoptimized version to establish baseline performance
2. **Tuning**: Execute optimization (e.g., MIPROv2) on training data, save model to `artifact/`
3. **Evaluation**: Load optimized model and compare before/after results
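The tuning step needs a metric the optimizer can use to score candidate prompts against the training data. A minimal sketch of such a metric, assuming examples and predictions expose an `answer` attribute (the attribute name is illustrative, not taken from this project):

```python
def exact_match_metric(example, prediction, trace=None) -> bool:
    """Return True when the predicted answer matches the gold answer.

    DSPy optimizers such as MIPROv2 call the metric with the training
    example, the module's prediction, and an optional trace.
    """
    return example.answer.strip().lower() == prediction.answer.strip().lower()
```

Booleans work for simple pass/fail scoring; a metric can also return a float when partial credit is useful.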
Always follow this pattern when creating DSPy modules:
```python
import dspy


class MySignature(dspy.Signature):
    """Clear description of the task this signature performs."""

    input_field = dspy.InputField(desc="Description of input")
    output_field = dspy.OutputField(desc="Description of expected output")


class MyModule(dspy.Module):
    def __init__(self):
        super().__init__()
        self.predictor = dspy.Predict(MySignature)

    def forward(self, input_field: str) -> dspy.Prediction:
        return self.predictor(input_field=input_field)
```
Use `common/config.py` for unified provider setup:
```python
from common.config import configure_lm
configure_lm()
```
Provider is controlled by `PROVIDER_NAME` environment variable ("azure" or "openai").
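A minimal sketch of the provider-selection logic that `configure_lm` is expected to perform; everything except the `PROVIDER_NAME` variable name is an assumption, not the project's actual code:

```python
import os


def resolve_provider(default: str = "openai") -> str:
    """Read PROVIDER_NAME from the environment and validate it."""
    provider = os.environ.get("PROVIDER_NAME", default).strip().lower()
    if provider not in {"azure", "openai"}:
        raise ValueError(f'PROVIDER_NAME must be "azure" or "openai", got {provider!r}')
    return provider
```

Validating early gives a clear error message instead of an opaque failure deep inside the LM client.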
Use `common/model_saver.py` for saving/loading models:
```python
from common.model_saver import save_model_with_timestamp, load_latest_model
save_model_with_timestamp(module, artifact_dir, "chatbot", score=85.5)
module = load_latest_model(artifact_dir, "chatbot", ModuleClass)
```
Models are stored in `notebooks/{demo}/artifact/` directories, with filenames that include a timestamp and the optimization score.
marimo displays the last expression in a cell. Follow this pattern for conditional output:
```python
if condition:
    display = mo.md("content")
else:
    display = mo.md("")
display  # Last expression gets displayed
```
NEVER use `return` statements or leave the `if`/`else` as bare statements without assigning to a variable that is evaluated last; otherwise nothing is displayed.
Buttons require `mo.state()` triggers in a stable cell:
```python
@app.cell
def _(mo):
    get_trigger, set_trigger = mo.state(0)
    button = mo.ui.button(
        label="Execute",
        on_change=lambda _: set_trigger(lambda v: v + 1),
    )
    return get_trigger, button


@app.cell
def _(mo, get_trigger, other_dependencies):
    trigger_value = get_trigger()
    if trigger_value > 0:
        result = perform_action()
        display = mo.md(f"Result: {result}")
    else:
        display = mo.md("")
    display
```
The trigger state must be in a cell depending only on `mo` (stable) to maintain reactivity.
Use `__file__` for absolute paths:
```python
import os
notebook_dir = os.path.dirname(os.path.abspath(__file__))
artifact_dir = os.path.join(notebook_dir, "artifact")
```
For Azure OpenAI:
```env
PROVIDER_NAME=azure
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_API_VERSION=2025-04-01-preview
```
For OpenAI:
```env
PROVIDER_NAME=openai
OPENAI_API_KEY=your-api-key
```
Model configuration:
```env
SMART_MODEL=gpt-4.1
FAST_MODEL=gpt-4.1-nano
EMBEDDING_MODEL=text-embedding-3-small
```
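A hedged sketch of how these model names might be read with fallback defaults; the helper name and defaults are illustrative, not the project's actual code:

```python
import os


def model_from_env(var: str, default: str) -> str:
    """Read a model name from the environment, falling back to a default."""
    return os.environ.get(var, default).strip()


# Example usage:
# smart_model = model_from_env("SMART_MODEL", "gpt-4.1")
# fast_model = model_from_env("FAST_MODEL", "gpt-4.1-nano")
```

Keeping model names in the environment lets the same notebooks run against different deployments without code changes.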
Before committing changes:
1. **Lint check**: `uv run ruff check .`
2. **Type check**: `uv run mypy .` (excludes notebooks/)
3. **Test notebooks**: Run each notebook to verify functionality
**Adding a new DSPy module**: Create Signature with clear task description, implement Module with forward method, follow existing patterns in codebase.
**Creating optimization notebook**: Set up training data, configure optimizer (MIPROv2), define metric function, save optimized model with score.
**Debugging marimo UI issues**: Check that UI elements are last expression, verify button triggers use `mo.state()` pattern, ensure state cells depend only on `mo`.
**Switching LM providers**: Update `PROVIDER_NAME` in `.env`, verify corresponding credentials are set, test with simple query.