Create, test, evaluate, and deploy production-quality LLM applications using Microsoft's Prompt Flow framework. Build executable flows that link LLMs, prompts, Python code, and tools together with built-in tracing, evaluation, and deployment capabilities.
This skill guides you through building production-quality LLM applications using Prompt Flow. You'll create flows that link LLMs, prompts, Python code, and tools together, then iteratively develop, test, evaluate, and deploy them. Prompt Flow provides built-in tracing for LLM interactions, dataset-based evaluation, and streamlined deployment options.
Install the required packages:
```bash
pip install promptflow promptflow-tools
```
Create a new flow from a template. For a chatbot:
```bash
pf flow init --flow ./my_chatbot --type chat
```
This creates a `my_chatbot` folder with the necessary structure including `flow.dag.yaml` (flow definition), connection config files, and node implementations.
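A minimal `flow.dag.yaml` for a chat flow looks roughly like the following sketch; the node name, template file, and deployment name are illustrative and will vary with the template version:

```yaml
inputs:
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${chat.output}
    is_chat_output: true
nodes:
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    question: ${inputs.question}
  connection: open_ai_connection
  api: chat
```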
**For OpenAI:**
```bash
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection
```
**For Azure OpenAI:**
```bash
pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
The `flow.dag.yaml` file defines the flow's inputs, outputs, and the graph of nodes (LLM calls, prompt templates, and Python tools) that make up your application.
Run interactive testing (press Ctrl+C to exit):
```bash
pf flow test --flow ./my_chatbot --interactive
```
For single test runs with specific inputs:
```bash
pf flow test --flow ./my_chatbot --inputs question="What is prompt flow?"
```
Prompt Flow automatically traces LLM interactions. View traces in the UI:
```bash
pf flow test --flow ./my_chatbot --ui
```
This opens a trace UI in your browser showing each LLM call with its inputs, outputs, latency, and token usage.
Create an evaluation dataset (JSON or JSONL format), then run batch evaluation:
```bash
pf run create --flow ./my_chatbot --data ./test_data.jsonl --name my_eval_run
```
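Each line of the JSONL file is one test case whose keys match your flow's input names; the `question` and `ground_truth` fields below are illustrative. A quick stdlib sketch for generating the file:

```python
import json

# Illustrative test cases; keys must match your flow's input names.
cases = [
    {"question": "What is prompt flow?", "ground_truth": "An LLM app development framework."},
    {"question": "How do I test a flow?", "ground_truth": "Use `pf flow test`."},
]

# JSONL: one JSON object per line.
with open("test_data.jsonl", "w", encoding="utf-8") as f:
    for case in cases:
        f.write(json.dumps(case) + "\n")
```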
View evaluation results:
```bash
pf run show-details --name my_eval_run
pf run visualize --name my_eval_run
```
Based on evaluation results, refine your prompts, adjust node logic, or tune model parameters, then re-test after each change to measure improvement.
Create a Docker container for serving:
```bash
pf flow build --source ./my_chatbot --output ./deploy --format docker
```
This generates a Dockerfile, requirements.txt, and serving application.
**Option A: Docker container**
```bash
cd ./deploy
docker build -t my-chatbot .
docker run -p 8080:8080 my-chatbot
```
**Option B: Azure AI (recommended for collaboration)**
Submit the run to Azure Machine Learning (requires the Azure extra, installed via `pip install promptflow[azure]`):
```bash
pfazure run create --flow ./my_chatbot --data ./test_data.jsonl --stream
```
```
Then deploy the run as an online endpoint through Azure portal or CLI.
Create reusable Python tools for your flows:
```bash
pf tool init --package ./my_tools
```
Integrate evaluation into your pipeline:
```bash
pf run create --flow ./my_chatbot --data ./test_data.jsonl
pf run show-metrics --name <run_name>
```
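In CI you could capture the metrics output to a file and gate the pipeline on it. A stdlib sketch of that gate; the `metrics.json` file name and `accuracy` key are assumptions about your particular evaluation flow:

```python
import json

def gate_on_metrics(path: str, metric: str, threshold: float) -> bool:
    """Return True if the named metric in the JSON file meets the threshold."""
    with open(path, encoding="utf-8") as f:
        metrics = json.load(f)
    # Missing metrics count as 0.0, so an absent key fails the gate.
    return metrics.get(metric, 0.0) >= threshold

# Example: fail the build when accuracy drops below 0.8, e.g. after
# redirecting `pf run show-metrics --name my_eval_run` into metrics.json.
```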
For maximum flexibility, use Python functions directly:
```python
from promptflow import flow

@flow
def my_flow(question: str) -> str:
    # Your custom logic here, e.g. call an LLM and post-process its output
    answer = f"You asked: {question}"
    return answer
```
Use `.prompty` files for simplified prompt template management:
```yaml
---
name: QA Prompt
model: gpt-4
---
system:
You are a helpful assistant.
user:
{{question}}
```
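The `{{question}}` placeholder is a Jinja-style template variable that gets filled in at run time. Real `.prompty` rendering uses Jinja2; a stdlib sketch of the substitution idea, for illustration only:

```python
import re

def render(template: str, **values: str) -> str:
    """Replace {{name}} placeholders with the given values; leave unknowns as-is."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: values.get(m.group(1), m.group(0)),
        template,
    )

prompt = render("user:\n{{question}}", question="What is prompt flow?")
```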
| Command | Purpose |
|---------|---------|
| `pf flow init` | Create new flow from template |
| `pf flow test` | Test flow locally |
| `pf run create` | Execute batch run/evaluation |
| `pf run visualize` | View run results in UI |
| `pf connection create` | Configure API credentials |
| `pf flow build` | Build deployment artifacts |
| `pf flow serve` | Start local serving endpoint |
1. **Logging format**: Set via `PF_LOGGING_FORMAT` environment variable
2. **Serving engine**: Switch between `flask` and `fastapi` using `--engine` parameter
3. **Tracing**: Disable with `PF_DISABLE_TRACING=true` if not needed
4. **Interactive login**: Control Azure auth with `PF_NO_INTERACTIVE_LOGIN` variable
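These knobs are plain environment variables, so they can be set per-shell or in CI, for example (values illustrative):

```shell
# Disable tracing and skip interactive Azure login, e.g. in a CI job
export PF_DISABLE_TRACING=true
export PF_NO_INTERACTIVE_LOGIN=true
```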