A systematic methodology for building LLM applications with PocketFlow, a 100-line minimalist, graph-based workflow framework. Humans design the system architecture; AI agents handle the implementation.
Agentic Coding is a collaboration between human system design and agent implementation. PocketFlow provides core abstractions (Node, Flow, Shared Store, Batch, Async) to implement popular LLM design patterns (Agents, RAG, Map Reduce, Workflow, Multi-Agent).
1. **Start small and simple** - Begin with minimal viable solutions
2. **Design before implementing** - Document high-level design in `docs/design.md` before writing code
3. **Iterate frequently** - Ask humans for feedback and clarification throughout development
4. **Keep It User-Centric** - Explain problems from the user's perspective, not just features
5. **Fail Fast** - Leverage built-in retry mechanisms to quickly identify weak points
Clarify project requirements and evaluate if an AI system is appropriate.
**Understand AI system strengths and limitations:**
**Key Actions:**
Outline how your AI system orchestrates nodes at a high level.
**Design Process:**
1. **Identify applicable design patterns:**
- **Map Reduce**: Specify how to split (map) and combine (reduce)
- **Agent**: Specify inputs (context) and possible actions
- **RAG**: Specify what to embed; note offline (indexing) and online (retrieval) workflows
- **Workflow**: Chain multiple tasks into pipelines
2. **Create high-level node descriptions:**
- One-line description for each node
- Define inputs and outputs
- Identify dependencies
3. **Draw flow diagram in Mermaid:**
```mermaid
flowchart LR
    start[Start] --> batch[Batch]
    batch --> check[Check]
    check -->|OK| process
    check -->|Error| fix[Fix]
    fix --> check

    subgraph process[Process]
        step1[Step 1] --> step2[Step 2]
    end

    process --> endNode[End]
```
**IMPORTANT**: If humans can't specify the flow, AI agents can't automate it. Manually solve example inputs first to develop intuition.
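One way to build that intuition is to sanity-check the diagram as plain data before writing any framework code. A minimal sketch (the transition table and `walk` helper are illustrative, not part of PocketFlow):

```python
# Represent the flow diagram as a transition table: (node, action) -> next node.
transitions = {
    ("start", "default"): "batch",
    ("batch", "default"): "check",
    ("check", "OK"): "process",
    ("check", "Error"): "fix",
    ("fix", "default"): "check",
    ("process", "default"): "end",
}

def walk(start, actions):
    """Follow the diagram from `start`, consuming one action per step."""
    node, path = start, [start]
    for action in actions:
        node = transitions[(node, action)]
        path.append(node)
    return path

# A run that hits an error, gets fixed, then succeeds:
print(walk("start", ["default", "default", "Error", "default", "OK", "default"]))
# -> ['start', 'batch', 'check', 'fix', 'check', 'process', 'end']
```

Tracing a few example inputs through the table this way catches unreachable nodes and missing edges before implementation begins.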
Identify and implement external utility functions the AI system needs to interact with the real world.
**Utility Function Categories:**
**NOTE**: LLM-based tasks (summarizing, sentiment analysis) are NOT utility functions - they are core internal functions.
**Implementation Guidelines:**
1. **Create one file per API call** in `utils/` directory:
```python
# utils/call_llm.py
import os

from openai import OpenAI

def call_llm(prompt):
    # Read the key from the environment rather than hardcoding it.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    r = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

if __name__ == "__main__":
    prompt = "What is the meaning of life?"
    print(call_llm(prompt))
```
2. **Document each utility:**
- `name`: Function name and file path
- `input`: Input type/description
- `output`: Output type/description
- `necessity`: Why it's needed in the flow
3. **Write simple tests** for each utility function
4. **AVOID exception handling in utilities** - Let Node's built-in retry mechanism handle failures
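The rationale: if a utility raises, the calling Node retries it; catching exceptions inside the utility would hide failures from that mechanism. A rough sketch of the behavior in plain Python (names like `run_with_retries` and `max_retries` are illustrative, not PocketFlow's actual implementation):

```python
def run_with_retries(fn, max_retries=3):
    """Roughly what a Node does around exec(): retry on failure, fail fast at the limit."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # surface the failure instead of swallowing it in the utility

# A flaky utility that succeeds on the third call:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky))  # -> ok
```

Because the utility lets exceptions propagate, the retry loop sees every failure and the weak point shows up immediately in logs.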
**TIP**: Sometimes design utilities before flow, especially when interfacing with legacy systems. Start by designing the hardest utilities, then build the flow around them.
Design the shared store that nodes use to communicate.
**Design Principles:**
**Example Shared Store:**
```python
shared = {
    "user": {
        "id": "user123",
        "context": {
            "weather": {"temp": 72, "condition": "sunny"},
            "location": "San Francisco"
        }
    },
    "results": {}  # Empty dict to store outputs
}
```
```
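Nodes then communicate only through this structure: one node writes under an agreed key, and downstream nodes read it. A minimal sketch of that contract (no framework required; the function name is illustrative):

```python
# Same structure as the shared store example above.
shared = {
    "user": {
        "id": "user123",
        "context": {
            "weather": {"temp": 72, "condition": "sunny"},
            "location": "San Francisco"
        }
    },
    "results": {},
}

def summarize_weather(shared):
    """Read from the agreed keys and write output under 'results'."""
    w = shared["user"]["context"]["weather"]
    shared["results"]["weather_summary"] = f"{w['condition']}, {w['temp']}F"

summarize_weather(shared)
print(shared["results"]["weather_summary"])  # -> sunny, 72F
```

Keeping every read and write against documented keys is what makes the later node specifications checkable.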
Plan how each node reads/writes data and uses utility functions.
**For each Node, specify:**
- **type**: Regular, Batch, or Async
- **prep**: What it reads from the shared store
- **exec**: Which utility function it calls
- **post**: What it writes back to the shared store and which action it returns
**Example Node Specification:**
- `AnswerNode` (Regular): `prep` reads `shared["question"]`; `exec` calls `call_llm`; `post` writes `shared["answer"]` and returns `"default"`.
Implement nodes and flows based on the design.
**Project Structure:**
```
my_project/
├── main.py
├── nodes.py
├── flow.py
├── utils/
│ ├── __init__.py
│ ├── call_llm.py
│ └── search_web.py
├── requirements.txt
└── docs/
└── design.md
```
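Given the imports used in the code below, `requirements.txt` would at minimum list the two external dependencies (version pins omitted; add them as appropriate):

```text
pocketflow
openai
```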
**Implementation Guidelines:**
1. **docs/design.md**: High-level, no-code project documentation
2. **nodes.py**: All node definitions
```python
from pocketflow import Node
from utils.call_llm import call_llm

class GetQuestionNode(Node):
    def exec(self, _):
        user_question = input("Enter your question: ")
        return user_question

    def post(self, shared, prep_res, exec_res):
        shared["question"] = exec_res
        return "default"

class AnswerNode(Node):
    def prep(self, shared):
        return shared["question"]

    def exec(self, question):
        return call_llm(question)

    def post(self, shared, prep_res, exec_res):
        shared["answer"] = exec_res
```
3. **flow.py**: Flow creation functions
```python
from pocketflow import Flow
from nodes import GetQuestionNode, AnswerNode

def create_qa_flow():
    get_question_node = GetQuestionNode()
    answer_node = AnswerNode()
    get_question_node >> answer_node  # chain via the default action
    return Flow(start=get_question_node)
```
4. **main.py**: Entry point
```python
from flow import create_qa_flow

def main():
    shared = {
        "question": None,
        "answer": None
    }
    qa_flow = create_qa_flow()
    qa_flow.run(shared)
    print(f"Question: {shared['question']}")
    print(f"Answer: {shared['answer']}")

if __name__ == "__main__":
    main()
```
**Implementation Rules:**
Evaluate results and optimize performance.
**Evaluation Approach:**
1. **Use intuition first** - Human intuition is a good starting point
2. **Redesign flow if needed** (back to Step 3):
- Break down tasks further
- Introduce agentic decisions
- Better manage input contexts
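In practice, "better manage input contexts" often starts with chunking: instead of feeding one oversized prompt, split the input and process pieces. A naive fixed-size chunker as a starting point (production splitters usually respect sentence or token boundaries):

```python
def chunk_text(text, size=1000):
    """Split text into fixed-size pieces; the last chunk may be shorter."""
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = chunk_text("a" * 2500, size=1000)
print([len(c) for c in chunks])  # -> [1000, 1000, 500]
```

The chunks then feed naturally into a Batch node, with a reduce step combining the per-chunk outputs.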
**Micro-optimizations (if flow is solid):**
**EXPECT ITERATION**: You'll likely repeat Steps 3-6 hundreds of times.
Ensure robust error handling and quality control.
**Reliability Strategies:**
- **Node**: Handles simple LLM tasks with three methods: `prep`, `exec`, and `post`.
- **Flow**: Connects nodes through labeled edges (Actions). Use the `>>` operator to chain nodes.
- **Shared Store**: Dictionary-like structure enabling communication between nodes.
- **Batch**: Process multiple items in parallel (e.g., batch processing documents).
- **Async**: Handle asynchronous tasks that require waiting (e.g., external API calls).
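Stripped of the framework, the Batch abstraction paired with Map Reduce is just a map over items followed by a combine step. A sketch of the bare pattern (the per-item function here is a stand-in for an LLM call such as per-document summarization):

```python
def map_reduce(items, per_item, combine):
    """Map: process each item independently. Reduce: combine the partial results."""
    return combine([per_item(it) for it in items])

# Stand-in for per-document processing and final aggregation:
docs = ["alpha alpha", "beta", "gamma gamma gamma"]
total_words = map_reduce(docs, per_item=lambda d: len(d.split()), combine=sum)
print(total_words)  # -> 6
```

The design step for a Map Reduce pattern amounts to deciding what `per_item` and `combine` should be before any node is written.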
1. **Fail Fast**: Don't use try-except in utility functions called from nodes - let Node retry mechanism handle failures
2. **One Utility Per File**: Dedicate one Python file per API call, each with a `__main__` test block
3. **High-Level Design First**: Document in `docs/design.md` before coding
4. **Avoid Over-Engineering**: Only add features that are directly requested or clearly necessary
5. **Log Everything**: Add comprehensive logging to facilitate debugging
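For the "log everything" rule, even a thin wrapper around each utility pays off when debugging multi-node flows. A sketch using the standard `logging` module (the logger name and the injected stub are illustrative; the real `call_llm` would be passed in or imported):

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(message)s")
logger = logging.getLogger("utils.call_llm")

def call_llm_logged(prompt, call_llm=lambda p: "stub answer"):
    """Log the prompt and response around the underlying call (stubbed here)."""
    logger.info("prompt: %r", prompt[:200])
    response = call_llm(prompt)
    logger.info("response: %r", response[:200])
    return response

print(call_llm_logged("What is the meaning of life?"))
```

Truncating logged payloads keeps the logs readable while still showing which node sent what to the LLM.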
**Good For:**
**Not Good For:**