Comprehensive guide to building an AI-powered marketing automation platform with natural-language campaign management, using LangGraph multi-agent orchestration, Context Engineering, and a modern full-stack architecture.
This skill guides you through building a conversational marketing automation platform where non-technical users can create, manage, and optimize campaigns using natural language—"Claude Code for Marketing."
Before implementing features:
1. **Environment Setup**: Verify Python 3.9+ and Node.js 18+ are installed
2. **Dependencies**: Run `pip install -r requirements.txt` and `npm install`
3. **LangGraph**: Confirm LangGraph and LangChain properly installed
4. **API Keys**: Ensure `.env` contains OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
5. **Database**: Confirm PostgreSQL/SQLite running for development
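The checklist above can be condensed into a quick preflight script. This is a minimal sketch; the exact environment variable names checked here are assumptions based on the list above:

```python
import os
import shutil
import sys


def check_environment() -> list:
    """Return a list of setup problems; an empty list means ready to go."""
    problems = []
    if sys.version_info < (3, 9):
        problems.append("Python 3.9+ required")
    if shutil.which("node") is None:
        problems.append("Node.js not found on PATH")
    # Assumed key names -- adjust to match your .env
    for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
        if not os.environ.get(key):
            problems.append(f"Missing env var: {key}")
    return problems


issues = check_environment()
```

Run this before starting the dev servers; any returned string is a blocker.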
```
src/
├── agents/ # Python agent implementations
├── workflows/ # LangGraph workflow definitions
├── app/ # Next.js frontend
├── components/ # React components
├── lib/ # Utilities and integrations
├── db/ # Database schemas
└── types/ # TypeScript/Python type definitions
```
```python
from typing import Any, Dict, List
from pydantic import BaseModel, Field
import logging


class AgentInput(BaseModel):
    """Input schema for an agent task"""
    task: str = Field(description="Task description")
    context: Dict[str, Any] = Field(default_factory=dict)


class MarketingAgent:
    """Base agent implementation"""

    def __init__(self, name: str):
        self.name = name
        self.logger = logging.getLogger(name)
        self.tools = self._setup_tools()

    async def process_task(self, input: AgentInput) -> Dict[str, Any]:
        """Main agent entry point"""
        try:
            result = await self._run(input)  # subclass-specific logic
            return {"status": "success", "result": result}
        except Exception as e:
            self.logger.error(f"Error in {self.name}: {e}")
            return {"status": "error", "message": str(e)}

    async def _run(self, input: AgentInput) -> Any:
        """Override in subclasses with agent-specific logic"""
        raise NotImplementedError

    def _setup_tools(self) -> List:
        """Define agent-specific tools (use langchain's @tool on standalone
        functions, not on bound methods)"""
        return []
```
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator


class WorkflowState(TypedDict):
    messages: Annotated[list, operator.add]  # reducer merges list updates
    current_agent: str
    task_status: str
    results: dict


# supervisor_agent, specialist_agent, route_decision, and checkpointer
# are defined elsewhere in the project
workflow = StateGraph(WorkflowState)
workflow.add_node("supervisor", supervisor_agent)
workflow.add_node("specialist", specialist_agent)
workflow.set_entry_point("supervisor")
workflow.add_conditional_edges(
    "supervisor",
    route_decision,
    {
        "continue": "specialist",
        "complete": END,
    },
)

app = workflow.compile(checkpointer=checkpointer)
```
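The `route_decision` callable passed to `add_conditional_edges` is user-supplied: it inspects the state and returns one of the edge labels. A minimal sketch, assuming the `task_status` field shown in the state schema:

```python
def route_decision(state: dict) -> str:
    """Map workflow state to an edge label for add_conditional_edges."""
    # "complete" and "continue" must match the keys in the edge mapping
    if state.get("task_status") == "completed":
        return "complete"
    return "continue"


finished = route_decision({"task_status": "completed"})
pending = route_decision({"task_status": "pending"})
```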
```python
from typing import TypedDict, Annotated, Sequence, Literal, Any, Dict, List
from langchain_core.messages import BaseMessage
from datetime import datetime
import operator


class AgentState(TypedDict):
    # Message history
    messages: Annotated[Sequence[BaseMessage], operator.add]
    # Workflow control
    current_agent: str
    next_agent: str
    # Task tracking
    task_id: str
    task_status: Literal["pending", "in_progress", "completed", "failed"]
    # Results
    results: Dict[str, Any]
    errors: List[str]
    # Metadata
    user_id: str
    session_id: str
    timestamp: datetime
```
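A state of this shape has to be seeded when a run starts. A hypothetical factory (the function name and defaults are illustrative, not part of any library API):

```python
import uuid
from datetime import datetime, timezone


def initial_state(user_id: str) -> dict:
    """Build a fresh AgentState-shaped dict for a new workflow run."""
    return {
        "messages": [],
        "current_agent": "supervisor",  # assumed entry-point node name
        "next_agent": "",
        "task_id": str(uuid.uuid4()),
        "task_status": "pending",
        "results": {},
        "errors": [],
        "user_id": user_id,
        "session_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc),
    }


state = initial_state("user-123")
```

Note the `timestamp` is a `datetime`, which some checkpointers cannot serialize; store an ISO string instead if you hit serialization errors.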
```python
from functools import lru_cache
import hashlib
import json


def hash_query(query: dict) -> str:
    """Create a stable cache key from a query dict"""
    return hashlib.md5(json.dumps(query, sort_keys=True).encode()).hexdigest()


@lru_cache(maxsize=100)
def get_cached_response(query_hash: str):
    """Cache frequently requested data, keyed by hash_query output"""
    ...  # fetch and return the expensive result here
```
```python
import json


async def stream_agent_response(agent, input_data):
    """Stream responses as SSE frames for better UX"""
    async for chunk in agent.astream(input_data):
        yield f"data: {json.dumps(chunk)}\n\n"
```
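The SSE framing can be exercised without a live agent by driving the same generator pattern with a stub. A sketch, where `fake_agent_stream` stands in for a real compiled workflow's `astream`:

```python
import asyncio
import json


async def fake_agent_stream(chunks):
    """Stand-in for agent.astream: yield chunks as SSE 'data:' frames."""
    for chunk in chunks:
        yield f"data: {json.dumps(chunk)}\n\n"


async def collect(gen):
    """Drain an async generator into a list (for testing/demo only)."""
    return [frame async for frame in gen]


frames = asyncio.run(
    collect(fake_agent_stream([{"token": "Hello"}, {"token": "world"}]))
)
```

Each frame must end with a blank line (`\n\n`); that is what delimits SSE events for the browser's `EventSource`.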
1. **State Updates**: Must use reducers (e.g., `operator.add`) for list fields in state
2. **Checkpointing**: Requires fully serializable state—avoid complex objects
3. **Async Execution**: Node functions may be sync or async, but use async consistently when you need streaming or concurrent I/O
4. **Graph Compilation**: Happens once at startup—plan topology accordingly
5. **Human-in-the-Loop**: Requires `interrupt_before` configuration
6. **Token Limits**: Monitor token usage, implement streaming for long outputs
7. **API Rate Limits**: Use exponential backoff for retries
8. **Memory Management**: Clear agent memory periodically to prevent context overflow
9. **Tool Timeouts**: Set appropriate timeouts for external API calls
10. **Error Propagation**: Catch and handle errors at agent boundaries
11. **LangSmith**: Requires specific environment variables for tracing
12. **Streaming**: Use Server-Sent Events (SSE) for real-time updates
13. **CORS**: Configure properly for cross-origin API requests
14. **Authentication**: Integrate Supabase Auth carefully—validate tokens server-side
15. **Database Connections**: Use connection pooling to avoid exhaustion
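Constraint 7 (exponential backoff for rate limits) can be implemented as a generic retry wrapper. A minimal sketch; the parameter names and jitter range are illustrative:

```python
import random
import time


def with_backoff(fn, max_retries: int = 5, base_delay: float = 0.5):
    """Call fn, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            # delay doubles each attempt; jitter avoids thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In production you would catch the provider's specific rate-limit exception rather than bare `Exception`.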
**Problem**: Railway services maintain their initial configuration type. A service configured for Next.js will NOT properly run Python apps even if you push Python code.
**Solution**:
**app.py**:
```python
from flask import Flask, jsonify
import os

app = Flask(__name__)


@app.route('/')
def health():
    return jsonify({"status": "healthy"})


if __name__ == '__main__':
    port = int(os.environ.get('PORT', 8080))
    app.run(host='0.0.0.0', port=port)
```
**requirements.txt**:
```
flask==3.0.0
gunicorn==21.2.0
```
**Procfile**:
```
web: gunicorn app:app --bind 0.0.0.0:$PORT
```
**runtime.txt** (use only major.minor):
```
python-3.11
```
```bash
# Commit and push
git add -A
git commit -m "Your change description"
git push origin main

# Deploy to Railway
railway link -p <project-id>
railway up --service <python-service-name>

# Verify
railway logs --service <python-service-name>
curl https://<service-url>.up.railway.app
```
1. Check environment variables: `railway variables`
2. Verify correct service: `railway status`
3. View logs: `railway logs`
4. Ensure health check endpoint exists at `/`
5. **Never deploy Python to Next.js-configured services**
```bash
# Python checks
python -m pytest tests/
python -m mypy src/

# Frontend checks
npm run lint
npm run typecheck
npm run build

# Integration and observability
python tests/integration/test_workflows.py
python scripts/validate_langsmith.py
```
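The unit-test layer can assert the agent result contract (`{"status": ..., ...}`) directly. A hypothetical pytest-style example; the `handle` helper is illustrative, not part of the codebase:

```python
def handle(task_fn):
    """Mimic the agent error contract: success or error envelope."""
    try:
        return {"status": "success", "result": task_fn()}
    except Exception as e:
        return {"status": "error", "message": str(e)}


def test_success_envelope():
    assert handle(lambda: 42) == {"status": "success", "result": 42}


def test_error_envelope():
    out = handle(lambda: 1 / 0)
    assert out["status"] == "error"
    assert "division" in out["message"]
```

Testing against the envelope shape (rather than internal agent details) keeps tests stable as agent implementations evolve.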
1. Create INITIAL.md documenting requirements
2. Generate PRP using Context Engineering approach
3. Implement agents following established patterns
4. Create LangGraph workflow connecting agents
5. Add frontend integration (Next.js)
6. Write comprehensive tests (unit + integration)
7. Deploy with monitoring enabled (LangSmith)
1. Enable LangSmith tracing for the workflow
2. Add debug logging to agent nodes
3. Use LangGraph visualizer to inspect graph topology
4. Check state at each step using breakpoints
5. Validate tool inputs/outputs with type checking
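Step 2 (debug logging in agent nodes) can be wired up with a per-node logger. A sketch; the logger name `workflow.supervisor` is an assumption matching the node names used above:

```python
import io
import logging


def make_node_logger(name: str, stream=None) -> logging.Logger:
    """Attach a DEBUG-level handler so node state transitions are visible."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(name)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger


# Capture to a buffer here for demonstration; use the default stderr in practice
buf = io.StringIO()
log = make_node_logger("workflow.supervisor", buf)
log.debug("entering node with status=%s", "in_progress")
```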