Persistent memory and cross-session learning for AI coding assistants. Cloud-based context management via MCP protocol with semantic search, dependency analysis, and project-wide context.
Persistent memory and cross-session learning for AI coding assistants. Cloud-based context management via MCP protocol with semantic search, dependency analysis, and project-wide context.
ContextStream v0.4.x provides ~11 consolidated domain tools for intelligent context management with ~75% token reduction vs previous versions. This skill enables AI assistants to maintain persistent memory, perform semantic code search, track dependencies, and learn from past sessions.
**First message of any session:**
1. Call `session_init(folder_path="<cwd>", context_hint="<user_message>")`
2. Immediately call `context_smart(user_message="<user_message>", format="minified", max_tokens=400)`
**Every subsequent message:**
If Context Pack is enabled, use enhanced queries:
```
context_smart(..., mode="pack", distill=true)
```
If unavailable or disabled, omit `mode` parameter - the API will fall back automatically.
**Primary tool for code/file discovery:**
```
search(mode="semantic", query="authentication logic", limit=3)
```
**Modes:**
**CRITICAL:** Before using local Search/Glob/Grep/Read/Explore tools, ALWAYS call `search(mode="hybrid")` first. Only use local tools if ContextStream returns 0 results.
**Capture decisions, recall context, and manage lessons:**
```
session(action="capture", event_type="decision", title="Migration to TypeScript", content="...")
session(action="get_lessons", query="database migrations")
session(action="capture_lesson", title="API timeout issue", trigger="...", impact="...", prevention="...")
session(action="smart_search", query="previous authentication approaches")
session(action="recall", query="login flow decision")
```
**Actions:** capture, capture_lesson, get_lessons, recall, remember, user_context, summary, compress, delta, smart_search
**CRUD operations for events, nodes, and timeline:**
```
memory(action="list_events", limit=10)
memory(action="search", query="error handling")
memory(action="timeline", event_type="decision")
memory(action="create_event", ...)
memory(action="update_event", event_id="...", ...)
memory(action="delete_event", event_id="...")
```
**Dependency analysis and impact tracking:**
```
graph(action="dependencies", file_path="src/auth.ts")
graph(action="impact", symbol="authenticateUser")
graph(action="call_path", from="login", to="database")
graph(action="related", file_path="src/api.ts")
graph(action="ingest")
```
**Project-level operations and indexing:**
```
project(action="index_status")
project(action="ingest_local", path="<cwd>")
project(action="overview")
project(action="files", pattern="*.ts")
project(action="statistics")
```
**Actions:** list, get, create, update, index, overview, statistics, files, index_status, ingest_local
**Workspace management and association:**
```
workspace(action="list")
workspace(action="get", workspace_id="...")
workspace(action="associate", project_id="...", workspace_id="...")
```
**GitHub and Slack integrations:**
```
integration(provider="github", action="search", query="authentication")
integration(provider="github", action="repo_info", repo="owner/name")
```
**Get documentation and version info:**
```
help(action="tools")
help(action="auth")
help(action="version")
help(action="editor_rules")
```
When user requests a plan or implementation roadmap:
1. **Create the plan:**
```
session(action="capture_plan",
title="API Refactoring Plan",
description="Modernize REST API to GraphQL",
goals=["Improve performance", "Better type safety"],
steps=[
{id: "1", title: "Audit existing endpoints", order: 1},
{id: "2", title: "Design GraphQL schema", order: 2},
{id: "3", title: "Implement resolvers", order: 3}
])
```
2. **Create tasks for the plan:**
```
memory(action="create_task",
title="Audit authentication endpoints",
plan_id="<plan_id_from_response>",
priority="high",
description="Document all /auth/* endpoints")
```
```
session(action="list_plans")
session(action="get_plan", plan_id="<uuid>", include_tasks=true)
memory(action="list_tasks", plan_id="<uuid>")
memory(action="list_tasks")
memory(action="update_task",
task_id="<uuid>",
task_status="in_progress") # pending|in_progress|completed|blocked
memory(action="update_task", task_id="<uuid>", plan_id="<plan_uuid>")
memory(action="update_task", task_id="<uuid>", plan_id=null)
memory(action="delete_task", task_id="<uuid>")
memory(action="delete_event", event_id="<plan_uuid>")
```
1. **First message:** Call `session_init` with context_hint, then `context_smart` before responding
2. **Every message:** Call `context_smart` BEFORE any response
3. **After work:** Capture decisions with `session(action="capture")`
4. **On mistakes:** Immediately capture lessons with `session(action="capture_lesson")`
5. **Before risky work:** Call `session(action="get_lessons")`
6. **For discovery:** Use `search(mode="hybrid")` or `session(action="smart_search")`
1. **Never** use local Search/Glob/Grep/Read before checking ContextStream
2. **Never** skip `context_smart` on any message
3. **Never** ignore [INGEST_RECOMMENDED] notices
4. **Never** use local tools if ContextStream returns relevant results
**[INGEST_RECOMMENDED]:** Ask user if they want to enable semantic code search. Explain benefits (AI-powered search, dependency analysis, better context) and time required. If user agrees, run `project(action="ingest_local")`.
**[RULES_NOTICE]:** Use `generate_rules()` to update rules.
Before searching files/code:
1. Check `project(action="index_status")`
2. If missing/stale, run `project(action="ingest_local", path="<cwd>")` or `project(action="index")`
3. Use `graph(action="ingest")` if dependency analysis is needed
1. Check `project(action="index_status")`
2. Use `search(mode="hybrid")` or `session(action="smart_search")`
3. Use `graph(action="dependencies")` or `graph(action="impact")` for code analysis
4. Only if ContextStream returns 0 results → use local tools
5. If ContextStream returns results → do NOT use local Search/Explore/Read
The exact tool names depend on your MCP client configuration. Claude Code typically uses:
```
mcp__<server>__<tool>
```
Where `<server>` matches your MCP config (commonly `contextstream`).
If a tool call fails with "No such tool available", refresh rules and match the tool list in your environment.
Full documentation: https://contextstream.io/docs/mcp/tools
Rules Version: 0.4.27
Leave a review
No reviews yet. Be the first to review this skill!
# Download SKILL.md from killerskills.ai/api/skills/contextstream-mcp-integration-24dfry/raw