Cursorrules Database - Code Search & Analysis
A comprehensive development skill for building robust code search and analysis applications. This skill provides battle-tested guidelines for Go and Python development, with special focus on Temporal workflows, test-driven development, and maintaining code quality through incremental, deterministic changes.
What This Skill Does
This skill guides AI assistants to follow professional software engineering practices when working on code search, repository analysis, and workflow automation projects. It ensures:
- Small, incremental commits that always compile and pass tests
- Deterministic, testable code following TDD principles
- Proper handling of Temporal workflows (both Go and Python)
- Clear documentation with visual diagrams
- Structured problem-solving with maximum 3 failed attempts before reassessment
- Consistent code organization and architecture patterns

Instructions
Development Philosophy
**Core Principles:**
- Prefer small, incremental, compiling commits over large "big bang" changes
- Study and learn from existing code before writing new functionality
- Choose pragmatic, clear, and maintainable code over clever abstractions
- Each function or struct should have one clear responsibility
- Avoid premature abstractions or unnecessary complexity
- When in doubt, choose the simplest solution that clearly conveys intent

Implementation Workflow
**For Every Feature or Change:**
1. Follow test-driven development: red → green → refactor
2. Break complex work into documented stages in `IMPLEMENTATION_PLAN.md`
3. Ensure every commit:
- Compiles successfully
- Passes all tests
- Includes new tests for new functionality
- Conforms to project formatting/linting standards
4. Write commit messages that explain **why** changes are made, not just what
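The red → green → refactor loop can be condensed into a Go sketch. The `Tokenize` function and its expected values are hypothetical, not part of this project; in a real repository the assertions would live in a `_test.go` file run by `go test`, and the failing test would be written first.

```go
package main

import (
	"fmt"
	"strings"
)

// Tokenize splits a search query into lowercase terms.
// Green step: the simplest implementation that passes the test below.
// Refactor step: the loop was later collapsed into Fields + ToLower.
func Tokenize(query string) []string {
	return strings.Fields(strings.ToLower(query))
}

func main() {
	// Red step (inlined here so the sketch is self-contained):
	// this assertion existed, and failed, before Tokenize was written.
	got := Tokenize("  Temporal Workflow SEARCH ")
	want := []string{"temporal", "workflow", "search"}
	if len(got) != len(want) {
		panic(fmt.Sprintf("got %v, want %v", got, want))
	}
	for i := range want {
		if got[i] != want[i] {
			panic(fmt.Sprintf("got %v, want %v", got, want))
		}
	}
	fmt.Println("ok")
}
```

Committing at the green step keeps every commit compiling and passing, per the rules above.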
Go File Organization
**Structure Go files from general → specific:**
1. Package declaration
2. Imports
3. File-level constants (only if broadly used)
4. Type declarations (interfaces before structs)
5. Constructor functions immediately after their type
6. Methods on a type, grouped and ordered logically
7. Standalone helper functions at the bottom
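Under these conventions, a single Go file might be laid out as below. The `Searcher`/`Index` names are illustrative assumptions, not part of this project; the numbered comments map to the ordering above.

```go
package main // 1. package declaration

import ( // 2. imports
	"errors"
	"fmt"
	"strings"
)

// 3. file-level constant, only because it is used across the file
const defaultLimit = 10

// 4. type declarations: interface before the struct that satisfies it
type Searcher interface {
	Search(term string) ([]string, error)
}

type Index struct {
	docs  []string
	limit int
}

var _ Searcher = (*Index)(nil) // compile-time check that *Index satisfies Searcher

// 5. constructor immediately after its type
func NewIndex(docs []string) *Index {
	return &Index{docs: docs, limit: defaultLimit}
}

// 6. methods ordered by conceptual flow: configuration → execute
func (i *Index) WithLimit(n int) *Index {
	i.limit = n
	return i
}

func (i *Index) Search(term string) ([]string, error) {
	if term == "" {
		return nil, errors.New("search: empty term") // fail fast with context
	}
	var hits []string
	for _, d := range i.docs {
		if strings.Contains(d, term) && len(hits) < i.limit {
			hits = append(hits, d)
		}
	}
	return hits, nil
}

// 7. standalone helper at the bottom
func describe(hits []string) string {
	return fmt.Sprintf("%d hit(s)", len(hits))
}

func main() {
	idx := NewIndex([]string{"go workflow", "python workflow", "readme"})
	hits, _ := idx.WithLimit(2).Search("workflow")
	fmt.Println(describe(hits)) // 2 hit(s)
}
```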
**Key Rules:**
- Keep all methods for a struct grouped together
- Order methods by conceptual flow (constructor → configuration → execute → cleanup), NOT alphabetically
- Split files by domain if they become "junk drawers" (e.g., `model.go`, `service.go`, `handlers.go`)
- Place new types with their constructors and methods together

Temporal Workflows - Go
**When working with `go.temporal.io/sdk/workflow`:**
**Avoid Non-Deterministic Patterns:**
- ❌ Do NOT use `math/rand` for decision-making (use Activities or deterministic alternatives)
- ❌ Do NOT call external systems directly (HTTP, DB, file I/O) — use Activities
- ❌ Do NOT use `time.Now`, `time.Sleep` — use `workflow.Now()`, `workflow.Sleep()`
- ❌ Do NOT use native goroutines, channels, `select` — use `workflow.Go()`, `workflow.NewChannel()`, `workflow.NewSelector()`
- ❌ Do NOT rely on map iteration order with `range` if ordering matters

**Testing & Versioning:**
- Add Replay tests using `worker.NewWorkflowReplayer()` with Event History JSON
- Store Event History test artifacts under the `tests/` directory
- Treat failing replay tests as hard stops
- Use `workflow.GetVersion` for versioning changes to long-lived workflows

Temporal Workflows - Python
**When working with `temporalio.workflow`:**
**Use Deterministic Primitives:**
- ✅ Use `workflow.random()` instead of `random.*`
- ✅ Use `workflow.now()`, `workflow.time()` instead of `datetime.now()`, `time.time()`
- ✅ Use `workflow.uuid4()` instead of `uuid.uuid4()`
- ✅ Use `workflow.logger` for logging to avoid duplicate replay logs

**Testing:**
- Add replay tests using `temporalio.worker.Replayer` and `WorkflowHistory.from_json(...)`
- Use `WorkflowEnvironment` with `start_time_skipping()` for long-running behavior tests
- Keep Event History JSON files in the `tests/` directory

Problem-Solving Guidelines
**When facing challenges:**
1. Maximum **3 failed attempts** before reassessing approach
2. Document what failed, why, and alternatives explored
3. When multiple approaches exist, prioritize:
- Testability
- Readability
- Consistency
- Simplicity
- Reversibility
Testing & Quality Gates
**Definition of Done:**
- ✅ All tests passing
- ✅ Code linted and formatted
- ✅ No TODOs without issue references
- ✅ Implementation matches documented plan

**Test Guidelines:**
- Test behavior, not implementation details
- Keep tests deterministic and isolated
- One clear assertion per test where possible
- Follow existing testing utilities and conventions
- Never disable tests — fix or adapt them

Documentation
**Visual Documentation:**
- Use Mermaid diagrams for complex flows, architecture, or relationships
- Prefer diagrams over lengthy text when they add clarity
- Common types: flowcharts, sequence diagrams, class diagrams, state diagrams
- Keep diagrams simple and focused on one concept
- Always accompany diagrams with brief explanatory text

Pull Request Descriptions
**When generating PR descriptions:**
1. Locate and read `PR_DESCRIPTION_GUIDE.md` at workspace root
2. Use template from `PULL_REQUEST_TEMPLATE.md` at workspace root
3. Output **only** the completed description in a single Markdown code block
4. Include: summary, context, changes, testing, risks
5. **NEVER** create, modify, or update the guide/template files
6. **NEVER** create or push actual pull requests
Atlassian Integration
**For Jira & Confluence links:**
- Always use Atlassian MCP tools to access content
- Treat as read-only (never create, edit, or delete)
- Use the Jira MCP tool for issue/epic/project/sprint/filter URLs
- Use the Confluence MCP tool for page/space/attachment URLs
- If unavailable via MCP, state it's unavailable (don't guess)

Development Safety
**Never:**
- ❌ Use `--no-verify` to bypass hooks
- ❌ Commit broken or uncompiled code
- ❌ Skip documenting implementation stages
- ❌ Introduce new tools without justification

**Always:**
- ✅ Learn from similar patterns in the codebase
- ✅ Keep dependencies explicit
- ✅ Fail fast with descriptive, contextual errors
- ✅ Use composition and interfaces over inheritance

Examples
Example 1: Adding a New Search Feature
```
User: Add CSV search functionality to the repository analyzer