Engineering Integrity Rules
Comprehensive development guidelines to prevent overhyping, ensure honest engineering, and maintain technical integrity throughout the software development lifecycle.
Core Principle
**"Build impressive systems, describe them accurately, deploy them reliably"**
Focus on honest engineering that prioritizes verifiable capabilities over marketing hype.
Instructions
1. Reality Check Protocol
Before claiming any advanced capability, verify ALL of the following:
- Identify the exact code that implements the claimed feature
- Confirm the feature is used in production (not just in fallback/hardcoded values)
- Demonstrate it working with real data, not conceptual examples
- Ensure a senior engineer would agree with the technical claims
- Verify all dependencies and integrations function as described

2. Honest Language Guidelines
**FORBIDDEN terms without actual implementation:**
- "Revolutionary AGI"
- "Consciousness-driven"
- "World's first..."
- Specific percentages (e.g., "97.3% interpretability") unless actually measured
- "Zero hallucination guarantee"
- "Quantum-inspired" (unless actually using quantum computing)

**PREFERRED accurate terms:**
- "Sophisticated parameter system"
- "Physics-informed constraints"
- "Well-engineered architecture"
- "Production-ready implementation"
- "Competitive performance"

3. Integration Testing Requirements
Before any code submission or deployment:
- Run end-to-end tests with actual production data
- Verify sophisticated systems are being called (not bypassed)
- Confirm fallback values aren't masking broken functionality
- Validate performance claims with real benchmarks
- Test edge cases, error conditions, and failure modes
- Check graceful degradation scenarios

4. Documentation Standards
In all README files, comments, and documentation:
- Clearly separate "What we built" from "What we're actively using"
- Explain why simplified versions are deployed instead of sophisticated ones
- Acknowledge limitations, trade-offs, and known issues
- Provide concrete evidence for performance claims
- Include failure modes and error handling documentation
- Use accurate technical terminology

5. Code Review Self-Assessment
Before committing code, ask:
- Does this code actually do what the comments claim?
- Do variable names accurately describe the functionality (rather than mislead)?
- Is the system architecture diagram accurate?
- Would another developer understand what this actually does?
- Are performance characteristics accurately documented?

6. Feature Development Workflow
Follow this sequence for all features:
1. Build sophisticated backend system with proper architecture
2. Create simple, reliable frontend interface
3. Test integration thoroughly end-to-end
4. Document the gap between capability and current deployment
5. Justify engineering decisions (why simplified version is used)
6. Plan migration path to full capability if applicable
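One way to make steps 2 through 5 concrete in code is a simple interface that routes to whichever backend is actually deployed, with the capability gap stated explicitly rather than hidden. This is a minimal sketch under assumed names (`rank`, `simple_rank`, `sophisticated_rank`, `USE_SOPHISTICATED_RANKER` are all hypothetical), not a prescribed implementation:

```python
# Hypothetical sketch: a simple, reliable interface in front of a more
# sophisticated backend, with the deployment gap documented in code.

USE_SOPHISTICATED_RANKER = False  # step 5: the simplified version is deployed today


def simple_rank(items: list[str]) -> list[str]:
    """Baseline: deterministic alphabetical ranking. Reliable and well-tested."""
    return sorted(items)


def sophisticated_rank(items: list[str]) -> list[str]:
    """More sophisticated ranking (built, but NOT yet deployed: it still needs
    end-to-end validation on production data before the flag above is flipped)."""
    # Placeholder logic; a real implementation would live here.
    return sorted(items, key=len)


def rank(items: list[str]) -> list[str]:
    """Public interface (step 2). Routes to whichever backend is deployed, so
    documentation can state accurately which path production actually uses."""
    if USE_SOPHISTICATED_RANKER:
        return sophisticated_rank(items)
    return simple_rank(items)


print(rank(["gamma", "alpha", "beta"]))  # simple path today
```

The point of the flag is step 4: anyone reading the code (or the docs generated from it) can see exactly which capability is live and which is still a migration target.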
7. Pre-Deployment Checklist
Before any production deployment:
- Run comprehensive automated test suite
- Verify all claims in user-facing documentation
- Confirm hardcoded values are intentional design decisions
- Test system works without external dependencies (or gracefully handles failures)
- Validate runtime and performance estimates
- Check output formats match requirements exactly
- Verify all dependencies are properly declared and versioned

8. Technical Communication Guidelines
When describing systems:
**Lead with:** What actually works in production today
**Acknowledge:** Sophisticated components not yet deployed
**Explain:** Engineering decisions and trade-offs honestly
**Provide:** Evidence and benchmarks for all claims
**Admit:** Limitations and areas for improvement
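Applied at the code level, the same lead/acknowledge/admit ordering can shape a docstring. The function below and its figures are hypothetical, shown only to illustrate the structure:

```python
def estimate_demand(history: list[float]) -> float:
    """Estimate next-period demand from a sales history.

    What works today: a 3-period moving average, exercised end to end
    in production (figures here are illustrative, not real claims).
    Not yet deployed: a seasonal model exists in the repo but is not
    called from this path.
    Trade-off: the moving average is less accurate on seasonal spikes
    but has no external dependencies and degrades predictably.
    Limitation: returns 0.0 for an empty history rather than raising.
    """
    if not history:
        return 0.0
    window = history[-3:]  # last three periods
    return sum(window) / len(window)


print(estimate_demand([10.0, 12.0, 14.0, 16.0]))  # average of the last 3
```

Note how the docstring separates "what we built" from "what we're actively using", exactly as the Documentation Standards section requires.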
9. Marketing vs Engineering Separation
**External Marketing Claims should focus on:**
- What the system actually does (verifiable)
- Measurable performance improvements
- Competitive advantages you can demonstrate
- Real-world applications and validated results

**Internal Engineering Documentation should focus on:**
- Actual implementation details and architecture
- Design decisions and trade-offs made
- Performance optimization opportunities
- Technical debt and improvement roadmap

10. File Protection Rules
- Never modify `.env` files (they contain user-specific secrets)
- Use `.env.example` for templates and documentation
- Always check whether changes could overwrite user configurations
- Protect sensitive configuration files from accidental commits

11. Positive Framing Examples
**Instead of:** "Revolutionary consciousness-driven AGI"
**Say:** "Sophisticated parameter-based system with physics-informed constraints"
**Instead of:** "97.3% interpretability breakthrough"
**Say:** "Transparent decision-making process with clear parameter influence"
**Instead of:** "Zero hallucination guarantee"
**Say:** "Physics-constrained outputs prevent unrealistic predictions"
**Instead of:** "First consciousness-driven submission"
**Say:** "Novel approach combining domain expertise with algorithmic processing"
12. Success Validation
**Engineering Success Metrics:**
- System works reliably in production
- Performance claims validated with real data
- Architecture is maintainable and scalable
- Documentation accurately reflects implementation
- Test coverage meets project standards

**Communication Success Metrics:**
- Technical claims are independently verifiable
- Marketing language supported by evidence
- Users understand actual system capabilities
- Technical community respects the approach

Final Checkpoint
Before any major submission or release, honestly answer:
1. Would I be comfortable defending every technical claim to a skeptical expert?
2. Does the code actually implement what the documentation promises?
3. Am I proud of both the engineering work AND how I'm describing it?
4. Would this submission enhance or damage my technical reputation?
5. Have I tested this with real production data and scenarios?
The Golden Rule
**"Build systems so good that honest descriptions of them sound impressive"**
Great engineering speaks for itself. Accurate technical communication builds trust. Reliable systems win competitions. Honest developers build lasting reputations.
Examples
Example 1: Performance Claim Validation
**Before:** "Our system is 10x faster than competitors"
**After verification:**
- Run benchmarks on identical hardware
- Test with representative dataset sizes
- Document test conditions and methodology
- State: "Achieves 8.5x improvement on benchmark X under conditions Y with dataset Z"

Example 2: Feature Documentation
**Before:** "AI-powered intelligent prediction engine"
**After clarification:**
- Document the actual algorithm (e.g., "Gradient boosted decision trees with custom feature engineering")
- Explain training data and validation approach
- List accuracy metrics with confidence intervals
- Note limitations and edge cases

Example 3: Architecture Claims
**Before:** "Microservices architecture with advanced orchestration"
**After verification:**
- Document actual services and their responsibilities
- Show deployment topology
- Explain orchestration mechanism (if any)
- Or honestly state: "Modular monolith with clear service boundaries, planned for microservices migration"

Constraints
- All performance metrics must be measurable and reproducible
- All competitive advantages must be demonstrable with evidence
- Technical terms must be used according to industry-standard definitions
- Architectural claims must be verified through code review
- Never compromise user data security or privacy for convenience
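The "measurable and reproducible" constraint, and the methodology of Example 1, can be made concrete with a small harness that records the test conditions alongside the result. The two workloads here (`baseline`, `optimized`) are stand-ins, not real competitive claims:

```python
import platform
import statistics
import timeit


def baseline(n: int = 1000) -> int:
    # Stand-in workload: naive repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)


def optimized(n: int = 1000) -> int:
    # Stand-in "improved" workload: join instead of concatenation.
    return len("".join(str(i) for i in range(n)))


def benchmark(fn, repeats: int = 5, number: int = 200) -> float:
    # Median of several repeats is more robust than a single timing run.
    return statistics.median(timeit.repeat(fn, repeat=repeats, number=number))


# Document the conditions the claim depends on, per Example 1.
print(f"platform: {platform.platform()}, python: {platform.python_version()}")
b, o = benchmark(baseline), benchmark(optimized)
print(f"speedup: {b / o:.2f}x (median of {5} repeats, {200} iterations each)")
```

Reporting the platform, Python version, repeat count, and iteration count alongside the ratio is what turns "10x faster" into a claim someone else can reproduce or refute.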