A Go service that syncs anime data from message queues (Pulsar/Kafka) into an Algolia search index, using Redis as intermediate storage for reliable batch processing of create, update, and delete operations.
The application follows a layered architecture with Redis as an intermediate storage layer. Before making changes, understand the two-phase processing model:
**Phase 1: Real-time Message Processing** - a consumer (Pulsar or Kafka) receives anime events, transforms them, and stores the resulting records in Redis.

**Phase 2: Batch Sync to Algolia (Cron)** - a scheduled job drains the staged records from Redis and pushes them to Algolia in batches.
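The two phases can be sketched with an in-memory stand-in for Redis. All names here (`SyncRecord`, `stage`, `drainBatch`) are illustrative, not the service's actual identifiers:

```go
package main

import "fmt"

// SyncRecord is a hypothetical shape for what Phase 1 stages in Redis;
// the real service serializes richer payloads.
type SyncRecord struct {
	ObjectID string // consistent across create/update/delete
	Action   string // "create", "update", or "delete"
}

// stage models Phase 1: a consumed message becomes a record in the store.
func stage(store map[string]SyncRecord, rec SyncRecord) {
	store[rec.ObjectID] = rec // idempotent: re-staging the same key overwrites
}

// drainBatch models Phase 2: the cron job takes up to n records out of
// the store; in the real service, a failed Algolia push would leave
// records in Redis for retry.
func drainBatch(store map[string]SyncRecord, n int) []SyncRecord {
	batch := make([]SyncRecord, 0, n)
	for id, rec := range store {
		if len(batch) == n {
			break
		}
		batch = append(batch, rec)
		delete(store, id)
	}
	return batch
}

func main() {
	store := map[string]SyncRecord{}
	stage(store, SyncRecord{ObjectID: "anime:1", Action: "create"})
	stage(store, SyncRecord{ObjectID: "anime:1", Action: "update"}) // reprocessed message
	fmt.Println(len(store), "staged;", len(drainBatch(store, 10)), "drained")
}
```

Because staging is keyed by ObjectID, reprocessing the same message does not duplicate work, which is what makes the cron-driven drain safe.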
```bash
# Build all binaries
go build ./cmd/

# Run the full test suite
go test ./...

# Regenerate mocks used by the unit tests
make mocks
```
```bash
# Consume from Pulsar and stage messages in Redis
go run ./cmd serve-algolia-sync

# Consume from Kafka and stage messages in Redis
go run ./cmd serve-algolia-sync-kafka

# Batch-sync staged records from Redis to Algolia (run by cron in production)
go run ./cmd sync-redis-to-algolia
```
Configuration uses `config/config.dev.json` and environment variables:
**Pulsar Configuration:**
**Kafka Configuration:**
**Algolia Configuration:**
**Redis Configuration:**
When modifying the codebase:
**Adding New Message Types:**
1. Update `Payload` struct in appropriate eventing handler
2. Modify transformation logic in `redis_processor` or `redis_processor_kafka`
3. Ensure Redis serialization handles new fields
4. Update tests to cover new message types
**Changing Processing Logic:**
1. Locate relevant processor in `internal/services/redis_processor/`
2. Update `Process()` method with new logic
3. Ensure action type determination (create/update/delete) remains correct
4. Update unit tests with mocks
5. Test end-to-end flow with Redis and Algolia
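Step 3's action-type determination might look like the following. This is a hedged sketch: it assumes the message carries a delete flag and that prior existence is known from Redis or a lookup, which may not match the real processor's inputs:

```go
package main

import "fmt"

// Event is a hypothetical message shape; the real processor derives
// the action from its own payload fields.
type Event struct {
	Deleted bool // tombstone / delete event
}

// determineAction maps an event plus current index state to one of the
// three Algolia actions the doc requires stay correct.
func determineAction(e Event, exists bool) string {
	switch {
	case e.Deleted:
		return "delete"
	case exists:
		return "update"
	default:
		return "create"
	}
}

func main() {
	fmt.Println(determineAction(Event{}, false))              // create
	fmt.Println(determineAction(Event{}, true))               // update
	fmt.Println(determineAction(Event{Deleted: true}, true))  // delete
}
```

Keeping this mapping in one pure function makes the unit tests in step 4 trivial to write against mocks.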
**Modifying Algolia Schema:**
1. Update transformation in `redis_processor` to match new schema
2. Modify Algolia client wrapper in `internal/services/algolia/` if needed
3. Update index settings in Algolia dashboard
4. Test batch sync with `sync-redis-to-algolia` command
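The transformation in step 1 produces records for Algolia, which requires every record to carry an `objectID`. A sketch of that shape; `Anime`, the `anime:<id>` key format, and the `title` key are assumptions about this service's schema:

```go
package main

import (
	"fmt"
	"net/url"
)

// Anime is a stand-in for the source record read from Redis.
type Anime struct {
	ID    int
	Title string
}

// toAlgoliaRecord builds the map the Algolia client would index.
// objectID is a genuine Algolia requirement; it must stay stable
// across create/update/delete for the same anime.
func toAlgoliaRecord(a Anime) map[string]any {
	return map[string]any{
		"objectID": fmt.Sprintf("anime:%d", a.ID),
		"title":    url.PathEscape(a.Title), // titles are URL-encoded before storage
	}
}

func main() {
	fmt.Println(toAlgoliaRecord(Anime{ID: 7, Title: "Spirited Away"}))
}
```

When the schema changes, the dashboard index settings (step 3) and this transform must agree on field names, or the new attributes will silently not be searchable.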
**Adding New Commands:**
1. Create command file in `internal/commands/`
2. Register command in `cmd/main.go`
3. Add configuration parameters to `config/config.dev.json`
4. Document environment variables required
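The registration step might follow a pattern like this map-based dispatch. This is only a sketch: the real `cmd/main.go` may use a CLI framework, and the handler bodies here are placeholders:

```go
package main

import (
	"fmt"
	"os"
)

// commands maps a subcommand name to its entrypoint, standing in for
// whatever registration cmd/main.go actually performs.
var commands = map[string]func() error{
	"serve-algolia-sync":       func() error { fmt.Println("consuming Pulsar"); return nil },
	"serve-algolia-sync-kafka": func() error { fmt.Println("consuming Kafka"); return nil },
	"sync-redis-to-algolia":    func() error { fmt.Println("batch syncing"); return nil },
}

// run looks up and executes a subcommand by name.
func run(name string) error {
	cmd, ok := commands[name]
	if !ok {
		return fmt.Errorf("unknown command %q", name)
	}
	return cmd()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: app <command>")
		os.Exit(2)
	}
	if err := run(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```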
```bash
# Scaffold a new database migration (replace migration_name)
make migrate-create name=migration_name
```
1. **Decoupled Architecture**: Message processing and Algolia sync are separate. Never bypass Redis storage.
2. **Idempotency**: All operations must be idempotent as messages may be reprocessed.
3. **Error Handling**: Failed Algolia syncs leave items in Redis for retry - never lose data.
4. **URL Encoding**: Anime titles must be URL-encoded before storage.
5. **Batch Processing**: Algolia updates are batched for performance - respect flush timeout.
6. **ObjectID Management**: Maintain consistent ObjectIDs across create/update/delete operations.
Deploy as two separate processes:
1. Message consumer service (Pulsar or Kafka) - runs continuously
2. Cron job - runs `sync-redis-to-algolia` on schedule (e.g., every 5 minutes)
Ensure Redis is available and accessible to both processes.
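The deployment split might look like this; the binary path, log path, and service manager are illustrative, while the 5-minute interval comes from the example above:

```bash
# Process 1: message consumer, kept running by systemd/a container
./app serve-algolia-sync

# Process 2: cron entry running the batch sync every 5 minutes
*/5 * * * * /usr/local/bin/app sync-redis-to-algolia >> /var/log/algolia-sync.log 2>&1
```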