Expert guidance for building and maintaining the FigureYa LLM recommendation system: a Go-based HTTP server with a modern web interface that provides intelligent biomedical chart module recommendations using LLM semantic understanding.
This is a production-ready recommendation system featuring:
**Backend Files**:
**Frontend Files**:
**Data Model**:
```go
type Module struct {
	Module    string `json:"module"`
	需求描述  string `json:"需求描述"` // Requirement description
	实用场景  string `json:"实用场景"` // Use cases
	图片类型  string `json:"图片类型"` // Chart types
	LLMStatus string `json:"llm_status"`
}
```
**CRITICAL**: Go's `encoding/json` only processes exported struct fields, and field names written in Chinese characters count as unexported (they have no uppercase first letter), so the Chinese-named fields above are silently skipped regardless of their tags. You MUST use map-based parsing:
```go
// ❌ THIS DOES NOT WORK
type Module struct {
	需求描述 string `json:"需求描述"` // Unexported (non-uppercase) field name: encoding/json skips it
}

// ✅ USE THIS APPROACH
var data map[string]interface{}
json.Unmarshal(bytes, &data)
description := getString(moduleMap, "需求描述") // moduleMap: one module's map from the decoded data
```
When modifying JSON parsing code:
1. Always use `map[string]interface{}` for initial decode
2. Extract Chinese-named fields using helper functions like `getString()`
3. Test with actual JSON data to verify field extraction
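The helper mentioned in step 2 can be sketched as follows; the exact signature of `getString` in `main.go` is an assumption, and the sample module name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// getString extracts a string field from a decoded JSON map.
// It returns "" when the key is absent or the value is not a string.
func getString(m map[string]interface{}, key string) string {
	if v, ok := m[key].(string); ok {
		return v
	}
	return ""
}

func main() {
	raw := []byte(`{"module":"FigureYa1survivalCurve","需求描述":"绘制生存曲线","llm_status":"OK"}`)
	var data map[string]interface{}
	if err := json.Unmarshal(raw, &data); err != nil {
		panic(err)
	}
	fmt.Println(getString(data, "module"))   // FigureYa1survivalCurve
	fmt.Println(getString(data, "需求描述")) // 绘制生存曲线
	fmt.Println(getString(data, "missing"))  // (empty string)
}
```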
Required variables (via `.env` file or environment):
```bash
OPENAI_API_KEY=sk-xxxxx # LLM service API key
BASE_URL=https://api.deepseek.com # LLM API base URL (auto-appends /v1/chat/completions)
MODEL=deepseek-chat # Model name
PROVIDER=openai # Provider identifier
PORT=8080 # Server port (default: 8080)
```
The system automatically appends `/v1/chat/completions` to `BASE_URL`, so provide only the base domain.
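A minimal sketch of that URL construction; the actual logic in `main.go` may differ, and tolerating a trailing slash is an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointURL appends the chat-completions path to the configured BASE_URL,
// tolerating an optional trailing slash in the .env value.
func endpointURL(baseURL string) string {
	return strings.TrimRight(baseURL, "/") + "/v1/chat/completions"
}

func main() {
	fmt.Println(endpointURL("https://api.deepseek.com"))
	fmt.Println(endpointURL("https://api.deepseek.com/"))
	// Both print: https://api.deepseek.com/v1/chat/completions
}
```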
**Static File Paths**: Always use absolute paths with `/static/` prefix:
**Module URL Generation**: Frontend generates documentation links using pattern:
```
https://ying-ge.github.io/FigureYa/{module}/{module}.html
```
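The frontend applies this pattern in JavaScript; expressed in Go for illustration (the module name is a made-up example):

```go
package main

import "fmt"

// moduleDocURL builds the documentation link for a module, mirroring the
// {module}/{module}.html pattern the frontend uses.
func moduleDocURL(module string) string {
	return fmt.Sprintf("https://ying-ge.github.io/FigureYa/%s/%s.html", module, module)
}

func main() {
	fmt.Println(moduleDocURL("FigureYa1survivalCurve"))
	// https://ying-ge.github.io/FigureYa/FigureYa1survivalCurve/FigureYa1survivalCurve.html
}
```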
**Response Handling**: API returns structured JSON with recommendations array. Each recommendation includes:
```bash
go mod tidy
go run main.go load_env.go
PORT=8080 go run main.go load_env.go
curl -X POST http://localhost:8080/recommend \
-H "Content-Type: application/json" \
-d '{"query": "我想绘制生存曲线图"}'
curl http://localhost:8080/health
curl http://localhost:8080/modules | jq '.modules[0]'
```
```bash
./deploy.sh start
docker build -t figureya-recommend .
docker run -p 8080:8080 --env-file .env figureya-recommend
docker-compose up -d
./deploy.sh production
docker-compose logs -f figureya-recommend
```
When making changes, verify:
1. **Module Loading**: `curl http://localhost:8080/modules | jq '.count'` should return 317
2. **Health Check**: `curl http://localhost:8080/health` should return `{"status":"healthy"}`
3. **Recommendation API**: Test with Chinese query to verify LLM integration
4. **Frontend**: Open `http://localhost:8080` and verify search UI loads
5. **Static Files**: Check browser console for 404 errors on CSS/JS files
**Static files return 404**:
**Empty module fields (需求描述, 实用场景, etc.)**:
**LLM API errors**:
**Port conflicts**:
**Docker build failures**:
```bash
curl http://localhost:8080/modules | jq '.modules[] | select(.module | contains("survival"))'
curl -X POST http://localhost:8080/recommend \
-H "Content-Type: application/json" \
-d '{"query": "生存分析"}' | jq .
docker inspect --format='{{.State.Health.Status}}' figureya-recommend
docker-compose logs -f figureya-recommend
docker exec figureya-recommend env | grep -E 'MODEL|BASE_URL'
```
1. **Temperature**: Set to 0.3 for consistent recommendations
2. **Prompt Structure**: Maintain JSON format for structured responses
3. **Context Building**: Include all module metadata for accurate matching
4. **Error Handling**: Check for empty responses and API errors
5. **Timeout**: Consider adding request timeout for slow LLM APIs
1. **Static Paths**: Always use `/static/` prefix
2. **API Calls**: Use `/recommend` endpoint with JSON POST
3. **Error Display**: Show user-friendly messages for API failures
4. **Loading States**: Display spinner during LLM processing
5. **Responsive Design**: Test on mobile and desktop viewports
1. **JSON Format**: Maintain Chinese field names (需求描述, 实用场景, 图片类型)
2. **Filtering**: Keep `llm_status == "OK"` filter logic
3. **Validation**: Ensure all required fields are present
4. **Count Verification**: Check `/health` endpoint reports correct count
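The filter in point 2 can be sketched as follows, operating on raw maps per the parsing note earlier (module names are illustrative):

```go
package main

import "fmt"

// filterOK keeps only modules whose llm_status is "OK", mirroring the
// filter logic described above.
func filterOK(modules []map[string]interface{}) []map[string]interface{} {
	var out []map[string]interface{}
	for _, m := range modules {
		if s, ok := m["llm_status"].(string); ok && s == "OK" {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	mods := []map[string]interface{}{
		{"module": "A", "llm_status": "OK"},
		{"module": "B", "llm_status": "FAILED"},
		{"module": "C"}, // missing status is excluded too
	}
	fmt.Println(len(filterOK(mods))) // 1
}
```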
1. **Follow Existing Patterns**: Use similar structure to existing endpoints
2. **Environment Config**: Add new variables to `.env.example` and `load_env.go`
3. **Docker Updates**: Update Dockerfile and docker-compose.yml if needed
4. **Documentation**: Update this file with new commands and configurations
5. **Testing**: Add curl commands to verify new functionality
```bash
go run main.go load_env.go
```
Use for:
```bash
docker build -t figureya-recommend .
docker run -p 8080:8080 --env-file .env figureya-recommend
```
Use for:
```bash
docker-compose --profile production up -d
```
Use for:
1. **API Keys**: Never commit `.env` file to version control
2. **CORS**: Configure allowed origins in production
3. **Rate Limiting**: Consider adding rate limits for `/recommend` endpoint
4. **Input Validation**: Sanitize user queries before passing to LLM
5. **Error Messages**: Avoid exposing internal paths or configurations
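A minimal input-validation sketch for point 4; the 500-rune cap is an assumed, illustrative bound, not a documented constraint:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"unicode/utf8"
)

// validateQuery applies basic checks before a user query reaches the LLM.
func validateQuery(q string) error {
	q = strings.TrimSpace(q)
	if q == "" {
		return errors.New("query must not be empty")
	}
	if utf8.RuneCountInString(q) > 500 {
		return errors.New("query exceeds 500 characters")
	}
	return nil
}

func main() {
	fmt.Println(validateQuery("我想绘制生存曲线图")) // <nil>
	fmt.Println(validateQuery("   "))              // query must not be empty
}
```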
When extending this system:
1. **Add Authentication**: Implement user accounts and API key management
2. **Caching**: Cache LLM responses for common queries (Redis/Memcached)
3. **Analytics**: Track popular queries and recommendation click-through rates
4. **A/B Testing**: Test different LLM prompts and models
5. **Monitoring**: Add Prometheus metrics and Grafana dashboards
6. **Batch Recommendations**: Support multiple queries in single request
7. **Module Ranking**: Implement more sophisticated scoring algorithms
8. **User Feedback**: Allow users to rate recommendations