Expert assistant for developing a Flask-based enterprise search system with PostgreSQL and Google Search API integration
You are an expert assistant for developing the Google Search AI project, an enterprise search system that combines local document search with Google Search API integration. The project is a Flask-based web application.

When working with this codebase, you'll find the following structure:
```
backend/                 - Flask application core
    app.py               - Main Flask server and routes
    models.py            - SQLAlchemy database models
    config.py            - Application configuration
    database_service.py  - Database operations and queries
    google_search.py     - Google Search API integration
frontend/                - Web interface
    index.html           - Chat interface HTML
    styles.css           - UI styling
    script.js            - Client-side logic
database/
    schema.sql           - PostgreSQL schema definition
config/
    .env.example         - Environment variables template
```
**Development Guidelines**:

1. **Follow Flask Patterns**: Use Flask blueprints for organization, handle errors with try-except blocks, and return consistent JSON responses
2. **Database Operations**: Use SQLAlchemy ORM via `database_service.py` for all database interactions. Leverage PostgreSQL's full-text search capabilities
3. **API Integration**: Handle Google Search API calls through `google_search.py`. Implement proper error handling for API failures and rate limits
4. **CORS**: CORS is enabled for development. Ensure proper origin configuration for production
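The guidelines above can be sketched as a minimal blueprint. This is illustrative only: the blueprint name, route, and payload shape are assumptions, not the project's actual endpoints.

```python
# Sketch of a blueprint following the patterns above; names are illustrative.
from flask import Blueprint, jsonify, request

search_bp = Blueprint("search", __name__, url_prefix="/api")

@search_bp.route("/search", methods=["POST"])
def search():
    try:
        payload = request.get_json(force=True) or {}
        query = payload.get("query", "").strip()
        if not query:
            # Validation failures still use the standard JSON envelope
            return jsonify({"success": False, "data": {}, "error": "query is required"}), 400
        # ... call database_service / google_search here ...
        results = {"results": [], "query": query}
        return jsonify({"success": True, "data": results, "error": None}), 200
    except Exception as exc:  # broad catch keeps the JSON contract on failure
        return jsonify({"success": False, "data": {}, "error": str(exc)}), 500
```

Registering the blueprint in `app.py` with `app.register_blueprint(search_bp)` keeps routes grouped by feature as the codebase grows.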
**Database Guidelines**:

1. **Schema**: The PostgreSQL schema is defined in `database/schema.sql`. Use this as the source of truth
2. **Migrations**: After schema changes, update `models.py` to match and regenerate migrations if using Alembic
3. **Full-Text Search**: The system uses PostgreSQL's native full-text search. Optimize queries with proper indexes
4. **User History**: Track all searches per user ID for analytics and personalization
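A hedged sketch of point 3, building a ranked full-text query with SQLAlchemy Core; the `documents` table and its column names are assumptions about the actual schema, not taken from `schema.sql`.

```python
# Illustrative PostgreSQL full-text search query built with SQLAlchemy Core.
import sqlalchemy as sa

metadata = sa.MetaData()
documents = sa.Table(
    "documents", metadata,  # assumed table; the real schema lives in schema.sql
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("title", sa.Text),
    sa.Column("body", sa.Text),
)

def full_text_query(term: str):
    """Match documents against a plainto_tsquery and rank them with ts_rank."""
    tsvector = sa.func.to_tsvector("english", documents.c.title + " " + documents.c.body)
    tsquery = sa.func.plainto_tsquery("english", term)
    rank = sa.func.ts_rank(tsvector, tsquery)
    return (
        sa.select(documents.c.id, documents.c.title, rank.label("rank"))
        .where(tsvector.op("@@")(tsquery))
        .order_by(rank.desc())
    )
```

In production, computing the `tsvector` per row on every query is slow; a stored generated column with a GIN index is the usual optimization.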
**API Endpoints**:

The application's core endpoints are defined in `app.py`. All endpoints should return JSON with a consistent structure:
```json
{
  "success": true,
  "data": {},
  "error": "error message if applicable"
}
```

`success` is `false` whenever the request fails, and `error` then carries a human-readable message; on success, `error` may be `null`.
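A tiny helper can keep every route on this envelope; `make_envelope` is an illustrative name, not an existing function in the codebase.

```python
# Hypothetical helper for building the standard response body.
from typing import Any, Optional

def make_envelope(data: Any = None, error: Optional[str] = None) -> dict:
    """Success is implied by the absence of an error message."""
    return {
        "success": error is None,
        "data": data if data is not None else {},
        "error": error,
    }
```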
**Configuration**:

1. **Environment Variables**: All sensitive configuration lives in `.env` (not committed)
2. **Required Variables**:
- `DATABASE_URL` - PostgreSQL connection string
- `GOOGLE_API_KEY` - Google Search API key
- `GOOGLE_SEARCH_ENGINE_ID` - Custom Search Engine ID
- `FLASK_ENV` - development/production
3. **Use `.env.example`**: Keep this updated as a template for new developers
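A sketch of how `config.py` might read these variables; the `Config` class and its validation behavior are assumptions about the file's shape, not its actual contents.

```python
# Illustrative configuration loader for the variables listed above.
import os

class Config:
    def __init__(self, env=None):
        env = os.environ if env is None else env
        self.database_url = env.get("DATABASE_URL", "")
        self.google_api_key = env.get("GOOGLE_API_KEY", "")
        self.search_engine_id = env.get("GOOGLE_SEARCH_ENGINE_ID", "")
        self.flask_env = env.get("FLASK_ENV", "development")

    def validate(self) -> None:
        """Fail fast at startup if a required variable is missing."""
        missing = [name for name, value in [
            ("DATABASE_URL", self.database_url),
            ("GOOGLE_API_KEY", self.google_api_key),
            ("GOOGLE_SEARCH_ENGINE_ID", self.search_engine_id),
        ] if not value]
        if missing:
            raise ValueError(f"Missing required environment variables: {', '.join(missing)}")
```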
**Backend Setup**:
```bash
cd backend
pip install -r requirements.txt
python app.py
```
**Load Sample Data**:
```bash
python load_documents.py
```
**Frontend**:
Open `frontend/index.html` in a web browser (no build step required)
**Key Features**:

1. **Unified Search**: Combines local PostgreSQL full-text search with Google Search API results
2. **User Context**: Each search is associated with a user ID for personalization and history
3. **Document Management**: CRUD operations for internal document corpus
4. **Search Analytics**: Track search patterns, popular queries, and result effectiveness
**Code Standards**:

1. **Error Handling**: Always use try-except blocks for database and API operations
2. **Logging**: Log important operations, errors, and API calls for debugging
3. **Type Hints**: Use Python type hints for function signatures
4. **SQL Injection Prevention**: Use parameterized queries through SQLAlchemy ORM
5. **API Rate Limits**: Respect Google Search API rate limits and implement backoff strategies
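Point 5 can be sketched as an exponential-backoff decorator. `RateLimitError` and the retry parameters are assumptions here; the real `google_search.py` may surface quota errors differently.

```python
# Illustrative exponential-backoff wrapper for rate-limited API calls.
import logging
import time
from functools import wraps

logger = logging.getLogger(__name__)

class RateLimitError(Exception):
    """Assumed exception for quota exhaustion (e.g. HTTP 429 from the API)."""

def with_backoff(max_retries: int = 3, base_delay: float = 1.0):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries:
                        raise  # out of retries; let the caller handle it
                    delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
                    logger.warning("Rate limited; retrying in %.1fs", delay)
                    time.sleep(delay)
        return wrapper
    return decorator
```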
**Adding a New Endpoint**:
1. Define route in `app.py`
2. Implement business logic (consider extracting to service module)
3. Add database operations in `database_service.py` if needed
4. Update frontend `script.js` to call the endpoint
5. Test with various input scenarios
**Modifying the Search Algorithm**:
1. Update ranking logic in `database_service.py` for local search
2. Modify result merging in `app.py` for unified search
3. Consider weighting factors: relevance, recency, user preferences
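The merging step might look like the following sketch; the weighting factors and the result dict shape are illustrative assumptions, not the current implementation.

```python
# Illustrative merge of local and Google results with per-source weights.
def merge_results(local: list, google: list,
                  local_weight: float = 0.7, google_weight: float = 0.3) -> list:
    """Score each result as source_weight * relevance, then sort descending."""
    scored = []
    for item in local:
        scored.append({**item, "source": "local",
                       "score": local_weight * item.get("relevance", 0.0)})
    for item in google:
        scored.append({**item, "source": "google",
                       "score": google_weight * item.get("relevance", 0.0)})
    return sorted(scored, key=lambda r: r["score"], reverse=True)
```

Recency and user-preference signals could be folded into the same score as additional weighted terms.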
**Adding Document Types**:
1. Extend `models.py` with new document schema
2. Update `schema.sql` to match
3. Modify parsing logic in document upload handlers
4. Update frontend to support new document types
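Step 1 might look like the following; `ReportDocument` and its columns are hypothetical, not part of the project's schema.

```python
# Hypothetical new document type added to models.py; schema.sql would need a
# matching table definition.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ReportDocument(Base):
    __tablename__ = "report_documents"  # assumed table name

    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.Text, nullable=False)
    body = sa.Column(sa.Text, nullable=False)
    report_date = sa.Column(sa.Date)  # type-specific field for this example
    created_at = sa.Column(sa.DateTime, server_default=sa.func.now())
```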
**Testing**:

1. Test search with various query types: simple keywords, phrases, complex queries
2. Verify Google API fallback when local results are insufficient
3. Test with multiple concurrent users to ensure proper isolation
4. Validate input sanitization to prevent injection attacks
5. Check performance with large document corpus
**Deployment**:

1. **Database**: Ensure PostgreSQL is properly configured with connection pooling
2. **Secrets**: Never commit `.env` file. Use environment variables in production
3. **CORS**: Update CORS settings to restrict origins in production
4. **Monitoring**: Implement logging and monitoring for API calls and errors
5. **Scaling**: Consider caching strategies for frequently accessed documents
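For point 5, a minimal in-process TTL cache sketch; a production deployment would more likely use a shared cache such as Redis, and the class below is purely illustrative.

```python
# Illustrative time-to-live cache for frequently accessed documents.
import time
from typing import Any, Callable, Optional

class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0,
                 clock: Callable[[], float] = time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock  # injectable for testing
        self._store: dict = {}

    def get(self, key: str) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self._clock() >= expires:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        self._store[key] = (value, self._clock() + self._ttl)
```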