This skill guides you through working with a webhook-driven FastAPI application that provides AI-powered code reviews for Bitbucket Enterprise Server. The agent automatically analyzes pull requests and commits, posting intelligent feedback using configurable LLM providers.
The application follows a 3-tier webhook architecture: a webhook-handling layer, API clients for Bitbucket and the LLM provider, and centralized configuration.
Key data flow: Bitbucket webhook → fetch diff → send to LLM → post review comment.
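This flow can be sketched as one orchestration function. The client method names (`get_pull_request_diff`, `get_code_review`, `post_pr_comment`) and the injected-client signature are illustrative assumptions, though the payload shape follows Bitbucket Server's webhook format:

```python
async def process_pull_request_review(payload: dict, bitbucket, llm) -> None:
    """Fetch the PR diff, ask the LLM for a review, post it back as a comment."""
    pr = payload["pullRequest"]
    repo = pr["toRef"]["repository"]
    project_key = repo["project"]["key"]
    repo_slug = repo["slug"]

    diff = await bitbucket.get_pull_request_diff(project_key, repo_slug, pr["id"])
    review = await llm.get_code_review(diff)
    if review:  # Skip posting when the LLM call fell back to None
        await bitbucket.post_pr_comment(project_key, repo_slug, pr["id"], review)
```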
All configuration is centralized in `config.py` using environment variables with defaults:
```python
import os

BITBUCKET_TOKEN = os.getenv("BITBUCKET_TOKEN")  # Required
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai")  # 'openai' or 'local_ollama'
```
When working with configuration, read values through `config.py` rather than calling `os.getenv` elsewhere in the codebase.
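For instance, a startup check can flag missing or invalid required settings before the app accepts webhooks (the helper name `missing_settings` is an illustrative assumption):

```python
from typing import List, Optional

def missing_settings(token: Optional[str], provider: str) -> List[str]:
    """Return the names of required settings that are absent or invalid."""
    problems = []
    if not token:
        problems.append("BITBUCKET_TOKEN")  # Required, no default
    if provider not in ("openai", "local_ollama"):
        problems.append("LLM_PROVIDER")  # Must be a supported provider
    return problems
```

Calling this at startup and refusing to serve when the list is non-empty turns misconfiguration into an immediate, visible failure rather than a silent error later.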
Both `bitbucket_client.py` and `llm_client.py` follow this consistent pattern:
```python
async def _make_request(self, method: str, endpoint: str, **kwargs):
    url = f"{self.base_url}{endpoint}"
    # Short-lived client per request; verify=False tolerates self-signed certificates
    async with httpx.AsyncClient(verify=False, timeout=30.0) as client:
        response = await client.request(method, url, headers=self.headers, **kwargs)
        return response
```
When implementing new API calls, reuse this request helper rather than constructing ad-hoc HTTP clients.
The main webhook handler processes events via background tasks:
```python
@app.post("/webhook/code-review")
async def webhook_handler(request: Request, background_tasks: BackgroundTasks):
    payload = await request.json()
    event_key = payload.get("eventKey")
    if event_key in ["pr:opened", "pr:modified", "pr:from_ref_updated"]:
        background_tasks.add_task(process_pull_request_review, payload)
    return {"status": "accepted"}
```
When handling webhooks, acknowledge the request immediately and defer the actual review to a background task so Bitbucket's webhook delivery does not time out.
Apply this error handling pattern across all modules:
```python
try:
    # Operation
    logger.info(f"Success message with {details}")
except Exception as e:
    logger.error(f"Error context: {str(e)}")
    return None  # or appropriate fallback
```
Error handling guidelines: log successes at `info` and failures at `error` with enough context to trace the operation, and return a safe fallback such as `None` rather than letting exceptions propagate.
Run the comprehensive test suite with `python run_tests.py`.
Test fixtures in `conftest.py` provide mocked clients and sample payloads.
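The sketch below shows the shape such fixtures might take; in `conftest.py` these builders would be wrapped with `@pytest.fixture` (names and sample values are assumptions):

```python
from unittest.mock import AsyncMock

def sample_pr_payload() -> dict:
    """Minimal Bitbucket Server pr:opened payload for tests."""
    return {
        "eventKey": "pr:opened",
        "pullRequest": {
            "id": 1,
            "toRef": {"repository": {"slug": "repository",
                                     "project": {"key": "PROJ"}}},
        },
    }

def mock_llm_client() -> AsyncMock:
    """LLM client whose review call resolves without network access."""
    client = AsyncMock()
    client.get_code_review.return_value = "Automated review text"
    return client
```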
Use `./lint.sh` to run all configured linters.
```bash
cp .env.example .env # Edit with test values
pip install -r requirements-dev.txt # Includes all dependencies
python main.py # Starts on localhost:8000
curl http://localhost:8000/health
```
```bash
docker-compose up -d                      # Start the service
docker-compose --profile local-llm up -d  # Also start the local LLM profile
```
Extend the `LLMClient` class with provider-specific methods:
```python
async def _test_provider_connection(self) -> Dict[str, Any]:
    # Test connectivity and return status
    ...

async def _get_provider_review(self, prompt: str) -> Optional[str]:
    # Send prompt and return review text
    ...
```
Then update `get_code_review()` and `test_connection()` to handle the new provider.
To customize review instructions, modify `REVIEW_PROMPT_TEMPLATE` in `config.py`.
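A template of the following shape could work, assuming the template exposes a `{diff}` placeholder (the wording and placeholder name are assumptions):

```python
# Hypothetical template; the real one lives in config.py
REVIEW_PROMPT_TEMPLATE = """You are a senior engineer reviewing a code change.
Focus on correctness, security issues, and readability.
Reference specific lines where possible and keep feedback concise.

Diff under review:
{diff}
"""

prompt = REVIEW_PROMPT_TEMPLATE.format(diff="+ added line")
```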
The `/health` endpoint validates that the service and its dependencies are reachable. A review can also be triggered manually via `/manual-review`:
```bash
curl -X POST http://localhost:8000/manual-review \
  -H "Content-Type: application/json" \
  -d '{"project_key": "PROJ", "repo_slug": "repository", "pr_id": 123}'
```
Pass `"commit_id": "abc123"` in place of `"pr_id"` to review a single commit instead.