Configure, build, test, and manage OpenProxy: a high-performance proxy server written in Rust that routes requests to OpenAI, Gemini, Anthropic, and other LLM providers based on the `Host` header, with weighted load balancing, health checks, authentication management, HTTP/2, WebSocket proxying, and hot reloading.
```bash
cargo build --release     # optimized build
cargo test                # run all tests
cargo test --lib          # unit tests only
cargo test test_name      # run a single test by name
cargo clippy              # lint
cargo fmt                 # format

# Start with a config file; -p additionally writes a PID file
./target/release/openproxy start -c config.yml
./target/release/openproxy start -c config.yml -p /var/run/openproxy.pid
```
Navigate to the `e2e` directory and install Python dependencies:
```bash
cd e2e
pip install openai pydantic "httpx[http2]" websockets websocket-client pyyaml anthropic
```
Run individual test suites (files follow the `test_<provider>.py` naming pattern) directly with Python.
OpenProxy routes requests to multiple LLM providers based on the `Host` header with intelligent load balancing and health monitoring.
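The core routing decision can be sketched as a host lookup. This is a simplified illustration, not the actual code: the real `Program::select_provider()` in `src/lib.rs` also handles weights and fallbacks, and `ProviderConfig` here is a hypothetical record:

```rust
/// Hypothetical provider record for illustration only.
#[derive(Debug, Clone, PartialEq)]
struct ProviderConfig {
    host: String,
    endpoint: String,
    healthy: bool,
}

/// Return all healthy providers whose configured host matches the
/// request's Host header (port stripped, case-insensitive).
fn select_providers<'a>(
    providers: &'a [ProviderConfig],
    host_header: &str,
) -> Vec<&'a ProviderConfig> {
    let host = host_header.split(':').next().unwrap_or("");
    providers
        .iter()
        .filter(|p| p.healthy && p.host.eq_ignore_ascii_case(host))
        .collect()
}

fn main() {
    let providers = vec![
        ProviderConfig {
            host: "api.openai.com".into(),
            endpoint: "https://api.openai.com".into(),
            healthy: true,
        },
        ProviderConfig {
            host: "api.anthropic.com".into(),
            endpoint: "https://api.anthropic.com".into(),
            healthy: false,
        },
    ];
    // The port in the Host header is ignored; unhealthy providers are skipped.
    let matched = select_providers(&providers, "api.openai.com:443");
    assert_eq!(matched.len(), 1);
}
```

When more than one healthy provider matches, the weighted selection described below decides between them.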
- **Configuration & Core State** (`src/lib.rs`)
- **Request Handling** (`src/worker/mod.rs`)
- **Provider Implementations** (`src/provider/mod.rs`)
- **HTTP Parsing** (`src/http/mod.rs`)
- **Streaming Helpers** (`src/http/reader/mod.rs`)
- **Connection Management** (`src/executor/mod.rs`)
- **HTTP/2 Client** (`src/h2client/mod.rs`)
- **WebSocket Support** (`src/websocket/mod.rs`)
1. **Client Connection**: TCP connection established (HTTP or HTTPS)
2. **TLS Handshake** (HTTPS): ALPN negotiation selects h2 or http/1.1
3. **Worker Spawned**: `Executor` creates a `Worker` to handle the connection
4. **Request Parsing**: Worker reads request, extracts `Host` header
5. **Provider Selection**: `Program::select_provider()` finds matching provider(s) by host
6. **Weighted Selection**: If multiple healthy providers match, weighted random selection is applied
7. **Request Proxying**: Worker proxies request to selected provider
8. **Response Streaming**: Response streamed back to client
9. **WebSocket Upgrade** (if applicable): Bidirectional tunnel established for WebSocket connections
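Step 6 (weighted random selection) can be illustrated with a small helper. Factoring the random draw out as a parameter makes the logic deterministic and testable; the function name and shape are illustrative, not OpenProxy's actual API:

```rust
/// Pick an index from `weights` given a roll in `0..total_weight`.
/// With weights [3, 2, 1], rolls 0-2 pick index 0, rolls 3-4 pick index 1,
/// and roll 5 picks index 2, so a uniform roll selects providers in
/// proportion to their configured weights.
fn pick_weighted(weights: &[u32], roll: u32) -> Option<usize> {
    let total: u32 = weights.iter().sum();
    if total == 0 {
        return None;
    }
    let mut r = roll % total;
    for (i, &w) in weights.iter().enumerate() {
        if r < w {
            return Some(i);
        }
        r -= w;
    }
    None // unreachable when total > 0
}

fn main() {
    // Weights mirror the sample config in this document: 3, 2, 1.
    let weights = [3, 2, 1];
    assert_eq!(pick_weighted(&weights, 0), Some(0));
    assert_eq!(pick_weighted(&weights, 3), Some(1));
    assert_eq!(pick_weighted(&weights, 5), Some(2));
}
```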
The `Provider` trait defines how each LLM provider handles authentication and headers:
**Static Authentication**
**Dynamic Authentication**
**Header Transformation**
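A plausible shape for this trait, reconstructed from the method names referenced elsewhere in this document (`auth_header()`, `auth_header_key()`, `uses_dynamic_auth()`, `extra_headers()`, and so on). Treat the exact signatures as assumptions, not the real interface in `src/provider/mod.rs`:

```rust
/// Hypothetical sketch of the Provider trait; signatures are guesses
/// based on the method names this document mentions.
trait Provider {
    /// Header name used for authentication, e.g. "authorization".
    fn auth_header_key(&self) -> &str;
    /// Static credential value sent on every request.
    fn auth_header(&self) -> String;
    /// True for providers whose credentials are fetched at runtime (e.g. OAuth).
    fn uses_dynamic_auth(&self) -> bool {
        false
    }
    /// Fresh credential for dynamic-auth providers.
    fn get_dynamic_auth_header(&self) -> Option<String> {
        None
    }
    /// Additional provider-specific headers, e.g. ("anthropic-version", "...").
    fn extra_headers(&self) -> Vec<(String, String)> {
        Vec::new()
    }
    /// Rewrite a client-supplied header before forwarding, if needed.
    fn transform_extra_header(&self, key: &str, value: &str) -> (String, String) {
        (key.to_string(), value.to_string())
    }
}

/// Example static-auth implementation.
struct OpenAi {
    api_key: String,
}

impl Provider for OpenAi {
    fn auth_header_key(&self) -> &str {
        "authorization"
    }
    fn auth_header(&self) -> String {
        format!("Bearer {}", self.api_key)
    }
}

fn main() {
    let p = OpenAi { api_key: "sk-test".into() };
    assert_eq!(p.auth_header(), "Bearer sk-test");
    assert!(!p.uses_dynamic_auth());
}
```

Default methods keep static-auth providers minimal: only OAuth-style providers need to override the dynamic-auth hooks.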
**HTTP/1.1 (`http/mod.rs`)**
**HTTP/2 (`worker/mod.rs`)**
YAML-based configuration with hot-reload via SIGHUP signal.
**Required Fields**
**Provider Configuration**
**Optional Settings**
Errors are represented by a custom `Error` enum defined in `lib.rs`.
1. **Understand the Module Structure**: Identify which module(s) your change affects (core state, worker, provider, HTTP parsing, etc.)
2. **Review the Provider Trait**: If adding a new provider or modifying authentication, check the `Provider` trait interface in `src/provider/mod.rs`
3. **Test Header Processing**: If changing header handling, understand the difference between HTTP/1.1 (`Payload` state machine) and HTTP/2 (`UpstreamInfo` pre-computation)
4. **Run Unit Tests**: After changes, run `cargo test --lib` to verify unit tests pass
5. **Run E2E Tests**: Navigate to `e2e/` and run relevant test suites to verify end-to-end behavior
6. **Check Linting**: Run `cargo clippy` and fix any warnings
7. **Format Code**: Run `cargo fmt` before committing
1. **Create Provider Implementation**: Add a new struct in `src/provider/mod.rs` implementing the `Provider` trait
2. **Define Authentication**: Implement `auth_header()` and `auth_header_key()` for static auth, or `uses_dynamic_auth()` and `get_dynamic_auth_header()` for OAuth
3. **Add Extra Headers**: If the provider requires special headers, implement `extra_headers()` and `transform_extra_header()`
4. **Update Configuration Enum**: Add the new provider type to the configuration parser
5. **Test**: Create an E2E test suite in `e2e/test_<provider>.py` following existing patterns
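Step 4 can be sketched as extending a provider-type enum and its parser. The enum name, the variant set, and the `FromStr`-based parsing are assumptions for illustration ("mistral" stands in for whatever new provider is being added):

```rust
/// Hypothetical provider-type enum from the configuration parser.
#[derive(Debug, PartialEq)]
enum ProviderType {
    OpenAi,
    Anthropic,
    Gemini,
    Mistral, // the newly added provider
}

impl std::str::FromStr for ProviderType {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "openai" => Ok(ProviderType::OpenAi),
            "anthropic" => Ok(ProviderType::Anthropic),
            "gemini" => Ok(ProviderType::Gemini),
            "mistral" => Ok(ProviderType::Mistral),
            other => Err(format!("unknown provider type: {other}")),
        }
    }
}

fn main() {
    // The string matches the `type:` field in config.yml.
    assert_eq!("mistral".parse::<ProviderType>(), Ok(ProviderType::Mistral));
    assert!("unknown".parse::<ProviderType>().is_err());
}
```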
1. **Enable Logging**: Run with `RUST_LOG=debug ./target/release/openproxy start -c config.yml`
2. **Trace Request Path**: Follow the request through:
- Worker receives connection (`worker/mod.rs`)
- Headers parsed (`http/mod.rs`)
- Provider selected (`lib.rs::select_provider()`)
- Request proxied (`worker/mod.rs::proxy_h1()` or `proxy_h2()`)
3. **Check Header Filtering**: Verify which headers are being filtered in `split_header_chunks()` (HTTP/1.1) or `UpstreamInfo` construction (HTTP/2)
4. **Monitor Health Checks**: If provider selection fails, check health check status in logs
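Header filtering (step 3) typically means dropping hop-by-hop headers before forwarding, since those apply only to a single connection (RFC 9110 §7.6.1). A minimal sketch of the idea; the real `split_header_chunks()` logic may filter differently:

```rust
/// Hop-by-hop headers that must not be forwarded upstream.
const HOP_BY_HOP: &[&str] = &[
    "connection",
    "keep-alive",
    "proxy-authenticate",
    "proxy-authorization",
    "te",
    "trailer",
    "transfer-encoding",
    "upgrade",
];

/// Keep only end-to-end headers; a sketch of what proxy-side
/// header filtering generally looks like.
fn filter_headers(headers: &[(String, String)]) -> Vec<(String, String)> {
    headers
        .iter()
        .filter(|(k, _)| !HOP_BY_HOP.contains(&k.to_ascii_lowercase().as_str()))
        .cloned()
        .collect()
}

fn main() {
    let headers = vec![
        ("Host".to_string(), "api.openai.com".to_string()),
        ("Connection".to_string(), "keep-alive".to_string()),
        ("Accept".to_string(), "application/json".to_string()),
    ];
    // "Connection" is hop-by-hop and gets dropped; the other two pass through.
    let kept = filter_headers(&headers);
    assert_eq!(kept.len(), 2);
}
```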
1. **Create YAML Configuration**: Define providers with appropriate weights, endpoints, and API keys
2. **Enable Health Checks**: Set `health_check.enabled: true` and adjust interval as needed
3. **Configure TLS**: Provide valid `cert_file` and `private_key_file` for HTTPS
4. **Set Bind Addresses**: Use `https_bind_address` and `http_bind_address` to control network interfaces
5. **Test Configuration**: Run `./target/release/openproxy start -c config.yml` and verify startup logs
6. **Enable Hot Reload**: Send SIGHUP signal to reload configuration without downtime
7. **Plan for Upgrades**: Use SIGUSR2 for hot upgrades (new binary takes over, old process gracefully shuts down)
**Configuration file (config.yml)**:
```yaml
https_port: 8443
http_port: 8080
cert_file: /path/to/cert.pem
private_key_file: /path/to/key.pem
health_check:
  enabled: true
  interval: 60
providers:
  - type: openai
    host: api.openai.com
    endpoint: https://api.openai.com
    api_key: sk-...
    weight: 3
    tls: true
  - type: anthropic
    host: api.anthropic.com
    endpoint: https://api.anthropic.com
    api_key: $(aws secretsmanager get-secret-value --secret-id anthropic-key --query SecretString --output text)
    weight: 2
    tls: true
  - type: gemini
    host: generativelanguage.googleapis.com
    endpoint: https://generativelanguage.googleapis.com
    api_key: AIza...
    weight: 1
    tls: true
    is_fallback: true
```
**Build and run**:
```bash
cargo build --release
./target/release/openproxy start -c config.yml
```
**Test with curl**:
```bash
# -k skips TLS verification for a self-signed local certificate;
# the Host header selects which provider handles the request
curl -k -H "Host: api.openai.com" https://localhost:8443/v1/chat/completions
curl -k -H "Host: api.anthropic.com" https://localhost:8443/v1/messages
```
**Hot reload configuration**:
```bash
kill -HUP $(cat /var/run/openproxy.pid)
```
**Hot upgrade binary**:
```bash
kill -USR2 $(cat /var/run/openproxy.pid)
```