AgentDoG Security Agent
An AI agent specifically trained for security-focused tasks, built on Llama 3.1 8B with enhanced safety and security capabilities. Runs locally using GGUF quantization for privacy and performance.
What This Agent Does
AgentDoG is designed to assist with:
- Security code review and vulnerability detection
- Threat analysis and risk assessment
- Safe AI agent operations with built-in guardrails
- Privacy-preserving local inference
- Security-conscious decision making

Installation & Setup
Step 1: Choose Your Runtime
**Option A: llama.cpp**
```bash
# Clone llama.cpp if not already installed
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```
**Option B: Ollama**
```bash
# Install Ollama from https://ollama.ai
curl -fsSL https://ollama.ai/install.sh | sh
```
**Option C: LM Studio**
Download from https://lmstudio.ai
Step 2: Download the Model
Choose a quantization level based on your hardware:
**Recommended (balanced quality/speed):**
- Q4_K_M (5.0 GB) - Fast, good quality
- Q5_K_M (5.8 GB) - Better quality, slightly slower

**Lower memory (reduced quality):**
- IQ3_S (3.8 GB) - Minimal quality loss
- Q3_K_M (4.1 GB) - Acceptable for most tasks

**Best quality:**
- Q6_K (6.7 GB) - Nearly full precision

Download from: https://huggingface.co/mradermacher/AgentDoG-Llama3.1-8B-i1-GGUF
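If you prefer to script the download, the `huggingface_hub` package can fetch a single quant directly from that repository; a minimal sketch, assuming `pip install huggingface_hub` and the Q4_K_M filename used in Step 3:

```python
# Minimal download sketch with huggingface_hub; verify the exact filename
# against the repository's file list before running.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="mradermacher/AgentDoG-Llama3.1-8B-i1-GGUF",
    filename="AgentDoG-Llama3.1-8B.i1-Q4_K_M.gguf",  # swap for the quant you chose
)
print(model_path)  # local cache path to pass to your runtime
```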
Step 3: Load the Model
**llama.cpp:**
```bash
./llama-cli -m AgentDoG-Llama3.1-8B.i1-Q4_K_M.gguf -p "Your security prompt here" --temp 0.7
```
**Ollama:**
```bash
# Create Modelfile
echo 'FROM ./AgentDoG-Llama3.1-8B.i1-Q4_K_M.gguf' > Modelfile
ollama create agentdog -f Modelfile
ollama run agentdog
```
**LM Studio:**
1. Open LM Studio
2. Navigate to "Local Models"
3. Click "Import" and select the GGUF file
4. Start chatting in the Chat tab
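If you would rather drive the model from code than a chat UI, the `llama-cpp-python` bindings can load the same GGUF file. A minimal sketch, assuming `pip install llama-cpp-python` and the Q4_K_M file from Step 2 in the working directory:

```python
# Minimal programmatic sketch; the arguments mirror the llama.cpp CLI flags
# shown above, but double-check them against your installed llama-cpp-python version.
from llama_cpp import Llama

llm = Llama(
    model_path="AgentDoG-Llama3.1-8B.i1-Q4_K_M.gguf",
    n_ctx=8192,  # default context length noted under Configuration Tips
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Review this code for injection flaws: ..."}],
    temperature=0.7,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```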
Usage Examples
Security Code Review
```
Analyze this Python function for security vulnerabilities:
def process_user_input(data):
query = f"SELECT * FROM users WHERE id = {data['user_id']}"
return db.execute(query)
```
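The function above interpolates `user_id` straight into the SQL string, so the expected finding is SQL injection. For comparison, a parameterized version, sketched here with Python's built-in `sqlite3` (your database driver may use a different placeholder style):

```python
# Parameterized query: the driver binds user_id as data, never as SQL text.
import sqlite3

def process_user_input(db: sqlite3.Connection, data: dict):
    query = "SELECT * FROM users WHERE id = ?"  # placeholder instead of an f-string
    return db.execute(query, (data["user_id"],)).fetchall()
```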
Threat Assessment
```
Evaluate the security risks of this scenario:
- Public API endpoint accepting file uploads
- Files stored in /tmp directory
- No file type validation
- Files served via direct URL access

Provide risk level and mitigation recommendations.
```
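The mitigations the model should surface for that scenario typically include an extension allowlist, randomized storage names, and keeping uploads out of any directly served path. A rough illustrative sketch (the directory and allowlist below are placeholders, not part of AgentDoG):

```python
# Illustrative upload hardening: allowlist the extension, randomize the stored
# name, and write outside any directly served directory.
import secrets
from pathlib import Path

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # example policy, adjust per use case
UPLOAD_DIR = Path("/var/app/uploads")          # assumed non-web-served location

def store_upload(original_name: str, content: bytes) -> Path:
    ext = Path(original_name).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"rejected file type: {ext!r}")
    dest = UPLOAD_DIR / f"{secrets.token_hex(16)}{ext}"  # never reuse the client-supplied name
    dest.write_bytes(content)
    return dest
```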
Safe Agent Operations
```
I'm designing an AI agent that will:
1. Read user emails
2. Classify them by priority
3. Auto-respond to routine requests
What safety guardrails should I implement?
```
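One guardrail pattern the model is likely to recommend for that agent is an action allowlist plus human escalation for anything outside routine traffic; a schematic sketch (all names here are hypothetical):

```python
# Schematic guardrail: auto-respond only to an allowlisted, low-risk category;
# everything else is queued for human review rather than acted on.
AUTO_RESPOND_CATEGORIES = {"routine"}  # hypothetical policy

def handle_email(email, classify, draft_reply, queue_for_human):
    category = classify(email)  # e.g. "routine", "priority", "sensitive"
    if category in AUTO_RESPOND_CATEGORIES:
        return {"action": "auto_respond", "reply": draft_reply(email), "category": category}
    return {"action": "escalate", "ticket": queue_for_human(email, category), "category": category}
```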
Configuration Tips
Temperature Settings
- **0.3-0.5**: Precise, deterministic security analysis
- **0.7**: Balanced creativity and accuracy (recommended)
- **0.9-1.0**: Exploratory threat modeling

Context Length
- Default: 8192 tokens
- Increase for longer code reviews: `--ctx-size 16384`

Performance Tuning
```bash
# llama.cpp with GPU acceleration
./llama-cli -m model.gguf -ngl 35 --mlock
# -ngl: number of layers to offload to GPU
# --mlock: keep model in RAM
```
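The same knobs exist as constructor arguments if you drive the model through `llama-cpp-python` instead of the CLI; the argument names below come from those bindings and are worth confirming against your installed version:

```python
# Rough CLI-to-binding mapping (assumed):
#   -ngl 35     -> n_gpu_layers=35
#   --mlock     -> use_mlock=True
#   --ctx-size  -> n_ctx
from llama_cpp import Llama

llm = Llama(
    model_path="AgentDoG-Llama3.1-8B.i1-Q4_K_M.gguf",
    n_gpu_layers=35,
    use_mlock=True,
    n_ctx=8192,
)
```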
Important Notes
- **Local Execution**: This model runs entirely on your hardware, ensuring sensitive security data never leaves your system
- **Quantization Trade-offs**: Lower quantization levels (Q2, Q3) may reduce accuracy for complex security analysis
- **Base Model**: Built on AI45Research/AgentDoG-Llama3.1-8B with additional safety training
- **License**: Apache 2.0 - free for commercial use

System Requirements
- **Minimum RAM**: 8 GB (for Q3/Q4 quants)
- **Recommended RAM**: 16 GB (for Q5/Q6 quants)
- **GPU**: Optional but recommended (CUDA, Metal, or ROCm)
- **Storage**: 3-7 GB depending on quantization

When to Use This vs Cloud APIs
Use AgentDoG when:
- Analyzing proprietary or sensitive code
- Operating in air-gapped environments
- Requiring deterministic, auditable outputs
- Cost per inference matters (local is free after download)

Use cloud APIs when:
- Need cutting-edge reasoning (GPT-4, Claude Opus)
- Limited local compute resources
- Require multimodal capabilities

Troubleshooting
**Model loads slowly:**
- Use `--mlock` to keep model in RAM
- Reduce context size with `--ctx-size 4096`
- Try a smaller quantization (Q3/Q4)

**Out of memory errors:**
- Use a more aggressive quantization (IQ3_S, Q3_K_M)
- Close other applications
- Enable GPU offloading if available

**Low quality outputs:**
- Upgrade to Q5_K_M or Q6_K
- Adjust temperature (try 0.5-0.7)
- Provide more detailed prompts with examples