A specialized language model fine-tuned for security, safety, and agent-based tasks. Built on Meta's Llama 3.1 8B architecture and optimized for security analysis, threat assessment, and conversational safety evaluation.
AgentDoG-FG-Llama3.1-8B is a fine-tuned model specifically designed for security-focused agent applications. This model excels at analyzing code for vulnerabilities, assessing safety implications of actions, and providing security-aware conversational responses. Available in GGUF format with multiple quantization options to balance quality and resource requirements.
Select a GGUF quantization based on your hardware and quality requirements:
**Recommended for Most Users:** i1-Q4_K_M (used in all examples below) offers a good balance of quality and size.
**For Constrained Hardware:** smaller K-quants (e.g. i1-Q3_K_M or i1-Q2_K) reduce memory use at a noticeable quality cost.
**For Experimental/Extreme Constraints:** the lowest-bit IQ quants (e.g. i1-IQ2_XXS or i1-IQ1_S) fit very tight memory budgets but degrade output quality significantly.
Download your chosen quantization from HuggingFace:
```bash
huggingface-cli download mradermacher/AgentDoG-FG-Llama3.1-8B-i1-GGUF \
AgentDoG-FG-Llama3.1-8B.i1-Q4_K_M.gguf \
--local-dir ./models
```
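If you prefer to script the download, the `huggingface_hub` Python package provides the same operation; a minimal sketch equivalent to the CLI command above:
```python
from huggingface_hub import hf_hub_download

# Downloads the same Q4_K_M file as the CLI command above.
path = hf_hub_download(
    repo_id="mradermacher/AgentDoG-FG-Llama3.1-8B-i1-GGUF",
    filename="AgentDoG-FG-Llama3.1-8B.i1-Q4_K_M.gguf",
    local_dir="./models",
)
print(path)  # local path to the downloaded GGUF file
```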
**llama.cpp:**
```bash
# Recent llama.cpp releases name the binary llama-cli; older builds use ./main.
./llama-cli -m models/AgentDoG-FG-Llama3.1-8B.i1-Q4_K_M.gguf \
  -n 512 \
  -p "Analyze the following code for security vulnerabilities:"
```
**Ollama:**
```bash
echo 'FROM ./models/AgentDoG-FG-Llama3.1-8B.i1-Q4_K_M.gguf' > Modelfile
ollama create agentdog -f Modelfile
ollama run agentdog "Review this authentication implementation for security issues"
```
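You can also drive the created model programmatically through Ollama's local REST API (served on port 11434 by default); a minimal sketch using `requests`, assuming the `ollama create agentdog` step above has been run:
```python
import requests

# Query the locally running Ollama server (default port 11434).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "agentdog",
        "prompt": "Review this authentication implementation for security issues",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```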
**LM Studio:**
1. Open LM Studio
2. Click "Import" and select the downloaded GGUF file
3. Load the model from your local collection
4. Start chatting with security-focused prompts
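LM Studio can additionally serve the loaded model through its OpenAI-compatible local server (default `http://localhost:1234/v1`). A minimal sketch, assuming the server is started and the model is loaded; the `model` identifier below is a placeholder for whatever name LM Studio displays:
```python
import requests

# Call LM Studio's OpenAI-compatible local server (default port 1234).
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "agentdog-fg-llama3.1-8b",  # placeholder; use the name shown in LM Studio
        "messages": [
            {"role": "user", "content": "Review this session-handling code for security issues."}
        ],
        "temperature": 0.3,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```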
For security analysis tasks, adjust parameters based on your use case:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/AgentDoG-FG-Llama3.1-8B.i1-Q4_K_M.gguf",
    n_ctx=4096,       # Context window
    n_threads=8,      # CPU threads
    n_gpu_layers=35,  # Offload layers to GPU if available
)

# Example untrusted input for the model to analyze.
user_input = "1 OR 1=1"

response = llm(
    "Analyze this SQL query for injection vulnerabilities: "
    f"SELECT * FROM users WHERE id = {user_input}",
    max_tokens=512,
    temperature=0.3,  # Lower for more focused security analysis
    top_p=0.9,
)
print(response["choices"][0]["text"])
```
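For multi-turn or role-based prompting, `llama-cpp-python` also exposes `create_chat_completion`, which applies the chat template embedded in the GGUF. A minimal sketch reusing the `llm` instance above; the system prompt is an illustrative suggestion, not part of the model card:
```python
# Reuses the `llm` instance from the block above.
chat = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            # Illustrative system prompt; adjust to your workflow.
            "content": "You are a security analyst. Rate findings by severity "
                       "and suggest remediations.",
        },
        {
            "role": "user",
            "content": "Review this code: eval(request.args['expr'])",
        },
    ],
    max_tokens=512,
    temperature=0.3,
)
print(chat["choices"][0]["message"]["content"])
```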
**Code Vulnerability Analysis:**
```
Analyze the following code for security vulnerabilities, including but not limited to injection attacks, authentication flaws, and data exposure:
[CODE HERE]
Provide a detailed assessment with severity ratings and remediation recommendations.
```
**Safety Assessment:**
```
Evaluate the safety implications of the following action in an autonomous agent context:
Action: [DESCRIPTION]
Context: [RELEVANT CONTEXT]
Consider potential risks, unintended consequences, and safety constraints.
```
**Threat Modeling:**
```
Perform threat modeling for the following system:
Architecture: [DESCRIPTION]
Components: [LIST]
Data Flow: [DESCRIPTION]
Identify potential attack vectors and security controls needed.
```
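These templates drop naturally into a small wrapper; a minimal sketch for the vulnerability-analysis template (the helper and its names are our own, not part of the model):
```python
# Hypothetical helper wrapping the vulnerability-analysis template above.
VULN_TEMPLATE = (
    "Analyze the following code for security vulnerabilities, including but "
    "not limited to injection attacks, authentication flaws, and data exposure:\n\n"
    "{code}\n\n"
    "Provide a detailed assessment with severity ratings and remediation "
    "recommendations."
)

def analyze_code(llm, code: str) -> str:
    """Run the template through a llama_cpp Llama instance and return the text."""
    out = llm(VULN_TEMPLATE.format(code=code), max_tokens=512, temperature=0.3)
    return out["choices"][0]["text"]
```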
**Input:**
```
Review this database query function for security issues:

def get_user(username):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return db.execute(query)
```
**Expected Output:**
The model should identify the SQL injection vulnerability and recommend parameterized queries.
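For reference, the remediation the model should recommend looks roughly like this (assuming a DB-API-style driver, as the snippet implies; placeholder syntax such as `?` vs. `%s` varies by driver):
```python
def get_user(username):
    # Parameterized query: the driver escapes `username`, preventing injection.
    query = "SELECT * FROM users WHERE username = ?"
    return db.execute(query, (username,))
```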
**Input:**
```
Assess the security of this authentication implementation:

if user_password == stored_password:
    create_session(user)
```
**Expected Output:**
The model should flag plaintext password comparison, timing attacks, and recommend proper hashing with constant-time comparison.
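A fix along the lines the model should suggest, using only the Python standard library (the iteration count is illustrative):
```python
import hashlib
import hmac

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; iteration count is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # compare_digest runs in constant time, defeating timing attacks.
    return hmac.compare_digest(hash_password(password, salt), stored_hash)
```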
**Input:**
```
An autonomous agent wants to execute: rm -rf /tmp/cache/*
Context: Cleaning temporary files as part of routine maintenance.
Is this action safe?
```
**Expected Output:**
The model should analyze the risk, consider context, and provide safety recommendations including validation of the path, verification of permissions, and fallback strategies.
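One concrete form such path validation could take is a pre-execution guard; a minimal sketch (`Path.is_relative_to` requires Python 3.9+):
```python
from pathlib import Path

ALLOWED_ROOT = Path("/tmp/cache")

def safe_to_delete(target: str) -> bool:
    """Illustrative pre-execution guard for the agent's delete action."""
    resolved = Path(target).resolve()  # follow symlinks before checking
    # Only allow deletion strictly inside the approved cache directory.
    return resolved.is_relative_to(ALLOWED_ROOT) and resolved != ALLOWED_ROOT
```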
**Fine-tuned from:** AI45Research/AgentDoG-FG-Llama3.1-8B
**Base architecture:** Meta Llama 3.1 8B
**License:** Apache 2.0
**Quantized by:** mradermacher
**Original model:** AI45Research
**Quantization infrastructure:** nethype GmbH and @nicoboss