Quantized GGUF build of Meta's Llama-3 8B, fine-tuned by Groq for function calling and tool use. Structured function calls let AI agents interact with external tools and APIs. Available in multiple quantization levels for different hardware requirements.

This skill provides access to multiple quantization levels of the Llama-3-Groq-8B-Tool-Use model, letting you balance model quality, inference speed, and hardware requirements. The model is based on Meta's Llama-3 8B architecture and has been specifically trained for function calling and tool-use scenarios.
Choose a quantization based on your hardware and quality requirements:
Typical quantization levels offered in this repository are listed below; file sizes are approximate for an 8B model, so check the HuggingFace file listing for the exact set.

**Recommended Options:**
- `i1-Q4_K_M` (~4.9 GB) — good quality/size balance; the default choice for most systems
- `i1-Q5_K_M` (~5.7 GB) — higher quality, moderately larger
- `i1-Q6_K` (~6.6 GB) — near-original quality

**Lower Quality (for resource-constrained systems):**
- `i1-Q3_K_M` (~4.0 GB) — usable quality at a smaller footprint
- `i1-Q2_K` (~3.2 GB) — noticeable quality loss

**Minimal Options (desperate situations only):**
- `i1-IQ1_M` / `i1-IQ1_S` (~2 GB) — heavily degraded; only when nothing else fits
Download your chosen quantization from the HuggingFace repository:
```bash
wget https://huggingface.co/mradermacher/Llama-3-Groq-8B-Tool-Use-i1-GGUF/resolve/main/Llama-3-Groq-8B-Tool-Use.i1-Q4_K_M.gguf
```
Run the model with llama.cpp:

```bash
./llama-cli -m Llama-3-Groq-8B-Tool-Use.i1-Q4_K_M.gguf \
  -p "You are a helpful assistant with access to tools." \
  --temp 0.7 \
  -n 512
```
Create a Modelfile:
```dockerfile
FROM ./Llama-3-Groq-8B-Tool-Use.i1-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.9
SYSTEM "You are a helpful assistant capable of using tools and functions to help users accomplish tasks."
```
Import and run:
```bash
ollama create llama3-tool-use -f Modelfile
ollama run llama3-tool-use
```
Structure your prompts to enable tool use: describe the available tools before the user turn. A minimal example (the `get_weather` function here is hypothetical):

```
Available tools:
- get_weather(location: string): returns the current weather for a location

User: What's the weather in San Francisco?
```

The model is trained to respond with a structured function call (e.g. a JSON object naming the tool and its arguments) rather than free-form text when a tool is needed.