Fine-tuned Llama 3 70B model optimized for function calling and tool use. Based on Meta-Llama-3-70B-Instruct with enhanced capabilities for structured API interactions and function invocation.
A specialized version of Meta's Llama 3 70B Instruct model fine-tuned on the Trelis function calling dataset. This model excels at understanding when and how to call functions, making it ideal for building AI agents that interact with APIs and tools.
This is a quantized GGUF version of the Meta-Llama-3-70B-Instruct model that has been specifically trained for function calling scenarios. It can decide when a function should be invoked, emit structured JSON calls that match a provided schema, and answer directly when no tool applies.
Select a quantization level based on your hardware constraints:
**Recommended Options:**
- **Q4_K_M** (~42 GB): good quality-to-size balance; a sensible default for most setups
- **Q6_K** (~58 GB): near-original quality; shipped as a multi-part file
- **Q8_0** (~75 GB): highest-quality quantization; shipped as a multi-part file

**Memory-Constrained Options:**
- **Q3_K_S** (~31 GB): fits in less RAM with noticeable quality loss
- **Q2_K** (~26 GB): smallest option; significant quality loss

File sizes are approximate; check the repository for the exact list of quantizations offered.
Download from: `https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-function-calling-GGUF`
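Individual GGUF files can also be fetched with the `huggingface-cli` tool from the `huggingface_hub` package; the filename below matches the Q4_K_M file used in the examples later in this page, but check the repository file list for the exact names:

```shell
pip install -U huggingface_hub
huggingface-cli download \
  mradermacher/Meta-Llama-3-70B-Instruct-function-calling-GGUF \
  Meta-Llama-3-70B-Instruct-function-calling.Q4_K_M.gguf \
  --local-dir .
```

`--local-dir .` places the file in the current directory instead of the Hugging Face cache.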
For multi-part files (Q6_K, Q8_0), concatenate parts:
```bash
cat Meta-Llama-3-70B-Instruct-function-calling.Q8_0.gguf.part* > Meta-Llama-3-70B-Instruct-function-calling.Q8_0.gguf
```
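Glob expansion is what makes the one-liner above safe: the shell sorts `part*` lexically, so parts are appended in order. A tiny stand-in demonstrates the pattern:

```shell
# Demo with small stand-in files; the same pattern applies to the real .part files.
printf 'AAA' > demo.gguf.part1
printf 'BBB' > demo.gguf.part2
# part1 sorts before part2, so the pieces concatenate in the right order
cat demo.gguf.part* > demo.gguf
cat demo.gguf   # AAABBB
```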
Use a local inference server like llama.cpp, Ollama, or LM Studio:
**With llama.cpp:**
```bash
# the server binary is named ./server in older llama.cpp builds, llama-server in recent ones
./llama-server -m Meta-Llama-3-70B-Instruct-function-calling.Q4_K_M.gguf \
--host 0.0.0.0 --port 8080 \
--ctx-size 8192 --n-gpu-layers 35
```
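Once the server is up, you can exercise it over HTTP. llama.cpp's server exposes a `/completion` endpoint that accepts a JSON body; the payload below is a minimal sketch (the prompt text and sampling values are illustrative):

```shell
# Build the request body; temperature is kept low for more deterministic tool calls.
cat > request.json <<'EOF'
{
  "prompt": "You have access to get_weather(location). User: What's the weather in Paris?",
  "n_predict": 128,
  "temperature": 0.2
}
EOF
# POST it to the server started above; prints a note if nothing is listening.
curl -s --max-time 10 http://localhost:8080/completion \
  -H 'Content-Type: application/json' -d @request.json \
  || echo "server not reachable on :8080"
```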
**With Ollama:**
```bash
# Create a Modelfile pointing at the local GGUF, then build and run the model
echo 'FROM ./Meta-Llama-3-70B-Instruct-function-calling.Q4_K_M.gguf' > Modelfile
ollama create llama3-function-calling -f Modelfile
ollama run llama3-function-calling
```
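A Modelfile can also bake in a system prompt and sampling defaults so every session starts configured for tool use. `FROM`, `PARAMETER`, and `SYSTEM` are standard Modelfile directives; the system prompt text here is just an illustration:

```shell
# Write a Modelfile with a default temperature and a tool-use system prompt
cat > Modelfile <<'EOF'
FROM ./Meta-Llama-3-70B-Instruct-function-calling.Q4_K_M.gguf
PARAMETER temperature 0.2
SYSTEM You are a function-calling assistant. When a tool applies, reply with a single JSON function call and nothing else.
EOF
```

Then rebuild with `ollama create llama3-function-calling -f Modelfile` as above.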
Structure your prompts with clear function definitions:
```
You have access to the following functions:

{
  "name": "get_weather",
  "description": "Get current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {"type": "string", "description": "City name"},
      "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
    },
    "required": ["location"]
  }
}

User: What's the weather in Paris?
```
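A model fine-tuned for function calling typically answers a prompt like this with a JSON object naming the function and its arguments. Assuming a response in that shape (the exact schema depends on the fine-tune), the call can be extracted with `jq`:

```shell
# Hypothetical model output; the actual response format may differ per fine-tune.
cat > response.json <<'EOF'
{"name": "get_weather", "arguments": {"location": "Paris", "unit": "celsius"}}
EOF
jq -r '.name' response.json       # which function to dispatch
jq -c '.arguments' response.json  # arguments to pass to it
```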