Function calling and tool use with Llama 3 8B, optimized for API interactions, structured data manipulation, and complex tool orchestration. Achieves 89.06% accuracy on the Berkeley Function Calling Leaderboard.
A specialized 8B parameter Llama 3 model fine-tuned for advanced tool use and function calling. Optimized through full fine-tuning and Direct Preference Optimization (DPO), achieving state-of-the-art performance for open-source 8B models on function calling tasks.
This skill enables AI agents to call external functions and tools with high accuracy using a quantized version of Groq's Llama 3 Tool Use model.
**Performance:** 89.06% overall accuracy on the Berkeley Function Calling Leaderboard (BFCL)
Download the quantized GGUF model from HuggingFace:
Load the model using your preferred GGUF-compatible runtime (llama.cpp, Ollama, LM Studio, etc.).
**IMPORTANT:** This model is sensitive to sampling parameters. Start with these recommended values:
```
temperature: 0.5
top_p: 0.65
```
Adjust these values up or down for your specific use case: lower temperatures make structured function calls more deterministic, while higher values allow more varied free-form replies.
Use the Llama 3 chat template with XML-tagged tool definitions:
```
<|start_header_id|>system<|end_header_id|>
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>
Here are the available tools:
<tools>
[INSERT YOUR FUNCTION DEFINITIONS HERE AS JSON SCHEMA]
</tools><|eot_id|><|start_header_id|>user<|end_header_id|>
[USER QUERY]<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
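The template above can be assembled programmatically. Below is a minimal Python sketch; the helper name `build_prompt` and the example tool are illustrative, not part of any official API.

```python
import json

# Llama 3 special tokens used by the chat template.
HDR_START = "<|start_header_id|>"
HDR_END = "<|end_header_id|>"
EOT = "<|eot_id|>"

SYSTEM_INSTRUCTIONS = (
    "You are a function calling AI model. You are provided with function "
    "signatures within <tools></tools> XML tags. You may call one or more "
    "functions to assist with the user query. Don't make assumptions about "
    "what values to plug into functions. For each function call return a json "
    "object with function name and arguments within <tool_call></tool_call> "
    "XML tags as follows:\n<tool_call>\n"
    '{"name": <function-name>,"arguments": <args-dict>}\n</tool_call>'
)

def build_prompt(tools: list[dict], user_query: str) -> str:
    """Render system turn with tool definitions, user turn, and assistant header."""
    tool_block = "\n".join(json.dumps(t) for t in tools)
    return (
        f"{HDR_START}system{HDR_END}\n{SYSTEM_INSTRUCTIONS}\n"
        f"Here are the available tools:\n<tools>\n{tool_block}\n</tools>{EOT}"
        f"{HDR_START}user{HDR_END}\n{user_query}{EOT}"
        f"{HDR_START}assistant{HDR_END}\n"
    )

# Hypothetical tool definition for illustration.
weather_tool = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA",
            }
        },
        "required": ["location"],
    },
}

prompt = build_prompt([weather_tool], "What's the weather in San Francisco?")
```

The resulting string ends with the assistant header, so generation begins at the assistant turn.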
Define each function using JSON Schema format:
```json
{
  "name": "function_name",
  "description": "Clear description of what the function does",
  "parameters": {
    "type": "object",
    "properties": {
      "param_name": {
        "type": "string",
        "description": "Parameter description"
      }
    },
    "required": ["param_name"]
  }
}
```
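Malformed tool definitions silently degrade call accuracy, so it can help to sanity-check each definition before inserting it into the prompt. A small stdlib-only sketch (the helper name `validate_tool_definition` is an assumption, not a published API):

```python
def validate_tool_definition(tool: dict) -> list[str]:
    """Return a list of problems; an empty list means the definition looks well-formed."""
    problems = []
    for key in ("name", "description", "parameters"):
        if key not in tool:
            problems.append(f"missing top-level key: {key}")
    params = tool.get("parameters", {})
    if params.get("type") != "object":
        problems.append('parameters.type should be "object"')
    properties = params.get("properties", {})
    for required in params.get("required", []):
        if required not in properties:
            problems.append(f"required parameter not in properties: {required}")
    return problems

example = {
    "name": "function_name",
    "description": "Clear description of what the function does",
    "parameters": {
        "type": "object",
        "properties": {
            "param_name": {"type": "string", "description": "Parameter description"}
        },
        "required": ["param_name"],
    },
}
```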
Extract function calls from the model's response:
1. Look for `<tool_call>` XML tags
2. Parse the JSON object inside
3. Extract `name` and `arguments` fields
4. Execute the corresponding function
5. Return results in `<tool_response>` format for multi-turn interactions
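Steps 1–3 above can be sketched with a regex plus `json.loads`; the function name `extract_tool_calls` is illustrative:

```python
import json
import re

# Non-greedy match anchored by the closing tag, so nested braces are captured.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(response: str) -> list[dict]:
    """Return every parsed tool-call object found in the model's response."""
    calls = []
    for match in TOOL_CALL_RE.finditer(response):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            continue  # skip malformed JSON rather than crashing the agent loop
    return calls

sample = (
    "<tool_call>\n"
    '{"id":"call_deok","name":"get_current_weather",'
    '"arguments":{"location":"San Francisco","unit":"celsius"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(sample)
```

After parsing, dispatch on `name`, pass `arguments` to the matching function, and move to step 5.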
To return tool results to the model, use the `tool` role:
```
<|start_header_id|>tool<|end_header_id|>
<tool_response>
{"id":"call_id","result":{...}}
</tool_response><|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
The model will continue the conversation using the tool results.
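Wrapping a result in that tool turn is mechanical; a minimal sketch, assuming the `format_tool_response` helper name and the example payload (both illustrative):

```python
import json

def format_tool_response(call_id: str, result: dict) -> str:
    """Wrap a tool result in the tool-role turn expected by the chat template."""
    payload = json.dumps({"id": call_id, "result": result})
    return (
        "<|start_header_id|>tool<|end_header_id|>\n"
        f"<tool_response>\n{payload}\n</tool_response><|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n"
    )

# Hypothetical weather result fed back for the example call.
turn = format_tool_response("call_deok", {"temperature": 18, "unit": "celsius"})
```

Append this string to the running prompt and resume generation; the model then answers using the tool result.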
**User Query:** "What's the weather in San Francisco?"
**System Prompt with Tools:**
```
<tools>
{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["location"]
  }
}
</tools>
```
**Model Output:**
```xml
<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call>
```