A specialized AI model for advanced tool use and function calling tasks, fine-tuned from Llama 3 8B with 89.06% accuracy on the Berkeley Function Calling Leaderboard.
Use the Groq/Llama-3-Groq-8B-Tool-Use model for advanced function calling and tool use tasks with industry-leading accuracy among open-source 8B models.
This skill configures your AI agent to leverage the Llama-3-Groq-8B-Tool-Use model, which excels at structured function calling, parallel tool invocation, and multi-turn tool use.
The model achieved 89.06% overall accuracy on the Berkeley Function Calling Leaderboard (BFCL), the best performance among all open-source 8B language models.
When a user requests to use the Groq Llama-3 8B Tool Use model:
1. **Verify Access**
- Confirm the user has access to the Groq API (https://console.groq.com)
- Request their Groq API key if not already configured
- Note: This model is also available via Hugging Face
2. **Configure the Model**
- Set model identifier: `groq/llama-3-groq-8b-tool-use`
- Configure recommended sampling parameters:
- `temperature`: Start at 0.5 (adjust 0.3-0.7 based on creativity needs)
- `top_p`: Start at 0.65 (adjust 0.5-0.8 based on output diversity needs)
- Note: This model is particularly sensitive to temperature and top_p settings
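The configuration above can be sketched as a request payload for Groq's OpenAI-compatible chat completions endpoint. This is a minimal sketch: the `build_request` helper name is illustrative, and only the model identifier and sampling values come from the steps above.

```python
# Sketch of a chat-completion request payload for the tool-use model.
# The model ID and default sampling values follow the recommendations above;
# send this payload to Groq's OpenAI-compatible chat completions endpoint.
def build_request(messages, temperature=0.5, top_p=0.65):
    """Assemble a chat completion payload for the tool-use model."""
    return {
        "model": "groq/llama-3-groq-8b-tool-use",
        "messages": messages,
        "temperature": temperature,  # 0.3-0.7 depending on precision vs. creativity
        "top_p": top_p,              # 0.5-0.8 depending on desired output diversity
    }
```

Because the model is sensitive to sampling settings, keeping them as explicit parameters (rather than hardcoding) makes per-use-case tuning easier.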
3. **Format Tool Definitions**
- Provide function signatures within `<tools></tools>` XML tags
- Use JSON schema format with:
- `name`: Function name
- `description`: Clear description of what the function does
- `parameters`: Object with `properties`, `required`, and `type` fields
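A tool definition in the JSON schema format above might look like the following. The `get_current_weather` function is a hypothetical example, not part of any real API:

```python
import json

# A hypothetical weather tool in the JSON-schema style described above.
get_weather_tool = {
    "name": "get_current_weather",
    "description": "Get the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. San Francisco"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def render_tools(tools):
    """Serialize tool definitions inside <tools></tools> XML tags."""
    return "<tools> " + json.dumps(tools) + " </tools>"
```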
4. **Structure System Prompt**
Use this exact format:
```
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>
Here are the available tools:
<tools> [INSERT TOOL DEFINITIONS HERE] </tools>
```
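A helper can splice serialized tool definitions into the system prompt template above. This sketch uses plain string concatenation rather than `str.format`, since the template's literal JSON braces would otherwise be misread as format fields:

```python
import json

# The recommended system prompt, verbatim, minus the tool definitions slot.
SYSTEM_PREFIX = (
    "You are a function calling AI model. You are provided with function "
    "signatures within <tools></tools> XML tags. You may call one or more "
    "functions to assist with the user query. Don't make assumptions about "
    "what values to plug into functions. For each function call return a "
    "json object with function name and arguments within "
    "<tool_call></tool_call> XML tags as follows:\n"
    "<tool_call>\n"
    '{"name": <function-name>,"arguments": <args-dict>}\n'
    "</tool_call>\n"
    "Here are the available tools:\n"
)

def build_system_prompt(tools):
    """Append serialized tool definitions to the recommended system prompt."""
    return SYSTEM_PREFIX + "<tools> " + json.dumps(tools) + " </tools>"
```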
5. **Handle Model Responses**
- Parse `<tool_call>` XML tags containing JSON function calls
- Extract function name and arguments from the JSON object
- Execute the called function with provided arguments
- Return results in `<tool_response>` XML tags with format:
```
<tool_response>
{"id":"call_[id]","result":{[result_data]}}
</tool_response>
```
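The parse-and-respond loop above can be sketched with a regular expression and `json`. The call-ID scheme here is illustrative; the source leaves the exact ID format unspecified:

```python
import json
import re

# Matches the JSON object inside each <tool_call>...</tool_call> pair.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(model_output):
    """Extract {"name": ..., "arguments": ...} objects from <tool_call> tags."""
    return [json.loads(raw) for raw in TOOL_CALL_RE.findall(model_output)]

def format_tool_response(call_id, result):
    """Wrap a tool result in <tool_response> tags for the next model turn."""
    body = json.dumps({"id": call_id, "result": result})
    return f"<tool_response>\n{body}\n</tool_response>"
```

After executing each parsed call, feed the `<tool_response>` string back to the model as a tool-role message so it can compose the final answer.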
6. **Process Multi-Turn Conversations**
- Use Llama 3 chat format with header IDs:
- `<|start_header_id|>system<|end_header_id|>` for system instructions
- `<|start_header_id|>user<|end_header_id|>` for user messages
- `<|start_header_id|>assistant<|end_header_id|>` for model responses
- `<|start_header_id|>tool<|end_header_id|>` for tool results
- End each turn with `<|eot_id|>` (end of turn token)
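Rendering a message list into this raw chat format can be sketched as follows. The leading `<|begin_of_text|>` token and the trailing assistant header (which cues the model to generate its turn) are standard Llama 3 conventions assumed here, not spelled out above:

```python
def render_llama3_chat(messages):
    """Render a list of {"role", "content"} messages in Llama 3 chat format."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn: header with role ID, content, then the end-of-turn token.
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

Note that Groq's hosted chat completions API applies this template server-side; manual rendering like this only matters when running the model yourself (e.g. via the Hugging Face weights).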
7. **Optimize for Use Case**
- For precise API calls: Lower temperature (0.3-0.4)
- For creative tool orchestration: Higher temperature (0.6-0.7)
- Monitor function calling accuracy and adjust sampling as needed
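The tuning guidance above can be captured as named presets. The exact values within each recommended range are a judgment call for your workload; these are illustrative defaults:

```python
# Illustrative sampling presets derived from the guidance above.
SAMPLING_PRESETS = {
    "precise_api_calls": {"temperature": 0.3, "top_p": 0.5},
    "balanced": {"temperature": 0.5, "top_p": 0.65},
    "creative_orchestration": {"temperature": 0.7, "top_p": 0.8},
}

def sampling_for(use_case):
    """Return sampling parameters for a named use case, defaulting to balanced."""
    return SAMPLING_PRESETS.get(use_case, SAMPLING_PRESETS["balanced"])
```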
8. **Set Appropriate Expectations**
- Inform users this model is optimized for tool use and function calling
- For general knowledge or open-ended tasks, recommend a general-purpose model
- Note that the model may occasionally produce inaccurate outputs
- Advise implementing application-specific safety measures
**Example Workflow**
**User Request:** "Set up Groq's tool use model to check weather in multiple cities"
**Agent Actions:**
1. Configure model with `groq/llama-3-groq-8b-tool-use`
2. Set `temperature=0.5`, `top_p=0.65`
3. Create system prompt with weather function definition in XML tags
4. Process user query: "What's the weather in San Francisco and New York?"
5. Parse model's `<tool_call>` outputs for both cities
6. Execute weather API calls
7. Return results in `<tool_response>` format
8. Let model synthesize final answer from tool results
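Steps 5–7 of the workflow above can be sketched end to end. Here the Groq API call is replaced by a canned model output and the weather service is stubbed; only the parsing and dispatch logic is real, and `fake_weather_api` is a hypothetical stand-in:

```python
import json
import re

def fake_weather_api(city):
    """Stand-in for a real weather service."""
    temps = {"San Francisco": 16, "New York": 24}
    return {"city": city, "temp_c": temps.get(city)}

def run_tool_calls(model_output):
    """Parse <tool_call> tags, dispatch each call, and build <tool_response> strings."""
    calls = re.findall(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", model_output, re.DOTALL)
    responses = []
    for i, raw in enumerate(calls, start=1):
        call = json.loads(raw)
        if call["name"] == "get_current_weather":
            result = fake_weather_api(call["arguments"]["city"])
            body = json.dumps({"id": f"call_{i}", "result": result})
            responses.append(f"<tool_response>\n{body}\n</tool_response>")
    return responses

# Canned model output standing in for a real completion (step 5).
model_output = (
    '<tool_call>\n{"name": "get_current_weather", "arguments": {"city": "San Francisco"}}\n</tool_call>\n'
    '<tool_call>\n{"name": "get_current_weather", "arguments": {"city": "New York"}}\n</tool_call>'
)
responses = run_tool_calls(model_output)
```

In a live integration, `responses` would be appended to the conversation as tool-role messages and the model called again to synthesize the final answer (step 8).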