A specialized fine-tune of Llama 3 8B optimized for function calling and tool use scenarios. This model achieves 89.06% overall accuracy on the Berkeley Function Calling Leaderboard (BFCL) and excels at API interactions, structured data manipulation, and complex tool execution tasks.
This model is designed to intelligently interact with tools and functions: it emits structured function calls, which your application executes and feeds back to the model.
Configure the model with the following system prompt structure to enable function calling:
```
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>,"arguments": <args-dict>}
</tool_call>
Here are the available tools:
<tools>
[INSERT YOUR FUNCTION DEFINITIONS HERE IN JSON SCHEMA FORMAT]
</tools>
```
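The system prompt can be assembled programmatically. Below is a minimal Python sketch: the template text mirrors the prompt above verbatim, while `build_system_prompt` is a hypothetical helper, not part of any official SDK.

```python
import json

# Template copied from the required system prompt; {{ }} escapes literal braces
# so only {tools} is substituted.
SYSTEM_TEMPLATE = (
    "You are a function calling AI model. You are provided with function "
    "signatures within <tools></tools> XML tags. You may call one or more "
    "functions to assist with the user query. Don't make assumptions about "
    "what values to plug into functions. For each function call return a json "
    "object with function name and arguments within <tool_call></tool_call> "
    "XML tags as follows:\n<tool_call>\n"
    '{{"name": <function-name>,"arguments": <args-dict>}}\n'
    "</tool_call>\nHere are the available tools:\n<tools>\n{tools}\n</tools>"
)

def build_system_prompt(tools: list[dict]) -> str:
    """Serialize each tool definition to JSON and splice it into the template."""
    return SYSTEM_TEMPLATE.format(tools="\n".join(json.dumps(t) for t in tools))
```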
Define your functions using JSON schema within `<tools></tools>` tags:
```json
{
  "name": "function_name",
  "description": "Clear description of what the function does",
  "parameters": {
    "type": "object",
    "properties": {
      "param_name": {
        "type": "string",
        "description": "Parameter description"
      }
    },
    "required": ["param_name"]
  }
}
```
The model follows this interaction pattern:
1. **User Query**: User asks a question requiring tool use
2. **Function Call**: Model responds with `<tool_call>` containing JSON
3. **Tool Response**: Your system executes the function and returns results in `<tool_response>` tags
4. **Final Answer**: Model processes the tool response and provides the final answer
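The four steps above can be sketched as a driver loop. Here `call_model` and `execute_tool` are hypothetical stand-ins for your model endpoint and your tool dispatcher:

```python
import json
import re

# Extract the JSON payload from a <tool_call> block in the model's output.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def run_turn(call_model, execute_tool, messages):
    """One user turn: forward tool calls to the executor until the model
    produces a plain-text final answer."""
    while True:
        reply = call_model(messages)                 # step 2: model output
        match = TOOL_CALL_RE.search(reply)
        if not match:
            return reply                             # step 4: final answer
        call = json.loads(match.group(1))
        result = execute_tool(call["name"], call["arguments"])  # step 3
        messages.append({"role": "assistant", "content": reply})
        messages.append({
            "role": "tool",
            "content": "<tool_response>\n"
                       + json.dumps({"id": call.get("id"), "result": result})
                       + "\n</tool_response>",
        })
```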
Use the **Llama 3** prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
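If you format raw prompts yourself rather than calling a chat-completions API (which applies this template for you), the layout above can be filled in like this (a sketch mirroring the template as shown):

```python
def format_llama3_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a raw Llama 3 prompt from the special tokens shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )
```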
When the model generates a function call, execute it and return results using this format:
```xml
<tool_response>
{"id":"call_id","result":{YOUR_RESULT_DATA}}
</tool_response>
```
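A small helper for producing this envelope (hypothetical, and assuming your result data is JSON-serializable):

```python
import json

def make_tool_response(call_id: str, result) -> str:
    """Wrap an execution result in the <tool_response> envelope above."""
    payload = json.dumps({"id": call_id, "result": result})
    return f"<tool_response>\n{payload}\n</tool_response>"
```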
**System Setup:**
```xml
<tools>
{
  "name": "get_current_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"]
      }
    },
    "required": ["location"]
  }
}
</tools>
```
**User:** "What is the weather like in San Francisco?"
**Model Output:**
```xml
<tool_call>
{"id":"call_deok","name":"get_current_weather","arguments":{"location":"San Francisco","unit":"celsius"}}
</tool_call>
```
**Tool Response:**
```xml
<tool_response>
{"id":"call_deok","result":{"temperature":"72","unit":"celsius"}}
</tool_response>
```
**Model:** "The current weather in San Francisco is 72 degrees Celsius."
1. **Clear Function Descriptions**: Provide detailed descriptions for functions and parameters to help the model understand when to use each tool
2. **Required Parameters**: Always specify which parameters are required in your function definitions
3. **No Assumptions**: The model is trained to NOT make assumptions about parameter values - it will ask for clarification if needed
4. **Multiple Tools**: The model can call multiple functions in sequence or parallel when needed
5. **Error Handling**: Include error states in your tool responses so the model can handle failures gracefully
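For point 5, a failed execution can be reported in the same envelope, e.g. with an `error` field in place of `result` (the field name is a convention of this sketch, not mandated by the model):

```python
import json

def tool_response_for(call_id, fn, args):
    """Run a tool and wrap either its result or its error message, so the
    model sees failures instead of a silently missing response."""
    try:
        body = {"id": call_id, "result": fn(**args)}
    except Exception as exc:
        body = {"id": call_id, "error": str(exc)}
    return f"<tool_response>\n{json.dumps(body)}\n</tool_response>"
```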