Fine-tuned Gemma 2 9B model for function calling with human-annotated data. Generates structured function calls from natural language requests using Python docstrings for tool definitions.
A skill for implementing function calling patterns using the DiTy/gemma-2-9b-it-function-calling model. This model is fine-tuned specifically for generating structured function calls from natural language, making it ideal for building AI agents that need to invoke external tools and APIs.
This skill guides you through implementing function calling capabilities using a specialized Gemma 2 9B model. The model translates natural-language requests into structured function calls and can incorporate function results into follow-up responses.
Before implementing, identify which functions the model should be able to call and which parameters each one requires.
Create Python functions with detailed docstrings following this pattern:
```python
def function_name(param: type) -> return_type:
    """
    Clear description of what the function does.

    Args:
        param: Description of the parameter.
    """
    # Implementation
    pass
```
**Requirements:**
Install dependencies and load the model:
```bash
pip install -U transformers torch
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Note: the GGUF repo hosts quantized files for llama.cpp-style runtimes;
# for transformers, load the standard model repository.
model = AutoModelForCausalLM.from_pretrained(
    "DiTy/gemma-2-9b-it-function-calling",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(
    "DiTy/gemma-2-9b-it-function-calling"
)
```
Build the conversation history using the roles `system`, `user`, `function-call`, and `function-response`.
Use `apply_chat_template` with the `tools` parameter:
```python
history_messages = [
    {"role": "system", "content": "You are a helpful assistant with access to the following functions. Use them if required - "},
    {"role": "user", "content": "User request here"},
]

inputs = tokenizer.apply_chat_template(
    history_messages,
    tokenize=False,
    add_generation_prompt=True,
    tools=[func1, func2],  # Pass function objects, not strings
)

terminator_ids = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<end_of_turn>"),
]

prompt_ids = tokenizer.encode(inputs, add_special_tokens=False, return_tensors="pt").to(model.device)

generated_ids = model.generate(
    prompt_ids,
    max_new_tokens=512,
    eos_token_id=terminator_ids,
    bos_token_id=tokenizer.bos_token_id,
)

response = tokenizer.decode(generated_ids[0][prompt_ids.shape[-1]:], skip_special_tokens=False)
```
**Important:** Use `add_special_tokens=False` when tokenizing after `apply_chat_template`.
When the model generates a function call:
1. Extract JSON from response (format: `Function call: {...}<end_of_turn>`)
2. Parse the function name and arguments
3. Execute the corresponding Python function
4. Format the result as JSON for `function-response` role
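Steps 1 and 2 can be sketched as a small helper, assuming the documented `Function call: {...}<end_of_turn>` output format (the `extract_function_call` name is illustrative, not part of the model's API):

```python
import json

def extract_function_call(response: str):
    """Parse a "Function call: {...}<end_of_turn>" response into (name, arguments).

    Returns None if the response does not contain a function call.
    """
    marker = "Function call:"
    if marker not in response:
        return None
    # Take everything after the marker and strip the turn terminator
    payload = response.split(marker, 1)[1].replace("<end_of_turn>", "").strip()
    call = json.loads(payload)
    return call["name"], call.get("arguments", {})

# Example with a response shaped like the documented format:
parsed = extract_function_call(
    'Function call: {"name": "get_weather", "arguments": {"city": "London"}}<end_of_turn>'
)
# parsed == ("get_weather", {"city": "London"})
```

Plain-text replies (no `Function call:` prefix) fall through and return `None`, so the same helper can decide whether a turn needs tool execution at all.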
Add function results back to conversation history:
```python
history_messages.append({"role": "function-call", "content": '{"name": "func_name", "arguments": {...}}'})
history_messages.append({"role": "function-response", "content": '{"result_key": result_value}'})
```
Then generate the next model response using the updated history.
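A minimal sketch of the execute-and-append step, assuming a simple `available_tools` registry that maps function names to callables (the registry and helper name are illustrative):

```python
import json

def run_function_call(name, arguments, available_tools, history_messages):
    """Execute a parsed function call and append both turns to the history."""
    result = available_tools[name](**arguments)
    history_messages.append({
        "role": "function-call",
        "content": json.dumps({"name": name, "arguments": arguments}),
    })
    history_messages.append({
        "role": "function-response",
        "content": json.dumps({"result": result}),
    })
    return result

# Illustrative stub tool:
def get_weather(city: str):
    return "sunny"

history = []
result = run_function_call(
    "get_weather", {"city": "London"}, {"get_weather": get_weather}, history
)
# result == "sunny"; history now holds the function-call and function-response turns
```

After the two turns are appended, re-run the templating and generation code above to produce the model's final answer.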
For complex, multi-turn interactions, repeat this cycle: generate, execute any function call, append the result, and generate again.
**Scenario:** Weather and sunrise information assistant
```python
def get_weather(city: str):
    """
    Returns the current weather in a given city.

    Args:
        city: The city to get the weather for.
    """
    return "sunny"


def get_sunrise_sunset_times(city: str):
    """
    Returns sunrise and sunset times for a given city.

    Args:
        city: The city to get the sunrise and sunset times for.
    """
    return ["6:00 AM", "6:00 PM"]
```
For simpler implementations, use the transformers pipeline:
```python
from transformers import pipeline

generation_pipeline = pipeline(
    "text-generation",
    model="DiTy/gemma-2-9b-it-function-calling",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

outputs = generation_pipeline(
    inputs,
    max_new_tokens=512,
    eos_token_id=terminator_ids,
)
```