Fine-tuned 270M-parameter model for function calling and tool use. Based on Google's FunctionGemma, optimized for simple tool-calling tasks with conversational capabilities.
Use the fine-tuned FunctionGemma 270M model for function-calling and tool-use tasks, including simple tool-calling scenarios and conversational AI applications.
This skill helps you integrate and use the `mkswami01/functiongemma-270m-it-simple-tool-calling` model from HuggingFace. The model is a fine-tuned version of Google's FunctionGemma-270M specifically trained for tool calling and function execution tasks using Supervised Fine-Tuning (SFT) with the TRL library.
When the user requests to use the FunctionGemma tool calling model, follow these steps:
1. **Verify Dependencies**
- Check that the `transformers` library is installed (required version: 4.57.3+)
- Check that `torch` is installed, ideally with CUDA support for GPU acceleration (optional but recommended)
- Check that the `datasets` and `tokenizers` libraries are available
- If missing, ask the user if you should install them with `pip install transformers torch datasets tokenizers`
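The dependency check above can be sketched with the standard-library `importlib` module. This is a minimal sketch: it only tests importability, not the `4.57.3+` version pin, and the package list mirrors the `pip install` command from this step.

```python
import importlib.util

# Packages named in the install step of this skill
REQUIRED = ["transformers", "torch", "datasets", "tokenizers"]

def missing_packages(packages=REQUIRED):
    """Return the subset of packages that cannot be imported."""
    return [pkg for pkg in packages if importlib.util.find_spec(pkg) is None]

missing = missing_packages()
if missing:
    print("Missing packages:", ", ".join(missing))
    print("Install with: pip install " + " ".join(missing))
else:
    print("All required packages are available.")
```

If anything is missing, ask the user before running the suggested `pip install` command rather than installing silently.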
2. **Set Up the Model**
- Import the necessary libraries: `from transformers import pipeline`
- Initialize the text generation pipeline with model: `mkswami01/functiongemma-270m-it-simple-tool-calling`
- Use `device="cuda"` if GPU is available, otherwise `device="cpu"`
- Note: Model size is 270M parameters, relatively lightweight for local execution
3. **Prepare Input Format**
- Format user queries as chat messages: `[{"role": "user", "content": "query here"}]`
- The model expects conversational format with role-based messages
- Support function calling scenarios where the model can identify and structure tool calls
4. **Generate Responses**
- Use `max_new_tokens` parameter to control output length (default: 128)
- Set `return_full_text=False` to get only the generated response
- Extract the generated text from output: `output[0]["generated_text"]`
5. **Handle Tool Calling Output**
- Parse the model's output for structured function calls
- The model is trained to identify when to use tools and format the calls appropriately
- Extract function names, parameters, and arguments from the response
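Parsing the structured call can be sketched as below. The exact output format depends on how this checkpoint was fine-tuned; the sketch assumes the model emits a JSON object with `"name"` and `"arguments"` keys somewhere in its response (a common convention, not confirmed by the model card), so verify against real outputs before relying on it.

```python
import json
import re

def extract_tool_call(text):
    """Best-effort extraction of a JSON tool call from model output.

    Assumes a JSON object with "name" and optional "arguments" keys
    appears in the response; returns None if nothing parseable is found.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(call, dict) and "name" in call:
        return {"name": call["name"], "arguments": call.get("arguments", {})}
    return None

sample = 'Sure. {"name": "add", "arguments": {"a": 15, "b": 27}}'
print(extract_tool_call(sample))  # {'name': 'add', 'arguments': {'a': 15, 'b': 27}}
```

Returning `None` on malformed output lets the caller fall back to treating the response as plain conversational text.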
6. **Error Handling**
- Check for CUDA availability if using GPU
- Handle model loading errors (network issues, missing files)
- Validate input format before passing to the model
- Catch generation errors and provide fallback options
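The error-handling bullets above can be combined into a small wrapper. This is a sketch, not a definitive implementation: the exception types caught are illustrative, and the heavy imports are done lazily inside `load_generator` so the input-validation helper works even where `torch`/`transformers` are not installed.

```python
MODEL_ID = "mkswami01/functiongemma-270m-it-simple-tool-calling"

def load_generator(model_id=MODEL_ID):
    """Load the pipeline, preferring GPU but falling back to CPU."""
    # Imported lazily so validation below works without these libraries
    import torch
    from transformers import pipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    try:
        return pipeline("text-generation", model=model_id, device=device)
    except OSError as err:  # network issues, missing files
        raise RuntimeError(f"Could not load {model_id}: {err}") from err

def generate(generator, messages, max_new_tokens=128):
    """Validate the chat format, then generate with a safe fallback."""
    if not (isinstance(messages, list)
            and all(isinstance(m, dict) and {"role", "content"} <= m.keys()
                    for m in messages)):
        raise ValueError("messages must be a list of {'role', 'content'} dicts")
    try:
        return generator(messages, max_new_tokens=max_new_tokens,
                         return_full_text=False)[0]["generated_text"]
    except Exception as err:
        return f"[generation failed: {err}]"
```

Raising on bad input but returning a tagged fallback string on generation failure keeps the caller's control flow simple while still surfacing both kinds of error.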
```python
import torch
from transformers import pipeline

# Use GPU if available, otherwise fall back to CPU (see step 2)
device = "cuda" if torch.cuda.is_available() else "cpu"

generator = pipeline(
    "text-generation",
    model="mkswami01/functiongemma-270m-it-simple-tool-calling",
    device=device,
)

question = "What tools would I need to analyze a CSV file?"
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```
```python
question = "Calculate the sum of 15 and 27"
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```
```python
conversation = [
    {"role": "user", "content": "I need to process some data"},
    {"role": "assistant", "content": "What kind of data processing do you need?"},
    {"role": "user", "content": "Extract names from a list of emails"},
]
output = generator(conversation, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```