Deploy a specialized 1.2B parameter Liquid Neural Network model fine-tuned for converting natural language queries into structured JSON function calls. Achieves 97% syntax reliability while running efficiently on low-resource hardware.
Deploy a function calling model built on Liquid AI's Liquid Neural Network architecture. At 1.2B parameters, it is competitive with much larger models on function calling tasks while running efficiently on resource-constrained devices.
This skill helps you integrate the LFM2.5-1.2B-Nova-Function-Calling model into your application.
The model achieves 97% JSON syntax reliability and was trained on 15,000 high-complexity examples specifically curated for function calling tasks.
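For orientation, the model emits its structured output as a ChatML assistant turn wrapping JSON in `<tool_call>` tags. The snippet below is an illustrative sketch of that format and how to parse it (the response string is a hypothetical example, not actual model output):

```python
import json

# Illustrative model response (assumed format): JSON wrapped in <tool_call> tags
raw_response = (
    '<tool_call>\n'
    '{"name": "calculate_circle_area", "arguments": {"radius": 5}}\n'
    '</tool_call>'
)

# Extract and parse the JSON payload between the tags
payload = raw_response.split("<tool_call>")[1].split("</tool_call>")[0].strip()
call = json.loads(payload)
print(call["name"], call["arguments"])  # calculate_circle_area {'radius': 5}
```

The examples later in this guide parse exactly this structure.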
First, determine the user's deployment environment to choose the optimal model format:
**Ask the user:** which runtime they prefer (Python/Transformers, llama.cpp, or Ollama) and what hardware is available (GPU, CPU-only, or edge/mobile).
**Decision matrix:** Python application with a GPU → Transformers; CPU-only or native cross-platform integration → llama.cpp with GGUF; quick local serving with minimal setup → Ollama.
Based on the chosen deployment path, install required dependencies:
**For Transformers (Python):**
```bash
pip install unsloth transformers torch accelerate bitsandbytes
```
**For llama.cpp/GGUF:**
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```
**For Ollama:**
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Guide the user to download the appropriate model format:
**Transformers (Full Model):**
```python
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-Full",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```
**GGUF (Standard - Broad Compatibility):**
Download from: `https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF`
**GGUF (Imatrix - Best Quality for Size):**
Download from: `https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF`
Recommend quantization levels: Q4_K_M is a solid default balance of size and quality; Q5_K_M or Q6_K trade a larger file for higher fidelity; Q8_0 is near-lossless; the i1 (imatrix) quants offer the best quality at the smallest file sizes.
Create a function calling implementation using the ChatML format:
**Python/Transformers Example:**
```python
from unsloth import FastLanguageModel
import torch
import json
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-Full",
    max_seq_length=4096,
    dtype=None,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
def call_function(user_query: str, available_tools: list = None) -> dict:
    """
    Convert natural language query to function call.

    Args:
        user_query: User's natural language request
        available_tools: Optional list of tool definitions

    Returns:
        dict: Parsed function call with name and arguments
    """
    # Include tool definitions in a system turn when provided
    system_block = ""
    if available_tools:
        system_block = (
            "<|im_start|>system\nYou have access to the following tools:\n"
            + json.dumps(available_tools)
            + "\n<|im_end|>\n"
        )

    # Build prompt in ChatML format
    prompt = f"""{system_block}<|im_start|>user
{user_query}
<|im_end|>
<|im_start|>assistant
"""

    # Generate (assumes a CUDA device; use "cpu" otherwise)
    inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        temperature=0.1,  # Low temperature for structured output
        top_p=0.95,
        use_cache=True,
    )

    # Extract the assistant turn from the decoded output
    response = tokenizer.batch_decode(outputs)[0]
    assistant_response = response.split("<|im_start|>assistant")[-1].strip()

    # Parse the <tool_call> payload
    if "<tool_call>" in assistant_response:
        tool_json = assistant_response.split("<tool_call>")[1].split("</tool_call>")[0].strip()
        return json.loads(tool_json)
    return {"error": "No function call generated"}
result = call_function("Calculate the area of a circle with radius 5")
print(result)
```
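The split-based parsing above assumes perfectly well-formed tags. A slightly more defensive extractor tolerates surrounding text and missing tags (a sketch; `extract_tool_call` is a hypothetical helper, not part of the model's tooling):

```python
import json
import re
from typing import Optional

# Match the first <tool_call>...</tool_call> block in a raw completion
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_call(text: str) -> Optional[dict]:
    """Pull a tool-call dict out of raw model output, or None on failure."""
    match = TOOL_CALL_RE.search(text)
    candidate = match.group(1) if match else None
    if candidate is None:
        # Fallback: widest {...} span in the text (may still fail to parse)
        start, end = text.find("{"), text.rfind("}")
        if start == -1 or end <= start:
            return None
        candidate = text[start:end + 1]
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None

print(extract_tool_call(
    'noise <tool_call>{"name": "send_email", '
    '"arguments": {"to": "[email protected]"}}</tool_call>'))
```

Returning `None` instead of raising keeps the retry loop in the error-handling section simple.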
**llama.cpp Example:**
```bash
./llama.cpp/build/bin/llama-cli \
  -m ./models/LFM2.5-1.2B-Nova-Function-Calling-Q4_K_M.gguf \
  -e \
  -p "<|im_start|>user\nBook a flight from NYC to LAX on March 15th\n<|im_end|>\n<|im_start|>assistant\n" \
  -n 256 \
  --temp 0.1 \
  --top-p 0.95
```
**Ollama Modelfile:**
```dockerfile
FROM ./LFM2.5-1.2B-Nova-Function-Calling-Q4_K_M.gguf
TEMPLATE """<|im_start|>user
{{ .Prompt }}
<|im_end|>
<|im_start|>assistant
"""
PARAMETER temperature 0.1
PARAMETER top_p 0.95
PARAMETER stop "<|im_end|>"
```
Then create and run:
```bash
ollama create nova-function-calling -f Modelfile
ollama run nova-function-calling "Send an email to [email protected] with subject 'Meeting Reminder'"
```
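For programmatic access, Ollama also exposes a local REST endpoint (`POST http://localhost:11434/api/generate`). A minimal sketch of building the request body for the model created above (the model name comes from the `ollama create` step; actually sending the request requires a running Ollama server):

```python
import json

# Request body for Ollama's /api/generate endpoint;
# "stream": False returns the full completion in one JSON response
payload = {
    "model": "nova-function-calling",
    "prompt": "Send an email to [email protected] with subject 'Meeting Reminder'",
    "stream": False,
    "options": {"temperature": 0.1, "top_p": 0.95},
}
body = json.dumps(payload)
print(body)

# To actually send it (requires a running server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(), headers={"Content-Type": "application/json"})
# completion = json.load(urllib.request.urlopen(req))["response"]
```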
For production use, implement a tool registry system:
```python
TOOL_REGISTRY = {
    "calculate_circle_area": {
        "description": "Calculate the area of a circle given radius",
        "parameters": {
            "radius": {"type": "number", "required": True}
        }
    },
    "send_email": {
        "description": "Send an email to a recipient",
        "parameters": {
            "to": {"type": "string", "required": True},
            "subject": {"type": "string", "required": True},
            "body": {"type": "string", "required": False}
        }
    },
    "book_flight": {
        "description": "Book a flight between cities",
        "parameters": {
            "origin": {"type": "string", "required": True},
            "destination": {"type": "string", "required": True},
            "date": {"type": "string", "required": True}
        }
    }
}

def validate_function_call(func_call: dict) -> bool:
    """Validate generated function call against schema."""
    func_name = func_call.get("name")
    if func_name not in TOOL_REGISTRY:
        return False

    schema = TOOL_REGISTRY[func_name]["parameters"]
    args = func_call.get("arguments", {})

    # Check required parameters
    for param, spec in schema.items():
        if spec.get("required") and param not in args:
            return False
    return True
```
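Once a call validates, it still has to be executed. One way to close the loop is a dispatch table mapping registry names to real functions (a sketch; the handler implementations here are illustrative stubs, not part of the skill):

```python
import math

# Illustrative handlers matching the TOOL_REGISTRY entries above
def calculate_circle_area(radius: float) -> float:
    return math.pi * radius ** 2

def send_email(to: str, subject: str, body: str = "") -> str:
    return f"queued email to {to}: {subject}"  # stub for illustration

DISPATCH = {
    "calculate_circle_area": calculate_circle_area,
    "send_email": send_email,
}

def execute_function_call(func_call: dict):
    """Run a validated function call and return the handler's result."""
    handler = DISPATCH.get(func_call.get("name"))
    if handler is None:
        raise KeyError(f"No handler registered for {func_call.get('name')!r}")
    return handler(**func_call.get("arguments", {}))

print(execute_function_call(
    {"name": "calculate_circle_area", "arguments": {"radius": 5}}))  # ≈ 78.54
```

Unpacking `arguments` with `**` works here because validation has already confirmed the required parameters are present.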
Add robust error handling for production deployment:
```python
import json
from typing import Optional
def safe_function_call(user_query: str, max_retries: int = 3) -> Optional[dict]:
    """
    Generate function call with validation and retry logic.
    """
    for attempt in range(max_retries):
        try:
            result = call_function(user_query)

            # Validate JSON structure
            if "name" not in result or "arguments" not in result:
                raise ValueError("Invalid function call structure")

            # Validate against schema
            if not validate_function_call(result):
                raise ValueError("Function call validation failed")

            return result
        except json.JSONDecodeError as e:
            print(f"Attempt {attempt + 1}: JSON parsing failed - {e}")
        except ValueError as e:
            print(f"Attempt {attempt + 1}: Validation failed - {e}")
        except Exception as e:
            print(f"Attempt {attempt + 1}: Unexpected error - {e}")
    return None
```
Apply these optimizations based on use case:
**For Batch Processing:**
```python
# Left padding and a pad token are required for batched decoder-only generation
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
tokenizer.padding_side = "left"
inputs = tokenizer(prompt_list, return_tensors="pt", padding=True).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256)
```
**For Low Latency:**
```python
# Cap generation length and reuse the KV cache in a single call
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
```
**For Memory Constrained:**
```python
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-Full",
    load_in_8bit=True,   # or load_in_4bit=True (not both)
    device_map="auto",   # Automatic device placement
)
```
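As a rough back-of-envelope check (weights only, ignoring activations, KV cache, and quantization overhead), the memory footprint at each precision is parameters × bytes-per-parameter:

```python
# Approximate weight memory for a 1.2B parameter model at common precisions
params = 1.2e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

sizes_gb = {name: params * b / 1e9 for name, b in bytes_per_param.items()}
for name, gb in sizes_gb.items():
    print(f"{name}: ~{gb:.1f} GB")
# fp16: ~2.4 GB, int8: ~1.2 GB, int4: ~0.6 GB (weights only)
```

This is why the 4-bit path fits comfortably on edge devices and modest GPUs.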
Create a test suite to validate model behavior:
```python
TEST_CASES = [
    {
        "query": "Calculate the area of a circle with radius 10",
        "expected": {"name": "calculate_circle_area", "arguments": {"radius": 10}}
    },
    {
        "query": "Send an email to [email protected] about the bug",
        "expected": {"name": "send_email", "arguments": {"to": "[email protected]"}}
    },
    {
        "query": "Book a flight from Boston to Seattle on June 1st",
        "expected": {"name": "book_flight", "arguments": {"origin": "Boston", "destination": "Seattle", "date": "June 1st"}}
    }
]

def run_tests():
    passed = 0
    for test in TEST_CASES:
        result = safe_function_call(test["query"])
        if result and result["name"] == test["expected"]["name"]:
            passed += 1
            print(f"✓ {test['query']}")
        else:
            print(f"✗ {test['query']}: got {result}")
    print(f"\nPassed {passed}/{len(TEST_CASES)} tests")
```
1. **Chatbot with Tool Use:** Integrate into conversational agents that need to call APIs
2. **Voice Assistants:** Convert speech-to-text queries into executable functions
3. **Workflow Automation:** Parse natural language automation requests into structured actions
4. **API Gateway:** Route natural language API requests to appropriate endpoints
5. **Edge AI Agents:** Deploy function calling on IoT devices or mobile apps