A lightweight 1.2B-parameter liquid neural network model fine-tuned for function calling and tool use in conversational contexts. These GGUF quantizations provide efficient structured function execution while maintaining a small memory footprint.
LFM2.5-1.2B-Nova-Function-Calling is based on NovachronoAI's liquid neural network architecture and trained on the Nova-Synapse-Function-Calling dataset. The model excels at structured function calling and tool use in conversational settings.
This skill provides weighted/imatrix GGUF quantizations optimized for various performance/quality tradeoffs, from ultra-compressed (IQ1_S ~0.4GB) to high-quality (Q6_K ~1.1GB) variants.
Choose a quantization based on your requirements:
**Recommended for most use cases:** Q4_K_M — the variant used in the examples below, balancing size and quality.
**For limited resources:** the smaller K-quants and IQ variants trade output quality for a reduced memory footprint.
**For maximum quality:** Q6_K (~1.1GB), the largest variant offered.
**For extreme compression (not recommended):** IQ1_S (~0.4GB); quality degrades sharply at this size.
Download your chosen quantization from the model repository:
```bash
wget https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF/resolve/main/LFM2.5-1.2B-Nova-Function-Calling.i1-Q4_K_M.gguf
```
Load with llama.cpp (recent builds name the CLI binary `llama-cli`; older builds use `main`):
```bash
./main -m LFM2.5-1.2B-Nova-Function-Calling.i1-Q4_K_M.gguf \
--ctx-size 4096 \
--temp 0.7 \
--repeat-penalty 1.1
```
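If you launch llama.cpp from a wrapper script, the flags above can be assembled programmatically. A minimal sketch — the binary and model paths are assumptions, not fixed by this repository:

```python
def llama_cmd(model_path, ctx_size=4096, temp=0.7, repeat_penalty=1.1, binary="./main"):
    """Return an argv list for subprocess.run(), mirroring the flags shown above."""
    return [
        binary,
        "-m", model_path,
        "--ctx-size", str(ctx_size),
        "--temp", str(temp),
        "--repeat-penalty", str(repeat_penalty),
    ]

# Assumed local filename from the download step above.
cmd = llama_cmd("LFM2.5-1.2B-Nova-Function-Calling.i1-Q4_K_M.gguf")
print(" ".join(cmd))
```

Passing an argv list (rather than a shell string) avoids quoting issues when paths contain spaces.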
Or with Ollama, create a `Modelfile`:

```
FROM ./LFM2.5-1.2B-Nova-Function-Calling.i1-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER stop "<|im_end|>"
```

Then build and run the model:

```bash
ollama create lfm-nova -f Modelfile
ollama run lfm-nova
```
Structure your prompts to include function definitions followed by the user request. The `get_weather` schema below is illustrative; use whatever JSON schema your application defines:

```
Available functions:
{"name": "get_weather", "description": "Get current weather for a city", "parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}}

User: What's the weather like in Paris?
```
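Once the model emits a function call, your application must parse and execute it. The JSON call format and the `get_weather` handler below are illustrative assumptions — adapt them to the exact output format your deployment produces:

```python
import json

# Hypothetical handler registry; a real handler would call an actual weather API.
HANDLERS = {
    "get_weather": lambda args: f"Weather in {args['location']}: 18°C, clear",
}

def dispatch(model_output: str):
    """Extract the first JSON object from the model's reply and run the matching handler."""
    start = model_output.find("{")
    end = model_output.rfind("}") + 1
    if start == -1 or end == 0:
        return None  # no function call emitted
    call = json.loads(model_output[start:end])
    return HANDLERS[call["name"]](call.get("arguments", {}))

# Example: a model reply containing an assumed call format.
reply = 'I will check that. {"name": "get_weather", "arguments": {"location": "Paris"}}'
print(dispatch(reply))  # → Weather in Paris: 18°C, clear
```

In production you would validate the parsed call against the declared schema before dispatching, and feed the handler's result back to the model as a follow-up turn.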