LLMBG-ToolUse-27B-v1.0 is a 27B parameter Bulgarian language model specialized for function calling, tool use, and MCP integration. It is based on Tucan-27B-v1.0 and quantized into GGUF format for efficient local deployment.
This skill provides access to LLMBG-ToolUse-27B-v1.0, a Bulgarian language model optimized for:
- Function calling and structured tool use
- MCP (Model Context Protocol) integration
- Bulgarian language understanding and generation
The model is available in multiple quantization levels (Q2_K through Q8_0) to balance quality against resource requirements.
When a user requests to use this Bulgarian tool-use model:
1. **Explain the model capabilities**
- Inform the user this is a 27B parameter model specialized for Bulgarian language
- Highlight its function calling and tool use capabilities
- Mention it's quantized in GGUF format for local deployment
- Note it supports MCP integration
2. **Help select the appropriate quantization**
Based on the user's hardware capabilities, recommend:
- **Q4_K_M** (16.7 GB): Fast, recommended for most users with 20+ GB RAM
- **Q5_K_M** (19.5 GB): Better quality, requires 24+ GB RAM
- **Q6_K** (22.4 GB): Very good quality, requires 28+ GB RAM
- **Q8_0** (29.0 GB): Best quality, requires 36+ GB RAM
- **Q2_K** (10.5 GB): Smallest, lower quality but runs on limited hardware
- **Q3_K_M** (13.5 GB): A middle ground between Q2_K and Q4_K_M in size and quality
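The selection logic above can be sketched as a small shell helper. This is illustrative only: the function name `recommend_quant` is made up, the 20/24/28/36 GB thresholds come from the table above, and the cutoffs for Q3_K_M and Q2_K are extrapolated from file sizes rather than stated in the model card:

```shell
# Pick a quantization level from available RAM (in GB), using the
# RAM recommendations listed above. Thresholds for Q3_K_M and Q2_K
# are assumptions extrapolated from file size, not documented values.
recommend_quant() {
  ram_gb=$1
  if   [ "$ram_gb" -ge 36 ]; then echo "Q8_0"
  elif [ "$ram_gb" -ge 28 ]; then echo "Q6_K"
  elif [ "$ram_gb" -ge 24 ]; then echo "Q5_K_M"
  elif [ "$ram_gb" -ge 20 ]; then echo "Q4_K_M"
  elif [ "$ram_gb" -ge 17 ]; then echo "Q3_K_M"
  else echo "Q2_K"
  fi
}

recommend_quant 20   # prints Q4_K_M
```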
3. **Provide download instructions**
```bash
# Example for Q4_K_M quantization
wget https://huggingface.co/mradermacher/LLMBG-ToolUse-27B-v1.0-GGUF/resolve/main/LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf
```
4. **Explain usage with common GGUF runners**
**llama.cpp:**
```bash
# Newer llama.cpp builds name the CLI binary llama-cli; older builds used ./main
./llama-cli -m LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf -p "Your prompt here" -n 512
```
**Ollama:**
```bash
# Create Modelfile
echo "FROM ./LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf" > Modelfile
ollama create llmbg-tooluse -f Modelfile
ollama run llmbg-tooluse
```
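The one-line Modelfile above works, but a Modelfile can also pin runtime parameters. A minimal sketch, with illustrative values (`num_ctx` and `temperature` below are generic defaults, not tuned for this model):

```shell
# Write a Modelfile that sets an explicit context window and temperature.
# PARAMETER num_ctx and PARAMETER temperature are standard Ollama
# Modelfile directives; the values here are assumptions, not model-card settings.
cat > Modelfile <<'EOF'
FROM ./LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf
PARAMETER num_ctx 4096
PARAMETER temperature 0.7
EOF

# Then build and run as above:
# ollama create llmbg-tooluse -f Modelfile
# ollama run llmbg-tooluse
```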
**LM Studio or Jan:**
- Import the GGUF file through the UI
- Load the model and start chatting
5. **Highlight tool-use features**
- This model is specifically trained for function calling
- Explain how to structure function/tool definitions in prompts
- Mention MCP compatibility for context protocol integration
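The exact tool-call format the model was trained on is defined by its chat template, so check the model card. As a hedged sketch, one common convention is to embed each tool as a JSON schema in the prompt; the `get_weather` tool and the prompt wording below are entirely illustrative:

```shell
# Build a prompt that advertises one tool as an OpenAI-style JSON schema.
# The tool name, schema layout, and Bulgarian prompt text are examples,
# not the format documented for this model.
TOOLS='[{"name":"get_weather","description":"Returns the forecast for a city","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}]'

# Bulgarian prompt: "You have access to the following tools: ...
#  Answer with JSON when a tool is needed.
#  User: What is the weather in Sofia?"
PROMPT="Имаш достъп до следните инструменти: ${TOOLS}
Отговаряй с JSON, когато е нужен инструмент.
Потребител: Какво е времето в София?"

printf '%s\n' "$PROMPT"
# Then pass it to a runner, e.g.:
# ./llama-cli -m LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf -p "$PROMPT" -n 256
```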
6. **Note the base model license**
- Licensed under Gemma license terms
- Base model: llm-bg/Tucan-27B-v1.0
- Quantized by mradermacher
**User request:** "I need a Bulgarian language model that can call functions"
**Response:**
"I recommend the LLMBG-ToolUse-27B-v1.0 model - a 27B parameter Bulgarian language model optimized for function calling and tool use.
For most users, I suggest the Q4_K_M quantization (16.7 GB) which offers a good balance of quality and performance. You'll need at least 20 GB of RAM.
Download it with:
```bash
wget https://huggingface.co/mradermacher/LLMBG-ToolUse-27B-v1.0-GGUF/resolve/main/LLMBG-ToolUse-27B-v1.0.Q4_K_M.gguf
```
Then run with llama.cpp, Ollama, LM Studio, or Jan. This model excels at Bulgarian language understanding and can integrate with MCP for enhanced context handling."