A specialized AI model for generating travel-related commands and assisting with travel planning tasks. Based on CodeLlama-7B, fine-tuned with Unsloth for optimized performance, and available in multiple GGUF quantization formats for efficient local deployment.
This skill provides access to the `cmd_gen_travel_assistant_codellama_7b_unsloth_params` model, a fine-tuned version of CodeLlama-7B specifically trained for travel command generation. The model is available in GGUF format with multiple quantization options, allowing you to balance between model size and quality based on your hardware constraints.
When using this skill, you will help users:
1. **Select appropriate quantization**: Recommend a GGUF quantization based on user's hardware:
- For limited RAM (8GB): Q2_K (2.36GB) or IQ3_XS (2.6GB)
- For moderate RAM (16GB): Q4_K_M (3.8GB) or Q5_K_M (4.45GB)
- For high performance (32GB+): Q6_K (5.15GB) or Q8_0 (6.67GB)
2. **Download and setup**: Guide users to download the appropriate GGUF file from HuggingFace:
- Repository: `RichardErkhov/dakshvar22_-_cmd_gen_travel_assistant_codellama_7b_unsloth_params-gguf`
- Provide direct links to specific quantization files
3. **Integration guidance**: Help integrate the model with local inference tools:
- llama.cpp
- Ollama
- LM Studio
- Text generation WebUI
4. **Travel command generation**: Assist users in:
- Generating travel planning commands
- Creating travel itineraries
- Writing booking-assistance code
- Travel-related API interactions
- Conversational travel assistance
5. **Model information**: Provide context about:
- Base model: CodeLlama-7B (4-bit quantized via Unsloth)
- Training: Fine-tuned using Unsloth and HuggingFace TRL
- License: Apache 2.0
- Original creator: dakshvar22
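The RAM-based selection in step 1 can be sketched as a small helper. The quantization names and file sizes are taken from the list above; the function name and exact RAM thresholds are illustrative, not part of the model card:

```python
# Map available system RAM (in GB) to recommended GGUF quantizations.
# Sizes are the file sizes listed above; names follow llama.cpp's
# quantization naming scheme (Q2_K, Q4_K_M, ...).

def recommend_quant(ram_gb: float) -> list[tuple[str, float]]:
    """Return (quant_name, file_size_gb) options for the given RAM budget."""
    if ram_gb <= 8:          # limited RAM: smallest files
        return [("Q2_K", 2.36), ("IQ3_XS", 2.60)]
    if ram_gb <= 16:         # moderate RAM: balanced quality/size
        return [("Q4_K_M", 3.80), ("Q5_K_M", 4.45)]
    return [("Q6_K", 5.15), ("Q8_0", 6.67)]  # 32GB+: highest quality

print(recommend_quant(12))   # 12 GB falls in the moderate tier
```

A real selector would also leave headroom for the KV cache and the OS, so treating the file size as the only memory cost is a simplification.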
```
User: "I have 12GB RAM, which version should I use?"
Assistant: "For 12GB RAM, I recommend Q4_K_M (3.8GB) or Q5_K_S (4.33GB).
Q4_K_M offers a good balance of quality and size, while Q5_K_S provides
slightly better quality with minimal size increase."
```
```
User: "How do I get the Q4_K_M version?"
Assistant: "Download directly from HuggingFace:
https://huggingface.co/RichardErkhov/dakshvar22_-_cmd_gen_travel_assistant_codellama_7b_unsloth_params-gguf/blob/main/cmd_gen_travel_assistant_codellama_7b_unsloth_params.Q4_K_M.gguf
Or use huggingface-cli:
huggingface-cli download RichardErkhov/dakshvar22_-_cmd_gen_travel_assistant_codellama_7b_unsloth_params-gguf cmd_gen_travel_assistant_codellama_7b_unsloth_params.Q4_K_M.gguf"
```
```
User: "How do I run this with llama.cpp?"
Assistant: "After downloading your preferred GGUF file:
./llama-cli -m cmd_gen_travel_assistant_codellama_7b_unsloth_params.Q4_K_M.gguf \
  -p 'Generate a travel itinerary for a 5-day trip to Tokyo' \
  -n 512 \
  --temp 0.7
(Older llama.cpp builds name the binary ./main instead of ./llama-cli.)"
```