A specialized code assistant model fine-tuned from Meta's Llama 3.1 8B on the Glaive code assistant dataset. This model excels at understanding programming contexts, generating code, debugging, and providing technical guidance across multiple programming languages.
This is a conversational AI model specifically trained to assist with software development tasks. It has been fine-tuned on high-quality code instruction data to provide accurate, contextual programming assistance.
**Model Details:**
- Base model: Meta Llama 3.1 8B
- Fine-tuning data: Glaive code assistant dataset
- Hub ID: `mlfoundations-dev/oh_v3-1_only_glaive_code_assistant`

**Usage:** Load the model with the Hugging Face Transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mlfoundations-dev/oh_v3-1_only_glaive_code_assistant"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype="auto",
)
```
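As a rough rule of thumb for sizing hardware (a back-of-the-envelope estimate, not an official requirement), loading an 8B-parameter model in fp16/bf16 takes about 2 bytes per parameter for the weights alone; actual usage is higher once activations and the KV cache are counted:

```python
# Illustrative VRAM estimate for holding the weights in memory.
# Real usage is higher due to activations, KV cache, and framework overhead.
def weight_memory_gb(num_params: float, bytes_per_param: int) -> float:
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(8e9, 2))  # fp16/bf16: 16.0 GB
print(weight_memory_gb(8e9, 1))  # int8 quantized: 8.0 GB
```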
Format programming questions or requests in a clear, conversational style. Use standard text generation with appropriate sampling parameters:
```python
messages = [
    {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
# Decode only the newly generated tokens, not the prompt
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
```
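Code-assistant responses typically wrap code in markdown fences. A small helper (a hypothetical utility, not part of the model or the Transformers API) can pull those blocks out of a decoded response for further processing:

```python
import re

def extract_code_blocks(response: str) -> list[str]:
    # Match ```lang\n ... ``` fenced blocks; the language tag is optional
    pattern = r"```[a-zA-Z0-9_+-]*\n(.*?)```"
    return [block.strip() for block in re.findall(pattern, response, re.DOTALL)]

sample = "Here you go:\n```python\ndef fib(n):\n    return n\n```\nHope that helps."
print(extract_code_blocks(sample))  # ['def fib(n):\n    return n']
```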
Full conversational assistant integration:
```python
def code_assistant(user_query: str, conversation_history: list = None):
    """
    Process a code-related query using the Glaive assistant model.
    """
    if conversation_history is None:
        conversation_history = []
    conversation_history.append({"role": "user", "content": user_query})
    inputs = tokenizer.apply_chat_template(
        conversation_history,
        return_tensors="pt",
        add_generation_prompt=True,
    ).to(model.device)
    outputs = model.generate(
        inputs,
        max_new_tokens=1024,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.1,
        do_sample=True,
    )
    # Decode only the newly generated tokens, not the prompt
    response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
    conversation_history.append({"role": "assistant", "content": response})
    return response, conversation_history
```
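Long conversations will eventually exceed the model's context window. One simple mitigation (a sketch of one possible strategy, not something this model card prescribes) is to keep only the most recent turns before applying the chat template:

```python
def trim_history(conversation_history: list, max_messages: int = 10) -> list:
    """Keep at most the last `max_messages` messages, starting on a user turn."""
    trimmed = conversation_history[-max_messages:]
    # Drop a leading assistant message so the history begins with a user turn
    while trimmed and trimmed[0]["role"] == "assistant":
        trimmed = trimmed[1:]
    return trimmed
```

Calling `trim_history` on `conversation_history` before `apply_chat_template` bounds prompt growth; a token-count cutoff using the tokenizer would be more precise than a message count.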
**Example Interactions:**

**User:** "Write a Python function to validate email addresses using regex"

**Model Response:** A complete implementation with the regex pattern, an explanation, and usage examples.
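For the first request, an implementation along these lines is the kind of answer to expect (an illustrative sketch, not actual model output):

```python
import re

# Simplified pattern: local part, "@", dotted domain labels, 2+ letter TLD.
# The full RFC 5322 address grammar is considerably more permissive.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    return EMAIL_RE.match(address) is not None

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not-an-email"))      # False
```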
**User:** "I'm getting a TypeError: 'NoneType' object is not subscriptable. What does this mean?"

**Model Response:** An explanation of the error, its common causes, and debugging strategies.
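The error in the second example arises when something returned `None` where a subscriptable value was expected; a minimal reproduction and the usual fix look like this (an illustration, not actual model output):

```python
import re

# re.match returns None when there is no match; subscripting that None
# raises: TypeError: 'NoneType' object is not subscriptable
match = re.match(r"\d+", "no digits here")

# The usual fix is to check for None before subscripting:
if match is not None:
    print(match[0])
else:
    print("no match")  # this branch runs here
```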