Jade8b
Jade8b is a Brazilian Portuguese conversational finetune of Qwen3 8B built to express a strong, persistent persona. It is designed for PT-BR chat, chatbot use cases, and character-style interaction, with colloquial language, abbreviations, slang, and a WhatsApp-like tone.
Model Summary
Jade8b is a persona-first model. It was intentionally finetuned so the model speaks like Jade even without a strong system prompt. Because of that, the model often answers in PT-BR with informal phrasing such as vc, slang, and a friendly conversational tone from the first turn.
Compared to larger models in the Jade family, Jade8b is the lighter and more accessible option. It is easier to test, easier to run, and more practical for real-world deployments where latency and hardware cost matter.
Model Details
- Developed by: Madras1
- Base model: qwen/qwen3-8b
- Model type: conversational text-generation finetune
- Primary language: Brazilian Portuguese (pt-BR)
- License: apache-2.0
Intended Behavior
This model was trained to:
- speak naturally in Brazilian Portuguese
- maintain a consistent Jade persona
- sound informal, friendly, and chat-oriented
- work well in casual assistant and conversational use cases
Typical behavior includes:
- abbreviations like vc
- light slang and colloquial wording
- expressions such as tmj, mano, tlgd
- a more human and less robotic tone
If Jade already sounds like a recurring character during inference, that is expected behavior, not an error.
Training Intent
The finetune objective was to make the persona live in the weights, not only in prompting.
High-level training approach:
- synthetic PT-BR prompt generation for chat-like situations
- persona-driven response distillation
- supervised finetuning on conversational data
- removal of system/persona instructions during SFT so the model directly internalizes the Jade style
This is why the model can already answer with personality, abbreviations, and slang even with a simple user-only prompt.
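The system-prompt removal step above can be sketched as a small data-prep pass. This is a minimal illustration, assuming conversations are stored in the common role/content message format; the function name and field names are illustrative, not from the actual pipeline:

```python
def strip_system_messages(conversations):
    """Drop system/persona turns from chat examples so the persona
    must be learned from the assistant responses themselves."""
    cleaned = []
    for convo in conversations:
        messages = [m for m in convo if m["role"] != "system"]
        # Keep only examples that still contain a user turn.
        if any(m["role"] == "user" for m in messages):
            cleaned.append(messages)
    return cleaned

# Example: the persona ends up living in the data, not in a system turn.
raw = [
    [
        {"role": "system", "content": "Voce e a Jade, responda de forma informal."},
        {"role": "user", "content": "oi jade, tudo bem?"},
        {"role": "assistant", "content": "oii! tudo sim e vc? tmj"},
    ]
]
print(strip_system_messages(raw))
```

After this pass, SFT only ever sees user/assistant pairs, so the style has to be encoded in the weights rather than recalled from an instruction.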
Training Setup
High-level setup used for this finetune:
- around 25,000 examples
- 4 epochs
- Unsloth-based SFT pipeline
- chat-style data in Portuguese
Why Jade8b
Jade8b is the practical member of the Jade family.
Best advantages of the 8B version:
- easier to run than larger variants
- faster to test and iterate with
- more realistic for apps and local experimentation
- likely to be the most accessible entry point into the Jade family
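A rough back-of-envelope estimate shows why the 8B size is the practical choice. Weight memory alone (ignoring KV cache and activations) is parameter count times bytes per parameter; the 8B and 72B figures come from the model names in this family, and the byte sizes are standard precision widths:

```python
def weight_memory_gib(params_billion, bytes_per_param):
    """Approximate memory for model weights only (no KV cache or activations)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for label, nbytes in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"Jade8b  {label}: ~{weight_memory_gib(8, nbytes):.1f} GiB")
    print(f"Jade72b {label}: ~{weight_memory_gib(72, nbytes):.1f} GiB")
```

At bf16 the 8B weights fit in roughly 15 GiB, and quantized variants fit on a single consumer GPU, while a 72B model needs multi-GPU or heavy quantization just to load.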
Recommended Use
Best fit:
- PT-BR chat assistants
- persona bots
- WhatsApp-style conversational agents
- lightweight entertainment or social AI experiences
- local or lower-cost deployments
Less ideal for:
- formal writing
- highly neutral assistant behavior
- high-stakes legal, medical, or financial contexts
Prompting Tips
For the strongest Jade behavior:
- use a simple user message
- avoid a formal system prompt that fights the finetune
- keep prompts conversational when possible
Example prompts:
- oi jade, tudo bem?
- jade, me explica isso de um jeito simples
- vc acha que vale a pena estudar python hoje?
Example Inference
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Madras1/Jade8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# A user-only prompt is enough; no system prompt is needed for the persona.
messages = [
    {"role": "user", "content": "oi jade, tudo bem?"}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.9,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Model Family
Current and planned Jade variants:
- Jade72b
- Jade8b
Jade8b is intended to be the more accessible model in the lineup, while larger variants can focus on stronger reasoning, richer responses, or higher overall capacity.
Limitations
Because this is a persona-oriented finetune:
- it may sound informal in contexts where a neutral tone would be better
- it may over-index on chat style depending on the prompt
- it is optimized more for persona consistency than strict formality