Instructions for using prithivMLmods/Viper-OneCoder-UIGEN with libraries, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use prithivMLmods/Viper-OneCoder-UIGEN with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Viper-OneCoder-UIGEN")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Viper-OneCoder-UIGEN")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Viper-OneCoder-UIGEN")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Viper-OneCoder-UIGEN with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Viper-OneCoder-UIGEN"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Viper-OneCoder-UIGEN",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker
docker model run hf.co/prithivMLmods/Viper-OneCoder-UIGEN
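The vLLM server started above exposes an OpenAI-compatible API, so it can also be called from Python with the openai client. This is a minimal sketch, not part of the original card, assuming the server from the pip instructions is running on the default port 8000 and that the openai package is installed; the prompt and token budget are illustrative.

from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible endpoint).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Viper-OneCoder-UIGEN",
    messages=[
        {"role": "user", "content": "Create a responsive navigation bar using Tailwind CSS."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)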
- SGLang
How to use prithivMLmods/Viper-OneCoder-UIGEN with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Viper-OneCoder-UIGEN" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Viper-OneCoder-UIGEN",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "prithivMLmods/Viper-OneCoder-UIGEN" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "prithivMLmods/Viper-OneCoder-UIGEN",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Docker Model Runner
How to use prithivMLmods/Viper-OneCoder-UIGEN with Docker Model Runner:
docker model run hf.co/prithivMLmods/Viper-OneCoder-UIGEN
Viper-OneCoder-UIGEN
Viper-OneCoder-UIGEN is based on the Qwen 2.5 14B architecture and is designed for web development and structured coding logic. It has been fine-tuned on a synthetic dataset combining recent coding and chain-of-thought (CoT) data, further optimizing its step-by-step logic breakdown and front-end problem-solving abilities. The model demonstrates significant improvements in context understanding, structured UI development, and long-context comprehension, making it well suited for web-based coding tasks, HTML/CSS/Tailwind development, and detailed instruction following.
Key Improvements
- Best-in-Class Web Development Proficiency: Advanced understanding of HTML, CSS, Tailwind, JavaScript, and front-end frameworks.
- Fine-Tuned Step-by-Step Logic Breakdown: Optimized for structured explanations, component-based UI coding, and logic-driven development.
- Advanced Instruction Following: Delivers precise responses, structured outputs (e.g., JSON, YAML), and extended text generation (8K+ tokens); a short JSON-output sketch follows this list.
- Long-Context Mastery: Handles up to 128K tokens with an output capability of 8K tokens per response.
- Multilingual Code Support: Excels in HTML, CSS, JavaScript, React, Tailwind CSS, Python, and other major web-related languages, with documentation in 29+ languages.
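As a rough illustration of the structured-output behavior noted above, the text-generation pipeline can be asked for JSON and the reply parsed. This is a minimal sketch, not part of the original card: the prompt, field names, and token budget are illustrative, the exact pipeline output structure can vary across transformers versions, and the model is not guaranteed to return valid JSON on every run.

import json
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Viper-OneCoder-UIGEN")

messages = [
    {"role": "user", "content": "Describe a pricing-card UI component as JSON with keys "
                                "'title', 'price', and 'features'. Return only the JSON."}
]

result = pipe(messages, max_new_tokens=256)
# With chat-style input, recent transformers versions return the full conversation;
# the assistant reply is the last message.
raw = result[0]["generated_text"][-1]["content"]

try:
    component = json.loads(raw)
    print(component["title"], component["features"])
except json.JSONDecodeError:
    print("Model output was not valid JSON:\n", raw)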
Quickstart with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Viper-OneCoder-UIGEN"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Create a responsive navigation bar using Tailwind CSS."
messages = [
{"role": "system", "content": "You are an advanced AI assistant with expert-level UI coding and reasoning abilities."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated continuation is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
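For the long-form outputs the model targets (up to 8K new tokens), it can help to stream tokens as they are generated rather than waiting for generate to finish. Below is a minimal sketch using transformers' TextStreamer with the model, tokenizer, and model_inputs from the quickstart above; the max_new_tokens value is illustrative.

from transformers import TextStreamer

# Print tokens to stdout as they are produced, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=2048,
    streamer=streamer,
)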
Intended Use
- Elite Web Development & UI Design: Best-in-class model for writing, analyzing, and optimizing front-end code.
- Step-by-Step Coding Logic Breakdown: Guides developers through structured programming approaches and best practices.
- Component-Based UI Development: Generates reusable Tailwind and React components with clear explanations.
- Structured Data Processing: Handles JSON, XML, and structured UI component automation.
- Multilingual Programming Support: Proficient in HTML, CSS, Tailwind, JavaScript, React, Python, and Go.
- Extended Technical Content Generation: Ideal for writing documentation, blog posts, and front-end tutorials.
Limitations
- High Computational Demand: Requires powerful GPUs/TPUs for smooth inference due to 14B parameters.
- Framework-Specific Variability: Performance may vary across different front-end frameworks.
- Possible Error Propagation: Extended text outputs might introduce logical inconsistencies.
- Limited Real-World Awareness: The model does not have access to real-time internet updates.
- Prompt Sensitivity: Performance depends on how well the prompt is structured.
Model tree for prithivMLmods/Viper-OneCoder-UIGEN
Base model: prithivMLmods/Megatron-Opus-14B-Exp