
This uv script runs batch inference with vLLM over a Hugging Face dataset, as long as the dataset has a messages column. It is based on the script at https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py; the only difference is that it uses llm.chat() instead of llm.generate(), so the response format matches the familiar OpenAI chat format and is easier to work with.
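The script expects each dataset row to carry OpenAI-style chat messages: a list of dicts with role and content keys. A minimal sketch of one row (the message contents here are a hypothetical example, not taken from any real dataset):

```python
# A minimal example of the expected "messages" column format:
# OpenAI-style chat messages, i.e. a list of {"role", "content"} dicts.
row = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ]
}

# Every message carries a role and a content string.
for message in row["messages"]:
    assert {"role", "content"} <= message.keys()
```

Rows in this shape can be passed straight to llm.chat(), which is why the script requires the column.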

Launch Job via SDK

```python
#!/usr/bin/env python3
import os

from dotenv import load_dotenv
from huggingface_hub import HfApi

load_dotenv()

DATASET_REPO_ID = "tytodd/test-job-dataset"
SCRIPT_URL = "https://huggingface.co/datasets/modaic/batch-vllm/raw/main/generate_responses.py"


def main() -> None:
    api = HfApi()
    job_info = api.run_uv_job(
        SCRIPT_URL,
        script_args=[
            DATASET_REPO_ID,  # input dataset
            DATASET_REPO_ID,  # output dataset
            "--model-id",
            # "Qwen/Qwen3-235B-A22B-Instruct-2507",
            "deepseek-ai/DeepSeek-V3.2",
            # "zai-org/GLM-5",  # requires transformers > 5
            # "moonshotai/Kimi-K2.5",
            "--messages-column",
            "messages",
        ],
        dependencies=["transformers<5"],
        image="vllm/vllm-openai:latest",
        flavor="h200x4",
        secrets={"HF_TOKEN": os.getenv("HF_TOKEN")},
    )
    print(f"Created job {job_info.id}")
    print(job_info.url)


if __name__ == "__main__":
    main()
```
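The launcher reads HF_TOKEN from the environment via python-dotenv, so a .env file alongside it is enough. A sketch (the token value is a placeholder; use your own Hugging Face token with write access):

```shell
# .env  (placeholder value)
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
```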

Launch Job via CLI

```bash
uvx hf jobs uv run \
    --flavor l4x4 \
    --secrets HF_TOKEN \
    https://huggingface.co/datasets/modaic/batch-vllm/resolve/main/generate_responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --temperature 0.7 \
    --max-tokens 16384
```
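If the source dataset stores plain prompt strings rather than chat turns, a small conversion step produces the required messages column before launching the job. A minimal sketch with hypothetical data (the prompt texts and single-user-turn layout are illustrative, not part of this script):

```python
# Sketch: wrapping plain prompt strings into the OpenAI-style
# "messages" format the script expects (hypothetical input data).
prompts = ["What is 2 + 2?", "Name a prime number."]

rows = [
    {"messages": [{"role": "user", "content": prompt}]}
    for prompt in prompts
]

# Each row now holds a single user turn.
assert all(row["messages"][0]["role"] == "user" for row in rows)
```

Pushing these rows as a dataset with a messages column makes it a valid input for either launch method above.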