| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm | 31,091 | [Usage]: Image Embedding Models (CLIP, Siglip, etc) | ### Your current environment
```text
root@3904bdeddb91:/vllm-workspace# python3 collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0... | https://github.com/vllm-project/vllm/issues/31091 | closed | [
"usage"
] | 2025-12-21T04:10:10Z | 2025-12-23T03:26:40Z | 2 | JamesDConley |
huggingface/lerobot | 2,690 | [Bug] Pi0 Inference RuntimeError: Dimension mismatch in Gemma eager_attention_forward (Causal Mask vs Attn Weights) | https://github.com/huggingface/lerobot/issues/2690 | closed | [
"bug",
"question",
"policies",
"dataset",
"CI",
"performance",
"robots",
"examples",
"training"
] | 2025-12-20T16:08:36Z | 2025-12-22T09:34:57Z | null | SMWTDDY | |
huggingface/lerobot | 2,689 | problem regarding updating the aloha sim dataset from version v2.1 to v3.0 | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
lerobot version 3.0, h100 gpu, openpi repository, training aloha simulation with pi0.5
```
### Description
During training aloha simulation, I updated lerobot aloha sim insertion dataset from compatible with 2.1 to 3.0, ... | https://github.com/huggingface/lerobot/issues/2689 | open | [
"bug",
"question",
"dataset",
"simulation",
"CI",
"robots",
"training"
] | 2025-12-20T13:42:39Z | 2025-12-24T00:06:09Z | null | conscious-choi |
sgl-project/sglang | 15,524 | [Bug] Deepseek R1 multi-turn tool calling not working | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15524 | closed | [] | 2025-12-20T10:31:36Z | 2025-12-21T01:29:43Z | 2 | ynwang007 |
vllm-project/vllm | 31,066 | [Doc]: Formatting issue in markdown file | ### 📚 The doc issue
in [paged_attention.md](https://github.com/vllm-project/vllm/blob/ff2168bca3a195b835c64a5c9012d7b6a9f34e61/docs/design/paged_attention.md#query), there is an issue where pictures aren't formatted correctly and only show the HTML link.
For example, specifically, in the Query subsection, we can se... | https://github.com/vllm-project/vllm/issues/31066 | closed | [
"documentation"
] | 2025-12-20T06:23:44Z | 2025-12-22T01:38:56Z | 1 | ssaketh-ch |
pytorch/pytorch | 170,926 | Could we have a unified method on c10::Stream to access the underlying pointer that the c10::Stream wraps? | As title.
As I understand it, the device-generic c10::Stream object is intended to wrap an underlying pointer to the stream object for the accelerator (e.g. `cudaStream_t` for CUDA, `hipStream_t` for ROCm, `sycl::queue&` for XPU, etc.). I see that there are methods like the following on `CUDAStream`/`XPUStream` that al... | https://github.com/pytorch/pytorch/issues/170926 | open | [
"triaged",
"module: PrivateUse1",
"module: accelerator"
] | 2025-12-20T00:35:56Z | 2025-12-31T02:31:09Z | 3 | mikaylagawarecki |
pytorch/torchtitan | 2,168 | Wrong commands in compiler_toolkit .md? | ### Bug description
The commands in the readme page of https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/compiler_toolkit are wrong?
Only the first flex_attention command has `--model.flavor=debugmodel_flex_attn`, the other three don't, and I don't see flex_attention ops in the graph modules if I... | https://github.com/pytorch/torchtitan/issues/2168 | open | [] | 2025-12-19T23:25:19Z | 2025-12-19T23:31:37Z | 2 | yushangdi |
vllm-project/vllm | 31,044 | [CI Failure]: Blackwell Fusion Tests | ### Name of failing test
FAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!
### Basic information
- [x] ... | https://github.com/vllm-project/vllm/issues/31044 | open | [
"help wanted",
"torch.compile",
"ci-failure"
] | 2025-12-19T18:49:59Z | 2025-12-26T21:58:25Z | 3 | robertgshaw2-redhat |
vllm-project/vllm | 31,043 | [BugFix]: move torch.Size across graphs in split_graph | ### 🚀 The feature, motivation and pitch
When fixing a moe x cudagraph issue (see #30914), we found that `split_graph` may generate a submodule that returns a torch.Size and later another submodule that takes torch.Size. This errors since pt2 somehow does not support `torch.Size` as output yet.
One fix is to manuall... | https://github.com/vllm-project/vllm/issues/31043 | open | [
"help wanted",
"feature request",
"torch.compile"
] | 2025-12-19T18:24:58Z | 2025-12-22T21:23:04Z | 1 | BoyuanFeng |
vllm-project/vllm | 31,039 | [Feature]: Integrate Sonic MoE | ### 🚀 The feature, motivation and pitch
https://x.com/wentaoguo7/status/2001773245318541324?s=46&t=jLcDgQXDbYe6HgFmTNYgpg
https://github.com/Dao-AILab/sonic-moe
Curious to see benchmarks!
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure yo... | https://github.com/vllm-project/vllm/issues/31039 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-19T17:29:59Z | 2026-01-04T14:10:21Z | 4 | robertgshaw2-redhat |
sgl-project/sglang | 15,481 | [Bug] Seeded Deterministic/Batch Invariant Inference Not Working on v1/completions endpoint | ### Checklist
- [x] I searched related issues but found no solution.
- [x] The bug persists in the latest version.
- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [x] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15481 | closed | [
"bug",
"high priority"
] | 2025-12-19T15:04:26Z | 2025-12-20T04:32:15Z | 8 | jamesheavey |
huggingface/lerobot | 2,684 | How to manually push a dataset | Say you `lerobot-record` a dataset with the flag `--dataset.push_to_hub=False`, or you encounter any problem at uploading time.
Is using `hf upload` enough, or do `lerobot` datasets need additional steps? | https://github.com/huggingface/lerobot/issues/2684 | open | [
"documentation",
"question",
"dataset"
] | 2025-12-19T13:00:20Z | 2025-12-19T15:41:42Z | null | mcres |
vllm-project/vllm | 31,023 | [Doc]: FP8 KV Cache: Does softmax output multiply with FP8 V directly or after dequantization? | ### 📚 The doc issue
https://docs.vllm.ai/en/v0.8.5.post1/features/quantization/quantized_kvcache.html
Question:
In the FP8 KV Cache implementation, after computing attention scores and softmax at higher precision (FP16/BF16), is the resulting attention weight matrix:
Quantized to FP8 and multiplied directly with FP8 ... | https://github.com/vllm-project/vllm/issues/31023 | closed | [
"documentation"
] | 2025-12-19T10:33:22Z | 2025-12-22T00:41:38Z | 0 | jorjiang |
pytorch/pytorch | 170,867 | Operator benchmark: option to measure GPU execution time only (less CPU noise) | ### 🚀 The feature, motivation and pitch
Hello,
[Operator benchmark](https://github.com/pytorch/pytorch/tree/main/benchmarks/operator_benchmark) currently measures time in a way that [could be prone to CPU noise](https://github.com/pytorch/pytorch/blob/eba9265a580c6dc3e928ef341c23cab96ccf8b07/benchmarks/operator_benc... | https://github.com/pytorch/pytorch/issues/170867 | open | [
"oncall: profiler"
] | 2025-12-19T10:31:38Z | 2025-12-20T22:52:15Z | 0 | apakbin |
vllm-project/vllm | 31,019 | [Bug]: Qwen3-VL 2:4 sparsity llm-compressor RuntimeError: shape mismatch (0.12, 0.13rc2) | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version ... | https://github.com/vllm-project/vllm/issues/31019 | open | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-19T09:18:00Z | 2025-12-24T12:16:01Z | 4 | SorenDreano |
vllm-project/vllm | 31,016 | [Bug]: FlashInfer Incompatible with Sleep Mode | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Here is a script to reproduce the bug:
I use vllm=v0.10.1 and flashinfer-python=v0.5.3.
```
from vllm import LLM, S... | https://github.com/vllm-project/vllm/issues/31016 | open | [
"bug",
"help wanted"
] | 2025-12-19T08:04:19Z | 2025-12-19T23:17:47Z | 1 | xiaoxiaosuaxuan |
huggingface/transformers.js | 1,490 | Example models for each pipeline | ### Question
Right now, I sorta use the docs and some searches to find good default models for https://workglow.dev/ for each pipeline that transformerjs has to offer. But they are not really the best, either in size or performance.
It would be great to have a list for each pipeline for fast and effective, best of br... | https://github.com/huggingface/transformers.js/issues/1490 | open | [
"question"
] | 2025-12-19T07:37:16Z | 2025-12-19T17:41:01Z | null | sroussey |
vllm-project/vllm | 31,004 | [New Model]: T5Gemma 2 | ### The model to consider.
https://huggingface.co/collections/google/t5gemma-2
### The closest model vllm already supports.
_No response_
### What's your difficulty of supporting the model you want?
I know vLLM dropped encoder-decoder support, but can we bring it back?
https://huggingface.co/docs/transformers/mo... | https://github.com/vllm-project/vllm/issues/31004 | open | [
"new-model"
] | 2025-12-19T03:55:00Z | 2025-12-20T21:37:34Z | 1 | ducviet00-h2 |
sgl-project/sglang | 15,443 | SGLang Diffusion Cookbook Proposal | # 🎨 [Community Contribution] Create SGLang Diffusion Models Cookbook
## 🎯 Goal
Create a comprehensive cookbook for diffusion models in SGLang, demonstrating SGLang's performance advantages for image and video generation workloads.
## 📋 Scope
### Models to Cover
**Image Generation:**
- Flux-1 Dev
- Flux-2
- SDX... | https://github.com/sgl-project/sglang/issues/15443 | open | [] | 2025-12-19T03:44:33Z | 2025-12-23T13:09:31Z | 1 | Richardczl98 |
vllm-project/vllm | 30,969 | [Bug]: SmolLM3-3B FP8 Fails to Load [`compressed-tensors` and `transformers-impl` compatibility issue] | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
Running in official Docker image: vllm/vllm-openai:v0.11.1
GPU: NVIDIA L4 (GCP g2-standard-8)
`| NVIDIA-SMI 570.195.03 Driver Version: 570.195.03 CUDA Version: 12.9 |`
vLLM version: 0.11.1
`... | https://github.com/vllm-project/vllm/issues/30969 | closed | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-18T14:36:30Z | 2025-12-20T21:54:47Z | 3 | GauthierRoy |
huggingface/lerobot | 2,680 | Invalid frame index when training on merged datasets [RuntimeError] | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 4.4.2-0ubunt... | https://github.com/huggingface/lerobot/issues/2680 | open | [
"bug",
"question",
"dataset",
"visualization",
"examples",
"training"
] | 2025-12-18T13:29:50Z | 2025-12-26T06:26:37Z | null | RiccardoIzzo |
huggingface/trl | 4,719 | Loss calculation of `GKDTrainer` may be inaccurate when performing gradient accumulation? | It seems that `GKDTrainer` averages the loss of tokens in a micro batch ahead?
https://github.com/huggingface/trl/blob/8918c9836a3e0b43a6851c08d01b69072f56ca52/trl/experimental/gkd/gkd_trainer.py#L284 | https://github.com/huggingface/trl/issues/4719 | open | [
"🐛 bug",
"๐ GKD"
] | 2025-12-18T12:50:05Z | 2025-12-18T12:50:49Z | 0 | jue-jue-zi |
huggingface/lerobot | 2,679 | Merging datasets removes fps from scalar features | ### Ticket Type
🐛 Bug Report (Something isn't working)
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: Linux-6.17.9-arch1-1-x86_64-with-glibc2.42
- Python version: 3.12.11
- Huggingface Hub version: 0.34.4
- Datasets version: 4.1.1
- Numpy version: 2.3.5
- FFmpeg version: n8.0.1
- PyTorc... | https://github.com/huggingface/lerobot/issues/2679 | open | [
"bug",
"enhancement",
"question",
"dataset",
"performance",
"examples"
] | 2025-12-18T12:47:14Z | 2025-12-18T15:25:12Z | null | reeceomahoney |
vllm-project/vllm | 30,956 | [Feature]: could output the given format logger? | ### 🚀 The feature, motivation and pitch
Hi,
I have defined a logger in a Python script, e.g. logger_utils.py.
Could I run the serve command from the shell with that logger? For example:
`vllm serve qwen3-embedding-0.6b --logger_file logger_utils.py`
Thanks, I really need your help.
### Alternatives
_No response_
#... | https://github.com/vllm-project/vllm/issues/30956 | open | [
"feature request"
] | 2025-12-18T09:35:22Z | 2025-12-19T01:52:41Z | 5 | ucas010 |
huggingface/lerobot | 2,678 | Bug: lerobot-dataset-viz IndexError when visualizing specific episodes | # Bug Report: `lerobot-dataset-viz` IndexError when visualizing specific episodes
## Description
The `lerobot-dataset-viz` command fails with an `IndexError` when trying to visualize a specific episode using the `--episode-index` parameter. The issue is caused by `EpisodeSampler` using global dataset indices while th... | https://github.com/huggingface/lerobot/issues/2678 | open | [
"bug",
"question",
"dataset",
"visualization",
"python",
"examples"
] | 2025-12-18T08:45:05Z | 2025-12-24T08:31:00Z | null | apeSh1t |
vllm-project/vllm | 30,941 | [Performance]: Why Does Latency Remain Unchanged in vLLM 0.11.0 When Input Token Count Decreases for qwen3-vl-30b-a3b? | ### Proposal to improve performance
_No response_
### Report of performance regression
_No response_
### Misc discussion on performance
Using vLLM version 0.11.0 to run the qwen3-vl-30b-a3b model, the stress test results show that although the number of input tokens decreases, the latency does not change.
The mod... | https://github.com/vllm-project/vllm/issues/30941 | open | [
"performance"
] | 2025-12-18T07:40:35Z | 2025-12-18T07:40:35Z | 0 | Hormoney |
pytorch/pytorch | 170,750 | CUDA: Tensor.index_select out-of-bounds index triggers device-side assert (Indexing.cu:1237) instead of a regular error | ### 🐛 Describe the bug
### Bug description
On CUDA, calling `Tensor.index_select` with an out-of-bounds index triggers a device-side assert in `../aten/src/ATen/native/cuda/Indexing.cu:1237` (`indexSelectSmallIndex`), and then raises `RuntimeError: CUDA error: device-side assert triggered`.
On CPU, similar out-of-bo... | https://github.com/pytorch/pytorch/issues/170750 | open | [
"module: cuda",
"triaged"
] | 2025-12-18T06:48:01Z | 2025-12-20T23:31:57Z | 0 | DeLightor |
vllm-project/vllm | 30,933 | [Usage]: What is the latest instruction to run DeepSeek V3.2? | ### Your current environment
vLLM 0.12.0
### How would you like to use vllm
I am following the guidelines here https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html for running DeepSeek v3.2. By following the instructions I installed vLLM 0.12.0 on my H200 node. However, when I try to run it wi... | https://github.com/vllm-project/vllm/issues/30933 | open | [
"usage"
] | 2025-12-18T06:18:29Z | 2025-12-18T15:50:29Z | 1 | IKACE |
vllm-project/vllm | 30,923 | [Bug]: Using the official document's vLLM online method to deploy DeepSeek-OCR, the result is very bad, but using the offline method the result is normal. Why? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
the offline and online method is wo... | https://github.com/vllm-project/vllm/issues/30923 | closed | [
"bug"
] | 2025-12-18T04:14:33Z | 2025-12-18T04:25:20Z | 0 | git-liweichao |
vllm-project/vllm | 30,922 | [Bug]: Using the official document's vLLM online method to deploy DeepSeek-OCR, the result is very bad, but using the offline method the result is normal. Why? | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
I use https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md
the offline and online method is wo... | https://github.com/vllm-project/vllm/issues/30922 | open | [
"bug"
] | 2025-12-18T04:08:46Z | 2025-12-18T04:25:36Z | 1 | git-liweichao |
sgl-project/sglang | 15,359 | [Bug] The handling logic for tool_choice = 'auto' in the DeepseekV3.2 model may be incorrect. | ### Checklist
- [ ] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/15359 | closed | [] | 2025-12-18T02:47:26Z | 2025-12-18T03:36:38Z | 4 | JerryKwan |
huggingface/lerobot | 2,673 | Dataset v2 not working anymore | ### Ticket Type
Feature
### Environment & System Info
```Shell
- LeRobot version: 0.4.3
- Platform: macOS-26.2-arm64-arm-64bit
- Python version: 3.10.19
- Huggingface Hub version: 0.35.3
- Datasets version: 4.1.1
- Numpy version: 2.2.6
- FFmpeg version: 7.1.1
- PyTorch version: 2.7.1
- Is PyTorch built with CUDA sup... | https://github.com/huggingface/lerobot/issues/2673 | closed | [
"enhancement",
"question",
"dataset",
"dependencies",
"training"
] | 2025-12-17T21:35:31Z | 2025-12-17T23:26:54Z | null | imstevenpmwork |
huggingface/lerobot | 2,670 | Async inference for simulation (libero benchmark) | ### Issue Type
{"label" => "❓ Technical Question"}
### Environment & System Info
```Shell
```
### Description
Is there any way that we can support async inference for simulator (e.g., libero)? This makes it possible to test RTC with simulators.
### Context & Reproduction
A question re a feature.
### Expected... | https://github.com/huggingface/lerobot/issues/2670 | open | [
"question",
"simulation",
"performance",
"evaluation"
] | 2025-12-17T18:57:07Z | 2026-01-02T05:40:18Z | null | dywsjtu |
huggingface/transformers | 42,930 | Inconsistent handling of video_metadata in Qwen3VLVideoProcessor usage example | ### System Info
transformers==4.57.3
### Who can help?
@zucchini-nlp @yonigozlan @molbap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details belo... | https://github.com/huggingface/transformers/issues/42930 | closed | [
"bug"
] | 2025-12-17T17:21:00Z | 2025-12-18T10:32:23Z | 3 | wagoriginal |
vllm-project/vllm | 30,882 | [Bug]: Marlin Fp8 Block Quant Failure | ### Your current environment
<details>
<summary>The output of <code>python collect_env.py</code></summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
```bash
MODEL := "Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8"
#MODEL := "RedHatAI/Mixtral-8x7B-Instruct-v0.1-FP8"
launch... | https://github.com/vllm-project/vllm/issues/30882 | closed | [
"bug",
"help wanted",
"good first issue"
] | 2025-12-17T15:55:18Z | 2025-12-17T16:02:54Z | 2 | robertgshaw2-redhat |
vllm-project/vllm | 30,879 | [Doc]: Add some documentation about encoder compilation | ### 📚 The doc issue
I want something like a design doc for encoder compilation. For example:
- It uses support_torch_compile and set_model_tag to avoid cache collisions
- it supports or doesn't support the following features that VllmBackend does: cudagraphs, compile_ranges, and a high-level explanation for how these... | https://github.com/vllm-project/vllm/issues/30879 | open | [
"documentation",
"torch.compile"
] | 2025-12-17T15:44:50Z | 2025-12-17T16:27:38Z | 1 | zou3519 |
vllm-project/vllm | 30,865 | [Usage]:Tools GLM4.6v with vLLM | ### Your current environment
Hello,
I am running tests on this model, which I find excellent. However, I am encountering a few issues and would like to know whether it is possible to fix them or if I am simply asking for the impossible.
First of all, here is my vLLM configuration:
`docker run -d \ --name vllm-llm \... | https://github.com/vllm-project/vllm/issues/30865 | open | [
"usage"
] | 2025-12-17T10:51:34Z | 2025-12-18T08:33:44Z | 1 | qBrabus |
sgl-project/sglang | 15,321 | [Feature][VLM] Support ViT Piecewise CUDA Graph for VLMs | ### Checklist
- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [ ] Please use English. Otherwise, it will be closed.
### Motivation
Support ViT Piecewise CUDA Graph for VLMs can improve prefi... | https://github.com/sgl-project/sglang/issues/15321 | open | [
"performance",
"Multi-modal",
"vlm"
] | 2025-12-17T09:17:18Z | 2026-01-04T02:09:13Z | 0 | yuan-luo |
vllm-project/vllm | 30,859 | [Bug]: set_current_vllm_config() is only done during the initialization stage but not the runtime stage | ### Your current environment
Any env
### 🐛 Describe the bug
# Issue Statement
Currently, `set_current_vllm_config()` is only done during the initialization stage but not the runtime stage. If the code tries to call `get_current_vllm_config()`, vLLM prints a warning "Current vLLM config is not set." and returns a d... | https://github.com/vllm-project/vllm/issues/30859 | open | [
"bug"
] | 2025-12-17T08:59:49Z | 2025-12-22T18:09:55Z | 7 | nvpohanh |
sgl-project/sglang | 15,319 | [Feature] RFC: AutoSpec, Automatic Runtime Speculative Inference Parameter Tuning | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
## Summary
This proposal introduces automatic runtime tuni... | https://github.com/sgl-project/sglang/issues/15319 | open | [] | 2025-12-17T08:53:57Z | 2025-12-22T03:37:45Z | 3 | maodoudou168 |
vllm-project/vllm | 30,855 | [Usage]: Qwen3-30B-A3B-NVFP4 fails on Dell Pro Max GB10 with "no kernel image is available for execution on the device" | ### Your current environment
```
Hardware: Dell Pro Max GB10
OS: Ubuntu 24
CUDA: cuda_13.0.r13.0
Cuda compilation tools, release 13.0, V13.0.88;
vllm: V0.12.0
torch_version: 2.9.0+cu128
model: RedHatAI/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-NVFP4 or nvidia/Qwen3-30B-A3B-FP4
```
### How would you like to use... | https://github.com/vllm-project/vllm/issues/30855 | open | [
"usage"
] | 2025-12-17T08:44:11Z | 2025-12-17T08:44:11Z | 0 | nanbogong |
vllm-project/vllm | 30,847 | [Bug]: Qwen3-VL with Efficient Video Sampling (EVS) to trim video embeddings: the number of tokens after the timestamp in the prompt is not aligned with the actual number of tokens after pruning | ### Your current environment
<details>
vllm serve Qwen3-VL-8B --video-pruning-rate=0.75
messages=[
{
"role": "user",
"content": [
# {"type": "text", "text": "What's in this video?"},
{"type": "text", "text": "What do this video and this image each describe?"},
... | https://github.com/vllm-project/vllm/issues/30847 | open | [
"bug"
] | 2025-12-17T06:46:15Z | 2026-01-04T07:39:17Z | 5 | xshqhua |
vllm-project/vllm | 30,832 | [Performance]: DeepSeek-V3.2 on 8xH20 30 decode tokens/sec | ### Proposal to improve performance
**My Env:**
vllm 0.13.0rc2.dev178+g676db55ee
deep_gemm 2.1.1+c9f8b34
cuda. 12.9
python. 3.10.18
**command** is the same as:
vllm serve mypath/DeepSeek-V3.2 \
--tensor-parallel-size 8 \
--tokenizer-mode deepseek_v32 \
-... | https://github.com/vllm-project/vllm/issues/30832 | open | [
"performance"
] | 2025-12-17T03:08:52Z | 2025-12-18T08:01:30Z | 1 | lisp2025 |
pytorch/pytorch | 170,635 | Use cvt.rp.satfinite.ue8m0x2.f32 PTX instruction in Inductor codegen for mxfp8 quantization | ## Summary
For MXFP8 quantization, NVIDIA recommends using the "RCEIL" rounding mode to convert a fp32 scale factor to the e8m0 format for MXFP8. On Blackwell/sm100, they support a PTX instruction to convert fp32 scales to the e8m0 format for MXFP8 using a single instruction, rather than several operations: `cvt.rp.sa... | https://github.com/pytorch/pytorch/issues/170635 | open | [
"triaged",
"oncall: pt2",
"module: inductor",
"module: floatx (formerly float8)"
] | 2025-12-17T02:03:40Z | 2025-12-19T09:36:51Z | 0 | danielvegamyhre |
pytorch/pytorch | 170,604 | CUDAGraph capturing of iterating the same function/module (outside and inside fullgraph) | ### 🐛 Describe the bug
The example from https://docs.pytorch.org/docs/stable/torch.compiler_cudagraph_trees.html#limitations throws an error as warned in the docs:
```
RuntimeError: Error: accessing tensor output of CUDAGraphs that has been overwritten by a subsequent run. Stack trace: File ".../bug.py", line 7, in ... | https://github.com/pytorch/pytorch/issues/170604 | open | [
"module: cuda",
"triaged",
"module: cuda graphs"
] | 2025-12-16T22:07:19Z | 2025-12-17T05:01:56Z | 0 | vadimkantorov |
huggingface/candle | 3,247 | Parakeet V3 support? | Any plans to support Parakeet V3 by any chance? Thank you 🙏 | https://github.com/huggingface/candle/issues/3247 | open | [] | 2025-12-16T19:05:33Z | 2025-12-16T19:05:33Z | 0 | mobicham |
vllm-project/vllm | 30,798 | [Usage]: vllm offline server lora model | ### Your current environment
Hi team,
I have a question about deploying LoRA models with a vLLM offline server.
Currently, we have a base model **A**. After LoRA training, we obtain adapter parameters **P**. When we serve model A with vLLM (offline server) and enable LoRA, we can select either the **base model A**... | https://github.com/vllm-project/vllm/issues/30798 | open | [
"usage"
] | 2025-12-16T16:38:49Z | 2025-12-18T11:52:39Z | 4 | zapqqqwe |
sgl-project/sglang | 15,266 | Multi-Adapter Support for Embed Qwen3 8B Embedding Model | ### Checklist
- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
- [x] Please use English. Otherwise, it will be closed.
### Motivation
Hi Team, do we currently support multi-adapter (LoRA) suppo... | https://github.com/sgl-project/sglang/issues/15266 | open | [] | 2025-12-16T14:14:16Z | 2025-12-16T14:14:22Z | 0 | dawnik17 |
vllm-project/vllm | 30,776 | [Usage]: Qwen3-omni's offline usage | ### Your current environment
I used the code below in vllm==0.12.0, but it failed.
```
import os
import torch
from vllm import LLM, SamplingParams
from transformers import Qwen3OmniMoeProcessor
from qwen_omni_utils import process_mm_info
def build_input(processor, messages, use_audio_in_video):
text = processor.app... | https://github.com/vllm-project/vllm/issues/30776 | open | [
"bug",
"usage"
] | 2025-12-16T12:30:18Z | 2025-12-17T17:03:34Z | 50 | Auraithm |
sgl-project/sglang | 15,260 | SGLang installs newer PyTorch automatically; is there an official SGLang/PyTorch compatibility guide? | Hi SGLang team, thank you for the great project!
I have a question regarding **PyTorch version compatibility and installation**.
Currently, the recommended installation command from the website is:
```bash
uv pip install "sglang" --prerelease=allow
```
However, when using this command, `pip/uv` automatically upgrad... | https://github.com/sgl-project/sglang/issues/15260 | open | [] | 2025-12-16T12:27:59Z | 2025-12-16T12:27:59Z | 0 | David-19940718 |
vllm-project/vllm | 30,757 | [Performance]: Async sched: Why return AsyncGPUModelRunnerOutput until func sample_tokens | ### Proposal to improve performance
Why is AsyncGPUModelRunnerOutput returned only after sample_tokens, not immediately after execute_model?
https://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L420-L422
If we defer returning AsyncGPUModelRunnerOutput until after sa... | https://github.com/vllm-project/vllm/issues/30757 | open | [
"performance"
] | 2025-12-16T08:26:08Z | 2025-12-16T08:26:49Z | 0 | iwzbi |
pytorch/executorch | 16,271 | Android: load model from assets | ### 🚀 The feature, motivation and pitch
It is simple: there is no way to read a model directly from assets. Assets are files bundled into Android apps.
The assets are not handled the same way as regular files -- they can be accessed only through [assets manager](https://developer.android.com/reference/kotlin/andr... | https://github.com/pytorch/executorch/issues/16271 | open | [] | 2025-12-16T03:40:03Z | 2025-12-17T21:10:12Z | 2 | Bludator |
vllm-project/vllm | 30,736 | [Bug] DCP/DBO: 'NoneType' error building attention_metadata during DeepSeek-V3.1 deployment dummy run | ### Your current environment
```bash
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
Clang version : Could not collect
CMake version ... | https://github.com/vllm-project/vllm/issues/30736 | open | [
"bug",
"help wanted"
] | 2025-12-16T03:07:59Z | 2025-12-22T17:11:48Z | 3 | Butterfingrz |
huggingface/transformers.js | 1,487 | License clarification for some of the converted models | ### Question
Hello!
I want to use [Xenova/whisper-small](https://huggingface.co/Xenova/whisper-small) and [Xenova/UAE-Large-V1](https://huggingface.co/Xenova/UAE-Large-V1) in a project, but I noticed that these model cards on Hugging Face do not have a license specified in their metadata or README.
Since the origina... | https://github.com/huggingface/transformers.js/issues/1487 | closed | [
"question"
] | 2025-12-16T00:27:16Z | 2025-12-16T19:13:09Z | null | rmahdav |
vllm-project/vllm | 30,722 | [Bug]: llama4_pythonic tool parser fails with SyntaxError on nested list parameters | ### Your current environment
I don't have direct access to the cluster the model is running in. But it's running on 8x H100 GPUs using TP 8, expert parallel.
This is the fp8 model from Huggingface.
These are the vllm serve args I'm using:
VLLM Version: 0.11.0
```
--port 8002
--model /config/models/maverick
--de... | https://github.com/vllm-project/vllm/issues/30722 | open | [
"bug"
] | 2025-12-15T21:26:24Z | 2025-12-15T21:26:24Z | 0 | mphilippnv |
pytorch/executorch | 16,265 | viable/strict is advancing even if docker build failed | ### 🐛 Describe the bug
Can we block viable/strict advancement when docker build failed?
### Versions
CI only | https://github.com/pytorch/executorch/issues/16265 | closed | [] | 2025-12-15T20:42:25Z | 2025-12-17T22:56:47Z | 0 | kirklandsign |
pytorch/executorch | 16,263 | Android Documentation - Improve Llama example | ### 📚 The doc issue
Feedback from UnSloth on how to run Android llama example : https://docs.google.com/document/d/1GB3edTlBQfc4Ar0yiBTELKynhwa1hstwKhJxpq3ATVE/edit?tab=t.0
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/executorch/issues/16263 | open | [
"android_ux"
] | 2025-12-15T19:32:41Z | 2025-12-15T19:32:41Z | 0 | psiddh |
pytorch/executorch | 16,260 | Android UX: Prebuilt APKs for Android apps | This helps the overall E2E experience for devs: with the least friction, Android devs can install and test a prebuilt APK without having to set up the more cumbersome path of building from source.
- Llama Demo apk
- dl3 demo apk | https://github.com/pytorch/executorch/issues/16260 | open | [
"android_ux"
] | 2025-12-15T19:23:14Z | 2025-12-15T19:38:33Z | 0 | psiddh |
huggingface/tokenizers | 1,913 | Wrong and unsuppressable print when instantiating BPE | I am running Python code that is of the form
```python
from transformers import PreTrainedTokenizerFast
from tokenizers import Tokenizer
from tokenizers.models import BPE
vocab = {"a": 5, "b": 6, "ab": 7}
merges = [("a","b")]
backend_of_backend_of_backend = BPE(vocab=vocab, merges=merges, dropout=None)
backend_of_ba... | https://github.com/huggingface/tokenizers/issues/1913 | closed | [] | 2025-12-15T16:30:46Z | 2026-01-05T13:02:45Z | 4 | bauwenst |
pytorch/torchtitan | 2,153 | [Question] composable activation checkpoint | I'm looking for a way to apply activation checkpointing without using a module wrapper, and I found this https://github.com/pytorch/pytorch/pull/87664/files.
Does this method work fine, or is it just demo code? | https://github.com/pytorch/torchtitan/issues/2153 | open | [
"question"
] | 2025-12-15T13:54:08Z | 2025-12-16T22:25:59Z | null | Irvingwangjr |
vllm-project/vllm | 30,694 | [Feature]: CompressedTensors: NVFP4A16 not supported for MoE models | ### 🚀 The feature, motivation and pitch
NVFP4A16 (W4A16 FP4) quantization via compressed_tensors works for dense models but fails on MoE models like Qwen3-30B-A3B.
Looking at `compressed_tensors_moe.py`, `_is_fp4a16_nvfp4` is checked for Linear layers but not in `get_moe_method()` for FusedMoE. Only W4A4 has a MoE m... | https://github.com/vllm-project/vllm/issues/30694 | open | [
"feature request"
] | 2025-12-15T13:29:09Z | 2025-12-21T09:27:38Z | 2 | zhangyimi |
pytorch/pytorch | 170,426 | argmax over multiple axis | ### 🚀 The feature, motivation and pitch
Is there any chance we are getting `argmax` to work also on multiple axes?
I feel that the usage of [unravel_index](https://docs.pytorch.org/docs/stable/generated/torch.unravel_index.html) is so error prone that would make sense to just have it part of the library... and to be ... | https://github.com/pytorch/pytorch/issues/170426 | open | [
"triaged",
"module: python frontend"
] | 2025-12-15T11:03:36Z | 2025-12-18T15:37:31Z | 0 | AlbertoSinigaglia |
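The workaround the issue above alludes to (flatten, take the argmax, then unravel the flat index back into coordinates) can be sketched with the standard library only; with torch the equivalent would be `torch.unravel_index(t.flatten().argmax(), t.shape)`:

```python
# Pure-Python sketch of the flatten-then-unravel workaround for a 2-D argmax.
def argmax_2d(matrix):
    """Return the (row, col) of the largest element of a 2-D list."""
    n_cols = len(matrix[0])
    flat = [v for row in matrix for v in row]          # flatten row-major
    flat_idx = max(range(len(flat)), key=flat.__getitem__)
    return divmod(flat_idx, n_cols)                    # unravel: (flat_idx // n_cols, flat_idx % n_cols)

print(argmax_2d([[1, 9, 2], [3, 4, 5]]))  # -> (0, 1)
```

The `divmod` step is exactly what `unravel_index` automates for arbitrary shapes, which is why the issue argues the multi-axis case deserves first-class support.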
vllm-project/vllm | 30,685 | [Feature]: fp8 kv cache for finer-grained scaling factors (e.g., per channel). | ### 🚀 The feature, motivation and pitch
Currently, the FP8 KV cache feature (in the FlashMLA interface) only supports per-tensor (scalar) scaling factors. Are you developing support for finer-grained scaling factors (e.g., per-channel)? If so, when can we expect the FP8 KV cache with such finer-grained scaling factor... | https://github.com/vllm-project/vllm/issues/30685 | open | [
"feature request"
] | 2025-12-15T09:32:48Z | 2025-12-15T09:32:48Z | 0 | zx-ai |
huggingface/transformers | 42,868 | sdpa_paged: How does it handle paged cache without padding? | Hi @ArthurZucker ,
I was analyzing the [sdpa_paged](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/sdpa_paged.py#L18) implementation and found the approach quite fascinating. I have a question regarding how the input shapes are handled.
If I have a batch of 4 sequences with length... | https://github.com/huggingface/transformers/issues/42868 | closed | [] | 2025-12-15T08:39:00Z | 2025-12-16T03:08:27Z | 4 | jiqing-feng |
pytorch/executorch | 16,244 | How to let executorch export intput output int8 | Hi,
I use https://github.com/pytorch/executorch/blob/main/examples/arm/ethos_u_minimal_example.ipynb to export example model and run on FVP.
The output is 2.0(float).
But when I modify the code so that the input and output are int8, the output on the FVP shows 1 (char).
I think this is wrong. How can I fix it?
<img w... | https://github.com/pytorch/executorch/issues/16244 | open | [
"partner: arm"
] | 2025-12-15T06:45:32Z | 2025-12-24T01:36:24Z | null | kris-himax |
huggingface/trl | 4,692 | LLVM error during GRPO training with Apple M4 Max | I have the below error while doing GRPO training. I am using HuggingFace example codes for GRPO. I couldn't run the model on MPS because of this issue.
How can I run GRPO on MPS?
loc("mps_matmul"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/S... | https://github.com/huggingface/trl/issues/4692 | open | [
"๐ bug",
"๐ GRPO"
] | 2025-12-14T23:01:49Z | 2025-12-14T23:02:11Z | 0 | neslihaneti |
vllm-project/vllm | 30,654 | [Feature][Attention][UX]: Incorporate Features into Attention Selection | ### 🚀 The feature, motivation and pitch
SUMMARY:
* we have default attention backends by priority and a notion of which backend supports what hw
* however, certain features are not considered in this (e.g. fp8 kv cache, e.g. attention sinks)
Recent example, we had test failures because we updated the logic to load k... | https://github.com/vllm-project/vllm/issues/30654 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-14T18:04:14Z | 2025-12-30T05:38:40Z | 11 | robertgshaw2-redhat |
pytorch/pytorch | 170,400 | Clarify inverted boolean mask logic between nn.MultiHeadAttention and F.scaled_dot_product_attention | ### 📚 The doc issue
### Motivation
I am opening this issue to suggest a documentation improvement regarding a common "gotcha" when migrating between `nn.MultiHeadAttention` (MHA) and `F.scaled_dot_product_attention` (SDPA).
Many users (including myself) have noticed that the boolean mask semantics are inverted b... | https://github.com/pytorch/pytorch/issues/170400 | closed | [
"module: docs",
"module: nn",
"triaged",
"module: sdpa"
] | 2025-12-14T13:07:19Z | 2025-12-23T20:44:24Z | 1 | konodiodaaaaa1 |
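The "gotcha" the issue above describes is that `nn.MultiheadAttention`'s boolean `key_padding_mask` uses True to mean "ignore this position", while `F.scaled_dot_product_attention`'s boolean `attn_mask` uses True to mean "attend to this position". A minimal sketch of the conversion, using plain lists in place of tensors:

```python
# Converting between the two inverted boolean-mask conventions:
#   nn.MultiheadAttention key_padding_mask:        True = position is IGNORED
#   F.scaled_dot_product_attention bool attn_mask: True = position is ATTENDED
# The conversion is an element-wise NOT.

def mha_mask_to_sdpa(key_padding_mask):
    """Invert an MHA-style padding mask into an SDPA-style keep mask."""
    return [not ignore for ignore in key_padding_mask]

mha_mask = [False, False, True, True]  # last two key positions are padding
print(mha_mask_to_sdpa(mha_mask))      # -> [True, True, False, False]
```

With real tensors the same conversion is simply `sdpa_mask = ~key_padding_mask` (broadcast to the attention shape as needed).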
huggingface/diffusers | 12,838 | Merge Loras for FLUX | The issue is based on https://huggingface.co/docs/diffusers/main/using-diffusers/merge_loras
Is there a similar procedure for merging loras for FLUX models? The guide seems to be specific for UNet based methods. I'm working on FLUX-dev and I would like to perform a linear merge of my loras. | https://github.com/huggingface/diffusers/issues/12838 | open | [] | 2025-12-14T12:39:41Z | 2025-12-14T12:39:41Z | 0 | shrikrishnalolla |
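A linear LoRA merge, as asked about above, is just a weighted sum of matching weight tensors. A minimal sketch with plain floats standing in for tensors; in diffusers the higher-level route to try would be loading both LoRAs and calling `pipe.set_adapters(["a", "b"], adapter_weights=[0.7, 0.3])`, though its availability for FLUX pipelines should be verified:

```python
# Linear merge of two LoRA state dicts that target the same modules.
# Floats stand in for tensors; with torch tensors the same dict
# comprehension works unchanged.

def merge_lora_state_dicts(sd_a, sd_b, w_a=0.5, w_b=0.5):
    """Return a new state dict whose values are w_a*a + w_b*b per key."""
    assert sd_a.keys() == sd_b.keys(), "LoRAs must target the same modules"
    return {k: w_a * sd_a[k] + w_b * sd_b[k] for k in sd_a}

a = {"transformer.lora_A": 1.0, "transformer.lora_B": 2.0}
b = {"transformer.lora_A": 3.0, "transformer.lora_B": 4.0}
print(merge_lora_state_dicts(a, b))  # -> {'transformer.lora_A': 2.0, 'transformer.lora_B': 3.0}
```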
vllm-project/vllm | 30,633 | [Installation]: How to install vLLM 0.11.0 with CUDA < 12.9 (Driver 535)? No matching wheels found | ### Your current environment
I'm trying to install vLLM 0.11.0 on a machine with NVIDIA Driver 535, and I ran into issues related to CUDA version compatibility.
Environment
OS: Linux (Ubuntu 20.04 / 22.04)
GPU: NVIDIA GPU H20
NVIDIA Driver: 535.xx
Python: 3.10
vLLM version: 0.11.0
Problem
According to the rel... | https://github.com/vllm-project/vllm/issues/30633 | open | [
"installation"
] | 2025-12-14T04:29:41Z | 2026-01-01T16:50:50Z | 1 | whu125 |
vllm-project/vllm | 30,630 | [Usage]: SymmMemCommunicator: Device capability 10.3 not supported | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi, I am seeing the following warning when using vllm serve on B300 instances.
```
WARNING 12-13 16:31:15 [symm_mem.py:67] SymmMemCommunicator: Device capability 10.3 not supported, communicator is not available... | https://github.com/vllm-project/vllm/issues/30630 | open | [
"usage",
"nvidia"
] | 2025-12-14T01:00:34Z | 2025-12-18T21:17:42Z | 4 | navmarri14 |
huggingface/transformers.js | 1,484 | Should npm @xenova/transformers be deleted or marked deprecated? | ### Question
Hello,
I was surprised that none of the models I tried were supported by transformerjs, even if they were using transformerjs in their README, until I realized that I was using the old npm package.
Shouldn't this package be removed ? Or marked as deprecated in favour of huggingface's ?
Best, | https://github.com/huggingface/transformers.js/issues/1484 | open | [
"question"
] | 2025-12-13T19:49:08Z | 2025-12-17T12:21:12Z | null | matthieu-talbot-ergonomia |
huggingface/tokenizers | 1,910 | [Docs] `Visualizer` dead links | It seems like documentation for `Visualizer` is out of date and all the links return 404.
Docs: https://huggingface.co/docs/tokenizers/api/visualizer
Github Source: https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/tools/visualizer.py | https://github.com/huggingface/tokenizers/issues/1910 | open | [] | 2025-12-13T19:23:33Z | 2025-12-13T19:23:33Z | 0 | dudeperf3ct |
vllm-project/vllm | 30,621 | [Feature]: Remove MXFP4 Logic From `fused_experts` | ### 🚀 The feature, motivation and pitch
SUMMARY:
* as part of effort to refactor MoE, trying to reduce cruft
* we currently only have MX emulation in vLLM
* the logic for this emulation should be moved into quark
https://github.com/vllm-project/vllm/blame/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1866-... | https://github.com/vllm-project/vllm/issues/30621 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:30:30Z | 2026-01-04T14:47:45Z | 13 | robertgshaw2-redhat |
vllm-project/vllm | 30,620 | [Feature]: Remove Chunking From FusedMoE | ### 🚀 The feature, motivation and pitch
* we have some chunking logic in the triton kernels to avoid IMA: https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1807
* we chunk in ~65k tokens
* this case does not happen anymore because of chunked prefill
We should remove th... | https://github.com/vllm-project/vllm/issues/30620 | open | [
"help wanted",
"good first issue",
"feature request"
] | 2025-12-13T18:22:30Z | 2025-12-13T23:27:22Z | 3 | robertgshaw2-redhat |
pytorch/pytorch | 170,361 | [Dynamo] Use VariableBuilder/SourcelessBuilder consistently | There are many places in Dynamo where we directly call a VariableTracker subclass' `create`/`__init__` from a different VariableTracker's, e.g. `call_function`, `var_getattr`. This was done in order to skip the overhead required to go through `VariableBuilder`/`SourcelessBuilder`.
However, this has resulted in a numbe... | https://github.com/pytorch/pytorch/issues/170361 | open | [
"triaged",
"oncall: pt2",
"module: dynamo",
"dynamo-variable-tracker"
] | 2025-12-13T01:26:10Z | 2025-12-13T02:18:29Z | 1 | williamwen42 |
vllm-project/vllm | 30,570 | [Usage]: Why is VLLM still using SSE at all for mcp? | ### Your current environment
This is a broad question: why is vllm still using/hardcoding SSE at all, when it's been deprecated for well over six months at this point?
### How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
#... | https://github.com/vllm-project/vllm/issues/30570 | open | [
"usage"
] | 2025-12-12T20:02:08Z | 2025-12-18T10:50:37Z | 1 | bags307 |
pytorch/pytorch | 170,320 | Can't find 'action.yml', 'action.yaml' or 'Dockerfile' under '/home/ec2-user/actions-runner/_work/pytorch/pytorch/.github/actions/check-tpu' | > NOTE: Remember to label this issue with "`ci: sev`"
> If you want autorevert to be disabled, keep the ci: disable-autorevert label
<!-- Add the `merge blocking` label to this PR to prevent PRs from being merged while this issue is open -->
> [!IMPORTANT]
> Comment the following on your PR to rebase
> ```
> @... | https://github.com/pytorch/pytorch/issues/170320 | closed | [
"ci: sev"
] | 2025-12-12T19:30:03Z | 2025-12-14T15:36:06Z | 1 | seemethere |
pytorch/pytorch | 170,302 | DISABLED test_opaque_obj_training_ir_to_decomp_nonstrict (__main__.TrainingIRToRunDecompExportNonStrictTestExport) | Platforms: rocm, xpu
This test was disabled because it is failing on [main and PRs](https://hud.pytorch.org/failure?name=rocm-mi200%20%2F%20linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux.rocm.gpu.2%2C%20unstable)&jobName=linux-jammy-rocm-py3.10%20%2F%20test%20(default%2C%201%2C%206%2C%20linux... | https://github.com/pytorch/pytorch/issues/170302 | open | [
"triaged",
"skipped",
"rocm-skipped-tests"
] | 2025-12-12T16:04:41Z | 2025-12-25T00:24:56Z | 2 | jithunnair-amd |
pytorch/pytorch | 170,293 | [wheels] Missing CUDA wheels for pytorch<2.6.0 | ### 🐛 Describe the bug
For older versions of pytorch<2.6.0, the CUDA wheels cannot be reached anymore.
System: Windows-11-10.0.22631-SP0
Python version: 3.13
Using pip 25.3
Example of failing installation:
` pip install torch==2.5.1 --index-url https://download.pytorch.org/whl/cu124 --isolated --verbose`
Output i... | https://github.com/pytorch/pytorch/issues/170293 | closed | [] | 2025-12-12T10:33:42Z | 2025-12-12T12:02:58Z | 1 | guibruand |
sgl-project/sglang | 14,984 | Can the source code compilation and installation of sgl-kernel support the SM86 driver for CUDA12.9 | ### Checklist
- [x] I searched related issues but found no solution.
- [ ] The bug persists in the latest version.
- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
- [ ] If this is not a bug report but a general question, please start a discussion a... | https://github.com/sgl-project/sglang/issues/14984 | open | [] | 2025-12-12T10:29:50Z | 2025-12-15T09:41:18Z | 1 | zwt-1234 |
vllm-project/vllm | 30,548 | [Feature]: Support for Q.ANT Photonic Computing ? | ### 🚀 The feature, motivation and pitch
https://qant.com/
https://qant.com/wp-content/uploads/2025/11/20251111_QANT-Photonic-AI-Accelerator-Gen-2.pdf
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues... | https://github.com/vllm-project/vllm/issues/30548 | open | [
"feature request"
] | 2025-12-12T10:16:53Z | 2025-12-12T14:45:53Z | 2 | plitc |
pytorch/data | 1,520 | Are there any plans to optimize the fetcher_state in StatefulDataLoader? | Since `_IterableDatasetFetcher` has no state attribute: https://github.com/pytorch/pytorch/blob/v2.6.0/torch/utils/data/_utils/fetch.py#L19, and the current `fetcher_state:dataset_iter_state` is None: https://github.com/meta-pytorch/data/blob/v0.11.0/torchdata/stateful_dataloader/worker.py#L277, could this cause prefet... | https://github.com/meta-pytorch/data/issues/1520 | open | [] | 2025-12-12T09:50:08Z | 2025-12-17T05:23:35Z | 5 | howitry |
huggingface/tokenizers | 1,909 | [Docs] `Encode Inputs` rendering issues | It seems like the documentation for Encode Inputs is not rendered properly.
Official URL: https://huggingface.co/docs/tokenizers/main/en/api/encode-inputs?code=python
GitHub URL: https://github.com/huggingface/tokenizers/blob/main/docs/source-doc-builder/api/encode-inputs.mdx | https://github.com/huggingface/tokenizers/issues/1909 | open | [] | 2025-12-12T09:47:48Z | 2025-12-12T09:47:48Z | 0 | ariG23498 |
pytorch/pytorch | 170,286 | Can torch have relaxed dependencies instead of strict dependencies on nvidia-cuda-runtime | ### 🐛 Describe the bug
Right now, torch uses strict == pins for these packages (see
https://github.com/pytorch/pytorch/blob/main/.github/scripts/generate_binary_build_matrix.py#L106C2-L123C7).
Is there a specific reason these must be strict == requirements? Would it be possible to relax them to version ranges instead... | https://github.com/pytorch/pytorch/issues/170286 | closed | [
"module: binaries",
"triaged"
] | 2025-12-12T08:39:43Z | 2025-12-13T00:30:46Z | 3 | lanluo-nvidia |
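The relaxation the issue above asks for is the difference between exact pins and version ranges in the generated requirements. An illustrative fragment (the version numbers here are hypothetical, not taken from the build matrix):

```text
# strict pin (current style of generate_binary_build_matrix.py):
nvidia-cuda-runtime-cu12==12.8.90      # hypothetical exact version
# relaxed range (what the issue proposes):
nvidia-cuda-runtime-cu12>=12.8,<13
```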
vllm-project/vllm | 30,541 | [Usage]: missing dsml token "| DSML | " with DeepSeek-V3.2 tools call | ### Your current environment
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : Could not c... | https://github.com/vllm-project/vllm/issues/30541 | open | [
"usage"
] | 2025-12-12T06:47:03Z | 2025-12-12T20:59:40Z | 1 | crischeng |
pytorch/executorch | 16,217 | make building stop at Built target portable_kernels | Hey, i want to export llama pte model and deploy it on SA8255 device, i refered to https://github.com/pytorch/executorch/blob/main/examples/models/llama/README.md and https://docs.pytorch.ac.cn/executorch/stable/llm/build-run-llama3-qualcomm-ai-engine-direct-backend.html, but when i Built llama runner binary for Androi... | https://github.com/pytorch/executorch/issues/16217 | open | [
"partner: qualcomm",
"module: qnn"
] | 2025-12-12T03:24:46Z | 2025-12-21T00:59:11Z | 16 | imjking |
vllm-project/vllm | 30,511 | Potential Deadlock? | Consider using proper synchronization primitives like threading.Event or queue.Queue.get(timeout=...) | https://github.com/vllm-project/vllm/issues/30511 | closed | [] | 2025-12-11T19:57:43Z | 2025-12-12T18:00:20Z | 1 | ChuanLi1101 |
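The suggestion in the row above can be sketched directly with the standard library: bounding a blocking wait with a timeout so a missing producer cannot deadlock the consumer forever.

```python
# queue.Queue.get(timeout=...) raises queue.Empty instead of hanging,
# which turns a potential deadlock into a recoverable timeout.
import queue
import threading

q: "queue.Queue[int]" = queue.Queue()

def producer() -> None:
    q.put(42)

t = threading.Thread(target=producer)
t.start()

item = None
try:
    item = q.get(timeout=1.0)  # bounded wait instead of a blocking get()
    print("got", item)
except queue.Empty:
    print("timed out instead of deadlocking")
t.join()
```

`threading.Event.wait(timeout=...)` offers the same bounded-wait pattern when there is no payload to hand over.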
sgl-project/sglang | 14,903 | Does the current Qwen3-VL (or Qwen3-VL-MoE) officially support TBO? | Hi team,
I noticed that Qwen3-VL and Qwen3-MoE adopt different model architectures.
When profiling the execution path, I found that:
Qwen3-MoE eventually falls back to the Qwen2-MoE implementation, which explicitly supports TBO (Two-Batch Overlap).
However, Qwen3-VL takes the path of Qwen3-VL-MoE, and I did not find... | https://github.com/sgl-project/sglang/issues/14903 | open | [] | 2025-12-11T13:26:50Z | 2025-12-11T13:26:50Z | 0 | jerry-dream-fu |
pytorch/pytorch | 170,183 | [docs] Unable to `git clone` PyTorch wiki on Windows due to colon(`:`) in filename | ### 📚 The doc issue
> Summary : `git checkout` fails when trying to clone the PyTorch wiki on Windows OS.
Windows filesystems do not allow the use of colons (`:`) in filenames.
However, the wiki currently contains a page titled: [PyTorch CI Metrics Dashboards: the HUD](https://github.com/pytorch/pytorch/wiki/PyTorch... | https://github.com/pytorch/pytorch/issues/170183 | open | [
"module: windows",
"triaged",
"module: infra"
] | 2025-12-11T13:00:52Z | 2025-12-15T18:07:48Z | 6 | daehyun99 |
huggingface/transformers | 42,804 | [`Quantization FP8`] Native `from_config` support | ### Feature request
Related to https://github.com/huggingface/transformers/pull/42028#discussion_r2592235170
Since FP8 is becoming more and more standard, it would be nice to create fp8 native models via config or more like using `from_config`. Atm, quant configs are not respected apparently - either that or we need ... | https://github.com/huggingface/transformers/issues/42804 | open | [
"Feature request"
] | 2025-12-11T10:17:47Z | 2025-12-14T22:49:48Z | 3 | vasqu |
huggingface/trl | 4,679 | [SFT] High vRAM consumption during eval loop | ### Reproduction
### Unexpected behavior
When training a model on large sequences (>=20k tokens) with `PEFT LoRA` + `SFTTrainer` + `liger-kernel`, the vRAM usage spikes during the evaluation loop, consuming way more vRAM than during the training.
The size of this vRAM spike seems to scale with the length of the input... | https://github.com/huggingface/trl/issues/4679 | open | [
"๐ bug",
"๐ SFT",
"โก PEFT"
] | 2025-12-11T10:01:49Z | 2026-01-02T09:23:17Z | 3 | Khreas |
vllm-project/vllm | 30,477 | [Usage]: How to disable thinking for Qwen-8B | ### Your current environment
```text
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 24.04.3 LTS (x86_64)
GCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version : Cou... | https://github.com/vllm-project/vllm/issues/30477 | closed | [
"usage"
] | 2025-12-11T09:28:40Z | 2025-12-22T06:10:43Z | 3 | fancyerii |
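For Qwen3-family models, a commonly used way to disable thinking with vLLM's OpenAI-compatible server is to pass `chat_template_kwargs` in the chat completions request; whether this applies to the exact model in the issue above should be verified against its chat template.

```json
{
  "model": "Qwen/Qwen3-8B",
  "messages": [{"role": "user", "content": "Hello"}],
  "chat_template_kwargs": {"enable_thinking": false}
}
```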
huggingface/diffusers | 12,823 | How to use quantizer after pipeline loaded? | How to use quantizer after pipeline loaded?
- Currently
```python
# Quantization occurs at load time.
pipe = QwenImagePipeline.from_pretrained(
(
args.model_path
if args.model_path is not None
else os.environ.get(
"QWEN_IMAGE_DIR",
"Qwen/Qwen-Image",
)
... | https://github.com/huggingface/diffusers/issues/12823 | open | [] | 2025-12-11T06:32:38Z | 2025-12-11T14:18:28Z | null | DefTruth |
huggingface/transformers | 42,794 | `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation. | ### System Info
latest transformers
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction... | https://github.com/huggingface/transformers/issues/42794 | closed | [
"bug"
] | 2025-12-11T06:22:58Z | 2025-12-18T18:33:40Z | 1 | jiqing-feng |
vllm-project/vllm | 30,464 | [Usage]: How can I use the local pre-compiled wheel of vllm | ### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Every time I use `VLLM_USE_PRECOMPILED=1 uv pip install --editable .` to build vllm, it takes a long time to download the pre-compiled wheel. Would it be possible to build it by using a locally dow... | https://github.com/vllm-project/vllm/issues/30464 | open | [
"usage"
] | 2025-12-11T06:22:43Z | 2025-12-12T01:02:22Z | 1 | gcanlin |
huggingface/transformers | 42,791 | Add support for GPT_OSS with tp_plan or enable native tensor parallelism | ### Model description
#[https://huggingface.co/docs/transformers/main/perf_infer_gpu_multi?tp_plan=auto+plan](url)
> https://github.com/huggingface/transformers/issues/41819
There are a list of supported models here, but GPT-OSS is not one of them. Please add support for GPT_OSS too to enable `tp_plan`. Please help... | https://github.com/huggingface/transformers/issues/42791 | open | [
"New model"
] | 2025-12-11T04:31:19Z | 2025-12-19T08:38:31Z | 1 | quic-akuruvil |
sgl-project/sglang | 14,868 | How to train vicuna EAGLE3 model? | I have carefully reviewed the official tutorials and source code, but I was unable to find the relevant config and template files specific to Vicuna.
Could you please provide an example, specifically regarding the template structure? | https://github.com/sgl-project/sglang/issues/14868 | open | [] | 2025-12-11T03:59:39Z | 2025-12-11T03:59:39Z | 0 | Sylvan820 |
vllm-project/vllm | 30,447 | [Usage]: how to load kv cache data into local file | ### Your current environment
python3.10 + vllm 0.10.0
### How would you like to use vllm
I want to get int8 kv cache data from [qwen-int8](https://www.modelscope.cn/models/Qwen/Qwen-7B-Chat-Int8). I don't know if vllm can do that. Thank you.
### Before submitting a new issue...
- [x] Make sure you already searched... | https://github.com/vllm-project/vllm/issues/30447 | open | [
"usage"
] | 2025-12-11T01:43:58Z | 2025-12-12T15:11:50Z | 1 | chx725 |
vllm-project/vllm | 30,441 | [Usage]: vllm serve setup issues on B300 | ### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
uv is set
==============================
System Info
==============================
OS : Amazon Linux 2023.9.20251208 (x86_64)
GCC version : (GCC) 11.5.0... | https://github.com/vllm-project/vllm/issues/30441 | open | [
"usage"
] | 2025-12-10T23:50:27Z | 2025-12-13T02:01:04Z | 1 | navmarri14 |