
[V1][Spec Decode] Apply torch.compile & cudagraph to EAGLE #17211


Merged: 8 commits into vllm-project:main on Apr 29, 2025

Conversation

@luyuzhe111 (Contributor) commented Apr 26, 2025

Task 8 of #15901

A few notes regarding the implementation.

torch.compile

  1. @support_torch_compile is convenient, but it requires a specific signature for the model. The changes in vllm/model_executor/models/llama_eagle.py address this requirement.
  2. Further, to make torch.compile work, we need a separate cache directory for the EAGLE model. Without the edits in vllm/compilation/backends.py, we would not be able to cache EAGLE's compilation properly, since the EAGLE module is registered under the vllm config of the target model (see here).
  3. One notable torch.compile-related bug is this line. Essentially, the default data type for input ids is int32, and the EAGLE model was compiled with this data type; however, tensor.argmax() returns int64 by default. Feeding int64 input ids to the compiled model completely messes things up and leads to gibberish draft tokens. Currently the compiled model does not even warn when the input data type mismatches. I wonder whether we can prevent similar bugs in the future with some additional checks (see the sketch after this list).
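
A minimal sketch of the dtype pitfall in point 3 (a standalone illustration, not the actual vLLM code):

```python
import torch

# The draft model is compiled with int32 input ids, but torch.argmax
# returns int64 by default, so the sampled draft token ids must be cast
# back before the next compiled forward pass.
logits = torch.randn(4, 32000)            # [num_tokens, vocab_size]
draft_token_ids = logits.argmax(dim=-1)   # dtype is torch.int64
assert draft_token_ids.dtype == torch.int64

# Feeding int64 ids into a graph traced for int32 silently mismatches;
# an explicit cast restores the dtype the compiled model expects.
input_ids = draft_token_ids.to(torch.int32)
assert input_ids.dtype == torch.int32
```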

CudaGraph

Changes in vllm/v1/spec_decode/eagle.py and vllm/v1/worker/gpu_model_runner.py are mostly for CudaGraph. Nothing fancy other than registering additional persistent buffers and making sure EAGLE's forward pass uses them (a generic version of this pattern is sketched below). I do want to mention that with torch.compile & CudaGraph, the EAGLE model's forward pass has been drastically improved (2.5x faster), which makes the small but abundant torch operations look inefficient. Any advice on further optimizing these overheads is greatly appreciated.
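
For readers unfamiliar with the buffer requirement, this is the generic persistent-buffer pattern for CUDA graphs in PyTorch; a sketch of the idea only, not the actual eagle.py / gpu_model_runner.py code:

```python
import torch

model = torch.nn.Linear(4096, 4096).cuda()
static_input = torch.zeros(8, 4096, device="cuda")  # persistent buffer

# Warm up on a side stream before capture, as the PyTorch docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass; the graph records fixed buffer addresses.
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    static_output = model(static_input)

# Per-step usage: copy new data into the persistent buffer, then replay.
new_input = torch.randn(8, 4096, device="cuda")
static_input.copy_(new_input)
graph.replay()  # static_output now holds the result for new_input
```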

Finally, it would be great to have #17010 reviewed and merged so that we don't have to pull in other PRs to test the acceptance length.

Note that the current PR does not directly enable torch.compile & cuda graph for EAGLE-3. I think that is worth a separate PR, since the work is non-trivial: EAGLE-3's input hidden states have dynamic shapes. Maybe @benchislett could help.

@WoosukKwon @LiuXiaoxuanPKU

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Apr 26, 2025

mergify bot commented Apr 26, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @luyuzhe111.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Apr 26, 2025
@luyuzhe111 luyuzhe111 changed the title Apply torch.compile & cudagraph to EAGLE [V1][Spec Decode] Apply torch.compile & cudagraph to EAGLE Apr 26, 2025
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
@mergify mergify bot added the documentation Improvements or additions to documentation label Apr 26, 2025
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
@WoosukKwon (Collaborator)

Thanks for the PR. This is so cool!
I’ll take a look, but it would be great if we could also get @youkaichao’s review.

@WoosukKwon (Collaborator) left a comment

@luyuzhe111 Thanks for the PR!

Left some comments. Please take a look!

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
@WoosukKwon (Collaborator) left a comment

@luyuzhe111 Thanks for addressing the comments! One last thing: Can you please add tests for this?

@luyuzhe111 (Contributor, Author) commented Apr 29, 2025

Hi @ekagra-ranjan, thanks for this PR! Wondering if you could create a follow-up PR that makes the draft model take in vllm_config instead of model_config, if you are interested? This is a requirement for compatibility with the torch.compile decorator. In this PR I will just add a condition to handle EAGLE-1 and EAGLE-3 models separately. Thanks! cc @WoosukKwon

@luyuzhe111 (Contributor, Author)

> One last thing: Can you please add tests for this?

@WoosukKwon I feel like acceptance length tests are probably the most meaningful tests for this PR. I tested the acceptance length by cherry-picking commits from this PR. You can see that the difference is less than 0.01.

With meta-llama/Llama-3.1-8B-Instruct, yuhuili/EAGLE-LLaMA3.1-Instruct-8B

On MT Bench

With max number of generated tokens = 256:

| Number of Speculated Tokens | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| Eager | 1.71 | 2.09 | 2.30 | 2.38 | 2.43 |
| Compilation & CudaGraph | 1.70 | 2.10 | 2.29 | 2.38 | 2.43 |

I can do a follow-up PR adding acceptance length tests after #17010 is merged.
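
For context, the acceptance length reported above is the mean number of tokens the target model emits per forward pass: the bonus token plus however many draft tokens were accepted. A sketch of the metric, assuming per-step accepted-draft counts are available:

```python
def mean_acceptance_length(accepted_per_step: list[int]) -> float:
    # Each target-model step emits 1 token plus the accepted draft tokens,
    # so the metric ranges from 1.0 to num_spec_tokens + 1.
    return sum(1 + a for a in accepted_per_step) / len(accepted_per_step)

print(mean_acceptance_length([1, 2, 0, 3]))  # -> 2.5
```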

@ekagra-ranjan (Contributor) commented Apr 29, 2025

Thank you @luyuzhe111 for the PR!
Wondering if you have any benchmark of the output tokens/s speedup with and without cuda graph + torch.compile for EAGLE Llama 3.1 on MT-Bench at TP1, BS1?

> I do want to mention that with torch.compile & CudaGraph, the EAGLE model's forward pass has been drastically improved (2.5x faster), which makes the small but abundant torch operations look inefficient.

This is fantastic!
Could you share the script you used to measure just EAGLE's forward pass?
By "abundant torch operations", do you mean the torch ops outside the forward pass but within propose()?

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
@luyuzhe111 (Contributor, Author)

Hey @ekagra-ranjan, re:

  1. I remember you had some numbers for benchmarking on MT-Bench. Can you share the setup so that I can run the benchmarking again with torch.compile + cuda graph? Thanks!
  2. For the forward pass I had to look at the profiler, so there is no script (one way to isolate it is sketched after this list).
  3. Right, I meant the operations that prepare inputs for the EAGLE model.
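
A sketch of how one might isolate the draft forward pass with torch.profiler, which is roughly what reading the profiler amounts to here; DummyDraft is a stand-in for the EAGLE head, not a vLLM API:

```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

class DummyDraft(torch.nn.Module):  # stand-in for the EAGLE head
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(4096, 4096)

    def forward(self, hidden_states):
        return self.proj(hidden_states)

draft = DummyDraft().cuda()
x = torch.randn(8, 4096, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    with record_function("eagle_forward"):
        draft(x)

# Sorting by CUDA time separates the forward-pass kernels from the many
# small input-preparation ops around them.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=20))
```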

@WoosukKwon WoosukKwon added the ready ONLY add when PR is ready to merge/full CI is needed label Apr 29, 2025
@WoosukKwon (Collaborator) left a comment

LGTM. @youkaichao Any final comments?

@ekagra-ranjan (Contributor)

@luyuzhe111 - here is an example benchmark setup. Looking forward to the results!

@WoosukKwon WoosukKwon enabled auto-merge (squash) April 29, 2025 21:08
@WoosukKwon WoosukKwon merged commit 70788bd into vllm-project:main Apr 29, 2025
69 checks passed
@luyuzhe111 (Contributor, Author)

@ekagra-ranjan I got the following results:

Target model: meta-llama/Llama-3.1-8B-Instruct
EAGLE model: yuhuili/EAGLE-LLaMA3.1-Instruct-8B
Hardware: A100 (40GB)
Script: VLLM_USE_V1=1 python examples/offline_inference/eagle.py --dataset="./data/mt_bench/question.jsonl" --num_spec_tokens x --max_num_seqs 1 --num_prompts 80

Regular Decoding OTPS: 72
[screenshot of benchmark results]

It looks like a further 10% speedup.

@zou3519 (Collaborator) commented May 1, 2025

> One notable torch.compile-related bug is this line. Essentially, the default data type for input ids is int32, and the EAGLE model was compiled with this data type; however, tensor.argmax() returns int64 by default. Feeding int64 input ids to the compiled model completely messes things up and leads to gibberish draft tokens. Currently the compiled model does not even warn when the input data type mismatches. I wonder whether we can prevent similar bugs in the future with some additional checks.

These checks lead to performance degradation, so vLLM decided to drop all of them. Some of the lower compilation_levels (e.g. 1) should do the checking, but those also behave slightly differently in other respects.
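
One cheap, opt-in possibility in the spirit of that discussion (an illustration, not an existing vLLM flag): validate dtypes against what the model was compiled with only when a debug switch is on, so the hot path stays check-free.

```python
import torch

def assert_dtypes(debug: bool,
                  tensors: dict[str, torch.Tensor],
                  expected: dict[str, torch.dtype]) -> None:
    """Raise if any input dtype differs from what the graph was compiled with."""
    if not debug:
        return  # zero overhead in production
    for name, dtype in expected.items():
        actual = tensors[name].dtype
        if actual != dtype:
            raise TypeError(f"{name}: compiled for {dtype}, got {actual}")

# This would catch the int64-ids-into-int32-graph bug described earlier:
ids = torch.randn(4, 100).argmax(dim=-1)  # int64 from argmax
assert_dtypes(True, {"input_ids": ids}, {"input_ids": torch.int32})
# -> TypeError: input_ids: compiled for torch.int32, got torch.int64
```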

Comment on lines +415 to +421
if compilation_counter.num_graphs_seen > 0:
    cache_dir = self.compilation_config.cache_dir + \
        f'-{compilation_counter.num_graphs_seen}'
else:
    cache_dir = self.compilation_config.cache_dir
os.makedirs(cache_dir, exist_ok=True)
self.compilation_config.cache_dir = cache_dir
@zou3519 (Collaborator) commented May 1, 2025

@luyuzhe111 This is suspicious. Why do you need a different cache directory for each graph? Also, this looks like it modifies everything, even the models that don't use eagle.

If there isn't a good reason I would prefer going back to the "single cache directory" that we had previously.

@luyuzhe111 (Contributor, Author)

@zou3519 Thanks for reviewing! If there isn't a separate cache directory, the compiled code for the draft model (EAGLE) will not be saved at all. For models without EAGLE, my understanding is that the backend is invoked only once, so this should not impact other models.

@zou3519 (Collaborator)

@luyuzhe111 thanks for the response and clarifying that. Woosuk also filled me in on some more details offline. I understand why we need a separate cache directory.

Which of the "original model" and the "eagle head" gets compiled first? (I'm trying to figure out whether the first cache dir is for the original model or for the eagle head.)

@luyuzhe111 (Contributor, Author)

@zou3519 The original model is compiled first! Also, if you want to double check: the transformed code of EAGLE in the cache directory has a slightly different signature, with hidden_states as an additional arg. If there is a more elegant solution, that would be great! I think my approach is a bit hacky indeed : )))

@zou3519 (Collaborator)

Thanks for the discussion! I added some comments and an assertion into #17662, please take a look.

I think in the future we'll want a better way to handle multiple compiled regions in a vLLM model, but that will take some re-designing.

@zou3519 (Collaborator)

@luyuzhe111 The asserts in #17662 triggered, which means that this PR does affect non-EAGLE models.

@luyuzhe111 (Contributor, Author)

@zou3519 Thanks for the catch! I guess a simple fix would be to create a separate cache directory only for EAGLE, for example by looking at the vllm speculative config?

@zou3519 (Collaborator)

Yeah, that would work.
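
A sketch of that fix: suffix the compilation cache directory only when an EAGLE speculative config is present, so non-speculative models keep the original single directory. The attribute names speculative_config and use_eagle are assumptions for illustration, not necessarily the real vLLM config fields.

```python
import os

def resolve_cache_dir(compilation_config, vllm_config,
                      num_graphs_seen: int) -> str:
    # Hypothetical helper: only the EAGLE draft graph gets a suffixed
    # directory; everything else keeps the shared cache_dir.
    cache_dir = compilation_config.cache_dir
    spec = getattr(vllm_config, "speculative_config", None)
    if spec is not None and getattr(spec, "use_eagle", False) \
            and num_graphs_seen > 0:
        cache_dir = f"{cache_dir}-{num_graphs_seen}"
    os.makedirs(cache_dir, exist_ok=True)
    return cache_dir
```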

tlrmchlsmth added a commit to tlrmchlsmth/vllm that referenced this pull request May 1, 2025
* Revert "[Misc] Add S3 environment variables for better support of MinIO." (vllm-project#17021)

* [misc] tune some env vars for GB200 (vllm-project#16992)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [INTEL-HPU][v0] Port delayed sampling to upstream (vllm-project#16949)

Signed-off-by: Michal Adamczyk <michal.adamczyk@intel.com>
Signed-off-by: Chendi Xue <chendi.xue@intel.com>
Co-authored-by: Michal Adamczyk <madamczyk@habana.ai>

* [doc] add download path tips (vllm-project#17013)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Triton FA function takes no keyword arguments (vllm-project#16902)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [V1] Avoid socket errors during shutdown when requests are in in-flight (vllm-project#16807)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] llama4 fa3 fix - RuntimeError: scheduler_metadata must have shape (metadata_size) (vllm-project#16998)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Misc] Improve readability of get_open_port function. (vllm-project#17024)

Signed-off-by: gitover22 <qidizou88@gmail.com>

* [Bugfix] Fix AssertionError: skip_special_tokens=False is not supported for Mistral tokenizers (vllm-project#16964)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [CI] Run v1/test_serial_utils.py in CI (vllm-project#16996)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Mistral-format support for compressed-tensors (vllm-project#16803)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Categorize `tests/kernels/` based on kernel type (vllm-project#16799)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Add top anchor and a note to quantization/bitblas.md (vllm-project#17042)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* Ensure that `pid` passed to `kill_process_tree` is `int` for `mypy` (vllm-project#17051)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [CI] Update structured-output label automation (vllm-project#17055)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Improve Transformers backend model loading QoL (vllm-project#17039)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* `CacheConfig.block_size` should always be `int` when used (vllm-project#17052)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Use `@property` and private field for `data_parallel_rank_local` (vllm-project#17053)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Frontend] Support guidance:no-additional-properties for compatibility with xgrammar (vllm-project#15949)

Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>

* [BugFix][V1] Fix int32 token index overflow when preparing input ids (vllm-project#16806)

* [V1][Spec Decode] Always use argmax for sampling draft tokens  (vllm-project#16899)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] workaround for CI build failure (vllm-project#17070)

Signed-off-by: csy1204 <josang1204@gmail.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>

* [Quantization]add prefix for commandA quantized model (vllm-project#17017)

* [Minor] Use larger batch sizes for A100/B100/B200/MI300x (vllm-project#17073)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix] Enable V1 usage stats (vllm-project#16986)

Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* More informative error when using Transformers backend (vllm-project#16988)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Addendum Fix to support FIPS enabled machines with MD5 hashing (vllm-project#17043)

Signed-off-by: sydarb <areebsyed237@gmail.com>

* [Bugfix][Core] add seq_id_to_seq_group clearing to avoid memory leak when s… (vllm-project#16472)

Signed-off-by: 开哲 <kaizhe.zy@alibaba-inc.com>
Co-authored-by: 开哲 <kaizhe.zy@alibaba-inc.com>

* [V1] Update structured output (vllm-project#16812)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [doc] update to hyperlink (vllm-project#17096)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add docs for runai_streamer_sharded (vllm-project#17093)

Signed-off-by: Omer Dayan (SW-GPU) <omer@run.ai>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Chore] Remove Sampler from Model Code (vllm-project#17084)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Disable enforce_eager for V1 TPU sampler and structured output tests (vllm-project#17016)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Simplify `TokenizerGroup` (vllm-project#16790)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix OOT registration test (vllm-project#17099)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1][PP] Optimization: continue scheduling prefill chunks (vllm-project#17080)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* [Misc] Remove OLMo2 config copy (vllm-project#17066)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve static type checking in `LoRAModelRunnerMixin` (vllm-project#17104)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1][Structured Output] Clear xgrammar compiler object when engine core shut down to avoid nanobind leaked warning (vllm-project#16954)

Signed-off-by: shen-shanshan <467638484@qq.com>

* [Frontend] Using matryoshka_dimensions control the allowed output dimensions. (vllm-project#16970)

* Add missing rocm_skinny_gemms kernel test to CI (vllm-project#17060)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] refactor example series - structured outputs (vllm-project#17040)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [V1][Spec Decoding] Add num_drafts and num_accepted_tokens_per_position metrics (vllm-project#16665)

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* [CI] Add automation for the `tool-calling` github label (vllm-project#17118)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Updating builkite job for IBM Power  (vllm-project#17111)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* existing torch installation pip command fix for docs (vllm-project#17059)

* Molmo Requirements (vllm-project#17026)

Signed-off-by: Eyshika Agarwal <eyshikaengineer@gmail.com>
Signed-off-by: eyshika <eyshikaengineer@gmail.com>

* Add `:markdownhelp:` to `EngineArgs` docs so markdown docstrings render properly (vllm-project#17124)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Improve configs - `LoRAConfig` + `PromptAdapterConfig` (vllm-project#16980)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Docs] Generate correct github links for decorated functions (vllm-project#17125)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Add collective_rpc to llm engine (vllm-project#16999)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* Add chat template for Llama 4 models (vllm-project#16428)

Signed-off-by: Max de Bayser <mbayser@br.ibm.com>

* [Misc] Add example to run DeepSeek with Ray Serve LLM (vllm-project#17134)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* Better error message for missing mistral params.json (vllm-project#17132)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Use custom address for listening socket (vllm-project#15988)

Signed-off-by: Jens Glaser <glaserj@ornl.gov>

* [FEAT] [ROCm]: AITER Fused MOE V1 Support (vllm-project#16752)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Attention] FA3 decode perf improvement - single mma warp group support for head dim 128 (vllm-project#16864)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* fix float16 support for kimi-vl (vllm-project#17156)

Co-authored-by: zhouzaida <zhouzaida@msh.team>

* [Doc] V1 : Update LoRA status (vllm-project#17133)

Signed-off-by: varun sundar rabindranath <vsundarr@redhat.com>
Co-authored-by: varun sundar rabindranath <vsundarr@redhat.com>

* [Docs] Fix True->true in supported_models.md (vllm-project#17141)

* Move missed `SchedulerConfig` args into scheduler config group in `EngineArgs` (vllm-project#17131)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Misc] Clean up redundant code in uniproc_executor.py (vllm-project#16762)

Signed-off-by: Lifu Huang <lifu.hlf@gmail.com>

* [Bugfix][Misc] Use TritonPlaceholderModule to defensively import triton (vllm-project#15099)

Signed-off-by: Mengqing Cao <cmq0113@163.com>

* [Misc] Benchmark Serving Script Support Appending Results (vllm-project#17028)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Perf]Optimize rotary_emb implementation to use Triton operator for improved inference performance (vllm-project#16457)

Signed-off-by: cynthieye <yexin93@qq.com>
Co-authored-by: MagnetoWang <magnetowang@outlook.com>

* [Bugfix] remove fallback in guided_json (int range, patterns) (vllm-project#16725)

Signed-off-by: csy1204 <josang1204@gmail.com>
Co-authored-by: 조상연[플레이스 AI] <sang-yeon.cho@navercorp.com>

* [Quantization][FP8] Add support for FP8 models with input_scale for output projection and QK quantization (vllm-project#15734)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Luka Govedič <lgovedic@redhat.com>

* [Doc] Add headings to improve gptqmodel.md (vllm-project#17164)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* Only turn on FastIncrementalDetokenizer when tokenizers >= 0.21.1 (vllm-project#17158)

* [Doc] Add two links to disagg_prefill.md (vllm-project#17168)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [Doc] Move todo out of beam search docstring (vllm-project#17183)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* [Bugfix] Fix mistral model tests (vllm-project#17181)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix Mistral ChatCompletionRequest Body Exception (vllm-project#16769)

Signed-off-by: Jasmond Loh <Jasmond.Loh@hotmail.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Bump Transformers to 4.51.3 (vllm-project#17116)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Use Transformers helper `get_text_config()` instead of checking for `text_config` (vllm-project#17105)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [doc] update wrong hf model links (vllm-project#17184)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Inline Molmo requirements (vllm-project#17190)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Security] Use safe serialization and fix zmq setup for mooncake pipe (vllm-project#17192)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Co-authored-by: Shangming Cai <caishangming@linux.alibaba.com>

* [V1] Move usage stats to worker and start logging TPU hardware (vllm-project#16211)

* [Bugfix] Fix hybrid model tests (vllm-project#17182)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Fix Python packaging edge cases (vllm-project#17159)

Signed-off-by: Christian Heimes <christian@python.org>

* [BugFix][Frontend] Fix `LLM.chat()` tokenization (vllm-project#16081)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [V1][Spec Decode] EAGLE-3 Support (vllm-project#16937)

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: Bryan Lu <yuzhelu@amazon.com>

* [Misc] Refine ray_serve_deepseek example (vllm-project#17204)

Signed-off-by: Rui Qiao <ruisearch42@gmail.com>

* [Bugfix] gemma[2,3] interleaved attention when sliding window is disabled (vllm-project#17180)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [AMD][FP8][BugFix] Remove V1 check in arg_utils.py for FP8 since it is not necessary (vllm-project#17215)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* [v1] [P/D] Adding LMCache KV connector for v1 (vllm-project#16625)

* [Bugfix] [pytorch] Patch AOTAutogradCache._get_shape_env (vllm-project#17142)

Signed-off-by: James Wu <jjwu@meta.com>

* [MISC][AMD] Add unused annotation to rocm kernel file (vllm-project#17097)

Signed-off-by: Lu Fang <lufang@fb.com>

* [doc] add Anything LLM integration (vllm-project#17216)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Minor][Spec Decode] Add use_eagle to SpeculativeConfig (vllm-project#17213)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Doc] Minor fix for the vLLM TPU setup page (vllm-project#17206)

Signed-off-by: Yarong Mu <ymu@google.com>

* [Minor][Models] Fix Return Types of Llama & Eagle (vllm-project#17220)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Allocate kv_cache with stride order (vllm-project#16605)

Signed-off-by: shuw <shuw@nvidia.com>

* [ROCm][Misc] Follow-ups for Skinny Gemms on ROCm. (vllm-project#17011)

Signed-off-by: charlifu <charlifu@amd.com>

* [V1][Metrics] Allow V1 AsyncLLM to use custom logger (vllm-project#14661)

Signed-off-by: Zijing Liu <liuzijing2014@gmail.com>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Avoid race conditions in zero-copy tensor transmission (vllm-project#17203)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [CI/test] Fix Eagle Correctness Test (vllm-project#17209)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Core] Remove prompt string from engine core data structures (vllm-project#17214)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix missing int type for `-n` in multi-image example (vllm-project#17223)

* [Bugfix] Fix standard models tests (vllm-project#17217)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Hardware][Intel-Gaudi] Update hpu-extension and update bucketing system for HPU device (vllm-project#17186)

Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>

* [V1] Add `structural_tag` support using xgrammar (vllm-project#17085)

* [BUGFIX] use random for NONE_HASH only when PYTHONHASHSEED not set (vllm-project#17088)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Chore] added stubs for `vllm_flash_attn` during development mode (vllm-project#17228)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [Docs] Update structured output doc for V1 (vllm-project#17135)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Bugfix] fix error due to an uninitialized tokenizer when using `skip_tokenizer_init` with `num_scheduler_steps` (vllm-project#9276)

Signed-off-by: changjun.lee <pord7457@gmail.com>

* Disable the torch.compile cache checks when VLLM_DISABLE_COMPILE_CACHE=1 (vllm-project#16573)

Signed-off-by: Lu Fang <lufang@fb.com>

* [MISC] rename interval to max_recent_requests (vllm-project#14285)

* [Bugfix] Fix Qwen2.5-Omni M-RoPE position ids generation (vllm-project#16878)

Signed-off-by: imkero <kerorek@outlook.com>

* [Minor] Fix lint error in main branch (vllm-project#17233)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] remove -t for run-lm-eval-gsm-hf-baseline.sh (vllm-project#16271)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Update test_flash_attn.py (vllm-project#17102)

Signed-off-by: ShuaibinLi <lishuaibin@live.cn>

* [Kernel][Triton][FP8] Adding fp8 and variable length sequence support to Triton FAv2 kernel (vllm-project#12591)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* [Misc] Make cached tokenizer pickle-compatible (vllm-project#17048)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Bugfix] Fix QWen2 VL multimodal mapping (vllm-project#17240)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Bugfix] Get a specific type of layer from forward context (vllm-project#17222)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [MISC] Use string annotation types for class definitions (vllm-project#17244)

Signed-off-by: Jade Zheng <zheng.shoujian@outlook.com>

* [Misc] Change buckets of histogram_iteration_tokens to [1, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8096] to represent number of tokens (vllm-project#17033)

Signed-off-by: sfc-gh-zhwang <flex.wang@snowflake.com>

* [Bugfix] Fix Lora Name Parsing (vllm-project#17196)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [NVIDIA] Support Cutlass MLA for Blackwell GPUs (vllm-project#16032)

Signed-off-by: kaixih <kaixih@nvidia.com>

* [Feature] support sequence parallelism using compilation pass (vllm-project#16155)

Signed-off-by: cascade812 <cascade812@outlook.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [doc] Add feature status legend (vllm-project#17257)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Metrics] Fix minor inconsistencies in bucket progression (vllm-project#17262)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][Spec Decode] Make eagle compatible with prefix caching. (vllm-project#17137)

Signed-off-by: LiuXiaoxuanPKU <lilyliupku@gmail.com>

* [BugFix] Fix vllm_flash_attn install issues (vllm-project#17267)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>

* [Bugfix] Fix missing ARG in Dockerfile for arm64 platforms (vllm-project#17261)

Signed-off-by: lkm-schulz <44176356+lkm-schulz@users.noreply.github.com>

* [Bugfix] Fix cutlass dispatch for fp8/int8 to properly invoke M<=16 c… (vllm-project#16751)

Signed-off-by: Ther-LF <2639852836@qq.com>

* [Bugfix] Fix Mistral3 spatial merge error (vllm-project#17270)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Fix wrong github link in LMCache examples (vllm-project#17274)

Signed-off-by: KuntaiDu <kuntai@uchicago.edu>

* [Doc] small fix (vllm-project#17277)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Validate `stop_token_ids` contents (vllm-project#17268)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Minor][Models] Pass partial_rotary_factor parameter to rope (vllm-project#17266)

Signed-off-by: evian <eviantai@u.nus.edu>
Co-authored-by: evian <eviantai@u.nus.edu>

* [Core] Remove legacy input mapper/processor from V0 (vllm-project#15686)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Model] Add Granite Speech Support (vllm-project#16246)

Signed-off-by: Alex-Brooks <Alex.brooks@ibm.com>
Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* Update tpu_worker.py 's typo (vllm-project#17288)

* Add missing class docstring for `PromptAdapterConfig` (vllm-project#17302)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] Add missing `get_language_model` to new MLLMs (vllm-project#17300)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [doc] update wrong model id (vllm-project#17287)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Minor typo/grammar in `platforms/interface.py` (vllm-project#17307)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Misc] Clean up Qwen2.5-Omni code (vllm-project#17301)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Docs] Add a security guide (vllm-project#17230)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* Improve conversion from dataclass configs to argparse arguments (vllm-project#17303)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Make name of `compressed-tensors` quant method consistent across vLLM (vllm-project#17255)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Explicitly explain quant method override ordering and ensure all overrides are ordered (vllm-project#17256)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Security] Don't bind tcp zmq socket to all interfaces (vllm-project#17197)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Chore] cleanup license indicators in light of SPDX (vllm-project#17259)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Russell Bryant <rbryant@redhat.com>

* [BugFix] Fix cascade attention - RuntimeError: scheduler_metadata must have shape (metadata_size) (vllm-project#17283)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* [Bugfix] Fix moe weight losing all extra attrs after `process_weights_after_loading`. (vllm-project#16854)

Signed-off-by: charlifu <charlifu@amd.com>

* [Model] Qwen3 Dense FP8 Compat Fixes (vllm-project#17318)

Signed-off-by: simon-mo <xmo@berkeley.edu>

* Support loading transformers models with named parameters (vllm-project#16868)

Signed-off-by: Alex <alexwu@character.ai>

* [Model] Add tuned triton fused_moe configs for Qwen3Moe (vllm-project#17328)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Benchmark] Add single turn MTBench to Serving Bench (vllm-project#17202)

* [Optim] Compute multimodal hash only once per item (vllm-project#17314)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* implement Structural Tag with Guidance backend (vllm-project#17333)

Signed-off-by: Michal Moskal <michal@moskal.me>

* [V1][Spec Decode] Make Eagle model arch config driven (vllm-project#17323)

* [model] make llama4 compatible with pure dense layers (vllm-project#17315)

Signed-off-by: Lucia Fang <fanglu@fb.com>

* [Bugfix] Fix `numel()` downcast in fused_layernorm_dynamic_per_token_quant.cu (vllm-project#17316)

* Ignore `'<string>'` filepath (vllm-project#17330)

Signed-off-by: rzou <zou3519@gmail.com>

* [Bugfix] Add contiguous call inside rope kernel wrapper (vllm-project#17091)

Signed-off-by: 苏政渊 <suzhengyuan@moonshot.cn>
Co-authored-by: 苏政渊 <suzhengyuan@moonshot.cn>

* [Misc] Add a Jinja template to support Mistral3 function calling (vllm-project#17195)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Model] support MiniMax-VL-01 model (vllm-project#16328)

Signed-off-by: qingjun <qingjun@minimaxi.com>

* [Misc] Move config fields to MultiModalConfig (vllm-project#17343)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc]Use a platform independent interface to obtain the device attributes (vllm-project#17100)

* [Fix] Documentation spacing in compilation config help text (vllm-project#17342)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [Build][Bugfix] Restrict setuptools version to <80 (vllm-project#17320)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Model] Ignore rotary embed load for Cohere model (vllm-project#17319)

* Update docs requirements (vllm-project#17379)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Doc] Fix QWen3MOE info (vllm-project#17381)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Bugfix] Clean up MiniMax-VL and fix processing (vllm-project#17354)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* `pre-commit autoupdate` (vllm-project#17380)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Frontend] Support `chat_template_kwargs` in `LLM.chat` (vllm-project#17356)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Transformers backend tweaks (vllm-project#17365)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix: Spelling of inference (vllm-project#17387)

* Improve literal dataclass field conversion to argparse argument (vllm-project#17391)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1] Remove num_input_tokens from attn_metadata (vllm-project#17193)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix] add qwen3 reasoning-parser fix content is None when disable … (vllm-project#17369)

Signed-off-by: mofanke <mofanke@gmail.com>

* fix gemma3 results all zero (vllm-project#17364)

Signed-off-by: mayuyuace <qiming1.zhang@intel.com>

* [Misc][ROCm] Exclude `cutlass_mla_decode` for ROCm build (vllm-project#17289)

Signed-off-by: Tianyuan Wu <Tianyuan.Wu@amd.com>

* Enabling multi-group kernel tests. (vllm-project#17115)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Docs] Propose a deprecation policy for the project (vllm-project#17063)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Doc][Typo] Fixing label in new model requests link in overview.md (vllm-project#17400)

* [TPU][V1][CI] Replace `python3 setup.py develop` with standard `pip install --e` on TPU (vllm-project#17374)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [CI] Uses Python 3.11 for TPU (vllm-project#17359)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [CI/Build] Add retry mechanism for add-apt-repository (vllm-project#17107)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix Minicpm-O-int4 GPTQ model inference (vllm-project#17397)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Simplify (and fix) passing of guided decoding backend options (vllm-project#17008)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Remove Falcon3 2x7B from CI (vllm-project#17404)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix: Python package installation for opentelmetry (vllm-project#17049)

Signed-off-by: Dilip Gowda Bhagavan <dilip.bhagavan@ibm.com>

* [V1][Spec Decode] Apply torch.compile & cudagraph to EAGLE (vllm-project#17211)

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>

* Remove Bamba 9B from CI (vllm-project#17407)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [V1][Feature] Enable Speculative Decoding with Structured Outputs (vllm-project#14702)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Benjamin Chislett <chislett.ben@gmail.com>

* [release] Always git fetch all to get latest tag on TPU release (vllm-project#17322)

* Truncation control for embedding models (vllm-project#14776)

Signed-off-by: Gabriel Marinho <gmarinho@ibm.com>
Signed-off-by: Max de Bayser <mbayser@br.ibm.com>
Co-authored-by: Max de Bayser <mbayser@br.ibm.com>

* Update PyTorch to 2.7.0 (vllm-project#16859)

* Improve configs - `ModelConfig` (vllm-project#17130)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix call to `logger.info_once` (vllm-project#17416)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* Fix some speculative decode tests with tl.dot (vllm-project#17371)

Signed-off-by: Huy Do <huydhn@gmail.com>

* Support LoRA for Mistral3 (vllm-project#17428)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Intel GPU] [CI]Fix XPU ci, setuptools >=80.0 have build issue (vllm-project#17298)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>

* [Hardware][Intel GPU] Upgrade to torch 2.7 (vllm-project#17444)

Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Co-authored-by: Qiming Zhang <qiming1.zhang@intel.com>

* [Bugfix] Fix AttributeError: 'State' object has no attribute 'engine_client' (vllm-project#17434)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [MODEL ADDITION] Ovis2 Model Addition (vllm-project#15826)

Signed-off-by: Marco <121761685+mlinmg@users.noreply.github.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* Make the _apply_rotary_emb compatible with dynamo (vllm-project#17435)

* [Misc] Remove deprecated files (vllm-project#17447)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [V1][Bugfix]: vllm v1 verison metric num_gpu_blocks is None (vllm-project#15755)

Signed-off-by: rongfu.leng <rongfu.leng@daocloud.io>

* [TPU][V1][CI] Update regression test baseline for v6 CI (vllm-project#17064)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Core] Prevent side-channel attacks via cache salting (vllm-project#17045)

Signed-off-by: Marko Rosenmueller <5467316+dr75@users.noreply.github.com>

* [V1][Metrics] add support for kv event publishing (vllm-project#16750)

Signed-off-by: alec-flowers <aflowers@nvidia.com>
Signed-off-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>

* [Feature] The Qwen3 reasoning parser supports  guided decoding (vllm-project#17466)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Docs] Add command for running mypy tests from CI (vllm-project#17475)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Fix] Support passing args to logger (vllm-project#17425)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [Bugfix] Fixed mistral tokenizer path when pointing to file (vllm-project#17457)

Signed-off-by: Pete Savage <psavage@redhat.com>

* [V1] Allow turning off pickle fallback in vllm.v1.serial_utils (vllm-project#17427)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [Docs] Update optimization.md doc (vllm-project#17482)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [BugFix] Fix authorization of openai_transcription_client.py (vllm-project#17321)

Signed-off-by: zh Wang <rekind133@outlook.com>

* [Bugfix][ROCm] Restrict ray version due to a breaking release (vllm-project#17480)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [doc] add install tips (vllm-project#17373)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* doc: fix bug report Github template formatting (vllm-project#17486)

Signed-off-by: David Xia <david@davidxia.com>

* [v1][Spec Decode] Make sliding window compatible with eagle prefix caching (vllm-project#17398)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* Bump Compressed Tensors version to 0.9.4 (vllm-project#17478)

Signed-off-by: Rahul Tuli <rtuli@redhat.com>
Co-authored-by: mgoin <mgoin64@gmail.com>

* [Misc] Rename Audios -> Audio in Qwen2audio Processing (vllm-project#17507)

Signed-off-by: Alex-Brooks <Alex.Brooks@ibm.com>

* [CI][TPU] Skip Multimodal test (vllm-project#17488)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix][ROCm] Fix import error on ROCm (vllm-project#17495)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Bugfix] Temporarily disable gptq_bitblas on ROCm (vllm-project#17411)

Signed-off-by: Yan Cangang <nalanzeyu@gmail.com>

* [CI][TPU] Skip structured outputs+spec decode tests on TPU (vllm-project#17510)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [CI][Bugfix] Fix failing V1 Test due to missing 'cache_salt' arg (vllm-project#17500)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [CI/Build] Reorganize models tests (vllm-project#17459)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* FIxing the AMD test failures caused by PR#16457 (vllm-project#17511)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Build] Require setuptools >= 77.0.3 for PEP 639 (vllm-project#17389)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [ROCm] Effort to reduce the number of environment variables in command line (vllm-project#17229)

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>

* [BugFix] fix speculative decoding memory leak when speculation is disabled (vllm-project#15506)

Signed-off-by: Noah Yoshida <noahcy117@gmail.com>

* [BugFix] Fix mla cpu - missing 3 required positional arguments (vllm-project#17494)

Signed-off-by: Lucas Wilkinson <lwilkinson@neuralmagic.com>

* Avoid overwriting vllm_compile_cache.py (vllm-project#17418)

Signed-off-by: Keyun Tong <tongkeyun@gmail.com>

* [Core] Enable IPv6 with vllm.utils.make_zmq_socket() (vllm-project#16506)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Misc] Optimize the Qwen3_ReasoningParser extract_reasoning_content (vllm-project#17515)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* Improve configs - `ObservabilityConfig` (vllm-project#17453)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix][Benchmarks] Allow benchmark of deepspeed-mii backend to select a model (vllm-project#17285)

Signed-off-by: Teruaki Ishizaki <teruaki.ishizaki@ntt.com>

* [Frontend] Show progress bar for adding requests (vllm-project#17525)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Clean up test docstrings and names (vllm-project#17521)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [FEAT] [ROCm]: Add Qwen/Qwen3-30B-A3B-FP8 fused moe config for MI300X (vllm-project#17530)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* Fix more broken speculative decode tests (vllm-project#17450)

Signed-off-by: Huy Do <huydhn@gmail.com>

* [doc] add streamlit integration (vllm-project#17522)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [FEAT] [ROCm]: Add Qwen/Qwen3-235B-A22B-FP8 TP4 triton fused moe config (vllm-project#17535)

Signed-off-by: tjtanaa <tunjian.tan@embeddedllm.com>

* [Feature][Frontend]: Deprecate --enable-reasoning (vllm-project#17452)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [ROCm] remove unsupported archs from rocm triton flash-attention supported list (vllm-project#17536)

Signed-off-by: Hongxia Yang <hongxia.yang@amd.com>

* [torch.compile] Add torch inductor pass for fusing silu_and_mul with subsequent scaled_fp8_quant operations (vllm-project#10867)

Signed-off-by: Sage Moore <sage@neuralmagic.com>

* [Misc] refactor example - cpu_offload_lmcache (vllm-project#17460)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

radeksm pushed a commit to radeksm/vllm that referenced this pull request May 2, 2025
xjpang pushed a commit to xjpang/vllm that referenced this pull request May 4, 2025
zou3519 added a commit to zou3519/vllm that referenced this pull request May 5, 2025
I'm recording down my understanding of how eagle and the compilation
cache works after discussing
vllm-project#17211 with @luyuzhe111 and
@WoosukKwon.

In the future we likely will have a situation where we want to
torch.compile multiple pieces of code (e.g. decoder and encoder
separately) and then we'll need to refactor the system to support it
(each compiled region needs its own cache directory with its own hash)
But until then the current design seems fine.

Signed-off-by: rzou <zou3519@gmail.com>
zou3519 added a commit to zou3519/vllm that referenced this pull request May 5, 2025
zou3519 added a commit to zou3519/vllm that referenced this pull request May 9, 2025
zou3519 added a commit to zou3519/vllm that referenced this pull request May 9, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
…ect#17211)

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request May 24, 2025
…ect#17211)

Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Yuqi Zhang <yuqizhang@google.com>
Labels: documentation, ready, speculative-decoding, v1
7 participants