The model is not working with either vLLM or the transformers library

#14
by divyanshusingh - opened

This model throws `ZeroDivisionError: integer modulo by zero` when running with vLLM, and `TypeError: '>=' not supported between instances of 'Tensor' and 'NoneType'` when running with the transformers repo.

Detailed vLLM error:

```
ERROR 04-29 23:58:39 [core.py:396] ZeroDivisionError: integer modulo by zero
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 400, in run_engine_core
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 387, in run_engine_core
    engine_core = EngineCoreProc(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 329, in __init__
    super().__init__(vllm_config, executor_class, log_stats,
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 64, in __init__
    self.model_executor = executor_class(vllm_config)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 52, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 47, in _init_executor
    self.collective_rpc("load_model")
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/utils.py", line 2456, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 162, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1332, in load_model
    self.model = get_model(vllm_config=self.vllm_config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
    return loader.load_model(vllm_config=vllm_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 452, in load_model
    model = _initialize_model(vllm_config=vllm_config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 133, in _initialize_model
    return model_class(vllm_config=vllm_config, prefix=prefix)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/mllama4.py", line 673, in __init__
    self.language_model = _initialize_model(
                          ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 133, in _initialize_model
    return model_class(vllm_config=vllm_config, prefix=prefix)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 480, in __init__
    super().__init__(vllm_config=vllm_config,
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 496, in __init__
    self.model = self._init_model(vllm_config=vllm_config,
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 488, in _init_model
    return Llama4Model(vllm_config=vllm_config,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 151, in __init__
    old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 330, in __init__
    super().__init__(vllm_config=vllm_config,
  File "/usr/local/lib/python3.12/dist-packages/vllm/compilation/decorators.py", line 151, in __init__
    old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 321, in __init__
    self.start_layer, self.end_layer, self.layers = make_layers(
                                                    ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 610, in make_layers
    maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama.py", line 323, in <lambda>
    lambda prefix: layer_type(config=config,
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/llama4.py", line 276, in __init__
    is_moe_layer = (self.layer_idx +
                   ^^^^^^^^^^^^^^^^^
ZeroDivisionError: integer modulo by zero
```

Detailed transformers error:

```
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama4/modeling_llama4.py", line 669, in forward
    causal_mask, chunk_causal_mask = self._update_causal_mask(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama4/modeling_llama4.py", line 763, in _update_causal_mask
    cond1 = first_cache_position >= attention_chunk_size
TypeError: '>=' not supported between instances of 'Tensor' and 'NoneType'
```
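
The transformers failure looks related but distinct: `attention_chunk_size` comes back as `None` while the chunked-attention mask is being built, so the comparison against a tensor blows up. A minimal sketch of just that comparison, with the surrounding mask logic omitted:

```python
import torch

# Assumption: config.attention_chunk_size is None for this checkpoint, while
# first_cache_position is a tensor produced during causal-mask construction.
attention_chunk_size = None
first_cache_position = torch.tensor(0)

# Mirrors `cond1 = first_cache_position >= attention_chunk_size` from the trace.
cond1 = first_cache_position >= attention_chunk_size
# -> TypeError: '>=' not supported between instances of 'Tensor' and 'NoneType'
```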

I have exactly the same error with transformers.

Meta Llama org

Hey @divyanshusingh , thanks for reporting the issue!

vLLM support is still pending in the public package, but the changes were already merged to main: https://github.com/vllm-project/vllm/pull/17315. A new version should be out by the end of the week.
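
Once a release containing that PR is installed, serving should look like the usual vLLM entry point. A rough sketch under that assumption (the repo id below is a placeholder, not confirmed in this thread):

```python
# Rough sketch of running this model once a vLLM release with the fix above
# is installed. The repo id is a placeholder for the checkpoint in question.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-Guard-4-12B")  # hypothetical repo id
params = SamplingParams(max_tokens=32, temperature=0.0)

outputs = llm.generate(["How do I make a cake?"], params)
print(outputs[0].outputs[0].text)
```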

For transformers, could you share which version of the library you are using?

Thanks

Hi @betodepaola, thank you so much for the quick response. I'm using transformers version 4.52.0.dev0.

Meta Llama org

Hi @divyanshusingh, for transformers you can either install from main or use the following stable release: https://github.com/huggingface/transformers/releases/tag/v4.51.3-LlamaGuard-preview
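
For reference, a minimal loading sketch after installing one of those builds; the repo id and the `Llama4ForConditionalGeneration` class are assumptions based on the Llama 4 integration in transformers, not something confirmed in this thread:

```python
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

model_id = "meta-llama/Llama-Guard-4-12B"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Simple text-only smoke test via the chat template.
messages = [{"role": "user", "content": [{"type": "text", "text": "How do I make a cake?"}]}]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[-1]:]))
```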

Thank you so much @pcuenq. It's finally working!

divyanshusingh changed discussion status to closed
