Request: DOI (#74, opened about 2 months ago by pewpew475)
Request: DOI (#73, opened about 2 months ago by jake123445)
Vocab size differs between tokenizer config and the lm_head (#72, opened 3 months ago by bonni93)
Image representation interpretation (#71, opened 6 months ago by kanishka)
Exceeding GPU memory even though I have 2 GPUs (#70, opened 6 months ago by aymanhelbacha)
Request for LLaMA model access on Hugging Face (already approved on llama.com) (#69, opened 6 months ago by bwalid13)
meta-llama/Llama-3.2-11B-Vision is not the path to a directory containing a file named model-00003-of-00005.safetensors (#68, opened 7 months ago by CKK0331)
model struggles with noisy images (#67, opened 7 months ago by elenapop)
ValueError: The following `model_kwargs` are not used by the model: ['pixel_values', 'aspect_ratio_ids', 'aspect_ratio_mask'] (note: typos in the generate arguments will also show up in this list) (#66, opened 8 months ago by saraibr99)
Repo access request rejected? (#65, opened 8 months ago by deleted)
Llama3.2 (#64, opened 8 months ago by shaheerthinkingcode)
Request: DOI (#63, opened 8 months ago by Krish23kt)
Access request (#62, opened 9 months ago by hklim2020)
Regarding the Llama 3.2 11B Vision model for OCR Task (#61, opened 9 months ago by Pavankumar03)
Model Access not working (#60, opened 9 months ago by rujhanjain07)
Request: DOI (#59, opened 9 months ago by rachit-bizongo)
This model consistently LIES! (#58, opened 9 months ago by PyrateGFX)
Problem with answer (#57, opened 9 months ago by mohamedachilij)
Usage for *research* purposes in EU (#55, opened 10 months ago by morenolq)
Recommended hyperparameter values (#52, opened 11 months ago by siliconecomputervision)
Rename README.md to token (#51, opened 11 months ago by oe2015)
Example prompt output (#49, opened 11 months ago by aLeX49)
NaN in model parameters (#48, opened 11 months ago by cuong-dyania)
How to use llama3.2 11b for text generation (#47, opened 11 months ago by PyMangekyo)
How can I convert to a GGUF file? (#46, opened 11 months ago by Jongsun999)
multi-image inference (#45, opened 11 months ago by eternal8848)
How much VRAM? (#44, opened 12 months ago by Dizzl500)
fix prompt format for Llama-3.2-11B-Vision (#43, opened 12 months ago by chenhegu)
local image (#42, opened 12 months ago by komenge)
What kind of open-source model can't be used by Chinese users? If people can't use it, what sort of open-source model is it? You might as well call it a closed-source model! (#41, opened 12 months ago by hanson888)
How to use model.generate on batched data (#40, opened 12 months ago by Popandos)
ValueError: The checkpoint you are trying to load has model type `mllama` but Transformers does not recognize this architecture (#39, opened 12 months ago by KevalRx)
Model having trouble understanding the prompts? (#35, opened 12 months ago by franciscoliu)
Interview request: thoughts on genAI evaluation & documentation (#34, opened 12 months ago by evatang)
Error encountered when fine-tuning (#30, opened about 1 year ago by yongleyuan)
Why is the image size 448 instead of 560? (#28, opened about 1 year ago by theo77186)
Llama-3.2-11B-Vision ONNX model generation (#27, opened about 1 year ago by SantoshHF)
How to use visual grounding with this model? (#25, opened about 1 year ago by r4hul77)
How to get embeddings for Image-Text Retrieval? (#23, opened about 1 year ago by wanghaofan)
Why EXACTLY is this model not available in Europe? (#22, opened about 1 year ago by MoonRide)
model.resize_token_embeddings() method is broken - resizes embedding table but not lm_head (#21, opened about 1 year ago by alexpeys)
Chat template is removed in the base variant. Can we still use a chat template to formulate the prompt? (#12, opened about 1 year ago by hxgy610)
Position of <image> token in prompt for fine-tuning (#2, opened about 1 year ago by hxgy610)