AI & ML interests

None defined yet.

Recent Activity

Qwen's activity

AdinaY posted an update about 9 hours ago
Kimi-Audio 🚀🎧 an OPEN audio foundation model released by Moonshot AI
moonshotai/Kimi-Audio-7B-Instruct
✨ 7B
✨ 13M+ hours of pretraining data
✨ Novel hybrid input architecture
✨ Universal audio capabilities (ASR, AQA, AAC, SER, SEC/ASC, end-to-end conversation)
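A minimal loading sketch, assuming the repo's custom modeling code is exposed through transformers' trust_remote_code path; the actual inference API (Moonshot ships its own Kimi-Audio package) is described on the model card:

```python
# Hedged sketch: load Kimi-Audio-7B-Instruct via transformers.
# The repo ships custom modeling code, so trust_remote_code is required;
# generation calls follow the model card / Moonshot's Kimi-Audio package.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "moonshotai/Kimi-Audio-7B-Instruct",
    trust_remote_code=True,  # custom audio architecture lives in the repo
    device_map="auto",
)
```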
merve posted an update 3 days ago
Don't sleep on Meta's new vision-language release! 🔥

facebook/perception-encoder-67f977c9a65ca5895a7f6ba1
facebook/perception-lm-67f9783f171948c383ee7498

Meta dropped Swiss Army knives for vision with an Apache 2.0 license 👏
> image/video encoders for vision-language modelling and spatial understanding (object detection, etc.) 👏
> The vision LM outperforms InternVL3 and Qwen2.5VL 👏
> They also release gigantic video and image datasets

The authors attempt to come up with a single versatile vision encoder that aligns across a diverse set of tasks.

They trained Perception Encoder (PE) Core: a new state-of-the-art family of vision encoders that can be aligned for both vision-language and spatial tasks. On zero-shot image tasks, it outperforms the latest SOTA, SigLIP2 👏

> Among the fine-tuned ones, the first is PE-Spatial: a model for bounding-box detection, segmentation, and depth estimation that outperforms all other models 😮

> The second is PLM, the Perception Language Model, which combines PE-Core with the Qwen2.5 7B LM. It outperforms all other models (including InternVL3, which was also trained with a Qwen2.5 LM!)

The authors release the following checkpoints in base, large, and giant sizes:

> 3 PE-Core checkpoints (224, 336, 448)
> 2 PE-Lang checkpoints (L, G)
> 1 PE-Spatial checkpoint (G, 448)
> 3 PLM (1B, 3B, 8B)
> Datasets

The authors release the following datasets 📑
> PE Video: a gigantic video dataset of 1M videos with 120k expert annotations ⏯️
> PLM-Video and PLM-Image: human- and auto-annotated image and video datasets on region-based tasks
> PLM-VideoBench: a new video benchmark on MCQA
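Since PE-Core is a CLIP-style encoder, a hypothetical zero-shot sketch with the transformers pipeline is below; the checkpoint id and pipeline support are assumptions, so check the collections above for the real usage:

```python
# Hypothetical sketch: CLIP-style zero-shot classification with PE-Core.
# The repo id and transformers pipeline support are assumptions.
from transformers import pipeline

clf = pipeline(
    "zero-shot-image-classification",
    model="facebook/PE-Core-L14-336",  # assumed checkpoint id
)
print(clf("cat.jpg", candidate_labels=["a cat", "a dog", "a bird"]))
```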
danielhanchen posted an update 3 days ago
🦥 Introducing Unsloth Dynamic v2.0 GGUFs!
Our v2.0 quants set new benchmarks on 5-shot MMLU and KL Divergence, meaning you can now run & fine-tune quantized LLMs while preserving as much accuracy as possible.

Llama 4: unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF
DeepSeek-R1: unsloth/DeepSeek-R1-GGUF-UD
Gemma 3: unsloth/gemma-3-27b-it-GGUF

We made selective layer quantization much smarter. Instead of modifying only a subset of layers, we now dynamically quantize all layers, so every layer can use a different bit width. Our dynamic method can now be applied to all LLM architectures, not just MoEs.

Blog with Details: https://docs.unsloth.ai/basics/dynamic-v2.0

All our future GGUF uploads will leverage Dynamic 2.0 and our hand-curated 300K–1.5M token calibration dataset to improve conversational chat performance.

For accurate benchmarking, we built an evaluation framework to match the reported 5-shot MMLU scores of Llama 4 and Gemma 3. This allowed apples-to-apples comparisons between full-precision vs. Dynamic v2.0, QAT and standard iMatrix quants.
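To make "5-shot MMLU" concrete, here is a generic sketch of how such a harness builds prompts (an illustration, not Unsloth's actual framework): each test question is preceded by five solved examples from the same subject, and the model's completion among A-D is scored.

```python
# Generic 5-shot MMLU prompt construction (illustration only).
def format_example(question, choices, answer=None):
    letters = "ABCD"
    s = question + "\n" + "\n".join(
        f"{letters[i]}. {c}" for i, c in enumerate(choices)
    )
    # Leave the answer blank for the test question the model must complete.
    return s + "\nAnswer:" + (f" {answer}" if answer else "")

def build_5shot_prompt(dev_examples, test_question, test_choices):
    shots = "\n\n".join(format_example(q, c, a) for q, c, a in dev_examples[:5])
    return shots + "\n\n" + format_example(test_question, test_choices)
```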

Dynamic v2.0 aims to minimize the performance gap between full-precision models and their quantized counterparts.
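A minimal sketch of running one of these quants locally with llama-cpp-python; the filename glob is an assumption, so pick an actual .gguf from the repo's file list:

```python
# Hedged sketch: run an Unsloth Dynamic v2.0 GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/gemma-3-27b-it-GGUF",
    filename="*Q4_K_M*",  # glob over quant files; verify the exact name
    n_ctx=8192,           # context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain KL divergence briefly."}]
)
print(out["choices"][0]["message"]["content"])
```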
AdinaY posted an update 5 days ago
MAGI-1 🪄 an autoregressive diffusion video model, released by Sand AI

sand-ai/MAGI-1

✨ 24B with Apache 2.0
✨ Strong temporal consistency
✨ Benchmark-topping performance
merve posted an update 5 days ago
New foundation model for image and video captioning just dropped by NVIDIA AI 🔥

Describe Anything Model (DAM) is a 3B vision language model that generates detailed captions with localized references 😮

The team released the models, the dataset, a new benchmark and a demo 🤩 nvidia/describe-anything-680825bb8f5e41ff0785834c

Most vision LMs focus on the image as a whole, lacking localized references in captions and not taking in visual prompts (points, boxes, drawings around objects)

DAM addresses this on two levels: a new vision backbone that takes in focal crops alongside the full image, and a large-scale dataset 👀

They generate the dataset by extending existing segmentation and referring-expression datasets like RefCOCO, passing the images and classes to VLMs to generate captions.

Lastly, they also release a new benchmark, again with self-supervision: an LLM evaluates the detailed captions, focusing on localization 👏
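A conceptual sketch of that two-level input (full image plus focal crop); the model class and method below are hypothetical placeholders, and the real usage lives in NVIDIA's describe-anything code and model cards:

```python
# Conceptual sketch of DAM-style localized captioning inputs.
from PIL import Image

image = Image.open("street.jpg")
box = (120, 80, 420, 360)       # user-supplied region prompt (x0, y0, x1, y1)
focal_crop = image.crop(box)    # localized view fed alongside the full image

# Hypothetical API -- see NVIDIA's describe-anything repo for the real one:
# model = DescribeAnything.from_pretrained("nvidia/DAM-3B")
# caption = model.describe(image=image, crop=focal_crop, box=box)
```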
AdinaY posted an update 6 days ago
AdinaY posted an update 7 days ago
AdinaY posted an update 11 days ago
Wan2.1-FLF2V 🎥 a 14B start-end-frame video generation model just released by Alibaba_Wan 🔥

Wan-AI/Wan2.1-FLF2V-14B-720P

✨ Give it two images (start & end) and it generates a smooth, high-quality video in between (see the sketch below).
✨ Apache 2.0 licensed
✨ Built on DiT + Flow Matching
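A hedged diffusers sketch of that start/end-frame conditioning; the pipeline class, the last_image argument, and the "-diffusers" repo id are assumptions to verify against the model card:

```python
# Hedged sketch: first/last-frame video generation with diffusers.
import torch
from diffusers import WanImageToVideoPipeline  # assumed pipeline class
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-FLF2V-14B-720P-diffusers",  # assumed diffusers-format repo
    torch_dtype=torch.bfloat16,
).to("cuda")

frames = pipe(
    image=load_image("start.png"),
    last_image=load_image("end.png"),  # end-frame conditioning (assumed kwarg)
    prompt="smooth cinematic transition",
    num_frames=81,
).frames[0]
export_to_video(frames, "out.mp4", fps=16)
```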
AdinaY posted an update 13 days ago
After yesterday's wave of reveals, here's what's going down today in the Chinese AI community 🔥

✨ Kuaishou unveiled Kling AI 2.0
https://klingai.com/global/

✨ MiniMax AI dropped their latest TTS model Speech-02
https://minimax.io/audio

✨ Tencent Hunyuan teased the upcoming open model - Hunyuan Portrait
HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation (2503.18860)

✨ ModelScope launched an MCP Square, with 1,500 MCPs already online
https://modelscope.cn/mcp

And it's only Tuesday🌞
bartowski posted an update 13 days ago
Access requests enabled for latest GLM models

While a fix is being implemented (https://github.com/ggml-org/llama.cpp/pull/12957), I want to leave the models up for visibility and continued discussion, but I want to prevent accidental downloads of known-broken models (even though there are settings that could fix them at runtime for now)

With this goal, I've enabled access requests. I don't really want your data, so I'm sorry that I don't think there's a way around that? But that's what I'm gonna do for now, and I'll remove the gate when a fix is up and verified and I have a chance to re-convert and quantize!

Hope you don't mind in the meantime :D
AdinaY posted an update 14 days ago
🔥 Big day for the Chinese open source AI community: zh-ai-community

> Skywork AI:
Released 7B/32B reasoning models that excel in math & coding
Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

> Moonshot AI & Numina:
Dropped 1.5B/7B POWERFUL formal math reasoning models
AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9

> Zhipu AI:
Launched 9B/32B reasoning models powering their first general AI agent - AutoGLM ✨
THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e

> DeepSeek:
Announced it will open-source its internal inference engine: DeepSeek Inference Engine
https://github.com/deepseek-ai/open-infra-index/blob/main/OpenSourcing_DeepSeek_Inference_Engine/README.md

Can't wait for more exciting releases 🥳


merve posted an update 14 days ago
sooo many open AI releases this past week, let's summarize! 🤗
merve/april-11-releases-67fcd78be33d241c0977b9d2

multimodal
> Moonshot AI released Kimi VL Thinking, the first working open-source multimodal reasoning model, and Kimi VL Instruct; both are 16B MoEs with 3B active params (OS)
> InternVL3 released, based on Qwen2.5VL, with 7 checkpoints in various sizes (1B to 78B)

LLMs
> NVIDIA released Llama-3_1-Nemotron-Ultra-253B-v1, an LLM built on Llama 405B for reasoning, chat, and tool use
> Agentica released DeepCoder-14B-Preview, a fine-tuned version of DeepSeek-R1-Distilled-Qwen-14B on problem-test pairs, along with the compiled dataset
> Zyphra/ZR1-1.5B is a new small reasoning LLM built on R1-Distill-1.5B (OS)
> Skywork-OR1-32B-Preview is a new reasoning model by Skywork

Image Generation
> HiDream released three new models for image generation: HiDream I1 Dev, I1 Full, and I1 Fast (OS)

*OS ones have Apache 2.0 or MIT licenses
AdinaY posted an update 14 days ago
🔥 New reasoning models from the Chinese community, by Skywork 天工-昆仑万维

Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

✨Skywork OR1-Math-7B > Optimized for math reasoning
✨Skywork-OR1-7B-preview > Excels in math & coding
✨Skywork-OR1-32B-preview > Matches DeepSeek-R1 on math (AIME24/25) and coding (LiveCodeBench)

Released under the Apache 2.0 license 🥳
Final version coming in 2 weeks!
AdinaY posted an update 17 days ago
Shanghai AI Lab's OpenGVLab team just released InternVL3 🔥

OpenGVLab/internvl3-67f7f690be79c2fe9d74fe9d

✨ 1/2/8/9/14/38/78B with MIT license
✨ Stronger perception & reasoning vs InternVL 2.5
✨ Native Multimodal Pre-Training for even better language performance
AdinaY posted an update 19 days ago
Moonshot AI 月之暗面 🌛 @Kimi_Moonshot just dropped an MoE VLM and an MoE reasoning VLM on the Hub!!

Model: https://huggingface.co/collections/moonshotai/kimi-vl-a3b-67f67b6ac91d3b03d382dd85

✨ 3B active params (16B MoE) with MIT license
✨ Long context window up to 128K
✨ Strong multimodal reasoning (36.8% on MathVision, on par with 10x larger models) and agent skills (34.5% on ScreenSpot-Pro)
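A minimal loading sketch; the exact repo id inside the collection is an assumption, and image+text generation follows the chat-template example on the model card:

```python
# Hedged sketch: load Kimi-VL with transformers (custom code in the repo).
from transformers import AutoModelForCausalLM, AutoProcessor

repo = "moonshotai/Kimi-VL-A3B-Instruct"  # assumed id from the collection
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
# Build image+text chat inputs per the model card, then model.generate(...).
```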
AdinaY posted an update 20 days ago
IndexTTS 📢 a TTS built on XTTS + Tortoise, released by BiliBili - a Chinese video sharing platform/community.
Model: IndexTeam/Index-TTS
Demo: IndexTeam/IndexTTS

✨Chinese pronunciation correction via pinyin
✨Pause control via punctuation
✨Improved speaker conditioning & audio quality (BigVGAN2)
✨Trained on 10k+ hours
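A short inference sketch following the project's README; the import path and infer() signature are assumptions, so check IndexTeam/Index-TTS before use:

```python
# Hedged sketch: IndexTTS voice cloning per the project README (verify API).
from indextts.infer import IndexTTS

tts = IndexTTS(model_dir="checkpoints", cfg_path="checkpoints/config.yaml")
# The reference clip conditions speaker timbre; punctuation controls pauses.
tts.infer("reference_voice.wav", "Hello, this is a quick test.", "gen.wav")
```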


AdinaY posted an update 20 days ago
MAYE 🎈 a from-scratch RL framework for Vision Language Models, released by GAIR, an active research group from the Chinese community.

✨Minimal & transparent pipeline with standard tools
✨Standardized eval to track training & reflection
✨Open Code & Dataset

Code: https://github.com/GAIR-NLP/MAYE?tab=readme-ov-file
Dataset: ManTle/MAYE
Paper: Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme (2504.02587)