
Qwen 2.5 72B Benchmarked – World's Best Open-Source AI Model?



The new Qwen 2.5 72B LLM is getting a lot of rave reviews, but how does it stack up in real-world testing? I'm testing out the 72B …


6 Comments

  1. What do you mean it "failed" the first two tests? Does the game actually work if you put a PNG image in the corresponding folder?
    Btw, there's a solid argument to be made that if the scenario you proposed really was the "best" plan all of humankind could put together, the correct thing to do would be not to save us.

  2. Hi, I tried several versions of Qwen+Calme 70B, 72B, and 78B on LM Studio with all sorts of quants; Q5 and Q6 seem to perform best, but I didn't find any that reach a sufficient conversational speed. The 3090 itself seems to work. While I have read the definitions of K_S, K_M, and so on, I haven't really absorbed the concept yet, and from one model to the next the "best-performing model for my hardware" isn't always the same. The cozy spot is around 16 GB even though the device has 24 GB. What am I missing? What settings should I tweak? (See the sizing sketch after these comments.)

  3. I use this 32B model primarily for coding now. It's done so well that I wonder if they trained it against Claude 3.5 coding output, because it is very good. I wish one of these companies would make a hyper-focused coding-corpus model so that it can fit into 48 GB of VRAM at very high precision.
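
Regarding the quant question in comment 2: a quick way to see why conversational speed tanks is to estimate the memory footprint of the quantized weights. A 72B model at Q5/Q6 needs roughly 50–60 GB for the weights alone, far more than a 3090's 24 GB, so LM Studio/llama.cpp has to offload most layers to CPU RAM and generation slows to a crawl. The sketch below is only a back-of-envelope estimator: the bits-per-weight figures for the K-quants and the fixed overhead allowance for KV cache and activations are rough assumptions, not exact values.

```python
# Back-of-envelope estimate of quantized-model memory footprint vs. GPU VRAM.
# Bits-per-weight values are approximate averages for llama.cpp-style K-quants;
# exact figures vary by model architecture and quant mix.

QUANT_BITS = {
    "Q4_K_S": 4.6,   # smaller mix, slightly lower quality
    "Q4_K_M": 4.8,   # "medium" mix: some tensors kept at higher precision
    "Q5_K_S": 5.5,
    "Q5_K_M": 5.7,
    "Q6_K":   6.6,
    "Q8_0":   8.5,
}

def weights_gb(params_b: float, quant: str) -> float:
    """Approximate in-memory size of the quantized weights, in GB."""
    bits = QUANT_BITS[quant]
    return params_b * 1e9 * bits / 8 / 1e9

def fits_on_gpu(params_b: float, quant: str, vram_gb: float,
                overhead_gb: float = 3.0) -> bool:
    """True if weights plus a rough allowance for KV cache/activations fit in VRAM."""
    return weights_gb(params_b, quant) + overhead_gb <= vram_gb

if __name__ == "__main__":
    # A 72B model on a 24 GB card (e.g. RTX 3090): nothing fully fits.
    for quant in QUANT_BITS:
        size = weights_gb(72, quant)
        print(f"72B {quant:7s} ~{size:5.1f} GB weights -> fits in 24 GB: "
              f"{fits_on_gpu(72, quant, 24)}")
```

This also hints at why the "cozy spot" sits around 16 GB on a 24 GB card: leaving a few GB free for the KV cache and context means the weights themselves need to stay well under the full VRAM, which on a 3090 points toward ~30B-class models or aggressive quants rather than a fully offloaded 72B.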
