Linux
Intel Ultra 9 285K: How Is It On Linux?
Main channel vid:
**********************************
Thanks for watching our videos! If you want more, check us out online at the following places:
+ Website:
+ Forums:
+ Store:
+ Patreon:
+ L1 Twitter:
+ L1 Facebook:
+ Wendell Twitter:
+ Ryan Twitter:
+ Krista Twitter:
+ Business Inquiries/Brand Integrations: Queries@level1techs.com
*IMPORTANT* Any email lacking "level1techs.com" should be ignored and immediately reported to Queries@level1techs.com.
———————————————————————————————————–
Intro and Outro Music By: Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
I didn't know you had a Linux channel
Arrow in the lake
Now that I have my 14900K running stable and with stable RAM, I will be staying put for now.
Light travels 7 times around the world in 1 sec, but in one period of a 5.7 GHz clock it only travels 5.3 cm (2") – and electrical signals travel slower than the speed of light. How CPUs function at these clock speeds is amazing!
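A quick sanity check of that arithmetic (speed of light and clock frequency are the only inputs):

```python
# Distance light travels in one clock period at 5.7 GHz.
C = 299_792_458        # speed of light in vacuum, m/s
FREQ = 5.7e9           # 5.7 GHz clock

period_s = 1 / FREQ
distance_m = C * period_s
print(f"{distance_m * 100:.2f} cm per clock period")  # ~5.26 cm
```

That lines up with the ~5.3 cm (about 2 inches) figure, and on-die signals are slower still, so even crossing a large die in one cycle is a real constraint.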
I wonder how well it will run on Russian Linux
Time for AMD and Intel to start trying out the Apple thing and introduce really fast RAM chiplets/tiles.
We have the technology. Imagine the potential performance advantages. Cheers!
Thanks; just me, but I'll stick with the TR1950X, as I have a solar farm that powers the CPU I have.
I knew it, Windows is just dead weight at this point. I am all AMD, and will most likely stay that way forever, but I knew the reason those new CPUs were behind was Windows being terrible. I will never update to Windows 11, and I can't wait for some programs such as AMD Adrenalin to be available on Linux so I can fully make the jump.
keen to see some Optane SSD benchmark comparisons with 14th gen
I must be really old, cause the thumbnail keeps making me think you are holding an original Nintendo game cartridge
Gamers might be disappointed by it, but I think this is Intel trying to compete against the ARM CPUs.
You never did another video after Intel's "final" microcode for 13th/14th gen. Could you? Did it fix all the servers? Sorry, I don't have any questions about the 200S series; I don't care about it, and you don't seem to either. Your non-enthusiasm for this unstable platform (yours seems unstable too, with the memory errors) is clear; all you could really do was show Intel's slides, talk about them, and show video of a broken computer on your end.
It's like the question is "Is the new Intel platform good?", and your answer is
"Linux is different from Windows, and I work on servers, and that's cool."
Hey Wendell, great work! Just a small correction: the memory latency is not an inherent problem of the Foveros packaging, but of the architectural layout of Arrow Lake. In Arrow Lake the memory controller is separated from the compute tile with the cores, necessitating a trip over the fabric to access memory. The better way, in the short term, is to include the memory controller in the compute tile, as in the Xeon 6 parts. A disaggregated memory controller is better in the long term, but Intel's current implementation has problems.
Does the GCC code path optimization include support and/or tuning for the wider ALU execution units per core?
Zen 2/3/4 have 4 integer ALUs; Intel's Core also had 4 until Alder Lake (12th through 14th gen), which has 5 integer ALUs.
Both Zen 5 and Arrow Lake have 6 ALUs.
I feel that both uarchs will require compiler optimizations to take advantage of the wider execution paths per core, though Zen 5 less so due to its SMT implementation. Arrow Lake will need even tighter compiler optimization to utilize the wider ALUs.
Any optimizations Intel may push for will benefit Zen5 as well.
Aaahh Linux, with its 8.9 billion different distros, and still 99.9999999% of the population have never even heard of it. Good for tools/utilities, but other than that, you can keep it.
Thread thrashing is real on Windows …
I'm always in the market for new server hardware to refresh my development servers.
The number-one problem is always: once I configure and price up a base-level offering, I head over to that auction site and have a look at what's available for around the same price in barely-used mobo+CPU+RAM combos. Then I end up in a vast rabbit hole of comparing cheap maxed-out Epyc combos, with limitless cores and a tonne of ECC RAM… and waste the next two days playing what-if. Then I give up and get back to work.
Will get it to run dual RTX 5090s on ProArt boards with dual x16 PCIe slots. 2x32GB will finally get to a reasonable AI platform on a budget while being totally awesome.
Any Linux benchmark?
So you also run Nobara, even though you can easily make your own OS like I do. Yet what I don't understand is why you're such a monster that you need desktop icons. Do you not have all that free space on your dash bar? Can you not just easily throw your most-used shit on your dash bar and have a clean desktop? Use Variety on Linux and move on with your life. I don't get keeping digital clutter on your desktop.
Asus the best
Very nice to hear about the CPUs from a Linux standpoint. I'm more focused on the laptop variants.
more vids on hardware on linux, plz
Mo betta blues… Blue; Intel; get it?
Tokens per second in Llama 3.1 405B using CPU+RAM? Just for the lolz
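For the lolz, a back-of-envelope estimate. On CPU, decode speed is roughly memory-bandwidth bound: every generated token has to stream all the weights through the memory controller. The quantization level, bandwidth figure, and the "weights only, no KV cache" simplification below are all illustrative assumptions, not measurements:

```python
# Rough memory-bandwidth-bound decode estimate for a 405B-parameter model.
# All inputs are assumptions for illustration.
params = 405e9
bytes_per_param = 0.5                 # 4-bit quantization
weight_bytes = params * bytes_per_param          # ~202.5 GB read per token

# Dual-channel DDR5-6400: 2 channels x 8 bytes x 6400 MT/s (theoretical peak)
bandwidth_bps = 2 * 8 * 6400e6                   # ~102.4 GB/s

tokens_per_s = bandwidth_bps / weight_bytes
print(f"~{tokens_per_s:.2f} tokens/s")           # ~0.5 tokens/s, best case
```

And that's before noticing that ~203 GB of weights doesn't fit in a 2x32GB desktop, so in practice you'd be paging from SSD and land far below even that.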
It's six cores for the Ultra 5; wish it was 5, would have gotten it instantly!
What is the CPU frequency scaling doing? What happens to webXPRT if you `cpupower frequency-set -g performance`?
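Good question to isolate. One way to see what each core is doing before and after running that command is to read the cpufreq sysfs files directly; a minimal Python sketch, assuming the standard `/sys/devices/system/cpu/cpuN/cpufreq` layout (which may be absent in VMs or containers):

```python
from pathlib import Path

def read_governors(base="/sys/devices/system/cpu"):
    """Return {cpu_name: governor} for every CPU exposing cpufreq in sysfs."""
    govs = {}
    for gov_file in sorted(Path(base).glob("cpu[0-9]*/cpufreq/scaling_governor")):
        cpu = gov_file.parent.parent.name       # e.g. "cpu0"
        govs[cpu] = gov_file.read_text().strip()
    return govs

if __name__ == "__main__":
    govs = read_governors()
    if not govs:
        print("cpufreq not exposed on this system")
    for cpu, gov in sorted(govs.items()):
        print(f"{cpu}: {gov}")
```

If webXPRT jumps after switching everything to `performance`, the scheduler/governor interaction is the culprit rather than the silicon.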
Thanks for your balanced opinion – I value it.
Imagine shrinking 2x and not gaining 2x in perf like it was before…
Honestly both Intel and AMD have such atrocious chip names now I've lost interest.
I am very interested in an ECC support comparison between current Intel and AMD. Please, most likable person on YT!
Wendell… timestamps, timestamps!
GameModeRun shouldn't need any special adjustments for Arrow Lake, since the E- vs. P-core detection simply looks for differences in the max frequency among the cores and uses the set that reports the higher number, so that should already work out of the box. At least with v1.8.2, since my initial implementation in v1.8.0 had a bug for E- vs. P-cores (I didn't have such a system, so I could never test it).
edit: OK, spoke too soon. I added a 5% safety limit in the detection code, and it appears that some individual cores on at least the 13900K can boost more than 5% over the other P-cores' max frequency, leading to GameModeRun pinning the game to only those 4 cores that can boost.
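The heuristic described above can be sketched in a few lines. GameMode itself is written in C; this Python version, the function name, and the 5% band are illustrative only:

```python
def pick_game_cores(max_freqs, safety=0.05):
    """Given {cpu: max_freq_khz}, return the CPUs treated as P-cores.

    Mirrors the heuristic described above: cores whose max frequency is
    within `safety` (5%) of the overall maximum count as performance
    cores. On a homogeneous CPU every core lands in the band, so all
    cores are returned.
    """
    top = max(max_freqs.values())
    cutoff = top * (1 - safety)
    return sorted(cpu for cpu, f in max_freqs.items() if f >= cutoff)
```

The 13900K failure mode follows directly: if two favored cores boost more than 5% above the remaining P-cores, e.g. `{"cpu0": 6000000, "cpu1": 5500000, "cpu2": 5500000}`, the cutoff lands above the ordinary P-cores and only the boosted cores are selected.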