Reviews of the 6-, 8-, and 10-core versions are out, and the CPUs themselves should be available next week. The 12-core variant is coming somewhat later, and the 14+ core parts may or may not make it to market this year, as they seem to have been a late addition to the lineup.
Compared to AMD's current top of the line, the Ryzen 7 1800X, the new Core i9-7900X offers:
more CPU cores
more memory channels
higher max turbo
typically higher IPC
AVX-512 support
So it shouldn't be surprising that the Core i9-7900X nearly always beats the Ryzen 7 1800X. But its real competitor is not Ryzen 7 but Threadripper, which will flip the core-count advantage to AMD and match the number of memory channels. Threadripper also isn't out quite yet.
So instead, let's compare the 8-core Core i7-7820X to Ryzen 7. That evens the core count while letting the Intel CPU retain the rest of its advantages. And this time, the Ryzen chip wins a lot of benchmarks. Not all, nor even a majority. AMD loses badly in benchmarks that are heavily memory bound or lean hard on AVX-512, and typically loses substantially in single-threaded benchmarks.
But it still wins quite a few, to the degree that Ryzen 7 is a legitimate alternative to the Core i7-7820X. Ryzen 7 also does so while using less power and costing $160 less, a gap likely exceeding $200 once motherboard costs are factored in.
Stop and think about that for a moment. People considering Intel's HEDT platform have good reason to consider AMD's mainstream consumer platform as a viable alternative. When has that ever happened? I'm leaning toward "never".
And then what happens when Threadripper arrives? Suddenly AMD has more cores and the memory channel count is even. Higher max turbo matters a lot in the consumer space, but a lot less to people looking at HEDT. And Intel's typically higher IPC can be eaten up by the extra power it takes to deliver, which can force lower clock speeds.
Even once the 18-core Skylake-X arrives, it's not at all clear that it will be able to convincingly beat Threadripper. That's not to say that Threadripper will be the clear king of the HEDT market. But for both AMD and Intel to have plausible claims to offering the best HEDT consumer CPU on the market will be quite a change from the last decade.
That's not to say that Zen cores are better than Skylake cores. They're not clearly superior, but they are clearly competitive, and today's Skylake-X reviews pretty much confirm that.
So what about AVX-512 support, which Skylake-X offers and Threadripper won't? I'm inclined to say it doesn't matter much--not just that it doesn't matter much yet, but that it never will outside of a handful of weird corner cases. The wider the vector instructions get, the harder it is to use them well. SSE has been around for many years, and we still don't have a good programming model to use it beyond hoping that the compiler will occasionally do something clever.
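To put that in concrete terms, here's a minimal sketch in C++ (file and function names are just for illustration) of what that programming model amounts to in practice: write a plain scalar loop, turn on optimization, and hope the compiler emits SSE, AVX2, or AVX-512 underneath. Nothing in the source controls which of those you actually get.

// saxpy.cpp - the "hope the compiler does something clever" model.
// Build with something like: g++ -O3 -march=native saxpy.cpp
// Whether this loop compiles to scalar, SSE, AVX2, or AVX-512 code is
// entirely up to the compiler and the target flags, not the source.
#include <cstddef>
#include <cstdio>
#include <vector>

void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];   // simple enough that compilers usually vectorize it
}

int main() {
    std::vector<float> x(1024, 1.0f), y(1024, 2.0f);
    saxpy(3.0f, x.data(), y.data(), x.size());
    std::printf("%f\n", y[0]);    // expect 5.0
}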
The other problem is GPUs. If the algorithm has sufficient parallelism and the programmer has sufficient time to make well-threaded code that uses maximum width AVX instructions all over the place, why not use that effort instead to make a good GPU implementation? While GPU programming is a different paradigm from CPU programming, how to properly exploit parallelism on a GPU is for the most part a solved problem. In many cases, a single $700 GPU will handily outperform the best 2-socket Xeon or Epyc system that you'll be able to buy by the end of this year.
Yes, higher end Xeons still have more cores than Threadripper will. But not necessarily more than the 32-core Epyc, unless you count Xeon Phi, which is an entirely different category of product. And also a stupid idea. Epyc also has a plausible solution to the multi-socket scaling problem that Xeon has suffered for many years due to low QPI bandwidth.
Epyc also has vastly more PCI Express lanes than Xeon, in case the plan was connecting other devices to do the heavy lifting rather than having most of the workload on the CPU. For that use case, Xeon looks like it's going to be pretty ridiculous very shortly. I don't think it's a coincidence that AMD is willing to offer more PCI Express connectivity than Intel, as AMD sells GPUs that need it and Intel doesn't. If Nvidia made server CPUs, they'd probably offer a ton of bandwidth to connect GPUs, too.
I've been saying for months now that Intel's server division should be scared. Today's reviews are more evidence of that proposition.
Comments
Not really any surprises on the CPU benchmarks though. Good to see the 7740K is getting hammered as the pretty redundant SKU that it is.
Far as I am concerned the Kaby Lake generation is far superior. Anyone recommending an X299 chip should have their head examined. When you get down to it, the minimal performance gain is heavily offset by the price difference.
Really makes me wonder WTF Kaby-X is out there for. There are "Combo Deals" with a motherboard + Kaby-X CPU, and they are running around $500-$600.
I guess, though, looking at it... a ~roughly equivalent~ Z270 motherboard would run you around $150-200 (the current crop of X299s aren't the budget models, they're the higher end performance line, so trying to keep it roughly apples to apples). If I were to drop an i7 7700 at $300, and one of these motherboards, I'd be around $450-$500.
$0-$100 difference in Kaby vs Kaby X once you look at the entire system, and it nets you quad channel memory, but loses IGP capability (which, as useless as it is in a gaming rig, does have some limited use when troubleshooting GPU issues or doing video encoding). It's not ~as bad~ if you look at it like that, but not everyone (in fact, probably an extreme minority) goes out and drops $200+ on a motherboard in a DIY build - I would say the $70-$150 segment is probably the most popular by far. You lose out on the cool looking custom heatsinks and LED lighting schemes, but it's pretty much the same hardware past that.
Really just looks like Intel + vendors are making a cash grab at the gaming culture - maybe we can get them to pay more for essentially the same product ~if~ we put some stickers and LEDs on it and let you change the color of them.
It's not that it's impossible to use it. There just aren't any nice ways to do it analogous to, say, the nice ways we have for making CPU code scale to many threads. You can use intrinsics, which are guaranteed to work, except that the code then completely fails on older CPUs that lack the instructions you used and never exploits the full width of newer ones. You can use OpenMP, which works fine in the simplest cases, but quickly falls apart if you need something tricky. You can use OpenCL and try to cram CPU code into tools designed for GPUs, but if you're going to do that, why not just use a GPU?
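To illustrate the intrinsics route from the comment above, here's a hedged sketch (C++, names invented for the example) of a 128-bit SSE array add. The vector width is baked into the source: this version runs on essentially any x86-64 CPU but never touches the 256-bit or 512-bit units of newer chips, while rewriting it with _mm256_* or _mm512_* intrinsics would instead fault with an illegal instruction on CPUs lacking AVX or AVX-512.

#include <immintrin.h>
#include <cstdio>

void add_arrays_sse(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {                  // 4 floats per 128-bit register
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i)
        out[i] = a[i] + b[i];                     // scalar tail for leftovers
}

int main() {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    add_arrays_sse(a, b, out, 8);
    std::printf("%f %f\n", out[0], out[7]);       // expect 9.0 9.0
}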
Sure makes the Ryzen 1700 look good with 24 PCIe lanes.
Sure, for servers I could see having a lot of NVMe-type devices, or a lot of GPUs in like a render/crypto farm or AI server application.
But for your standard gamer who has 1 GPU, and maybe 1 or 2 NVMe devices....
Even SLI/CFX hasn't really been all that bottlenecked at x8/x8 versus x16/x16.