
Skylake-X reviews are out

Quizzical Member Legendary Posts: 25,509
Reviews of the 6-, 8-, and 10-core versions are out, and the CPUs themselves should be available next week.  The 12-core variant is coming somewhat later, and the 14+ core models may or may not make it to market this year, as they seem to have been a late addition to the lineup.

As compared to AMD's current top of the line Ryzen 7 1800X, the new Core i9-7900X offers:

more CPU cores
more memory channels
higher max turbo
typically higher IPC
AVX-512 support

So it shouldn't be surprising that the Core i9-7900X nearly always beats the Ryzen 7 1800X.  But its real competitor is not Ryzen 7 but Threadripper, which will flip the core-count advantage to AMD and match the number of memory channels.  Threadripper also isn't out quite yet.

So instead, let's compare the 8-core Core i7-7820X to Ryzen 7.  That evens the core count while letting the Intel CPU retain the rest of its advantages.  And this time, the Ryzen chip wins a lot of benchmarks.  Not all, nor even a majority.  AMD loses badly in benchmarks that are heavily memory bound or lean hard on AVX-512, and typically loses substantially in single-threaded benchmarks.

But it still wins quite a few, to the degree that Ryzen 7 is a legitimate alternative to the Core i7-7820X.  Ryzen 7 also does it while using less power and costing $160 less--likely a gap exceeding $200 once motherboard costs are factored in.

Stop and think about that for a moment.  People considering Intel's HEDT platform have good reason to consider AMD's mainstream consumer platform as a viable alternative.  When has that ever happened?  I'm leaning toward "never".

And then what happens when Threadripper arrives?  Suddenly AMD has more cores, and the memory channel count is even.  A higher max turbo matters a lot in the consumer space, but much less to people looking at HEDT.  And typically higher IPC can be eaten up by the extra power consumption needed to achieve it, which can force lower clock speeds.

Even once the 18-core Skylake-X arrives, it's not at all clear that it will be able to convincingly beat Threadripper.  That's not to say that Threadripper will be the clear king of the HEDT market.  But for both AMD and Intel to have plausible claims to offering the best consumer HEDT CPU on the market will be quite a change from the last decade.

That's not to say that Zen cores are better than Skylake cores.  They're not clearly superior, but they are clearly competitive, and today's Skylake-X reviews pretty much confirm that.

So what about AVX-512 support, which Sky Lake-X offers and Threadripper won't?  I'm inclined to say it doesn't matter much--not just that it doesn't matter much yet, but that it never will outside of a handful of weird corner cases.  The wider the vector instructions get, the harder it is to use them well.  SSE has been around for many years, and we still don't have a good programming model to use it beyond hoping that the compiler will occasionally do something clever.
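
To make that concrete, here's a minimal sketch (my own toy example, not from any review; the function names are just mine) of the two options you actually have: write a plain loop and hope the compiler auto-vectorizes it, or hand-code intrinsics and bake one vector width into the source.

// Build with something like: gcc -O2 sse_add.c
#include <stdio.h>
#include <emmintrin.h>  // SSE/SSE2 intrinsics

// Plain C: correctness is easy; whether it vectorizes is up to the compiler.
void add_scalar(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}

// Explicit SSE: guaranteed vector code, but the 4-float width and the
// instruction set are now hard-coded into the source.
void add_sse(float *dst, const float *a, const float *b, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(dst + i,
            _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; i++)  // scalar tail for lengths not divisible by 4
        dst[i] = a[i] + b[i];
}

int main(void) {
    float a[10], b[10], dst[10];
    for (int i = 0; i < 10; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_sse(dst, a, b, 10);
    printf("%f\n", dst[9]);  // expect 27.0
    return 0;
}

And that's the easy case.  Neither version gets you AVX-512 on chips that have it without a rewrite or recompile.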

The other problem is GPUs.  If the algorithm has sufficient parallelism and the programmer has sufficient time to make well-threaded code that uses maximum width AVX instructions all over the place, why not use that effort instead to make a good GPU implementation?  While GPU programming is a different paradigm from CPU programming, how to properly exploit parallelism on a GPU is for the most part a solved problem.  In many cases, a single $700 GPU will handily outperform the best 2-socket Xeon or Epyc system that you'll be able to buy by the end of this year.
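
For a sense of what "a solved problem" looks like, here's a rough sketch of the same vector add as an OpenCL kernel plus the minimal host code to run it.  Again my own toy example: error handling is omitted, and it just assumes the first platform and GPU it finds.  Each work-item handles one element, so the same source runs on 4 lanes or 4,000 without a rewrite.

// Build with something like: gcc vecadd_cl.c -lOpenCL
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void add(__global float *dst,\n"
    "                  __global const float *a,\n"
    "                  __global const float *b) {\n"
    "    size_t i = get_global_id(0);\n"
    "    dst[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], dst[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Boilerplate: grab the first platform/GPU and build the kernel source.
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "add", NULL);

    // Copy the inputs over, run one work-item per element, read back.
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dd = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof dst, NULL, NULL);
    clSetKernelArg(k, 0, sizeof dd, &dd);
    clSetKernelArg(k, 1, sizeof da, &da);
    clSetKernelArg(k, 2, sizeof db, &db);
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dd, CL_TRUE, 0, sizeof dst, dst, 0, NULL, NULL);
    printf("%f\n", dst[10]);  // expect 30.0
    return 0;
}

The point isn't that the host boilerplate is pretty.  It's that the kernel itself never mentions a vector width, so the hardware can get as wide as it likes.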

Yes, higher end Xeons still have more cores than Threadripper will.  But not necessarily more than the 32-core Epyc, unless you count Xeon Phi, which is an entirely different category of product.  And also a stupid idea.  Epyc also has a plausible solution to the multi-socket scaling problem that Xeon has suffered for many years due to low QPI bandwidth.

Epyc also has vastly more PCI Express lanes than Xeon, in case the plan was connecting other devices to do the heavy lifting rather than having most of the workload on the CPU.  For that use case, Xeon looks like it's going to be pretty ridiculous very shortly.  I don't think it's a coincidence that AMD is willing to offer more PCI Express connectivity than Intel, as AMD sells GPUs that need it and Intel doesn't.  If Nvidia made server CPUs, they'd probably offer a ton of bandwidth to connect GPUs, too.

I've been saying for months now that Intel's server division should be scared.  Today's reviews are more evidence of that proposition.

Comments

  • Ridelynn Member Epic Posts: 7,383
    Biggest takeaway I've seen from the early reviews is that the X299 motherboards are about as flaky as AMD's B350/X370 boards have been so far. Which says a good deal, I think.

    Not really any surprises on the CPU benchmarks though. Good to see the 7740X is getting hammered as the pretty redundant SKU that it is.
  • Ozmodan Member Epic Posts: 9,726
    Really do not like the X299 design.  Why should anyone have to buy a very expensive motherboard when, depending on the chip, you only get to use some of its components?  Talk about backasswards design, this is it.

    As far as I am concerned the Kaby Lake generation is far superior.  Anyone recommending an X299 chip should have their head examined.  When you get down to it, the minimal performance gain is heavily offset by the price difference.


  • Ridelynn Member Epic Posts: 7,383
    Wow, are X299 motherboards expensive. The cheapest one I'm seeing right now is $220, and they're averaging closer to $300-400.

    Really makes me wonder WTF Kaby-X is out there for. There are "Combo Deals" with a motherboard + Kaby-X CPU, and they are running around $500-$600.

    I guess, though, looking at it... a ~roughly equivalent~ Z270 motherboard would run you around $150-200 (the current crop of X299s aren't the budget models, they are the higher-end performance lines, so trying to keep it roughly apples to apples). If I were to drop an i7-7700 at $300, and one of these motherboards, I'd be around $450-$500.

    $0-$100 difference in Kaby vs Kaby X once you look at the entire system, and it nets you quad channel memory, but loses IGP capability (which as useless as it is in a gaming rig, does have some limited functionality when troubleshooting GPU issues or video encoding). It's not ~as bad~ if you look at it like that, but not everyone (in fact, probably an extreme minority) goes out and drops $200+ on a motherboard in a DIY build - I would say the $70-$150 segment is probably the most popular by far. You lose out on the cool-looking custom heatsinks and LED lighting schemes, but it's pretty much the same hardware past that.

    Really just looks like Intel + vendors are making a cash grab at the gaming culture - maybe we can get them to pay more for essentially the same product ~if~ we put some stickers and LEDs on it and let you change the color of them. 
  • Quizzical Member Legendary Posts: 25,509
    Ridelynn said:
    $0-$100 difference in Kaby vs Kaby X once you look at the entire system, and it nets you quad channel memory, but loses IGP capability (which as useless as it is in a gaming rig, does have some limited functionality when troubleshooting GPU issues or video encoding). 
    Kaby Lake X has two memory channels, not four.  The memory channels are in the CPU die, and sticking a die with two memory channels in a motherboard that can handle four still only lets you use two.
  • Quizzical Member Legendary Posts: 25,509
    Quizzical said:
    SSE has been around for many years, and we still don't have a good programming model to use it beyond hoping that the compiler will occasionally do something clever.
    Sorry, but that's also what libraries are for...
    If you call a library to do something or other that uses SSE or AVX vector instructions internally, then yeah, you get SSE or AVX running in your binary.  But that's not writing code that uses the instructions yourself.  That's just calling someone else's code that uses it.  And writing code to use it is enough of a pain that most people just don't.

    It's not that it's impossible to use it.  There just aren't any nice ways to do it analogous to, say, the nice ways that we have for making CPU code scale to many threads.  You can use intrinsics, which are guaranteed to work, except that it will mean the code completely fails on older CPUs that don't support the intrinsics you use and doesn't use the maximum width on newer CPUs.  You can use OpenMP, which works fine in the simplest cases, but quickly falls apart if you need something tricky.  You can use OpenCL and try to cram CPU code into tools designed for GPUs, but if you're going to do that, why not just use a GPU?
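
    Here's a small sketch of what I mean by the intrinsics problem (my own toy example, assuming GCC or Clang; the function names are mine).  Either you pin the binary to one instruction set, or you write the kernel more than once and dispatch at runtime:

    // Build with something like: gcc -O2 -fopenmp-simd dispatch.c
    #include <stdio.h>
    #include <immintrin.h>

    // AVX2 version: 8 floats per instruction, but it will crash on a CPU
    // without AVX2, hence the runtime check in main below.
    __attribute__((target("avx2")))
    static void add_avx2(float *dst, const float *a, const float *b, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8)
            _mm256_storeu_ps(dst + i,
                _mm256_add_ps(_mm256_loadu_ps(a + i), _mm256_loadu_ps(b + i)));
        for (; i < n; i++) dst[i] = a[i] + b[i];
    }

    // Baseline version: plain loop with an OpenMP hint.  Fine for a loop
    // this trivial; much less helpful once the loop body gets tricky.
    static void add_base(float *dst, const float *a, const float *b, int n) {
        #pragma omp simd
        for (int i = 0; i < n; i++) dst[i] = a[i] + b[i];
    }

    int main(void) {
        float a[16], b[16], dst[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
        // __builtin_cpu_supports checks the CPU we're running on right now,
        // not the one the binary was compiled for.
        if (__builtin_cpu_supports("avx2")) add_avx2(dst, a, b, 16);
        else                                add_base(dst, a, b, 16);
        printf("%f\n", dst[5]);  // expect 15.0
        return 0;
    }

    Now multiply that by every vector width you care about and every function in your program, and you can see why most people just don't bother.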
  • Ozmodan Member Epic Posts: 9,726
    Well, I was in Microcenter today and they had a bunch of X299 motherboards; the cheapest was $310 and there were a couple over $500, most were in the $400 range.  For $150 I can get a really nice motherboard for an i7-7700.  So that is a big difference in price.  Intel's 4-8 core CPUs in the X299 set really use only a part of the motherboard's assets, like only 16 PCIe lanes.  Rather stupid design if you ask me, but of course you can put one of the higher core-count chips in later when they are available and bingo, you can use more of the motherboard.

    Sure makes the Ryzen 7 1700 look good with 24 PCIe lanes.
  • Cleffy Member Rare Posts: 6,414
    AMD will have the same issue with Threadripper on its 8-core models. The CPU has fewer PCI-e lanes, so it won't fully utilize the motherboard.
  • Cleffy Member Rare Posts: 6,414
    Not sure until Threadripper is released. Both Intel and AMD get most of their PCI-e lanes from the CPU. What AMD is doing is putting 2 chips in 1 CPU, doubling the available resources. But the 8-core parts will be 1 chip in a CPU. Considering they are putting out more capable mobos using more PCI-e lanes and more memory, the 8-core chips will be limited on this platform.
  • Ridelynn Member Epic Posts: 7,383
    Not sure I'm entirely convinced that number of available PCIe lanes will make a meaningful difference for the average gamer.

    Sure, for servers I could see having a lot of NVMe-type devices, or a lot of GPUs in like a render/crypto farm or AI server application. 

    But for your standard gamer who has 1 GPU, and maybe 1 or 2 NVMe devices....

    Even SLI/CFX hasn't really been all that bottlenecked at x8/x8 versus x16/x16.
  • Quizzical Member Legendary Posts: 25,509
    Cleffy said:
    AMD will have the same issue with Threadripper on its 8-core models. The CPU has fewer PCI-e lanes, so it won't fully utilize the motherboard.
    That would really surprise me.  With their Epyc server CPUs, it's 4 dies for the full 128 PCI Express lanes all up and down the lineup.  The 24-core, 16-core, and 8-core versions are just salvage parts that disable some of the CPU cores on the die.  I expect AMD to do the same with Threadripper, with two dies all up and down the lineup.