Intel officially launches Sky Lake Refresh Refresh Refresh

Quizzical, Member Legendary, Posts: 25,501
If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, now you can.  It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.

This is the first part that Intel is officially calling 9th generation Core; hence the three "refresh"es as part of the title.  A quick refresher on Intel's naming convention:

Sixth generation Core:  Sky Lake
Seventh generation Core:  Kaby Lake, aka, Sky Lake Refresh
Eighth generation Core:  Kaby Lake-R, Coffee Lake-S, Kaby Lake-G, Coffee Lake-U/H, Whiskey Lake-U, Amber Lake-Y, and Cannon Lake-U; some of those could be called Sky Lake Refresh Refresh, but not all of them should have existed
Ninth generation Core:  Coffee Lake Refresh, aka, Sky Lake Refresh Refresh Refresh

(The list on eighth generation was copied from Anandtech, because I lost track of it all on my own.)

Why do I keep sticking another "refresh" onto the name?  Because it's neither a new process node nor a new CPU core.  Sky Lake Refresh topped out at four cores, Sky Lake Refresh Refresh at six cores, and Sky Lake Refresh Refresh Refresh at eight cores, but they're all essentially the same cores.  In contrast, moving from Broadwell to Sky Lake was a genuinely new CPU core, and moving from Haswell to Broadwell was a new process node.

For all Intel's talk about moving from 14 nm to 14+ nm to 14++ nm (don't they realize that an increment operator goes in the wrong direction?), it's effectively just a more mature version of the same 14 nm process node, and it never would have existed if not for their debacle on 10 nm.  Even if they someday make a 14++*^! nm process node, it still won't be competitive with TSMC 7 nm or Samsung 7 nm.

Comments

  • Ridelynn, Member Epic, Posts: 7,383
    edited October 2018
    I had the exact same feeling when Nvidia finally released the RTX cards:

    >yawn<

    But it's 5 GHz!!!!!!!!*




    *not really
  • Ridelynn, Member Epic, Posts: 7,383
    Well, I have a poor analogy here.

    Audio recording. It started out on the phonograph. That technology progressed pretty rapidly through various speeds and formats and quality until we hit magnetic tape. Then it went digital with the CD. Today it's still digital, but doesn't necessarily even have a physical format anymore.

    Now let's look at the primary function of audio recording: to preserve sound as accurately as possible. With respect to fidelity, a late-era record actually isn't all that bad; a lot of audio purists actually prefer a record to a digital recording. Radio followed a similar path: from AM, to FM with excellent fidelity and good transmission distance, to digital satellite transmission covering entire continents, to today's purely digital transmission with effectively global reach. 

    So why aren't we continuing to evolve audio recording and reach even better levels of fidelity and transmission efficiency? 

    Because it's good enough.

    Sure, you can make a higher fidelity format. But humans wouldn't be able to perceive the difference. Most people listen to mediocre sound formats anyway because they are more convenient. Very few people invest in the equipment needed to tell the difference between what is legitimately "high fidelity" and what is just standard broadcast media. And cellular data (at least at the levels required for audio) is nearly ubiquitous.

    I think CPU speed has hit that same wall, at least with our current programming paradigms. In a world where single-core IPC is still very much king, physics has us boxed in on speed and programming has us boxed in on parallelism. Even if you could make it faster, let's face it, the vast majority of the world spends the vast majority of their computing time on mobile devices running very small processors. You can throw more cores at it, but that isn't going to make your game run much faster, and it certainly has a very steep level of diminishing returns.
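    A quick way to see those diminishing returns is Amdahl's law: if only a fraction p of the work can be spread across cores, speedup is capped at 1/(1-p) no matter how many cores you add. A minimal Python sketch, where the 60% parallel fraction is a made-up illustrative number, not a measurement of any real game:

    ```python
    # Amdahl's law: upper bound on speedup when only part of a program
    # parallelizes. The 0.6 fraction below is illustrative, not measured.
    def amdahl_speedup(parallel_fraction, cores):
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    p = 0.6  # hypothetical: 60% of the frame workload parallelizes cleanly
    for cores in (1, 2, 4, 8, 16, 64):
        print(f"{cores:3d} cores -> {amdahl_speedup(p, cores):.2f}x speedup")
    # Even with infinite cores the speedup is capped at 1/(1 - p) = 2.5x,
    # which is why piling on cores doesn't make the game much faster.
    ```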

    People will default to what is convenient and inexpensive.... like they have with audio. And that's my hypothesis on why mobile has been the growth driver for more than a decade. There's no incentive to do anything other than slowly iterate on a desktop design.
  • centkin, Member Rare, Posts: 1,527
    Part of it is a lack of real competition on the high end -- the other part is that silicon is nearly topped out.
  • Galadourn, Member Rare, Posts: 1,813
    My Xeon E5 1620 can still play anything you throw at it, so I really don't see the point of newer CPU versions when even the older ones haven't reached their limits...
  • Ridelynn, Member Epic, Posts: 7,383
    https://www.forbes.com/sites/jasonevangelho/2018/10/09/intels-i9-9900k-vs-ryzen-2700x-gaming-benchmarks-are-misleading-period/#35fb2b4b4e4e

    This was an interesting read. I don't know whether it's true or not.

    If it is true, I think it says more about Intel as a company than it does about the 9000 series CPUs.
  • Ozmodan, Member Epic, Posts: 9,726
    Interesting that the new i9 series comes without a cooler. It is well known that these CPUs run quite hot, so you have to add an expensive cooler on top of the processor cost, while the 2700X comes with a very good one.

    As for any benchmark coming from Intel, you know they are going to cheat, and they obviously have with this release.  

    Also not mentioned: you can use current motherboards with this new chip, but you have to flash the BIOS to do so, something I do not like doing.

    Intel is floundering right now.  Wait until we start seeing 7 nm from AMD; they will be crying.
  • Ozmodan, Member Epic, Posts: 9,726
    Ridelynn said:
    https://www.forbes.com/sites/jasonevangelho/2018/10/09/intels-i9-9900k-vs-ryzen-2700x-gaming-benchmarks-are-misleading-period/#35fb2b4b4e4e

    This was an interesting read. I don't know whether it's true or not.

    If it is true, I think it says more about Intel as a company than it does about the 9000 series CPUs.
    Seems this is very true.  Intel has major egg on their face.  While this is a 30-minute thread, they pretty much nail Intel on a lot of untruths in its performance assertions.



    Makes the people at Principled Technologies look like a bunch of shills.
  • Keller, Member Uncommon, Posts: 602
    I delayed buying a new processor for 2 years, waiting for the new architecture. Last June I bought the anniversary CPU. Now I won't read any hardware websites for the next 2 years, but I don't mind reading this. This won't outshine my CPU.
  • Ridelynn, Member Epic, Posts: 7,383
    So the new scandal here:

    https://www.google.com/amp/s/www.techspot.com/amp/news/77313-do-need-re-review-core-i9-9900k.html

    Most motherboards aren't enforcing Intel power states properly, which means that they are "overclocking" by default.

    It really isn't news that an enthusiast board has a default setting that enables some form of overclocking. The scandal is that out of all the Z370 boards, under "stock" settings (not the same as default; this should be the baseline Intel specification), all manufacturers except ASUS were still ignoring the power limits. Under fully loaded conditions, the official TDP of 95 W is being blown past, with chips running in excess of 150 W.

    Overclocking isn't a bad thing, so long as you realize you're doing it and understand the risks. I don't know if it's Intel pushing faulty power state specs to motherboard manufacturers for their firmware (the author finds it odd that all save one exhibit the same behavior), or if it's a conspiracy of many vendors. Or is it intentional, to make the CPU look more attractive in multithreaded applications versus some other unnamed CPU with many cores available?
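    For context on what "enforcing power limits" means here: Intel's spec defines a sustained limit (PL1, nominally the TDP), a short-term boost limit (PL2), and a time window (tau) during which the chip may exceed PL1. A toy sketch of the mechanism; the 95 W figure is the rated TDP from the article, while the PL2 and tau values are illustrative placeholders, not Intel's published defaults:

    ```python
    # Toy model of Intel's PL1/PL2/tau turbo budget. Real silicon tracks an
    # exponentially weighted moving average of power; this simplification
    # just allows PL2 for the first TAU seconds of a sustained load.
    PL1 = 95.0   # sustained limit, watts (the rated TDP from the article)
    PL2 = 150.0  # short-term boost limit, watts (placeholder value)
    TAU = 8.0    # seconds allowed above PL1 (placeholder value)

    def allowed_power(demand, seconds_under_load):
        limit = PL2 if seconds_under_load < TAU else PL1
        return min(demand, limit)

    # A fully loaded chip demanding 160 W: boost at first, then clamp to TDP.
    for t in (0, 4, 8, 30):
        print(f"t={t:2.0f}s  allowed={allowed_power(160.0, t):.0f} W")
    # Boards that ignore the limits effectively set TAU to infinity, which is
    # how a "95 W" part ends up drawing 150 W+ indefinitely under load.
    ```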


  • IsilithTehroth, Member Rare, Posts: 616
    2 things here.  Either Intel has gotten extremely lazy, or tech is just topped out for the time being.  Not sure which, but an i7-4770K is still very viable and it is now 5 generations later.  That should not be the case.
    Honestly, a stock Sandy Bridge is still able to handle any title out there. My 3570K Ivy Bridge isn't bottlenecking me at all, despite what people clamour.

  • Vrika, Member Legendary, Posts: 7,990
    edited November 2018
    Ridelynn said:
    So the new scandal here:

    https://www.google.com/amp/s/www.techspot.com/amp/news/77313-do-need-re-review-core-i9-9900k.html

    Most motherboards aren't enforcing Intel power states properly, which means that they are "overclocking" by default.

    It really isn't news that an enthusiast board has a default setting that enables some form of overclocking. The scandal is that out of all the Z370 boards, under "stock" settings (not the same as default; this should be the baseline Intel specification), all manufacturers except ASUS were still ignoring the power limits. Under fully loaded conditions, the official TDP of 95 W is being blown past, with chips running in excess of 150 W.

    Overclocking isn't a bad thing, so long as you realize you're doing it and understand the risks. I don't know if it's Intel pushing faulty power state specs to motherboard manufacturers for their firmware (the author finds it odd that all save one exhibit the same behavior), or if it's a conspiracy of many vendors. Or is it intentional, to make the CPU look more attractive in multithreaded applications versus some other unnamed CPU with many cores available?
    Ty for info.

    I'm sensitive to how much noise my computer makes when I'm doing quiet work, and if the CPU generates 50% more heat than specified, then for me that CPU is unusable due to cooler noise.
  • 13lake, Member Uncommon, Posts: 719
    edited November 2018
    Ryzen needs 16 logical cores to beat it.
    Ryzen needs 7 nm so it can clock higher at lower power consumption, and the new chiplet design with 8 cores per die to reduce latency :). March-April can't come fast enough.
  • Kajidourden, Member Epic, Posts: 3,030
    2 things here.  Either Intel has gotten extremely lazy, or tech is just topped out for the time being.  Not sure which, but an i7-4770K is still very viable and it is now 5 generations later.  That should not be the case.
    This is what I have now, lmao.  I'm glad for it personally.  I only really needed to upgrade my GPU to get back to max settings.
  • Quizzical, Member Legendary, Posts: 25,501
    13lake said:
    Ryzen needs 16 logical cores to beat it.
    Ryzen needs 7 nm so it can clock higher at lower power consumption, and the new chiplet design with 8 cores per die to reduce latency :). March-April can't come fast enough.
    The die shrink will surely bring power consumption down at a given level of performance.  There's no guarantee that clock speeds will go up, though.  Maybe they will, but it's also possible that clock speeds could go down.  The days when a die shrink meant a large, guaranteed clock speed increase ended about 15 years ago.
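    The physics behind that: dynamic power scales roughly as P ∝ C·V²·f, so a shrink that lowers capacitance and lets you shave voltage cuts power at a fixed clock, while chasing higher clocks instead pushes voltage (and power) back up quadratically. A back-of-the-envelope sketch; the scaling factors are illustrative guesses, not TSMC 7 nm datasheet numbers:

    ```python
    # Back-of-the-envelope dynamic power model: P ~ C * V^2 * f.
    # The 0.7x capacitance and 0.9x voltage factors are illustrative guesses
    # for a node shrink, not published process characteristics.
    def dynamic_power(c, v, f_ghz):
        return c * v * v * f_ghz

    old = dynamic_power(c=1.00, v=1.20, f_ghz=4.3)        # baseline node
    new = dynamic_power(c=0.70, v=1.20 * 0.9, f_ghz=4.3)  # shrink, same clock
    print(f"power at the same clock: {new / old:.0%} of baseline")
    # Spending that headroom on frequency instead requires raising V again,
    # which is why lower power is nearly guaranteed but higher clocks are not.
    ```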

    There's also no guarantee that third generation Ryzen will go with the chiplet design.  In fact, it probably won't, at least excluding Threadripper.  Having multiple hops to get to memory will kill you on latency, and it's completely unnecessary for a small die with only eight cores.

    It also isn't guaranteed that third generation Ryzen will use exactly the same die as EPYC.  If all the PCI Express controllers and memory controllers are on the I/O die, then what happens to a core chiplet with no PCI Express or memory controllers?  Can the Infinity Fabric be reallocated to serve those purposes?  Maybe it can, but if it can't, then they would need a different die.
  • Ridelynn, Member Epic, Posts: 7,383
    https://www.techspot.com/review/1744-core-i9-9900k-round-two/

    Re-running all the benchmarks with proper thermal limits enabled

    Gaming saw no appreciable change (not many games are CPU-intensive anyway). Intel still wins at gaming, but it's a matter of varying degrees of overkill.

    Productivity saw a big drop. The 9900K and 2700X are all of a sudden pretty much equal; one just costs $200 more.
  • Quizzical, Member Legendary, Posts: 25,501
    Some years ago, motherboard vendors noticed that motherboard reviews often contain performance benchmarks.  That's fine if you're testing the performance of various chips added to the motherboard besides the chipset.  It's sensible if you're testing to see which motherboards can overclock the furthest.  And it's even reasonable as a sanity check to ask: is this motherboard giving the CPU the performance that it ought to?

    But the problem is that a lot of readers, and even some reviewers, don't know how to interpret the results, and read performance benchmarks of CPUs or GPUs at stock speeds as meaning that this motherboard is simply faster than that one.  Thus, motherboard vendors wanted to "win" those benchmarks.  And how do you do that?  By making the "stock" settings apply some sort of automatic overclock.  This isn't the first time that has happened, though it likely does have the most dramatic results.
  • Ridelynn, Member Epic, Posts: 7,383
    The motherboard manufacturers could certainly be the culprit, and several review sites are explaining it that way.


  • Darksworm, Member Rare, Posts: 1,081
    edited November 2018
    Quizzical said:
    If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, now you can.  It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.
    Frequencies are not comparable across CPU brands.  Intel has better per-MHz performance than AMD, so a 3.6 GHz Intel CPU might be comparable to a 4 GHz AMD CPU.  Intel tends to be about 8-10% ahead of AMD there.  AMD has always tried to make up for this with more cores, which is beneficial in many scenarios...

    Not so much for gaming, though, which is why Intel is still king there...  But if I were building a rig for video editing, for example, I'd totally consider one of the 8+ core Ryzens at their price points over (for example) a 4-6 core Intel CPU.
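    Since per-clock throughput and frequency multiply, the per-MHz comparison above reduces to a one-liner. A sketch using the poster's claimed 8-10% edge (taken here as 1.09x), not any measured benchmark:

    ```python
    # Rough single-thread model: perf ~ IPC * clock. The 1.09 IPC ratio is
    # the ~8-10% per-MHz edge claimed above, not a benchmark result.
    INTEL_IPC, AMD_IPC = 1.09, 1.00

    intel = INTEL_IPC * 3.6  # e.g. a 3.6 GHz Intel part
    amd = AMD_IPC * 4.0      # vs a 4.0 GHz AMD part
    print(f"Intel/AMD single-thread ratio: {intel / amd:.2f}")
    # ~0.98: a 3.6 GHz Intel chip lands in roughly the same ballpark as a
    # 4 GHz AMD chip, which is the comparison being made in the post.
    ```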

    I stopped reading there, because you are clearly clueless.

    16 cores/32 threads doesn't offer much to the gaming market.  Intel obviously knows this, which is why they aren't breaking their back to change it yet, but are simply adding cores and refining what they already have.

    The markets where Ryzen is more impactful aren't those that people on this forum are likely to talk about regularly ;-)
  • Darksworm, Member Rare, Posts: 1,081
    edited November 2018

    2 things here.  Either Intel has gotten extremely lazy, or tech is just topped out for the time being.  Not sure which, but an i7-4770K is still very viable and it is now 5 generations later.  That should not be the case.
    Honestly, a stock Sandy Bridge is still able to handle any title out there. My 3570K Ivy Bridge isn't bottlenecking me at all, despite what people clamour.
    This.  The huge advantage Intel has is that their older CPUs were so much better than AMD's that they have much better longevity as a result.  You can keep those CPUs longer, and they maintain viability longer than older AMD CPUs (pre-Ryzen).  As laughable as it sounds, Intel customers could afford to wait, and even though their CPUs cost more... they definitely got a TON more value out of them than the cheaper AMD CPUs, which fell out of viability years earlier.

    This means that it was much easier for AMD to improve to the point of being comparable to Intel than for Intel to improve to be much better than itself.  Intel's improvements are more evolutionary, because they already had the "winning formula" of the time.  It's AMD that needed revolutionary changes to bring themselves back to competitive standing.

    AMD hasn't even really "caught up"; they just offer more cores at a similar price point... as they've always done (pre-Ryzen: quad-core A10s competed against dual-core i3s, and lost).

    I do like the idea of getting CPU and GPU from a single manufacturer, though.  Makes driver maintenance so much easier :-P

    An AMD CPU from the Ivy/Sandy Bridge (or even Haswell) era is going to bottleneck in the latest titles.  They are terrible compared to Intel's, which have definitely withstood the test of time much better.

    This is especially true in the mobile (laptop) gaming market, where AMD was all but dead because their APUs were simply too weak, and Intel was even starting to surpass their iGPUs with Iris Pro (which they seem not to be developing as heavily today).  That's also why AMD just stopped trying with mobile APUs (earlier than Intel, IIRC): worse performance, higher TDP and heat generation, worse battery life, etc.
  • Po_gg, Member Epic, Posts: 5,749
    Darksworm said:

    2 things here.  Either Intel has gotten extremely lazy, or tech is just topped out for the time being.  Not sure which, but an i7-4770K is still very viable and it is now 5 generations later.  That should not be the case.
    Honestly, a stock Sandy Bridge is still able to handle any title out there. My 3570K Ivy Bridge isn't bottlenecking me at all, despite what people clamour.
    This.
    Second this. On a local hardware forum it's a recurring topic, at least once per year: "is it finally time to replace my i5-2500K with something noticeably better?" And the answer is usually: not really.

    With the latest refresh-refresh, as Quizz called it, the raw benchmarks are actually better, but even that is just due to the extra cores. Practically, for everyday use and in games, which still use 4 cores at most, a decently overclocked 2500K keeps up the pace even today.
    1-2 more years and that CPU will reach legendary status, especially in this current era of 6-month steps and refresh-refresh-refreshes :smiley:
  • Ozmodan, Member Epic, Posts: 9,726
    Well, I got my 9700k upgrade from work as usual, along with a new Z390 MB to go with it.
    The processor is a beast... I lost 4 threads compared to the 8700K, but I have 8 real cores without hyperthreading and the CPU crushes the competition. Ryzen needs 16 logical cores to beat it.

    For gaming, the 9700k is the new 4790k.

    I'm very happy with my upgrade.
    Hope you bought a beast of a CPU cooler, or you will be buying a new outfit next year.

  • Ozmodan, Member Epic, Posts: 9,726
    Darksworm said:
    Quizzical said:
    If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, now you can.  It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.
    Frequencies are not comparable across CPU brands.  Intel has better per-MHz performance than AMD, so a 3.6 GHz Intel CPU might be comparable to a 4 GHz AMD CPU.  Intel tends to be about 8-10% ahead of AMD there.  AMD has always tried to make up for this with more cores, which is beneficial in many scenarios...

    Not so much for gaming, though, which is why Intel is still king there...  But if I were building a rig for video editing, for example, I'd totally consider one of the 8+ core Ryzens at their price points over (for example) a 4-6 core Intel CPU.

    I stopped reading there, because you are clearly clueless.

    16 cores/32 threads doesn't offer much to the gaming market.  Intel obviously knows this, which is why they aren't breaking their back to change it yet, but are simply adding cores and refining what they already have.

    The markets where Ryzen is more impactful aren't those that people on this forum are likely to talk about regularly ;-)
    I keep hearing this nonsense that Intel is better at gaming.  First off, for most games it does not matter at this point; the differences between Intel and AMD are negligible.  How well your game runs mostly depends on the GPU these days.  If you are going to waste your money on Intel, feel free, but don't come here and try to tell us how much better your system runs on Intel when most people know for a fact you are full of hot air.

  • gervaise1, Member Epic, Posts: 6,919
    It's worth dwelling on what is usually behind a "refresh". 

    To use Samsung as an example (it's not just Intel), their initial roadmap had them aiming first for 8 nm; the first refresh was 7 nm; the second refresh was 5 nm. So if this had come to pass, saying "but it's only a Samsung refresh refresh" would have obscured the drop from 8 nm to 5 nm. (The roadmap has moved on, of course.)

    It's typical that refreshes bring improvements. The first 14 nm chips were more like 16-18 nm. When a company is pushing a new product, though, it tends to focus on the headline-catching stuff. And saying "23 nm to 14 nm" sounds much better than, e.g., "20 nm (nominal 23) to 18 nm (nominal 14)".

    Now Intel have absolutely had issues getting 10 nm up and running. So it is fair to say that the latest "refresh" will be "dredging the barrel" for improvements to the 14 nm process.

    We shouldn't be too dismissive, though, about "refreshes", which many companies do.

  • Quizzical, Member Legendary, Posts: 25,501
    Ozmodan said:
    Well, I got my 9700k upgrade from work as usual, along with a new Z390 MB to go with it.
    The processor is a beast... I lost 4 threads compared to the 8700K, but I have 8 real cores without hyperthreading and the CPU crushes the competition. Ryzen needs 16 logical cores to beat it.

    For gaming, the 9700k is the new 4790k.

    I'm very happy with my upgrade.
    Hope you bought a beast of a CPU cooler, or you will be buying a new outfit next year.

    I've been on watercooling for years now, not a sophisticated high-end thing but an all-in-one solution that works just fine.
    And according to my sensor monitors, sorry to disappoint, but at a constant 4.9 GHz the 9700K doesn't run any hotter than the 8700K at a constant 4.7 GHz with the same cooler.

    Also, quite amusingly... the 9700K has a lower TDP than the 2700X (95 W vs 105 W). So do you really want to talk about power consumption and heat here? ;)
    Ozmodan said:
    I keep hearing this nonsense that Intel is better at gaming.  First off, for most games it does not matter at this point; the differences between Intel and AMD are negligible.  How well your game runs mostly depends on the GPU these days.  If you are going to waste your money on Intel, feel free, but don't come here and try to tell us how much better your system runs on Intel when most people know for a fact you are full of hot air.
    Huho... AMD fanatic alert ;)
    There's absolutely no doubt that Intel is still better than AMD in single-core performance, and games still rely heavily on that. And benchmarks confirm it.

    Does it really impact the average gamer? Not really; you can game just fine on an AMD rig. But when you go into high-end stuff like VR gaming, where high frame rates are important, then yeah, it makes a significant difference.
    The relatively cheap water coolers don't really do any better a job of cooling than relatively nice air coolers that cost about the same price.  But there are plenty of fine coolers on both sides that are quite capable of keeping temperatures down on any mainstream consumer CPU.

    The TDP on the latest batch of Intel CPUs is misleading, at least as it is commonly implemented on motherboards.  The Core i9-9900K also has a 95 W TDP, but on most motherboards it can use more like 150 W--a lot more than a Ryzen 7 2700X.  It's really just a case of higher performance at the expense of higher power consumption, but it would be better if Intel called it a 150 W CPU.

    Whether Intel is better than AMD at single-core performance depends tremendously on what market you're looking at.  If you buy Intel's top-end consumer desktop parts with turbo up to 5 GHz, then yes, that's better single-threaded performance than anything AMD has to offer.  If you're on a tighter budget and looking at an Intel CPU with a max clock speed under 4 GHz, it might not have higher single-core performance than the AMD alternative.  If you're looking at Atom, then no, Intel is terrible at single-threaded performance.

    Regardless of single-threaded performance, I sure wouldn't buy a new dual-core CPU today for gaming.  Having more cores allows the CPU to get to things when the programmer expects it to.  There are diminishing returns to this, and eight cores isn't necessarily better than six.  But four cores sure are better than two in some games, and that will be true of an increasing fraction of games going forward.

    Furthermore, the traditional single-threaded bottleneck in games is the rendering thread.  That bottleneck goes away if you're using DirectX 12 or Vulkan properly; the new APIs make it possible to efficiently split that work among arbitrarily many threads.  Doing that is still rare, but I expect it to become increasingly common going forward.
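    For a sense of what splitting that work looks like: under the older APIs one thread had to issue essentially all the draw calls, while DX12/Vulkan let each worker record its own command list and hand them all to the GPU queue in one cheap submission. A sketch of the pattern in Python; the names are made up for illustration, and this is the threading pattern only, not a real graphics API:

    ```python
    # Illustrative sketch of the DX12/Vulkan threading model: each worker
    # records its own command list in parallel; one submit hands them all
    # to the GPU queue. Names are invented; this shows the pattern only.
    from concurrent.futures import ThreadPoolExecutor

    def record_commands(chunk):
        """Stand-in for filling one command buffer with draw calls."""
        return [f"draw({obj})" for obj in chunk]

    scene = [f"object_{i}" for i in range(1000)]
    chunks = [scene[i::8] for i in range(8)]  # split the scene across 8 workers

    with ThreadPoolExecutor(max_workers=8) as pool:
        command_lists = list(pool.map(record_commands, chunks))

    # The per-frame CPU cost now scales across cores instead of one thread.
    total = sum(len(cl) for cl in command_lists)
    print(f"submitted {total} draw calls from {len(command_lists)} command lists")
    ```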

    That's not to say that single-threaded bottlenecks are going to disappear entirely.  Bad code is never going to disappear entirely.  But give it several years and it's likely that only "some games" will have a single-threaded bottleneck, not "most games".
  • Quizzical, Member Legendary, Posts: 25,501
    gervaise1 said:
    It's worth dwelling on what is usually behind a "refresh". 

    To use Samsung as an example (it's not just Intel), their initial roadmap had them aiming first for 8 nm; the first refresh was 7 nm; the second refresh was 5 nm. So if this had come to pass, saying "but it's only a Samsung refresh refresh" would have obscured the drop from 8 nm to 5 nm. (The roadmap has moved on, of course.)

    It's typical that refreshes bring improvements. The first 14 nm chips were more like 16-18 nm. When a company is pushing a new product, though, it tends to focus on the headline-catching stuff. And saying "23 nm to 14 nm" sounds much better than, e.g., "20 nm (nominal 23) to 18 nm (nominal 14)".

    Now Intel have absolutely had issues getting 10 nm up and running. So it is fair to say that the latest "refresh" will be "dredging the barrel" for improvements to the 14 nm process.

    We shouldn't be too dismissive, though, about "refreshes", which many companies do.

    You're talking about die shrinks.  This isn't a die shrink.  Intel's 14 nm++ process node actually made a key feature larger than in their original 14 nm node, for the sake of allowing higher clock speeds at the expense of higher power consumption.  There has also been a lot of maturation of Intel's 14 nm node since the original Broadwell dual cores arrived, and that makes things better in a lot of ways, but again, that's not a die shrink.

    I wouldn't dismiss a true die shrink as a mere refresh.  Moving from Conroe to Penryn or from Sandy Bridge to Ivy Bridge genuinely did move things forward.  But this is more like going from Haswell to Devil's Canyon or from Kaveri to Godavari.  You can improve things a little with a respin, binning, and a more mature process node, but only a little.

    In this case, the headline is that Intel finally launched an eight core CPU in their mainstream consumer line.  They could have done so years ago, but didn't feel that competitive pressure from AMD forced them to until now.