If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, then now you can. It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.
This is the first part that Intel is officially calling 9th generation Core; hence the three "refresh"es as part of the title. A quick refresher on Intel's naming convention:
Sixth generation Core: Sky Lake
Seventh generation Core: Kaby Lake, aka, Sky Lake Refresh
Eighth generation Core: Kaby Lake-R, Coffee Lake-S, Kaby Lake-G, Coffee Lake-U/H, Whiskey Lake-U, Amber Lake-Y, and Cannon Lake-U; some of those could be called Sky Lake Refresh Refresh, but not all of them should have existed
Ninth generation Core: Coffee Lake Refresh, aka, Sky Lake Refresh Refresh Refresh
(The list on eighth generation was copied from Anandtech, because I lost track of it all on my own.)
Why do I keep sticking more "refresh"es on the name? Because it's neither a new process node nor a new CPU core. Sky Lake Refresh topped out at four cores, Sky Lake Refresh Refresh at six, and Sky Lake Refresh Refresh Refresh at eight, but they're all essentially the same cores. In contrast, moving from Broadwell to Sky Lake was a genuinely new CPU core, and moving from Haswell to Broadwell was a new process node.
For all Intel's talk about moving from 14 nm to 14+ nm to 14++ nm (don't they realize that an increment operator is going in the wrong direction?), it's effectively just a more mature version of the same 14 nm process node, and it never would have existed if not for their 10 nm debacle. Even if they someday make a 14++*^! nm process node, it still won't be competitive with TSMC 7 nm or Samsung 7 nm.
Comments
>yawn<
But it's 5GHZ!!!!!!!! *
*not really
Audio recording. It started out on the phonograph. That technology progressed pretty rapidly through various speeds and formats and quality until we hit magnetic tape. Then it went digital with the CD. Today it's still digital, but doesn't necessarily even have a physical format anymore.
Now let's look at the primary function of audio recording: to preserve sound as accurately as possible. With respect to fidelity, a late-era record actually isn't all that bad; a lot of audio purists even prefer a record to a digital recording. Transmission followed a similar path: from AM radio, to FM with excellent fidelity and good range, to digital satellite transmission covering entire continents, to today's purely digital distribution with an effectively global reach.
So why aren't we continuing to evolve audio recording and reach even better levels of fidelity and transmission efficiency?
Because it's good enough.
Sure, you can make a higher fidelity format. But humans wouldn't be able to perceive the difference. Most people listen to mediocre sound formats anyway because they are more convenient. Very few people invest in the equipment that is needed to tell the difference between what is legitimately "high fidelity" and what is just standard broadcast media. And cellular data (at least at the levels required for audio data) is nearly ubiquitous in most areas.
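The "humans wouldn't be able to perceive the difference" point has a concrete basis in the Nyquist theorem: a sample rate of f Hz captures every frequency up to f/2 Hz, and human hearing tops out around 20 kHz (the usual textbook figure). A quick sanity check, sketched in Python:

```python
# Nyquist: a sample rate of f Hz can represent frequencies up to f/2 Hz.
HUMAN_HEARING_LIMIT_HZ = 20_000  # typical upper bound for young, healthy ears

def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest frequency representable at the given sample rate."""
    return sample_rate_hz / 2

for name, rate in [("CD audio", 44_100), ("'hi-res' 96 kHz", 96_000)]:
    headroom = nyquist_limit(rate) - HUMAN_HEARING_LIMIT_HZ
    print(f"{name}: captures up to {nyquist_limit(rate):.0f} Hz "
          f"({headroom:+.0f} Hz beyond human hearing)")
```

Plain CD audio already covers the entire audible range with headroom to spare, which is exactly why higher-fidelity formats stopped mattering to most listeners.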
I think CPU speed has hit that same wall, at least with our current programming paradigms. In a world where single-core IPC is still very much king, physics has us boxed in on speed and programming has us boxed in on parallelism. Even if you could make it faster, let's face it: the vast majority of the world spends the vast majority of their computing time on mobile devices running very small processors. You can throw more cores at it, but that isn't going to make your game run much faster, and it certainly has a very steep level of diminishing returns.
People will default to what is convenient and inexpensive... like they have with audio. And that's my hypothesis on why mobile has been the growth driver for more than a decade. There's no incentive to do anything other than slowly iterate on a desktop design.
This was an interesting read. I don't know whether it's true or not.
If it is true, I think it says more about Intel as a company than it does about the 9000 series CPUs.
As to any benchmark coming from Intel, you know they are going to cheat and they obviously have with this release.
Also not mentioned: you can use the current motherboards for this new chip, but you have to flash the BIOS to do so, something I do not like doing.
Intel is floundering right now. Wait until we start seeing 7nm from AMD, they will be crying.
Makes the people at Principled Technologies look like a bunch of shills.
https://www.google.com/amp/s/www.techspot.com/amp/news/77313-do-need-re-review-core-i9-9900k.html
Most motherboards aren’t enforcing Intel Power States properly - which means that they are “overclocking” by default.
That really isn’t news; an enthusiast board having a default setting that enables some form of overclock is nothing new. The scandal is that out of all the Z370 boards, under “stock” settings (not the same as default; this should be the baseline Intel spec), every manufacturer except ASUS was still ignoring the power limits. Under fully loaded conditions, the official TDP of 95 W is being blown past, with chips running in excess of 150 W.
Overclocking isn’t a bad thing, so long as you realize you’re doing it and understand the risks. I don’t know if it’s Intel pushing faulty power state specs to motherboard manufacturers for their firmware (the author finds it odd that all save one exhibit the same behavior), if it’s a conspiracy among vendors, or if it’s intentional, to make the CPU look more attractive in multithreaded applications versus some other unnamed CPU with many cores available.
MurderHerd
I'm sensitive to how much noise my computer makes when I'm doing quiet work, and if the CPU generates 50% more heat than specified then for me that CPU is unusable due to cooler noise.
There's also no guarantee that third generation Ryzen will go with the chiplet design. In fact, it probably won't, at least excluding Threadripper. Having multiple hops to get to memory will kill you on latency, and it's completely unnecessary for a small die with only eight cores.
It also isn't guaranteed that third generation Ryzen will use exactly the same die as EPYC. If all the PCI Express controllers and memory controllers are on the I/O die, then what happens to a core chiplet with no PCI Express or memory controllers? Can the infinity fabric be reallocated to serve those purposes? Maybe it can, but if it can't, then they would need a different die.
Re-running all the benchmarks with proper thermal limits enabled
Gaming saw no appreciable change (not many games are CPU intensive anyway). Intel still wins at gaming, but it’s a matter of various degrees of overkill.
Productivity saw a big drop. The 9900K and 2700X are suddenly pretty much equal; one just costs $200 more.
But the problem is when a lot of readers and even some reviewers don't know how to interpret results and look at performance benchmarks of CPUs or GPUs at stock speeds as meaning that this motherboard is simply faster than that one. Thus, motherboard vendors wanted to "win" at those benchmarks. And how do you do that? By making the "stock" settings have some sort of automatic overclock. This isn't the first time that has happened, though it likely does have the most dramatic results.
Not so much for gaming, though, which is why Intel is still king there... But if I were building a rig for Video Editing, for example, then I'd totally consider one of the 8+ Core Ryzen at their price points over (for example) a 4-6 Core Intel CPU.
I stopped reading there, because you are clearly clueless.
16 Core/32 Threads doesn't offer much to the gaming market. Intel obviously knows this, which is why they aren't breaking their back to change it, yet, but simply adding Cores and refining what they already have.
The markets where Ryzen is more impactful aren't those that people on this forum are likely to talk about regularly ;-)
This. The huge advantage Intel has is that their older CPUs were so much better than AMD, that they have much better longevity as a result. You can keep those CPUs longer, and they maintain viability longer than older AMD CPUs (Pre-Ryzen). As laughable as it sounds, Intel customers could afford to wait, and even though their CPUs cost more... they definitely got a TON more value out of them than the cheaper AMD CPUs, which fell out of viability years earlier.
This means that it was much easier for AMD to improve to be comparable to Intel than for Intel to improve to be much better than itself. Intel's improvements are more evolutionary, because they already had the "winning formula" of the time. It's AMD that needed revolutionary changes to bring themselves back to a competitive standing.
AMD hasn't even really "caught up"; they just offer more cores at a similar price point... as they've always done (pre-Ryzen: quad-core A10s competed against dual-core i3s, and lost).
I do like the idea of getting CPU and GPU from a single manufacturer, though. Makes driver maintenance so much easier :-P
An AMD CPU from the Ivy/Sandy Bridge (or even Haswell) era is going to bottleneck in the latest titles. They're terrible compared to Intel's CPUs of the same era, which have definitely withstood the test of time much better.
This is especially true when you go into the Mobile (Laptop) gaming market - where AMD was all but dead as their APUs were simply too weak, and Intel was even starting to surpass their iGPUs with Iris Pro (which they seem to not be developing as heavily today). That's also why they just stopped trying with Mobile APUs (earlier than Intel IIRC). Worse performance, higher TDP and Heat Generation (worse battery life, etc.).
With the latest refresh-refresh, as Quizz called it, the raw benchmark numbers are actually better, but even that is just due to the extra cores. Practically, for everyday use and in games that still use four cores at most, a decently overclocked 2500K keeps up the pace even today.
Another year or two, and that CPU will reach legendary status, especially in this current era of six-month steps and refresh-refresh-refreshes.
To use Samsung as an example (it's not just Intel), their initial roadmap had them aiming for 8nm; the first refresh was for 7nm; the second refresh was for 5nm. So if this had come to pass, saying "but it's only a Samsung refresh refresh" would have obscured the drop from 8nm to 5nm. (The roadmap has moved on, of course.)
It's typical that refreshes bring improvements. The first 14nm chips were more like 16-18nm. When a company is pushing a new product, though, it tends to focus on the headline-catching stuff. And saying "23nm to 14nm" sounds much better than, e.g., "20nm (nominal 23) to 18nm (nominal 14)".
Now Intel absolutely have had issues getting 10nm up and running. So it is fair to say that the latest "refresh" will be scraping the bottom of the barrel for improvements to the 14nm process.
We shouldn't be too dismissive, though, about "refreshes", which many companies do.
The TDP on the latest batch of Intel CPUs is misleading, at least as commonly implemented on motherboards. The Core i9-9900K has a 95 W TDP, but on most motherboards it can use more like 150 W, and a lot more than a Ryzen 7 2700X. It's really just a case of higher performance at the expense of higher power consumption, but it would be better if Intel called it a 150 W CPU.
Whether Intel is better than AMD at single-core performance depends tremendously on what market you're looking at. If you buy Intel's top-end consumer desktop parts with turbo up to 5 GHz, then yes, that's better single-threaded performance than anything AMD has to offer. If you're on a tighter budget and looking at an Intel CPU with a max clock speed under 4 GHz, it might not have higher single-core performance than the AMD alternative. If you're looking at Atom, then no, Intel is terrible at single-threaded performance.
Regardless of single-threaded performance, I sure wouldn't buy a new dual-core CPU today for gaming. Having more cores lets the CPU start on work when the programmer expects it to, rather than waiting for a busy core to free up. There are diminishing returns to this, and eight cores isn't necessarily better than six. But four cores sure beats two in some games, and that will be true of an increasing fraction of games going forward.
Furthermore, the traditional single-threaded bottleneck in games is a rendering thread. That thread stops being a bottleneck if you're using DirectX 12 or Vulkan properly; the new APIs make it possible to efficiently split that work among arbitrarily many threads. Doing that is still rare, but I expect it to become increasingly common going forward.
That's not to say that single-threaded bottlenecks are going to disappear entirely. Bad code is never going to disappear entirely. But give it several years and it's likely that it will be "some games" have a single-threaded bottleneck, not "most games".
I wouldn't dismiss a true die shrink as a mere refresh. Moving from Conroe to Penryn or from Sandy Bridge to Ivy Bridge genuinely did move things forward. But this is more like going from Haswell to Devil's Canyon or Kaveri to Godavari. You can improve things a little with a respin, binning, and a more mature process node, but only a little.
In this case, the headline is that Intel finally launched an eight-core CPU in their mainstream consumer line. They could have done so years ago, but didn't, because competitive pressure from AMD didn't force their hand until now.