It's worth dwelling on what is usually behind a "refresh".
To use Samsung as an example (it's not just Intel), their initial roadmap for 7nm had them aiming first for 8nm; the first refresh was for 7nm; the second refresh was for 5nm. So if this had come to pass, saying "but it's only a Samsung refresh of a refresh" would have obscured the drop from 8nm to 5nm. (The roadmap has moved on, of course.)
It's typical for refreshes to bring improvements. The first 14nm chips were more like 16-18nm. When a company is pushing a new product, though, it tends to focus on the headline-catching stuff. And saying 23nm to 14nm sounds much better than, e.g., 20nm (nominal 23) to 18nm (nominal 14).
Now Intel - absolutely - has had issues getting 10nm up and running. So it is fair to say that the latest "refresh" will be "scraping the barrel" for improvements to the 14nm process.
We shouldn't be too dismissive, though, about "refreshes" - which many companies do.
You're talking about die shrinks. This isn't a die shrink. Intel's 14 nm++ process node actually made a key feature larger than their original 14 nm node for the sake of allowing higher clock speeds at the expense of higher power consumption. There has also been a lot of maturation of Intel's 14 nm node since the original Broadwell dual cores arrived, and that makes things better in a lot of ways, but again, that's not a die shrink.
I wouldn't dismiss a true die shrink as a mere refresh. Moving from Conroe to Penryn or from Sandy Bridge to Ivy Bridge genuinely did move things forward. But this is more like going from Haswell to Devil's Canyon or Kaveri to Godavari. You can improve things a little with a respin, binning, and a more mature process node, but only a little.
In this case, the headline is that Intel finally launched an eight-core CPU in their mainstream consumer line. They could have done so years ago, but competitive pressure from AMD didn't force them to until now.
No, not die shrinks but process maturation. Process, not product: when the manufacturer talks about e.g. "14nm" they are talking about the name of a manufacturing process, not a "14nm CPU". Which is why a 14nm CPU will contain features - transistors, interconnects, etc. - from, say, 8nm to 80nm. And over time improvements are made; tolerances tighten; yields get better.
You took Skylake - why? There was no die shrink involved with Skylake - it was a Broadwell refresh! And some people pointed this out at the time. No real gain; just another 14nm CPU; the big improvement was the motherboard platform. All true, of course.
From a manufacturing-process point of view, however, Skylake was huge. It wasn't a die shrink, but it was Intel's first "mature" 14nm manufacturing process - one that was able to deliver high volumes.
And that is my point.
Needless to say, it typically becomes harder and harder to improve a process, and diminishing returns - probably - kick in with every refresh.
Skylake was a new architecture, not just Broadwell on a new process node. It is heavily derivative of Broadwell, so it's not a huge overhaul like going from Clarkdale to Sandy Bridge, but it improved some caches and made certain structures wider. It is still a new architecture, not merely Broadwell on a different process node.
One way that you can tell is with benchmarks that normalize everything. For example, take a Skylake CPU and a Broadwell CPU, disable all but one CPU core, and fix the clock speed at 3 GHz with turbo disabled. The Skylake CPU will tend to perform several percent faster than the Broadwell CPU. Meanwhile, under that test, the Skylake CPU will perform identically (at least up to rounding) to Kaby Lake, Coffee Lake, or the latest "Coffee Lake Refresh", at least if you pick models with exactly the same L3 cache capacity.
The last time Intel offered a mainstream consumer desktop CPU that improved the functionality of the cores on a per-clock basis was Skylake. Everything since then is just a refresh. Incidentally, Skylake-X cores are a little different from Skylake, with a different L2 cache.
Refreshes aren't necessarily a bad thing. They can mean that a CPU vendor makes something a little better after a year rather than having to wait two years for an update. But three new generations in a row being just a refresh like this is unprecedented. The reason for so many refreshes is that Intel's 10 nm process is severely delayed, and that's what is breaking everything for Intel right now. The problems might not look that bad right now because AMD is also still on 12/14 nm, and on a process node inferior to Intel's 14 nm++, at that. But once AMD has numerous products out on 7 nm, Intel is going to be in a world of hurt until they can catch up.
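The clock-normalized test described a couple of posts up can be sketched in a few lines; the benchmark scores below are made-up placeholders, purely to show the arithmetic:

```python
# Per-clock ("IPC-normalized") comparison: lock both CPUs to the same
# frequency, run the same single-threaded benchmark, then divide score
# by clock. The scores here are hypothetical placeholders, not data.

def per_clock(score: float, clock_ghz: float) -> float:
    """Benchmark score per GHz, a rough per-clock performance figure."""
    return score / clock_ghz

# Both chips fixed at 3.0 GHz, turbo off, one core active.
broadwell = per_clock(score=100.0, clock_ghz=3.0)
skylake = per_clock(score=105.0, clock_ghz=3.0)

gain_pct = (skylake / broadwell - 1) * 100
print(f"Skylake per-clock advantage: {gain_pct:.1f}%")
```

With these placeholder scores the per-clock gain works out to a few percent, which is the kind of gap the test above would surface; a Kaby Lake or Coffee Lake chip in the same test would land on the same per-clock figure as Skylake.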
Well, I got my 9700k upgrade from work as usual, along with a new Z390 MB to go with it.
The processor is a beast... I lost 4 threads compared to the 8700k, but I have 8 real cores without hyperthreading, and the CPU crushes the competition. Ryzen needs 16 logical cores to beat it.
For gaming, the 9700k is the new 4790k.
I'm very happy with my upgrade.
Hope you bought a beast of a CPU cooler, or you will be buying a new outfit next year.
I've been on watercooling for years now - not a sophisticated high-end setup, but an all-in-one solution that works just fine.
And according to my sensor readouts, sorry to disappoint, but at a constant 4.9GHz the 9700k doesn't run hotter than the 8700k at a constant 4.7GHz with the same cooler.
Also, quite amusingly... the 9700k has a lower TDP than the 2700X (95W vs 105W). So do you really want to talk about power consumption and heat here?
I keep hearing this nonsense that Intel is better at gaming. First off, for most games it does not matter at this point; the differences between Intel and AMD are negligible. How well your game runs mostly depends on the GPU these days. If you are going to waste your money on Intel, feel free, but don't come here and try to tell us how much better your system runs on Intel when most people know for a fact you are full of hot air.
Huho... AMD fanatic alert
There's absolutely no doubt that Intel is still better than AMD in single core performance, and games still heavily rely on that. And benchmarks confirm that.
Does it really impact the average gamer? Not really, you can game just fine on an AMD rig. But when you go into high end stuff like VR gaming where high frame rates are important, then yeah, it makes a significant difference.
Not to rain on your parade, because the CPU you got is awesome, and I'm glad you're enjoying it.
But.
Intel changed their definition of TDP, apparently. Your 95W TDP is now only valid with all cores at the base frequency (3.6GHz). Anything higher, regardless of whether it's an overclock or Intel's own boost algorithm, is not bound by any particular power draw. When limited to 95W, the 9900k and 2700X are pretty well equal (and the 8700k, for that matter) - not sure the 9700k would be much different, as it's the same arch and clocks, just no HT. But your CPU running at 4.9GHz is most definitely not also running at 95W under any type of significant load.
Now, it is indeed true that 95W < 105W, and that even at 3.6GHz it's matching an up-to-4.3GHz AMD CPU. So all of that still counts for some bragging rights.
Also, with respect to temperature... I won't nitpick too much, but it could be 1 core at 4.9GHz on the 9700k versus 6 cores at 4.7GHz on the 8700k, and you'd certainly see lower temps on the 9700k in that case. Not saying you're trying to cheat or anything, just pointing out that without some context or basis, the statement in and of itself doesn't really mean much.
Audio recording. It started out on the phonograph. That technology progressed pretty rapidly through various speeds and formats and quality until we hit magnetic tape. Then it went digital with the CD. Today it's still digital, but doesn't necessarily even have a physical format anymore.
Now let's look at the primary function of audio recording: to preserve the sound as accurately as possible. With respect to fidelity, a late era record actually isn't all that bad. A lot of audio purists actually prefer a record to digital recording. Similar with AM radio, to FM with excellent fidelity and good transmission distance, digital satellite transmission that covers entire continents, to today's purely digital transmission that has an effective global reach.
So why aren't we continuing to evolve audio recording and reach even better levels of fidelity and transmission efficiency?
Because it's good enough.
Sure, you can make a higher fidelity format. But humans wouldn't be able to perceive the difference. Most people listen to mediocre sound formats anyway because they are more convenient. Very few people invest in the equipment that is needed to tell the difference between what is legitimately "high fidelity" and what is just standard broadcast media. And cellular data (at least at the levels required for audio data) is nearly ubiquitous in most areas.
I think CPU speed has hit that same wall, at least with our current programming paradigms. In a world where single-core IPC is still very much king, physics has us boxed in on speed and programming has us boxed in on parallelism. Even if you could make it faster, let's face it, the vast majority of the world spends the vast majority of their computing time on mobile devices running very small processors. You can throw more cores at it, but that isn't going to make your game run much faster, and it certainly has a very steep level of diminishing returns.
People will default to what is convenient and inexpensive... like they have with audio. And that's my hypothesis for why mobile has been the growth driver for more than a decade. There's no incentive to do anything other than slowly iterate on a desktop design.
Well thought out opinion, Ryzen. Actually, high-end audio is still original tapes or a phono stage. Very little, if any, high-end audio is digital. High-end audio is still amplified with tubes. Phono arms and cartridges are far superior in sound quality to any digital format. There are still some very nice solid-state pieces that can produce good sound quality, but the very high-end systems are strictly analog.
More nonsense. Sorry, but I can count the games on one hand where the cpu has much of an effect when using any relatively newer cpu, either AMD or Intel. The 9700k is strictly for people that want bragging rights rather than game performance. If you want to waste your money on bragging rights it is your privilege, but for most of us throwing money away on expensive components is just ridiculous!
Some of us use CPUs for CPU-intensive things other than games. That today's higher end CPUs are also good at games is a nice bonus, but not always the driving reason for the purchase, especially once you move into the HEDT category.
My temperature tests are made in the same conditions (all cores used) with the same software, Aida 64 System Stability test, and all cores set to maximum turbo boost speed (4.7 for the 8700k, 4.9 for the 9700k).
Eight cores with eight threads doesn't necessarily burn more power than six cores with twelve threads, at least in programs that can push all of those threads. But it doesn't necessarily offer more performance, either. If the driving reason for the upgrade was something that can take advantage of more real cores well but not hyperthreading, so that the new CPU is substantially faster than the old, then it's likely using substantially more power, too.
Also, if you're doing an all-core overclock to the maximum turbo speed, you're probably pushing voltage, too. In that case, exactly how much power you use will depend tremendously on the voltage. That will vary considerably from one sample to the next.
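The voltage dependence mentioned above is roughly quadratic for dynamic power (P ≈ C·V²·f). A back-of-the-envelope sketch, with purely illustrative voltage and frequency figures rather than measured values:

```python
# Rough CMOS dynamic-power model: P is proportional to C * V^2 * f.
# Voltages and clocks below are illustrative guesses, not measurements.

def dynamic_power(volts: float, freq_ghz: float, c_eff: float = 1.0) -> float:
    """Relative dynamic power: effective capacitance * V^2 * frequency."""
    return c_eff * volts ** 2 * freq_ghz

stock = dynamic_power(volts=1.20, freq_ghz=4.3)  # near-stock settings
oc = dynamic_power(volts=1.35, freq_ghz=4.9)     # all-core OC with extra voltage

increase_pct = (oc / stock - 1) * 100
print(f"Estimated dynamic-power increase: {increase_pct:.0f}%")
```

With these example numbers, a ~14% clock bump combined with a ~12% voltage bump compounds to roughly 44% more dynamic power, which is why sample-to-sample voltage variation matters so much for power draw.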
If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, then now you can. It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.
Frequencies are not comparable across CPU brands. Intel has better per-MHz performance than AMD, so a 3.6GHz Intel CPU might be comparable to a 4GHz AMD CPU. Intel tends to be about 8-10% ahead there. AMD has always tried to make up for this with more cores, which is beneficial in many scenarios...
Not so much for gaming, though, which is why Intel is still king there... But if I were building a rig for video editing, for example, I'd totally consider one of the 8+ core Ryzens at their price points over, say, a 4-6 core Intel CPU.
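The per-MHz claim above translates into a quick "effective clock" calculation; the 10% IPC factor here is the figure asserted in the post, used only for illustration:

```python
# Effective single-thread performance proxy: frequency * relative IPC.
# The 1.10 factor is the ~10% lead claimed above, not a measured value.

def perf_proxy(freq_ghz: float, relative_ipc: float) -> float:
    """Crude single-thread performance estimate."""
    return freq_ghz * relative_ipc

intel_proxy = perf_proxy(freq_ghz=3.6, relative_ipc=1.10)  # claimed IPC lead
amd_proxy = perf_proxy(freq_ghz=4.0, relative_ipc=1.00)    # baseline IPC

# 3.6 * 1.10 = 3.96, i.e. roughly on par with 4.0 GHz at baseline IPC.
print(f"Intel proxy: {intel_proxy:.2f} vs AMD proxy: {amd_proxy:.2f}")
```

Under that assumed factor, the 3.6GHz chip lands within about 1% of the 4GHz chip, which is the "comparable" claim made above in arithmetic form.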
I stopped reading there, because you are clearly clueless.
16 cores/32 threads don't offer much to the gaming market. Intel obviously knows this, which is why they aren't breaking their backs to change it yet, but simply adding cores and refining what they already have.
The markets where Ryzen is more impactful aren't those that people on this forum are likely to talk about regularly ;-)
Intel has better per-MHz performance, and better single- to quad-core performance, than AMD. That is the sweet spot for gaming. The difference isn't negligible when you're on a lower budget and want a PC that remains viable as long as possible.
As far as the GPU goes, that is a completely different discussion. Nvidia GPUs still tend to outperform AMD's, while generating less heat and using less power. You still have to pay basically the same price as a GTX 1060 to get an AMD GPU that performs comparably, for example. Shopping in the same budget bracket (but taking AMD's lower price) will just leave you with a worse-performing component.
AMD has CPUs with tons of cores, but high core counts are not really something gamers do, or should, care hugely about. They are more disruptive in the creative market (video editors, etc.) and the workstation PC market, where applications do scale very well with core count.
A lot of people on YouTube, etc. who are going Ryzen are doing so because they do more than game on their PCs. They're streamers, they edit lots of video (content creators), etc. For some of those tasks (CPU video encoding), a Threadripper is AMAZING.
It's not nonsense, it's fact.
And it's been well known for over a decade. Where have you been?
I'm not hugely interested in arguing with fanboys, and I'm not trying to convince you that one is better than the other "for you" (since "for you" is often just someone's personal preference or tech-partisan stance on a matter). I simply stopped buying AMD and will never go back, because I see no advantage in using their components for what I do on a PC. Intel does it just as well, or better, and in my experience the PCs tend to stay more viable much longer than an AMD configuration.
Also, Intel/Nvidia has way better developer support than AMD (QSV, CUDA, NVENC, etc.).
Actually, right now AMD is likely the better choice for a budget PC. Intel is good when you spend around $250 on your processor, because at that price point AMD would only offer you more cores than you need, whereas Intel can offer better single-core performance.
But if you're planning to spend less than $200, AMD offers you enough cores with enough performance for less than Intel, and also has slightly better low-priced motherboards.
You buy particular products, not just a brand name. It is true that for the latest generation of relatively higher end mainstream desktop CPUs, Intel CPUs have higher IPC than AMD. But that's an awful lot of qualifiers. If you're only looking for the Intel logo and nothing else, you might think you've found a great deal on this:
And never mind that it has considerably lower IPC than AMD's latest. If you're sharp enough to get something from the latest lineup but still only looking for the Intel logo, you could end up with a Pentium Silver and wonder why it's terrible at gaming - inferior even to a nine-year-old AMD Phenom II X4.
But even if you know enough to avoid mistakes like that, if you're only looking at Intel, you could end up buying something stupid like this:
It's not just that the AMD CPUs are cheaper. They also clock higher, which mostly offsets Intel's IPC advantage. They also enable AMD's version of hyperthreading (SMT). And going with AMD gives you your choice of two entire extra cores or an integrated GPU that actually works well.
Speaking of integrated GPUs, on a really severe budget, you get this:
The integrated GPU in there will blow away any discrete GPU that you can find for $50 or less. Sure, it's not as good as discrete GPUs that you could get for $100, but that doesn't fit some budgets.
I'm not trying to argue that you should only buy AMD for gaming. Quite the opposite, actually: unless you have some very specialized needs, you should consider both AMD and Intel. If you don't, then you'll sometimes end up buying something stupid.
The Haswell jump is/was still the best CPU jump for the dollar. We won't be seeing anything like that for another 3-5 years. AMD is so far behind that Intel doesn't even need to try. I'm still waiting for a CPU that is worth a rebuild.
What are you talking about? Haswell was a little better than Ivy Bridge, but it wasn't a huge jump in performance. The most notable thing that Haswell did far better than Ivy Bridge was bringing idle power down.
And Intel absolutely does need to try. With AMD on the verge of moving to 7 nm and Intel's 10 nm process a disaster, gaming desktops, entry-level servers, super high-end (e.g., eight socket) servers, and very low power laptops might well be the only markets where Intel still even has a competitive CPU a year from now. And it's far from guaranteed that they'll even be competitive in those markets.
Contrary to popular belief, Intel's 10 nm process woes aren't due to not caring. Sometimes when you try something hard, it just doesn't work out.
Shill for Intel much? The AMD 2700X competes well with ANY Intel processor. And I CAN install the next-generation CPU from AMD in the same exact motherboard. Post nonsense on this board and expect to get called on it.
If you really wanted to combine the high turbo clock speeds of Sky Lake Refresh Refresh with the eight cores of Ryzen 7, then now you can. It only costs $488, and the base frequency is a meager 3.6 GHz, so who knows how aggressive that turbo will be.
Frequencies are not comparable across CPU brands. Intel has better per-MHz performance than AMD, so a 3.6 GHz Intel CPU might be comparable to a 4 GHz AMD CPU; Intel tends to be about 8-10% ahead there. AMD has always tried to make up for this with more cores, which is beneficial in many scenarios...
Not so much for gaming, though, which is why Intel is still king there. But if I were building a rig for video editing, for example, then I'd totally consider one of the 8+ core Ryzens at their price points over (for example) a 4-6 core Intel CPU.
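That frequency-times-IPC comparison is just arithmetic; here's a minimal sketch, where the ~10% per-clock advantage is an illustrative assumption rather than a measured number:

```python
# Illustrative effective-performance comparison: performance ~ frequency x per-clock (IPC) factor.
# The IPC factors below (Intel ~10% ahead per clock) are assumptions for illustration only.

def effective_perf(freq_ghz, ipc_factor):
    """Relative single-thread performance: clock speed times per-clock throughput."""
    return freq_ghz * ipc_factor

intel = effective_perf(3.6, 1.10)  # 3.6 GHz with an assumed ~10% per-clock advantage
amd = effective_perf(4.0, 1.00)    # 4.0 GHz baseline

# 3.6 x 1.10 = 3.96 vs 4.0 x 1.00 = 4.00 -> roughly comparable
print(f"Intel: {intel:.2f}, AMD: {amd:.2f}")
```

With those assumed numbers, the 400 MHz clock gap and the per-clock gap nearly cancel out, which is the point being made above.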
I stopped reading there, because you are clearly clueless.
16 cores/32 threads don't offer much to the gaming market. Intel obviously knows this, which is why they aren't breaking their back to change it yet, but are simply adding cores and refining what they already have.
The markets where Ryzen is more impactful aren't those that people on this forum are likely to talk about regularly ;-)
I keep hearing this nonsense that Intel is better at gaming. First off, for most games it does not matter at this point; the differences between Intel and AMD are negligible. How well your game runs mostly depends on the GPU these days. If you are going to waste your money on Intel, feel free, but don't come here and try to tell us how much better your system runs on Intel when most people know for a fact you are full of hot air.
Intel has better per-MHz performance, and better single- through quad-core performance, than AMD. That is the sweet spot for gaming. The difference isn't negligible when you're on a lower budget and want a PC that remains viable as long as possible.
As far as the GPU, that is a completely different discussion. Nvidia GPUs still tend to outperform AMD's, while generating less heat and using less power. You still have to, basically, pay the same price as a GTX 1060 to get an AMD GPU that performs comparably, for example. Shopping in the same budget bracket (but taking AMD's lower price) will just leave you with a worse-performing component.
AMD has CPUs with tons of cores, but those are not really things that gamers do or should care hugely about. Those are more disruptive in the creative market (video editors, etc.) and workstation PC markets, where applications do scale very well with high core counts.
A lot of people on YouTube, etc. that are going Ryzen are doing so because they do more than game on their PCs. They're streamers, they edit lots of video (content creators), etc. For some of those tasks (CPU video encoding), a Threadripper is AMAZING.
It's not nonsense, it's fact.
And it's been well known for over a decade. Where have you been?
I'm not hugely interested in arguing with fanboys, and I'm not trying to convince you that one is better than the other "for you" (since "for you" is often just someone's personal preference or tech-partisan stance on a matter). I simply stopped buying AMD and will never go back, because I see no advantage in using their components for what I do on a PC. Intel does it just as well, or better, and in my experience the PCs tend to stay more viable much longer than an AMD configuration.
Also, Intel/Nvidia has way better developer support than AMD (QSV, CUDA, NVENC, etc.).
Shill for Intel much? The AMD 2700X competes well with ANY Intel processor. And I CAN install the next-generation CPU from AMD in the same exact motherboard. Post nonsense on this board and expect to get called on it.
I think very soon the sweet spot for gaming will be six-core CPUs, not four cores like it used to be (and maybe, barely, still is).
Other than that, he does have some points: Intel does have better per-MHz performance and better IPC, which frankly is what gives them the edge right now.
Brenics ~ Just to point out I do believe Chris Roberts is going down as the man who cheated backers and took down crowdfunding for gaming.
If I were buying a CPU for 3D rendering and video editing, I would definitely go AMD. However, if I weren't the one buying it, then the 9700K seems like a solid choice. IPC is dependent on workload. If you are picking a CPU for specific software, always base your judgement on how a CPU performs with that software. Where a CPU might perform quickly in consumer applications, it might be overwhelmed with enterprise applications.
If you are picking a CPU for specific software, always base your judgement on how a CPU performs with that software.
This, exactly. What matters is not how good a CPU is at the programs that reviewers use. What matters is how good it is at the programs you use. Gamers don't know what games we will play years from now, but enterprise users sometimes do know that it's for some particular program.
Not sure about you, but when I game, I use ALL 8 cores. I have a bunch of peripheral programs that I have running. So you can take your Intel CPUs and shove them where the sun does not shine. That is why so many of the workstations I build use AMD CPUs; they also have many programs running at the same time.
I also tell the people I build computers for, that when AMD comes out with their 7nm CPUs, I can upgrade their system with a new CPU without changing anything else. Good luck trying that with an Intel processor.
Not sure about you, but when I game, I use ALL 8 cores. I have a bunch of peripheral programs that I have running.
Why do you do that? I mean, this seems like an easily fixable problem. Most programs while running in the background will take only a trivial processing load.
Though if you genuinely do need a ton of stuff running in the background that is a heavy CPU load, you might want to look at a Threadripper 2950X.
You took Skylake - why? There was no die shrink involved with Skylake - it was a Broadwell refresh! And some people pointed this out at the time. No real gain; just another 14nm CPU; the big improvement is the motherboard. All true of course.
Sky Lake was a new architecture, not just Broadwell on a new process node. Sky Lake is heavily derivative of Broadwell, so it's not a huge overhaul like going from Clarkdale to Sandy Bridge. Sky Lake improved some caches and made certain things wider. But it is still a new architecture, and not just Broadwell on a different process node.
One way that you can tell this is with benchmarks that normalize everything. For example, get a Sky Lake CPU and a Broadwell CPU, disable all but one CPU core, and fix the clock speed at 3 GHz with turbo disabled. The Sky Lake CPU will tend to perform several percent faster than the Broadwell CPU. Meanwhile, under that test, the Sky Lake CPU will perform identically (at least up to rounding) to Kaby Lake, Coffee Lake, or the latest "Coffee Lake Refresh", at least if you can pick models with exactly the same L3 cache capacity.
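Once you have the equal-clock, single-core runtimes, that per-clock comparison reduces to a simple ratio. A sketch with hypothetical placeholder timings (not real measurements):

```python
# Per-clock (IPC) comparison from equal-clock, single-core benchmark runtimes.
# The runtimes below are hypothetical placeholders, not real measurements;
# both CPUs are assumed locked to the same fixed clock with turbo disabled.

def relative_per_clock_perf(runtime_baseline_s, runtime_test_s):
    """Speedup of the test CPU over the baseline at the same fixed clock.
    Lower runtime on the same workload means higher per-clock performance."""
    return runtime_baseline_s / runtime_test_s

broadwell_runtime = 105.0  # seconds (placeholder)
skylake_runtime = 100.0    # seconds (placeholder)

speedup = relative_per_clock_perf(broadwell_runtime, skylake_runtime)
print(f"Sky Lake is {(speedup - 1) * 100:.1f}% faster per clock")
```

The same calculation applied to Sky Lake vs. Kaby Lake or Coffee Lake under identical conditions would give a ratio of essentially 1.0, which is what marks those as refreshes rather than new cores.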
The last time Intel offered a mainstream consumer desktop CPU that improved the functionality of the cores on a per clock basis was Sky Lake. Everything since then is just a refresh. Incidentally, Sky Lake-X cores are a little different from Sky Lake, with a different L2 cache.
Refreshes aren't necessarily a bad thing. They can mean that a CPU vendor makes something a little better after a year rather than having to wait two years for an update. But three new generations just being a refresh like this is unprecedented. The reason for so many refreshes is that Intel's 10 nm process is severely delayed, and that's what is breaking everything for Intel right now. The problems might not look that bad right now because AMD is also still on 12/14 nm, and on an inferior process node to Intel's 14++ nm, at that. But once AMD has numerous products out on 7 nm, Intel is going to be in a world of hurt until they can catch up.
We agree - your last paragraph is the key. To some extent we are talking about two different aspects: design and manufacture.
A product is the combination of its design plus its manufacture, however. And not only was Skylake a new architecture, it also marked the maturation of Intel's 14nm manufacturing.
And it's the manufacturing side at 10nm that Intel is struggling with now. A new killer 10nm architecture won't change that - leading, as you say, to three design refreshes. And on the manufacturing side ... probably four? Broadwell to Skylake was a refresh, but in truth we don't see how they are made, just the design side.
Not sure about you, but when I game, I use ALL 8 cores. I have a bunch of peripheral programs that I have running.
Why do you do that? I mean, this seems like an easily fixable problem. Most programs while running in the background will take only a trivial processing load.
Though if you genuinely do need a ton of stuff running in the background that is a heavy CPU load, you might want to look at a Threadripper 2950X.
Considering Ozmodan's post history, I think he just goes out of his way to present a situation that's unfavourable to Intel.
That's not to say he couldn't have real reason this time, but if it's unfavourable to Intel or NVidia's RTX cards, you can be sure Ozmodan will present it on these forums.
No, Intel is a good buy if money is no object. My big dispute is with people who think they need the top-end processor for gaming. For most of us that game, it is important to watch the cost, and the CPU for most uses is secondary to the GPU. So it becomes a practical choice to go with an AMD CPU and throw the extra $100+ or so you would have spent on an Intel CPU at a better GPU, getting a bigger performance gain. I just don't think you will see noticeable differences in most games.
You took Skylake - why? There was no die shrink involved with Skylake - it was a Broadwell refresh! And some people pointed this out at the time. No real gain; just another 14nm CPU; the big improvement is the motherboard. All true of course.
From a manufacturing process point of view, however, Skylake was huge. It wasn't a die shrink, but it was Intel's first "mature" 14nm manufacturing process, one able to deliver high volumes.
And that is my point.
Needless to say it typically becomes harder and harder to improve a process and diminishing returns - probably - kick in with every refresh.
But Intel changed their definition of TDP, apparently. Your 95W TDP now is only valid with all cores at base frequency (3.6 GHz). Anything higher - regardless of whether it's an overclock or Intel's own boost algorithm - is not bound by any particular power draw. When limited to 95W, the 9900K and 2700X are pretty well equal (and the 8700K, for that matter) - not sure that the 9700K would be much different, as it's the same arch and clocks, just no HT. But your CPU running at 4.9 GHz is most definitely not also running at 95W under any type of significant load.
Now, it is indeed true that 95W < 105W, and that even at 3.6 GHz it's matching an up-to-4.3 GHz AMD CPU. So all of that still counts for some bragging rights.
Also, with respect to temperature... I won't nitpick too much, but it could be 1 core at 4.9 GHz on the 9700K versus 6 cores at 4.8 GHz on the 8700K, and you'd certainly see lower temps on the 9700K. Not saying you're trying to cheat or anything, just pointing out that without some context or basis, the statement in and of itself doesn't really mean much.
"Be water my friend" - Bruce Lee
Also, if you're doing an all-core overclock to the maximum turbo speed, you're probably pushing voltage, too. In that case, exactly how much power you use will depend tremendously on the voltage. That will vary considerably from one sample to the next.
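That voltage sensitivity follows from the usual first-order CMOS dynamic power relation, P ≈ C·V²·f: power scales linearly with frequency but with the square of voltage. A rough sketch with made-up numbers (not measurements of any real chip):

```python
# First-order CMOS dynamic power scaling: P ~ C * V^2 * f.
# All numbers below are made-up illustrations, not measurements of any real chip.

def scaled_power(base_power_w, v_ratio, f_ratio):
    """Dynamic power after scaling voltage and frequency relative to a baseline.
    Power grows linearly with the frequency ratio but with the *square*
    of the voltage ratio, so voltage bumps dominate."""
    return base_power_w * (v_ratio ** 2) * f_ratio

# Hypothetical: 95 W at 3.6 GHz stock; an all-core overclock to 4.9 GHz
# needing a voltage bump from 1.20 V to 1.35 V.
oc = scaled_power(95.0, 1.35 / 1.20, 4.9 / 3.6)
print(f"Estimated dynamic power: {oc:.0f} W")  # well above the 95 W TDP figure
```

Even this crude model shows why sample-to-sample voltage variation matters so much: a chip that needs a bit more voltage to hold the same clock pays for it quadratically in power.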
Does your company physically install the new CPU, too, or do they just pay for it and leave it to you to install it?
But if you're planning to spend less than $200, then AMD offers you enough cores with enough performance cheaper than Intel, and also has a bit better low-price motherboards.
https://www.newegg.com/Product/Product.aspx?Item=9SIA4RE7N32013
And never mind that it has considerably lower IPC than AMD's latest. If you're sharp enough to get something from the latest lineup but only looking for the Intel logo, you could end up with a Pentium Silver and wonder why it's terrible at gaming--inferior to a nine-year-old AMD Phenom II X4, even.
But even if you know enough to avoid mistakes like that, if you're only looking at Intel, you could end up buying something stupid like this:
https://www.newegg.com/Product/Product.aspx?Item=9SIADZJ83R6757
When you could have had one of these:
https://www.newegg.com/Product/Product.aspx?Item=N82E16819113480
https://www.newegg.com/Product/Product.aspx?Item=N82E16819113496
It's not just that the AMD CPUs are cheaper. They also clock higher, which mostly offsets Intel's IPC advantage. They also enable whatever AMD's version of hyperthreading is. And going with AMD gives you your choice of two entire extra cores or an integrated GPU that actually works well.
Speaking of integrated GPUs, on a really severe budget, you get this:
https://www.newegg.com/Product/Product.aspx?Item=N82E16819113481
The integrated GPU in there will blow away any discrete GPU that you can find for $50 or less. Sure, it's not as good as discrete GPUs that you could get for $100, but that doesn't fit some budgets.
I'm not trying to argue that you should only buy AMD for gaming. Quite the opposite, actually: unless you have some very specialized needs, you should consider both AMD and Intel. If you don't, then you'll sometimes end up buying something stupid.
And Intel absolutely does need to try. With AMD on the verge of moving to 7 nm and Intel's 10 nm process a disaster, gaming desktops, entry-level servers, super high-end (e.g., eight socket) servers, and very low power laptops might well be the only markets where Intel still even has a competitive CPU a year from now. And it's far from guaranteed that they'll even be competitive in those markets.
Contrary to popular belief, Intel's 10 nm process woes aren't due to not caring. Sometimes when you try something hard, it just doesn't work out.