My last Nvidia card was an integrated GPU on a Tablet PC, and before that a GeForce4; both of those machines also had Intel inside, so probably not the best examples of Nvidia and Intel at the time. I have contemplated switching a couple of times, but benchmarks for my application workload keep bringing me back to AMD for CPUs, and price-to-performance keeps bringing me back to AMD GPUs. My current GPU is a Fury X that I bought on the cheap a year after its release. It runs 4K just fine as long as I keep its 4GB of memory in mind when picking settings. I'm going to replace it this fall; whether I go with an older Vega or a Volta depends on what the benchmarks say. I'll pick whichever has the higher compute performance.
Others fail to notice changes and still measure CPU speeds by a standard that was very popular in the 1990s. Over time they develop strange superstitions as a result.
You are aware that the unit of measurement for a processor's clock speed is hertz, right? Maybe you should have said something like computation rate, which is expressed in ops or FLOPS (operations per second or floating-point operations per second), or maybe instructions per second. In that case, though, the information is rarely available to you at the time of purchase. Those would be actual performance benchmarks.
I don't know about you, but when I talk about a processor being "fast", I don't mean the clock speed. I mean "does the computations that it's asked to do in a short period of time". Otherwise, you'd have to think of a Pentium 4 as being faster than an Athlon 64, or a Bulldozer as being faster than a Sandy Bridge.
But if you really want to be pedantic, you are aware that a single chip commonly has different regions that run at different clock speeds, aren't you? If all that matters is the clock speed, then which one? Whichever number is largest?
No, what I was saying was that the metrics that actually measure real performance are usually the ones ignored. Measuring instructions per second is far more useful: a computer program is a series of instructions. Comparing PCMark benchmarks would be more useful for games because it takes into account the moving around of large memory buffers that happens in games but not in other programs. In a game, the CPU has to copy the textures and vertex buffers to the graphics card, which is nearly 70% of the work the CPU performs; the graphics card is responsible for most of the work in any game, since the CPU really just tracks player coordinates and handles game logic.
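If you want to see what I mean, here's a rough sketch of measuring a rate yourself instead of quoting the sticker clock. This is just an illustration, not any standard benchmark; the workloads and sizes are made up, and a Python loop measures Python-level throughput rather than raw CPU instructions.

```python
# Sketch of a do-it-yourself throughput measurement: time a fixed amount of
# work and report a rate, rather than quoting the advertised clock speed.
# Workloads and sizes are arbitrary; the loop measures Python-level ops,
# not raw CPU instructions.
import time

def compute_bound(n=5_000_000):
    """Simple integer loop: roughly n add/multiply steps."""
    total = 0
    for i in range(n):
        total += i * 3
    return n

def memory_bound(mb=256):
    """Copy a large buffer, similar in spirit to shuffling big assets around."""
    src = bytearray(mb * 1024 * 1024)
    start = time.perf_counter()
    dst = bytes(src)                      # one full copy of the buffer
    elapsed = time.perf_counter() - start
    assert len(dst) == len(src)
    return mb / elapsed                   # MB per second copied

start = time.perf_counter()
ops = compute_bound()
elapsed = time.perf_counter() - start
print(f"compute: ~{ops / elapsed / 1e6:.1f} M ops/s")
print(f"memory copy: ~{memory_bound():.0f} MB/s")
```

Two numbers like that (a compute rate and a copy rate) tell you more about how a machine will behave with big game assets than the clock frequency on the box does.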
Yes, you can run many of today's games on PCs older than 3 years, but you end up stepping down the resolution and the number of objects displayed. You are also very limited in multiprocessing. Try to run a streaming app while you are playing a game on one of these older PCs and you will have issues. I usually have 3 or 4 other applications running when playing a game, sometimes more. Good luck with an old PC.
So in the end I disagree with your point of view.
3 years old isn't really old enough to have to turn the resolution down. If you are multitasking, that's really a RAM issue. In fact, the processors are of comparable speeds; they just have more cores now. Really, the performance difference between hardware generations is exaggerated. It's more of a luxury to have the newest than a necessity.
Now you are just making up things. Multiprocessing is very dependent on how many cores you have. The more cores the better the multiprocessing.
And yes, a 3 year old computer is going to have some problems with the newer games. Go and try to run ARK on a 3 year old computer with max settings as an example.
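If you want to check the cores claim yourself, here's a quick sketch: time the same batch of CPU-bound jobs with one worker versus one worker per core. The workload is made up and the real gain depends entirely on how parallel your actual programs are.

```python
# Sketch: time the same CPU-bound batch with 1 worker vs. one per core.
# The workload is arbitrary; real gains depend on how parallel the work is.
import os
import time
from multiprocessing import Pool

def busy_work(n):
    # A small CPU-bound task (sum of squares).
    return sum(i * i for i in range(n))

def run(workers, jobs):
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(busy_work, jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    jobs = [2_000_000] * 16                  # 16 identical CPU-bound jobs
    cores = os.cpu_count() or 1
    t1 = run(1, jobs)
    tn = run(cores, jobs)
    print(f"1 worker: {t1:.2f}s  {cores} workers: {tn:.2f}s  speedup ~{t1 / tn:.1f}x")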
I don't know exactly where this thread is coming from, or where it's going, but it's not too hard to write non-optimized code that will run poorly on pretty much any platform. From what I understand, ARK is one such example.
If you go applying the "more cores" logic, that would point to something like the E7-8890 being the best gaming CPU available. It's 24C/48T... but since physics still exists and you can only cram about 165W into that package, cranking up the core count means it only runs at about 2.2 GHz. Not very optimal for gaming, which still struggles to use more than 2-4 cores effectively.
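Just to put rough numbers on that: this is basically Amdahl's law. The parallel fraction and clocks below are assumptions, not measurements, and clock speed is standing in for single-thread speed (real chips differ in IPC too), but it shows why a 24-core part at a low clock can lose to a 4-core part at a high clock when only part of the per-frame work parallelizes.

```python
# Back-of-the-envelope Amdahl's law comparison. The parallel fraction (p) and
# the clock speeds are assumptions for illustration; clock speed is used as a
# stand-in for single-thread speed.

def relative_throughput(cores, ghz, p):
    """Single-thread speed scaled by Amdahl's speedup: 1 / ((1 - p) + p / cores)."""
    speedup = 1.0 / ((1.0 - p) + p / cores)
    return ghz * speedup

p = 0.5  # assume only half of the per-frame CPU work parallelizes
many_slow = relative_throughput(cores=24, ghz=2.2, p=p)  # big many-core part
few_fast = relative_throughput(cores=4, ghz=4.0, p=p)    # small high-clock part
print(f"24C @ 2.2 GHz: {many_slow:.2f}  vs  4C @ 4.0 GHz: {few_fast:.2f} (relative units)")
```

With p = 0.5 the 4-core part comes out ahead (roughly 6.4 vs 4.2 in those made-up units); a game would have to parallelize the large majority of its work before the 24-core part catches up.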
The difference in CPUs three years ago versus what's available today is.... not really all that much. In fact, you could go back a lot farther than that before you start to see significant differences. I know a lot of people who still game very comfortably on CPUs from 2008 and 2011. Yeah, there are faster CPUs available today, for certain, but it won't appreciably increase your gaming experience all that much. And I know a lot of people who play games that even today's top-of-the-line CPUs and GPUs can't max out.
You shouldn't buy the newest just because it's new. You should build to your budget, and get the best your budget can afford. If that's new hardware, then great. If it's not, that's not always a bad thing either.
If Gen 2 and Gen 3 cost the same, I would lean toward Gen 3, mostly because of the longer support/driver window, but there is also the general assumption that in electronics newer revisions tend to be faster, smaller, and/or more efficient... which does tend to be true.
You can make the case against Gen 3, though, if it's bleeding-edge tech, as it hasn't had a chance to prove itself. I know a lot of people held off on SSDs because of this (and I don't really blame them). Some people also apply this to CPUs, GPUs, and chipsets. If it's a critical machine, I can't blame them... it gives things like the FDIV bug time to shake out. But you can counter that with how far back Spectre/Meltdown apply; to make yourself immune to those you have to go back a long way.
The newest hardware isn't always the best option for your gaming PC. I'm not saying that from a performance perspective, but rather from a cost-efficiency perspective. You can usually play modern games on 1- to 2-year-old hardware at max or near-max settings and have really great results. So instead of buying the best hardware from today, or settling for mediocre hardware from today, buy the best hardware from a year ago after it's been marked down by 75%. Many types of hardware, like CPUs and graphics cards, come in many tiers, and the lower tiers today are often significantly slower than the higher tiers from 3 years ago.
Your idea sounds like good buyer logic, but if we purchase an older hardware version, we may still face performance issues with newer games within a year or two. I do agree that buying the latest hardware can be risky, since many review sites may not yet have published a proper evaluation of the unit, so it is always better to invest in a proven hardware version. One problem with this strategy, though, is the possibility of becoming outdated faster than someone who bought the latest unit in the same year.
I held off upgrading this year because so far I'm not playing any games that overtax my 4-year-old gaming laptop.
I don't play any of the newest titles, however; most are at least 2 to 4 years past launch, which is likely why.
There isn't anything in the MMORPG space I've been unable to play, even the newer titles, because they code to a lower hardware standard these days.
Exactly. Unless you're doing more than just playing games, it doesn't make any sense to upgrade sooner than every ~3 years. If you made a good, informed (if more expensive) purchase, you can stretch it out to ~5 years.
You can stretch it out even more if you know how to overclock. My i5-750 @ 4GHz is still running my games on max with only a few things turned down to high (or very high)... I am only now starting to think about building myself a new rig, and even then I am waiting because of the stupidly high GPU prices.
Playing on a 1080p screen, I never really see a difference between maxed-out settings and the settings just under that...
Brenics ~ Just to point out I do believe Chris Roberts is going down as the man who cheated backers and took down crowdfunding for gaming.
Probably a good rule to live by, though it's hard to keep away from "new and shiny". I often find myself looking up lists like "coolest tech gadgets of 2018" and drooling over new stuff. I have a decent build with a good GPU, but those new AMD Ryzen CPUs won't leave me alone. Wanna try to build a PC around one of those.