The GPU is the Radeon R9 285. From the specs, it looks like it should be basically equivalent to a Radeon R9 280, except with lower power consumption and a mildly updated feature set. It probably has a smaller die, which would save AMD money. As the first 28 nm GPU, Tahiti was also the least efficient, so there is room to improve there. AMD has set the MSRP to $250, and it looks like they may be discontinuing the Radeon R9 280. Plenty of R9 280s are well below $250, so the R9 285 looks to be priced not to sell, at least initially.
On the CPU side, things are less exciting. We have three new CPUs to talk about. The most interesting one is the FX-8370. Basically, take an FX-8350, add 100 MHz, and you have an FX-8370. That's it. There's also the FX-8370E, which is a 95 W version of the same thing, and the FX-8320E, which is a 95 W version of the old FX-8320. Bringing down power consumption is welcome, of course, but it looks like there's not much to see here.
Perhaps a more exciting upcoming launch is Haswell-E, which will be the desktop version of Intel's next generation Xeon E5 chips. But credible rumors put this launch as happening on the Friday before Labor Day weekend, which seems timed to minimize publicity. That's the sort of thing that you do if you know that the part is terrible. I'm not sure how Intel could have messed up the chip unless DDR4 simply isn't ready for prime time.
How credible are the rumors? Haswell-E is the first part to use DDR4, so early DDR4 memory basically just has to work with Haswell-E and that's it. Furthermore, the launch of Haswell-E is the first time that it makes sense to use DDR4. Memory manufacturers selling DDR4 aren't bound by Intel's launch dates, and can launch DDR4 whenever they want. So why exactly did Newegg list DDR4 memory as releasing on August 29? They seem to have subsequently changed some of the DDR4 release dates to September 11, so maybe the rumors aren't as credible, or maybe Intel simply decided to delay the launch.
Comments
What's your take on the lower-end Intel CPUs having fewer PCI-E lanes?
Why would Nvidia be happy that the industry is stuck on 28 nm just like it has been for the last 2 1/2 years? A card you bought two years ago is likely still current generation state of the art. The only new GPU chip that Nvidia has brought to market in the last year and a half is a Maxwell test part priced not to sell. The same reasons that prevent AMD from making a chip much better than before apply just as much to Nvidia. That's not the way to get people to upgrade, which is what drives new card sales.
I'm not very computer savvy, but I'm actually quite happy.
I've been using my 560 Ti since it came out and still don't feel the need to replace it. With any luck I won't have to until it gives up on life!
Makes PC gaming a lot more affordable
That said, it is a shame that it sort of halts the progress of graphical fidelity in gaming.
There will basically be three reasons to buy Haswell-E rather than the non-E version. Those three reasons are the same as with Ivy Bridge-E and Sandy Bridge-E before them:
1) You want more than four cores.
2) You need massive PCI Express bandwidth.
3) You need massive system memory bandwidth or capacity.
For much of the last several years, AMD has actually been a better option than Intel if (3) is the concern, as you could get something like this (or its predecessors):
http://www.newegg.com/Product/Product.aspx?Item=N82E16819113316
In pure CPU performance, that's considerably slower than an FX-8320, let alone Intel's latest CPUs. But for a workload that is heavily constrained by memory bandwidth (e.g., most operations grab data randomly from a 1 GB working set, so the data can rarely be in CPU cache), CPU performance itself doesn't matter, only memory bandwidth. That's a corner case that is irrelevant to consumer software, however.
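To make that concrete, here's a rough sketch (Python with NumPy, purely illustrative; the 1 GB array and the random index pattern are my own choices, not from any real benchmark) of the kind of random-access workload where memory latency and bandwidth, not core speed, set the pace:

```python
import time
import numpy as np

N = 128 * 1024 * 1024                   # 128M float64 values = 1 GB working set
data = np.ones(N)
idx = np.random.randint(0, N, size=N)   # random indices spanning the whole buffer

t0 = time.time()
s_seq = data.sum()                      # sequential pass: prefetchers and cache help
t1 = time.time()
s_rand = data[idx].sum()                # random gather: nearly every access misses cache
t2 = time.time()

print(f"sequential: {t1 - t0:.2f} s, random gather: {t2 - t1:.2f} s")
# On typical hardware the random gather is several times slower, and its runtime
# tracks memory latency/bandwidth rather than CPU clock speed or core count.
```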
If (2) is the concern, then AMD has often been just as good as Intel over the last several years, and cheaper, too, with motherboards based on its 990FX chipset, and before that, 890FX and 790FX. That changed when Intel moved to PCI Express 3.0 and AMD's FX-series motherboards didn't. Even so, the transition to PCI Express 3.0 doubled PCI Express bandwidth, so even in cases where the difference between PCI Express 2.0 x8 and x16 bandwidth matters, it's unlikely that the difference between PCI Express 3.0 x8 and x16 bandwidth will also matter, since PCI Express 3.0 x8 already delivers about as much bandwidth as PCI Express 2.0 x16.
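For reference, the bandwidth arithmetic behind that doubling works out as follows (a back-of-the-envelope sketch using the published per-lane transfer rates and encoding overheads; per-direction figures):

```python
# Per-lane, per-direction throughput after encoding overhead.
pcie2_lane = 5e9 * (8 / 10) / 8      # 5 GT/s, 8b/10b encoding   -> 0.5 GB/s
pcie3_lane = 8e9 * (128 / 130) / 8   # 8 GT/s, 128b/130b encoding -> ~0.985 GB/s

for gen, lane in (("2.0", pcie2_lane), ("3.0", pcie3_lane)):
    for width in (8, 16):
        print(f"PCIe {gen} x{width}: {lane * width / 1e9:.2f} GB/s")
# PCIe 3.0 x8 (~7.9 GB/s) roughly matches PCIe 2.0 x16 (8.0 GB/s), which is
# why an x8/x8 split on a 3.0 platform is rarely the bottleneck.
```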
So really, if you're looking at Haswell-E, like Ivy Bridge-E or Sandy Bridge-E, or even Gulftown before them, you're probably looking at it because you want more than four cores--and faster cores than AMD offers. Is it worth paying an extra $200 to get two PCI Express 3.0 x16 slots, rather than just one, or having two x8 slots, as is rumored to be the case? I'm inclined to say "no".
But that doesn't mean that Haswell-E will end up being better than Ivy Bridge-E. Haswell-E has the problem of DDR4 memory, which, at the moment, is expensive. Four memory channels means you need four memory modules to get full bandwidth. For DDR3, that doesn't have to be expensive. Here you can do it for $85:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231325
For DDR4, it starts at $220, and that's decidedly of the budget variety:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820148860
Granted, that's only 8 GB of DDR3 as compared to 16 GB of DDR4, but 8 GB is still a lot today. If you want 16 GB of DDR3, it's $153:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231441
If you insist on 2133 MHz DDR3 to match the speed of DDR4, that only bumps it up to $157:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231656
Those are much better latency timings than you'll find on any DDR4 in existence, too.
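Putting the prices quoted above on a per-gigabyte basis makes the gap plain (a quick sketch using only the figures listed here; retail prices obviously move around, and the kit labels are just shorthand for the links above):

```python
# (price in USD, capacity in GB) for the kits linked above
kits = {
    "8 GB DDR3":           (85, 8),
    "16 GB DDR3":          (153, 16),
    "16 GB DDR3-2133":     (157, 16),
    "16 GB DDR4 (budget)": (220, 16),
}
for name, (price, gb) in kits.items():
    print(f"{name}: ${price} total, ${price / gb:.2f}/GB")
# The cheapest DDR4 works out to roughly 40% more per GB than
# comparable-speed DDR3, before even considering latency timings.
```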
So the price difference between DDR3 and DDR4 ends up eating up much of the price difference between the rumored cheaper Haswell-E and the older Ivy Bridge-E. A mature motherboard market will eat up much of the rest of the price difference. Now, it's not likely that either of those will last forever; eventually DDR4 will probably be cheaper than DDR3. But we're not there yet, and if not for the rumored cheaper 6-core and high end 8-core Core i7, the launch of Haswell-E would scarcely matter at all.
After the Devil's Canyon fallout and all those claims about 5 GHz on air (which no one was really able to do with retail samples), I can totally understand downplaying Haswell-E even if it's not terrible.
I think the big reason for the E line is the combination of #1 and #2: people who are planning big SLI/CFX rigs where money is no object (they're dropping $1-3k on GPUs alone). They need the PCIe lanes to feed the GPUs and don't want to compromise on the CPU (not that they necessarily need more cores).
That being said, apart from big SLI/CFX rigs, I don't think the launches of Sandy-E or Ivy-E really mattered at all either. No one seriously considers them unless they are looking at a tri/quad GPU setup (or are being sold something horribly overpriced and don't know any better).
Hmm, maybe CPU bound? Are we there yet?
http://www.hardwarepal.com/planetside-2-cpu-benchmark/
You look at that link, glance at the graphs, and think, "Wow, it really is CPU bound."
But if you actually read the article, it turns out Planetside 2 is just coded really poorly and doesn't even run that well on the better CPUs. AMD fares particularly poorly compared to Intel in this title, and I don't think that's necessarily because Intel is so much better than AMD; it's more that this particular game isn't terribly efficient. And apart from that, there is virtually no difference between an i5 and an i7.
And you can also point to a history of this with this developer: EverQuest 2 suffers a similar problem, perhaps even worse, as it's been that way for a decade now and never really got fixed. Even today's best CPU will still struggle if you try to crank up all the options on EQ2. In these cases you're pretty much just brute-forcing particularly poor engine code with CPU IPC. I guess you could call that CPU bound, but I don't think most software fits into this category.
In most other games, once you get 200+ people together, you're also going to run into network efficiency and server-side issues, more so than whatever your client is dealing with.
Let's take another modern example: Guild Wars 2
http://www.tomshardware.com/reviews/guild-wars-2-performance-benchmark,3268-7.html
Again, here you could just glance at the charts and think "CPU bound!", but take a deeper look. Again, we see a disparity between AMD and Intel; I still claim AMD is "good enough" if you're on a tight budget and you're looking at the <$150 CPUs. Apart from that, though, look at the data to see if we are really CPU bound: there is one particular test where they run the same Sandy Bridge Core i5 at 3 GHz and then again overclocked to 4 GHz. No difference in frame rate. If we were CPU bound, we would have seen a notable jump there. There does appear to be some benefit to adding more cores, so we see a more efficiently multi-threaded game here, and you see some benefit going all the way out to six cores (the most in a consumer CPU at the time), but Hyper-Threaded cores don't seem to do much, and CPU frequency doesn't seem to do much. I guess if you are constrained by a lack of cores, yeah, you could call that CPU bound.
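One way to frame that overclocking test: if the game were truly CPU bound, frame rate should scale roughly in proportion to CPU clock. A toy sanity check along those lines (my own framing, with made-up illustrative numbers rather than figures from the Tom's Hardware charts):

```python
def expected_fps_if_cpu_bound(fps_low, clock_low_ghz, clock_high_ghz):
    """Frame rate you'd expect at the higher clock if the CPU were the sole bottleneck."""
    return fps_low * clock_high_ghz / clock_low_ghz

measured_at_3ghz = 60   # hypothetical measured fps at 3 GHz
measured_at_4ghz = 61   # hypothetical measured fps after overclocking to 4 GHz

expected = expected_fps_if_cpu_bound(measured_at_3ghz, 3.0, 4.0)
print(f"expected if CPU bound: ~{expected:.0f} fps, measured: {measured_at_4ghz} fps")
# A flat measured frame rate where a ~33% jump was expected suggests the bottleneck
# is elsewhere: the GPU, engine overhead, or core count, not raw clock speed.
```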
So, yeah, you could call it CPU bound in a handful of examples, but for the majority of software I don't think that's the case; you're more constrained by the "console first" mentality of games conforming to the limited consoles and then getting translated to the PC.
I'll be waiting for AMD's new SMT architecture next year.
My Phenom II x6 can still play anything out there.
I wouldn't count on a new AMD architecture for desktops coming next year, unless you mean Carrizo, which will be heavily based on Kaveri--and probably not much better than Kaveri. 2016 sure looks more likely for AMD's next "big" core that isn't a derivative of Bulldozer.
Jim Keller has been back at AMD for a year now. There is no telling how much work has been done on the new architecture.
AMD has a lot of plug and play bits and pieces that make things a lot easier.
With no delays, you can assume that it will take about three years between starting work on a chip and launching the chip at retail. If AMD started designing a new chip last year, it's not likely to launch next year.
3 years is generous. Major CPU evolutionary leaps start their early planning stages more than a decade before they see retail release.
A change to the silicon of an existing design can sometimes take upwards of a year alone to implement and get to retail shelves.
If you're just looking at tweaking some existing designs, 3 years is probably a good number. But I don't know that AMD can get terribly competitive just by tweaking Piledriver some more.
Yes, a lot of the parts are "plug and play", but when you're down at the nanometer scale you're dealing with weird quantum anomalies, and in the GHz clock-frequency range you have strange RF anomalies, and these types of things take a lot of testing and some amount of trial and error to work out.
Also, with regard to the R9 285...
AMD's original MSRP for the R9 280 was $279. Cards with competition don't stay at MSRP for long. Until the market settles a bit it will be hard to tell if the 285 will be a good value or not. It's also remotely possible that it goes gangbusters for some other market (like coin mining) and we see prices skyrocket for a while based on demand - I don't expect that, but it's happened before and I wouldn't put it outside the realm of possibility.
I do agree that the power savings alone aren't really worth a huge premium, although with AMD's recent versions of PowerTune, power savings translate directly into increased performance.
We'll have to wait for retail pricing to see how aggressively it's priced, as well as some real-world benchmarks to see how it performs, before we can really judge it.
I was definitely surprised to see someone linking to a $200 280 in one of these threads; those things cost more than my GTX 770 not that long ago. SSD prices also still shock me with how much they've dropped in that time frame. I just built that machine earlier this year.