Originally posted by Redemp After reviewing several benchmarks and determining my price point for my new build, I went with the R9 290. I'll let you folks know if I happen to have any issues; as it stands I expect performance comparable to a 780, with the normal fluctuations, at a reduced price. If heat becomes an issue I'll remedy it, but I suspect that it will be fine. Now if only AMD had a chip to compete with Intel's; I love rotating between Intel and AMD each build.
Kaveri is coming in about two months. It's not likely to catch a Core i5-4670K, but it likely won't be all that far behind. I'd expect a gap of 10%-20% at stock speeds, which would be the closest AMD has been in single-threaded performance in about three years.
Purchasing an AMD card, IMO, can be a gamble. There can be:
a) heat issues
b) cheap assembly issues
c) compatibility problems with games
d) delayed driver updates for games and bugs (delayed compared to Nvidia releases)
Nvidia for the time being (and I don't see this changing anytime soon) is the safer option if you don't want a headache after assembling a new rig. I really don't give a flying turd if AMD cards these days are 7.69% faster... every card I've ever purchased from AMD, bar the 9800 Pro, had cheaper build quality compared to its Nvidia rival card... to me, this is the reason they are cheaper: they have quality assurance issues. It is more common for people to have problems with AMD cards than with Nvidia... all you gotta do is read.
I'll admit, though... the 9800 Pro back in the day was F-ing legendary.
I'll always feel safer overclocking an Nvidia card...
Edit: CrossFire over the PCIe bus is great. If it does indeed net much performance, expect Nvidia to follow suit. My points still stand, though.
Couldn't agree more. Funnily enough, the last AMD card I had that worked flawlessly was the 9800 Pro as well. That was what, a card with 128 MEGABYTES of VRAM? I've got lots of friends who are all longtime PC gamers, and many of them still buy AMD/ATI (whatever you wanna call it) cards even after they've had huge problems with them. My best friend is the only one who seems to have learned his lesson after the fiasco that was the X1800 series cards.
BTW, I love that people still try to reference the Titan as a gaming card. That card was released specifically for people who do computational work with CUDA; it's capable of double-precision floating-point computation at excellent speed. It also had a metric shit ton of RAM, far more than you need for any game.
The 780 Ti is the card that people should be comparing the 290X to, and guess what, it's not $1,000.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
Originally posted by Hrimnir The 780 Ti is the card that people should be comparing the 290X to, and guess what, it's not $1,000.
It is $700 though, compared to the 290X's $550.
$150 is real money to most people.
Also, I'd like to point out Titan is sold under the GeForce brand - which is the gamer brand.
The professional cards are Quadro/Tesla, and they are much more than $1,000 for a comparable GPU.
nVidia clearly marketed it as a gaming card, and a lot of ... I don't want to say suckers, but maybe I do... bought it for gaming. If it were just for GPU compute people, it would be marketed as Tesla (and it is, for around $3,500), and just for professional 3D folks Quadro (and it is, for around $2,900).
All fair points.
I still think it's a fair comparison. The type of person who can afford to spend $550 vs. $700 on a card isn't really going to balk at another $150. That type of person is probably going to care about overclockability, heat, noise levels, etc., though.
I'm glad AMD has finally come out with a decent card; up until now the past 3 years have been somewhat of an embarrassment for them, and that has allowed Nvidia to pull the kind of bullshit you talked about above (selling what is a low-end compute card as a high-end gamer card).
Competition is good for everyone. The problem is that people are blowing smoke up AMD's backside because they want SO badly for AMD to beat Nvidia that they can't actually look at the card in an objective manner. Telling someone they're better than they are is not the recipe for them doing better the next time around.
On a complete tangent: does anyone else think it's hilarious that a thread about a $550 video card has garnered so much attention in a forum full of people who act as if paying $15/mo for an MMO is the equivalent of them giving up their daughter's virginity for a loaf of bread?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
I keep coming across as a fanboy here, but really just trying to set some facts straight.
I wouldn't call the last 3 years "embarrassing" for AMD - they did, after all, just deliver the coup de grâce to nVidia on the console front.
The 5870 was vastly superior to the 480 - for all the same reasons everyone is saying the 780 Ti is so much better than the 290X... plus it was cheaper.
The 6970 vs 580 - same story.
The 7970 has been on the market for ... almost 22 months now. And in that time has competed very well against the 580, the 680, and even the 780. It was the fastest card when it was released, nVidia couldn't touch it for several months and had to resort to a soft launch of the 680 with insufficient inventory, completely reshuffling their Kepler lineup at release.
AMD had PowerTune almost 18 months before nVidia could compete with Boost. And that technology is a big deal, and a big reason why Kepler and Hawaii perform as well as they do today.
AMD is still well ahead on multi-monitor configurations with Eyefinity, although nVidia is a whole lot better than they used to be. And AMD is winning at 4K if the benchmarks are to be believed.
nVidia Kepler is very good, and a whole lot better than Fermi. But Fermi was bad, for many of the same reasons that Hawaii is struggling now; except nVidia still insisted on charging a premium for it and pretended nothing was wrong, and their diehard fans believed them and kept on buying them.
If anything, if I were trying to keep score over the past 3 years, I'd say nVidia is still playing catch up. It's one thing to have "the fastest card" - but that isn't the only card in the game.
nVidia has some great products, and Kepler is a great architecture, but $150 is a lot to pay for what amounts to a better cooler at this point.
I won't go into details, but I'll just post one thing.
In 2012, AMD posted a loss of $1.18 billion, while Nvidia posted a profit of $583 million. I could go back and do 2010/2011, etc., but it would really be a waste of time.
Say what you will about the direct hardware comparisons, but the driver issues that have plagued AMD and the overall cheapness of the product have had a drastic effect on what people are willing to buy.
If you look at the psychology driving purchases, the decision to spend money on a product at all is far more important than the "how much do I spend" part.
Look at Apple: people pay exorbitant premiums for products they could get elsewhere that would perform the same function as well, or in many cases better, because of things like aesthetics, build quality, noise levels, etc.
Edit: One other thing I feel compelled to mention. You can't really pull the "AMD released X card and it was the fastest at release" card, because AMD has been 4-8 months behind Nvidia for quite a while now. They used to release their new cards within weeks of Nvidia, and it was that way for many, many years. It was only around 2010 that they couldn't keep up, and they have now consistently been behind.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
nVidia has some great products, and Kepler is a great architecture, but $150 is a lot to pay for what amounts to a better cooler at this point.
I guess I should have just committed to an actual response in the last one, lol.
Anywho, I just wanted to say that at the performance bracket we're talking about, you can't really make value judgements. It ties into what I was talking about before: if someone makes the decision to buy a high-end processor, video card, car, whatever, they're not concerned about value so much as performance. Someone who is buying a Porsche, for example, isn't going to save $15k and buy a Corvette that's just as fast, because the rest of the car doesn't match up. There are build quality differences, engineering differences, pedigree, etc.
The same thing applies in the computing world on high-end hardware. $50 or $100 here or there at the top end isn't really that big of a deal. That's why people will spend $500 on water cooling hardware to eke out an extra 10% of overclocking ability. It's certainly not cost-effective, because the point was never to get good value for your money.
So when you talk about the 780 Ti running on the order of 10 °C cooler and producing 5-10% better results in most situations, that allows a lot more overclocking headroom, even with the stock cooler. Throw a water block on it and the difference is even more major.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
AMD's graphics division was profitable in 2012. It's the processor division that was losing money, and I think that a large chunk of that loss was also due to getting out of some wafer agreement with Global Foundries to finally extricate the company from running its own fabs.
If people don't base decisions on price much, then why don't AMD and Nvidia charge a lot more for everything?
The reference R9 290 and 290X are problematic on heat and noise, but the custom cards from board partners that are coming won't be any worse than analogous Nvidia cards with the same power draw--just as it has been all up and down the lineup for years.
-----
AMD has been behind Nvidia? Since when? Not in the last six years, at least if you mean behind in a chronological sense. Yes, the GeForce 8800 GTX did beat the Radeon HD 2900 XT to market by several months, way back in 2006. But let's look at everything since then:
With the Radeon HD 3870, AMD shrunk its architecture to 55 nm in November 2007. Nvidia didn't get there until June 2008. Also in June 2008, Nvidia launched the GeForce GTX 260 and 280, still on 65 nm. Later that month, AMD launched the Radeon HD 4870 with a markedly superior architecture on 55 nm. Nvidia wouldn't shrink the new GeForce 200 series architecture to 55 nm until 2009, and even then, it was still an inferior architecture.
Meanwhile, AMD shrunk its architecture to 40 nm in April 2009. Nvidia got there in September 2009. But by September 2009, AMD launched its new Evergreen architecture that supported DirectX 11 and so forth. Nvidia wouldn't get there with Fermi until April 2010, and even then, it was a far inferior architecture to AMD's at everything except GPU compute. By then, AMD had rolled out its entire new lineup, complete with smaller dies and salvage parts; Nvidia would roll out additional cards over the course of most of 2010 trying to catch up.
Then AMD refreshed its entire lineup with new chips over the course of late 2010 and early 2011, starting with the Radeon HD 6870 in October 2010. Nvidia had nothing new apart from a respin of its previous Fermi parts; Nvidia still had a far inferior architecture for gaming purposes, but at least with the respin, the dies finally worked rather than having to disable large chunks of every single die meant for gaming even in the top bin.
Then AMD launched its Southern Islands architecture on 28 nm in January 2012. AMD launched additional cards on 28 nm in February 2012, and finished off its lineup in March 2012. Nvidia wouldn't have Kepler cards available at retail until May 2012. Nvidia wouldn't add anything outside of the high end to its 28 nm lineup until September 2012.
So let's come forward to Hawaii, which was originally meant to be a 20 nm GPU chip. But the 20 nm process node is rather delayed, so AMD at some point decided to move it back to 28 nm so that they would have a higher end card to compete with Titan while waiting for 20 nm. Nvidia, by the way, hasn't done this with Maxwell, and it's highly probable that AMD will launch 20 nm cards before Nvidia does.
So AMD is reliably months behind Nvidia, you say? For most of the last six years, it's been the other way around.
Die shrinks are mostly great. Usually a die shrink will allow a specific architecture to operate at lower voltages, hence producing less heat and keeping temperatures down. Die shrinks are about economies of scale: they save the manufacturer resources... resources which can then be allocated elsewhere. A die shrink doesn't always correlate with a boost in performance over previous generations... it just means a smaller chip that runs cooler. To an overclocker, a die shrink means he can potentially extract more performance out of an architecture because, by default, the newer chip runs cooler. Cooler here being the keyword.
A die shrink, however, doesn't automatically translate to increased performance, which is how you're making it sound in your post.
Since the 8800, nVidia has been top dawg. The only cards which have changed things up recently are the 7970 GHz Edition and this 290X. Competition is a healthy thing, and I wish good old ATi well. But I feel like ATi needs to work heavily on quality assurance to improve their GPU lineup. Anybody can make a super-overclocked sample (the 7970 GHz), but when you can't even get that card stable and you start releasing them to the public... you're out here simply for that dough, and that's that.
If your argument is that Nvidia has usually had the highest performing single-GPU card, then that's merely a philosophical difference between AMD and Nvidia. Nvidia is willing to build much larger dies than AMD, and suffer the delay and yield problems as a result. But that's only relevant to people who care nothing about the price tag, but will pay whatever it costs to get the fastest card available. To the other 99%+ of the market, efficiency matters, too (e.g., most performance you can buy at a given price tag), and that's where AMD has mostly led for most of the last six years.
But AMD did have what was far and away the fastest GPU for two extended stretches, with the Radeon HD 5870 and Radeon HD 7970. Not coincidentally, those were the first cards out on new process nodes that were a full node die shrink from the previous generation. Meaning that there's a good chance that it will repeat next year with the transition to 20 nm. And then 16 nm the year after if Nvidia doesn't get a huge die out on 20 nm quickly.
It also depends where you live. There's nothing wrong with either chipset. But here in Scotland, Nvidia is very highly priced. I think in America they have similar prices, so take your pick. But over here AMD is just much better value for money; e.g., a 7950 Boost is cheaper than a GTX 660 but performs around a 670.
Look, you misconstrued my point. I said at that price level, at the high end of anything - I don't care what field - price is not as much of a factor as other things. Price is only important to people who can't afford the object at hand. Someone who has an extra $50 every paycheck isn't going to spend $500 on a video card. Someone who makes $4,000 a month after taxes, on the other hand, generally isn't going to balk at $500 vs. $650; they're going to buy the product they like more. Value isn't as much of a factor to someone like that.
The Corvette vs. Porsche comparison is still a great example. The Porsche is $15-20k more, but isn't any faster and doesn't handle any better. Performance-wise, the Corvette is the 290X; the Porsche is more the 780 Ti, inasmuch as it delivers the same performance but with more refinement. That is something that is important to a lot of people in that price bracket.
As far as being behind Nvidia: once again you're pulling the AMD fanboi card and twisting the facts to support your claim. Yes, if you look at the manufacturing process you can make that claim. That's like claiming that because Honda moved to dual overhead cams on their cars before Ford, any Honda is a better-performing car than any Ford. Obviously not the case.
If you look at the cards that competed with each other and who held the PERFORMANCE crown, it's been Nvidia ahead of AMD for a long-ass time. If Nvidia makes a 65 nm card that outperforms ATI's 55 nm card, the nm is relatively meaningless at that point.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
Once again you're missing the point. This whole discussion is about the 290X, the highest-end card AMD has, and AT THE HIGH END, performance is the most important aspect. Performance per dollar spent is only a valid metric for mid-range and entry-level cards. People who are looking for the latest and greatest will *generally* pay whatever it takes.
The Titan was a perfect example. It was the first time in history you had a card at that price point, and it still sold very well. Once again, proving my point. The people who have the money and wherewithal to buy at the bleeding edge aren't nearly as concerned about VALUE as people like us, who probably won't spend more than $300 on average for a video card.
Also, who gives 2 shits about how big the overall die is if it operates at the same TDP and then does it 10 °C cooler?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
Die shrinks are mostly great. Usually a die shrink will allow a specific architecture to operate at lower voltages, hence producing less heat and keeping temperatures down. Die shrinks are about economies of scale: they save the manufacturer resources... resources which can then be allocated elsewhere. A die shrink doesn't always correlate with a boost in performance over previous generations... it just means a smaller chip that runs cooler. To an overclocker, a die shrink means he can potentially extract more performance out of an architecture because, by default, the newer chip runs cooler. Cooler here being the keyword.
A die shrink, however, doesn't automatically translate to increased performance, which is how you're making it sound in your post.
Since the 8800, nVidia has been top dawg. The only cards which have changed things up recently are the 7970 GHz Edition and this 290X. Competition is a healthy thing, and I wish good old ATi well. But I feel like ATi needs to work heavily on quality assurance to improve their GPU lineup. Anybody can make a super-overclocked sample (the 7970 GHz), but when you can't even get that card stable and you start releasing them to the public... you're out here simply for that dough, and that's that.
He said it better than I could, thus my repeating his post. He did a very good job of detailing the point I was trying to make.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
Originally posted by Hrimnir He said it better than I could, thus my repeating his post. He did a very good job of detailing the point I was trying to make.
Except he's not quite got it right.
Die shrinks are about providing more usable area and increasing efficiency, and they don't automatically correlate to increased performance. He's right about those two things.
Die shrinks only indirectly have anything to do with running cooler. Temperature is a function of energy. The temperature of a system will rise or fall until the energy going in equals the energy being removed.
A die shrink, all things being equal, should result in less power being required, as smaller pathways tend to require less voltage, and power goes up with the square of voltage. There are a lot of other variables in there, though: different transistor technologies, efficiency of the design, etc.
And let's not forget that a die shrink doesn't usually happen without major subsequent architectural changes (unless you're doing Intel's Tick-Tock method, which neither GPU manufacturer does). The power per compute unit may go down by 10% with a die shrink, but now you have enough surface area to pack 20% more compute units in the same size die... so did your overall power go down? Nope. It went up, because you packed more compute units in there to make the chip that much faster.
Does that mean your temperature went up? Nope. It means your chip requires more power. The temperature is a function of the power input and the cooler... if you use the same cooler (like AMD has for the past 3 generations), then ok - but the cooler isn't the same as the chip - it's a part that goes on top of the chip, and can be swapped out easily (even by the end user). So is it fair to consider a chip's temperature based on the fan?
Die shrinks have nothing to do with temperature. It's all about surface area and power. Power does correlate to temperature, but it isn't the only variable, and you won't get additional speed (innovation/progress) without using additional power.
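To put rough numbers on that, here's a minimal back-of-the-envelope sketch in Python (with made-up illustrative values, not measured figures) of why a shrink's lower voltage doesn't automatically mean a cooler card once the freed-up area gets spent on more compute units:

```python
# Rough illustration of the argument above: dynamic switching power per
# compute unit scales roughly as P ~ C * V^2 * f.  All numbers below are
# made up purely for illustration - they are not measurements of any real GPU.

def unit_power(capacitance, voltage, freq_ghz):
    """Approximate dynamic power of one compute unit (arbitrary units)."""
    return capacitance * voltage ** 2 * freq_ghz

# Old node: higher voltage, fewer compute units.
old_units = 100
old_per_unit = unit_power(capacitance=1.0, voltage=1.10, freq_ghz=0.8)

# After a die shrink: voltage drops a bit (~9% less power per unit),
# but the freed-up die area is spent on ~20% more compute units.
new_units = 120
new_per_unit = unit_power(capacitance=1.0, voltage=1.05, freq_ghz=0.8)

print(f"power per unit: {old_per_unit:.3f} -> {new_per_unit:.3f}")
print(f"total power:    {old_units * old_per_unit:.1f} -> {new_units * new_per_unit:.1f}")
# The shrunk chip is faster (more units) but draws about the same or more
# total power, so it isn't automatically "cooler" - the cooler and the
# power target are what determine the temperature it runs at.
```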
Do we consider Intel CPUs to run hot and crappy if you try to use the stock heatsink that Intel provides, or do we all go out and get a decent aftermarket cooler, knowing that stock heatsinks pretty much suck?
Another factor in all of this: GPU Boost/PowerTune has increased the duty cycle on cards significantly. Before, a 250 W TDP card would only hit 250 W at peak load in pretty much outlier cases, and be severely underclocked the rest of the time. Now, a 250 W TDP card can conceivably be sitting at 250 W nearly the entire time, because GPU Boost/PowerTune will dynamically adjust the clock upwards to increase performance (or adjust it down to save the card from melting itself). So cards today are being pushed harder than cards 3 generations ago (1 generation ago for nVidia), even at the same TDP. It's a shame AMD hasn't updated their reference cooler to account for it, whereas nVidia only happens to have the cooler they have now because they didn't get GPU Boost out in time (the 480 debacle). Maybe this debacle will force AMD's hand on that front.
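Roughly speaking, GPU Boost/PowerTune behaves like a feedback loop around a power (or thermal) target: push the clock up while there's headroom, pull it back when the limit is hit. Here's a toy simulation of that idea in Python; it is not either vendor's actual algorithm, and every number in it is invented:

```python
# Toy boost-style control loop: raise the clock while under the power
# target, lower it when over.  Purely illustrative - not AMD's PowerTune
# or nVidia's GPU Boost, and the power model below is invented.

POWER_TARGET_W = 250.0
BASE_CLOCK_MHZ = 800.0
MAX_CLOCK_MHZ = 1000.0
STEP_MHZ = 13.0

def estimated_power(clock_mhz, load):
    # Crude model: some fixed draw plus a term that grows with clock and load.
    return 90.0 + 0.18 * clock_mhz * load

clock = BASE_CLOCK_MHZ
for load in [0.4, 0.7, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]:  # per-frame GPU load
    if estimated_power(clock, load) < POWER_TARGET_W and clock < MAX_CLOCK_MHZ:
        clock = min(clock + STEP_MHZ, MAX_CLOCK_MHZ)
    else:
        clock = max(clock - STEP_MHZ, BASE_CLOCK_MHZ)
    print(f"load={load:.1f}  clock={clock:6.1f} MHz  "
          f"power={estimated_power(clock, load):5.1f} W")
# Under sustained load the clock settles near whatever the power target
# allows, which is why a modern card spends so much of its time right at
# its TDP instead of far below it.
```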
For a good while now, probably the last 4-5 generations, the speed of a GPU has really only been constrained by power. We are limited to 75 W from the slot by the PCIe specification, plus 75 W per 6-pin and 150 W per 8-pin PCIe connector. There is no specification limit on the number of additional PCIe power connectors your card can require, but there is a practical limit on the physical size of the card you can manufacture and expect your end user to be able to fit inside a case. 2-slot air coolers are about the limit (there have been some aftermarket 3/4-slot ones), and those struggle once you get up around 300 W, even if you're looking at nVidia's better stock cooler, just due to the amount of surface area you can pack in and still fit inside a 2-slot cooler.
Power has been limiting GPU speed, not die surface area or die shrinks. The last few generations, the top-tier card has capped out with a TDP of 250-350 W, and the manufacturers pack as much speed (roughly, a combination of clock speed and compute units) into that amount of TDP. A die shrink doesn't make the chip run cooler; it allows each compute unit to use less power, so they can pack more units in for the same total power.
A quick example, using nVidia and their Shader Processor/Stream Processor/CUDA core count (whatever their marketing department decides to call them this week) and their listed TDP:
GTX 280: 65 nm, 240 SPUs, 236 W
GTX 480: 40 nm, 480 SPUs, 250 W
GTX 780 Ti: 28 nm, 2,880 SPUs, 250 W
So, did the die shrink allow the 780 Ti to run cooler than the 480? No; the combination of GPU Boost better reining in runaway power draw (the big problem with the 480, IMO) and the much-improved heatsink did that. The die shrink allowed them to pack another 2,400 cores on the die and still stay at the same TDP, which is what makes the card faster. You can see that the TDP hasn't really changed much in the past 6 or so years.
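For reference, here's a tiny Python sketch using only the figures quoted above: the spec power ceiling you get from the slot plus common connector combinations, and how many SPUs per watt each of those cards packs into roughly the same TDP:

```python
# Spec power budget: 75 W from the slot, 75 W per 6-pin, 150 W per 8-pin.
def board_power_limit(six_pin=0, eight_pin=0):
    return 75 + 75 * six_pin + 150 * eight_pin

# A typical high-end configuration: one 6-pin plus one 8-pin connector.
print("6-pin + 8-pin ceiling:", board_power_limit(six_pin=1, eight_pin=1), "W")
print("dual 8-pin ceiling:   ", board_power_limit(eight_pin=2), "W")

# The cards listed above: node, SPU count, listed TDP.
cards = [
    ("GTX 280",    "65 nm",  240, 236),
    ("GTX 480",    "40 nm",  480, 250),
    ("GTX 780 Ti", "28 nm", 2880, 250),
]
for name, node, spus, tdp in cards:
    print(f"{name:10s} {node:6s} {spus:5d} SPUs  {tdp} W  "
          f"{spus / tdp:5.2f} SPUs per watt")
# TDP has barely moved; the shrink goes into packing in more units per watt.
```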
I still go for AMD (waiting for a custom 290X), but the future doesn't look so bright for AMD; the market is completely controlled by Nvidia, sadly.
They bring a card to market that's a bit faster than the 290X just after its release, but it costs a lot more.
And then they even release a custom card (Gigabyte WindForce 3X) much sooner than AMD; how can you win this battle?
The funny thing is that the majority (95%) won't even buy the 780 Ti and will go for a lesser card, still paying too much, only because Nvidia leads the benchmark charts, has a better cooler and less noise, and has its sticker slapped on almost every game (this noise/heat problem alone will break AMD, sadly).
AMD's price/performance is very gamer-friendly, but they are not rewarded for it (probably slightly their own fault as well?).
They should have released the new card only once the heat/noise wasn't so bad and custom cards were around the corner.
Nvidia would never have come out with the 780 Ti and dropped prices on other cards if the 290 series hadn't been released yet.
A few weeks ago it looked great for AMD; now I'm afraid they lose this one BIG.
Unless AMD's custom cards improve a lot, price is what gamers will decide on; but if noise and heat stay high and there's not much improvement in speed, I'm not convinced gamers will go en masse for AMD.
Any sign of custom cards for AMD?
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77 CPU:Intell Icore7 3770k GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now)) MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB PSU:Corsair AX1200i OS:Windows 10 64bit
Update on the new build, including the R9 290: Finally got the system built. There's a story there, but suffice it to say I should have stuck to my guns and done a clean install; too many driver conflicts when switching that much hardware. (Lesson learned.)
As to the R9 290: performance-wise I'm satisfied. I had realistic expectations for this new card. Two mates of mine recently jumped to the GTX 780, so I had enough room to make loose comparisons between mine and theirs. I don't intend to get technical; the reviews and benchmarks are out there for anyone so inclined. My main focus was on ensuring the GPU clock didn't downclock under load, the fans maintained the proper RPM, and my temps were "acceptable".
With a quick glance at performance using Fire Strike scores, I jumped from a 7053 score to 8763(ish). I wasn't impressed by this increase and decided to watch the GPU clock during testing, to see if it was holding at or close to 957 MHz. I'm still at a loss about that test, though; I could never get the report to read higher than 350(ish) MHz during Fire Strike, which, if true, I'm sure was severely hampering the results.
Unable to figure out how to correct this, I downloaded the latest beta drivers to correct any RPM variations. I enabled the Uber BIOS mode for the card, which simply increases the max fan speed to 47% (why just 47%?) from the Quiet mode's 20% cap. I also went into Catalyst and changed the cap to 75%, so the BIOS switch is pointless for me. (Unless it changes something more than the fan cap.) With the new changes I reran the benchmark; I still didn't get a reading over 350 MHz. The only explanation I can find is that the R9's drivers are not yet supported by 3DMark; perhaps that's the issue? *I neglected to mention that simply increasing the fan speed did yield performance gains in Fire Strike, up to an 8983 score.*
In any case, I loaded up BF4, cranked up the graphics, and watched my readout. 957 MHz held steady, with slight dips into the 850s at times, but the average held at 957 MHz, this being at the 75% fan cap.
This card does indeed run hot: idle at 58 °C, peaking at a scary 95 °C under load. AMD maintains that the cards are made to operate at those temperatures, but coming from my much cooler GTXs I can't help but worry each time 90 °C is breached. As far as comparing it to my mates' 780s, they seem to run neck and neck, with the R9 290 pulling ahead in most games/benchmarks (BF4, Skyrim, Dishonored, 3DMark) but at the cost of running hotter.
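For anyone wanting to run the same sanity check, here's a minimal Python sketch of it. It assumes your monitoring tool has already logged clock and temperature samples to a CSV file; the file name and column names ("gpu_log.csv", "clock_mhz", "temp_c") are hypothetical, so substitute whatever your tool actually writes out:

```python
# Summarize logged GPU samples: did the card hold near its 957 MHz clock
# under load, and how hot did it get?  The CSV file name and column names
# below are hypothetical placeholders for whatever your monitoring tool logs.
import csv

TARGET_CLOCK_MHZ = 957

clocks, temps = [], []
with open("gpu_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        clocks.append(float(row["clock_mhz"]))
        temps.append(float(row["temp_c"]))

held = sum(1 for c in clocks if c >= 0.95 * TARGET_CLOCK_MHZ)
print(f"samples:                 {len(clocks)}")
print(f"average clock:           {sum(clocks) / len(clocks):.0f} MHz")
print(f"time at >=95% of target: {100 * held / len(clocks):.0f}%")
print(f"peak temperature:        {max(temps):.0f} C")
```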
So there it is, my firsthand experience with the R9 290 thus far; I'd have to say I'm happy with the purchase. Money isn't an issue when I'm building my systems, but I don't believe in building systems without a purpose. For me, the reduced price of the 290 versus the 780 sealed the deal; who doesn't want more for less? I'm looking forward to aftermarket coolers becoming more available so I can bring that running temp down to something more acceptable.
Ah, one caveat: the increased noise from turning the fan cap up is ignorable. I use gaming headphones, and not once did I notice an increase. For those of you using speakers, you'll probably notice, but like all system sounds you will quickly tune them out. My only concern at this point is how long the fans will last running at that capacity.
Nice little review from a non-professional user.
I already knew this; I've got a friend who also bought one and, as you say, came to the same conclusions.
Myself, I'm waiting for a custom 290X card, hopefully this year, but I seriously doubt they'll be available.
For the price of the card, come on: the 780 and even more so the 780 Ti are no competition. The 290 with 4 GB of VRAM (good for Skyrim with modding) is so cheap compared to those cards; it's super that you get a high-end card for that price and performance.
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77 CPU:Intell Icore7 3770k GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now)) MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB PSU:Corsair AX1200i OS:Windows 10 64bit
Sorry for making another topic, but this NEW CARD is too important not to let you know about: the fastest single GPU, plus a cheap price.
I know most Nvidia fans don't believe it or want to hear it, but the AMD 290X is faster in most benchmarks that count, and in Fire Strike, Futuremark's DX11 showcase, it beats the TITAN as well.
The AMD 290X overclocked (an easy OC) is even more ridiculously fast.
And that's at half the price!!!
It's HOT, very hot, so you need a well-cooled case, and the noise is also louder than most, but the card is fast and cheap.
ADVICE: I would wait for the board partners like ASUS, XFX, MSI, and others, who will probably put much better cooling fans on the BEAST.
This card is hotter than my chick and won't last as long either unless you cover it in ICE, and at least the Nvidia GTX 780 Ti doesn't run like it's having its balls thrashed, unlike the AMD. IMO you could give this card away and it would still never sit in my rig. Also, unless you're using the Titan for work, you don't need it; the 780 is enough for gaming and more cost-effective too.
Originally posted by goldtoof It also depends where you live. There's nothing wrong with either chipset. But here in Scotland, Nvidia is very highly priced. I think in America they have similar prices, so take your pick. But over here AMD is just much better value for money; e.g., a 7950 Boost is cheaper than a GTX 660 but performs around a 670.
Hey mate, if you go independent from the UK, things like this will get even dearer and you may need to buy from down south to feed your fix. Which way are you voting: to stay in the UK or leave?
Let's see: a custom-made card for 350 euros, or an overpriced (still overpriced) Nvidia 780 for 450 euros, which is older and slower than the 290. Seems a no-brainer to me which one I'd choose.
Not to mention, if you get 2x 290 with the MUCH improved CrossFire, it will cost just a bit more than a 780 Ti but be A LOT FASTER; that also seems a no-brainer to me.
Nvidia will only get in my rig if it's fairly priced, as it should be, but it ain't; and I also have some sympathy for the underdog, which in this case is AMD, and they don't make bad cards either. And remember (it seems you Nvidia guys don't understand this): all gamers are better off if both AMD and Nvidia are healthy, for competition and for prices; we gamers benefit from that. If you all keep condemning AMD, it's also very bad for you Nvidia fanbois: prices go up, and with no competition, believe me, you get worse products.
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77 CPU:Intell Icore7 3770k GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now)) MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB PSU:Corsair AX1200i OS:Windows 10 64bit
Well, with the knowledge that the 7990 is still the fastest card on the planet (all benches show this),
I wonder what this beast will do?
The AMD Radeon R9 290 X2, code name Vesuvius (heat/noise, ahhh, hope they manage that?).
Beats the 780 Ti 3x :P
P.S. On a side note, I'm very pleased seeing how AMD handles drivers these days; they are really fast and update drivers constantly to get better. KUDOS to AMD!
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77 CPU:Intell Icore7 3770k GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now)) MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB PSU:Corsair AX1200i OS:Windows 10 64bit
Can't let a good thread die without beating it a few more times: BIOS flashing an R9 290 to a 290X. I'll preface this by saying it's probably a really bad idea until we see some good aftermarket coolers, or unless you replace the stock cooler with something better in the first place.
Being able to flash a 290 to the X is a pretty big deal. It's more than just a clock speed change: it also unlocks all the shaders and texture units, and gives you a chance to turn your $400 card into a $550 card. A chance, as there's no data yet on how many cards can successfully do this.
That being said, even if it's possible, I don't know that it's a great idea. The 290 tends to run hotter (I use this term figuratively given the previous discussion in the thread; it runs at the same temperature, but has a higher default fan speed) than the 290X does anyway, and there is a reason the 290s get binned as 290s and not 290Xs. There are some reports in the linked post of people getting it to work and the card outright dying in a few weeks (interesting, because the cards have only been out for about 2 weeks).
But on the flip side, if you reduce the silicon's lifespan from 7 years to 4 - who really cares? Video cards tend to only be useful for about 3, maybe 4 years if you're really willing to stretch it, before they are fairly obsolete (that is, of course, assuming that at a 95 °C stock temperature we see "normal" silicon life in the first place, which is yet to be seen). The BIOS switch on the card saves you from a faulty firmware upload (AMD has had this in place since the 6970s), so if the flash doesn't work, provided the card didn't outright die, you flip the switch and go back to the default firmware. And if the card is under warranty... well, I'm sure flashing the firmware voids the warranty, but unless they decide to pop the firmware out and read it before processing your RMA (which is possible, but not likely unless this becomes a huge problem with a lot of 290s), how are they going to know unless you tell them?
Originally posted by Classicstar The AMD Radeon R9 290 X2, code name Vesuvius (heat/noise, ahhh, hope they manage that?)
I really hope they don't do this: a single 290X is already pushing the power limit for a single PCIe slot, and with AMD's stock cooler it's definitely pushing the limits of what they can keep cool. Two of them on the same PCB would be a worse mistake than the GTX 590 was. I think with the improved CrossFire mechanism, making end users go across multiple PCIe slots is a much better idea than trying to shoehorn two GPUs onto a single PCB just for the sake of doing so.
Originally posted by Ridelynn Can't let a good thread die without beating it a few more times:BIOS flash R9 290 to 290X I'll preface this by saying it's probably a really bad idea until we see some good aftermarket coolers, or you replace the stock cooler with something better in the first place.Being able to flash a 290 to the X is a pretty big deal. It's more than just a clock speed change, it also unlocks all the shaders and texture units, and gives you a chance to turn your $400 into a $550 card. A chance, as there's no data yet on how many cards can successfully do this.That being said, even if it's possible, I don't know that it's a great idea. The 290 tends to run hotter (I use this term figuratively given the previous discussion in the thread; it runs at the same temperature, but has a higher default fan speed) than the 290X does anyway, and there is a reason the 290's get binned as 290's and not 290X's. There are some reports in the linked post of people getting it to work and the card outright dying in a few weeks (interesting, because the cards have only been out for like 2 weeks) .But on the flip side, if you reduce the silicons lifespan from 7 years to 4 - who really cares, video cards tend to only really be useful for about 3, maybe 4 years anyway if you are really willing to stretch it before they are fairly obsolete (that is, of course, assuming that at a 95F stock temperature we see "normal" silicon life in the first place, which is yet to be seen). The BIOS switch on the card saves you from a faulty firmware upload (AMD has had this in place since the 6970's), so if the flash doesn't work, provided that the card didn't outright die, you flip the switch and go back to default firmware. And if the card is under warranty... well I'm sure flashing the firmware violates the warranty, but unless they decide to pop the firmware out and read it before processing your RMA (which is possible, but not likely unless this becomes a huge problem with a lot of 290's), how are they going to know unless you tell them...
I've seen, in real life, a 290 at FULL LOAD at 64 degrees, and I've seen more people saying this, and this is in a normal rig, no special cooling or case; the one I saw was in a CM HAF X case.
I just got my 290X and am testing it at full load; so far it's around 89 degrees.
And AMD is spewing out beta drivers constantly, improving a lot.
The custom cards will probably improve things a lot, and with these price tags I don't see any reason to buy an Nvidia card with 3 GB of VRAM for 650 euros, do you?
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77 CPU:Intell Icore7 3770k GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now)) MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB PSU:Corsair AX1200i OS:Windows 10 64bit
Comments
All fair points.
I still think it's a fair comparison. The type of person who can afford to spend $550 vs $700 on a card isn't really going to balk at another 150 dollars. That type of person is probably going to care about overclockability, heat, noise levels, etc., though.
I'm glad AMD has finally come out with a decent card; up until now the past 3 years have been somewhat of an embarrassment for them, and that has allowed Nvidia to pull the kind of bullshit you talked about above (selling what is a low-end compute card as a high-end gamer card).
Competition is good for everyone. The problem is people are blowing smoke up AMD's backside because they want SO badly for AMD to beat Nvidia that they can't actually look at the card in an objective manner. Telling someone they're better than they are is not the recipe for them doing better the next time around.
On a complete tangent/side note: does anyone else think it's hilarious that a thread about a $550 video card has garnered so much attention in a forum full of people who act as if paying $15/mo for an MMO is the equivalent of them giving up their daughter's virginity for a loaf of bread?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
I keep coming across as a fanboy here, but really just trying to set some facts straight.
I wouldn't call the last 3 years "embarrassing" for AMD - they did, after all, just deliver the coup de grace to nVidia on the console front.
The 5870 was vastly superior to the 480 - for all the same reasons everyone is saying the 780Ti is so much better than the 290X... plus it was cheaper.
The 6970 vs 580 - same story.
The 7970 has been on the market for ... almost 22 months now, and in that time it has competed very well against the 580, the 680, and even the 780. It was the fastest card when it was released; nVidia couldn't touch it for several months and had to resort to a soft launch of the 680 with insufficient inventory, completely reshuffling their Kepler lineup at release.
AMD had PowerTune almost 18 months before nVidia could compete with Boost. And that technology is a big deal, and a big reason why Kepler and Hawaii perform as well as they do today.
AMD is still well ahead at multi-monitor configuration with Eyefinity, although nVidia is a whole lot better than they used to be. And AMD is winning at 4K if the benchmarks are to be believed.
nVidia Kepler is very good, and a whole lot better than Fermi. But Fermi was bad, for many of the same reasons that Hawaii is struggling now; except nVidia still insisted on charging a premium for it and pretended nothing was wrong, and their diehard fans believed them and kept on buying them.
If anything, if I were trying to keep score over the past 3 years, I'd say nVidia is still playing catch up. It's one thing to have "the fastest card" - but that isn't the only card in the game.
nVidia has some great products, and Kepler is a great architecture, but $150 is a lot to pay for what amounts to a better cooler at this point.
I won't go into details, but I'll just post one thing.
In 2012 AMD posted a loss of 1.18 billion dollars, while Nvidia posted a profit of 583 million. I could go back and do 2010/2011, etc., but it would really be a waste of time.
Say what you will about the direct hardware comparisons, but the driver issues that have plagued AMD and the overall cheapness of the product have had a drastic effect on what people are willing to buy.
If you look at the psychology driving people's buying of products, the decision to spend money on a product is far more important than the "how much do I spend" part.
Look at Apple: people pay exorbitant premiums for products they could get elsewhere that would perform the same function as well or in many cases better, because of things like aesthetics, build quality, noise levels, etc.
Edit: One other thing I feel compelled to mention. You can't really pull the "AMD released X card and it was the fastest at release" card, because AMD has been 4-8 months behind Nvidia for quite a while now. They used to release their new cards within weeks of Nvidia, and it was that way for many many years. It was only around 2010 that they couldn't keep up, and they have now consistently been behind.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
I guess I should have just committed to an actual response in the last one lol.
Anywhoo. I just wanted to say, at the performance bracket we're talking about you can't really make value judgements. It kind of ties to what I was talking about before: if someone makes the decision to buy a high-end processor, video card, car, whatever, they're not concerned about value so much as performance. Someone who is buying a Porsche, for example, isn't going to save 15k and buy a Corvette that's just as fast, because the rest of the car doesn't match up. There are build quality differences, engineering differences, pedigree, etc.
The same thing applies in the computing world on high-end hardware. 50 or 100 dollars here or there at the top end isn't really that big of a deal. That's why people will spend $500 on water cooling hardware to eke out an extra 10% of overclocking ability. It's certainly not cost effective, because the point was never to get a good value for your money.
So when you talk about the 780Ti running on the order of 10C cooler and producing 5-10% better results in most situations, that allows a lot more overclocking headroom, even with the stock cooler. Throw a water block on it and the difference is even more pronounced.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
AMD's graphics division was profitable in 2012. It's the processor division that was losing money, and I think that a large chunk of that loss was also due to getting out of some wafer agreement with Global Foundries to finally extricate the company from running its own fabs.
If people don't base decisions on price much, then why don't AMD and Nvidia charge a lot more for everything?
The reference R9 290 and 290X are problematic on heat and noise, but the custom cards from board partners that are coming won't be any worse than analogous Nvidia cards with the same power draw - just as it has been all up and down the lineup for years.
-----
AMD has been behind Nvidia? Since when? Not in the last six years, at least if you mean behind in a chronological sense. Yes, the GeForce 8800 GTX did beat the Radeon HD 2900 XT to market by several months, way back in 2006. But let's look at everything since then:
With the Radeon HD 3870, AMD shrunk its architecture to 55 nm in November 2007. Nvidia didn't get there until June 2008. Also in June 2008, Nvidia launched the GeForce GTX 260 and 280, still on 65 nm. Later that month, AMD launched the Radeon HD 4870 with a markedly superior architecture on 55 nm. Nvidia wouldn't shrink the new GeForce 200 series architecture to 55 nm until 2009, and even then, it was still an inferior architecture.
Meanwhile, AMD shrunk its architecture to 40 nm in April 2009. Nvidia got there in September 2009. But by September 2009, AMD launched its new Evergreen architecture that supported DirectX 11 and so forth. Nvidia wouldn't get there with Fermi until April 2010, and even then, it was a far inferior architecture to AMD's at everything except GPU compute. By then, AMD had rolled out its entire new lineup, complete with smaller dies and salvage parts; Nvidia would roll out additional cards over the course of most of 2010 trying to catch up.
Then AMD refreshed its entire lineup with new chips over the course of late 2010 and early 2011, starting with the Radeon HD 6870 in October 2010. Nvidia had nothing new apart from a respin of its previous Fermi parts; Nvidia still had a far inferior architecture for gaming purposes, but at least with the respin, the dies finally worked rather than having to disable large chunks of every single die meant for gaming even in the top bin.
Then AMD launched its Southern Islands architecture on 28 nm in January 2012. AMD launched additional cards on 28 nm in February 2012, and finished off its lineup in March 2012. Nvidia wouldn't have Kepler cards available at retail until May 2012. Nvidia wouldn't add anything outside of the high end to its 28 nm lineup until September 2012.
So let's come forward to Hawaii, which was originally meant to be a 20 nm GPU chip. But the 20 nm process node is rather delayed, so AMD at some point decided to move it back to 28 nm so that they would have a higher end card to compete with Titan while waiting for 20 nm. Nvidia, by the way, hasn't done this with Maxwell, and it's highly probable that AMD will launch 20 nm cards before Nvidia does.
So AMD is reliably months behind Nvidia, you say? For most of the last six years, it's been the other way around.
Die shrinks are mostly great. Usually a die shrink will allow a specific architecture to operate at lower voltages, hence producing less heat and keeping the temperatures down. Die shrinks are also about economies of scale: they save the manufacturer resources... resources which can then be allocated elsewhere. Die shrinks don't always correlate with a boost in performance over previous generations... it just means a smaller chip that runs cooler. To an overclocker, a die shrink means he can potentially extract more performance out of an architecture because by default the newer chip runs cooler. Cooler here being the keyword.
A die shrink, however, doesn't automatically translate to increased performance, which is how you're making it sound in your post.
Since the 8800, nVidia has been top dawg. The only cards which changed things up recently were the 7970 GHz Edition and this 290X. Competition is a healthy thing, and I wish well for good old ATi. But I feel like ATi needs to work heavily on quality assurance to improve their GPU lineup. Anybody can make a super overclocked sample (7970 GHz), but when you can't even get this card stable and you start releasing them to the public... you're out here simply for that dough and that's that.
If your argument is that Nvidia has usually had the highest performing single-GPU card, then that's merely a philosophical difference between AMD and Nvidia. Nvidia is willing to build much larger dies than AMD, and suffer the delay and yield problems as a result. But that's only relevant to people who care nothing about the price tag, but will pay whatever it costs to get the fastest card available. To the other 99%+ of the market, efficiency matters, too (e.g., most performance you can buy at a given price tag), and that's where AMD has mostly led for most of the last six years.
But AMD did have what was far and away the fastest GPU for two extended stretches, with the Radeon HD 5870 and Radeon HD 7970. Not coincidentally, those were the first cards out on new process nodes that were a full node die shrink from the previous generation. Meaning that there's a good chance that it will repeat next year with the transition to 20 nm. And then 16 nm the year after if Nvidia doesn't get a huge die out on 20 nm quickly.
Look, you misconstrued my point. I said at that price level, at the high end, of anything - I don't care what field - price is not as much of a factor as other considerations. Price is only important to people who can't afford the object at hand. Someone who has an extra $50 every paycheck isn't going to spend 500 dollars on a video card. Someone who makes $4000 a month after taxes, on the other hand, generally isn't going to balk at 500 vs 650; they're going to buy the product they like more. Value isn't as much of a factor to someone like that.
The Corvette vs Porsche is still a great example. The Porsche is 15-20k more, but isn't any faster and doesn't handle any better. Performance-wise the Corvette is the 290X. The Porsche is more the 780 Ti, in so much as it delivers the same performance but with more refinement. This is something that is important to a lot of people in that price bracket.
As far as being behind Nvidia: once again you're pulling an AMD fanboi card and you're twisting the facts to support your claim. Yes, if you look at the manufacturing process you can make that claim. That's like claiming that because Honda moved to dual overhead cams on their cars before Ford, any Honda is a better performing car than any Ford. Obviously not the case.
If you look at the cards that competed with each other and who held the PERFORMANCE crown, it's been Nvidia ahead of AMD for a long-ass time. If Nvidia makes a 65nm card that outperforms ATI's 55nm card, the nm is relatively meaningless at that point.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
Once again you're missing the point. This whole discussion is about the 290X, the highest-end card AMD has, and AT THE HIGH END performance is the most important aspect. Performance per dollar spent is only a valid argument for mid-range and entry-level cards. People who are looking for the latest and greatest will *generally* pay whatever it takes.
The Titan was a perfect example. It was the first time in history you've had a card at that price point, and it still sold very well - once again, proving my point. The people who have the money and wherewithal to buy at the bleeding edge aren't nearly as concerned about VALUE as people like us, who probably won't spend more than $300 on average for a video card.
Also, who gives 2 shits about how big the overall die is if it operates at the same TDP and then does it 10C cooler?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
He said it better than I could, thus my repeating his post. But he did a very good job of detailing the point I was trying to make.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
Except he's not quite got it right.
Die shrinks are about providing more usable area and increasing efficiency, and they don't correlate to increased performance. He's right about those two things.
Die shrinks only indirectly have anything to do with running cooler. Temperature is a function of energy. The temperature of a system will rise or fall until the energy going in equals the energy being removed.
A die shrink, all things being equal, should result in less power being required, as smaller pathways tend to require less voltage, and power goes up with the square of voltage. There are a lot of other variables in there though: different transistor technologies, efficiency of the design, etc.
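To put rough numbers on the voltage-squared point, here's a minimal sketch; the capacitance, voltage, and clock values are invented purely for illustration and don't describe any real chip:

```python
# Dynamic switching power scales roughly as P ~ C * V^2 * f
# (effective capacitance * voltage squared * switching frequency).
# All figures below are made up for illustration only.

def dynamic_power_w(capacitance_f, voltage_v, freq_hz):
    """Approximate dynamic switching power in watts."""
    return capacitance_f * voltage_v ** 2 * freq_hz

# Same hypothetical chip at the same clock, before and after the kind of
# voltage drop a die shrink can enable.
before = dynamic_power_w(capacitance_f=2.0e-7, voltage_v=1.20, freq_hz=1.0e9)
after = dynamic_power_w(capacitance_f=2.0e-7, voltage_v=1.05, freq_hz=1.0e9)

print(f"At 1.20 V: {before:.0f} W")            # ~288 W
print(f"At 1.05 V: {after:.0f} W")             # ~220 W
print(f"Savings:   {1 - after / before:.0%}")  # ~23% from a 12.5% voltage drop
```

A 12.5% drop in voltage knocks roughly a quarter off the power at the same clock, which is the kind of headroom a shrink hands you before any of those other variables come into play.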
And let's not forget that a die shrink doesn't usually happen without major subsequent architectural changes (unless you're doing Intel's Tick-Tock method, which neither GPU manufacturer does). The power per compute unit may go down by 10% with a die shrink, but now you have enough surface area to pack 20% more compute units in the same size die... so did your overall power go down? Nope. It went up, because you packed more compute units in there to make the chip that much faster.
Does that mean your temperature went up? Nope. It means your chip requires more power. The temperature is a function of the power input and the cooler... if you use the same cooler (like AMD has for the past 3 generations), then ok - but the cooler isn't the same as the chip - it's a part that goes on top of the chip, and can be swapped out easily (even by the end user). So is it fair to consider a chip's temperature based on the fan?
Die shrinks have nothing to do with temperature. It's all about surface area and power. Power does correlate to temperature, but it isn't the only variable, and you won't get additional speed (innovation/progress) without using additional power.
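To make the "temperature is power plus cooling" point concrete, here's a minimal sketch of the steady-state arithmetic; the thermal resistance figures are invented for illustration, not measurements of any real cooler:

```python
# Steady-state die temperature is roughly ambient temperature plus power
# times the cooler's thermal resistance (degrees C per watt).
# Same chip, same power draw, different cooler => different temperature.

def die_temp_c(ambient_c, power_w, thermal_resistance_c_per_w):
    return ambient_c + power_w * thermal_resistance_c_per_w

stock = die_temp_c(ambient_c=35, power_w=250, thermal_resistance_c_per_w=0.24)
better = die_temp_c(ambient_c=35, power_w=250, thermal_resistance_c_per_w=0.16)

print(f"250 W on the stock cooler:  {stock:.0f} C")   # 95 C
print(f"250 W on a better cooler:   {better:.0f} C")  # 75 C
```

Nothing about the die changed between those two lines - only the cooler did.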
Do we consider Intel CPUs to run hot and crappy if you try to use the stock heatsink that Intel provides, or do we all go out and get a decent aftermarket cooler knowing that stock heatsinks pretty much suck?
Another factor in all of this - GPUBoost/PowerTune has increased the duty cycle on cards significantly. Before, a 250W TDP card would only hit 250W at peak loads, pretty much in outlier cases, and be severely underclocked (relative to what it could handle) the rest of the time. Now, a 250W TDP card can conceivably be sitting at 250W nearly the entire time, because GPUBoost/PowerTune will dynamically adjust the clock upwards to increase performance (or adjust it down to save the card from melting itself). So cards today are being pushed harder than cards 3 generations ago (1 generation ago for nVidia) - even at the same TDP. It's a shame AMD hasn't updated their reference cooler to account for it, whereas nVidia only happens to have the cooler they have now because they didn't get GPU Boost out in time (the 480 debacle). Maybe this debacle will force AMD's hand on that front.
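For anyone curious what that dynamic adjustment looks like in principle, here's a toy sketch of a boost-style control loop. This is not AMD's or nVidia's actual algorithm; the limits, step size, and sensor readings are all made up:

```python
# Toy boost governor: raise the clock while there is power and thermal
# headroom, back off when either limit is hit. Real PowerTune/GPU Boost
# implementations are far more sophisticated; numbers here are invented.

TDP_W, TEMP_LIMIT_C = 250, 95
STEP_MHZ, MIN_MHZ, MAX_MHZ = 13, 300, 1000

def next_clock(clock_mhz, power_w, temp_c):
    if power_w > TDP_W or temp_c > TEMP_LIMIT_C:
        return max(MIN_MHZ, clock_mhz - STEP_MHZ)  # over a limit: back off
    return min(MAX_MHZ, clock_mhz + STEP_MHZ)      # headroom left: use it

# Under a sustained load the loop parks the card right at its limits,
# which is why a 250W TDP card now really can sit near 250W most of the time.
clock = 700
for power_w, temp_c in [(180, 70), (230, 85), (255, 94), (245, 96), (240, 93)]:
    clock = next_clock(clock, power_w, temp_c)
    print(clock)  # 713, 726, 713, 700, 713
```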
For a good while now, probably the last 4-5 generations, the speed of a GPU has really only been constrained by power. We are limited to 75W from the PCIe slot by the PCIe specification, plus 75W per 6-pin and 150W per 8-pin PCIe connector. There is no specification limit on the number of additional PCIe power connectors your card can require, but there is a practical limit on the physical size of the card you can manufacture and expect your end user to be able to fit inside a case. 2-slot air coolers are about the limit (there have been some aftermarket 3/4-slot ones), and those struggle once you get up around 300W, even if you're looking at nVidia's better stock cooler, just due to the amount of surface area you can pack and still fit inside of a 2-slot cooler.
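The connector arithmetic from those spec numbers works out like this (a quick sketch using only the 75W slot / 75W 6-pin / 150W 8-pin figures mentioned above):

```python
# Maximum board power implied by the PCIe power delivery figures above:
# 75 W from the slot, 75 W per 6-pin plug, 150 W per 8-pin plug.

def board_power_budget_w(six_pin=0, eight_pin=0, slot_w=75):
    return slot_w + 75 * six_pin + 150 * eight_pin

print(board_power_budget_w(six_pin=2))               # 225 W
print(board_power_budget_w(six_pin=1, eight_pin=1))  # 300 W (290X / 780 Ti class)
print(board_power_budget_w(eight_pin=2))             # 375 W (dual-GPU territory)
```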
Power has been limiting GPU speed, not die surface area or die shrinks. The last few generations, the top-tier card has capped out with a TDP of 250-350W, and the manufacturers pack as much speed (roughly via a combination of clock speed and compute units) into that amount of TDP. A die shrink doesn't make the chip run cooler; it allows each compute unit to use less power, so they can pack more units in there for the same amount of power use.
A quick example, using nVidia and their Shader Processor/Stream Processor/CUDA core count (whatever their marketing department decides to call them this week) and their listed TDP:
GTX280: 65nm, 240 SPUs, 236W
GTX480: 40nm, 480 SPUs, 250W
GTX780Ti: 28nm, 2,880 SPUs, 250W
So, did the die shrink allow the 780Ti to run cooler than the 480? No, the combination of GPU Boost doing a better job of reining in runaway power draw (the big problem with the 480, imo) and the much improved heatsink did that. The die shrink allowed them to pack another 2,400 cores on the die and still stay at the same TDP, which is what makes the card faster. You can see that the TDP hasn't really changed much in the past 6 or so years.
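Restating those three data points as shaders per watt makes the trend obvious - this is just arithmetic on the numbers listed above:

```python
# Shader count per watt of TDP for the three cards listed above. The power
# envelope barely moves; each die shrink just packs far more cores into it.

cards = [
    ("GTX 280", "65nm", 240, 236),
    ("GTX 480", "40nm", 480, 250),
    ("GTX 780 Ti", "28nm", 2880, 250),
]

for name, node, shaders, tdp_w in cards:
    print(f"{name:<11} {node}: {shaders:>4} SPs / {tdp_w} W = {shaders / tdp_w:.1f} SPs per watt")
# GTX 280     65nm:  240 SPs / 236 W = 1.0 SPs per watt
# GTX 480     40nm:  480 SPs / 250 W = 1.9 SPs per watt
# GTX 780 Ti  28nm: 2880 SPs / 250 W = 11.5 SPs per watt
```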
I still go for AMD (waiting for a custom 290X), but the future doesn't look so bright for AMD; the market is sadly completely controlled by Nvidia.
They bring a card to market that's a bit faster than the 290X just after its release, but it costs a lot more.
And then they even release a custom card (Gigabyte Windforce 3X) much sooner than AMD - how can you win this battle?
Funny thing is the majority (95%) won't even buy the 780 Ti and will go for a lesser card, still paying too much, only because Nvidia leads the benchmark charts, has a better cooler/less noise, and has its sticker slapped on every game - well, almost every game (this noise/heat problem alone will break AMD, sadly).
AMD's price/performance is very gamer friendly, but they are not rewarded for it (probably slightly their own fault too?).
They should have released the new card only when the heat/noise wasn't so bad and a custom card was around the corner.
Nvidia would never have come out with the 780 Ti and dropped prices on other cards if the 290 series hadn't been released yet.
A few weeks ago it looked great for AMD; now I'm afraid they lose this one BIG.
If the custom AMD cards improve a lot, then price is what gamers will decide on; but if noise and heat stay high and there's not much improvement in speed, I'm not convinced gamers will go en masse for AMD.
Any sign of custom cards for AMD?
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77
CPU:Intell Icore7 3770k
GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
PSU:Corsair AX1200i
OS:Windows 10 64bit
Update on the new build, including the R9 290:
Finally got the system built; there's a story there, but suffice it to say I should have stuck to my guns and used a clean install - too many driver conflicts with switching that much hardware. (Lesson learned)
As to the R9 290: performance-wise I'm satisfied. I had realistic expectations for this new card. I've two mates who recently jumped to the GTX780, so I had enough room to make loose comparisons between mine and theirs. I don't intend to go technical; the reviews and benchmarks are out there for anyone to find if they are so inclined. My main focus was on ensuring the GPU clock didn't downclock under load, the fans maintained the proper rpm, and my temps were "acceptable".
With a minor glance at performance using Firestrike scores, I jumped from a 7053 score to an 8763(ish). I wasn't impressed by this increase and decided to watch the GPU clock under testing, to see if it was maintaining at or close to 957MHz. I'm still at a loss as to that test though; I could never actually get the report to read any higher than 350(ish)MHz during Firestrike, which I'm sure, if true, was severely hampering the results.
Unable to figure out how to correct this, I downloaded the latest beta drivers to correct any rpm variations. I enabled Uber BIOS mode for the card, which simply increases the max fan speed to 47% (why just 47%?) from the Quiet mode's 20% cap. I also went into Catalyst and changed the cap to 75%, so the BIOS switch is pointless for me. (Unless it changes something more than the fan cap.) With the new changes I reran the benchmark; I still didn't get a report of over 350MHz. The only justification I can find for this is that the R9's drivers are not supported yet by 3DMark - perhaps that's the issue? *I neglected to mention that simply increasing the fan speed did see performance gains on Firestrike, up to an 8983 score.*
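For what it's worth, here's the same score arithmetic spelled out - just the numbers quoted above, nothing more:

```python
# Fire Strike scores reported above.
old_card = 7053    # previous card
r9_290 = 8763      # R9 290 at the stock fan cap
r9_290_fan = 8983  # R9 290 with the fan cap raised to 75%

print(f"Upgrade gain:          {r9_290 / old_card - 1:.1%}")    # ~24.2%
print(f"From raising fan cap:  {r9_290_fan / r9_290 - 1:.1%}")  # ~2.5%
```

A ~24% jump is real but modest for a card in this class, which fits the suspicion above that something (the ~350MHz reported clock, if accurate) was holding the runs back.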
In any case I loaded up BF4... cranked up the graphics and watched my report. 957MHz held steady, with slight dips to the 850s at times, but the average held at 957MHz, this being at the 75% fan cap.
This card does indeed run hot: idle at 58C and peaking out at a scary 95C under load. AMD maintains that the cards are made to operate at those temperatures, but coming from my much cooler GTXs I can't help but worry each time 90C is breached. As far as comparing it to the performance of my mates' 780s, they seem to run neck and neck, with the R9 290 pulling ahead on most games/benchmarks but at the cost of running hotter. (BF4, Skyrim, Dishonored, 3DMark)
So there it is, my firsthand experience with the R9 290 thus far; I'd have to say I'm happy with the purchase. Money isn't an issue when I'm building my systems, but I don't believe in building systems without a purpose. For me the reduced price of the 290 over the 780 sealed the deal - who doesn't want more for less? I'm looking forward to the aftermarket coolers becoming more available so I can reduce that running temp to something more acceptable.
Ah, one caveat: the increased noise from turning the fan cap up is ignorable. I use gaming headphones and not once did I notice an increase. For those of you using speakers, you'll probably notice, but like all system sounds you will quickly tune them out. My only concern at this point is how soon the fans will wear out running at that capacity.
Nice little review from a non-professional user.
I already knew this; I've got a friend who also bought one and, as you say, came to the same conclusions.
Myself, I'm waiting for the custom 290X cards - I hope this year, but I seriously doubt they'll be available.
For the price of the card, come on, the 780 and even more so the 780 Ti are no competition; the 290 + 4GB VRAM (good for Skyrim with modding) is so cheap compared to those cards. It's super to have a high-end card for that price and performance.
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77
CPU:Intell Icore7 3770k
GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
PSU:Corsair AX1200i
OS:Windows 10 64bit
This card is hotter than my chick and won't last as long either unless you cover it in ICE, and at least the Nvidia GTX 780Ti doesn't run like it's having its balls thrashed, unlike the AMD. IMO you could give this card away but it will never sit in my rig. Also, unless you're using the Titan for work you don't need that; the 780 is enough for gaming and more cost-effective too.
Asbo
Hey mate, and if you go independent from the UK, things like this will get even dearer and you may need to buy from the south to feed your fix. Which way are you voting - to stay in the UK or out?
Asbo
Let's see: a custom-made card for 350 euros, or an overpriced (still overpriced) Nvidia 780 for 450 euros which is older and slower than the 290. Seems like a no-brainer to me which one I'd choose.
Not to mention, if you get 2x 290 with the MUCH improved CrossFire, it will cost just a bit more than a 780 Ti but be A LOT FASTER - also seems like a no-brainer to me.
Nvidia will only get in my rig if it's fairly priced, as it should be, but it ain't, and I also have some sympathy for the underdog, which in this case is AMD - and they don't have bad cards either. And remember (it seems you Nvidia guys don't understand this): all gamers are better off if both AMD and Nvidia are healthy, because competition keeps prices in check and we gamers benefit from that.
If you all keep condemning AMD it's also very bad for you Nvidia fanbois: prices go up, and with no competition, believe me, you get worse products.
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77
CPU:Intell Icore7 3770k
GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
PSU:Corsair AX1200i
OS:Windows 10 64bit
Well, with the knowledge that the 7990 is still the fastest card on the planet (all benchmarks show this),
I wonder what this beast will do?
The AMD Radeon R9 290 X2, code name Vesuvius (heat/noise - ahhh, hope they manage that?)
Beats the 780 Ti 3x :P
P.S. On a side note, I'm very pleased seeing how AMD handles drivers these days; they are really fast and update drivers constantly to get better. KUDOS to AMD!
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77
CPU:Intell Icore7 3770k
GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
PSU:Corsair AX1200i
OS:Windows 10 64bit
Can't let a good thread die without beating it a few more times:
BIOS flash R9 290 to 290X
I'll preface this by saying it's probably a really bad idea until we see some good aftermarket coolers, or you replace the stock cooler with something better in the first place.
Being able to flash a 290 to the X is a pretty big deal. It's more than just a clock speed change; it also unlocks all the shaders and texture units, and gives you a chance to turn your $400 card into a $550 card. A chance, as there's no data yet on how many cards can successfully do this.
That being said, even if it's possible, I don't know that it's a great idea. The 290 tends to run hotter (I use this term figuratively given the previous discussion in the thread; it runs at the same temperature, but has a higher default fan speed) than the 290X does anyway, and there is a reason the 290s get binned as 290s and not 290Xs. There are some reports in the linked post of people getting it to work and the card outright dying in a few weeks (interesting, because the cards have only been out for like 2 weeks).
But on the flip side, if you reduce the silicon's lifespan from 7 years to 4 - who really cares; video cards tend to only really be useful for about 3, maybe 4 years anyway if you are really willing to stretch it before they are fairly obsolete (that is, of course, assuming that at a 95C stock temperature we see "normal" silicon life in the first place, which is yet to be seen). The BIOS switch on the card saves you from a faulty firmware upload (AMD has had this in place since the 6970s), so if the flash doesn't work, provided that the card didn't outright die, you flip the switch and go back to default firmware. And if the card is under warranty... well, I'm sure flashing the firmware violates the warranty, but unless they decide to pop the firmware out and read it before processing your RMA (which is possible, but not likely unless this becomes a huge problem with a lot of 290s), how are they going to know unless you tell them...
I really hope they don't do this: a single 290X is already pushing the power limit for a single PCIe card, and definitely with AMD's stock cooler pushing the limits of what they can keep cool. Two of them on the same PCB would be a worse mistake than the GTX590 was. I think with the improved CFX mechanism, making end-users go across multiple PCIe slots is a much better idea than trying to shoehorn them onto a single PCB just for the sake of doing so.
I've seen in real life the 290 go FULL LOAD at 64 degrees, and I've seen more people saying this, and this is in a normal rig, no special cooling or case; the one I saw was in a CM HAF X case.
I just got my 290X and am testing it at full load; so far it's around 89 degrees.
And AMD is spewing out beta drivers constantly to improve a lot.
The custom cards will probably improve a lot, and with those price tags I don't see any reason to buy an Nvidia with 3GB VRAM for 650 euros - do you?
Hope to build full AMD system RYZEN/VEGA/AM4!!!
MB:Asus V De Luxe z77
CPU:Intell Icore7 3770k
GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
PSU:Corsair AX1200i
OS:Windows 10 64bit