
HD5870 Released today

Cleffy · Member, Rare · Posts: 6,414

Looks like the e-tailers are starting to sell the HD5870, with an introductory price of $379.99.  I think I will wait a few weeks for the prices to fall and for the OEMs to tweak the card.  I am not a fan of the current reference cooler.  They seem to be fully aware that they are going to be the only ones with a competitive card this generation.

BTW these things are HUGE.  I mean dimensionally, they might not fit.


Comments

  • AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
    Originally posted by Cleffy


    Looks like the e-tailers are starting to sell the HD5870, with an introductory price of $379.99.  I think I will wait a few weeks for the prices to fall and for the OEMs to tweak the card.  They seem to be fully aware that they are going to be the only ones with a competitive card this generation.



     

    Yeah, I have seen a few places listing the 1GB version for $399. I read that ATI sells them on for $333 before shipping, tax, or the partner resale margin is added on.

    The 5850 is $299 too, all with a Dirt 2 coupon.



  • Cleffy · Member, Rare · Posts: 6,414

    Oh, I forgot to factor in the 2 games it comes with.  Personally, I am planning to get Dirt 2 for the PC, so the price is $50 lower for me.

  • AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
    Originally posted by Cleffy


    Oh, I forgot to factor in the 2 games it comes with.  Personally, I am planning to get Dirt 2 for the PC, so the price is $50 lower for me.



     

    Yeah, Dirt 2 looks good; ATI pumped a few million into them as part of the deal, too.

    Nice-looking cards. Even better when the price comes down.



  • Quizzical · Member, Legendary · Posts: 25,507

    The basic consensus seems to be that it clobbers the GTX 285, and is competitive with the Radeon 4870 X2 and the GTX 295.  Add in DirectX 11, OpenCL, angle-independent anisotropic filtering, support for three monitors at once via Eyefinity, superior audio support, and only 27 W at idle, and it's quite a nice card.

    The Radeon HD 5870 is initially supposed to be $380, and the Radeon HD 5850 is supposed to be $260 with perhaps 75% of the performance of the 5870.  While it's more a hard launch than a paper launch, AMD says it may take a couple of weeks to get enough in stock that they're not selling out as soon as they show up.

    http://www.anandtech.com/video/showdoc.aspx?i=3643&p=27

    "Let’s be clear here: the 5870 is the single fastest single-GPU card we have tested, by a wide margin. Looking at its performance in today’s games, as a $379 card it makes the GTX 285 at its current prices ($300+) completely irrelevant."

    http://www.tomshardware.com/reviews/radeon-hd-5870,2422-22.html

    "ATI’s new flagship is still a solid win right now, even before factoring in the features and benefits this hardware will enable in the months to come."

    "Without question, ATI once again wears the single-GPU performance crown"

    http://hothardware.com/Articles/AMD-ATI-Radeon-HD-5870-Unquestionably-Number-One/?page=14

    "The Radeon HD 5870 we've evaluated here offered excellent performance that decimated any other single GPU with top notch image quality. It also has the most extensive feature set of any other GPU, with support for ATI Eyefinity, an enhanced UVD 2 engine, and support for DirectX 11. And it is arriving at a fair price point"

    http://www.tweaktown.com/reviews/2933/sapphire_radeon_hd_5870_1gb_graphics_card/index15.html

    "We see across the board in all games the HD 5870 put's out some fantastic numbers, be it in the synthetic 3DMark Vantage or the ultra intensive and ultra popular Far Cry 2. I expected the HD 5870 to perform well, mainly because it needed to, but I didn't expect it to perform this well."

    "We don't have to wait for the driver team to work a little harder on it for it to be a card we want to buy. We want to buy it now because the drivers are great."

    http://www.guru3d.com/article/radeon-hd-5870-review-test/27

    "there's nothing negative to report."

    "consensus is that the Radeon HD 5870 is MUCH faster than the current leading flagship, the GeForce GTX 285."

    "And oh my gawd, I just realized what a beast the 5870 X2 will be."

    http://www.bit-tech.net/hardware/graphics/2009/09/23/ait-radeon-hd-5870-1gb-review/9

    "You can expect similar performance from the Radeon HD 5870 as you might see from a Radeon HD 4870 X2 with the advantage of avoiding devastating the frame rates in games that don't like multi-GPU setups such as Dawn of War II.  The difference between the two cards is DirectX 11 support, a fair whack less electricity and about £55 in raw cash."

    "it's still the fastest single GPU product and if you want the fastest, most future proof product available we'd recommend it"

    http://www.pcgameshardware.com/aid,695689/Radeon-HD-5870-Review-of-the-first-DirectX-11-graphics-card/Reviews/?page=18

    "Ati is taking the lead again. The Radeon HD 5870 delivers first class performance without big drawbacks - almost everything that has been criticized on the predecessors has been improved."

    "The Radeon HD 5870 is the currently fastest single GPU graphics card on the market and definitely the better solution than any multi GPU solution."

    "If you are looking for a new graphics card, you can take a HD 5870 - you can't get a better card at the moment."

    http://www.legitreviews.com/article/1080/17/

    "From a performance perspective the ATI Radeon HD 5870 blew away the Radeon HD 4890 and the GeForce GTX 285 graphics cards in the benchmarks that we ran on Windows 7. The Radeon HD 5870 is by far the fastest single GPU graphics card that we have ever benchmarked and it is the real deal. It doesn't have an annoyingly loud fan, suck down obsessive amounts of power or heat up your room while you aren't gaming. The user experience with this card is very well rounded and it is tough to find anything negative to say about it."

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/23415-sapphire-radeon-hd-5870-1gb-gddr5-review-21.html

    "Between the single cards of the last generation to this new bruiser, we are seeing a jump in performance the likes of which hasn't been seen since the dawn of the G80."

    "the HD 5870’s performance per watt is just deranged, off the wall and plain stunning to behold. It needs less power than a GTX 285 but it blows the Nvidia card away when it comes to pure gaming performance."

    http://www.xbitlabs.com/articles/video/display/radeon-hd5870_17.html

    "Nvidia finds itself lagging behind once again. ATI Radeon HD 5870 is undoubtedly the best single-chip gaming graphics card today. It is as fast as dual-chip solutions of the previous generation but is free from their drawbacks."

    http://www.hardocp.com/article/2009/09/22/amds_ati_radeon_hd_5870_video_card_review/14

    "Through all of this, Eyefinity, DX11, DirectCompute 11, OpenCL, the Radeon HD 5870 remains true to the focus of just being a desirable gaming video card. One of the most impressive "features" is the fact that it doubles performance, yet remains within the same power envelope as the previous generation. This is impressive."

    -----

    Reviewers couldn't find much negative to say about the card other than that it's rather big and we don't really know how well it will perform with future software.  Apparently even Nvidia can't find too much negative to say about the card either, apart from the usual standbys that it doesn't support PhysX (but does support Bullet and soon also Havok) and CUDA (but does support DirectCompute and OpenCL).  Nvidia even ends up having to downplay the importance of gaming performance.

    http://www.xbitlabs.com/news/video/display/20090916140327_Nvidia_DirectX_11_Will_Not_Catalyze_Sales_of_Graphics_Cards.html

    "Nvidia believes that special-purpose software that relies on GPGPU technologies will drive people to upgrade their graphics processing units (GPUs), not advanced visual effects in future video games or increased raw performance of DirectX 11-compliant graphics processors."

    "Nvidia believes that in future computing performance will matter much more than graphics performance"

    -----

    That said, when the GT300 series finally arrives, its top card probably will be faster than the Radeon HD 5870.  It would be rather pathetic if Nvidia couldn't make that happen even with a dramatically larger die size on the same node, and hence dramatically more transistors available.  But that's probably a long way off still.

  • Cleffy · Member, Rare · Posts: 6,414

    It seems nVidia's GTX300 competition is no longer the HD5870; it's the HD5890 or the HD6870.  I still think nVidia's next flagship will outperform the HD5870.

    When I said in the leaked specs thread that I was disappointed in the HD5870's core clock, it seems that translated into me being disappointed in the HD5870 benchmarks.  Although it beats every single-GPU card on the market, I don't think it beats them by enough.  It won't correct the problems AMD faces, like nVidia's large developer network and the shortage of engines that support ATI's architecture.  Right now AMD has only 2 developers working with hardware that has been on their cards for 2 years, and 1 of them they paid millions to do so.  nVidia, on the other hand, has more than just developers; they also have publishers supporting its architecture.  This has already played out with the HD3800s, which are technically stronger than anything nVidia has out right now.  Yet when push came to shove, even with DX10.1 being backwards compatible, developers just didn't support the card or use the new DX iteration.  It may play out the same way, considering the HD5800s are based on the HD3800 design.

  • Quizzical · Member, Legendary · Posts: 25,507

    It doesn't beat previous cards by enough for what?  Sure, if you already have a GTX 295 sitting in your computer, it doesn't make much sense to "upgrade" to this.  For someone with a much older card looking to upgrade to something really nice soon, this card or the Radeon HD 5850 makes a lot more sense than anything Nvidia has to offer at current prices.

    If you mean doesn't beat previous cards by enough to fend off the top GT300, that's probably true.  By the same token, it probably isn't fast enough to beat the top Radeon 6000 series card.  But both of those are far in the future.  For someone who is waiting for Sandy Bridge/Bulldozer to upgrade, what is on the market right now doesn't particularly matter.  But you could say the same of any other card that has ever been released.

     

    If it's a question of which will be the top card, not merely the top single GPU card, ATI has a better shot here.  The Radeon HD 5870 X2 is coming soon.  It's rumored to release in about a month, though AMD's only official guidance is Q4 2009.  While Nvidia can put two GPUs on a card, too, they can't necessarily double performance the way ATI does.  Recall that they flatly couldn't put two GTX 280/285s on a card at all.  They did manage to put two GTX 275s on one card, but only by underclocking them so that the GTX 295 got killed by two GTX 275s in SLI.  ATI might well have been able to beat the GTX 295 with a theoretical Radeon HD 4890 X2, but vendors didn't bother because it would have been pointless with so little time before the Radeon 5000 series.

    DirectX 10.1 never caught on for the same reason that PhysX and CUDA didn't:  it only ran on cards made by one company.  DirectX 11 shouldn't face that limitation unless Nvidia is giving up on DirectX hardware acceleration, which I doubt.

    Sure, the Radeon HD 5870 is kind of based on the Radeon HD 3870 design, but there's a big difference between 320 shader cores and 1600, don't you think?  If Nvidia could go from 240 shaders on the GTX 285 to 1200 on the top GT300, that would make for a tremendous card if they don't have to redesign to make the new shaders much weaker than before.  But I'd bet that they can't do that.  ATI's strategy is based on more cores clocked slower as compared to Nvidia's.  Unlike processors, video cards scale very well by adding more cores.

  • Cleffy · Member, Rare · Posts: 6,414

    What I mean is quite simple.  As long as ATI is merely putting out a competitive part, it can never truly attain the performance crown.  The benchmarks against the GTX 285 put it at 10-50% better depending on the application, some of which heavily favor ATI.  If they wanted a great part, IMO it would be at least 50% better than the GTX 285 in all applications.  When you are talking about new-generation performance, that is the margin I would like to see.

    The reason is nVidia's market share.  They can put out a worse part, but their market share is too big for developers to ignore.  What ends up happening is developers cater to the lowest common denominator because of their audience.  As long as nVidia still has the greater market share, developers will continue to cave to nVidia's underperforming, less advanced parts.  It also means games will underperform on ATI parts because of architectural differences.  This is why ATI has to put out a product that completely decimates the competition, so that market share swings to their side.

    I think DX10.1 is a great example of this.  There is no reason a developer should not be developing on DX10.1 if they are making a DX10 game.  It's backwards compatible.  You have nothing to lose by building on DX10.1 since it's supported by all recent cards.  The same is going to apply to DX11.  Only time will tell whether nVidia once again holds back development by not supporting newer, more advanced technology.

    Like I said, the HD3870 is technically better than any nVidia GPU today.  Not only does it have more stream processors, those processors are also capable of calculating 4 pieces of related data at once.  It is capable of over 3 times the polygon throughput of the GTX 285.

  • Quizzical · Member, Legendary · Posts: 25,507

    So basically you're arguing that games are designed for Nvidia's architecture and ATI's top GPU beats Nvidia's anyway, and for ATI to do that somehow just isn't good enough?  With ATI increasing market share of late, I can't see developers shifting to favor Nvidia's architecture more heavily than they already do.  The Radeon HD 5000 series will almost surely continue moving marketshare in ATI's direction, as it's hard to see it being a disaster on par with the Radeon HD 2000 or even 3000 series.  Indeed, DirectX 11 games are already implicitly being designed for ATI's architecture, simply because Nvidia doesn't have a DirectX 11 architecture yet.

    ATI's strategy is predicated on getting better performance per square millimeter of die size than Nvidia, as that means they can sell better performance per dollar.  They already did that with the Radeon HD 4000 series, but had to play catch-up as it was released later than Nvidia's GT200 series.  The Radeon 4000 series also had the glaring flaw of high idle power, and as the first nice cards ATI had released in years, it took a while for people to adjust to ATI actually having a credible product.

    Performance per dollar is really what people are after in video cards.  Someone who wants to spend $100 on a card wants to get the best $100 card he can.  If some other company has the better card or set of cards at some distant price point, then so what?  Even if Nvidia could double their performance per square millimeter with GT300 (which would be quite an impressive feat), they're still trailing ATI here.  Nvidia would likely have the advantage for people who want the best single GPU regardless of price, and perhaps also the best multi-GPU system regardless of price (meaning Quad SLI vs CrossFireX), but those two are only a tiny fraction of the market.  At any other level of performance, whatever Nvidia comes up with would cost them more to produce than ATI's cards.
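
    To make the performance-per-dollar point concrete, here is a quick sketch.  The prices and relative-performance numbers below are illustrative assumptions for the argument, not benchmark results:

```python
# Hypothetical (name, price in USD, performance relative to a GTX 285 = 1.00).
cards = [
    ("GTX 285", 330, 1.00),
    ("HD 5850", 260, 1.10),
    ("HD 5870", 380, 1.35),
]

# Performance per dollar: higher means better value at that price point.
for name, price, perf in cards:
    print(f"{name}: {perf / price:.5f} relative performance per dollar")
```

    With numbers in this ballpark, whichever vendor wins at a given price point wins that sale, regardless of who holds the halo card at the very top.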

    And that's an optimistic scenario for Nvidia, and after GT300 is released.  Until then, Nvidia has no product to compete at the high end, and once Juniper is released (rumored to be next month, and promised by the end of the year), Nvidia will have no product to compete at the mid range, either.  Whatever Nvidia prices their GT200 line at, ATI could offer the same performance for less and still make a tidy profit, even if Nvidia cuts prices far enough to lose money on every card they sell.

  • AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188
    Originally posted by Quizzical


    So basically you're arguing that games are designed for Nvidia's architecture and ATI's top GPU beats Nvidia's anyway, and for ATI to do that somehow just isn't good enough?  With ATI increasing market share of late, I can't see developers shifting to favor Nvidia's architecture more heavily than they already do.  The Radeon HD 5000 series will almost surely continue moving marketshare in ATI's direction, as it's hard to see it being a disaster on par with the Radeon HD 2000 or even 3000 series.  Indeed, DirectX 11 games are already implicitly being designed for ATI's architecture, simply because Nvidia doesn't have a DirectX 11 architecture yet.
    ATI's strategy is predicated on getting better performance per square millimeter of die size than Nvidia, as that means they can sell better performance per dollar.  They already did that with the Radeon HD 4000 series, but had to play catchup as it was released later than Nvidia's GT200 series.  The Radeon 4000 series also had the glaring flaw of high idle power, and as the first nice cards ATI had released in years, it takes a while for people to adjust to ATI actually having a credible product.
    Performance per dollar is really what people are after in video cards.  Someone who wants to spend $100 on a card wants to get the best $100 card he can.  If some other company has the better card or set of cards at some distant price point, then so what?  Even if Nvidia could double their performance per square millimeter with GT300 (which would be quite an impressive feat), they're still trailing ATI here.  Nvidia would likely have the advantage for people who want the best single GPU regardless of price, and perhaps also the best mutli-GPU system regardless of price (meaning Quad SLI vs CrossFireX), but those two are only a tiny fraction of the market.  At any other level of performance, whatever Nvidia comes up with would cost them more to produce than ATI's cards.
    And that's an optimistic scenario for Nvidia, and after GT300 is released.  Until then, Nvidia has no product to compete at the high end, and once Juniper is released (rumored to be next month, and promised by the end of the year), Nvidia will have no product to compete at the mid range, either.  Whatever Nvidia prices their GT200 line at, ATI could offer the same performance for less and still make a tidy profit, even if Nvidia cuts prices far enough to lose money on every card they sell.



     

    I think it would be short-sighted to dismiss Nvidia's new cards as they are just around the corner. I say this in relation to market share. The 5000 series won't push that many cards based on DX11 games; there are none at the moment. Not so long ago, ATI paid some 6 million dollars to Valve to make Half-Life 2 run well on ATI hardware and be optimized for DirectX 9 at the time. After a long delay, the game did ship, but the halo effect failed to materialize. This time around the Radeon HD 5870 comes with Dirt 2, the first DirectX 11 game, but the game is delayed; it was supposed to launch on September 10th and is now pushed to November, at least. The Radeon HD 5850 will also come with the coupon, and once the game is out you will be able to get your DirectX 11 game, again months after the release of the Radeon 5870 / 5850.

    Stalker 2 is also one of the soon-to-come DirectX 11 titles, but it is not affiliated with ATI. Which other DX11 games are being made with ATI cards in mind?

    Supply is very short too; just like when the 4870 was released, it took a good couple of months for anyone to have any decent stock.

    It is good when companies develop to get performance out of die shrinks, but the 5870 X2 is having issues with the highest-ever TDP for an X2 card: 376W so far (still being worked on). In comparison, the 4870 X2 had a TDP of 286W. Are we looking at the first 300W+ card here? We could be talking 3 or 4 power connectors, meaning better and newer PSUs.

    Sure, the performance per dollar is there and that makes me happy; however, there are other things to consider too.

    Not sure if you're aware, but the new Nvidia card is a brand new chip that was designed almost entirely from the ground up. Industry sources believe this is the biggest change since the G80 was launched, and that you can expect that level of innovation and change. The 5800 series is a performer, but more evolution than revolution. If Nvidia wins the single-chip battle, they can win the dual one as well. I am almost positive that this will happen.

     

    Let's not discount the business sector either; Nvidia has that market trounced, much like the mobile market.

    In terms of coding for each architecture, how is physics on ATI? Well, let's look at ATI's Bullet Physics GPU acceleration via OpenCL. Bullet Physics was developed on GeForce cards, yet this "open standard" physics tech is the one ATI is hoping will accelerate their GPUs. That means Bullet Physics is being developed on Nvidia GeForce cards even though ATI is supposed to get driver and hardware acceleration for it. It comes into question whether ATI even has an OpenCL driver it can push for its own physics. Even if they do, Nvidia is far ahead with PhysX, and Intel won't let them accelerate Havok, as that is a good task for Larrabee, whenever that comes.

     

    To spice things up, Bullet (the AMD/ATI-supported physics library) itself lists PhysX, at 26.8 percent, as the most popular physics library, at least according to Game Developer magazine. Intel's Havok is second with 22.7 percent, and Bullet is third with 10.4 percent. The AMD-supported API is clearly behind the competition, but it's still ahead of the Open Dynamics Engine (ODE), which has 4.1 percent of the market. www.bulletphysics.com/wordpress/

     

    Source: Fudzilla.

    Please understand I own a 4870 X2 and also a pair of 285s. I am all for performance; I understand that may be a small market demographic, but this is where I come from, and I will buy the best there is to feed my hobby. In the market below, the 4890 and GTX 275 are price-matched, with performance argued either way depending on the situation.

    I feel that the 5870 really should have been able to beat a 4870 X2 for them to make better strides. Arkham Asylum has some of the best physics seen in a game, and ATI owners will miss out like the consoles. The Unreal Engine 3, Gamebryo, Vision, Instinct, Trinigy, Diesel, Unity 3D, Hero, BigWorld, and Dreamworld engines all support Nvidia physics. ATI has a lot of catching up to do in this area, and if they can do the same and beat Nvidia out the door next year, it makes for an interesting fight.



  • Quizzical · Member, Legendary · Posts: 25,507

    "5000 series for DX11 wont push that many cards based on games. There are none at the moment."

    Actually, there is one:  Battleforge.  Come to think of it, that's half as many as there are games that meaningfully use PhysX, after years of Nvidia pushing it relentlessly.

    Sure, for someone who plans on replacing whatever he buys today in six months anyway, DirectX 11 probably isn't important.  For someone who plans on keeping the card for a few years, DirectX 11 will almost certainly end up mattering greatly.

    "Supply is very short too, just like when the 4870 was released it took a good couple months for anyone to have any decent stock."

    At the moment, it's short, though I've seen the 5870 in stock at multiple e-tailers.  OEMs that are selling them in new computers right now don't seem to have gotten the memo that they're in short supply.

    "It is good where companies develop to get performance out of die shrinks but with the 5870x2 having issues with the highest ever current TDP for X2 of 376W so far"

    Yes, yes, someone at Fudzilla took out a calculator and multiplied the Radeon 5870's TDP by 2 to get a highball estimate for the TDP of a Radeon HD 5870 X2, ignoring that past X2 cards have used far less than double the power of a single card.  And he's wrong about that.  Just like he was wrong about the price tag:

    http://www.fudzilla.com/content/view/15434/34/

    And wrong about the release month:

    http://www.fudzilla.com/content/view/13550/34/

    I can't find it at the moment, but he also insisted at some point that it was delayed until November.  He was also wrong about the code name:

    http://www.fudzilla.com/content/view/8934/34/

    And, of course, the common thread in so many of those articles is that he didn't even know what it was called.  There is no RV870.  There never has been an RV870.  AMD said it was Evergreen in the Spring, and that the top single-GPU chip was Cypress was leaked not too long afterwards.

    "Not sure if your aware but the new Nvidia card is a brand new chip that was designed almost entirely from the ground up."

    Yes, yes, whole new architecture, trying to do a zillion things that Nvidia has never done before.  That's a lot harder to do than a simple die shrink, of course, which is probably why the Radeon HD 5870 has been released and Nvidia can't do anything about it except wave their arms, jump up and down, and scream something or other about how CUDA is more important than gaming performance.

    ATI showed off working DirectX 11 hardware at a conference in the spring.  ATI released a decent 40 nm desktop card at retail in April.  ATI has had GDDR5 cards at retail for over a year.  Nvidia has yet to do any of those, though not for lack of trying.  See the disaster of a card known as the G210, which they haven't even bothered to release at retail.

    "Let's not discount the business sector either, Nvidia has the market trounced here."

    The business market is mostly integrated graphics, and is dominated by Intel.  And that's getting even worse for Nvidia now that Intel won't let Nvidia make integrated graphics for Nehalem.

    "Much like the mobile market."

    The most recent figures have ATI selling more discrete mobile cards than Nvidia, though it's close.  That's hardly a case of Nvidia trouncing the market.

    "Bullet physics was developed on Geforce cards."

    Err, so?  One review took a demo that Nvidia made to show off what their cards could do and found that the Radeon HD 5870 annihilated anything Nvidia had to offer even on Nvidia's own demo.

    "ATi is hoping this "Open Standard" physics tech as the one they want to accelerate their GPU's."

    ATI is taking the approach of supporting Bullet, Havok, and Pixelux--or basically everything except PhysX.  Even so, I'm skeptical of any GPU physics acceleration being anything other than eye candy that most players turn off in the near future, at least in online games.  It's hard enough to send ten bullets over the Internet, let alone a thousand pieces of shrapnel.
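
    The scale of the netcode problem is easy to see with a back-of-the-envelope calculation.  The object size and tick rate below are assumptions for illustration, not figures from any actual game:

```python
BYTES_PER_OBJECT = 6 * 4   # x,y,z position + velocity as 32-bit floats
TICKS_PER_SECOND = 20      # assumed server update rate

def downstream_kbps(num_objects):
    """Per-client downstream bandwidth in kilobits per second."""
    return num_objects * BYTES_PER_OBJECT * TICKS_PER_SECOND * 8 / 1000

print(downstream_kbps(10))    # ten bullets -> 38.4 kbps, easily affordable
print(downstream_kbps(1000))  # a thousand shrapnel pieces -> 3840 kbps per client
```

    At almost 4 Mbps per client, server-synchronized debris simply doesn't fit in a typical broadband budget, which is why GPU physics tends to stay client-side eye candy in online games.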

    "Even if they do, Nvidia is far ahead with PhysX and Intel won’t let them accelerate Havok"

    Nvidia is ahead by all of two meaningful games.   And funny how ATI is able to do GPU acceleration for Havok.

    -----

    Now certainly, once GT300 is released (the real release, not a reviewers' edition paper launch, and not a new name on old cards), the market changes dramatically.  But that's probably months away, and until then, there's basically no reason to buy an Nvidia card at the high end.  Once Juniper is out, there won't be much reason to buy an Nvidia card at the mid-range, either.  That still leaves the low end, where the low end cards in the GT200 line still aren't available at retail.

  • AmazingAvery · Age of Conan Advocate · Member, Uncommon · Posts: 7,188

    So essentially there is no reason to go and buy a 5870 right now, with a larger, more dominant competitor's release right around the corner, considering the buyer should get something future-proof and something that can do more than 10 bullets online.

    I can do 10 bullets online and have physics effects maxed out right now (at a higher frame rate in the games I play than anything ATI has out there right now). If I am going to spend all that money on a new card (or two), I think I'd wait and see what the other manufacturer has in store, considering all the above points. Weighing it up, these are high-end cards; they should be able to do this stuff. Do you want option A, which does a good job for $50 less, or option B, which may well do a better job for that $50 more? And then consider the lifespan you want out of the card.

    There is one game where I do get better frame rates with the 4870 X2, and that is AoC. But when their engine upgrade comes with the expansion next year, I am unsure which top performance card will be better. Until then I will have only the in-between experiences to go by.

    No one wants to waste money on a performance card when there is something a couple of months away with just as much buzz.

    PhysX is just like all the tessellation stuff ATI talked about, except there are plenty of games that use PhysX.

    I am eager to see what the 5870 X2 can do, and the TDP is not double the 4870 X2's :)



  • Cleffy · Member, Rare · Posts: 6,414

    I think you make 2 assumptions that are wrong.  In the business sector, among businesses that make use of discrete graphics, ATI has been winning that race for years.

    As far as hardware-based physics goes: of course the studios you mentioned would support them.  They are engine developers; they get more business the more technology they can support.  They will most likely support Eyefinity and tessellation as well.  However, CUDA and PhysX aren't going to be the standards.  You cannot make a standard out of something that isn't supported by 50% of cards.  This is why OpenCL and DirectCompute will most likely end up the standards, unless nVidia extends support to AMD.  They have better hardware support.

    Defining standards usually takes years, as the proprietary makers either build a bigger support network or adopt a different standard.  It took the 3D industry years before Python and Lua became the scripting standards.

  • dfan · Member · Posts: 362

    CUDA has practically no 3rd-party support. Almost every notable software and hardware company favors OpenCL.

  • Quizzical · Member, Legendary · Posts: 25,507

    "So essentially there is no reason to go and buy a 5870 right now with a larger more dominant competitor's release right around the corner"

    If you want to play that game, then by the time the GT300 is out and generally available, you could just as well say there's no reason to get a GT300 with the Radeon HD 6000 series just around the corner.

    Since you already have a pair of GTX 285s, getting a Radeon HD 5870 wouldn't be an upgrade for you (assuming they're in SLI rather than in separate computers), and getting two of them likely wouldn't be enough of an upgrade to justify the cost.  Most people don't already have two GTX 285s, though.

    Right now, I've got a Radeon X1300 Pro.  Even a $100 card from the previous generation would be a huge upgrade for me (except that I can't use it with my current computer as it has no PCI Express slot).  For me, it's not an issue of, is this a big enough upgrade?  I'm looking to get a new computer shortly, and have been waiting for Lynnfield processors, DirectX 11 video cards, TRIM support for solid state drives, and Windows 7.  With all four of those coming in September or October of this year, I don't want to wait until next year for a GT300 that might be really awesome or might not even be as good as what's on the market now (kind of; AMD says the new cards should be more widely available in two weeks).  Whether or not a GT300 would be better, a Radeon HD 5850 will be good enough for a long, long time.  My Radeon X1300 Pro, GeForce 4 MX (some number that I don't recall), and ATI Rage Pro each gave me a few good years, and a Radeon HD 5850 is a much higher end card than any of those ever were.

    "Physx is just like all the tessellation stuff ATI taked about except there are plenty of games that use physx."

    There are many games that use PhysX and run it on the processor.  That means it runs just as well on an ATI card as on an Nvidia one, or for that matter, on Intel integrated graphics.  We seem to be up to two meaningful games that use GPU acceleration of PhysX.

    http://www.anandtech.com/video/showdoc.aspx?i=3539&p=8

    Over the course of the several pages following the one I linked, that review counted just one as of a few months ago.  Batman: Arkham Asylum makes two.

    The reason tessellation hasn't taken off yet is the same as the reason PhysX hasn't:  developers don't want to put massive resources into something that flatly won't run on many cards--even high end gaming cards--simply because they're the wrong brand.  With tessellation now part of DirectX 11, that will change shortly.  There doesn't seem to be any such change on the horizon for PhysX.

    -----

    GPGPU may eventually be a big deal to the average home user, but we're not there yet.  Proprietary standards like CUDA aren't the only thing holding it back.  The bigger problem, and the one still facing OpenCL and DirectX Compute, is the complete lack of a killer app.  GPU accelerated scientific computing isn't exactly a mainstream activity.  I'd be surprised if it's even terribly common among scientists in general, though it does matter greatly in some specific subfields.  That leaves video transcoding as the only application out there that an average home user might theoretically want, and telling people that this card will let them change the resolution and file format of a video faster than leaving the computer running overnight to have the processor do it isn't likely to create a ton of demand for an expensive card.

  • CleffyCleffy Member RarePosts: 6,414

    There is already demand for DirectCompute.  Even though it's not far outside the realm of what GPUs already do, the key addition is Compute Shaders.  They are somewhat more advanced than the deferred shading previously used in games; the biggest difference you will notice is the ability of a shader to react to its environment.  It will also probably be used in server-side calculations: you can save thousands by utilizing GPGPU instead of CPUs in the server market.
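    Compute shaders in DirectCompute, OpenCL, and CUDA all share the same dispatch model: one kernel invocation per thread ID over a grid of data.  A minimal sketch in plain Python (the `dispatch` and `blur_kernel` names are invented for illustration, not a real API) showing how each "thread" can read neighboring elements, something a traditional pixel shader pass cannot easily do:

```python
# Hypothetical sketch of the compute-shader dispatch model; not a real
# DirectCompute/OpenCL/CUDA API, just an illustration of the idea.

def dispatch(kernel, num_threads, *buffers):
    """Run `kernel` once per thread ID, the way a GPU runs one shader
    invocation per thread in a dispatched grid."""
    for thread_id in range(num_threads):
        kernel(thread_id, *buffers)

def blur_kernel(i, src, dst):
    """Each 'thread' reads its neighbors as well as its own element,
    i.e. it reacts to its environment, then writes one output."""
    left = src[max(i - 1, 0)]
    right = src[min(i + 1, len(src) - 1)]
    dst[i] = (left + src[i] + right) / 3.0

src = [0.0, 3.0, 6.0, 9.0]
dst = [0.0] * len(src)
dispatch(blur_kernel, len(src), src, dst)
print(dst)  # each element averaged with its neighbors
```

    On real hardware the per-thread loop runs in parallel across hundreds of stream processors rather than sequentially, which is where the speedup comes from.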

  • QuizzicalQuizzical Member LegendaryPosts: 25,507

    Using a video card to display graphical stuff in games is well and good (and the main reason why people buy anything other than low end cards), but that's not GPGPU.

    Nvidia has released their official spin on Cypress.  Here it is, with AMD's responses:

    http://www.tweaktown.com/news/13199/amd_respond_to_nvidia_s_tough_radeon_hd_5800_questions/index.html

    Conspicuously absent is any talk of GT300.  When they're reduced to "Sure, PhysX GPU acceleration is basically non-existent, but so is Havok GPU acceleration!", you know they're in trouble.  What's really interesting is that Nvidia doesn't want to talk about how Cypress runs the games already on the market.

  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Originally posted by AmazingAvery
    Supply is very short too, just like when the 4870 was released it took a good couple of months for anyone to have any decent stock.

    While the supply is less than ideal for now, Newegg and ZipZoomFly both have them in stock right now, though for each it's only one brand in stock and several brands out of stock.  That's not exactly a paper launch.  If it's like this the day after launch, I doubt that it will take two months to get reliable supplies of them.

  • CleffyCleffy Member RarePosts: 6,414

    They are still in stock at TigerDirect.  I think the reason is the difficulty of navigating to them.  Correction: they have just started taking pre-orders.

  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    Originally posted by Cleffy


    They are still in stock at TigerDirect.  I think the reason is the difficulty of navigating to them.

    Newegg, TigerDirect, NCIX.com, and ZipZoomFly.com all have them out of stock.  Many sites, including some reputable ones, have reported that supply is a concern: the launch is a hard one, but it suffers from the typical limited supply, with stock coming in and selling out and not many cards available at any one time.  Many places will get another batch starting in October; maybe things will be better then.

    I'm still going to wait for NV to show what they have though.



  • dfandfan Member Posts: 362

    Dell (Alienware) took a large slice of the 5800 stock.

  • jaysinsjaysins Member UncommonPosts: 107

    A brand new architecture on a die shrink is a very risky thing, and something that is generally avoided, as with Intel's tick-tock cycle of CPUs.  Not saying it cannot be done, but Nvidia is taking a huge risk here, and they are being uncharacteristically quiet as well, which only makes you wonder why.  Anandtech did an awesome write-up about ATI's change in how they position themselves in the market and the story behind the 4000 series: http://www.anandtech.com/video/showdoc.aspx?i=3469.  Nvidia won't be able to respond appropriately until at least next generation, or the one after that, because of the long development cycle of GPUs.  The 300 series from Nvidia was already planned before the 4000 series hit, and isn't a response to ATI's new plan of beating Nvidia on cost/performance while giving up the crown of most powerful card.  I think it was a brilliant strategy by ATI, and it clearly caught Nvidia off guard.

    You never know with these launches, but because Nvidia lacked the time to respond properly to ATI's new approach, I suspect they will indeed offer the most powerful GPU on the market but will have difficulty matching ATI's prices and value; last time it took Nvidia many months to become competitive on that front.  That's assuming this very new and unique architecture has decent yields that Nvidia can improve quickly.  I have a 4870 myself, and at the time it was an easy decision, even a no-brainer.  PhysX hasn't yet shown its worth for me at least, and I plan on purchasing one of the next cards available.  I'll wait until the 300 card drops; if it is a winner, ATI will drop their prices and Nvidia will likely follow suit.  I'll wait it out a month or so after the 300 is available and make my decision then.  Currently I'm excited, and I credit ATI for bringing us value that hasn't been seen in the GPU market for quite some time.  In the end I win as a happy consumer.

  • bhugbhug Member UncommonPosts: 944

    9.9.25
    Be advised: this GPU exhausts 88°C air into the case (near the boiling point of water, 100°C), so make sure there is plenty of free air space to the side of and behind one's case.

    Cypress 5870: $380, 1600 stream processing units, 80 texture units, 32 ROPs, 850 MHz core, 1 GB GDDR5 on a 256-bit bus @ 4.8 GHz effective, 153.6 GB/s memory bandwidth, 2.72 TFLOPS single / 544 GFLOPS double precision, 27 W idle / 188 W load, 10.75" long, 68 GTexels/s, 40-88°C, 2.15B transistors, exhausts inside the case.  (The 5850 gives roughly 20% less performance: 512 MB GDDR5, 1440 stream processors, 72 texture units, 32 ROPs, 725 MHz, 2.09 TFLOPS, $260.)
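    The quoted figures hang together; here's a quick back-of-the-envelope check in Python, assuming the standard 2 FLOPs (one multiply-add) per stream processor per clock:

```python
# Deriving the HD 5870's headline numbers from its basic specs.
bus_bits = 256      # memory bus width
mem_gtps = 4.8      # effective GDDR5 transfer rate, GT/s
bandwidth_gbs = bus_bits / 8 * mem_gtps     # bytes per transfer * transfers/s

sp_count = 1600     # stream processing units
core_ghz = 0.850    # core clock
sp_tflops = sp_count * core_ghz * 2 / 1000  # 2 FLOPs (multiply-add) per SP per clock

texture_units = 80
texel_rate = texture_units * core_ghz       # billions of texels per second

print(round(bandwidth_gbs, 1))  # ~153.6 GB/s
print(round(sp_tflops, 2))      # ~2.72 TFLOPS single precision
print(round(texel_rate, 1))     # ~68.0 GTexels/s

# Same math for the 5850: 1440 SPs at 725 MHz
print(round(1440 * 0.725 * 2 / 1000, 2))    # ~2.09 TFLOPS
```

    Double precision on this architecture runs at one fifth the single-precision rate, which is where the 544 GFLOPS figure comes from.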

    video
    UVD 2.0 is back, and it has been enhanced to allow accelerated decode of two 1080p HD video streams at once.  HDMI support has also been improved to HDMI 1.3a.  There have even been HDMI audio improvements: Dolby TrueHD and DTS-HD Master Audio are now supported, with full support for Blu-ray audio formats and up to 8 channels of 192 kHz/24-bit audio.


  • havok527havok527 Member Posts: 80

    Is this card better than the newest most high-end Nvidia video card?

  • dfandfan Member Posts: 362
    Originally posted by havok527


    Is this card better than the newest most high-end Nvidia video card?

     

    The 5870 is a tad slower than the 295, but its other capabilities are much better.

    In the cheaper class, the 5850 beats the 285 in every possible way.

Sign In or Register to comment.