HD5870 Released today

Comments

  • Quizzical Member Legendary Posts: 25,507
    Originally posted by havok527


    Is this card better than the newest most high-end Nvidia video card?



     

    On a per-GPU basis, yes, by far. On a per-card basis, not quite, but it's close. A better apples-to-apples comparison is that the Radeon HD 5870 X2, which is coming soon, will almost surely be dramatically faster than the GeForce GTX 295, which is effectively two underclocked GeForce GTX 275s on a single card.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188
    Originally posted by Quizzical

    Originally posted by havok527


    Is this card better than the newest most high-end Nvidia video card?



     

    On a per-GPU basis, yes, by far. On a per-card basis, not quite, but it's close. A better apples-to-apples comparison is that the Radeon HD 5870 X2, which is coming soon, will almost surely be dramatically faster than the GeForce GTX 295, which is effectively two underclocked GeForce GTX 275s on a single card.



     

    But from the looks of it, Quizzical, the 5870 X2 could possibly be two underclocked 5870s. It would be fairer to compare it against Nvidia's forthcoming dual-GPU card, neither of which has been released yet.



  • Quizzical Member Legendary Posts: 25,507

    As far as what to compare Hemlock to, sure, comparing it to an unreleased Nvidia card right now is fair enough.  The only real problem with that comparison is that we really have no idea how anything in the GT300 series will perform, let alone a theoretical dual GPU card.

    But the other reason why the GTX 295 didn't double the performance of the GTX 285 is that it isn't two GTX 285s on a card. It's two GTX 275s on a card, and that's even before underclocking. The Radeon HD 4850 X2 didn't underclock anything, but it didn't double the performance of the Radeon HD 4870 because it had slower parts to begin with.

    As for whether underclocking will be necessary for Hemlock, it's easier to fit two 188 W GPUs on a single card than two 219 W ones, which is what the GTX 275 is. That doesn't mean it won't end up being necessary, but ATI didn't have to underclock for the 3850 X2, 3870 X2, 4850 X2, or 4870 X2. I guess in the case of the Radeon HD 3000 series, they were helped by the single chips being kind of pathetic to begin with. Meanwhile, Nvidia did have to underclock to make the GeForce 7900 GX2, GeForce 7950 GX2, and GeForce 9800 GX2.

  • AmazingAvery Age of Conan Advocate Member Uncommon Posts: 7,188

    There is a nice review of a crossfire 5870 set up to give an idea of 5870x2 performance here: www.guru3d.com/article/radeon-hd-5870-crossfirex-test-review/1

    Simply put, we need to give the new card some time for its drivers to mature before we get a more accurate picture down the road.

    For now, those of us with two 285s need not rush out, depending on what games you play.



  • Fignar Member Common Posts: 417
    Originally posted by Quizzical

    Originally posted by havok527


    Is this card better than the newest most high-end Nvidia video card?



     

    On a per-GPU basis, yes, by far. On a per-card basis, not quite, but it's close. A better apples-to-apples comparison is that the Radeon HD 5870 X2, which is coming soon, will almost surely be dramatically faster than the GeForce GTX 295, which is effectively two underclocked GeForce GTX 275s on a single card.

     

    I own two of them in Crossfire and I am 100% sure they are held back by the drivers. I am updating the BIOS to the Asus version so I can overclock the cards using the Asus voltage tweaker, which can squeeze out another 38% performance increase. With mature drivers these should beat the 295 easily in a majority of benchmarks. The 5890 and 5870 X2 (or 5890 X2) will be damn fast cards, and I am no ATI fanboy, but Nvidia will have to have something special with the GT300 to beat these and keep the price/performance aspect affordable.

    Water cooled Intel Corei7 920 D0 Stepping OC'd 4.3GHz - 6GB Corsair Dominator GT RAM 2000Mhz - ASUS RAGE II EXTREME X58 Mobo - 2x HD 5870 in Crossfire X, OC'd 0.9Ghz core 1.3Ghz RAM - Dell 2407WFP Flat Panel LCD 24" 1920x1200

  • Abrahmm Member Posts: 2,448

    I'm in the market for a new card, as my 8800 GTS is showing its age. I was thinking about picking up a cheap recertified 260 and waiting for the new 300 series to come out. I'm definitely waiting for the new GTX 300 to come out before I decide on a new card; I'm not going to jump the gun on the 5870s just because they're the newest and shiniest.

    No matter which one wins the performance race, the truth is this competition between the two ensures that the biggest winner of all is the consumer. Thank you capitalism.

    Tried: LotR, CoH, AoC, WAR, Jumpgate Classic
    Played: SWG, Guild Wars, WoW
    Playing: Eve Online, Counter-strike
    Loved: Star Wars Galaxies
    Waiting for: Earthrise, Guild Wars 2, anything sandbox.

  • Quizzical Member Legendary Posts: 25,507

    I don't think it's automatic that the Radeon HD 5870 will eventually be faster than a GeForce GTX 295.  The 5870 probably has more to gain from better drivers than the GTX 295, but we don't really know how much.  That the single-GPU Radeon HD 5870 is generally competitive with the GeForce GTX 295 and sometimes beats it already (though on average, it loses by about 10%) is quite impressive, though.

    If one wishes to compare the cards' performance, it's perhaps worth noting that most reviews focused only on the average frame rate, which isn't always the right measure. Hardware Canucks found that even when a Radeon HD 5870 loses to the Radeon HD 4870 X2 and the GeForce GTX 295 in average frame rates, it sometimes beats them in minimum frame rates. That's an advantage of a single-GPU card over a dual-GPU card. If we get away from raw performance, surely the dramatically lower power consumption and considerably lower price tag matter, too.
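
    To make the average-versus-minimum point concrete, here is a minimal sketch in plain Python (the frame times are made-up numbers, not real benchmark data) of how the two figures come out of a per-frame log and why they can disagree:

    ```python
    # Hypothetical per-frame render times in milliseconds -- illustrative numbers only.
    frame_times_ms = [16.7, 16.9, 17.1, 16.8, 45.0, 16.6, 17.0, 44.2, 16.9, 16.7]

    # Average FPS: total frames divided by total rendering time.
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

    # Minimum FPS: set by the single slowest frame, which is what you feel as stutter.
    min_fps = 1000.0 / max(frame_times_ms)

    print(f"average: {avg_fps:.1f} fps, minimum: {min_fps:.1f} fps")
    # A card with a slightly lower average but a higher minimum can feel smoother in practice.
    ```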

    There isn't necessarily a Radeon HD 5890 card coming, though.  ATI supposedly is working on six known chips for this generation, of which only Cypress is out--and none of them are likely to be branded as a 5890.  (Juniper, Redwood, and Cedar are for mid range and low end cards with a lower price tag, Hemlock is for X2 cards, and Trillian is for the six monitor variant of the 5870.)  People may think that there's a Radeon HD 5890 coming because there was a 4890, but that was a different chip, RV790, as opposed to RV770.  It had more transistors and a bigger die size, not just higher clock speeds.

    Some people have claimed that the Radeon HD 5870 is held back by memory bandwidth (which was only increased by about 33%, not doubled from the previous generation like most of the specs were).  I'd think it would be possible to test that by clocking the memory differently and seeing how it affects performance.  Even if you can't overclock memory (if the chips aren't fast enough), you can underclock it and see if that hurts your performance.  If the card is held back by memory bandwidth, then ATI could perhaps make a Radeon HD 5890 with high-binned Cypress chips and faster GDDR5 memory chips once they're available.
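
    As a rough sketch of that memory-clock experiment (set_memory_clock and run_benchmark below are hypothetical placeholders for whatever overclocking utility and repeatable benchmark you actually use, not real APIs):

    ```python
    def set_memory_clock(mhz: int) -> None:
        """Placeholder: hook this up to your vendor's overclocking tool."""
        raise NotImplementedError

    def run_benchmark() -> float:
        """Placeholder: run a fixed, repeatable benchmark and return its average FPS."""
        raise NotImplementedError

    def bandwidth_sweep(clocks_mhz):
        """Run the same benchmark at several memory clocks and collect the results."""
        results = {}
        for mhz in clocks_mhz:
            set_memory_clock(mhz)
            results[mhz] = run_benchmark()
        return results

    if __name__ == "__main__":
        # Step the memory clock down from stock; if frame rate drops roughly in
        # proportion, the card is bandwidth-limited and faster GDDR5 would help.
        for mhz, fps in sorted(bandwidth_sweep([900, 1000, 1100, 1200]).items()):
            print(f"{mhz} MHz memory: {fps:.1f} fps")
    ```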

  • Fignar Member Common Posts: 417

    That's the problem: if you always wait, you get nothing, because when Nvidia brings out the GT300, ATI will have something else right around the corner to rival it. It's a cat and mouse game, so you just have to decide when you want to take the plunge. The best thing you can do is have no loyalty to either company and just go with whatever is best at the time you decide to buy.

     

     

    Water cooled Intel Corei7 920 D0 Stepping OC'd 4.3GHz - 6GB Corsair Dominator GT RAM 2000Mhz - ASUS RAGE II EXTREME X58 Mobo - 2x HD 5870 in Crossfire X, OC'd 0.9Ghz core 1.3Ghz RAM - Dell 2407WFP Flat Panel LCD 24" 1920x1200

  • Quizzical Member Legendary Posts: 25,507

    My view is that the time to upgrade is when what you have isn't good enough anymore.  It may be worth waiting a bit if something is very close to release (e.g., someone who wants the best <$300 card he can get should wait a few days for the Radeon HD 5850), but if what you have isn't good enough, waiting for something that will be out six months later is kind of silly if it's something as easy to swap out as a video card.  Conversely, if what you have is plenty good enough for what you want to do, it would be silly to upgrade to a Radeon HD 5870 or GT300 or whatever no matter how incredibly amazing they are.

     

  • dfan Member Posts: 362

    GT300 is still very far from release. A realistic assumption for release is the end of Q1 2010 at the earliest.

  • xanphia Member Posts: 684

    Hmmmm, I'm not sure what to get then. The best ATI or Nvidia card available. The pros and cons are obvious on each. I hate decisions.

  • dfan Member Posts: 362
    Originally posted by xanphia


    Hmmmm, I'm not sure what to get then. The best ATI or Nvidia card available. The pros and cons are obvious on each. I hate decisions.

     

    Keep in mind that ATI 5-series performance will improve significantly over time. The only reason favoring the 295 I can think of is that it's 10% faster at the moment, although that will turn upside down in the future.

  • erandur Member Posts: 727
    Originally posted by dfan

    Originally posted by xanphia


    Hmmmm, I'm not sure what to get then. The best ATI or Nvidia card available. The pros and cons are obvious on each. I hate decisions.

     

    Keep in mind that ATI 5-series performance will improve significantly over time. The only reason favoring the 295 I can think of is that it's 10% faster at the moment, although that will turn upside down in the future.

    Don't forget the 5xxx series has DX11 support, and DX11 is a much bigger release than DX10. DX10 added some eye-candy but left out some important things, which made that eye-candy largely useless. DX11 adds GPGPU and finishes what DX10 began.

    IMO, there's no question which card to buy at the moment, at least not until the Nvidia 300 series comes out.

    You know it, the best way to realize your dreams is waking up and start moving, never lose hope and always keep up.

  • Varny Member Posts: 765

    I don't see games needing DX11 until the next round of consoles, because now that all the developers have fucked off over there, we'll all be playing Unreal Engine 3 games.

  • Quizzical Member Legendary Posts: 25,507
    Originally posted by xanphia


    Hmmmm, I'm not sure what to get then. The best ATI or Nvidia card available. The pros and cons are obvious on each. I hate decisions.



     

    If you're looking for a high end card in the near future, there's not much reason to get anything other than a Radeon HD 5000 series. The Radeon HD 5870 is far and away the best single-GPU card on the market, and it offers better performance per dollar than Nvidia's top single-GPU card, the GeForce GTX 285. If you want a multi-GPU system, two Radeon HD 5870s in CrossFire will clobber any two-GPU system Nvidia has to offer. If you want more than two, the Radeon HD 5870 X2 is coming soon, and two Radeon HD 5870 X2s in CrossFireX will almost surely be dramatically faster than two GeForce GTX 295s in Quad SLI at just about everything that isn't processor-bound.

    The GT 300 is only a factor if you're not going to upgrade soon.

  • Quizzical Member Legendary Posts: 25,507
    Originally posted by Varny


    I don't see games needing DX11 until the next round of consoles, because now that all the developers have fucked off over there, we'll all be playing Unreal Engine 3 games.



     

    Even if you don't care a bit about DirectX 11, the Radeon HD 5870 is far better than any single GPU that Nvidia has at DirectX 9 and 10 games.

  • erandur Member Posts: 727
    Originally posted by Varny


    I don't see games needing DX11 until the next round of consoles, because now that all the developers have fucked off over there, we'll all be playing Unreal Engine 3 games.

    GPGPU is about more than just games; it enables you to use your VRAM as normal RAM when you're not gaming. Plus, the current generation of consoles uses DX9, or something similar. The 360 uses something they called DX9.0d iirc; the PC version didn't go higher than 9.0c, so the 360 uses a newer version.

    And we'll never see UT3 games ever again; remember the performance in Fury? ;) Developing for DX11 is also much easier than you think, unlike DX9 to DX10, which was a 'huge leap' for no good reason. DX11 games can be based on DX9 as well as DX10 games, so porting games over is going to be much smoother than it was with the release of DX10.

    Also, DX11 doesn't have that many advantages for consoles. PC developers won't wait for the consoles to develop DX11 games; consoles will never be able to use GPGPU anyway.

     

    Nvidia was the pioneer in DX10, which was a mistake. ATI is the pioneer in DX11, and hopefully DX11 will live up to its expectations.

    You know it, the best way to realize your dreams is waking up and start moving, never lose hope and always keep up.

  • xanphia Member Posts: 684

    I was planning on upgrading before Xmas. So it seems the Radeon 5870 is my best choice then. I don't think I'll have enough capital to get dual 5870s in Crossfire.

  • Quizzical Member Legendary Posts: 25,507
    Originally posted by erandur
    GPGPU is about more than just games; it enables you to use your VRAM as normal RAM when you're not gaming. Plus, the current generation of consoles uses DX9, or something similar. The 360 uses something they called DX9.0d iirc; the PC version didn't go higher than 9.0c, so the 360 uses a newer version.



     

    GPGPU isn't games at all. It basically means using video cards for things that they haven't traditionally been used for--that is, for things other than games. I'm not sure that it's possible to use video memory as normal system memory, but even if you could, why would you? It's cheaper to buy 8 GB of fairly fast DDR3 memory than a high end video card with 1 GB of GDDR5.

    Video cards have huge numbers of shader cores, while processors only have a few cores, typically 2 or 4.  Each shader core in a video card is dramatically slower and less versatile than a core of a typical Intel or AMD processor, making the video card pretty useless for single threaded applications.  The advantage is that the combined processing power of 1600 shader cores in a high end video card dramatically exceeds that of the 4 cores in a high end processor.  If you need a ton of processing power for something that can be sufficiently threaded as to actually stress most of the shader cores in a video card, the video card can offer several times the processing power of even a high end $1000 Core i7 Extreme processor.
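
    To illustrate the kind of workload that suits GPGPU, here is a minimal CPU-side sketch in plain Python (nothing GPU-specific, just the standard library): the same formula applied independently to every element of a large array, which is exactly the shape of problem that could be spread across hundreds of shader cores instead of a handful of processor cores.

    ```python
    from multiprocessing import Pool

    def transform(x: float) -> float:
        # The same arithmetic applied independently to each element --
        # no element depends on any other, so the work splits cleanly.
        return 0.5 * x * x + 3.0 * x + 1.0

    def parallel_transform(values, workers: int = 4):
        # A CPU can only split this across a few cores; a GPU could hand
        # each chunk to one of its many shader cores instead.
        with Pool(workers) as pool:
            return pool.map(transform, values, chunksize=10_000)

    if __name__ == "__main__":
        data = [float(i) for i in range(1_000_000)]
        print(parallel_transform(data)[:3])
    ```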

    The problem with GPGPU is that there aren't many applications that are sufficiently parallel to do that.  Graphics are, which is why video cards were developed to handle 3D graphics.  There are some problems in scientific computing that have to apply the same formulas to enormous data sets that can easily be made sufficiently parallel, but that's a niche application that an average home user couldn't make use of.  The only home application of GPGPU at the moment is video transcoding, that is, changing file format or resolution of a video.  That's not exactly a killer app.

    More general types of video editing may eventually go GPGPU, but that's still not the sort of thing that huge numbers of people need to make use of.  I don't see any other natural candidates for it, either, and if there were any obvious ones, Nvidia would surely be hyping them by now.  Something that requires a ton of processing power but doesn't consist almost entirely of computations that can be done in parallel with a huge number of threads won't work.  Something that can be made sufficiently parallel but can also be handled by normal processors just fine makes it pointlessly expensive to put in the extra coding expense to make it use GPGPU rather than a normal processor.

    And it will become harder to come up with a killer app for GPGPU as time passes, too.  With processors having to contend with runaway current leakage starting around 3 GHz and really hitting a wall around 4 GHz, both Intel and AMD have decided that the best way to improve processor performance is by adding more cores more so than clocking them higher.  AMD's "Bulldozer" architecture due out in 2011 will reportedly have 16-core processors.  With processors themselves becoming increasingly parallel, the threshold for just how parallel something needs to be to make GPGPU a sensible approach to it will only increase.

  • Quizzical Member Legendary Posts: 25,507
    Originally posted by xanphia


    I was planning on upgrading before Xmas. So it seems the Radeon 5870 is my best choice then. I don't think I'll have enough capital to get dual 5870s in Crossfire.



     

    If a $380 card fits your budget and you want a good video card for gaming, then the Radeon HD 5870 is the card to get, and will almost surely remain that way until Christmas.  If you buy one now, I doubt that you'll particularly regret it before Christmas.  There might be some factory overclocked versions out or minor price cuts by then, but that's about all that is likely to change before the end of the year.  It's possible but not likely that Nvidia's card that people have been calling GT 300 but is now rumored to be "Fermi" could be out by then, though we really have no idea how it will perform.

    At the super high end, the other card that will be out by Christmas is Hemlock, a dual GPU card likely to be branded as the Radeon HD 5870 X2, and at a really high price point.  Heading down into the mid range, Juniper should be out by the end of the year, and is rumored to be branded as Radeon HD 5770 and 5750.  Those should have performance comparable to the Radeon HD 4890 and 4870, or the GeForce GTX 275 and 260 if you prefer comparisons to Nvidia cards.  They'll probably go for somewhere between $100 and $200.  ATI has Redwood and Cedar cards coming at under $100 (and probably under $50 for Cedar, but you wouldn't want to game on that), but those probably won't be out this year, not least because AMD's official guidance on their release date is Q1 2010.

    On the Nvidia side, the GeForce G210 and GeForce GT 220 should be out soon, but the G210 is a very low end card, and the GT 220 is down there as well.  A GT 230 and GT 240 have been long rumored (code names GT 214 and 212 respectively), but I haven't heard any rumors on if or when those will release.  Those will be lower midrange cards.  There are rumors that some of the GT* 200 series cards will be rebranded as the GT 300 series to try to cover up that Nvidia is a generation behind until they can get the real GT 300 cards out.

    As far as price cuts go, I'd expect that the Radeon HD 5870 and 5850 will drop a bit in price once there is enough stock that every card manufacturer who cares to has them sitting on store shelves everywhere rather than flickering in and out of stock. I don't see any major downward pressure on prices until Nvidia has their GT 300 series out so that competition can force prices down. Nvidia wouldn't want to get into a price war with their existing cards, as a 470 mm^2 GTX 275 is vastly more expensive to produce than a 170 mm^2 Juniper, so a price war would be suicide on their part. I wouldn't expect competition from the GT 300 to push prices down as dramatically as happened with the Radeon HD 4800 series, though; back then ATI had to offer a better price/performance ratio than Nvidia to finally get some decent market share after offering consecutive generations of uncompetitive parts. That's not coming until next year, anyway.

  • dfan Member Posts: 362
    Originally posted by Varny


    I don't see games needing DX11 until the next round of consoles, because now that all the developers have fucked off over there, we'll all be playing Unreal Engine 3 games.

     

    There is already one DX11 game out, BattleForge, and more will come out during this year.

  • xanphia Member Posts: 684

    Thanks for the very well-thought-out post, Quizzical. It seems the 5870 is the way to go. I plan on purchasing around November, so stock will be up, plus Xmas deals will be in full swing and I might be able to find a bargain. I'm getting a whole new computer, so even if I don't get a deal on the card itself, I might on other parts.

    One last question: how long will the Radeon 5870 stay future-proof? I know the new Nvidia card will beat it, but as far as games go, how many years will I be able to get out of this card?

  • Quizzical Member Legendary Posts: 25,507

    How future-proof the card will be depends on what games you want to play and at what settings. If you're willing to turn down settings and run games at low resolutions, it will last a lot longer than if you insist on running games at 2560x1600 with all settings maxed (in which case, it will struggle with Crysis Warhead already). Because it has DirectX 11, which is brand new at the moment, it will presumably be at least compatible with nearly all future games (basically excluding a handful of games that use proprietary Nvidia stuff) for several years.

    Apparently Nvidia is claiming that the "GT 300" will be out in November, while ATI says Nvidia is a good six months behind them.  Obviously ATI has neither the inside information nor the incentives to offer a charitable guess on how soon an Nvidia card will be out.  Meanwhile, if the GT 300 were so close to release, Nvidia should be able to show stuff off now, and they're conspicuously not doing so.  Apparently they have some conference later this week which could be a natural venue; if they don't show off a working GT 300 there, then it's not coming anytime soon.

    For comparison, ATI reportedly had working Radeon HD 5870 cards in July, but didn't release them until they had time to mass produce enough to do a real launch. It apparently takes about two months to go through the production process, and you don't want to start mass producing cards until you have a working one: you don't really know if a chip will work right until you produce it, and you don't want to pour $50 million into mass producing chips of a given design only to find out, when the first one comes out, that the design flatly doesn't work. Nvidia might go the paper launch route: as soon as they have working cards, send them to reviewers and declare the cards released, then ramp up production and have them start showing up in stores two months later.

    It's far from guaranteed that the GT 300 will be faster than the Radeon HD 5870, even when it does release. My guess is that it will be, as Nvidia has used the traditional enormous die sizes for high end cards in every recent generation of cards, while ATI went with smaller die sizes to reduce costs. If the GT 300 has well over three billion transistors and Cypress has 2.15 billion at the same node, Nvidia would only have to be kind of competent to come up with a faster card, even if ATI makes an amazingly good 2.15 billion transistor chip. Some comments coming from Nvidia make it sound like the GT 300 will be optimized more for GPGPU than for gaming.

    I do expect the GT 300 to do GPGPU stuff a lot better than the Radeon HD 5870.  Nvidia has always been the company pushing that forward, while ATI just tagged along and said, yeah, we can do that, too, even if not as well.  This is probably at least in part a function of ATI being part of AMD, which will sell you real processors (Phenom II, etc.), while Nvidia only does video cards and not actual processors.  I don't expect GPGPU to matter in the near future to more than a tiny fraction of people currently interested in buying a high end video card.

    -----

    Also, Intel just showed off Larrabee (the new video card they're working on) in some raytracing demo that could only deliver 10 frames per second. So apparently they do have working Larrabee video cards. They're just slow at the moment, and it's not clear what features they have. First-generation Larrabee will probably be a disaster from a gaming perspective, and is a long way off, anyway. I think Intel is probably putting billions into Larrabee because 1) they figure they'll eventually need it in order to compete with AMD as graphics get increasingly integrated into processors, and 2) if GPGPU ends up being as big as Nvidia thinks it will, that could diminish the need for a high end Intel processor, which is a big problem for Intel unless people are buying a high end Intel video card instead of a high end Intel processor. AMD owns ATI, which makes good video cards, i.e., video cards that someone might buy with the intention of playing games on them. Nvidia meets that standard, too. Intel's integrated graphics fail it miserably. Intel tried to release a real video card (the i740) 11 years ago, and it was generally met with derisive laughter until Intel gave up and withdrew from the video card market.

  • noquarter Member Posts: 1,170

    The GTX 300 will likely be faster than the 5870, but the one that is will also be ~$150 more than the 5870, placing it firmly in that ridiculous price range where diminishing returns mean you get completely ripped off. Sort of like buying an Extreme Edition i7 for $1000. The 5870 should be competitive with whatever card is priced next to it from Nvidia (GTX 360), so you can't really lose even if Nvidia does open up with a GTX 380 as well, unless you really want to waste money on the premium. All signs also point to a Spring '10 launch, so I think we're still a ways off.

  • xanphia Member Posts: 684

    I'll be using a 22" Viewsonic, which I used for my competitive Halo 3 days. Obviously I can adjust the resolution, but those damn black bars -_-. So I'll probably be playing at a fairly high resolution. I don't have the specs in front of me, so it's tough to say, but I can turn the resolution down if needed, of course.

    So, thanks for all the info. From my perspective it seems the 5870 is the way to go as far as price/performance goes.
