
ATI 5870 revealed!

frozenvoidfrozenvoid Member Posts: 40

www.techpowerup.com/103599/AMD_Cypress__Radeon_HD_5870__Stripped.html

 

The die measures 338 mm², which counts as huge for a 40 nm chip, and the transistor count of ~2.1 billion bears that out. In contrast, AMD's older flagship GPU, the RV790, holds 959 million transistors, and NVIDIA's GT200 holds 1.4 billion.
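
To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python using only the figures quoted above (my own illustrative sketch, not from the article):

# Figures quoted above for Cypress, RV790 and GT200.
cypress_transistors = 2.1e9   # ~2.1 billion
cypress_die_mm2 = 338.0       # 338 mm^2
rv790_transistors = 959e6
gt200_transistors = 1.4e9

density = cypress_transistors / cypress_die_mm2
print(f"Cypress density: {density / 1e6:.1f} M transistors per mm^2")   # ~6.2
print(f"vs RV790: {cypress_transistors / rv790_transistors:.1f}x the transistors")  # ~2.2x
print(f"vs GT200: {cypress_transistors / gt200_transistors:.1f}x the transistors")  # ~1.5x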

It's over, Nvidia is finished.



Comments

  • CleffyCleffy Member RarePosts: 6,414

    I am a little disappointed in the clock speed.  The 4890 had models over 1 GHz, so having the same clock as the 4870 is a bit disappointing.  I guess waiting for the nVidia GPU is the best choice after all, to see how the OEMs alter the model.

    I wouldn't count nVidia out yet.  First, they are definitely going to double their transistor count, considering they are also on a 40 nm die.  Also, their large developer network has a tendency to make ATI cards underperform, even though ATI cards have much higher processing power and bandwidth.  nVidia also commonly bounces back after taking a beating.  They took a beating in 2008, so chances are the GTX 300 will be a really competitive part that is cheaper to manufacture.

  • Loke666Loke666 Member EpicPosts: 21,441
    Originally posted by Cleffy


    I am a little disappointed in the clock speed.  The 4890 had models over 1 GHz, so having the same clock as the 4870 is a bit disappointing.  I guess waiting for the nVidia GPU is the best choice after all, to see how the OEMs alter the model.
    I wouldn't count nVidia out yet.  First, they are definitely going to double their transistor count, considering they are also on a 40 nm die.  Also, their large developer network has a tendency to make ATI cards underperform, even though ATI cards have much higher processing power and bandwidth.  nVidia also commonly bounces back after taking a beating.  They took a beating in 2008, so chances are the GTX 300 will be a really competitive part that is cheaper to manufacture.

     

    Of course Nvidia will strike back... but Nvidia and ATI have been switching places over who has the fastest GFX cards ever since 3dfx and Matrox got out of the business.

    To be honest, I think Nvidia has held the leading position longer, but it is rather close. Nvidia will once again leap ahead later, and then ATI will release even better stuff, and so on. That is just how things work.

    But I am rather happy with my GTX 295 right now; I'll be waiting at least a year before I upgrade. Whose cards I upgrade to depends on what is best then.

    I usually prefer Nvidia, not because of the hardware but because I think Nvidia's drivers are a lot better. Hardware is another matter, but without good drivers a card can't use its hardware to the max.

  • HricaHrica Member UncommonPosts: 1,129

    My Voodoo 5 AGP will own this card.

  • AbrahmmAbrahmm Member Posts: 2,448

    I'm thinking of picking up a cheap, re-certified 260GTX to replace my aging 8800gts and waiting until the nVidia 300 series comes out. I was just reading an article about the new 300 series and how it is making some big leaps in graphics technology.

    Tried: LotR, CoH, AoC, WAR, Jumpgate Classic
    Played: SWG, Guild Wars, WoW
    Playing: Eve Online, Counter-strike
    Loved: Star Wars Galaxies
    Waiting for: Earthrise, Guild Wars 2, anything sandbox.

  • tvalentinetvalentine Member, Newbie CommonPosts: 4,216

    lol, I wonder when they will decide to shrink these cards... I had trouble fitting my 8800 Ultra into a full-sized case. TBH the cards are fast enough as it is; they are just too damn big. I've been holding off on upgrading until they release smaller cards, but at this rate I don't see that happening.


    Playing: EVE Online
    Favorite MMOs: WoW, SWG Pre-cu, Lineage 2, UO, EQ, EVE online
    Looking forward to: Archeage, Kingdom Under Fire 2
    KUF2's Official Website - http://www.kufii.com/ENG/ -

  • noquarternoquarter Member Posts: 1,170


    Originally posted by Abrahmm
    I'm thinking of picking up a cheap, re-certified 260GTX to replace my aging 8800gts and waiting until the nVidia 300 series comes out. I was just reading an article about the new 300 series and how it is making some big leaps in graphics technology.

    Big leaps, considering the 9 and 200 series were reprints of the 8 series, right? ;)

  • SlayVusSlayVus Member Posts: 22

    Here is a thread about the 5850 and 5870 that I posted on Extreme Overclocking:

    http://forums.extremeoverclocking.com/showthread.php?t=328335

    Core i7 920 @ 3.8GHz
    MSI X58 Platinum SLI
    G.Skill PI Black 3x2GB DDR3-1600
    Western Digital CB 500GB x3 RAID-5
    VisionTek HD 4870 X2
    Corsair 1000HX

  • Syno23Syno23 Member UncommonPosts: 1,360

    The ATI HD 5870 X2 is going to be the new beast! It's going to smoke Nvidia, but I still have to go with Nvidia for stability.

  • Greater_ForceGreater_Force Member Posts: 28

    With only 3 big players (2 if you don't count Intel mobile GPUs) in the GPU business it would be a loss not to see NVidia recover from the problems they have been going through. I think ATI cards are great but we need competition in the market to keep prices down and innovation on the move.

    -Greater Force

  • CleffyCleffy Member RarePosts: 6,414

    Intel is coming out with Larrabee, which probably won't do well, but it is an option.  Nvidia's GTX 300 series is a new architecture on a 40 nm die using GDDR5.  The architecture is very similar to ATI's, since ATI has been able to help define the future DirectX specification, having previously been able to meet it.  nVidia is still going to be competitive, especially since they have the bigger market share.

    With Intel edging more partners off its platform and becoming more of a singular entity in PCs, if things got bad enough, AMD would buy nVidia.

  • ohsofresh42ohsofresh42 Member Posts: 68

    I've fallen in love with my 4850; that little bugger handles everything I throw at it. The 5870 doesn't seem like a huge leap forward from the 4870 other than DX11, and those cards have been the best on the market for a while now. It will be interesting to see what the green team's new card will do; hopefully they can win me back, but right now ATI can do no wrong in my book. ATI's prices are better, and you still can't go wrong with a 4800 series card.

  • noquarternoquarter Member Posts: 1,170

    I think it's funny that the 5870 even gets compared to the GTX 295, which is really two GTX 275s. The single fastest GPU at the moment is actually the GTX 285, so it seems that should be the benchmark.

  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188

    Nice looking card. I do like ATI.

    I will be waiting for Nvidia's new card in November, only a couple of months away, to make up my mind about which will be better.

    The rumours say that, single card versus single card, Nvidia is better, which in turn would lead to a better-performing dual-GPU high-end card.

    It is all about performance for me; I'm not bothered about power draw, heat, etc.



  • dfandfan Member Posts: 362

    Nvidia's next-generation cards, the GT300 series, won't come in November.  They will come out at the end of Q1 2010 at the earliest, most likely even later.

  • AmazingAveryAmazingAvery Age of Conan AdvocateMember UncommonPosts: 7,188
    Originally posted by dfan


    Nvidia's next-generation cards, the GT300 series, won't come in November.  They will come out at the end of Q1 2010 at the earliest, most likely even later.



     

    Not confirmed, no, but see www.fudzilla.com/content/view/15535/1/ ; with the sources they have had all along, they have a good track record.



  • QuizzicalQuizzical Member LegendaryPosts: 25,507

    Let's compare where Nvidia is now to where ATI was a few months ago.

    As far as performance goes, a full node die shrink from 55 nm to 40 nm is a big deal, as full node die shrinks always are.  Making each transistor take up only about half the area it did before lets you have better power efficiency, better production cost efficiency, and better performance by fitting twice as many transistors in the same area.
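
    As a rough sketch of the arithmetic behind that claim (an idealized scaling model; real processes never scale perfectly):

    # Idealized scaling for a 55 nm -> 40 nm full node shrink: area per
    # transistor goes roughly with the square of the feature size.
    old_node, new_node = 55.0, 40.0
    area_ratio = (new_node / old_node) ** 2
    print(f"Relative area per transistor: {area_ratio:.2f}")        # ~0.53
    print(f"Transistors in the same area: {1 / area_ratio:.2f}x")   # ~1.9x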

    ATI released their first 40 nm desktop video card at retail in April, the Radeon HD 4770.  It took a bit for it to become commonly available, though there were plenty available for anyone who wanted to buy one by June, and maybe in May.  While the 4770 wasn't exactly a high end card, its performance did blow away its nearest competitors in die size (Radeon HD 4670 and GeForce 9500), as one would hope for with a full node die shrink.  The only reason it's not more popular is that ATI prices it essentially the same as the slightly faster Radeon HD 4850.

    In comparison, Nvidia still doesn't have any 40 nm desktop video cards available at retail.  They did quietly release the GeForce G 210 and GT 220, but only to OEMs.  I couldn't find any review of the GT 220 at all.  The G 210 is a really pathetic card.

    http://www.pcgameshardware.com/aid,694223/Geforce-G210-Nvidias-first-DirectX-101-card-reviewed/Reviews/

    The quick summary is that compared to the Radeon HD 4350 (ATI's lowest end card from the Radeon 4000 generation), the G 210 performs worse in spite of a comparable die size, uses more power at idle, and uses more power at load.  That's like doing a full node die shrink and getting a worse card as a result of it.

    Like the Radeon 4770, the G 210 is really only a test part to see if Nvidia could make 40 nm graphics cards effectively, before trying to make bigger, faster, harder to make cards at 40 nm.  Nvidia wasn't able to get any gains out of the transition to 40 nm at all, or at least not yet.  That makes Nvidia today look behind where ATI was in April in this regard, and if they can't do the die shrink properly, they have no viable product.

    -----

    DirectX 11 support is another feature of next generation cards that is a big deal.  It may not matter much to someone who is going to replace his video card every year, as it will take a while for a lot of games that use DirectX 11 to be released.  But it's highly probable that DirectX 11 will be commonly used in a few years, making it an important consideration for anyone planning on buying a card and keeping it for a few years.

    The Radeon 4000 series of ATI cards had DirectX 10.1 a year ago.  ATI showed off working DirectX 11 hardware at Computex this spring.  They've shown off their "Cypress" chip (to be released as the Radeon HD 5870 and 5850) to the media on two occasions since then.

    Nvidia, meanwhile, doesn't have any DirectX 10.1 desktop cards available at retail, let alone DirectX 11.  The G 210 and GT 220 have DirectX 10.1, but are OEM-only, at least so far.  Nvidia has yet to demonstrate working DirectX 11 hardware at all.

    Indeed, they've gone so far as to downplay the importance of DirectX 11 entirely.

    http://www.xbitlabs.com/news/video/display/20090916140327_Nvidia_DirectX_11_Will_Not_Catalyze_Sales_of_Graphics_Cards.html

    That's the sort of thing that you don't do if you've got your own DirectX 11 cards coming to market just around the corner.  From that article:

    "The firm believes that general purpose computing on graphics processing units (GPGPU) as well as its proprietary tools and emergence of software taking advantage of these technologies will be a better driver for sales of graphics boards than new demanding video games and high-end cards."

    No, seriously.  I didn't just make that up.  They're downplaying the importance of gaming performance.  Ouch.

    Being first to market with DirectX 11 can matter, too.  Right now, any game developer working on a DirectX 11 game and trying to optimize video card performance is doing so with ATI hardware, because there is no such Nvidia hardware.  Without really intending to, that means they're implicitly optimizing their code to work well with ATI's architecture, and not Nvidia's.  That doesn't necessarily preclude them from going back later and trying to make their code work well with Nvidia cards, too, but you'd rather be in ATI's position than Nvidia's in this respect.

    -----

    So what about CUDA and PhysX, which Nvidia pushes relentlessly?  If you want PhysX right now, that basically means you want to play either Mirror's Edge or Batman: Arkham Asylum.  If you don't want to play either of those games, then you don't have any use for PhysX yet.  If you want CUDA right now, the nearest thing to a killer app for it seems to be video transcoding, that is, converting a video from one file type or resolution to another.

    Even if you believe that GPGPU is going to be a huge deal, it doesn't immediately follow that Nvidia will be better than ATI.  Suppose that you're a developer working on GPGPU software.  Do you use CUDA, which will run only on Nvidia cards; DirectX compute, which will run on any company's hardware (at least once DirectX 11 is out) provided that the OS is Windows; or OpenCL, which will run on any hardware and any operating system?  Unless CUDA is far superior to the alternatives, you're not going to code for CUDA.  I wouldn't bet against Microsoft's API, as they've been really, really good at getting their standards adopted in the past.
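
    To make the portability argument concrete, here is what a minimal vendor-neutral GPGPU program looks like: a vector add written against OpenCL through the pyopencl bindings (my own illustrative sketch, assuming pyopencl and an OpenCL runtime are installed; the same kernel source runs unchanged on ATI, Nvidia, or CPU implementations):

    import numpy as np
    import pyopencl as cl

    # The kernel is plain OpenCL C; any conforming device can compile it.
    KERNEL = """
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)
    out = np.empty_like(a)

    ctx = cl.create_some_context()       # picks whatever OpenCL device is available
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

    prog = cl.Program(ctx, KERNEL).build()
    prog.vec_add(queue, a.shape, None, a_buf, b_buf, out_buf)   # launch 1024 work items
    cl.enqueue_copy(queue, out, out_buf)                         # read the result back

    assert np.allclose(out, a + b)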

    Now, Nvidia's architecture will probably be better optimized for GPGPU than ATI's, regardless of what API ends up winning.  But there are soon to be three major video card companies, not two.  Larrabee is coming, and Intel has vastly more experience with designing hardware for general purpose computing than Nvidia.  Programmers have vastly more experience trying to code for x86-based processors (which Larrabee will be) than any proprietary standard Nvidia could come up with.  It's far from certain that Larrabee will beat "GT300" (or whatever Nvidia decides to call their next line of cards) at GPGPU performance.  It's worth noting that Intel doesn't have to do a decade worth of catching up on making general-purpose computing hardware, like they have to do with high end 3D gaming hardware if Larrabee is to be competitive there.

    What about PhysX?  Sure, it's been used in a zillion games, but rarely with hardware acceleration.  Meanwhile, Havok has also been used in a zillion games, and is owned by Intel.  Havok isn't proprietary like PhysX is, either; ATI cards will support hardware acceleration for Havok physics once the software for it is done.  And so we're back in the same situation as before:  if you're a game developer, do you want to use PhysX hardware acceleration and only run on Nvidia cards, or do you want to use Havok hardware acceleration and run on everything?  Actually, you probably want to use neither, as diverting the video card to other uses during gameplay tends to hurt game performance.

    -----

    So does Nvidia have an ace up their sleeve, that they're further along than they're letting on and keeping it secret?  The specter of Adam Osborne still haunts the computer industry.  Hype your next line of products and people stop buying your previous one, in order to wait for the next.  That's fine once the next line is out and people buy one rather than the other.  That's not so good if it means they buy nothing instead of something from you today.  And if the next line of products is delayed, they buy nothing from you tomorrow, either.  Osborne Computer Corporation ended up in bankruptcy, and companies have zealously pushed NDAs ever since.

    Or at least that's the situation today.  But what about after the Radeon 5000 series is out?  Trying to compete against the Radeon HD 5000 series with the GeForce GT 200 series will be an exercise in futility unless the Radeon HD 5870 demonstrations were all smoke and mirrors and the cards aren't nearly as good as is believed.  The combination of lower performance, higher power consumption, less compatibility, and a higher price tag (or at least higher production costs, due to a larger die size) at every price point on the market would be toxic to Nvidia's sales.  Such are the problems with being a generation (and a full node) behind.

    If that happens, Nvidia silence would no longer be a case of trying to get a sure sale today rather than a possible sale tomorrow.  It would be a case of giving ATI a sure sale today, rather than a possible Nvidia sale tomorrow.  If GT 300 or whatever they're going to call it (with Nvidia's naming schemes as incomprehensible as they are, would anyone be particularly shocked if they name their next high end card "Bob"?) is truly coming with general retail availability in November, it would behoove Nvidia to jump around screaming and demonstrating everything that they possibly can to convince customers to wait for the GT 300 in November, rather than buying an ATI card now.

    Indeed, Nvidia is already trying to steal attention from ATI.  On the same day that ATI showed off their Radeon HD 5870, Nvidia also had a press event.  Coverage from that day talked at length about Eyefinity (a gimmick unless someone can produce bezel-free monitors) and Radeon HD 5870 specs and benchmarks (over 2.7 teraflops!), with nary a word about whatever happened at the Nvidia event.  If they don't have something to show once the ATI cards are out, well then, behold, I bring you the GeForce GTS 350:
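
    For what it's worth, the "over 2.7 teraflops" figure falls straight out of simple arithmetic if you assume the widely reported Cypress specs of 1600 stream processors at an 850 MHz core clock, with each stream processor doing one multiply-add (two floating point ops) per clock; those specs are my assumption here, not something stated in this thread:

    # Hypothetical peak single-precision throughput for the Radeon HD 5870.
    stream_processors = 1600          # assumed
    core_clock_hz = 850e6             # assumed 850 MHz
    flops_per_sp_per_clock = 2        # one fused multiply-add = two ops
    peak = stream_processors * core_clock_hz * flops_per_sp_per_clock
    print(f"Peak single precision: {peak / 1e12:.2f} TFLOPS")   # ~2.72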

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=2010380048%201305520548%20106792634%201067947241&bop=And&ShowDeactivatedMark=False&ActiveSearchResult=True&Order=PRICE

    and the GeForce GT 330:

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=2010380048+1305520548+106793024&QksAutoSuggestion=&ShowDeactivatedMark=False&Configurator=&Subcategory=48&description=&Ntk=&CFG=&SpeTabStoreType=&srchInDesc=

    Well, if variants on the GeForce 8800 GTX have spanned three generations so far, why not make it four?

    It may not come to that.  But if it does, hoping for a paper launch of the "real" GT 300s by the end of the year may well be overly optimistic.  We'll know a lot more soon, once the Radeon 5000 series is out.

    Of course, this only matters to someone looking to buy a new video card in the next few months.  If you're holding off on computer purchases for Sandy Bridge or Bulldozer, then we really have no clue how the GTX+*~^ 485 will compare to the Radeon HD 6870 or Core i11-960 (or whatever Intel decides to call Larrabee; they're almost as bad as Nvidia in the inscrutable naming department), or whatever the next generation of cards goes by.  But with everyone else scrambling to get hardware out before Windows 7 (do you really think it's a coincidence that Lynnfield and Propus both just launched?), one would think Nvidia would want to do so, too.

    At the super high end, the top GT 300 probably will beat the Radeon HD 5870 in performance, either in single GPU performance or Quad SLI vs CrossfireX.  Video cards scale well from adding additional cores, and it would take an epic disaster (e.g., the Itanic) for Nvidia to be unable to leverage a much larger die size into increased performance.  But it does matter when it is released.  I'll bet that the Radeon 6000 series will beat the Radeon 5000 series in performance, too.

  • dfandfan Member Posts: 362

    Almost every third-party software and hardware company is backing OpenCL; CUDA will die, and that's a 100% sure thing.

  • frozenvoidfrozenvoid Member Posts: 40
    Originally posted by Quizzical


    Let's compare where Nvidia is now to where ATI was a few months ago. [...]

     

    Nvidia cannot handle the 40 nm process. A blog recently stated that for every 400 parts they put in the furnace, they only got 17 usable ones (about a 4% yield).


  • drbaltazardrbaltazar Member UncommonPosts: 7,856
    Originally posted by Abrahmm


    I'm thinking of picking up a cheap, re-certified 260GTX to replace my aging 8800gts and waiting until the nVidia 300 series comes out. I was just reading an article about the new 300 series and how it is making some big leaps in graphics technology.

    Hmm, I wonder, with all these new cards coming out, do they still fit in, say, the space of a regular 8800 GT? Man, mine was fairly big with the fan and all. Probably I should just give this computer to my kid and replace the whole thing.

    But with the Vista commotion I'm kind of skittish about Windows 7. I think I'll wait; anyway, it's not like I would need it.

    Hell, only a few games can stress my computer right now; my biggest slowdown is the hard drive.

    The previous poster's speed problem is probably the same as mine: the hard drive.

    But going ballistic with an SSD, like say an Intel X25-E (or their new one, if they have one), is not cheap; they run from $500 to $800 I hear for the Intel model, which was the fastest. Perfect for a game like EverQuest 2.

  • QuizzicalQuizzical Member LegendaryPosts: 25,507
    Originally posted by drbaltazar
    Hell, only a few games can stress my computer right now; my biggest slowdown is the hard drive.
    The previous poster's speed problem is probably the same as mine: the hard drive.
    But going ballistic with an SSD, like say an Intel X25-E (or their new one, if they have one), is not cheap; they run from $500 to $800 I hear for the Intel model.



     

    You don't need an SLC solid state drive, any more than you need an enterprise class hard drive.  SLC is designed for use in servers that will be constantly writing to the drive around the clock, so that it doesn't wear out too soon.  If you're not going to write more than, say, 100 GB to the drive per day (as very, very few home users would), then an MLC drive will last basically forever, anyway, or at least until it loses its charge after 10 years or so.
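
    A crude way to see why, with the per-cell write-cycle rating and the write amplification below being my own assumed round numbers rather than anything from a spec sheet:

    # Rough MLC wear-out estimate for an 80 GB drive written at 100 GB/day.
    capacity_gb = 80
    writes_per_day_gb = 100
    pe_cycles = 10_000           # assumed MLC program/erase cycles per cell
    write_amplification = 2      # assumed controller overhead

    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    days = total_writable_gb / writes_per_day_gb
    print(f"Estimated lifetime: {days / 365:.0f} years")   # on the order of a decade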

    SLC is only slightly faster than MLC, but vastly more expensive.  If you want an Intel-based solid state drive, get their second generation X25-M (not their first generation one of the same name), which Intel sells at $225 for 74 GB (marketed as 80 GB, but that's hard drive manufacturer shenanigans), though due to an initial shortage caused by a recall (to fix a bug where changing your bios password killed the drive), some retailers are charging a huge premium over that.

    You can also get a cheaper solid state drive by going with an Indilinx-based drive.  Apparently OCZ's MSRP on their Agility drives is $160 for 60 GB and $270 for 120 GB, though vendors are still charging more than that.  Their forthcoming Solid 2 series may well be cheaper yet.  Even the reference Indilinx model (OCZ Vertex, Super-Talent UltraDriveGX, Corsair X-series, Patriot Torqx, G.Skill Falcon, Crucial M225 series) is cheaper per GB than the Intel ones, and still pretty good.
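
    For the curious, the "marketed as 80 GB" gap is just decimal versus binary gigabytes, and the price-per-GB comparison works out like this from the numbers quoted above (a quick sketch):

    # 80 GB (decimal, as marketed) expressed in binary gigabytes (GiB).
    print(f"{80e9 / 2**30:.1f} GiB")                       # ~74.5

    # Price per marketed GB from the prices quoted above.
    drives = {"Intel X25-M 80 GB": (225, 80),
              "OCZ Agility 60 GB": (160, 60),
              "OCZ Agility 120 GB": (270, 120)}
    for name, (price, gb) in drives.items():
        print(f"{name}: ${price / gb:.2f}/GB")             # 2.81, 2.67, 2.25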

    The Indilinx-based drives aren't as fast as the Intel ones, but I don't think that the difference between 30x hard drive speed in random reads/writes and 50x is nearly as important as the difference between 30x and 1x.  A quick summary of the situation:

    * All hard drives and solid state drives are good at sequential reads (i.e., huge files); some are better than others, but that scarcely matters if the slower ones aren't a bottleneck, either.

    * All hard drives and solid state drives are good at sequential writes, too.

    * All solid state drives are good at random reads (i.e., a zillion small files), while all rotating platter hard drives are bad at random reads.

    * All rotating platter hard drives are bad at random writes, while some solid state drives (Intel and Indilinx) are good at them, some (Samsung) are bad, and some (JMicron) are so horribly awful as to be nearly unusable without various hacks to try to get around the fundamental problem that the drive has horrible write latency.
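
    If you want to see the random-versus-sequential gap on your own drive, here's a crude sketch (not a proper benchmark: it writes a temporary 256 MB file, and since the OS cache will hold a file that small, you'd need a much larger file, or dropped caches, for meaningful numbers):

    import os, random, tempfile, time

    CHUNK = 4096
    FILE_SIZE = 256 * 1024 * 1024          # 256 MB test file

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(FILE_SIZE))
        path = f.name

    try:
        # Sequential read: one pass, front to back, in 1 MB chunks.
        start = time.time()
        with open(path, "rb") as f:
            while f.read(1024 * 1024):
                pass
        seq = time.time() - start

        # Random read: the same amount of data in scattered 4 KB chunks.
        offsets = [random.randrange(0, FILE_SIZE - CHUNK) for _ in range(FILE_SIZE // CHUNK)]
        start = time.time()
        with open(path, "rb") as f:
            for off in offsets:
                f.seek(off)
                f.read(CHUNK)
        rnd = time.time() - start

        print(f"sequential: {seq:.2f}s, random: {rnd:.2f}s, ratio: {rnd / seq:.1f}x")
    finally:
        os.remove(path)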

    Of course, a lot of solid state drive manufacturers can't tell you this because they still want you to buy their JMicron drives just to get rid of them, even if they're also offering something better.  Intel and Crucial never produced such awful drives in the first place, Corsair has discontinued theirs, and OCZ is in the process of discontinuing theirs, too.

  • drbaltazardrbaltazar Member UncommonPosts: 7,856
    Originally posted by Quizzical

    You don't need an SLC solid state drive, any more than you need an enterprise class hard drive. [...]

    Thanks for the precision.

    As for the SSD: stay with the newest Intel.

    Trust me, it's the fastest and most reliable, unless Samsung pulls a rabbit out of their magic hat (again; it wouldn't be the first time).

     

  • drbaltazardrbaltazar Member UncommonPosts: 7,856
    Originally posted by Greater_Force


    With only 3 big players (2 if you don't count Intel mobile GPUs) in the GPU business it would be a loss not to see NVidia recover from the problems they have been going through. I think ATI cards are great but we need competition in the market to keep prices down and innovation on the move.

    Hmm, I don't know which problems you're talking about. I've got an 8800 GT and I get good performance; not the speed demon ATI just released, but it does fine.

    The main advantage of ATI is that now one card can do the job of two, unless I'm mistaken,

    and down the road it will probably be CrossFire-ready.

    Why ATI did this is obvious:

    Nvidia has SLI plus PhysX,

    and I don't believe ATI has PhysX.

    That's why that card is so big and fast: it's there to counter Nvidia's trio with a single card, which, I'm sorry to say, is a good move.

  • noquarternoquarter Member Posts: 1,170


    Originally posted by drbaltazar
    Originally posted by Greater_Force With only 3 big players (2 if you don't count Intel mobile GPUs) in the GPU business it would be a loss not to see NVidia recover from the problems they have been going through. I think ATI cards are great but we need competition in the market to keep prices down and innovation on the move.
    Hmm, I don't know which problems you're talking about. I've got an 8800 GT and I get good performance; not the speed demon ATI just released, but it does fine.
    The main advantage of ATI is that now one card can do the job of two, unless I'm mistaken,


    I think he means the problems nVidia has had developing a 40 nm chip and implementing DX11 features. A lot of the features in DX11 were supposed to exist in DX10, but pressure from nVidia, who weren't able to design those features into their chips, forced them to be put off until the next version of DX. They also had a run of bad chips that resulted in some big losses.



    and down the road it will probably be CrossFire-ready.
    Why ATI did this is obvious:
    Nvidia has SLI plus PhysX,
    and I don't believe ATI has PhysX.
    That's why that card is so big and fast: it's there to counter Nvidia's trio with a single card, which, I'm sorry to say, is a good move.
    nVidia has SLI, ATI has Crossfire... you can Crossfire the 5870s right away if you want; the 5870 X2 is supposed to be a few weeks behind the 5870. ATI doesn't have hardware support for PhysX since nVidia owns it, but between the Havok API, OpenCL, and DirectX compute it's not going to matter much.


    Honestly, I think GPU physics will die back out anyway as more cores are added to CPUs. CPUs may not be as efficient at handling physics as a GPU, but there is nothing for all those extra cores to work on anyway, and they can still get the job done with that much processing power. GPUs have much more important things to be processing (graphics) when the extra CPU cores can handle those physics calculations.
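
    As a toy illustration of that idea, here's a sketch that spreads a naive particle update across CPU cores with Python's standard multiprocessing pool (purely illustrative; a real physics engine such as Havok or PhysX works very differently):

    import multiprocessing as mp

    DT = 0.016          # one 60 fps frame, in seconds
    GRAVITY = -9.81

    def step(chunk):
        # Advance one chunk of (y_position, y_velocity) particles by one frame.
        return [(y + v * DT, v + GRAVITY * DT) for y, v in chunk]

    if __name__ == "__main__":
        particles = [(100.0, 0.0)] * 400_000
        cores = mp.cpu_count()
        size = len(particles) // cores
        chunks = [particles[i:i + size] for i in range(0, len(particles), size)]

        with mp.Pool(cores) as pool:
            results = pool.map(step, chunks)      # each core integrates its own chunk

        particles = [p for chunk in results for p in chunk]
        print(f"{len(particles)} particles advanced one frame on {cores} cores")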
