GeForce 8 graphics processors to gain PhysX support

gnomexxx Member Posts: 2,920

Cool!

GeForce 8 graphics processors to gain PhysX support

by Cyril Kowaliski — 10:39 AM on February 14, 2008

 

During its fourth-quarter financial results conference call, Nvidia shed a little more light on its acquisition of Ageia and what it plans to do with the firm's PhysX technology. Nvidia CEO Jen-Hsun Huang made no announcements regarding the deal until asked in the question-and-answer session, but he was happy to divulge a decent number of details.

Huang revealed that Nvidia's strategy is to take the PhysX engine and port it onto CUDA. For those not in the know, CUDA stands for Compute Unified Device Architecture, and it's a C-like application programming interface Nvidia developed to let programmers write general-purpose applications that can run on GPUs. All of Nvidia's existing GeForce 8 graphics processors already support CUDA, and Huang confirmed that the cards will be able to run PhysX.
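To give a flavor of what Huang is describing, a minimal, hypothetical sketch of a physics step written as a CUDA kernel might look like the following. This is not code from the actual PhysX port; the kernel and variable names are invented for illustration.

    #include <cuda_runtime.h>

    // Hypothetical toy kernel -- NOT the real PhysX port. One GPU thread
    // integrates one simulated object per frame, which is the kind of
    // data-parallel, C-like code CUDA lets developers run on a GeForce 8.
    __global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        vel[i].y -= 9.81f * dt;        // apply gravity to the velocity
        pos[i].x += vel[i].x * dt;     // advance position by velocity
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }

    // Host side: one thread per object, 256 threads per block.
    // integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);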

"We're working toward the physics-engine-to-CUDA port as we speak. And we intend to throw a lot of resources at it. You know, I wouldn't be surprised if it helps our GPU sales even in advance of [the port's completion]. The reason is, [it's] just gonna be a software download. Every single GPU that is CUDA-enabled will be able to run the physics engine when it comes. . . . Every one of our GeForce 8-series GPUs runs CUDA."

Huang thinks the integration will encourage people to spend more on graphics processing hardware, as well:

"Our expectation is that this is gonna encourage people to buy even better GPUs. It might—and probably will—encourage people to buy a second GPU for their SLI slot. And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics."

Last, but not least, Huang said developers are "really excited" about the PhysX-to-CUDA port. "Finally they're able to get a physics engine accelerated into a very large population of gamers," he explained. Huang was unwilling to get into a time frame for the release of the first PhysX port. However, considering this will be purely a software implementation and Nvidia now has Ageia engineers on its payroll, the port may not take too long to complete.


Comments

  • frodus Member Posts: 2,396

    Does it give a release date? Sounds like there might be something to this.

    Trade in material assumptions for spiritual facts and make permanent progress.

  • gnomexxx Member Posts: 2,920

    Originally posted by frodus
    Does it give a release date? Sounds like there might be something to this.

    That's what I'm thinking.  I'd be willing to buy a new motherboard that can hold three video cards if this pans out nicely.  I've already got one with two cards in SLI.  What's one more?  lol.


  • CleffyII Member Posts: 3,440

    I still don't see what the big deal is. ATI has been offering this technology for the last two years using the more popular Havok, but no one is raving about it.


  • frodus Member Posts: 2,396

    Did you have to bring ATI into this GeForce thread? It's new! Frodus loves his dual 8800's.

    SLI + dual 8800's = happy times

    Trade in material assumptions for spiritual facts and make permanent progress.

  • Spathotan Member Posts: 3,928

    Agree with Cleffy. Not to mention it's overhyped marketing bullshit, which is the reason Ageia PhysX cards never sold in the first place. With quad-core CPUs this is worthless, along with today's stronger GPUs and multi-GPU systems in general. Not to mention there are only like... two games that support/run PhysX, both of them Ghost Recon.

    "There's no star system Slave I can't reach, and there's no planet I can't find. There's nowhere in the Galaxy for you to run. Might as well give up now."
    — Boba Fett

  • gnomexxx Member Posts: 2,920

    Originally posted by Spathotan
    Agree with Cleffy. Not to mention it's overhyped marketing bullshit, which is the reason Ageia PhysX cards never sold in the first place. With quad-core CPUs this is worthless, along with today's stronger GPUs and multi-GPU systems in general. Not to mention there are only like... two games that support/run PhysX, both of them Ghost Recon.

    I think after this, there will probably be more games coming out.


  • gnomexxx Member Posts: 2,920

    Originally posted by frodus
    Did you have to bring ATI into this GeForce thread? It's new! Frodus loves his dual 8800's.
    SLI + dual 8800's = happy times

    I like mine too.  They're a lot of fun.


  • Spathotan Member Posts: 3,928

    Originally posted by gnomexxx
    I think after this, there will probably be more games coming out.

    Possibly, but with this merger happening just now, it would be another year-plus before we saw this, and by then 8-core processors will be out, once again rendering this technology pointless. For years hardware has been far surpassing the speed of software development. Crysis WAS an exception for like three weeks, then better drivers came out; Crysis is no match for my Q6600 and 8800GT at very high settings.

    Don't get me wrong, I'm not totally shitting on this and I don't mean to bring your thread down. But this technology is pretty worthless right now.

    "There's no star system Slave I can't reach, and there's no planet I can't find. There's nowhere in the Galaxy for you to run. Might as well give up now."
    — Boba Fett

  • gnomexxx Member Posts: 2,920

    Originally posted by Spathotan
    Possibly, but with this merger happening just now, it would be another year-plus before we saw this, and by then 8-core processors will be out... But this technology is pretty worthless right now.

    I'm not a techy guy, but I thought the move was to take a lot of the load of graphics processing off of the CPU and move it onto the video card.  Wouldn't this fit into that line of thinking?

    Don't laugh if I'm totally off base.  Like I said, I ain't techy.


  • CleffyII Member Posts: 3,440

    In all honesty, it's going to slow down graphical rendering, as it did with ATI cards.  The GPU was never really optimized for PPU-style processing; the CPU, though, is more generic and can handle most processing tasks.  The whole point of a GPU is to make graphics processing faster, so by adding in the physics processing algorithms, it would give up some of its dedicated graphics processing power, making rendering slower.  You also have to remember that most of the processing in a game is handled on the GPU.  In contrast, the CPU really isn't utilized as much, due to the highly graphical nature of next-gen games.
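    (To put Cleffy's contention point in concrete terms, here is a hypothetical CUDA snippet that times a toy physics kernel with CUDA events; on a single card, GPU time spent in a kernel like this is time the shader units cannot spend rendering the same frame. All names are invented for illustration.)

        #include <cstdio>
        #include <cuda_runtime.h>

        // Toy stand-in for a physics pass -- not real PhysX code.
        __global__ void physicsStep(float *vel, int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) vel[i] -= 9.81f * dt;   // fake collision math
        }

        int main()
        {
            const int n = 1 << 20;             // a million simulated objects
            float *d_vel;
            cudaMalloc(&d_vel, n * sizeof(float));
            cudaMemset(d_vel, 0, n * sizeof(float));

            cudaEvent_t start, stop;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord(start);
            physicsStep<<<(n + 255) / 256, 256>>>(d_vel, n, 0.016f);
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);
            // Every millisecond measured here is GPU time that was not
            // available for graphics work in the same frame.
            printf("physics kernel: %.3f ms\n", ms);
            cudaFree(d_vel);
            return 0;
        }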


  • Spathotan Member Posts: 3,928

    Originally posted by gnomexxx
    I'm not a techy guy, but I thought the move was to take a lot of the load of graphics processing off of the CPU and move it onto the video card.  Wouldn't this fit into that line of thinking?

    That's the thing though: with today's CPUs it's not a problem.  Physics is handled by the CPU, not the GPU, but things like Ageia PhysX cards, or Crossfire with a third card, can make a GPU render physics instead of the processor.  But why go spend $200-$400 on another card, taking up a valuable mobo slot and space in the case, when what you need can be done on shit that's already in your system: the CPU?
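    (A hypothetical sketch of the CPU-side alternative Spathotan describes: the same sort of toy integration step, spread across however many cores the processor has, with no extra card involved. The function names are invented; this is plain host-side C++ rather than device code.)

        #include <thread>
        #include <vector>

        // Toy physics step partitioned across the available CPU cores.
        void integrateRange(std::vector<float>& vel, size_t lo, size_t hi,
                            float dt)
        {
            for (size_t i = lo; i < hi; ++i)
                vel[i] -= 9.81f * dt;          // fake physics math
        }

        void physicsStep(std::vector<float>& vel, float dt)
        {
            unsigned cores = std::thread::hardware_concurrency();
            if (cores == 0) cores = 4;         // assume a quad-core (Q6600-ish)
            std::vector<std::thread> pool;
            size_t chunk = vel.size() / cores;
            for (unsigned c = 0; c < cores; ++c) {
                size_t lo = c * chunk;
                size_t hi = (c + 1 == cores) ? vel.size() : lo + chunk;
                pool.emplace_back(integrateRange, std::ref(vel), lo, hi, dt);
            }
            for (std::thread& t : pool)
                t.join();                      // each core works its own slice
        }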

    "There's no star system Slave I can't reach, and there's no planet I can't find. There's nowhere in the Galaxy for you to run. Might as well give up now."
    — Boba Fett

  • gnomexxx Member Posts: 2,920

    Originally posted by Spathotan
    That's the thing though: with today's CPUs it's not a problem... why go spend $200-$400 on another card when what you need can be done on shit that's already in your system: the CPU?

    Again, I'm not techy, so I'm asking.  But I thought the more you send to the CPU, the more you clog up the pipes that go to and from it, creating a data bottleneck.  It can only take in and spit out so much, right?  It's like what I heard about bandwidth the other day: hard drives can only receive data so fast, but people are always screaming for more Internet bandwidth.  Eventually (if not already, in places) the hard drive won't be able to write data as fast as it's receiving it.  So in the graphics case, even though the CPU may be able to do the processing, it just isn't receiving the data fast enough, or doesn't have the means to send what it computes back out.


  • Spathotan Member Posts: 3,928

    Originally posted by gnomexxx
    But I thought the more you send to the CPU, the more you clog up the pipes that go to and from it, creating a data bottleneck... even though the CPU may be able to do the processing, it just isn't receiving the data fast enough, or doesn't have the means to send what it computes back out.

    All that hard drive stuff is rubbish.  That's what RAM is for; it's nothing but a buffer for the hard drive.  And with SSDs (solid-state drives, aka flash-based hard drives) out there now and picking up pace, RAM will soon become borderline useless, but that's still a long way off.

    And as far as the CPU becoming "clogged", that's what multi-core CPUs are for :-)

    "There's no star system Slave I can't reach, and there's no planet I can't find. There's nowhere in the Galaxy for you to run. Might as well give up now."
    — Boba Fett

  • CleffyII Member Posts: 3,440

    Bottlenecking would be a problem with or without a PPU.  You would still need to send the data to the CPU in order for it to dispatch the work to the PPU.  Also, the lanes to and from the CPU are wider than those to a PCIe slot.


  • baff Member Posts: 9,457

    This is something I've been waiting for.

    It's been pretty obvious to me that Ageia wasn't selling: although it's an excellent piece of kit (have a look at Ghost Recon Advanced Warfighter if you want to see the difference between using it and not using it), it is overpriced, and not enough games actually use it to warrant buying it.

    Clearly then, with the company guaranteed to fold but the tech excellent, it was set to get bought out cheap and sold at a price the market can bear.

    (That super network card they used to advertise on this site is another prime candidate.)

    I don't know if you remember when Nvidia bought out 3DFX.  3DFX was the technology leader at the time, and the next generation of Nvidia cards all came with anti-aliasing straight afterwards.

    What remains to be seen is whether Nvidia includes a PPU chip on its next graphics board, or whether the graphics chip itself is all that's needed.  I guess we won't know until the 9-series is released.
