
New Rendering method removes polygons. Future of Graphics looks "Round"!


Comments

  • SnarlingWolf Member Posts: 2,697

    Many issues with this video.

     

    First, any time they draw "8 billion" points on screen, it's the same object in the same state. There are ways to load one object into memory and replicate it many times on screen without much hardware work. They don't show scenes with thousands of different objects on screen at once. They don't show lighting or animation, as one person said. They don't show particle systems either. A lot is missing here that would be needed to make this usable.

     

    They also exaggerate by showing low-quality polygon games and acting like that's what all polygon games look like, as opposed to something like the Crysis engine on max settings.

     

    They also say the reason theirs is fast is that they only draw what's shown. Guess what: that's how polygon engines work too. They determine which objects are in front, map that information to areas of the screen, work out what color those pixels should be, and draw the scene.
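
    To illustrate the point: below is a minimal sketch of the depth-buffer test a polygon rasterizer applies per pixel, so hidden fragments are never shaded. The resolution and fragment values are made up for illustration; real pipelines do this in hardware.

    # Depth-buffer sketch: only the nearest fragment per pixel survives,
    # so a polygon engine also "draws only what's shown".
    WIDTH, HEIGHT = 640, 480
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def write_fragment(x, y, z, rgb):
        # Keep the fragment only if it is closer than what is already stored.
        if z < depth[y][x]:
            depth[y][x] = z
            color[y][x] = rgb

    write_fragment(100, 100, 5.0, (255, 0, 0))  # far red fragment
    write_fragment(100, 100, 2.0, (0, 255, 0))  # nearer green fragment wins
    print(color[100][100])  # (0, 255, 0)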


    There's a lot of misleading material in this guy's talk, and it's definitely a salesman's pitch. I wouldn't put much stock in it.

  • glofish Member Uncommon Posts: 346

    The technology presented in the video does not work for gaming - pretty much everything he says is a lie.

  • championsFan Member Posts: 419

    The method could easily incorporate static animations, such as a waterfall or a Ferris wheel rotating in the background. If you can search a static 3D space, you can also search a static 4D space-time.

    The problem comes when animations need to be generated dynamically, based on what the player has chosen to do. The only way I can see to do this with a calculate-in-advance-and-search method would be to precompute the staggeringly large space of all possible player actions and the resulting images, and to search through that.
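
    To make "search" concrete, here is a toy sketch of one ray-per-pixel query against a static point set bucketed into a coarse grid. The grid index and step size are my invention for illustration - the video never reveals the actual data structure - and a fixed animation would just add time as a fourth coordinate in the key.

    # Toy "one search per screen pixel" against a static, pre-indexed point set.
    CELL = 1.0  # coarse grid cell size (arbitrary)

    def cell_of(p):
        return tuple(int(c // CELL) for c in p)

    def build_index(points):
        index = {}
        for p in points:
            index.setdefault(cell_of(p), []).append(p)
        return index

    def first_hit(origin, direction, index, max_t=50.0, step=0.5):
        # March the ray through grid cells; return the nearest point in the
        # first non-empty cell instead of testing every point in the world.
        t = 0.0
        while t < max_t:
            pos = tuple(o + d * t for o, d in zip(origin, direction))
            pts = index.get(cell_of(pos))
            if pts:
                return min(pts, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, origin)))
            t += step
        return None

    index = build_index([(3.2, 0.1, 0.0), (7.9, 0.2, 0.1)])
    print(first_hit((0.0, 0.0, 0.0), (1.0, 0.02, 0.01), index))  # (3.2, 0.1, 0.0)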

    Cryptic is trying a Customer Development approach to MMO creation.

  • Reizla Member Rare Posts: 4,092

    To all the naysayers who say this technology won't work...

    Keep in mind that the first "real" 3D games (Wolfenstein & Doom) were based on software 3D rendering on old 2D video cards. At the time this was extremely revolutionary as well; we all wondered "how the hell did they pull this off?". From there Voodoo jumped into the 3D market, and then ATi and nVidia took over.

    I think this technology has a future, but like I said before, you need CPU power instead of GPU power. When displaying moving 3D objects, you only need some extra calculations to get the animations right; that's where the CPU power comes in.

    Personally I think we'll see a hybrid first, where old-fashioned GPUs handle the moving 3D objects and this technology is used for the background and scenery. As CPUs get more and more powerful, this technology might take over completely. What's left for ATi and nVidia is to make their GPUs more like the old co-processors for graphical calculations, instead of rasterizing polygons as they do now...

  • Skuz Member Uncommon Posts: 1,018

    I think the "it's all static and nothing moves" point is a very salient one. This might be a great tool for background/environment generation where nothing is malleable or destructible, but there's nothing in the video to show it can be effectively applied to games in terms of generating moving graphics, which is the stock in trade of games.

    So it may be an interesting tech demo, but its real-world application in games is a complete unknown at this point.

  • Regen Member Posts: 53

    Lacking the technical knowledge to make a bombastic statement, I can still say one thing: 3D rendering today seems inefficient.

    I might be mistaken, but doesn't a normal 3D renderer draw whole objects, only to draw another object in front of them? If so, it draws a whole lot of stuff it doesn't need to.


    Seeing how someone has a quote from Einstein:
    "We can't solve problems by using the same kind of thinking we used when we created them."

    I'll wait for something better than a rushed demo before I say whether it will work or not.


    http://www.mmorpg.com/discussion2.cfm/thread/261448/page/5


    "I'd just like to see more games that focus on the world, and giving the people in it more of a role, im tired of these constant single player games that you can walk around with millions of people."


    - Parsalin

  • FrEaK411 Member Posts: 3

    Wow, this is amazing stuff. I can't believe it's running purely in software... I'll be sure to keep an eye on this! Thanks for the post, OP!

  • Mopar63 Member Uncommon Posts: 300

    The real threat to discrete graphics right now looks to be services like OnLive. You can use ANY computer to play your game. That means a $400 Dell desktop, a $300 Compaq laptop, possibly even a netbook, could all run the same games.


  • merieke82 Member Posts: 165
    Originally posted by Reizla


    To all the naysayers who say this technology won't work...
    Keep in mind that the first "real" 3D games (Wolfenstein & Doom) were based on software 3D rendering on old 2D video cards. At the time this was extremely revolutionary as well; we all wondered "how the hell did they pull this off?". From there Voodoo jumped into the 3D market, and then ATi and nVidia took over.
    I think this technology has a future, but like I said before, you need CPU power instead of GPU power. When displaying moving 3D objects, you only need some extra calculations to get the animations right; that's where the CPU power comes in.
    Personally I think we'll see a hybrid first, where old-fashioned GPUs handle the moving 3D objects and this technology is used for the background and scenery. As CPUs get more and more powerful, this technology might take over completely. What's left for ATi and nVidia is to make their GPUs more like the old co-processors for graphical calculations, instead of rasterizing polygons as they do now...

     

    I had considered this while watching the video. Would it be possible to blend this approach with a polygon engine? I'm fairly certain the answer is no. I mean, you could do it, but you wouldn't be better off than with a pure polygon engine.

     

    Let's assume you draw some awesome static background using points, with precalculated lighting and animations. Now say you insert two player characters, rendered with traditional polygons, going at each other with various attacks. Every action they take across the landscape dynamically changes the shadows they cast, along with ability effects that alter the color and light around them. To draw this you'd have to apply your "unlimited" precalculated lighting on top of the polygon engine, and then composite the polygon rendering into the background. I just don't see how it's possible, but I'd welcome someone proving me wrong. (Take that a step further and consider a scene with physics, where a hanging light fixture can sway as a result of player action.)
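
    For what it's worth, the purely mechanical part of a blend - deciding per pixel whether the point-cloud pass or the polygon pass is in front - is simple if both passes share a depth buffer. A sketch with invented data follows; note it handles occlusion only, not the dynamic shadow and lighting interplay described above, which is the actual hard part.

    # Compositing a static point-cloud pass with a dynamic polygon pass:
    # per pixel, whichever source is nearer wins.
    def composite(point_pass, polygon_pass):
        out = []
        for pt, poly in zip(point_pass, polygon_pass):
            # Each entry is (depth, label) or None if that pass drew nothing.
            if poly is None or (pt is not None and pt[0] < poly[0]):
                out.append(pt[1] if pt else None)
            else:
                out.append(poly[1])
        return out

    points   = [(4.0, "rock"), (9.0, "cliff"), None]    # static background pass
    polygons = [(2.5, "player"), None, (6.0, "sword")]  # dynamic polygon pass
    print(composite(points, polygons))  # ['player', 'cliff', 'sword']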

     

    I personally think DirectX 11 tessellation will achieve the result desired here more easily, in terms of both processing power and ease of development. If you haven't seen the DX11 tessellation video, look it up on YouTube.

     

    Honestly, even if the tech pans out, what development team is going to have the time and resources to build a game out of billions or trillions of points?

  • Recon48 Member Uncommon Posts: 218
    Originally posted by glofish


    The method is neither new nor revolutionary - it has been around for ages; it is just not suitable for gaming. The company in question is just trying to raise capital from non-technical but rich investors.

     
    If you don't believe me, ask yourself the following question: why does the video not show a single moving (animated) object? I'll tell you why... the technique is suitable only for static landscapes where the search tree for each pixel is precomputed. You can have unlimited detail as long as all the details are static and only the camera (the point of observation) can move...


    Ummm, no thanks...

    Think of it in these terms... You said it yourself: "static landscapes". Imagine that the game world (or at least the static objects contained therein) could be point-cloud based, while all of the animated or interactive polygonal objects are dropped into this world. The static portion of the environment would be software rendered, without using a single polygon. Now imagine the number of polygons this frees up for the modeled objects' polygon counts. More model polys = more realism.

  • merieke82 Member Posts: 165
    Originally posted by Mopar63


    The real threat to discrete graphics right now looks to be services like OnLive. You can use ANY computer to play your game. That means a $400 Dell desktop, a $300 Compaq laptop, possibly even a netbook, could all run the same games.

    Those services will hold more value for gamers on very low budgets. Gamers with disposable income who like to maintain a fast PC, download mods, and retain control over their system will not use those services.

     

    It will hopefully never replace the demand for high-end consumer video cards, but I agree it will affect the market share to some degree.


  • maji Member Uncommon Posts: 2,091

    Interesting.

    Still....

    1) It's an advertisement and hype video. I don't trust everything they say.

    2) Even if they display only what is needed and visible from the user's point of view, some scenes (especially ones with a very long view distance) still require the computer to calculate lots and lots and LOTS of the points the models are made of. So the longer the view distance, the longer the calculation takes. Which means that, if everything the guy says comes true, there will still be a race - no longer over the number of polygons displayed, but over the number of dots the hardware can calculate.

    Let's play Fallen Earth (blind, 300 episodes)

    Let's play Guild Wars 2 (blind, 45 episodes)

  • Thomas2006 Member Rare Posts: 1,152

    This tech is still too far off. It might be 16 months before someone could create a static scene you could travel around, but it's a LOT farther off in terms of being put into actual gameplay. I'd say it's more like 10+ years before we start seeing any real games attempt to put it to use.

    There are so many issues to tackle before this could be put to real use. It's one thing to put together a demo app of nothing but a static scene with simple graphics; it's another thing altogether to put together a demo of a working game. Yes, it seems it could render the scene without too many issues, but how will that hold up when the same CPU is also running actual game logic? What about AI, animation, full-blown lighting, GUIs, etc.? All of them are extra overhead, also computed by the CPU.

    Most graphics engines can render a 3D scene at 100+ fps when there's no other overhead. But that doesn't mean jack if you can't also run the actual game logic and keep that frame rate constant.

  • Skuz Member Uncommon Posts: 1,018

    The other thing to bear in mind was the extreme reuse of the same object over and over, which would make any algorithm look extremely efficient.

    Far-distance scenes won't be any harder to render than short-range ones if the algorithm does what they say it does, because the number of pixels on a screen is constant.
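
    That is the crucial property, if the claim holds: with one index search per pixel, per-frame work scales with screen resolution times roughly the log of the point count, not with scene size. A back-of-the-envelope sketch (the balanced-tree search assumption is mine, not the video's):

    # One search per pixel; assume each search walks a balanced tree of points.
    from math import log2

    def rough_ops_per_frame(width, height, total_points):
        return width * height * log2(total_points)

    for points in (10**9, 10**12, 10**15):
        print(f"{points:.0e} points -> ~{rough_ops_per_frame(1024, 768, points):.2e} ops/frame")
    # A million times more points costs only about 1.7x more work per frame.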

    The really impressive thing will be when this tech can "do" a demo of a fully moving scene through highly varied terrain - say, a soldier moving through dense jungle high in the hills, with abundant wildlife, varied plant types, moving opponents, and so on. Then I might be willing to lower my scepticism - if it can do all that without choking the CPU, which still has its other gaming duties: collision detection, network comms, physics, AI... It's a big ask, I think.

  • SnarlingWolf Member Posts: 2,697
    Originally posted by Reizla


    To all the naysayers who say this technology won't work...
    Keep in mind that the first "real" 3D games (Wolfenstein & Doom) were based on software 3D rendering on old 2D video cards. At the time this was extremely revolutionary as well; we all wondered "how the hell did they pull this off?". From there Voodoo jumped into the 3D market, and then ATi and nVidia took over.
    I think this technology has a future, but like I said before, you need CPU power instead of GPU power. When displaying moving 3D objects, you only need some extra calculations to get the animations right; that's where the CPU power comes in.
    Personally I think we'll see a hybrid first, where old-fashioned GPUs handle the moving 3D objects and this technology is used for the background and scenery. As CPUs get more and more powerful, this technology might take over completely. What's left for ATi and nVidia is to make their GPUs more like the old co-processors for graphical calculations, instead of rasterizing polygons as they do now...



     

    My first statement still stands, and I do know a lot about 3D graphics and how they work. I wouldn't put much stake in what this guy is pitching at all, but I did want to address one point of yours.

     

    Yes, the first 3D games were software based; there are two reasons they moved to the GPU. It allowed for much more power, hence much better-looking graphics, by having a chip dedicated to the job (this is also why companies have been trying to push physics chips, so we could have some amazing dynamic real-time physics in games). Graphics get their own pipeline and memory, which lets them push the limits of the technology.

     

    The other reason is why we are unlikely to go back to doing graphics on the CPU. The CPU is now free to calculate things such as AI decisions (complex AI is a huge CPU eater), player keystrokes and actions, interpreting the incoming network traffic and sending it through the program to update the positions of everything in the 3D world, physics (see above for why they want physics on its own chip as well), etc.

     

    The CPU does a lot of work these days, as does the GPU. Suddenly trying to put all the rendering back on the CPU would significantly hinder games, even with a new, magical, too-good-to-be-true method of doing so.

  • drbaltazar Member Uncommon Posts: 7,856
    Originally posted by SnarlingWolf

    Originally posted by Reizla


    To all the naysayers who say this technology won't work...
    Keep in mind that the first "real" 3D games (Wolfenstein & Doom) were based on software 3D rendering on old 2D video cards. At the time this was extremely revolutionary as well; we all wondered "how the hell did they pull this off?". From there Voodoo jumped into the 3D market, and then ATi and nVidia took over.
    I think this technology has a future, but like I said before, you need CPU power instead of GPU power. When displaying moving 3D objects, you only need some extra calculations to get the animations right; that's where the CPU power comes in.
    Personally I think we'll see a hybrid first, where old-fashioned GPUs handle the moving 3D objects and this technology is used for the background and scenery. As CPUs get more and more powerful, this technology might take over completely. What's left for ATi and nVidia is to make their GPUs more like the old co-processors for graphical calculations, instead of rasterizing polygons as they do now...



     

    My first statement still stands, and I do know a lot about 3D graphics and how they work. I wouldn't put much stake in what this guy is pitching at all, but I did want to address one point of yours.

    Yes, the first 3D games were software based; there are two reasons they moved to the GPU. It allowed for much more power, hence much better-looking graphics, by having a chip dedicated to the job (this is also why companies have been trying to push physics chips, so we could have some amazing dynamic real-time physics in games). Graphics get their own pipeline and memory, which lets them push the limits of the technology.

    The other reason is why we are unlikely to go back to doing graphics on the CPU. The CPU is now free to calculate things such as AI decisions (complex AI is a huge CPU eater), player keystrokes and actions, interpreting the incoming network traffic and sending it through the program to update the positions of everything in the 3D world, physics (see above for why they want physics on its own chip as well), etc.

    The CPU does a lot of work these days, as does the GPU. Suddenly trying to put all the rendering back on the CPU would significantly hinder games, even with a new, magical, too-good-to-be-true method of doing so.

     

    I think this system the OP is talking about will go hybrid!

    They'll use this new system for static objects (grass, the world, etc.) so you can see everything, and they'll use the GPU only for the movable objects: characters, NPCs, animals, and so on.

    Best of both worlds: you'll see scenery from 10,000 miles away, and you'll get the insane resolution of the GPU for the action.

    Where is the game now, so I can start enjoying it?

    Oh, and that game will use Microsoft Donnybrook to make sure we can have lots of players on those maps.

    Grin! That's the sales pitch Intel is getting, and you know what? If one company has the might and the engineers to make it a reality, it's Intel or IBM!

  • Liltawen Member Uncommon Posts: 245
    Originally posted by Miles-Prower

    Originally posted by drbaltazar


    One thing is sure: I bet Intel will phone those guys and research the technology. If Intel's engineers think they can make it work at a cheaper price, it might become the norm very fast! We'll have to wait and see, since a lot of technobabble kinks need to be ironed out and the cloud of doubt needs to be cleared, but there might be potential in this idea!
    We went to AGP because PCI was too slow; then later (I think it was Intel) they jumped in with PCI-E, and we've been with that ever since.
    So this technology might have some merit in the long run (if companies put as much effort into it as they put into the polygon system!)



    There's really no doubt about it in my mind. Polygons are not the future of gaming. Polygons may get more and more realistic, but they will always suffer from jaggies and the like. Things like this are the future of gaming, and I hope we'll see something like it hit the market in the distant future. As for what that means for graphics card manufacturers? Well, I guess they'll have to adapt.



    ~Miles "Tails" Prower out! Catch me if you can!

    For years, 3D programs that started out polygon-based have been slowly adding various fractal/instancing techniques, making today's programs a strange mixture of both. We're already beginning to see next-generation 3D programs like Modo and LightWave CORE being redesigned from the ground up to be fractal/instance-based rather than polygon-based. This is the future.

  • Serpentar Member Posts: 246

    Funny - voxels in the late '90s and early 2000s were supposed to be the same kind of polygon killer, and they got used in a handful of games (namely Command & Conquer: Tiberian Sun) and no further. Polygons have been used in computer graphics almost since its inception; I don't see them dying out anytime soon.

  • Papadam Member Posts: 2,102
    Originally posted by Serpentar


    Funny - voxels in the late '90s and early 2000s were supposed to be the same kind of polygon killer, and they got used in a handful of games (namely Command & Conquer: Tiberian Sun) and no further. Polygons have been used in computer graphics almost since its inception; I don't see them dying out anytime soon.

     

    Yeah, I remember a game called Outcast that also used this technology. One of the most underrated games ever released, and it looked very good for its time. I wish someone would make an MMO based on that game :)

    If WoW = The Beatles
    and WAR = Led Zeppelin
    Then LotrO = Pink Floyd

  • Miles-Prower Member Posts: 1,106
    Originally posted by Xodic


    I have noticed these forums are filled with great information on anything related to video games. I should have started logging in and reading here a long time ago. Thanks for the information! I would never have heard about this otherwise.



    You're very welcome. As a passionate gamer, I love sharing information about games. Yes, I realize this is MMORPG.com, but we're all gamers at heart!



    ~Miles "Tails" Prower out! Catch me if you can!

    Come Join us at www.globalequestria.com - Meet other fans of My Little Pony: Friendship is Magic!
  • Denus Member Posts: 40

    Call me incredulous, but there has been a fair amount of research into 3D graphics - not merely entertainment and video game research, but academic and scientific research. Given that anyone who actually came up with an algorithm that could do what the video claims would most likely go down in history as one of the greatest computer scientists ever, my first gut instinct is to say "snake oil".

     

    I should actually learn more about 3D graphics just to see whether what this guy is talking about holds up. Either way, I'll remember this in a year and a half.

     

    Edit: I should also note that, even disregarding rude comments, there is a general trend of downranking comments that question or show disbelief in this, and upranking comments that praise or show wonder at it. This video raises so many red flags it's not even funny.

  • GTwander Member Uncommon Posts: 6,035

    Not a computer whiz here, but even if they only use a "point per pixel" during processing, wouldn't there still be a HUGE amount of data being used in the background for storage? Seems that way to me, anyway.

    Writer / Musician / Game Designer

    Now Playing: Skyrim, Wurm Online, Tropico 4
    Waiting On: GW2, TSW, Archeage, The Rapture

  • drbaltazar Member Uncommon Posts: 7,856
    Originally posted by Serpentar


    Funny - voxels in the late '90s and early 2000s were supposed to be the same kind of polygon killer, and they got used in a handful of games (namely Command & Conquer: Tiberian Sun) and no further. Polygons have been used in computer graphics almost since its inception; I don't see them dying out anytime soon.

     

    lol, don't dismiss it too fast - it all depends on one factor:

    ease of use for programmers.

    If this idea can be made into an engine like other game engines, and it looks good, it can take the market very fast.

    It's all in the ease of use!

  • championsFan Member Posts: 419
    Originally posted by GTwander


    Not a computer whiz here, but even if they only use a "point per pixel" during processing, wouldn't there still be a HUGE amount of data being used in the background for storage? Seems that way to me, anyway.

    The narrator says "trillions of trillions of points", so it all depends on their compression algorithms. Storing that many points uncompressed is impossible even in the largest data centers; they would need compression on the order of 1,000,000,000,000 to 1 to fit that many points in less than a terabyte.
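
    A quick sanity check on that arithmetic - the 16-bytes-per-raw-point figure below is my assumption, since the video gives no numbers:

    # Back-of-the-envelope storage for "trillions of trillions of points".
    POINTS = 1e24            # "trillions of trillions"
    BYTES_PER_POINT = 16     # assumed: three position floats plus color
    TERABYTE = 1e12

    raw = POINTS * BYTES_PER_POINT
    print(f"uncompressed: {raw / TERABYTE:.1e} TB")  # 1.6e+13 TB

    for ratio in (1e12, 1e13):
        print(f"{ratio:.0e}:1 compression -> {raw / ratio / TERABYTE:,.1f} TB")
    # 1e+12:1 -> 16.0 TB ; 1e+13:1 -> 1.6 TB

    Massive instancing - the same few objects repeated endlessly, as several posters have noticed - would account for most of such a ratio without any exotic compression.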

     

    This thread has mostly revolved around static vs. animated, but really the only issue is whether the whole thing is a hoax. The official Unlimited Detail site says the YouTube video was rendered in real time on a single-core laptop. If their algorithm really can achieve trillion-to-one compression of point data and then search and decompress it in real time, then it is as radically far ahead as they claim.

    Cryptic is trying a Customer Development approach to MMO creation.

  • dzikun Member Posts: 150

    Interesting and promising...

    I've been uplinked and downloaded, I've been inputted and outsourced. I know the upside of downsizing, I know the downside of upgrading.

    I'm a high-tech low-life. A cutting-edge, state-of-the-art, bi-coastal multi-tasker, and I can give you a gigabyte in a nanosecond.

    I'm new-wave, but I'm old-school; and my inner child is outward-bound.

    I'm a hot-wired, heat-seeking, warm-hearted cool customer; voice-activated and bio-degradable.

    RIP George Carlin.
