
Has anyone played an MMORPG using Nvidia's 3D Vision?

Thebigthrill, Member Uncommon, Posts: 117

I'm upgrading the 47-inch Philips LED TV that I use as my computer monitor.

I want something a little bigger, so I'm thinking of going with a 55-inch Sony 3D TV.

This is my first 3D TV, so I checked Nvidia's 3D Vision compatibility list, and the Sony TV I'm getting is supported.

I bought an Nvidia card last year. I can't remember the exact model, but it does support 3D Vision, so either way, this time next week I should be playing WoW on a 55-inch Sony TV in 3D.

I'm curious whether anyone has done this and if it's any good.

Will I get a headache? Can I play for long periods without passing out?

The reason I'm asking is that I want to know how good it is, and whether I should get the 55-inch 3D TV or just go with a 60-inch Sony LED TV without 3D. The 60-inch without 3D is the same price as the 55-inch with 3D.

"Don't tell me what to do! , you're not my mod"

Saying invented by me.

Comments

  • pl3dge, Member Uncommon, Posts: 183

    I've tried most of the ones Nvidia lists as supported, and none of them stand out as being amazing in 3D; to be honest, it's not worth it. The only MMO I really enjoyed in 3D was LOTRO. I actually haven't tried Aion in 3D, and that's meant to be 3D Vision Ready, so it could be good.

    However, there are a lot of good single-player games that use 3D pretty well. The Batman series, for example. :)

     

    WoW is not very good in 3D; it leaves the game world as normal and puts depth on the UI, making it quite annoying to play.


  • madazz, Member Rare, Posts: 2,115

    I've tried a few. The problem is that when they are not supported properly, you get weird floating elements that can really detract from the game. I haven't tried any in months. Come to think of it, I haven't even tried a single-player game in 3D in a while either.

    Also, you'll find that unless the game is older or has low requirements, you'll want to turn the resolution down. After all, you are basically rendering the game twice.

    I would say 3D is pretty cool, but in its current state for gaming it isn't anything to get excited about. I just show it off to some of my buddies because it is interesting. Some games look pretty cool with it on (like BF3, Civ 5, and DNF too), but it mostly just detracts from gameplay and makes others dizzy.

  • Thebigthrill, Member Uncommon, Posts: 117

    Thanks for the replies.

    BF3 is one of the games I'm looking forward to.

    World of Warcraft

    BF3

    C&C Generals

    Grim Fandango

     

    "Don't tell me what to do! , you're not my mod"

    Saying invented by me.

  • madazz, Member Rare, Posts: 2,115
    I just checked the compatibility list, and only Aion is actually rated 3D Vision Ready. EverQuest is listed as Excellent, as are Fallen Earth and World of Warcraft. Now, Excellent doesn't mean playable; it just means the game itself may look awesome. You might find what I mentioned before to be true (weird floaties). I suggest reading up on them a bit before diving in knee deep. I am sure people have posted reviews of their experiences.
  • jdnewell, Member Uncommon, Posts: 2,237

    I personally have never tried it, but I know some people who have.

    From my understanding it's gimmicky: something you may try for a bit and then just won't worry about much, if at all.

    I for sure wouldn't spend extra money on it. But your opinion may differ.

  • Coleguilla, Member, Posts: 23

    I tried a lot of them, but only liked two: WoW and GW2. Both run great in 3D. I don't recommend TERA, Rift, AoC, GW1, WAR... Haven't tried TSW and SWTOR.

    The biggest problem is that you have to play with a mouse cursor, and most of the games show you a double image. You can try to get used to clicking in the middle of the two, but it's pretty annoying.

  • madazz, Member Rare, Posts: 2,115
    Originally posted by Coleguilla

    I tried a lot of them, but only liked two: WoW and GW2. Both run great in 3D. I don't recommend TERA, Rift, AoC, GW1, WAR... Haven't tried TSW and SWTOR.

    The biggest problem is that you have to play with a mouse cursor, and most of the games show you a double image. You can try to get used to clicking in the middle of the two, but it's pretty annoying.

    That would suck. Oddly, I have never had that issue with my setup. Instead it just looks like my cursor is on a different level than everything else. It's very distracting!

    And to the OP: you will like BF3 in 3D. But I don't think it'll help you in multiplayer, though it will look badass!

  • Enerzeal, Member, Posts: 326

    Honestly, I would hold off on this gimmick, save the cash, upgrade to the 60-inch instead, and wait for this to drop:

    http://www.oculusvr.com/

    Oculus Rift. Just from watching the video on the main page of that link, you can see how many big-name developers are supporting it. In the next few years, I think VR is finally going to go mainstream. I personally cannot wait to give it a whirl either!

  • Quizzical, Member Legendary, Posts: 25,501

    It will be very hit and miss.  Stereoscopic 3D simply isn't ready for prime time yet.  In order to do it properly, you need for the game programmers to implement it in shaders.  The industry standards for that are DirectX 11.1 (which is only available for Windows 8) and OpenGL 4.2 or later.  I'm not aware of any games out yet that use either of those APIs to do stereoscopic 3D.

    Apart from that, there are only three ways that I could see stereoscopic 3D working:

    1)  Have video drivers try to guess which inputs and outputs from various shader stages are intended to do what, in order to figure out what is the camera and how to move it left and right for the two eye viewpoints.  This will be rather hit and miss, and the misses will miss very badly.  Guessing right most of the time isn't good enough: if 90% of the objects in the game draw correctly while the other 10% are horribly glitched to the point that you can't even tell where they're supposed to be, that's simply unplayable.  And guessing right 90% of the time would actually be very impressive.  This would probably be easier with older APIs that have fewer programmable stages in the graphics pipeline, as that means fewer things that you have to guess correctly.

    2)  Have people at Nvidia take compiled shader binaries for various games running on Nvidia cards in Nvidia labs, then try to reverse engineer them to figure out what is the camera and how to move it left or right.  Then they could make their video drivers do special things to treat those shaders differently in order to make stereoscopic 3D work.  And then do that separately for every single shader in every single game that you want to support--which means that not very many games will work at all.  And then run a risk that any patch that changes a shader could completely break stereoscopic 3D.

    3)  Convince game developers to do some things in shaders that will make stereoscopic 3D work with Nvidia's proprietary approach.  And by "convince", I mean "pay".  Because there's not much other reason for a game developer to implement stereoscopic 3D using an obsolete, proprietary approach that very few players could make use of, probably won't look very good anyway, and is clearly on its way out.

    For option 3, simply finding a game developer who thinks stereoscopic 3D is awesome isn't enough.  A game developer that puts a high priority on making stereoscopic 3D into a big part of his game will probably use OpenGL 4.2 (or 4.3) to do it.  Being able to write your own shaders around the goal of making stereoscopic 3D work well offers huge performance and reliability advantages.  It also means that the game can use stereoscopic 3D on AMD graphics, older versions of Windows, and a greater variety of monitors, in addition to everything it would run on if you use the proprietary Nvidia 3D Vision.  It will probably even eventually allow it to run on Intel graphics.  Eventually, but not particularly soon.

    For that matter, being able to write your own CPU-side code with the goal of making stereoscopic 3D work well is a huge advantage, too.  And there's no way that Nvidia or anyone else would be able to do that through video drivers.
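
    To make that concrete, here is roughly what the game-side approach looks like when the developer owns the code. This is only a sketch, assuming the GLM math library; drawScene() and the camera numbers are placeholders rather than any particular engine's API:

    ```cpp
    // Sketch of a developer-implemented stereo pass: build one view matrix per eye
    // and render the scene twice. Assumes the GLM math library; drawScene() and the
    // camera values are placeholders, not any particular engine's API.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    void drawScene(const glm::mat4& view, const glm::mat4& proj); // game-specific, not defined here

    void renderStereoFrame(const glm::vec3& camPos, const glm::vec3& camDir,
                           const glm::vec3& up, float eyeSeparation)
    {
        glm::vec3 right = glm::normalize(glm::cross(camDir, up));
        glm::mat4 proj  = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f,
                                           0.1f, 1000.0f);

        for (int eye = 0; eye < 2; ++eye) {
            // Slide the camera half the eye separation to the left or right.
            float sign = (eye == 0) ? -0.5f : +0.5f;
            glm::vec3 eyePos = camPos + right * (sign * eyeSeparation);
            glm::mat4 view   = glm::lookAt(eyePos, eyePos + camDir, up);

            // Bind the left or right render target here (API-specific), then
            // draw the whole scene once for this eye.
            drawScene(view, proj);
        }
    }
    ```

    A real implementation would typically also use asymmetric (off-axis) projection frusta per eye instead of just sliding the camera sideways, but the structure is the point: everything downstream of the view transform runs once per eye, and the developer decides exactly where that split happens.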

  • madazz, Member Rare, Posts: 2,115
    Originally posted by Quizzical

    It will be very hit and miss.  Stereoscopic 3D simply isn't ready for prime time yet.  In order to do it properly, you need for the game programmers to implement it in shaders.  The industry standards for that are DirectX 11.1 (which is only available for Windows 8) and OpenGL 4.2 or later.  I'm not aware of any games out yet that use either of those APIs to do stereoscopic 3D.


    It is official: you have no idea what you are talking about. It is nice that you google stuff and then come here and share what little you learned in your mechanical writing tone, but please get your facts straight first. Stereoscopic 3D already works on MANY new and old games. The drivers already do an amazing job. Some developers have stated it is not hard to implement, either. Also, most of the games that are rated Excellent for 3D Vision play perfectly. There are still some that IMO don't convert over well but still look awesome, but I had great luck with games rated "Excellent" anyway. I think the main difference for the majority is that no specific tweaks were made for them to be called "3D Vision Ready".

    I think you are a few years too late to the party.

  • madazz, Member Rare, Posts: 2,115
    Originally posted by Enerzeal

    Honestly, I would hold off on this gimmick, save the cash, upgrade to the 60-inch instead, and wait for this to drop:

    http://www.oculusvr.com/

    Oculus Rift. Just from watching the video on the main page of that link, you can see how many big-name developers are supporting it. In the next few years, I think VR is finally going to go mainstream. I personally cannot wait to give it a whirl either!

    I've been keeping an eye on it to some extent. I seriously hope it works out well! If not this one, then another at least.

  • Quizzical, Member Legendary, Posts: 25,501
    Originally posted by madazz

    It is official: you have no idea what you are talking about. It is nice that you google stuff and then come here and share what little you learned in your mechanical writing tone, but please get your facts straight first. Stereoscopic 3D already works on MANY new and old games. The drivers already do an amazing job. Some developers have stated it is not hard to implement, either. Also, most of the games that are rated Excellent for 3D Vision play perfectly. There are still some that IMO don't convert over well but still look awesome, but I had great luck with games rated "Excellent" anyway. I think the main difference for the majority is that no specific tweaks were made for them to be called "3D Vision Ready".

    I think you are a few years too late to the party.

    On their web site, Nvidia lists 46 games as being 3D Vision ready.  That's using "games" in a rather broad sense, as some of them are demos or synthetic benchmarks.  As you surely know, there are a lot more than 46 games on the market.

    Doing stereoscopic 3D properly is something that can't be hacked together in drivers.  Maybe you can hack something together in drivers that kind of works, but the chances that it will look as good as it would have if the game developer had specifically modified portions of the code for it are basically zilch.  It's also highly probable that trying to hack something together in drivers will lead to a much larger performance hit than if a programmer with full access to the game engine carefully considers exactly which computations can be done once and applied to both eyes, which ones have to be done separately for each eye, and which can be done for one eye and ignored on the other.
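
    As a rough sketch of what it looks like when the engine programmer controls that split (every type and function name below is made up for illustration, not a real engine API):

    ```cpp
    // Hypothetical frame loop showing the split: work that is identical for both
    // eyes runs once per frame, and only view-dependent passes run once per eye.
    // Scene, Camera, and every function name here are placeholders, not a real API.
    struct Camera { /* position, orientation, projection parameters ... */ };

    struct Scene {
        void updateAnimations() {}         // skinning, particles: eye-independent
        void updateWorldTransforms() {}    // object -> world matrices: eye-independent
        void renderShadowMaps() {}         // light-space passes: eye-independent
        void cullAndDraw(const Camera&) {} // visibility tests + draw calls: per eye
    };

    Camera offsetForEye(const Camera& cam, int /*eye*/, float /*separation*/) { return cam; } // stub
    void bindEyeRenderTarget(int /*eye*/) {} // left/right target, API-specific stub

    void renderFrame(Scene& scene, const Camera& cam, float eyeSeparation)
    {
        // Done once, shared by both eyes.
        scene.updateAnimations();
        scene.updateWorldTransforms();
        scene.renderShadowMaps();

        // Done once per eye.
        for (int eye = 0; eye < 2; ++eye) {
            Camera eyeCam = offsetForEye(cam, eye, eyeSeparation);
            bindEyeRenderTarget(eye);
            scene.cullAndDraw(eyeCam);
        }
    }
    ```

    A driver can't reorganize a frame like this, because it only sees the stream of draw calls and compiled shaders, not the intent behind them.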

    The amount of programming work that it would take to do 3D properly probably isn't that much.  But then, that's the case with an awful lot of graphical effects--such as basically everything that doesn't rely on data specific to each model that isn't used for any other effects.

  • Quizzical, Member Legendary, Posts: 25,501
    Originally posted by madazz

    It is official: you have no idea what you are talking about. It is nice that you google stuff and then come here and share what little you learned in your mechanical writing tone, but please get your facts straight first. Stereoscopic 3D already works on MANY new and old games. The drivers already do an amazing job. Some developers have stated it is not hard to implement, either. Also, most of the games that are rated Excellent for 3D Vision play perfectly. There are still some that IMO don't convert over well but still look awesome, but I had great luck with games rated "Excellent" anyway. I think the main difference for the majority is that no specific tweaks were made for them to be called "3D Vision Ready".

    I think you are a few years too late to the party.

    If you think that it's practical to do stereoscopic 3D in drivers and reliably have it work right, then you're the one with no idea what you're talking about.  Perhaps I should give some more details.

    3D graphics intrinsically involves a lot of different coordinate systems.  There are world coordinates to describe where objects are in the game world (e.g., 100 feet north of this town).  Most objects have their own internal coordinate system, for example, to distinguish between the "front" and "top" of an object.  There is a camera coordinate system to tell you where things are relative to the camera.  There are clip coordinates, which are homogeneous coordinates in RP^3 used for technical reasons.  There are window coordinates, to tell you where something appears on your game window.  There are screen coordinates, to tell you where something appears on your monitor.  There can easily be other coordinate systems, too, depending on what you're doing.
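
    Written out in code, the usual chain looks something like this (a sketch using GLM; the numbers are arbitrary):

    ```cpp
    // The usual chain of coordinate systems, written out with GLM. The numbers
    // are arbitrary; the point is the sequence of transforms.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main()
    {
        glm::vec4 objectPos(1.0f, 0.0f, 0.0f, 1.0f);                     // object coordinates

        glm::mat4 model = glm::translate(glm::mat4(1.0f),
                                         glm::vec3(100.0f, 0.0f, 0.0f)); // object -> world
        glm::mat4 view  = glm::lookAt(glm::vec3(90.0f, 5.0f, 10.0f),     // camera position
                                      glm::vec3(100.0f, 0.0f, 0.0f),     // look target
                                      glm::vec3(0.0f, 1.0f, 0.0f));      // world -> camera
        glm::mat4 proj  = glm::perspective(glm::radians(60.0f),
                                           16.0f / 9.0f, 0.1f, 500.0f);  // camera -> clip

        glm::vec4 worldPos = model * objectPos;               // world coordinates
        glm::vec4 eyePos   = view * worldPos;                 // camera (eye) coordinates
        glm::vec4 clipPos  = proj * eyePos;                   // clip coordinates (homogeneous)
        glm::vec3 ndc      = glm::vec3(clipPos) / clipPos.w;  // after the perspective divide

        // The fixed-function viewport transform then maps ndc to window coordinates.
        (void)ndc;
        return 0;
    }
    ```

    A correct stereo implementation has to intervene at exactly one step in that chain, the view transform, and leave the rest alone. The driver's problem is that it only sees anonymous vectors flowing between shader stages and has to guess which of these coordinate systems each one lives in.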

    Modern 3D graphics has several programmable shader stages.  We're currently at six, but compute shaders aren't part of the standard pipeline, the two tessellation stages aren't used very often, and geometry shaders are optional.  A game programmer will write a shader for each stage that he's using, which takes in various data, does various computations, and then outputs various data.  The outputs from one shader stage are used as inputs into the next, though it's not in a straightforward one-to-one manner.

    For video drivers to try to alter the internal computations of a shader stage to do anything other than exactly what the programmer specified is probably going to end in a disaster.  I strongly doubt that attempts at implementing stereoscopic 3D in video drivers even try that.

    Rather, what it probably does is to take the outputs from one stage, then alter them before passing them as inputs into the next stage.  This would be somewhat doable if you know what all of the data being passed from one stage to the next represents.  If a vector being passed from one stage to the next is a position in camera coordinates, then you can create two versions and move one to each side a bit to get two separate camera positions.  If it's in world or object coordinates, you leave it alone, as both eyes need to see an object as being in the same place in the world.  If it's in screen or window coordinates, then it's too late to alter it for stereoscopic 3D.
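
    For example, the kind of adjustment an automatic driver mode is usually described as making is a depth-dependent horizontal shift of what it believes are post-projection positions, applied with opposite sign for each eye. A sketch (the formula and parameter names are illustrative, not Nvidia's actual code):

    ```cpp
    // Rough sketch of a driver-style automatic adjustment: a depth-dependent
    // horizontal shift of what the driver believes is a clip-space position,
    // applied with opposite sign per eye. The formula and parameter names are
    // illustrative only, not Nvidia's actual implementation.
    #include <glm/glm.hpp>

    glm::vec4 shiftForEye(glm::vec4 clipPos, int eye, float separation, float convergence)
    {
        float sign = (eye == 0) ? -1.0f : +1.0f;
        // Geometry at depth w == convergence gets no shift (it appears at screen
        // depth); nearer or farther geometry is shifted apart, creating parallax.
        clipPos.x += sign * separation * (clipPos.w - convergence);
        return clipPos;
    }
    ```

    Apply that to the wrong kind of value, say a world-space vector, a texture coordinate, or something already in window coordinates, and you get exactly the floating elements and doubled cursors described earlier in this thread.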

    Data constantly gets converted from one coordinate system to another.  Sometimes the data passed from one stage to another will include different things that are in different coordinate systems--and thus have to be treated differently.  My guess is that Nvidia has people who try to guess whether shader outputs need to be adjusted and do a lot of trial and error work to get games to kind of work.

    Depending on the internal details of a game engine, it might be possible to make stereoscopic 3D mostly work for some of the programs in a game if you can correctly guess which shader outputs need to be adjusted to move the eye viewpoint.  (A "program" is a linked set of shaders that will be used to process some of the objects in a game world; a single game may have many "programs".)  Or it might just be impossible to do it through drivers for a variety of technical reasons, mainly because the place that some data would need to be adjusted to move the camera never appears as data passed from one shader stage to the next.

    But that's going to get harder to do going forward.  With the old fixed function pipeline, video cards insisted that all data had to be processed a particular way, and you could make stereoscopic 3D work perfectly on the GPU side for that, as you knew exactly what all inputs into the system meant.  But the more shaders you have to deal with, the harder it is to correctly guess exactly which outputs need to be adjusted, and the more likely it is that you'll run into something that just isn't fixable in drivers.  Tessellation especially leads to a huge increase in the number of shaders and programs needed.

    This isn't an issue of a game being badly coded, either.  When you're coding a game engine, you have to focus on what will make the features you want in the game run as quickly as possible.  If that makes it harder for someone else to make your game engine do something that you never intended for it to do, that's not your problem.  Given a choice between 10% higher frame rates at everything and Nvidia's proprietary stereoscopic 3D kind of working, you're going to choose the former, and it's not a hard decision.

    -----

    And that's just the changes that need to be made in code that runs on the GPU.  The video drivers can't even see the code running on your CPU, and doing stereoscopic 3D properly means you need to make some changes there, too.  Doing that in video drivers is impossible.

    For example, one issue that every game that doesn't have a very small game world has to deal with is which objects to draw in a given frame.  You don't just draw everything in the entire game world and let the video card figure out that 99% of it never appears on your monitor.  The performance hit from doing that would be enormous and make your game unplayable.

    Rather, you have to figure out what will appear and what won't, and only tell the video card to draw the things that will appear on the screen.  Or perhaps rather, you try to figure out what might appear, and if you're certain that something won't, you tell the video card not to draw it.  If you're not sure, then you send it to the video card and let it figure it out.

    If an object is not going to appear on your screen (e.g., behind the camera, too far off to the side, too far away, or behind a wall), it's a lot of work to process data for it, upload the uniforms to the video card, send the command to draw it using particular vertex data, do a bunch of processing through the first several pipeline stages, then have clipping ruthlessly eliminate every single primitive that made it that far.  If you can do a few quick computations on the processor and determine that it won't appear, then you can get exactly the same final image by simply discarding it so that the video card never sees it in that frame.
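
    The test itself is cheap. A sketch of the usual bounding-sphere-versus-frustum check (using GLM; how the six planes are extracted from the view-projection matrix is assumed to happen elsewhere):

    ```cpp
    // CPU-side visibility test of the kind described above: a cheap bounding-sphere
    // versus frustum-plane check decides whether an object gets sent to the GPU at all.
    #include <array>
    #include <glm/glm.hpp>

    struct Frustum {
        // Six planes (left, right, top, bottom, near, far) stored as ax + by + cz + d = 0,
        // with normals pointing into the visible volume.
        std::array<glm::vec4, 6> planes;
    };

    bool sphereVisible(const Frustum& f, const glm::vec3& center, float radius)
    {
        for (const glm::vec4& p : f.planes) {
            float dist = glm::dot(glm::vec3(p), center) + p.w;
            if (dist < -radius)
                return false;  // entirely outside one plane: safe to discard
        }
        return true;           // possibly visible: let the GPU clip the rest
    }
    ```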

    But what if it wouldn't have appeared without stereoscopic 3D, but would be visible to one eye with stereoscopic 3D?  If the programmer's culling code decides to scrap the object rather than send it to the video card, then an object that should have been visible to one eye won't be.  This is likely to lead to objects getting near the edge of the screen, then suddenly vanishing before they actually go off the edge of the screen.  Depending on how back-face culling is implemented, it may lead to you being able to see straight through some walls intermittently.

    Once again, this isn't a case of a game being badly coded.  Quite the opposite, actually.  Culling objects that won't appear on the screen is an enormous performance optimization, and a better programmer is likely to be better at figuring out when objects definitely won't appear, and then discarding them before they ever reach the video card.

    -----

    A programmer who wants to implement stereoscopic 3D the right way can avoid all of these problems.  He knows where the culling is done CPU side, and can alter the code to only discard objects when stereoscopic 3D is in use if they won't be visible to either eye.  He can adjust the coordinates where appropriate to make his data exactly right for each eye.  He can pick the most efficient place in the graphics pipeline to split data in to separate data for each eye to leave a much smaller performance hit for using stereoscopic 3D.  He can carefully consider what ought to be done with various 2D UI elements.  In other words, he can make stereoscopic 3D work exactly right, every single time, without any graphical artifacting whatsoever.
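
    The culling change, for instance, is about this small (a sketch building on the sphereVisible() test above; Object and its draw() method are placeholders):

    ```cpp
    // Stereo-aware culling in the game's own code: an object is discarded only if
    // neither eye's frustum can see it, and each eye draws its own visible set.
    // Frustum and sphereVisible() come from the earlier sketch.
    #include <vector>
    #include <glm/glm.hpp>

    struct Object {
        glm::vec3 boundsCenter;
        float     boundsRadius;
        void draw(const glm::mat4& view, const glm::mat4& proj) const {} // placeholder
    };

    void renderStereo(const std::vector<Object>& objects,
                      const Frustum& leftFrustum,  const glm::mat4& leftView,
                      const Frustum& rightFrustum, const glm::mat4& rightView,
                      const glm::mat4& proj)
    {
        for (const Object& obj : objects) {
            bool left  = sphereVisible(leftFrustum,  obj.boundsCenter, obj.boundsRadius);
            bool right = sphereVisible(rightFrustum, obj.boundsCenter, obj.boundsRadius);
            if (!left && !right)
                continue;                         // neither eye can see it: cull

            if (left)  obj.draw(leftView, proj);  // into the left-eye render target
            if (right) obj.draw(rightView, proj); // into the right-eye render target
        }
    }
    ```

    In practice you would build one visible list per eye and bind each eye's render target once rather than switching per object, but the rule is the point: an object is discarded only when neither eye can see it.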

    DirectX 11.1 and OpenGL 4.2 probably make that possible.  I have to say "probably" because I haven't tried it myself, so I don't know for certain.  But I do know that hacking something together in video drivers does not.

    Stereoscopic 3D that works the way it ought to is coming.  But it's not here yet, apart from a tiny handful of games.

    Of course, whether it's worthwhile even if it does work exactly right is a different matter entirely.

  • madazz, Member Rare, Posts: 2,115
    I have Google too.