The upcoming versions of DirectX and OpenGL will probably do about the same things in about the same ways. That's usually how it plays out, as they're both built around exposing the things that video cards are good at. DirectX has the advantage that Microsoft can say, "Here's the standard right now; either you support it or you don't." OpenGL may have the extensions in place sooner, but tends to take longer to get them adopted into the core specification, and support can be spotty until that happens.
WE CAN AGREE ON SOMETHING!

dammit Quiz, you cut in line....

Originally posted by Quizzical The upcoming versions of DirectX and OpenGL will probably do about the same things in about the same ways. [...]
I would say that depends on how similar Nvidia's implementation of their Mantle equivalent is. If a lot of the work can be fairly directly translated, there would be no real reason for any game developer to even use DX12, aside from MS kickbacks. Having the AMD Mantle logo right beside NV's TWIMTBP logo might offset any gains from DX12 publicity.

If that GDC Nvidia presentation showed me anything, it is that NV is taking Mantle very seriously and is not going to wait until mid-2015 to get the ball rolling. I gained a bit more respect for NV.
It's not like there are many alternatives to a DX implementation. People aren't going to jump the Windows ship in droves just because DX12 is a little behind OpenGL or doesn't do something that Mantle does. What are people going to run their games on? A tablet with a tenth of the power? No. All Microsoft really has to do is keep up.
I can not remember winning or losing a single debate on the internet.
DirectX IS NOT behind OpenGL; based on everything I have read, it is the opposite. OpenGL is just getting to where DX11 is in abilities. Also, now that DX12 will add multithreading on CPUs and make it less heavy on them, games may become easier to run, hence it will be better as far as overhead goes too.
Originally posted by VeryDusty DirectX IS NOT behind OpenGL; based on everything I have read, it is the opposite. [...]

Have you seen the new NV OGL presentation? There are a ton of things in there that DX cannot currently do.

http://blogs.nvidia.com/blog/2014/03/20/opengl-gdc2014/

Most of it is in line with Mantle's capabilities.
Originally posted by grndzro Have you seen the new NV OGL presentation? There are a ton of things in there that DX cannot currently do. [...]
OpenGL was way behind DirectX roughly from 2007-2009, basically from the time DirectX 10 launched until OpenGL 4.0 launched. But now it has caught up and they're roughly even.
Some of the stuff in that presentation is still only in OpenGL extensions, not in the core specification. That makes it a nightmare to figure out which cards support which combination of extensions, as supporting extensions is optional. I'd be very, very hesitant to use recent extensions in a released game (or beta, for that matter) until at minimum both AMD and Nvidia announce that all of their OpenGL 4 cards support the extension--which isn't likely to happen until a year or so after it gets added to the core specification, at which point it's no longer an extension.
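To make that detective work concrete, here is a minimal sketch of the per-machine capability probe a game ends up running. It assumes an OpenGL 3.0+ context is already current and a loader like GLEW has been initialized; the extension named at the bottom is just an illustrative example:

```cpp
#include <GL/glew.h>
#include <set>
#include <string>

// Collect every extension the current driver actually exposes.
std::set<std::string> queryExtensions() {
    std::set<std::string> exts;
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* name = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (name) exts.insert(name);
    }
    return exts;
}

// Every extension-gated feature needs its own check, per card, per driver
// version -- this is the combinatorial mess described above. Example:
//   if (exts.count("GL_ARB_multi_draw_indirect") != 0) { /* use it */ }
```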
And even then, you're heavily reliant upon driver support, so players with older drivers who can't update for whatever reason (this is actually decently common in laptops) wouldn't be able to play your new version. If you want them to be able to play an OpenGL 4 version at all, then you're stuck with creating and maintaining two separate OpenGL 4.x code paths. That's a nightmare for testing and debugging.
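Maintaining those two code paths typically starts with a runtime pick like this hypothetical sketch. Note that it's the version the driver reports, not the GPU silicon, that decides which path the player gets (the renderer types and the 4.4 threshold are placeholders for illustration):

```cpp
#include <GL/glew.h>
#include <memory>

struct Renderer { virtual ~Renderer() = default; };
struct ModernPath : Renderer {};   // uses the newer low-overhead features
struct FallbackPath : Renderer {}; // plain GL 4.0 path, maintained in parallel

std::unique_ptr<Renderer> pickRenderer(bool requiredExtensionsPresent) {
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    // A laptop stuck on an old driver lands in the fallback path even if
    // its GPU could have handled the modern one.
    if ((major > 4 || (major == 4 && minor >= 4)) && requiredExtensionsPresent)
        return std::make_unique<ModernPath>();
    return std::make_unique<FallbackPath>();
}
```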
And let's not forget that we're not talking about major new features here like geometry shaders or tessellation. What we're primarily talking about is ways to do exactly what you could do before except with less load on the CPU. If you have to have the reduced CPU load in order to make your game run smoothly, then you'd make it so that only people with very recent hardware can play your game at all.
Reducing graphical settings doesn't necessarily let you get around this, either. Some graphical effects such as anti-aliasing or any post-processing effects are easy to turn off entirely. Texture resolutions are easy to reduce. But lightening the CPU load basically means that you have to pick out half or 2/3 of what you would have drawn in the "new" version and not draw it at all--not even a simplified version. The CPU load from state changes doesn't care how fancy what you're going to draw with a draw call is, but only how many times you have to switch textures or vertex arrays or whatever.
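For illustration, a rough sketch of the standard trick for trimming that per-draw CPU cost: sort draws by texture so the expensive binds happen as rarely as possible. The cost scales with the number of state switches, not with how fancy each individual draw is (types here are illustrative, and a current context is assumed):

```cpp
#include <GL/glew.h>
#include <algorithm>
#include <vector>

struct Draw {
    GLuint texture;
    GLuint vao;
    GLsizei indexCount;
};

void drawSorted(std::vector<Draw>& draws) {
    // One CPU-side sort trades many redundant binds for a few.
    std::sort(draws.begin(), draws.end(),
              [](const Draw& a, const Draw& b) { return a.texture < b.texture; });
    GLuint bound = 0;
    for (const Draw& d : draws) {
        if (d.texture != bound) {
            glBindTexture(GL_TEXTURE_2D, d.texture); // the expensive state change
            bound = d.texture;
        }
        glBindVertexArray(d.vao);
        glDrawElements(GL_TRIANGLES, d.indexCount, GL_UNSIGNED_INT, nullptr);
    }
}
```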
Originally posted by Quizzical Some of the stuff in that presentation is still only in OpenGL extensions, not in the core specification. [...]
This becomes moot if the game is Mantle + NV OGL
Originally posted by Quizzical And even then, you're heavily reliant upon driver support [...]
Do you think DX12 is going to be any easier? They would still have to have backwards compatibility with DX9/10/11 or risk a limited market.
Originally posted by Quizzical And let's not forget that we're not talking about major new features here like geometry shaders or tessellation. [...]
So progress should always follow the lowest common denominator? I don't agree with that.
Originally posted by Quizzical Reducing graphical settings doesn't necessarily let you get around this, either. [...]
This would be the same case if working with different DX versions.
I don't see what Android has to do with any of this. Android as a whole basically performs like shit with its OpenGL ES compared to DX-run PC games. So significant gains there can even be attributed to the recent introduction of C++ coding on Android.
I think this has been in the pipes for a while. Right now both NVidia and AMD comply with what Microsoft wants. So more than likely, since Kepler and GCN, the GPU functionality has been there for this switchover. It's just that the DX version was not ready yet, which is why those GPUs came in as DX 11.1 cards and will instantly be DX12 cards.

What OpenGL and DirectX do is open up the common functionality in video cards for the developer, without the developer having to worry about each and every single model. The difference is in their approach: DirectX dictates what must be in a version; OpenGL picks out common elements and builds its standard from there. This is also why we will not really be seeing anything from Mantle or NVidia's solution. Developers don't want to build code for just AMD or just NVidia. What Mantle has done is force DirectX and OpenGL to take overhead seriously.
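As a rough sketch of that idea (the names here are hypothetical, not any real engine's API): the game codes against one abstract device, and each graphics API gets one backend. The API and its drivers absorb the per-model differences, which is exactly what a vendor-specific API like Mantle gives up:

```cpp
#include <memory>

// One interface for the game; the per-API backends hide the hardware.
struct GpuDevice {
    virtual ~GpuDevice() = default;
    virtual void bindTexture(unsigned textureId) = 0;
    virtual void drawIndexed(int indexCount) = 0;
};

struct OpenGlDevice : GpuDevice {
    void bindTexture(unsigned) override { /* glBindTexture(...) */ }
    void drawIndexed(int) override      { /* glDrawElements(...) */ }
};

struct Direct3DDevice : GpuDevice {
    void bindTexture(unsigned) override { /* PSSetShaderResources(...) */ }
    void drawIndexed(int) override      { /* DrawIndexed(...) */ }
};

// The game never branches on "AMD vs NVidia" -- a vendor API would force
// a third backend that only some players could use.
std::unique_ptr<GpuDevice> makeDevice(bool useDirect3D) {
    if (useDirect3D) return std::make_unique<Direct3DDevice>();
    return std::make_unique<OpenGlDevice>();
}
```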
One thing is sure: if Android and other Linux platforms were considered casual-gamer platforms before, Intel, AMD, and Nvidia are getting rid of that. In the very near future I suspect we will all hear announcements like "Android can do serious gaming." I am happy for Android. Why only Android? Because it's already the number one OS. The Ingress team must be very happy.
Originally posted by drbaltazar One thing is sure: if Android and other Linux platforms were considered casual-gamer platforms before, Intel, AMD, and Nvidia are getting rid of that. [...]
I would think the limiting factor with Android is the audience first, and then the hardware. How many Android users are serious about gaming on Android with anything other than farming simulators when they have a console or a PC to do their serious gaming on? If there are people who want serious mobile gaming, they are then going to be limited by the hardware. They can get a Windows tablet and run Windows games on it, so they have all their games, but a tablet doesn't have the horsepower to push the games the way a desktop machine or a console does. They might reach a minimum acceptable performance, but they aren't going to be serious gaming machines for anyone but people with a LOT of money to spend on things like Razer's handheld monstrosity. Even that makes concessions to the platform.
Linux has a shot, but that's going to be a five- or ten-year process, not something that happens overnight.
I can not remember winning or losing a single debate on the internet.
Originally posted by grndzro DX12 has only been in the pipes since about August.
DX12 has been in development for 4 yrs - not since August - please read developer blogs about this.
No it wasn't. It doesn't matter what the developer blogs say; they are wrong. Whatever they did with DX12 prior to Mantle was scrapped.

DX12 is a copy of AMD's Mantle. MS hopped on board immediately and started work on DX12. Mantle was never designed to only give AMD the edge. It was designed to shove the industry forward. And it did.

Johan Andersson @repi: Direct3D 12 blog with some more details: http://blogs.msdn.com/b/directx/arch...irectx-12.aspx … You may recognize the design

petr_tomicek @petr_tomicek: @repi Why does it feel like I am reading Mantle Programming Guide again?
Originally posted by grndzro This becomes moot if the game is Mantle + NV OGL
If you have a Mantle code path and also an OpenGL code path that does the same thing, then Mantle is redundant, so you're better off going OpenGL-only.
I wouldn't expect quick adoption of DirectX 12, either, especially if it's not ported back to Windows 7 and 8.
When it only takes a few man-months to support either and one more to support both, it is kinda irrelevant.
Originally posted by grndzro When it only takes a few man-months to support either and one more to support both, it is kinda irrelevant.
A man-month is a significant amount of resources, even in a AAA game; I wouldn't just poo-poo a single man-month as some kind of rounding error. And what you think may be one man-month easily, quickly, and commonly cascades into an utter mess and expense (see The Mythical Man-Month).
If all you care about is writing the code in the first place, then sure, adding a second version probably isn't that hard.
But if you want that code to reliably work right on customers' computers? That's hard. The same OpenGL code can react differently on different drivers for different video cards from different video card vendors. Or from the same vendor. Or even different drivers for the same exact card. Throw in different CPUs, different chipsets, different amounts of memory, different amounts of memory bandwidth (both CPU and GPU) and you've got a ton of different configurations to support. The move toward discrete switchable graphics in laptops adds even more potential problems--especially driver problems, and it sometimes interferes with updating video drivers. And you want CrossFire and/or SLI to reliably work? Have fun with that.
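The practical consequence is that every bug report needs the exact vendor/driver/card fingerprint attached. A minimal sketch of the startup logging that makes that possible (assumes a current context):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Log the driver/GPU fingerprint once at startup; these strings are what
// let you tell apart the endless vendor/driver/card combinations when a
// player reports a problem you can't reproduce.
void logDriverInfo() {
    std::printf("GL_VENDOR:   %s\n", reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
    std::printf("GL_RENDERER: %s\n", reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
    std::printf("GL_VERSION:  %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));
    std::printf("GLSL:        %s\n", reinterpret_cast<const char*>(glGetString(GL_SHADING_LANGUAGE_VERSION)));
}
```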
Something that works flawlessly on several diverse computers that you test it on might have severe artifacting or immediately crash outright on someone else's computer. Or exactly the same computer with different video drivers. Or worse, maybe it will crash about once every two hours, which makes it much harder to track down and diagnose the bug.
If you're happy with your game running properly on 80% of your customers' computers out there that it theoretically should work on, that's not too hard, at least if we're only talking about the OpenGL side of things (as opposed to CPU-side bugs) and you can get it working right on one computer. But do you really want 20% of your prospective customers to be turned away because the game won't run even on their capable hardware--and for them to go tell their friends that your game is garbage as a result? No software project manager will find that acceptable.
You want your game to work on 100% of computers, of course. And you can wait for people to report glitches, track them down, and try to fix them. Indeed, you kind of have to do that to catch obscure issues that affect <1% of customers. Though you'd better hope that the issue is a simple crash to desktop and not "my video card burned up".
In my own experience with programming a (still unfinished) game, I've had code that worked flawlessly on my desktop, but immediately crashed about half of the time on my laptop. The culprit was the difference between being CPU-bound (laptop) and GPU-bound (desktop) and threading issues that only popped up in the former. I've had code that worked flawlessly on my desktop but refused to compile on my parents' computer. The culprit was a programming error on my part (setting a vec4 equal to a vec3) where the video drivers for my desktop figured out what I meant (ignore the last component) but those for my parents' computer refused to try to guess. I've had code that worked flawlessly on my desktop, then refused to even compile on exactly the same hardware after a video driver update. The culprit there was that AMD decided to disallow booleans as shader stage outputs after previously allowing them in earlier driver versions.
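That vec4-versus-vec3 situation looks roughly like the minimal sketch below (illustrative code, not the actual project's). One driver's GLSL compiler silently accepted the mismatch; another rejected it, which is exactly why you always check the compile status and read the info log instead of trusting the machine you develop on:

```cpp
#include <GL/glew.h>
#include <cstdio>

// Fragment shader with the vec4 = vec3 mismatch described above.
const char* kBuggyFragment =
    "#version 400\n"
    "in vec3 color;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = color;\n"   // some drivers guess vec4(color, 1.0);
    "}\n";                       // stricter ones refuse to compile it

bool compileOk(GLenum stage, const char* src) {
    GLuint shader = glCreateShader(stage);
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, nullptr, log);
        std::fprintf(stderr, "Shader compile failed:\n%s\n", log);
    }
    glDeleteShader(shader);
    return ok == GL_TRUE;
}

// Usage (requires a current context):
//   compileOk(GL_FRAGMENT_SHADER, kBuggyFragment);
```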
But even waiting for customers to report problems doesn't always work, because sometimes the customer's computer is defective. Sometimes they've got an unstable overclock. Or a memory module going bad. Or a spotty power supply causing problems. Or a really stupid hardware configuration that isn't what they tell you it is and doesn't meet your minimum system requirements. Or really old video drivers even though they insist that they have the latest version because Device Manager says so. Or malware doing weird things and interfering with your game. And they blame it on your game, even though it's 100% not your fault. And you can't always tell the difference between this happening and an actual problem with your game that merely doesn't affect many people.
That's intrinsic to all games, of course, regardless of which version of OpenGL or DirectX you're using. But you want to maintain two separate code paths for different versions of the API? Congratulations, you just doubled your validation work. Now you have to track down and fix everything described above--and you have to do it twice. Three versions? Four versions? That's more work yet.
Engines are adopting Mantle. They obviously see the value in it. With CryEngine supporting both Mantle and the latest OGL, and adopting a subscription-based model, the pressure for next-gen games to adopt Mantle and OGL is pretty high.

With the potential release of Mantle on Linux coupled with Wayland/Mir, and the cross-platform nature of OGL, I don't see many top developers waiting for DX12 and being platform-locked.

IMO, the better Star Citizen looks, the lower the odds of next-gen games sticking with DX11 over the next 1.5 years. And if SC comes to Linux, the cross-platform rush will be high. Especially if they get Wayland sorted out.
Quiz, you make it sound like legacy support is all uncharted territory. Older game engines are already compatible with older hardware. New game engines should not be hamstrung by the need to support an X1950 XTX.

Most of the backers of SC already know they will have to upgrade for the game. And its absurdly successful public funding shows that if games advance the standard, people are more than willing to upgrade.

The cost of adoption is also covered by having a better-performing game and cross-platform compatibility. The dynamics of cost vs. reward become skewed when there is a potential offset. And AMD bringing Mantle to Linux would disrupt the whole paradigm. All of a sudden you have an amazing development engine with next-gen graphics that runs on Linux, is affordable, and runs flawlessly on both AMD and NV. It could change everything if all the cards fall into place.
Originally posted by grndzro The cost of adoption is also covered by having a better-performing game and cross-platform compatibility. [...]
OpenGL has been available since before DirectX, heck, since before Windows even dreamed of having a gaming-oriented API.
There was a period of time where it looked like OpenGL was going to take off. That was when the alternative was GLIDE. It had some very big backing - not least of which was John Carmack and the crew at id Software, from as early as GLQuake and going forward.
But then DirectX came out. It iterated rapidly. It was more nimble than OpenGL and able to embrace new technology faster. It ran on 95% of the systems games were being played on anyway.
So, cross-platform compatibility has always been an option, since before there was 3D gaming on Windows. Developers didn't take that path, largely for 2 reasons. New graphics features were supported unilaterally sooner in DirectX (particularly following the OpenGL 3.0 release delays and controversy), and with the Xbox, DirectX got them in the code path of one of the major consoles, whereas OpenGL is only kinda-sorta supported on PS3 as a side option.

You can try to make the case that extensions make OpenGL the more nimble of the two, but the truth is that extensions don't offer a standard level of support - they vary widely based on hardware and driver implementations (which means your code has to vary widely in order to support that feature on various hardware), and they can have legal ramifications associated with using them, since extensions aren't covered under the same license as OpenGL itself.

Mantle doesn't offer any of that. It's Windows-only, it's further fragmented by being AMD-only, and it's even further fragmented by running on only a fraction of the AMD cards available. Now, all of that could change. But a lot of things ~could~ change. Having a handful of games announce support, and even having a major engine announce "support", doesn't mean a whole lot. The fact that Mantle is more or less made to accelerate APUs, and not high-end gaming rigs, won't help this uphill battle either... people who are spending $1500+ on gaming rigs aren't going to see a huge benefit with Mantle.

A good parallel to Mantle is PhysX - which has a handful of major game releases, and even has baked-in support in several gaming engines. While it's impressive, especially when running GPU-accelerated, it's hardly been as game-changing as Ageia or nVidia would have liked it to be. It pretty much only shows up in some really neat physics-based indie titles, or in AAA titles where nVidia basically pays for it to be used, because you can't do anything really exciting with it, or it won't run on anything non-nVidia-based.

OpenGL right now is seeing a bit of a renaissance for 2 reasons. The first is that mobile development is almost entirely in OpenGL (granted, mostly the limited-scope ES family); the second is Valve and a big push towards OSX/Linux gaming. That doesn't mean DirectX is dead, or even dying. None of the next-gen consoles support either OpenGL or Mantle (at least yet), and in the last 5 years or so, gaming development has been driven more by console growth than PC growth (although Valve and many indie developers would love to change that).
Yea, I know the history of OGL and its fight against DX. Things are a bit different now. Both AMD and NV support the new OGL. Mantle is all icing on the cake.
AMD is in the process of developing a new Linux driver model that might improve it a lot. Time will tell if AMD is serious about it, but I hope they hit it out of the ballpark in the coming months.
The problem with cross-platform compatibility in the past was AMD's abysmal Linux drivers, the crap X Window System, and the lack of Linux ports. All three could be fixed relatively soon.
I thought the PS4 API was OGL based. I'm pretty sure it is.
A standard level of support all comes down to whether or not both parties support it. And that seems to be the case with the new extensions. And I don't really consider Intel a player as far as graphics goes.

I think of Mantle as a future technology that may be usable today. IMO Mantle will be a big deal in SC even on high-end hardware. I also think NV will do just as well with OGL in SC, with slightly more emphasis on IPC, but if the latest OGL info from NV is correct it will perform very well. And if AMD supports the new OGL extensions by then, it will be interesting to see which technology works best. I think if you have an AMD processor, Mantle would be better. If you have an Intel processor and an AMD card, OGL might perform better. We will see; it'll be interesting.

SC is on track for over 2 million a month in funding. As those wonderful RSI vids/updates keep getting better, I think SC might break 100M in funding in 2014. With an engine that can run on Linux, and given that both AMD and NV probably don't like MS a whole lot right now, they might actually put in enough work on Linux OGL drivers to make them work with Wayland/Mir.
The PS4's OS is FreeBSD-based, but it uses 2 proprietary graphics APIs: GNM/GNMX (low/high level) and PSSL (PlayStation Shader Language). They are "brand new" and proprietary to the PS4.

A little bit of non-developer info about them in this article:

http://www.eurogamer.net/articles/digitalfoundry-how-the-crew-was-ported-to-playstation-4

I agree, it would be a big deal if AMD would throw some effort into Linux drivers. They have been promising that for years, though.
Originally posted by Ridelynn The PS4's OS is FreeBSD-based, but it uses 2 proprietary graphics APIs: GNM/GNMX (low/high level) and PSSL (PlayStation Shader Language). [...]
GNM and GNMX are heavily OGL-based. Yes, they are proprietary and have some DX11 features, but they are based on OGL.
A FreeBSD OS would not have DX as a base. MS would hit Sony with the hammer of god if they did.
Do you have a source? I'd be interested in seeing that.
Sure.
https://www.youtube.com/watch?v=Qn0ZULNMSFU
And the follow-up
https://www.youtube.com/watch?v=InbF1AYZFg8
And here
https://www.youtube.com/watch?feature=player_embedded&v=lEFEpxCjJ9w
I saw a lot of the early info/speculation on the PS4 API. Some of it was very convincing.