Games are less and less optimized.


Comments

  • ArChWind Member UncommonPosts: 1,340
    Quizzical said:
    Some code needs to be optimized a lot more than others.  My guess is that it's typical for a few hundred lines of pixel/fragment shader code to account for the outright majority of the work that the GPU does.  Any respectable company is going to optimize that little bit of code.

    Meanwhile, a game can easily have tens or hundreds of thousands of lines of code that run so rarely that even if you could make it all run ten times as fast, no one would ever notice outside of synthetic timings.  So "optimizing" that code is not about performance, but about debugging and documenting it.  In some cases, games will even use scripting languages for a bunch of code that are slower than a compiled language by an order of magnitude or so--but only for the portions of code that run rarely and so performance doesn't matter.

    examples please.


    Byte-compiled code runs at 85% to 90% efficiency (for example C#, Python, Java), while compiled C++ runs at 99% to 100%. To gain a couple of nanoseconds? Why? The bigger issue is not using the real power of the processor, which is its indexing capability. If/then/else branching is a big killer of performance.
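    As a minimal sketch of the kind of thing being described here (a hypothetical damage-multiplier lookup, not code from any actual game), a chain of if/else branches can often be replaced by using the value directly as an array index, which removes the branches from a hot loop entirely:

    ```cpp
    #include <array>
    #include <cstdio>

    // Hypothetical armour types, used only for this illustration.
    enum ArmourType { Cloth = 0, Leather = 1, Mail = 2, Plate = 3 };

    // Branchy version: every call walks an if/else chain and can mispredict.
    float multiplierBranchy(ArmourType t) {
        if (t == Cloth)        return 1.50f;
        else if (t == Leather) return 1.25f;
        else if (t == Mail)    return 1.00f;
        else                   return 0.75f;
    }

    // Indexed version: the enum value is the array index, so there is no branch.
    constexpr std::array<float, 4> kMultiplier = {1.50f, 1.25f, 1.00f, 0.75f};

    float multiplierIndexed(ArmourType t) {
        return kMultiplier[static_cast<int>(t)];
    }

    int main() {
        std::printf("%.2f %.2f\n", multiplierBranchy(Mail), multiplierIndexed(Mail));
        return 0;
    }
    ```

    Whether this matters at all depends on how hot the code path is, which is exactly the point Quizzical makes above.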

    ArChWind — MMORPG.com Forums

    If you are interested in making an MMO, maybe visit my page to get a free open source engine.
  • Warlyx Member EpicPosts: 3,368
    edited October 2015
    The last game I had issues with (an MMORPG) was EQ2 at release... pfff, I was on 512 MB of RAM at the time, and 5-minute zoning was the norm for me D: That PC was purchased for FFXI :lol: ; for EQ2 it sucked, so I upgraded :) , and later on I purchased this one, built for FFXIV 1.0 (yeah, lol). After all these years I've changed the MB and GPU and added RAM too, and outside of some rushed games that were in dire need of patches to fix issues, all my games have run smooth as silk.

    I'm still surprised how well this old PC runs The Witcher 3 with nearly everything at max.


  • Burntvet Member RarePosts: 3,465
    edited October 2015
    Vanguard was the first MMO I can recall where lack of optimization shot it in the head.

    You needed a top end gaming rig at the time to get that thing to even work.

    And it was because McQuaid/Sigil sold that game in 80% completed form to SOE, who then released it 6-12 months too early.


    But to the main question, yes, games are optimized very badly these days, because they can be and still run well for a vast majority of customers. Good optimization takes time, and time equals money.

     Back when people had 64 megs of RAM and were running 386s, that was not the case. Code needed to be very tight, because there were simply no system resources to waste.

  • MrSnuffles Member UncommonPosts: 1,117
    edited October 2015
    Quizzical said:
    The problem is not game engines or lazy developers. The problem is the GPU manufacturers. All GPUs on the market are 32-bit; they don't understand 64-bit at all. The drivers are outdated to the point where they are not able to take advantage of modern multi-core/multi-threaded CPUs.

    When your graphics hardware is more than a decade behind the rest of the hardware, you don't need to wonder why there is no "optimization". In fact, the basic architecture has not changed since 1998.

    This will hopefully change with DX12 and a new generation of GPUs that finally support 64-bit.
    This is complete nonsense.  For starters, any GPU with more than 4 GB of memory has 64-bit memory addressing or else it's impossible to use all of its memory.  You probably want 64-bit memory addressing before that, even, in order to allow the GPU to borrow system memory as needed.

    Next, GPUs can do 64-bit computations; it's merely much slower than 32-bit computations.  For example, 64-bit floating point computations are supported in OpenGL 4.0 or later; 64-bit integer and floating point computations are supported in any version of OpenCL at all.  If games wanted to use 64-bit computations, they could; they generally don't because there's not much advantage to doing so.

    Graphics is very heavily focused on 32-bit floating point computations.  In some places, even 16-bit floating point arithmetic would be plenty.  The extra precision of 64-bit computations would be mildly nice to avoid rounding errors in some places when doing geometry computations, mainly to allow one thing to cover another while being very close to it.  For color computations, 64-bit arithmetic has no advantage over 32-bit at all.

    Furthermore, GPU architectures have changed radically since 1998.  1998 itself brought the first GPUs with multiple shaders on a single GPU; today's GPUs have thousands of them.  We got programmable shaders in 2001; before that, GPUs were fixed-function architectures, and programming them really just meant giving them different data.  Then we got unified shaders in 2006, which makes possible GPU compute other than for graphics; more immediately, it allowed geometry shaders in graphics.  Greatly increasing use of SIMD scheduling over the course of a number of years made massive increases in compute capabilities possible.  Then we got tessellation in 2009.  Since then, we've had all sorts of little architectural changes to make GPU programming far more versatile, such as increasing register file sizes or the proliferation of high-throughput GDDR5 memory and more recently HBM.
    First: The memory address range has nothing to do with the calculations.

    Second: Do some reading before you post. GPUs are 32-bit systems. They use 32-bit for all calculations. There are no 64-bit calculations unless you hack them up so the GPU understands them, and then they are slow as hell. End of story.

    Thirdly: The basic architecture is the same as in 1998; only the number of shaders and the memory bandwidth have changed, along with some other small optimizations and additions. The system is still the same 32-bit, single-threaded (drivers) architecture.

    The fact that the GPU is massively parallel while the drivers are single-threaded does not help either.
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    "It's pretty simple, really. If your only intention in posting about a particular game or topic is to be negative, then yes, you should probably move on. Voicing a negative opinion is fine, continually doing so on the same game is basically just trolling."
    - Michael Bitton
    Community Manager, MMORPG.com

    "As an online discussion about Star Citizen grows longer, the probability of a comparison involving Derek Smart approaches 1" - MrSnuffles's law

    "I am jumping in here a bit without knowing exactly what you all or talking about." 
    - SEANMCAD

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
  • sanshi44 Member UncommonPosts: 1,187
    edited October 2015
    Simply put, devs are basically like: why spend extra money and time optimising something that can run on most PCs anyway? It would be better if they did, but it's just not worth their time and money.

    They could do so much more to improve the game, and try different things, if they optimized things better, but to them there's no need to risk profit in doing so.

  • waynejr2 Member EpicPosts: 7,771
    sanshi44 said:
    Simply put, devs are basically like: why spend extra money and time optimising something that can run on most PCs anyway? It would be better if they did, but it's just not worth their time and money.

    They could do so much more to improve the game, and try different things, if they optimized things better, but to them there's no need to risk profit in doing so.


    Devs are not lazy and there is a limited amount of time to do work. 
    http://www.youhaventlived.com/qblog/2010/QBlog190810A.html  

    Epic Music:   https://www.youtube.com/watch?v=vAigCvelkhQ&list=PLo9FRw1AkDuQLEz7Gvvaz3ideB2NpFtT1

    https://archive.org/details/softwarelibrary_msdos?&sort=-downloads&page=1

    Kyleran:  "Now there's the real trick, learning to accept and enjoy a game for what it offers rather than pass on what might be a great playing experience because it lacks a few features you prefer."

    John Henry Newman: "A man would do nothing if he waited until he could do it so well that no one could find fault."

    FreddyNoNose:  "A good game needs no defense; a bad game has no defense." "Easily digested content is just as easily forgotten."

    LacedOpium: "So the question that begs to be asked is, if you are not interested in the game mechanics that define the MMORPG genre, then why are you playing an MMORPG?"




  • NobleNerd Member UncommonPosts: 759
    DeniZg said:
    If you had asked the same question a year ago, I would have agreed that games were not optimized well.
    But many games lately have been optimized really well.
    GTA V is very well optimized across all platforms. Witcher 3 is optimized well for PC, but runs badly on consoles. ESO is optimized pretty well for all platforms. Battlefield and Battlefront are very well optimized on all platforms as well.
    What people usually fail to realize is that all MMORPGs are very CPU-heavy, and when they buy a new PC with a $50 CPU and 16 gigs of RAM, the crying on forums begins.
    Correction on one... ESO is NOT optimized well at all. It has huge FPS swings, lag, lag, lag, server response issues, people loading in only to be dropped from the sky to their death, improper loading of elements when zoning into different parts of the world, and the list could go on. It has improved since early release, but should not be considered an example of "good optimization".

    On the point of it being worse now than years ago... I am conflicted. There are games and MMOs out there that are well optimized. The best example for MMOs is definitely FFXIV. GW2, for still running on 32-bit, is also a very smooth-performing game. It just seems like developers are pushing stuff out without proper quality control, or are under-budgeted and just don't care.


  • l2avism Member UncommonPosts: 386
    So many people making wrong assumptions about writing code and GPU architecture.

    While GPUs still do the same thing that they have been doing for the last 10+ years, they keep adding the ability to do it more times in the same GPU clock cycle. They also keep adding more registers to store things in.

    Graphics drivers are re-entrant. This means that separate threads from separate processes all call code in the driver. This is why people were so afraid of WebGL: any bug in the driver could allow a random website to hack into your Windows kernel. This also means that graphics drivers are multithreaded, even if only one thread manages swapping VRAM into RAM. Since Windows Vista, the core logic of the graphics driver is handled by the Windows kernel; the graphics driver is really now just a go-between that maps driver SDK functions to hardware.


    Anyways: the MAIN REASON why MMOs are so poorly optimized is that they simply aren't optimized at all. Today's youngins want things to look pretty. This means that developers take game engines intended for games like Crysis, with like only 10 actors on screen at any given time, and try to use them in an MMO where you have thousands of actors moving around. If you want an MMO that runs smoothly, then you will probably have to do without bleeding-edge graphics that require massive texture maps and high-poly character models.
    Unfortunately, single-player games are made this way (take some off-the-shelf engine, add in assets, throw in a few lines of code, and out the door it goes) because it's faster and cheaper. When they try to make an MMO, they just continue on, business as usual.
  • Dreamo84 Member UncommonPosts: 3,713
    Those screenshots are soooo not even close to being from the original EQ. Those are from later expansion EQ; they've updated the graphics quite a bit.

  • Kiyoris Member RarePosts: 2,130
    edited October 2015
    Dreamo84 said:
    Those screenshots are soooo not even close to being from the original EQ. Those are from later expansion EQ; they've updated the graphics quite a bit.
    Yes, I said it was from Depths in my OP, 3rd sentence. That era is still 10 years ago.

    Using pictures from the very beginning of EQ would have been 15 years ago, which would have defeated my argument, because back then, graphics were still drastically improving.

    The slowdown, in my opinion, happened around 10 years ago, which is why I chose Dreadspire for scenery.
  • Velifax Member UncommonPosts: 413
    This is not quite as true as one might think.

    Thought experiment:

    Recreate the exact same scene in an ooollld game and a new one, pixel for pixel visually identical.

    Now consider the "amount of codes" required in the new game. All the textures are physically based, meaning they actively respond to lighting. Sometimes the textures are actually generated at run time right on the GPU. The scale of possible view distance is vastly increased (double precision). The textures can be deformed and blended in real time. All the objects in the scene are alive and are monitoring their environment for things that change their appearance. Half the surfaces reflect in various ways. Extensive level-of-detail deformations are changing much of the scene. Etc.

    There is a LOT more going on behind the scenes than is shown in a still picture. The graphical quality does not increase linearly with graphical prowess.
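    As a standalone illustration of the double-precision point above (a C++ sketch not tied to any particular engine): at a world coordinate roughly 100 km from the origin, a 32-bit float can no longer represent a one-millimetre step, while a 64-bit double still can.

    ```cpp
    #include <cstdio>

    int main() {
        // A world-space coordinate roughly 100 km from the origin.
        float  posF = 100000.0f;   // 32-bit float: ~7 significant decimal digits
        double posD = 100000.0;    // 64-bit double: ~15-16 significant decimal digits

        // Try to move the object by one millimetre (0.001 world units).
        float  movedF = posF + 0.001f;
        double movedD = posD + 0.001;

        // At this magnitude the spacing between adjacent floats is about 0.008,
        // so the 1 mm step is rounded away and the float position never changes.
        std::printf("float : position changed? %s\n", movedF != posF ? "yes" : "no");
        std::printf("double: position changed? %s\n", movedD != posD ? "yes" : "no");
        return 0;
    }
    ```

    This is one reason engines with very large view distances either store positions as doubles or periodically re-centre the coordinate origin around the camera.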
  • Quizzical Member LegendaryPosts: 25,499
    edited October 2015
    Quizzical said:
    The problem is not game engines or lazy developers. The problem is the GPU manufacturers. All GPUs on the market are 32-bit; they don't understand 64-bit at all. The drivers are outdated to the point where they are not able to take advantage of modern multi-core/multi-threaded CPUs.

    When your graphics hardware is more than a decade behind the rest of the hardware, you don't need to wonder why there is no "optimization". In fact, the basic architecture has not changed since 1998.

    This will hopefully change with DX12 and a new generation of GPUs that finally support 64-bit.
    This is complete nonsense.  For starters, any GPU with more than 4 GB of memory has 64-bit memory addressing or else it's impossible to use all of its memory.  You probably want 64-bit memory addressing before that, even, in order to allow the GPU to borrow system memory as needed.

    Next, GPUs can do 64-bit computations; it's merely much slower than 32-bit computations.  For example, 64-bit floating point computations are supported in OpenGL 4.0 or later; 64-bit integer and floating point computations are supported in any version of OpenCL at all.  If games wanted to use 64-bit computations, they could; they generally don't because there's not much advantage to doing so.

    Graphics is very heavily focused on 32-bit floating point computations.  In some places, even 16-bit floating point arithmetic would be plenty.  The extra precision of 64-bit computations would be mildly nice to avoid rounding errors in some places when doing geometry computations, mainly to allow one thing to cover another while being very close to it.  For color computations, 64-bit arithmetic has no advantage over 32-bit at all.

    Furthermore, GPU architectures have changed radically since 1998.  1998 itself brought the first GPUs with multiple shaders on a single GPU; today's GPUs have thousands of them.  We got programmable shaders in 2001; before that, GPUs were fixed-function architectures, and programming them really just meant giving them different data.  Then we got unified shaders in 2006, which makes possible GPU compute other than for graphics; more immediately, it allowed geometry shaders in graphics.  Greatly increasing use of SIMD scheduling over the course of a number of years made massive increases in compute capabilities possible.  Then we got tessellation in 2009.  Since then, we've had all sorts of little architectural changes to make GPU programming far more versatile, such as increasing register file sizes or the proliferation of high-throughput GDDR5 memory and more recently HBM.
    First: The memory address range has nothing to do with the calculations.

    Second: Do some reading before you post. GPUs are 32-bit systems. They use 32-bit for all calculations. There are no 64-bit calculations unless you hack them up so the GPU understands them, and then they are slow as hell. End of story.

    Thirdly: The basic architecture is the same as in 1998; only the number of shaders and the memory bandwidth have changed, along with some other small optimizations and additions. The system is still the same 32-bit, single-threaded (drivers) architecture.

    The fact that the GPU is massively parallel while the drivers are single-threaded does not help either.
    To the contrary, at least some GPUs have at least some dedicated hardware for 64-bit computations.  For example:

    https://www.amd.com/Documents/firepro-s9150-datasheet.pdf

    That's the same Hawaii chip as in a Radeon R9 290/290X/390/390X, though the double precision performance is crippled on Radeon cards.  And it can do 64-bit fma at half the speed of 32-bit fma.  You can't do a 64-bit fma by chaining together two 32-bit operations.  You need a whole lot of dedicated 64-bit fma silicon to do that.  Incidentally, if you want to know why Hawaii-based GPUs are less efficient at graphics than various other modern GPUs, that's one big thing to point at.

    Now, it's certainly the case that GPUs are still heavily optimized for 32-bit floating point computations.  But if 32-bit is the only feature of an architecture that matters and you say that GPUs are still 32-bit, then would you say that a Radeon R9 Fury X, GeForce GTX Titan X, ARM Cortex A15, and Intel 80486 are all the same architecture?
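    For anyone unfamiliar with the term, fma is a fused multiply-add: a*b + c computed with a single rounding at the end, rather than rounding the product and the sum separately. A small CPU-side C++ sketch of that defining property (illustrative only; the GPUs discussed above implement the same operation in dedicated hardware at far higher throughput):

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        // a * b is exactly 1e16 - 1, which does not fit in a double's 53-bit
        // significand, so rounding the product before the addition loses the -1.
        double a = 100000001.0;   // 1e8 + 1, exactly representable
        double b =  99999999.0;   // 1e8 - 1, exactly representable
        double c = -1.0e16;

        double product  = a * b;              // rounded to the nearest double: 1e16
        double separate = product + c;        // 1e16 - 1e16 = 0
        double fused    = std::fma(a, b, c);  // a*b + c rounded once: exactly -1

        std::printf("separate: %g\n", separate);  // prints 0
        std::printf("fused   : %g\n", fused);     // prints -1
        return 0;
    }
    ```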
  • MrSnuffles Member UncommonPosts: 1,117
    edited October 2015
    Quizzical said:
    snip..
    To the contrary, at least some GPUs have at least some dedicated hardware for 64-bit computations.  For example:

    https://www.amd.com/Documents/firepro-s9150-datasheet.pdf

    That's the same Hawaii chip as in a Radeon R9 290/290X/390/390X, though the double precision performance is crippled on Radeon cards.  And it can do 64-bit fma at half the speed of 32-bit fma.  You can't do a 64-bit fma by chaining together two 32-bit operations.  You need a whole lot of dedicated 64-bit fma silicon to do that.  Incidentally, if you want to know why Hawaii-based GPUs are less efficient at graphics than various other modern GPUs, that's one big thing to point at.

    Now, it's certainly the case that GPUs are still heavily optimized for 32-bit floating point computations.  But if 32-bit is the only feature of an architecture that matters and you say that GPUs are still 32-bit, then would you say that a Radeon R9 Fury X, GeForce GTX Titan X, ARM Cortex A15, and Intel 80486 are all the same architecture?
    I was simply saying that GPUs have not evolved as much as they should have. The 32-bit architecture is one issue; the second, even bigger one, is the outdated single-threaded driver architecture that has not caught up with the CPU generations.

    The drivers and DX are hopelessly outdated, and hopefully DX11.3 and DX12 are going to change that. DX10 had a limited thread-safe implementation, and DX11 was built for multithreading but will not reach its full potential until we get DX11.3.

    This change is going to be very noticeable and initial tests show a possible game performance increase of up to 50%.
    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ஜ۩۞۩ஜ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

    "It's pretty simple, really. If your only intention in posting about a particular game or topic is to be negative, then yes, you should probably move on. Voicing a negative opinion is fine, continually doing so on the same game is basically just trolling."
    - Michael Bitton
    Community Manager, MMORPG.com

    "As an online discussion about Star Citizen grows longer, the probability of a comparison involving Derek Smart approaches 1" - MrSnuffles's law

    "I am jumping in here a bit without knowing exactly what you all or talking about." 
    - SEANMCAD

    ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
  • Solar_Prophet Member EpicPosts: 1,960
    I agree, definitely. For a non-MMO, Vermintide is a prime example. The game does look good, but not good enough to justify the resources it hogs. 

    Witcher 3 runs like a dog. I haven't even spent more than an hour playing it because unless I turn everything off, it runs at about 40FPS if I'm lucky. It's simply unplayable, even at the recommended settings. Very disappointed in CDProjekt's half-hearted effort here. I could play it on low, but if I wanted to do that I'd have bought the console version. 

    SWTOR... it still sucks on optimization. It runs only slightly better on the system I have now than it did on the system I owned at release, despite a fairly hefty increase in power. That game should be running at 60 FPS everywhere, yet in plenty of areas it drops well below that despite its dated look. I'm digging the story, but performance is just dreadful, even after they claimed to have improved it.

    FFXIV looks great and is incredibly well optimized. I can run raids with a ton of lights, particles, and other effects going off nonstop, with barely a hitch, at almost the highest settings. The only thing it needs is some hi-res textures for the PC version.

    There are just too many games out there which don't look good enough to justify their high system requirements. Hell, one of the best looking games out there right now has extremely low requirements... Pillars of Eternity. Gorgeous artwork, simply gorgeous. 

    Divinity Original Sin is up there too. 

    Sublevel Zero gets an honorable mention for its blend of pixel art and modern shading / lighting techniques, and I have to mention Transistor's beautiful art. 

    AN' DERE AIN'T NO SUCH FING AS ENUFF DAKKA, YA GROT! Enuff'z more than ya got an' less than too much an' there ain't no such fing as too much dakka. Say dere is, and me Squiggoff'z eatin' tonight!

    We are born of the blood. Made men by the blood. Undone by the blood. Our eyes are yet to open. FEAR THE OLD BLOOD. 

    #IStandWithVic
