
Is it time to upgrade my graphics card?

maji Member UncommonPosts: 2,091

Hi.

 

Currently I have this computer:

CPU: i5-3570K

RAM: 8 GB

Board: GA-Z77-D3H

Graphics card: Radeon HD 6870

 

In general I can play most games pretty well. It's only in GW2 PvP that the FPS drops, unless I reduce the lighting and texture quality.

 

My question is: does the graphics card still match the rest of the system? Or is it the bottleneck holding the other parts back, so that buying a newer one would give me quite a boost?

 

Thanks for any replies.

Let's play Fallen Earth (blind, 300 episodes)

Let's play Guild Wars 2 (blind, 45 episodes)

Comments

  • Caldrin Member UncommonPosts: 4,505

    My i7-2700K, 8 GB RAM, and ATI 6850 do fine in WvWvW with everything turned right up, so I can't see why you would be having issues with your 6870.

    The latest generation of Nvidia cards is really good, so popping in a GeForce 670 would give you a nice boost, and that's what I'll be doing at some point this year. Mainly because I like to update my graphics card now and then, and my 6850 is having a few issues with games that are more graphically intensive than GW2.
  • Cleffy Member RarePosts: 6,414

    Your graphics card crushes the GW2 requirements.  If there are any problems, you can be confident they are the result of ArenaNet's code and not a problem with your system.  If it would make you feel better, you can always get an SSD.

    Most likely the issue you are experiencing is that GW2 is a 32-bit game.  When you are in an area with a lot of character data loaded into memory, you can hit the address-space cap of a 32-bit process (roughly 2-4 GB, depending on the OS and large-address-aware settings).  This happens a lot in PvP scenarios because the system needs to load a lot of variables and actions into memory.
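
    A minimal sketch of what that cap looks like in practice, assuming a 32-bit build (the game itself obviously does nothing this crude; this just probes the ceiling):

```cpp
// Probe the address-space cap of a 32-bit process. Build 32-bit
// (e.g. g++ -m32) to see it fail around 2-4 GB no matter how much
// RAM the machine has; the exact number depends on the OS and
// large-address-aware settings.
#include <cstdio>
#include <cstdlib>

int main() {
    std::size_t total = 0;
    const std::size_t chunk = 64 * 1024 * 1024;  // 64 MB per allocation
    while (std::malloc(chunk) != nullptr) {      // leaked on purpose: we
        total += chunk;                          // only want the ceiling
    }
    std::printf("allocation failed after ~%zu MB\n", total / (1024 * 1024));
    return 0;
}
```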

    It might also be a problem between GW2 and new processors.  You are the second person having a problem with an Ivy Bridge processor.  There are also problems with Bulldozer processors.

  • maji Member UncommonPosts: 2,091
    Originally posted by Caldrin

    My i7-2700K, 8 GB RAM, and ATI 6850 do fine in WvWvW with everything turned right up, so I can't see why you would be having issues with your 6870.

    I guess recording the gameplay in HD eats up quite a few system resources as well, what with writing more than a GB of video data to the hard disk every minute. But I thought that would mostly tax the CPU, so the FPS increase from reducing the lighting shouldn't have anything to do with it.

    GW2 is a 32-bit game.

    Ah, maybe that's one of the reasons then.

     

    Thanks for the replies. Since everything except GW2 runs smoothly in all cases, I guess I'll stick with my graphics card for a bit longer.


  • Loke666 Member EpicPosts: 21,441

    Well, only you can really decide that. Here is an interesting chart of how cards perform in DX9, and as you can see, the 6870 isn't really performing that great. But neither ATI nor Nvidia has released the new drivers with GW2 tweaks yet, so you might want to wait until those are out before deciding.

    The Mists do have a lot of players on screen at the same time, so a high-end card is probably recommended if you want to max the game out. An ATI 7990 or Nvidia 690 would be really sweet, but they are rather on the expensive side.

  • Quizzical Member LegendaryPosts: 25,499

    If it's for Guild Wars 2, then there's a pretty good chance that your processor is the limiting factor.  And you can't upgrade the processor, because there isn't a faster one on the market.

    I think the reason some people are complaining about frame rates in GW2 while others say it runs great even on comparable hardware is that people have different frame rate expectations.  Some get a steady 30 frames per second and say it's perfectly smooth.  Others see the same steady 30 frames per second and say it's horrible and they need to fix it.

  • syntax42 Member UncommonPosts: 1,385

    What resolution do you play at?  That has a big impact on framerates.  Higher resolutions = more pixels to draw = lower framerates.  

    I don't care for fancy shadow effects, and I know how much strain they put on hardware.  I typically turn shadows and lighting down to low or medium settings in every game I play.  You may not feel the same way, but I'll bet there are some settings you could turn off or reduce to improve framerates without minding the visual difference.

     

    Some common settings to adjust are:

    Bloom - Makes fuzzy lighting effects.  Medium to heavy load.  Probably controlled by the post-processing setting in GW2.

    Anti-aliasing - Smooths jagged edges.  Load depends on resolution, GPU, and setting.  Turn this down or off at higher resolutions.

    Shadows - Medium load.

    Post-processing - More fancy effects that you can live without.  Medium to heavy load.

    Blur - Heavy load.  Definitely turn this off if you have a high-resolution monitor.

     

     

    Play around with your settings to find a graphics quality you are satisfied with, and performance you can tolerate.

    An SSD won't improve framerates.  It will reduce or eliminate hitching, but most games and SuperFetch are designed to prevent that now.  Get the SSD for faster loading and faster boot times.

  • Topherpunch Member UncommonPosts: 86

    I have the same card in my computer. I am sorry to report that ATI is at the bottom of the totem pole when it comes to driver updates for new games. Most of these games are made with Nvidia hardware, so they work best on those systems. Some of the older cards don't have as much trouble, but the newer ATI cards are always a bit frumpy when new games hit the shelves; give it time and keep an eye out for driver updates. Also check different forums for certain settings to turn off. It is sometimes a glitch where one setting bogs your entire system down. Don't give up on your card! Even if it sounds like a jet aeroplane is landing in your room while you are doing something graphically intense!



    Come check out what I have to say on my blog http://civilgamer.com

    Also check out http://agonasylum.com for Darkfall player trading and stories forums

  • Loke666 Member EpicPosts: 21,441
    Originally posted by Topherpunch

    I have the same card in my computer. I am sorry to report that ATI is at the bottom of the totem pole when it comes to driver updates for new games. Most of these games are made with Nvidia hardware, so they work best on those systems. Some of the older cards don't have as much trouble, but the newer ATI cards are always a bit frumpy when new games hit the shelves; give it time and keep an eye out for driver updates. Also check different forums for certain settings to turn off. It is sometimes a glitch where one setting bogs your entire system down. Don't give up on your card! Even if it sounds like a jet aeroplane is landing in your room while you are doing something graphically intense!

    Eventually they will get you the proper drivers; ATI has always been a bit slow with that, though they are a lot better now than ten years ago.

    My top recommendation for GW2 is instead to get a good SSD. While it won't improve the actual performance, it will cut down all the load times a lot. The graphics card issue will solve itself with new drivers and better game optimization, and both are in the works.

  • Quizzical Member LegendaryPosts: 25,499
    Originally posted by Topherpunch

    I have the same card in my computer. I am sorry to report that ATI is at the bottom of the totem pole when it comes to driver updates for new games. Most of these games are made with Nvidia hardware, so they work best on those systems. Some of the older cards don't have as much trouble, but the newer ATI cards are always a bit frumpy when new games hit the shelves; give it time and keep an eye out for driver updates. Also check different forums for certain settings to turn off. It is sometimes a glitch where one setting bogs your entire system down. Don't give up on your card! Even if it sounds like a jet aeroplane is landing in your room while you are doing something graphically intense!

    ArenaNet's own numbers found that Guild Wars 2 was, if anything, slightly more favorable to AMD graphics than to Nvidia.  And no, AMD certainly isn't the worst at driver updates.  Intel is the worst at driver updates.  AMD and Nvidia are both pretty good.

  • Barbarbar Member UncommonPosts: 271
    Originally posted by Caldrin

    My i7-2700K, 8 GB RAM, and ATI 6850 do fine in WvWvW with everything turned right up, so I can't see why you would be having issues with your 6870.

    The latest generation of Nvidia cards is really good, so popping in a GeForce 670 would give you a nice boost, and that's what I'll be doing at some point this year. Mainly because I like to update my graphics card now and then, and my 6850 is having a few issues with games that are more graphically intensive than GW2.

    I'm running an overclocked i7-2600K and a GTX 570 at 2560x1440, and I get 45-60 FPS in WvWvW on high settings. It may dip below that when it gets hectic, but I haven't been monitoring it that closely. Never any lag or stutters; it just runs smoothly. Way better than expected, as I assumed my GPU would limit the game much more than it turns out to.

    I am beginning to suspect that the i7 is what makes this game run smoothly. It may just be the hyperthreading, as GW2 leaves a lot of the calculations to the CPU and lets the hyperthreaded cores handle audio. I don't know, but on their tech forums, people using an i5 complain, and people using an i7 are happy.

    Users.

    Edit: I can certainly see all 8 cores being used when I play GW2.

  • Quizzical Member LegendaryPosts: 25,499
    Originally posted by Barbarbar

    I am beginning to suspect that the i7 is what makes this game run smoothly. It may just be the hyperthreading, as GW2 leaves a lot of the calculations to the CPU and lets the hyperthreaded cores handle audio. I don't know, but on their tech forums, people using an i5 complain, and people using an i7 are happy.

    Users.

    Edit: I can certainly see all 8 cores being used when I play GW2.

    That is nonsense, and a lot of the people in that thread don't even seem to know what it is.  Hyperthreading is when a single processor core has extra scheduling resources on it, so that the core can handle two threads at once.  The two threads can't both execute instructions at the same time, but one can use the core while the other doesn't have anything ready to execute.  This allows the core to bounce back and forth between threads at a much more fine-grained level than Windows simply running one thread for a while, then telling the core to stop running that thread and switch to a different one.

    Intel says that hyperthreading can improve performance by up to 30% in programs that would scale well to at least twice as many physical processor cores as you have.  That's "up to" in the same sense as your ISP quotes speeds of up to 50 Mbps or whatever, and even programs that scale flawlessly to arbitrarily many cores don't necessarily see anywhere near a 30% speed boost.  I once did a test with a simple program that trivially scaled to arbitrarily many processor cores, and found that hyperthreading increased performance by about 15% in my program.

    Hyperthreading has a processor core present itself to the OS as two logical cores, so that Windows can have a thread running on each at the same time.  Windows won't even use hyperthreading unless what you're doing would scale to more physical processor cores than you have.  The problem is that if you have two cores plus hyperthreading and you have two threads, if you put those two threads on the same core even with hyperthreading, one thread would often be waiting for the other.  Put the two threads on two different physical cores and they can both go full speed all the time.

    People have said that Guild Wars 2 scales to about three processor cores.  (You can test this by having a computer with a lot of physical cores and disabling them one at a time in the BIOS and seeing what happens to performance.)  If that's the case, then I'd expect to see modest improvements from hyperthreading on a dual core processor, but essentially no gain on a quad core processor.
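
    A rough sketch of that kind of scaling test (a made-up spin-loop workload, varying the thread count instead of disabling cores in the BIOS; the portable version only shows the trend, not physical-vs-HT placement):

```cpp
// Time a fixed, embarrassingly parallel workload with 1..N threads.
// If throughput stops improving past ~3 threads, the workload (like
// GW2, per the discussion above) doesn't scale further.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void spin(volatile double* out, long iters) {
    double x = 1.0;
    for (long i = 0; i < iters; ++i) x = x * 1.0000001 + 0.0000001;
    *out = x;  // volatile store keeps the loop from being optimized away
}

int main() {
    const long kTotalIters = 400000000L;  // arbitrary fixed workload
    unsigned max_threads = std::thread::hardware_concurrency();
    if (max_threads == 0) max_threads = 4;  // fallback if unknown
    for (unsigned n = 1; n <= max_threads; ++n) {
        std::vector<std::thread> pool;
        std::vector<double> sinks(n);
        auto t0 = std::chrono::steady_clock::now();
        for (unsigned i = 0; i < n; ++i)
            pool.emplace_back(spin, &sinks[i], kTotalIters / n);
        for (auto& t : pool) t.join();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        std::printf("%u thread(s): %lld ms\n", n, (long long)ms);
    }
}
```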

    Audio may be a separate thread, but it has nothing to do with hyperthreading.  Processing audio for a game isn't very demanding, anyway.

  • Ridelynn Member EpicPosts: 7,383

    I can confirm that GW2 will float across 8 cores (making no differentiation between physical and HT cores). I can also confirm that it doesn't heavily load any of those cores. Looking at load levels, I would agree with an estimate of 3-4 cores probably being optimal for the game.

    Programmers typically cannot dictate where their code will run, only that it branches out and is capable of running in parallel. How many different forks you make affects how many different cores your program can run on.

    That being said, you have very little control over which core you run on, and even less over whether that core is a real physical core or a logical HT core.

    If you create 25 threads, that doesn't mean your program will scale flawlessly to 25 cores. It just means it has 25 different paths of simultaneous execution, and depending on what those threads are doing, there could actually be a performance hit from pushing them across too many cores (particularly if some closely related threads access the same bits of cached data, which may not be present in the fast low-level caches of each physical core). Windows will also make its own threads for your program - an input manager, a window frame manager, file and disk access, DirectX has a lot of different threads, etc. The programmers don't explicitly create all of these threads; some come automatically by virtue of being a Windows program running on Microsoft APIs.

    The operating system decides which threads run on which core, via a bit of code called the scheduler. You can give the scheduler a few parameters (such as a priority level). The scheduler is the bit that decides whether to activate HT cores or let them remain dormant (so that Turbo Boost can kick in), and allocates threads accordingly.
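
    A small illustration of how few knobs a program actually gets here (a sketch using the Win32 calls, not anything GW2 is known to do):

```cpp
// The few hints a thread can give the Windows scheduler: a priority
// level, and (rarely a good idea) an affinity mask. Placement is
// otherwise the OS's call, including whether a "core" is physical
// or a logical HT core.
#include <windows.h>
#include <cstdio>

int main() {
    // Hint: treat this thread as latency-sensitive.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);

    // Heavier hint: restrict this thread to the first two logical
    // processors. Which of those map to physical vs. HT cores depends
    // on how the hardware enumerates them, not on the program.
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 0x3);
    if (previous == 0)
        std::printf("affinity change failed (error %lu)\n", GetLastError());
    else
        std::printf("previous affinity mask: 0x%llx\n",
                    (unsigned long long)previous);
    return 0;
}
```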

    In my own tests, a P4 Northwood with HT had a performance gain of about 10-15% (in applications that could take advantage of 2 cores), and Nehalem had a benefit on the order of 30-40% (in applications that could take advantage of the extra logical cores). If you look at synthetics - say, Passmark for an i7 920 (at 2.66 GHz) vs. an i5 750 (at 2.66 GHz) - you see 5,449 versus 4,290, or about 25%. Sure, there are other differences between those chips (LGA 1156 vs. 1366, cache sizes, etc.) - but the main difference between the two is basically just the addition of HT, and at the same clock speed you net about a 25% gain in a synthetic benchmark designed to scale across as many cores as it can.

    In my experience with a Bloomfield, GW2 does float across 8 cores, but since those logical HT cores are only about 30-40% the speed of a real physical core, and none of the cores I've seen are heavily loaded, I don't think they are the critical difference between laggy and not. I think that is more the Windows scheduler taking advantage of the extra cores as they are available, because there are threads that could lend themselves to additional parallelism at the expense of disabling Turbo Boost (not that my Bloomfield has much Boost in the first place). You could roughly compare a quad-core Core i7 (with its HT) to a 5-core non-HT chip of the same clock speed (4 physical cores + 4 HT cores at roughly 30% each is about 5.2 cores' worth; Intel doesn't make that, but it's a fairly accurate estimate). So if a quad-core i5 isn't enough, the i7 isn't going to add that much more oomph - basically just one more core's worth of performance.

    I suppose I could turn off HT and disable cores and see if my framerate moves. I can't get into WvW often (damn queue times) to test it there, and that seems to be where most people claim to have problems - although I don't know why that would be significantly different from, say, Lion's Arch, where there are several people in the same tight vicinity. The hard part is that I don't really have a static test case to compare against, so it would be hard to separate small changes in performance from small changes in the environment (fewer people in the area, fewer models loaded, fewer particle effects on screen, etc.).

  • Barbarbar Member UncommonPosts: 271
    Originally posted by Ridelynn

    I can confirm that GW2 will float across 8 cores (making no differentiation between physical and HT cores). I can also confirm that it doesn't heavily load any of those cores. Looking at load levels, I would agree with an estimate of 3-4 cores probably being optimal for the game.

    Programmers typically cannot dictate where their code will run, only that it branches out and is capable of running in parallel. How many different forks you make affects how many different cores your program can run on.

    That being said, you have very little control over which core you run on, and even less over whether that core is a real physical core or a logical HT core.

    If you create 25 threads, that doesn't mean your program will scale flawlessly to 25 cores. It just means it has 25 different paths of simultaneous execution, and depending on what those threads are doing, there could actually be a performance hit from pushing them across too many cores (particularly if some closely related threads access the same bits of cached data, which may not be present in the fast low-level caches of each physical core). Windows will also make its own threads for your program - an input manager, a window frame manager, file and disk access, DirectX has a lot of different threads, etc. The programmers don't explicitly create all of these threads; some come automatically by virtue of being a Windows program running on Microsoft APIs.

    The operating system decides which threads run on which core, via a bit of code called the scheduler. You can give the scheduler a few parameters (such as a priority level). The scheduler is the bit that decides whether to activate HT cores or let them remain dormant (so that Turbo Boost can kick in), and allocates threads accordingly.

    In my own tests, a P4 Northwood with HT had a performance gain of about 10-15% (in applications that could take advantage of 2 cores), and Nehalem had a benefit on the order of 30-40% (in applications that could take advantage of the extra logical cores). If you look at synthetics - say, Passmark for an i7 920 (at 2.66 GHz) vs. an i5 750 (at 2.66 GHz) - you see 5,449 versus 4,290, or about 25%. Sure, there are other differences between those chips (LGA 1156 vs. 1366, cache sizes, etc.) - but the main difference between the two is basically just the addition of HT, and at the same clock speed you net about a 25% gain in a synthetic benchmark designed to scale across as many cores as it can.

    In my experience with a Bloomfield, GW2 does float across 8 cores, but since those logical HT cores are only about 30-40% the speed of a real physical core, and none of the cores I've seen are heavily loaded, I don't think they are the critical difference between laggy and not. I think that is more the Windows scheduler taking advantage of the extra cores as they are available, because there are threads that could lend themselves to additional parallelism at the expense of disabling Turbo Boost (not that my Bloomfield has much Boost in the first place). You could roughly compare a quad-core Core i7 (with its HT) to a 5-core non-HT chip of the same clock speed (4 physical cores + 4 HT cores at roughly 30% each is about 5.2 cores' worth; Intel doesn't make that, but it's a fairly accurate estimate). So if a quad-core i5 isn't enough, the i7 isn't going to add that much more oomph - basically just one more core's worth of performance.

    I suppose I could turn off HT and disable cores and see if my framerate moves. I can't get into WvW often (damn queue times) to test it there, and that seems to be where most people claim to have problems - although I don't know why that would be significantly different from, say, Lion's Arch, where there are several people in the same tight vicinity. The hard part is that I don't really have a static test case to compare against, so it would be hard to separate small changes in performance from small changes in the environment (fewer people in the area, fewer models loaded, fewer particle effects on screen, etc.).

    There's an interesting article about it here.

  • Quizzical Member LegendaryPosts: 25,499

    One thing to remember is that calling graphics API (DirectX in this case) commands is single-threaded.  Everything else is probably pretty easy to break into as many threads as you want.  If the rendering thread has to do 1/3 of the work, then no matter how many cores you throw at it, you're never going to get more than triple the performance that you would with a single core.
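
    (This is just Amdahl's law.  As a quick worked version of the one-third example, with $s$ the serial fraction and $n$ the core count:)

$$ S(n) = \frac{1}{s + \frac{1 - s}{n}}, \qquad s = \tfrac{1}{3} \;\Rightarrow\; S(n) < \frac{1}{s} = 3 \text{ for all } n. $$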

    That doesn't automatically mean that three cores will triple the performance of one, however.  Neither does it mean that adding a fifth or sixth core won't have any benefits over three or four.  If you break everything else into 20 threads that all run at once at the start of a frame to get everything ready for the rendering thread, they might crowd out the rendering thread so that it mostly has to wait until the others are done.  In that case, more cores means getting the other threads done faster, which improves performance.  But that's really an issue of poor optimization.

    The real goal of optimization is to leave the one thread that does the graphics API calls with as little to do as possible.  Some of the relatively recent innovations that most games still aren't using can help here.  If you want to do particle effects and have to recompute every single particle on the processor every single frame and then upload it to the video card, that's going to be slow.  If you only have the processor compute a little bit of data and then let geometry shaders generate a bunch of particles, that takes most of the work off of the rendering thread.  If used intelligently, tessellation also gives you ways to have the processor only handle a few triangles, and then break it up into a lot more triangles on the video card.

    The problem, of course, is that older API versions (DirectX 9.0c and OpenGL 3.1 and earlier) don't support this.  And if you assume that people have more modern hardware, then either a bunch of people who don't can't play your game at all, or else you have to completely recode a bunch of things twice.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Quizzical
    One thing to remember is that calling graphics API (DirectX in this case) commands is single-threaded.  Everything else is probably pretty easy to break into as many threads as you want.  If the rendering thread has to do 1/3 of the work, then no matter how many cores you throw at it, you're never going to get more than triple the performance that you would with a single core.

    That doesn't automatically mean that three cores will triple the performance of one, however.  Neither does it mean that adding a fifth or sixth core won't have any benefits over three or four.  If you break everything else into 20 threads that all run at once at the start of a frame to get everything ready for the rendering thread, they might crowd out the rendering thread so that it mostly has to wait until the others are done.  In that case, more cores means getting the other threads done faster, which improves performance.  But that's really an issue of poor optimization.

    The real goal of optimization is to leave the one thread that does the graphics API calls with as little to do as possible.  Some of the relatively recent innovations that most games still aren't using can help here.  If you want to do particle effects and have to recompute every single particle on the processor every single frame and then upload it to the video card, that's going to be slow.  If you only have the processor compute a little bit of data and then let geometry shaders generate a bunch of particles, that takes most of the work off of the rendering thread.  If used intelligently, tessellation also gives you ways to have the processor only handle a few triangles, and then break it up into a lot more triangles on the video card.

    The problem, of course, is that older API versions (DirectX 9.0c and OpenGL 3.1 and earlier) don't support this.  And if you assume that people have more modern hardware, then either a bunch of people who don't can't play your game at all, or else you have to completely recode a bunch of things twice.

    The window update is a single thread on DX9, but there have been ways to multithread around that for a long time.

    This is from AMD in 2008
    http://developer.amd.com/gpu_assets/S2008-Scheib-ParallelRenderingSiggraph.pdf

    Not far off from what you describe - you split the grunt work into parallel computations that update an intermediate buffer, then use that buffer to update your single-threaded DX9 object.
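
    A minimal sketch of that intermediate-buffer pattern, with IssueDrawCall() as a hypothetical stand-in for the actual DX9 calls:

```cpp
// Worker threads do the parallel grunt work and record plain-data
// commands into an intermediate buffer; one render thread replays
// them through the single-threaded graphics API.
#include <mutex>
#include <thread>
#include <vector>

struct DrawCommand {           // plain data, safe to build on any thread
    int mesh_id;
    float transform[16];
};

std::mutex buffer_mutex;
std::vector<DrawCommand> buffer;   // the intermediate buffer

void SimulateObject(int id) {      // runs on worker threads in parallel
    DrawCommand cmd{};
    cmd.mesh_id = id;              // real code: animate, cull, transform
    std::lock_guard<std::mutex> lock(buffer_mutex);
    buffer.push_back(cmd);
}

void IssueDrawCall(const DrawCommand&) { /* real code: DX9 calls here */ }

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(SimulateObject, i);
    for (auto& t : workers) t.join();  // grunt work done in parallel
    for (const auto& cmd : buffer)     // one thread owns the API
        IssueDrawCall(cmd);
}
```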

  • Quizzical Member LegendaryPosts: 25,499
    Originally posted by Ridelynn

     


    Originally posted by Quizzical
    One thing to remember is that calling graphics API (DirectX in this case) commands is single-threaded.  Everything else is probably pretty easy to break into as many threads as you want.  If the rendering thread has to do 1/3 of the work, then no matter how many cores you throw at it, you're never going to get more than triple the performance that you would with a single core.

     

    That doesn't automatically mean that three cores will triple the performance of one, however.  Neither does it mean that adding a fifth or sixth core won't have any benefits over three or four.  If you break everything else into 20 threads that all run at once at the start of a frame to get everything ready for the rendering thread, they might crowd out the rendering thread so that it mostly has to wait until the others are done.  In that case, more cores means getting the other threads done faster, which improves performance.  But that's really an issue of poor optimization.

    The real goal of optimization is to leave the one thread that does the graphics API calls with as little to do as possible.  Some of the relatively recent innovations that most games still aren't using can help here.  If you want to do particle effects and have to recompute every single particle on the processor every single frame and then upload it to the video card, that's going to be slow.  If you only have the processor compute a little bit of data and then let geometry shaders generate a bunch of particles, that takes most of the work off of the rendering thread.  If used intelligently, tessellation also gives you ways to have the processor only handle a few triangles, and then break it up into a lot more triangles on the video card.

    The problem, of course, is that older API versions (DirectX 9.0c and OpenGL 3.1 and earlier) don't support this.  And if you assume that people have more modern hardware, then either a bunch of people who don't can't play your game at all, or else you have to completely recode a bunch of things twice.


     

    The window update is a single thread on DX9, but there have been ways to multithread around that for a long time.

    This is from AMD in 2008
    http://developer.amd.com/gpu_assets/S2008-Scheib-ParallelRenderingSiggraph.pdf

    Not far off from what you describe - you split the grunt work into parallel computations that update an intermediate buffer, then use that buffer to update your single-threaded DX9 object.

    Well yes, that's much of the "everything else" that is easy to split into many threads.  Other threads do the work of figuring out exactly where an object is relative to the camera and compute the exact uniforms.  All that the rendering thread sees is that it needs to upload this float as the value of one uniform, this array for another, this matrix for a third, and so forth, then use this texture, that vertex data, some particular program, and anything else that intrinsically involves an API call, and then send the relevant drawing command.

    There may be a little bit of other work that the rendering thread should do, rather than naively spamming whatever it sees show up in the queue that it draws from.  If several consecutive things use the same texture or vertex data or whatever, you don't want to switch from a texture to itself several times in a row, or worse, upload the same data several times in a row.  Calling glUseProgram is expensive in OpenGL, and I'd expect the DirectX equivalent of it to be expensive, too, so it also makes sense to collate drawing calls to do everything that uses a particular program at once if you can.  If there are ways to pull that out of the rendering thread while still getting the full speed boost of getting to skip redundant API calls, and without other threads tripping over each other, I'm not aware of them.
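
    A sketch of that collation idea: sort the frame's queued draws by (program, texture) and skip redundant binds, so the rendering thread makes as few state-change calls as possible. BindProgram, BindTexture, and DrawMesh are hypothetical stand-ins for the real API calls (e.g. glUseProgram):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Draw { int program, texture, mesh; };

void BindProgram(int p) { std::printf("bind program %d\n", p); }
void BindTexture(int t) { std::printf("bind texture %d\n", t); }
void DrawMesh(int m)    { std::printf("draw mesh %d\n", m); }

void Flush(std::vector<Draw>& queue) {
    // Group draws that share expensive state so each bind happens once.
    std::sort(queue.begin(), queue.end(), [](const Draw& a, const Draw& b) {
        return a.program != b.program ? a.program < b.program
                                      : a.texture < b.texture;
    });
    int cur_prog = -1, cur_tex = -1;   // track bound state, skip repeats
    for (const Draw& d : queue) {
        if (d.program != cur_prog) BindProgram(cur_prog = d.program);
        if (d.texture != cur_tex)  BindTexture(cur_tex = d.texture);
        DrawMesh(d.mesh);
    }
    queue.clear();
}

int main() {
    std::vector<Draw> queue = {{2, 7, 0}, {1, 5, 1}, {1, 5, 2}, {2, 7, 3}};
    Flush(queue);  // 4 draws, but only 2 program binds and 2 texture binds
}
```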

    If you want to draw enough things, then eventually, even if all that the rendering thread does is to spam API calls, it will still run poorly, and having a hundred processor cores couldn't help.  Fortunately, "enough" is quite a lot.

  • Quizzical Member LegendaryPosts: 25,499
    Originally posted by ZacKxFair
    It's very simple: if you like to play console ports on your PC, go for Nvidia, since most of those games have PhysX, which AMD doesn't support, so Nvidia users can set the graphics to full. And if you're an MMORPG player, go for AMD: less electricity consumption, and you can run it for more than 3 days.

    GPU PhysX is so rare as to be basically irrelevant.  If you're running PhysX on the processor, then it doesn't matter what video card you have.

    Nvidia's Kepler cards have roughly caught up in energy efficiency.  Indeed, among current generation GPUs, AMD's Tahiti (Radeon HD 7900 series) is the only one with notably worse energy efficiency than the others.

  • Four0Six Member UncommonPosts: 1,175

    You need justification to buy upgraded gear?

    N E V E R

    As soon as you even toy with the idea of an upgrade...the answer is always YES, if you can afford it.

     

    *Thumbs up to getting more gear*

    /silly rant off

  • eddieg50 Member UncommonPosts: 1,809
    I currently have a very similar system to the OP's, but I have a 7870 and it runs smooth as silk.
  • gabbel Member UncommonPosts: 21
    MMOs are mostly more CPU-draining. I tested this in EVE and Darkfall with hundreds of players. A 6870 should be fairly enough for any current MMO. On the other hand, the 1 GB of video memory will limit performance at resolutions above 1920x1080.
  • ShakyMo Member CommonPosts: 7,207
    OP, your system should handle GW2 no problem.

    Unless:
    You are running dual monitors
    You have a bunch of other software running in the background (not a voice app though; it should handle that fine)
    You are expecting flawless performance on the highest settings when there are hundreds of players on screen - you won't get that unless you have an uber system and a very good internet connection

    Sticking a 670 in won't change things much - IMO, your internet is probably the limiting factor.

    Also, GW2 isn't an "Nvidia game" - there's no massive Nvidia logo when it starts up, which is a pretty big clue - and it performs equally well on equivalent cards of either type.

    Some games favour one card type or the other, but not GW2.

    If you tone one thing down for WvW, go with shadows first; that seems fairly CPU-intensive.
  • eddieg50 Member UncommonPosts: 1,809
    Yeah, even with my 7870 I turned down the shadows a bit.
  • Andistotle Member UncommonPosts: 124
    I have the same processor and RAM as you do, but I have the Nvidia GTX 670. The system runs great in all games. I really recommend the GTX 670. It is a very good card, IMO, and it is not so expensive anymore either.