Okay, here's the deal. My system is not a top-of-the-line one, but it fills my needs and I can run most games at the highest settings - only TSW I can't, because my GPUs don't have enough VRAM.
My GPUs are 2x EVGA GTX460 SC (v1.0) running at 763MHz by factory OC, but I can boost them pretty stable up to 850MHz.
My CPU is an AMD Phenom X6 1090T running at 3.2GHz at the moment, and the highest I can set it without playing with voltage is 3.5GHz.
In the nVidia SLI config I can set whether I want PhysX handled on the GPU, the CPU, or automatic. It was set by default to automatic, which most of the time went to the 2nd GPU. But with games requiring more and more GPU power (which is why I'm using the SLI setup), I'm wondering if it's wise to keep PhysX on the GPU instead of on the CPU. I do not know how much faster or slower the GTX460 GPU is compared to the 1090T CPU, if there's a difference at all.
Any insight on this and advice on where to keep PhysX running?
Comments
Have you considered passing me €500 for the latest-gen cards...? The economic crisis doesn't really help my wallet, and you're not really answering my question.
Also, I just played TSW on maxed-out High settings and still got 40 FPS in crowded areas. No reason to upgrade IMO...
If your framerates are fine, what issue are you having?
You don't need the latest video card to get something better than what you currently have. Save some money and buy one generation older.
Well, since I have no money for a new GPU and I want to keep using SLI for the performance I get with these 2 GPUs, I guess the only option left is using the CPU for PhysX. Then the question remains: is a Phenom 1090T 'fast enough' to use for PhysX when running the average game?
Also, you were talking about using a newer GPU and one of the GTX460s for the PhysX. Can you do that, put 2 completely different GPUs in a system without SLI and use one of them for PhysX? It's the 1st time I've heard about that...
Did you know that you really do not get much boost in graphics performance using SLI? You might never even notice any difference if you put the PhysX on the GPU.
Well,
You have a finite amount of processing power, some on your CPU, and some on your GPU.
Each game will use those resources in differing amounts. GW2 and EQ2, for instance, are heavily CPU-bound. Games like Crysis 3 can eat as much GPU power as you can throw at them.
So, Default probably isn't a bad choice. PhysX is optimized by nVidia for nVidia, so for the same effect it will take less from a GPU than it would from the CPU, and hence Default or GPU would normally be your best bet - but if you're in a game that is GPU-bound and have CPU cycles to spare, then it would make sense to throw it on the CPU.
I guess the TL;DR is it's going to vary from title to title - Default will be a safe bet for the most part, but if you notice a lot of GPU bottlenecking throw it over on CPU and see if you get better results.
This is, of course, assuming you know that you're going to have to turn down settings from time to time regardless. Just shifting where your PhysX is being processed (in like the 8 games that support PhysX in the first place - of which I can only think of PS2, BL2, and Batman) isn't going to be some magical cure-all.
Yes, you can absolutely set up a different GPU as a dedicated PhysX card.
http://www.hardwaresecrets.com/article/How-to-Install-and-Configure-a-Dedicated-PhysX-Video-Card/1763
With that said it's not really going to be that much of an advantage at this point in time for the reasons stated by both Quiz and Rid.
Personally in your case I'd just leave it on Auto... The only game I saw you mention by name in your post was TSW (which as far as I know doesn't have any PhysX support.)
The only game I've personally ever played that eventually had GPU PhysX was Planetside 2. (It didn't have it at launch.) Other than that I think I played maybe one game that had any PhysX support at all, and it was definitely not GPU-level.
If a game that uses PhysX is designed to have it run on the CPU, then even if you could force it to run on the GPU and had a spare GPU that wasn't otherwise in use, forcing it to run on the GPU could easily hurt your performance.
In order to benefit from using a GPU, you need to be able to transfer a small amount of data to the video card, do a large amount of SIMD-friendly computations on that data, and then transfer a small amount of data back to system memory. While some physics computations can be made to fit that paradigm, that sort of heavy-duty number crunching would likely overwhelm many players' CPUs if you tried to run it there (as you'll have to for people who don't have an Nvidia card), so games are likely to avoid that if planning on running PhysX on the CPU. On the CPU, you can readily do a lot of branching (e.g., for collision detection), alternate rapidly between physics and non-physics computations, and not worry about scaling to more than a few CPU cores. Try to take perfectly sensible CPU physics code and force it to run on a GPU and it will probably choke and run vastly slower than it would have on the CPU.
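To make that upload/crunch/download pattern concrete, here's a rough sketch in plain CUDA - a toy particle integrator I made up purely for illustration (the kernel and names are mine, not anything PhysX or any game actually exposes):

```cuda
#include <cuda_runtime.h>
#include <cstdlib>

// Toy example of the offload pattern described above: copy data to the
// GPU, run the same branch-free arithmetic across many elements in
// parallel, copy the results back. Not real PhysX code.
__global__ void integrate(float3 *pos, const float3 *vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Every thread handles one particle - exactly the kind of
        // SIMD-friendly work a GPU is good at.
        pos[i].x += vel[i].x * dt;
        pos[i].y += vel[i].y * dt;
        pos[i].z += vel[i].z * dt;
    }
}

int main()
{
    const int n = 1 << 20;                       // a million particles
    const size_t bytes = n * sizeof(float3);

    float3 *h_pos = (float3 *)calloc(n, sizeof(float3));
    float3 *h_vel = (float3 *)calloc(n, sizeof(float3));
    // ... fill h_pos / h_vel with the simulation's initial state ...

    float3 *d_pos, *d_vel;
    cudaMalloc(&d_pos, bytes);
    cudaMalloc(&d_vel, bytes);

    // 1. transfer the input to the video card
    cudaMemcpy(d_pos, h_pos, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_vel, h_vel, bytes, cudaMemcpyHostToDevice);

    // 2. do a large amount of parallel number crunching on it
    integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, 1.0f / 60.0f, n);

    // 3. transfer the results back to system memory
    cudaMemcpy(h_pos, d_pos, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_pos);
    cudaFree(d_vel);
    free(h_pos);
    free(h_vel);
    return 0;
}
```

Code that's heavy on branching, or that has to ping-pong between the CPU and GPU every few operations, loses the benefit of step 2 and just pays for steps 1 and 3 over and over.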
And that's for people who have a powerful, spare Nvidia video card that wouldn't otherwise have been used. There are very, very few such gamers. If you want to run the PhysX code on the same GPU as is handling graphics, then you pay a heavy context switching penalty when the video card has to stop everything, switch from DirectX or OpenGL to PhysX, do some PhysX computations, then stop everything again to switch back.
That's one of the reasons why, on technical merit, using GPU PhysX is a completely wacky idea. Geometry shaders and the tessellation stages offer enough versatility that you'd likely have been able to fit much of the GPU physics you wanted into the normal graphics pipeline without the heavy penalty for context switching. Compute shaders, as offered in both DirectX 11 and OpenGL 4.3, offer additional versatility by letting you stick GPU computations anywhere you want, again without the context switching penalty. Even if that's not enough versatility to do what you want (in which case, what you want likely wouldn't be GPU-friendly at all, whether PhysX or otherwise), why not use the full versatility of DirectCompute or OpenCL to do a lot more than PhysX can - and also run well on AMD cards in addition to Nvidia?
GPU PhysX is basically a combination of a tech demo and a marketing stunt. It's not something that should be taken seriously for gameplay purposes.