For a game with the concurrency New World gets, it's to be expected.
Also, what has been shown is that the cards that died were defective to start with; New World (the uncapped-FPS menu, for example) just created the right conditions to trigger an issue that was already there.
What is spooky, though, is seeing crazy expensive high-end cards go poof without even their failsafes doing what they are supposed to do. Oof.
You should be able to run a video card at full throttle indefinitely and not have any overheating issues.
This is obviously a manufacturing issue, not anything to do with New World.
No. Consumer cards aren't built to run all out indefinitely. Vendors generally assume that if a card dies after several thousand hours of heavy use, that's fine. The professional video cards that are built to be able to run all out indefinitely often (but not always) use the same chips, but with lower clock speeds and voltages to be able to handle the long-term heavy load reliably.
But the card should run indefinitely without overheating issues within its limited lifespan.
Ultimately all products have a limited lifespan and will eventually break. Some sooner and some later. That's a different issue.
There's no excuse for a video card to run at 70 C while sitting in a queue, just like there's no excuse for a video card to run at 70 C while idle at desktop. Sitting in a queue should be almost the same as idle at desktop.
Except the queue page actually renders a game scene in the background, with a huge landscape and every blade of grass and tree leaf moving... but then you'd have to play the game to know that.
But that's my point: it shouldn't do that. It's stupid to do that. There's no sane reason to push your hardware hard for hours on end when it doesn't matter. It's ridiculous to make your video card go all out when you're just sitting in a queue and probably AFK, or at least tabbed out to another program.
There's no excuse for a video card to run at 70 C while sitting in a queue, just like there's no excuse for a video card to run at 70 C while idle at desktop. Sitting in a queue should be almost the same as idle at desktop.
Except the queue page actually renders a game scene in the background, with a huge landscape and every blade of grass and tree leaf moving... but then you'd have to play the game to know that.
You should be able to turn something like that off, though; or at least, once it was clear what was happening, that option could have been put in by now.
You should be able to run a video card at full throttle indefinitely and not have any overheating issues.
This is obviously a manufacturing issue, not anything to do with New World.
No. Consumer cards aren't built to run all out indefinitely. Vendors generally assume that if a card dies after several thousand hours of heavy use, that's fine. The professional video cards that are built to be able to run all out indefinitely often (but not always) use the same chips, but with lower clock speeds and voltages to be able to handle the long-term heavy load reliably.
But the card should run indefinitely without overheating issues within its limited lifespan.
Ultimately all products have a limited lifespan and will eventually break. Some sooner and some later. That's a different issue.
The problem with this argument is that what New World is doing is ramping your card to the extreme during what should be a low-to-no-impact process. Sitting in a login queue should in no way be pushing your video card to its limits. That makes as much sense as a high-performance race car being forced to keep the pedal to the metal while sitting in the parking lot.
There's no excuse for a video card to run at 70 C while sitting in a queue, just like there's no excuse for a video card to run at 70 C while idle at desktop. Sitting in a queue should be almost the same as idle at desktop.
Except the queue page actually renders a game scene in the background, with a huge landscape and every blade of grass and tree leaf moving... but then you'd have to play the game to know that.
I think the constant and extreme waving of every leaf and blade of grass puts a workload on the GPU. New World has almost frantic movement, supposedly from wind.
I've complained about this before; as a sailor, I notice and pay attention to wind. Where is the wind coming from? In almost every computer game, they've just opted to wave stuff around randomly to simulate wind; there is no actual wind and it isn't coming from any direction.
New World should scale way back on all this frenetic movement, or allow us a slider option to do it ourselves.
I've a really hard time believing NW is frying anyone's video card.
When I've tried to run demanding games, either the settings have to be low or the FPS craters. I've never had a card just blow up.
This sounds like marketing to me.
"New World takes no prisoners... IT WIL LFRY YOUR cARD BIOTCHES!"
I've had two games recently overheat my video card. The Ship of Heroes beta got to 200 FPS, and the GPU was exceeding 100C. New World does the same thing: the FPS is too high, and the card overheats.
I've a really hard time believing NW is frying anyone's video card.
When I've tried to run demanding games, either the settings have to be low or the FPS craters. I've never had a card just blow up.
This sounds like marketing to me.
"New World takes no prisoners... IT WIL LFRY YOUR cARD BIOTCHES!"
There are way more demanding games, notably ones with ray tracing, and I played those for hours with the graphics card's fan blowing, because the card ran 20°C hotter than it does in New World, without any issues. All this "no, a consumer graphics card cannot run at max for hours" bullshit amuses me. These are cards for damned GAMERS; they SHOULD be able to run for hours under high load. At least mine does.
There are two issues here: reliability and simple existence.
For reliability, most GPU chips are specced for 100C (the boiling point of water) 24 hours a day for 5 years. The chip has to be able to run at that temperature for that duration without faults. Anything over 100C can cause failures earlier, mainly due to metal electromigration.
For existence, you can probably run quite a bit hotter than that, for shorter periods. Up until the point that the material actually melts. For example, let's say you want to run at 115C (approx 240 degrees Fahrenheit), but you only use it 12 hours a day for 5 years. That is probably feasible.
So for me, overheating means 100C or more. Do we know what temps the cards are at when they die?
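If anyone wants actual numbers rather than guesses, here is a rough sketch of how you could log your own card's temperature once a second while playing. It assumes an NVIDIA card with the nvidia-smi tool on the PATH and a POSIX popen(); AMD owners would need a different utility, and none of this is anything official, just a quick logger.

// Rough sketch: log the GPU core temperature once per second via nvidia-smi.
// Assumes an NVIDIA card, nvidia-smi on the PATH, and POSIX popen()
// (on Windows you would use _popen instead).
#include <chrono>
#include <cstdio>
#include <ctime>
#include <thread>

int main() {
    for (;;) {
        FILE* p = popen("nvidia-smi --query-gpu=temperature.gpu "
                        "--format=csv,noheader,nounits", "r");
        if (!p) return 1;
        char temp[32] = {0};
        if (fgets(temp, sizeof temp, p)) {
            // Print "unix_time,temperature_in_C"; fgets keeps the newline.
            std::printf("%ld,%s", static_cast<long>(std::time(nullptr)), temp);
            std::fflush(stdout);
        }
        pclose(p);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

Leave that running in a second window, redirect it to a file, and you at least have a temperature trace to look at instead of arguing over whether the card was at 70C or 100C when it died.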
There's no excuse for a video card to run at 70 C while sitting in a queue, just like there's no excuse for a video card to run at 70 C while idle at desktop. Sitting in a queue should be almost the same as idle at desktop.
Except the queue page actually renders a game scene in the background, with a huge landscape and every blade of grass and tree leaf moving... but then you'd have to play the game to know that.
But that's my point: it shouldn't do that. It's stupid to do that. There's no sane reason to push your hardware hard for hours on end when it doesn't matter. It's ridiculous to make your video card go all out when you're just sitting in a queue and probably AFK, or at least tabbed out to another program.
It's only a problem (if you can call it that) because there are queues. But yeah, they could freeze the background when the queue dialog appears.
If being stuck in a queue would instantly blue screen your computer, that would also only be a problem if there were queues. When you're writing the code to handle what the game will do if there are queues, you really should assume that there are queues.
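And handling it is not a big ask. Here is a minimal sketch of the kind of thing the posts above are asking for; inQueue() and renderFrame() are made-up stand-ins, not anything from New World's actual code. The loop simply sleeps away most of each frame while you are queued, so the card sits close to idle.

// Minimal sketch of "don't run the card flat out in the queue".
// inQueue() and renderFrame() are stubs for illustration, not real game code.
#include <chrono>
#include <cstdio>
#include <thread>

static bool inQueue() { return true; }        // stub: pretend we're stuck in the queue
static void renderFrame() { /* draw whatever is on screen */ }

int main() {
    using clock = std::chrono::steady_clock;
    const auto queueFrameTime = std::chrono::milliseconds(33);  // ~30 FPS is plenty for a queue screen
    for (int frame = 0; frame < 300; ++frame) {                 // a few seconds of "queue" for the demo
        const auto start = clock::now();
        renderFrame();
        if (inQueue()) {
            // Sleep away the rest of the frame so the GPU and CPU sit mostly idle.
            std::this_thread::sleep_until(start + queueFrameTime);
        }
        // During normal gameplay we'd fall through and render at whatever rate
        // vsync or the user's FPS cap allows.
    }
    std::printf("demo finished\n");
    return 0;
}

Whether they cap the queue screen at 30 FPS, show a static image, or just respect vsync in menus barely matters; any of those would keep the card from running flat out for hours while nobody is even looking at it.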
I've a really hard time believing NW is frying anyone's video card.
When I've tried to run demanding games, either the settings have to be low or the FPS craters. I've never had a card just blow up.
This sounds like marketing to me.
"New World takes no prisoners... IT WIL LFRY YOUR cARD BIOTCHES!"
There are way more demanding games, notably ones with ray tracing, and I played those for hours with the graphics card's fan blowing, because the card ran 20°C hotter than it does in New World, without any issues. All this "no, a consumer graphics card cannot run at max for hours" bullshit amuses me. These are cards for damned GAMERS; they SHOULD be able to run for hours under high load. At least mine does.
It's not high frame rates that push a video card hard. I've run a video card at many thousands of frames per second and it was fine. For that experiment, each frame drew very little.
Similarly, it's not extremely demanding frames that push a video card hard. If you double the amount of work that each frame asks a video card to do, that can easily cut the frame rate in half and leave the card working exactly as hard as before.
It's not even the amount of movement that causes cards to work harder. In most cases, every frame is drawn completely independently of all the others, though there are a handful of exceptions for certain recent effects. How many objects you have to draw and how much of the screen they cover can make a huge difference to the workload, but whether or not they're moving doesn't matter. Sometimes immobile objects can have their workload reduced by precomputing certain things, such as having lighting effects built into the textures rather than recomputing them each frame.
What does affect the workload is keeping more of the chip active more of the time. The full details of that are probably only understood by the people who design the particular chips, as the details will vary from one chip to another. But it's easy enough to explain a simple example.
GPUs have a number of different components that are meant to work in parallel. Some will burn a lot more power than others. Keeping more components busy more of the time, and especially the higher power components, will push the card harder.
Let's take a simple example. Suppose that a GPU had two types of parts, which we'll call A and B. And let's suppose that the ratio of how much of A per B a game uses is fixed by each game, and doesn't vary with time at all. A given GPU has 256 of A and 32 of B. If game 1 needs 16 times as much of A as B, then it will keep the 256 of A active all of the time, but on average, only 16 of B. If game 2 needs 4 times as much of A as B, then it will keep 32 of B active all of the time, but only 128 of A on average. If game 3 needs 8 times as much of A as B, then it can keep all of both A and B mostly active most of the time. Game 3 will then tend to push that GPU harder than games 1 or 2.
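If it helps, here is that same toy example as a few lines of code, just to make the arithmetic explicit. The 256/32 split and the per-game ratios are the made-up numbers from above, not measurements from any real GPU.

// Toy arithmetic check of the A/B example above: 256 A units, 32 B units,
// and three games with different A:B demand ratios. Numbers are invented.
#include <algorithm>
#include <cstdio>

int main() {
    const double unitsA = 256, unitsB = 32;
    const double ratios[] = {16, 4, 8};  // game 1, game 2, game 3: A work per unit of B work
    for (double r : ratios) {
        // Scale the workload up until one of the two pools is saturated.
        const double busyB = std::min(unitsA / r, unitsB);
        const double busyA = busyB * r;
        std::printf("A:B ratio %2.0f -> %3.0f of %3.0f A units busy, %2.0f of %2.0f B units busy\n",
                    r, busyA, unitsA, busyB, unitsB);
    }
    return 0;
}

It prints the three cases above: game 1 saturates the A units but leaves most of the B units idle, game 2 saturates B but only half of A, and game 3 keeps both pools busy, which is why it pushes that GPU the hardest.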