I am studying architecture, which basically requires gaming specifications. I chose a 15.6" screen because the power usage is significantly less.
Yeah, the battery probably lasts longer, and if you don't want to always carry your cable around I think it's a good choice. My laptop has a 17" screen and lasts 9 hours if I'm just working, and 5 hours with heavy video, work, and gaming, lol.
It all depends how you configure your laptop (balanced or high performance), and you can always adjust the screen brightness and all.
~The only opinion that matters is your own. Everything else is just advice.~
OP - don't listen to these fools who question why you want what you want. You said you want a gaming laptop that is $1500. I will give you the advice you seek without stupid questions :P
The comp you linked:
http://www.cyberpowerpc.com/system/Xplorer_X6-9500_Gaming_Notebook/
is probably the best bang for your buck that you will get. The only thing I'd recommend is upgrading the SSD to:
128 GB ADATA S501 V2 SATA III 6.0G/s Gaming MLC Solid State Disk (Single Hard Drive)
That is a 3rd generation SSD that is screaming fast, and even at 120 GB, space will go fast if you are gaming and loading up school programs, etc.
The 2nd drive is necessary for downloading music, movies, etc., so good call on that.
The graphics card is one of the most important parts of a laptop if you want to game on it for a few years. The GTX 560M is quite nice.
For wireless you also chose the Intel® Centrino® Ultimate-N 6300 a/b/g/n Wireless Adapter [Intel WiDi Ready] - good job, that is a must on gaming laptops. Never skimp on the Wi-Fi on a gaming laptop.
If I were you and had that budget, that is definitely the laptop I'd get. If, by some freak accident, you get some extra money to spend on a laptop, I'd look at the Sager you linked, and on that one bump the graphics card to the GTX 485M and again make sure to get the Ultimate-N 6300 Wi-Fi card. It's a bit more expensive, but the GTX 485M is decently better than the 560M (worth $300 more if you're spending that much, IMO). It will help you play the latest games for longer down the road. The SSD isn't absolutely necessary, but dear lord does it make a difference in loading times, starting apps, and booting up. I went SSD with my Clevo and I will never look back.
P.S.
Don't expect to get more than an hour or two out of those laptops. You want a gaming laptop, kiss battery life goodbye, but you still have ease of mobility as long as power is around, and most of the time it is, heh.
There are so many things wrong with that post.
First, it's not possible to make useful recommendations without knowing how the laptop will be used. And the original poster still isn't saying. He might not even know himself.
Next, what SSD is that? From A-Data's web site, it looks like it's a Marvell controller. Crucial and Intel also sell SSDs based on a Marvell controller, but they have to write their own firmware for it. Writing SSD firmware is hard. Marvell tried to write their own firmware, and it was a disaster. Do you really trust A-Data to find good firmware for the SSD, when they don't have access to either Crucial's or Intel's, which are the only two demonstrably good firmwares for that controller? A-Data's history of shenanigans with SSDs is not encouraging.
The GeForce GTX 560M offers poor performance per watt. The only real reason to consider it at all is that laptop vendors other than Alienware don't seem to want to use the Radeon HD 6870M.
The GeForce GTX 485M has no such justification, as sites that offer the GTX 485M almost invariably offer the Radeon HD 6970M, which offers about the same performance while using far less power. The Radeon HD 6970M also costs a lot less.
From Sager, it's only a $65 upgrade from the GeForce GTX 560M. That's maybe 60% more performance at a cost of increasing the price tag by about 4%. If you're not willing to pay that, then you shouldn't be looking at $1500 gaming laptops.
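To put numbers on that, here's a quick Python sketch using the rough figures above (the ~60% performance gap is an estimate, not a measured benchmark):

base_price = 1500.00   # approximate price of the laptop with the GTX 560M
upgrade_cost = 65.00   # Sager's charge for the Radeon HD 6970M upgrade
perf_gain = 0.60       # roughly 60% more GPU performance, per the estimate above

price_increase = upgrade_cost / base_price
print(f"Price increase: {price_increase:.1%}")  # ~4.3%
print(f"Performance per dollar vs. the base config: "
      f"{(1 + perf_gain) / (1 + price_increase):.2f}x")  # ~1.53x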
The Cyber Power PC model doesn't offer the Radeon HD 6970M at all. I guess it does try to compensate for that by being cheaper. The default configuration is awful, though. You absolutely do not want a st-st-st-stuttering JMicron SSD. That's not at all similar to an SSD that is actually good.
Sorry I have not been very clear, Quizzical. I will be using the laptop as follows:
1) For graphical architecture programs like AutoCAD
2) Plugged into the wall in my dorm room for gaming
3) Taking notes during class (not positive if I will have a place to plug in)
Thank you for your patience, Quizzical, and everyone else, in helping me. I am not a computer whiz, so bear with me! I understand your point about the Radeon 6970M vs. the Nvidia card. I need to bring battery consumption down while still keeping reasonable meat in the machine. Does all of this help clarify anything I may have confused you with before? If not, what else do you need to know?
Thank you all!
If someone wants a gaming laptop, you would expect them to buy the best that they can; you don't need to ask what they are doing. Gaming is one of the most demanding things on a laptop (it can be equal to video editing, data crunching, etc.). Unless he is just word processing, then obviously he doesn't need a nice machine, but he WANTS one... so who gives a **** what he is doing with it. Best bang for the buck is... best bang for the buck. At $1500 he will be fine with school stuff and gaming.
You are right on the SSD - I assumed that was an Intel as the site I buy custom laptops from (malibal.com) offers it. So OP may be better off buying their own and installing it :P
The 560M is definitely not the best card out there; that's why I say specifically that the 485M is a better route to go, but it costs more and pushes him over budget. If you have a budget, what's the point in going $300 over it?
To be fair, I stay away from Radeon personally. I have had terrible luck with them in desktops, but perhaps their mobile versions are better. Nvidia drivers just tend to work better for me. Not once have I had a problem with them, even doing more "advanced" things like adjusting resolution for TVs, etc., whereas I have had a lot of problems with AMD's drivers doing the same things.
Just looked up benchmarks of the Radeon 6970M and it's indeed a bit better than the 560M, but the 485M completely smashes both - it's definitely in a higher price bracket though, lol.
For 1), will that be in class, or only in your dorm room? With any higher end gaming card, graphically intensive work cannot be done on the battery. Period. A lower end computer that is lighter on power consumption could do some graphically intensive work on the battery, though the battery won't last very long that way.
If you are only going to do high performance stuff in your dorm room, and not also in class, then you'd be far better off with a cheap laptop plus a gaming desktop. I really don't know what you're going to do with the laptop in class. If a student uses a laptop in a course I teach, I usually assume that the student isn't paying attention. That might be different for architecture courses. I guess it's easier to take notes on a laptop when it's just text than when you need complex formulas, unusual symbols, diagrams, or that sort of stuff. Few students know LaTeX well enough to take notes in it on the fly.
So let's consider the option of both a gaming desktop in your room, and also a cheap laptop that you take to class. When in your room, you'd have a vastly nicer computer to use. You'd have a faster processor, faster video card, better keyboard, and better monitor. It would be easier to keep cool, and the heat would be released off to the side and out of the way rather than right underneath the keyboard. It would be ergonomically a lot better, too. It would be more reliable, and easier to upgrade if you decide you need more performance later.
So what about the cheap laptop in class? That would be smaller and lighter, so it's easier to carry to and from class. It would have a battery that could last through all of your classes all day long without needing a recharge until you get home, rather than trying to recharge between classes and having the battery die every day if you have classes in consecutive time periods. It would stay cooler and run quieter, as you could get a laptop that released virtually no heat at idle, which is the situation if you're taking notes on it.
You could also use the laptop as a backup in case your desktop has problems, so that you're not stuck without a computer entirely. A laptop is more likely to die than your desktop would be, however. And getting both the gaming desktop and also the cheap laptop would probably be cheaper than getting a $1500 gaming laptop, too.
That's an awful lot of advantages to a combination of a cheap laptop plus a gaming desktop. The one big disadvantage is if there is a situation where you can plug in your laptop and need something with high performance graphics, and this situation occurs while in class or while otherwise away from your room and without access to a university-owned computer. For this, however, even a cheap Llano A4-based system would at least get you something functional, with graphical performance not that far shy of most of the laptops that the university recommends and will sell you directly.
-----
"If someone wants a gaming laptop, you would expect them to buy the best that they can, you don't need to ask what they are doing."
Except that what is "best" depends heavily on what they're doing. If battery life, size, weight, noise, temperatures, and/or reliability matter, then trying to cram high end gaming hardware into that may be a poor choice, even apart from the price tag.
"560M is defintely not the best card out there, thats why I say specifically get the 485M is a better route to go, but it costs more and pushes him over budget. If you have a budget, whats the point in going 300 over it?"
What if you could get the performance of the GeForce GTX 485M for a price tag only slightly higher than that of the GeForce GTX 560M and power consumption well between them? Good deal, no? If he gets the Sager, then that is an option, and it's called a Radeon HD 6970M.
"To be fair, I stay away from Radeon personally. I have had terrible luck with them in desktops, but perhaps their mobile versions are better. Nvidia drivers just tend to work better for me."
And how long ago was this? In recent years, AMD and Nvidia have been about as good as each other on desktop Windows drivers. AMD has also unified their desktop and laptop drivers, so if you're not trying to use discrete switchable graphics or buying from a company that disables driver updates, the laptop drivers should be as good as the desktop ones.
"Just looked up benchmarks of the Radeon 6970M and it's indeed a bit better than the 560M, but the 485M completely smashes both, but it's defintiely in a price bracket higher lol."
The Radeon HD 6970M performs about the same as the GeForce GTX 485M in games. Getting a proper apples to apples comparison in laptops is hard, as you can't just swap out a video card and use the same machine as before, the way you can in desktops. But you can get a pretty good approximation from the analogous desktop cards.
A Radeon HD 6970M is basically the same hardware as a desktop Radeon HD 6850. A GeForce GTX 485M is basically the same hardware as a desktop GeForce GTX 560 Ti, though the latter is technically a respin of the die used in the former. A GeForce GTX 560M is basically the same hardware as desktop GeForce GTX 550 Ti. A Radeon HD 6850 offers maybe 80% of the performance of a GeForce GTX 560 Ti, while a GeForce GTX 550 Ti offers maybe 55% of that performance. A clear win for the higher end Nvidia card, right?
Not so much when you look at clock speeds. A GeForce GTX 560 Ti has a core clock speed of 822 MHz and a memory clock speed of 1001 MHz. A GeForce GTX 485M cuts those to 575 MHz and 750 MHz, respectively. That means you get less than 75% of the memory bandwidth and 70% of everything else. Meanwhile, a Radeon HD 6850 has clock speeds of 775 MHz core and 1000 MHz memory. A Radeon HD 6970M only cuts these to 680 MHz and 900 MHz, respectively. That means you get 90% of the memory bandwidth and 88% of everything else. Multiply those by the desktop Radeon card getting about 80% of the performance of the desktop GeForce card and you get that the GeForce GTX 485M and the Radeon HD 6970M are essentially tied.
Meanwhile, the GeForce GTX 550 Ti is clocked at 900 MHz core and 1026 MHz memory. The GeForce GTX 560M cuts those to 775 MHz and 625 MHz, respectively. That means you get about 61% of the memory bandwidth and 86% of everything else. The memory bandwidth for the desktop card is actually exaggerated by the mismatched memory channels, but the laptop GeForce GTX 560M still loses more of the performance of the desktop card than the Radeon HD 6970M does.
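To make that arithmetic concrete, here's a quick Python sketch of the estimate (a crude model: take the rough desktop performance ratios above, with the GTX 560 Ti as 1.0, and multiply by how much of the desktop core clock each laptop part keeps; these are estimates, not benchmarks):

# name                          (desktop perf, core clock scale, memory clock scale)
cards = {
    "GTX 485M (vs GTX 560 Ti)": (1.00, 575 / 822, 750 / 1001),
    "HD 6970M (vs HD 6850)":    (0.80, 680 / 775, 900 / 1000),
    "GTX 560M (vs GTX 550 Ti)": (0.55, 775 / 900, 625 / 1026),
}
for name, (desktop_perf, core_scale, mem_scale) in cards.items():
    estimate = desktop_perf * core_scale
    print(f"{name}: ~{estimate:.2f} of a desktop GTX 560 Ti "
          f"(memory bandwidth at {mem_scale:.0%})")

That prints about 0.70 for both the GTX 485M and the Radeon HD 6970M, and about 0.47 for the GTX 560M - which is where "essentially tied" comes from, and why the 6970M lands roughly 50% ahead of the 560M.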
Meanwhile, in a desktop, the Radeon HD 6850 and GeForce GTX 550 Ti have comparable power consumption. What happens when you give them comparable underclocks for a laptop? My guess is that the GeForce GTX 560M probably uses less power than the Radeon HD 6970M, largely because the latter is loaded up with 2 GB of video memory. But the fact that the Radeon HD 6970M's power consumption is comparable to that of a laptop card with 60% of its performance, rather than one with nearly the same performance, is a huge win for AMD in performance per watt.
Maybe you found some synthetic benchmark that is highly favorable to Nvidia's architecture and makes the Radeon HD 6970M perform more like a GeForce GTX 560M and less like a GeForce GTX 485M. Or maybe you were looking at power consumption numbers. (Just kidding.) But that's not typical of actual gaming performance.
I have an extra GeForce GTX 460 and 16GB (4x4GB) Corsair RAM lying around my closet... I contacted a local computer shop to see what kind of rig they could build me for a reasonable price. Hopefully I will be able to get both a desktop and a laptop.
Thank you for your time and knowledge!
In games, there are some where it does better and others where it's only slightly better, but it basically wins in everything.
It's been about 2 years since I used a Radeon at work (silly, but we build so-so gaming desktops in our conference rooms for random fun on our conference room LCDs), so it's very possible that they have worked out their kinks. AMD has been sucking for a long time, so I would love to see them compete with Nvidia personally (which it sounds like they are, from your words, as I realize you are very in the know -- not kidding). So perhaps when I build my new desktop in the Sept-Nov timeframe I will give AMD a look again. I used to have the ATI Radeon 9800 Pro back in the day and loved it, but a ton has changed since then.
"I contacted a local computer shop to see what kind of rig they could build me for a reasonable price."
Careful, as they might well massively overcharge you, or use cheap junk parts that will make the system unreliable. For a desktop, the ideal thing is to build your own, especially if you already have some of the parts. If you have the parts, then assembling them is easier than you might think. If you don't know what parts to get, then I could help with that. It looks like your university will sell you a Windows 7 Ultimate license for $80, and you might need to build your own to take advantage of that.
If you can't or won't build your own, then the next best thing is getting one built to order from a site that will tell you exactly what parts they'll use. I guess you could get one with integrated graphics, and then add your own video card. You'd have to pick a case and power supply around the video card that you're going to add, though. If you buy a prebuilt computer, the case and power supply won't be able to handle your discrete card. For integrated graphics, that's most easily done with an AMD 880G or Intel Z68 chipset. For memory, you could find somewhere that will ship it with a single 2 GB module, then pull it out and put your own memory in.
Though if you happen to have a video card and 16 GB of system memory laying around, I'd somewhat suspect that you'd have peripherals, too, and that brings down the cost if you don't have to buy those.
"In games, there are some where it does better and others where it's only slightly better, but it basically wins in everything."
Are you looking at the game benchmarks on that link? Because that's not what I see there. I see nine situations where either the Radeon HD 6970M or the GeForce GTX 485M are below 60 frames per second, and both are listed. Each wins four of those, and they tie in the other one.
Or are you basing your recommendation on the belief that 200 frames per second is somehow better than 150, even though your monitor can only display 60?
So far, you're giving out a lot of contradictory information. You make it sound like you don't have the slightest clue what you're going to do with the laptop, but just want to get a gaming laptop and hope it works for whatever you decide to use it for later. There's a significant chance that it won't, and then you'll be stuck with a $1500 laptop that you have to use like a poor quality, overpriced desktop, and still have to go buy some other laptop for whatever you needed a laptop for in the first place.
So let's see if you know what you're going to use the laptop for. Where are you going to use it? In your dorm room? In a classroom? At your parents' house? On an airplane, train, or bus? Outside sitting on the grass?
Next, what are you going to use it for in each place? The demands of gaming are very different from the demands of e-mail and web browsing. Finally, will you have the option to plug it in in each place that you use the laptop, or will it have to run from the battery?
If you don't know how you're going to use it, then it's not possible to determine what you should get.
Well, according to the requirements, he needs a laptop with enough power to use AutoCAD, 3D Studio Max, Adobe Creative Suite (Photoshop and Illustrator), and Microsoft Office. Now, those are fairly hefty applications (especially 3DS Max), and if he can run those on a laptop, he shouldn't have any problems playing games on the same 15.6" screen (1440x900 most likely).
He's in a fairly unique situation where his laptop won't just be used for taking notes. He will likely need to actually do work on the laptop during collaboration and class projects, so a Brazos laptop is out of the question. Fortunately, I think Llano has enough processing power to run the required applications and still allow for the battery life needed for portability. A Llano with 8 to 16 gigs of RAM and an SSD may be his best option for the overall best tool that meets his requirements.
"Or are you basing your recommendation on the belief that 200 frames per second is somehow better than 150, even though your monitor can only display 60?"
Regardless of what a monitor can display or the human eye can perceive, power is power. I would imagine something that can put out 200 fps would last a bit longer than a card that can do 150, no? :P
Can't believe you would argue that 200 fps isn't better than 150 fps, lol. Just kinda common sense there, but I am sure you will find an argument for that too! Honestly, I valued your opinion a bit more until you said that, lol.
Suppose that at certain graphical settings, card A can do 200 frames per second, and card B can do 150 frames per second. If you increase the graphical settings a lot, then card A can do 40 frames per second and card B can do 50 frames per second. Which card is better? If you take a blind average of synthetic benchmarks, you'll prefer card A. If real-world performance is all that matters, you'll prefer card B. So which is it?
I say that the difference at the low settings doesn't matter, but the difference at the high settings does. Card B is better, because it lets you turn settings higher while keeping adequate frame rates.
This isn't random. If you look through reviews for cards from the last two generations, you'll find many, many cases where a given GeForce card beats a given Radeon card at low settings when they're both fast enough that the difference doesn't matter, but the Radeon card wins at higher settings when they're slow enough that the difference does matter. You'll find very few cases where the reverse happens.
And it's not a fluke that it's this way, as it's due to architectural differences. Higher settings put more additional stress on some parts of a GPU than others. AMD intentionally beefed up the parts that will take additional stress at higher settings, while merely saying that substantially over 60 frames per second before you hit a bottleneck in all real games is good enough for the parts that don't take much additional stress at higher settings. Nvidia did this to some degree, but didn't do it as well as AMD did.
The most famous example of this is tessellation. A GeForce GTX 580 has 16 tessellation units (part of the "polymorph engines"). Even a GeForce GTX 560M has four. Meanwhile, a top of the line Radeon HD 6970 only has two. AMD's top of the line card from the previous generation, the Radeon HD 5870, only had one. A synthetic tessellation benchmark can tell the difference quite clearly, and Fermi cards absolutely destroy Evergreen and Northern Islands cards there.
Yet even the tessellator in the Radeon HD 5870 can process hundreds of millions of triangles per second. At a resolution of 1920x1080, 60 frames per second, and tessellating to the degree that you have a separate triangle for each pixel on the screen, you're barely over a hundred million triangles per second. The hardware tessellator in even the bottom of the line Radeon HD 5450 would laugh at that workload.
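For the skeptical, that arithmetic is a one-liner in Python:

# One triangle per pixel at 1920x1080, 60 frames per second:
print(f"{1920 * 1080 * 60:,} triangles per second")  # 124,416,000 - barely over 100 million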
Yet if a real game tried to tessellate to that degree, every card on the market would completely choke because other hardware can't deal with the rasterization workload. Cypress, Cayman, Barts (Radeon HD 6970M, among others), GF104 (GeForce GTX 485M, among others), and GF114 (GeForce GTX 580M, among others) all have two raster engines, and they're all clocked somewhat similarly. When it comes time to actually process the triangles that you've created by tessellation in real games, Fermi's tessellation advantage is gone. Fermi's advantage only appears if you tessellate to the degree that you discard the overwhelming majority of the triangles without using them. That's something a synthetic tessellation benchmark might do, but is completely stupid for real games, as it will cripple performance without making the game look any better. Which is why real games don't do that.
-----
"well, according to the requirements, he needs a laptop with enough power to use autocad, 3D studio max, adobe creative suite(photoshop and illustrator), and microsoft office. now, those are fairly heafty applications(especially 3DS max) and if he can run those on a laptop, he shouldnt have any problems playing games on the same 15.6" screen.(1440x900 most likely)"
There's a question of whether he needs to have a laptop, and also run all of those programs on his own computer. Or whether he needs to run all of those programs on the laptop in particular, while in class. If the latter, there's also the question of whether there's a place to plug the laptop in, or whether it will have to use the battery. If the programs don't need to be run that intensively on the laptop, then a Llano A4-based laptop for $500 will work just fine.
If he needs to run those graphical programs in class on the battery, then a high end gaming laptop simply isn't an option. It won't run graphically intensive stuff on the battery at all. In that case, he'd have to get a lower end gaming laptop, and then would also want a desktop for gaming. Llano is again easily the best option for the laptop there.
If he needs to run the graphical programs in class but can plug in the laptop, then the high end gaming laptop would work. There would still be a pretty good case for getting both the Llano-based laptop and also a gaming desktop. But the high end gaming laptop makes more sense in this situation. The university's officially recommended laptops don't include any high end gaming laptops, however.
"Careful, as they might well massively overcharge you, or use cheap junk parts that will make the system unreliable. For a desktop, the ideal thing is to build your own, especially if you already have some of the parts."
That possibility certainly exists, but I would say that a good mom-and-pop computer shop is a good alternative to building it yourself, and much better than the lower end build-your-own sites. Especially if they will just put all your own parts together for a nominal fee ($50-100): you can pick out each and every part, and oftentimes they will be willing to service it as well.
Now you are just confusing me - are you arguing for the 485M or the Radeon? Because basically every max settings benchmark shows the 485M owning the Radeon 6970M :P
Glad we agree - I don't even bother looking at benchmarks below high and ultra, 'cause what's the point? Your point is exactly why the 485M is better, lol. I am guessing you didn't even look at the benchmarks, but eh, whatever. Apparently I don't sit as high on my horse as you ;P
He is saying that anything above 60 FPS does not matter (and it really doesn't), so he is only looking at benchmarks which compare both the 485M and the 6970M AND have settings where the 6970M drops below 60 FPS.
Now, where exactly do you see "every" max settings benchmark for the 485M owning the 6970M? In Metro 2033, the 485M averages 48.1 FPS on high whereas the 6970M averages 50 FPS. Now, increase the settings to ultra, and the 485M manages to pull 16 FPS while the 6970M gets 18 FPS. Examples aren't proofs, but this one example already disproves your statement.
High and ultra don't matter. What you need to look at is framerates. The difference between 20 and 25 FPS is notable. The difference between 150 and 200 FPS is not. So what if the 485M gets you those extra 50 FPS on relatively light loads? You won't even notice them, so "power is power" doesn't work here.
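If you want to see why, convert frame rates to frame times, which is what you actually perceive (a quick Python sketch):

# Milliseconds saved per frame by each jump in frame rate:
for low, high in [(150, 200), (20, 25)]:
    saved_ms = 1000 / low - 1000 / high
    print(f"{low} -> {high} fps saves {saved_ms:.1f} ms per frame")
# 150 -> 200 fps saves 1.7 ms (invisible, and above what a 60 Hz panel can show anyway)
# 20 -> 25 fps saves 10.0 ms (clearly felt)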
There's a reason Quizzical appears to sit on a high horse - he knows his stuff. When he states something, you know that what he says is applicable in real-world situations and that his knowledge is correct almost all the time. If he has given false information, he quickly acknowledges it and corrects himself. If he has actually started to get into the more technical aspects, then you can safely assume that he is correct.
That said, while he does appear to be arguing for the 6970M, his earlier post states that both cards have essentially taken the same number of wins in the games where they were both tested and achieved less than 60 FPS. It is then easy to assume that they are equal in power when it actually matters, but when you factor in the cost of the cards, the 6970M is far, far ahead of the 485M.
If you look at the game benchmarks on high and ultra, the 485M beats the 6970M by anywhere from 0-24% (depends on the game). If something is outperforming now, will it not last a little longer? I guess I am confused by you guys all saying that nothing above 60 FPS matters. If something can run much higher than that, won't it be able to stay strong longer down the road? You are investing in something that should be able to run games better down the road.
If I am investing money into a machine, I want to get the most life out of it before I upgrade... I am not going to buy a system that just barely gets me 60 FPS now and expect it to perform for years and years with games, etc. (Not saying the Radeon just barely gets 60 FPS in all games, just speaking in general about getting a card that barely gets 60 FPS, since that is all that "matters".)
On average, in situations where the difference matters, the GeForce GTX 485M is a little bit faster than the Radeon HD 6970M. If you wanted to call it 5% faster, I wouldn't quibble. If you were going strictly by performance, you'd prefer the GTX 485M.
The issue is that performance isn't the only criterion. In order to get that extra 5% performance or whatever, it takes a lot more power. How much more? We don't really know. If we guess from the desktop cards, then maybe 20% or 30% more. Nvidia doesn't list a TDP for the GeForce GTX 485M, and even if they did, it probably wouldn't be an honest TDP comparable to AMD's claim of 100 W for the Radeon HD 6970M. The difference between 100 W and 120 W is no big deal in a desktop, but in a laptop, it sure is. That much extra power consumption in a laptop is enough that I'd say that the Radeon HD 6970M is clearly the better card for most people. Even the Radeon HD 6970M puts out more heat than you'd want from a laptop gaming card, which is why I'd rather see it come with 1 GB of video memory rather than 2 GB.
And then there is also the price tag. As Sager prices it, the GeForce GTX 485M costs $230 more than the Radeon HD 6970M. That's in line with what other e-tailers charge. That much more money for performance that is arguably a little better? If that's what you want, then find a Radeon HD 6990M, which is clearly faster than the GeForce GTX 485M, and tends to be a bit faster than even the GeForce GTX 580M. And it likely uses less power than either of those Nvidia cards, while also being cheaper to buy.
-----
I started typing up a post once recently with a title to the effect of, "AMD and Nvidia try to fry your laptop". I accidentally closed the browser and lost the text, so I didn't finish it. The basic dilemma is this. AMD has much better performance per watt and performance per mm^2 than Nvidia. For any arbitrary gaming laptop card that Nvidia could make, AMD can make one that gives better performance with lower power consumption and a lower price tag. On the other hand, for any arbitrary gaming laptop card that AMD can make, Nvidia can make one with higher absolute performance.
The problem is that they keep going back and forth, ratcheting up the power consumption and performance. AMD can release a new card with better performance than Nvidia, and then Nvidia can respond with a card with better performance yet, at the cost of much higher power consumption. AMD can release a new card with better performance yet, and less power consumption than Nvidia, but more power consumption than AMD's previous high end. And then Nvidia can go higher performance and power consumption yet.
Two generations ago, Nvidia basically had the high end to itself. This was partially because AMD's mobile drivers were a complete disaster (no driver updates for you, ever!), and partially because AMD's cards had too high of idle power consumption. The Mobility Radeon HD 4870 was the highest performance laptop card, but it was still a terrible laptop card. With the high end basically all to itself, Nvidia could say, we'll get you the best performance that we can in a 75 W TDP, because a laptop video card really shouldn't put out more than 75 W.
Now that there is competition and the TDPs are ratcheting up, that 75 W barrier is long since surpassed. AMD claims a TDP of 100 W on the Radeon HD 6970M. I haven't seen any official TDP on the GeForce GTX 485M, GeForce GTX 580M, or Radeon HD 6990M, but it's a safe bet that they all pull a lot more power than the Radeon HD 6970M. It's no wonder that Asus and MSI are avoiding all of those cards. Now if only they'd use a Radeon HD 6870M with 1 GB of video memory, we'd be set and have good $1200-$1400 gaming laptops to choose from. But they won't, for some mysterious reason.
Now, don't get me wrong. I like that AMD and Nvidia are both putting products out. I just wish that they wouldn't go so overboard with the power consumption in the quest for a top performance "halo" card that no one should actually buy.
OK, that post makes perfect sense to me, and there really is nothing to argue against there. I have been looking at the AMD mobile cards, and on price and performance per watt they are definitely above Nvidia. For absolute performance, Nvidia had been on top, but the 6990M is the best card coming, and it is cheaper than Nvidia's top card and, I would guess, uses less power as well.
So basically there are some trade-offs, is what you are getting at :P Sure, Nvidia may have some cards that are better, but you pay more, use more power, etc. AMD has cheaper cards that use less power and in some cases are very comparable to Nvidia's best (and now, with the 6990M, better) in terms of performance.
Now I am just wishing my Clevo P170HM had the Radeon 6970M, 'cause gaming I get maybe 30 minutes of battery life, maybe, and mine has a GTX 280M.
If I were to recommend a laptop to a friend today, I would tell them to wait for the 6990M to come out and get that instead of the GTX 485M or 580M by far... hope that comes out soon, 'cause that is looking to be sick.
A Radeon HD 6970M also uses a lot more power than a GeForce GTX 280M, which had a TDP of 75W. The Radeon HD 6990M is a fully functional Barts chip, rather than fusing off two of the SIMD engines like the 6970M does. The 6990M is also clocked higher. Basically, the 6990M is to the 6970M as the (desktop) 6870 is to the 6850, though there's also a bigger clock speed difference in the desktop parts.
Alienware is already selling the Radeon HD 6990M. In the M17x, they charge $300 more for a GeForce GTX 580M than for a Radeon HD 6990M. In the M18x, it's a difference of $700, for two GTX 580Ms in SLI rather than two 6990Ms in CrossFire. AVA Direct has the Radeon HD 6990M listed as (Pre-Order), but $299.50 cheaper than the GeForce GTX 580M.
Personally, I wouldn't want that much power consumption in a laptop. I wouldn't consider a gaming laptop before the currently unannounced 22 nm successor to Trinity that is presumably coming in 2013. But different people have different preferences, I suppose.
Comments
ya the battery probably last longer and if u dont want to always walk with ur cable around i think its good, My laptop is a 17p screen and last 9 hours if i work on and 5 hours with insane video and work ,gameing lol
all depend how u configure ur laptop .. balanced or high performance, ur can always adjust the light of the screen and all
~The only opinion that matters is your own.Everything else is just advice,~
OP - don't listen to these fools who question why you want what you want. You said you want a gaming laptop that is $1500. I will give you the advice you seek without stupid questions :P
The comp you linked:
http://www.cyberpowerpc.com/system/Xplorer_X6-9500_Gaming_Notebook/
Is probably the best bang for your buck that you will get. The only thing I'd recommend is upgrading the SSD to:
128 GB ADATA S501 V2 SATA III 6.0G/s Gaming MLC Solid State Disk (Single Hard Drive)
That is the 3rd generation SSD that is screaming fast and even @ 120GB, space will go fast if you are gaming and load up school programs, etc.
The 2nd drive is nessessary for downloading music, movies, etc., so good call on that.
The graphics card is one of the most important parts of a laptop if you want to game on it for a few years. The GTX 560M is quite nice.
For wireless you also chose this Intel® Centrino® Ultimate-N 6300 a/b/g/n Wireless Adapter [Intel WiDi Ready] - good job, that is a must on gaming laptops. Never skimp on the wifi's on a gaming laptop
If I were you and had that budget, that is definitely the laptop I'd get. If under a freak accident, you get some extra money to spend on a laptop, I'd look at the Sager you linked, and on that bump the graphics card to the GTX 485M and again make sure to get the Ultimate N 6300 wifi card. It's a bit more expensive, but the GTX 485M card is decently better than the 560M (worth 300 more if you spending that much IMO). It will help you play the latest games down the road longer. The SSD isn't absolutely necessary, but dear lord does it make a difference in loading times, starting apps, and booting up. I went SSD with my Clevo and I will never look back again.
P.S.
Don't expect to get more than an hour or two out of those laptops. You want a gaming laptop, kiss battery life goodbye, but you still have ease of mobility as long as power is around, and most of the time it is, heh.
There are so many things wrong with that post.
First, it's not possible to make useful recommendations without knowing how the laptop will be used. And the original poster still isn't saying. He might not even know himself.
Next, what SSD is that? From A-Data's web site, it looks like it uses a Marvell controller. Crucial and Intel also sell SSDs based on a Marvell controller, but they have to write their own firmware for it. Writing SSD firmware is hard. Marvell tried to write their own firmware, and it was a disaster. Do you really trust A-Data to find good firmware for the drive, when they don't have access to either Crucial's or Intel's, which are the only two demonstrably good firmwares for that controller? A-Data's history of shenanigans with SSDs is not encouraging.
The GeForce GTX 560M offers poor performance per watt. The only real reason to consider it at all is that laptop vendors other than Alienware don't seem to want to use the Radeon HD 6870M.
The GeForce GTX 485M has no such justification, as vendors that offer the GTX 485M almost invariably also offer the Radeon HD 6970M, which delivers about the same performance while using far less power. The Radeon HD 6970M also costs a lot less.
From Sager, the Radeon HD 6970M is only a $65 upgrade from the GeForce GTX 560M. That's maybe 60% more performance at the cost of increasing the price tag by about 4%. If you're not willing to pay that, then you shouldn't be looking at $1500 gaming laptops.
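To put rough numbers on that trade-off, here is a quick back-of-the-envelope check in Python. The $1500 price tag and the ~60% performance gain are the approximations from this thread, not measured figures:

# Rough value check for Sager's GTX 560M -> Radeon HD 6970M upgrade.
# All figures are approximations from this thread, not measurements.
laptop_price = 1500.0  # approximate total price tag, USD
upgrade_cost = 65.0    # Sager's upgrade price for the 6970M
perf_gain = 0.60       # ~60% more GPU performance (rough estimate)

print(f"price increase: {upgrade_cost / laptop_price:.1%}")  # ~4.3%
print(f"performance gain: {perf_gain:.0%}")                  # 60%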
The CyberPowerPC model doesn't offer the Radeon HD 6970M at all. I guess it does try to compensate for that by being cheaper. The default configuration is awful, though. You absolutely do not want a st-st-st-stuttering JMicron SSD. That's not at all similar to an SSD that is actually good.
Sorry I have not been very clear, Quizzical. I will be using the laptop as follows:
1) For graphics-heavy architecture programs like AutoCAD
2) Plugged into the wall in my dorm room for gaming
3) Taking notes during class (not positive if I will have a place to plug in)
Thank you for your patience, Quizzical, and everyone else, in helping me. I am not a computer whiz, so bear with me! I understand your point about the Radeon 6970M vs. the Nvidia card. I need to bring battery consumption down while still keeping reasonable meat in the machine. Does all of this help clarify anything I may have confused you with before? If not, what else do you need to know?
Thank you all!
If someone wants a gaming laptop, you would expect them to buy the best that they can; you don't need to ask what they are doing. Gaming is one of the most demanding things on a laptop (it can be equal to video editing, data crunching, etc.). Unless he is just word processing, then obviously he doesn't need a nice machine, but he WANTS one... so who gives a **** what he is doing with it. Best bang for the buck is... best bang for the buck. At $1500 he will be fine with school stuff and gaming.
You are right on the SSD - I assumed that was an Intel drive, as the site I buy custom laptops from (malibal.com) offers it. So OP may be better off buying their own and installing it :P
The 560M is definitely not the best card out there; that's why I say specifically that the 485M is the better route to go, but it costs more and pushes him over budget. If you have a budget, what's the point in going $300 over it?
To be fair, I stay away from Radeon personally. I have had terrible luck with them in desktops, but perhaps their mobile versions are better. Nvidia drivers just tend to work better for me. Not once have I had a problem with them, even doing more "advanced" things like adjusting resolution for TVs, etc., whereas I have had a lot of problems with AMD's drivers doing the same things.
Just looked up benchmarks of the Radeon 6970M and it's indeed a bit better than the 560M, but the 485M completely smashes both, though it's definitely in a higher price bracket lol.
For 1), will that be in class, or only in your dorm room? With any higher end gaming card, graphically intensive work cannot be done on the battery. Period. A lower end computer that is lighter on battery consumption could do some graphically intensive work on the battery, though the battery won't last very long that way.
If you are only going to do high performance stuff in your dorm room, and not also in class, then you'd be far better off with a cheap laptop plus a gaming desktop. I really don't know what you're going to do with the laptop in class. If a student uses a laptop in a course I teach, I usually assume that the student isn't paying attention. That might be different for architecture courses. I guess it's easier to take notes on a laptop when it's just text than when you need complex formulas, unusual symbols, diagrams, or that sort of stuff. Few students know LaTeX well enough to take notes in it on the fly.
So let's consider the option of both a gaming desktop in your room, and also a cheap laptop that you take to class. When in your room, you'd have a vastly nicer computer to use. You'd have a faster processor, faster video card, better keyboard, and better monitor. It would be easier to keep cool, and the heat would be released off to the side and out of the way rather than right underneath the keyboard. It would be ergonomically a lot better, too. It would be more reliable, and easier to upgrade if you decide you need more performance later.
So what about the cheap laptop in class? That would be smaller and lighter, so it's easier to carry to and from class. It would have a battery that could last through all of your classes all day long without needing a recharge until you get home, rather than trying to recharge between classes and having the battery die every day if you have classes in consecutive time periods. It would stay cooler and run quieter, as you could get a laptop that released virtually no heat at idle, which is the situation if you're taking notes on it.
You could also use the laptop as a backup in case your desktop has problems, so that you're not stuck without a computer entirely. A laptop is more likely to die than your desktop would be, however. And getting both the gaming desktop and also the cheap laptop would probably be cheaper than getting a $1500 gaming laptop, too.
That's an awful lot of advantages to a combination of a cheap laptop plus a gaming desktop. The one big disadvantage is if there is a situation where you can plug in your laptop and need something with high performance graphics, and this situation occurs while in class or while otherwise away from your room and without access to a university-owned computer. For this, however, even a cheap Llano A4-based system would at least get you something functional, with graphical performance not that far shy of most of the laptops that the university recommends and will sell you directly.
-----
"If someone wants a gaming laptop, you would expect them to buy the best that they can, you don't need to ask what they are doing."
Except that what is "best" depends heavily on what they're doing. If battery life, size, weight, noise, temperatures, and/or reliability matter, then trying to cram high end gaming hardware into that may be a poor choice, even apart from the price tag.
"560M is defintely not the best card out there, thats why I say specifically get the 485M is a better route to go, but it costs more and pushes him over budget. If you have a budget, whats the point in going 300 over it?"
What if you could get the performance of the GeForce GTX 485M for a price tag only slightly higher than that of the GeForce GTX 560M, and power consumption between the two? Good deal, no? If he gets the Sager, then that is an option, and it's called a Radeon HD 6970M.
"To be fair, I stay away from Radeon personally. I have had terrible luck with them in desktops, but perhaps their mobile versions are better. Nvidia drivers just tend to work better for me."
And how long ago was this? In recent years, AMD and Nvidia have been about equally good on desktop Windows drivers. AMD has also unified their desktop and laptop drivers, so if you're not trying to use discrete switchable graphics or buying from a company that disables driver updates, the laptop drivers should be as good as the desktop ones.
"Just looked up benchmarks of the Radeon 6970M and it's indeed a bit better than the 560M, but the 485M completely smashes both, but it's defintiely in a price bracket higher lol."
The Radeon HD 6970M performs about the same as the GeForce GTX 485M in games. Getting a proper apples to apples comparison in laptops is hard, as you can't just swap out a video card and use the same machine as before, the way you can in desktops. But you can get a pretty good approximation from the analogous desktop cards.
A Radeon HD 6970M is basically the same hardware as a desktop Radeon HD 6850. A GeForce GTX 485M is basically the same hardware as a desktop GeForce GTX 560 Ti, though the latter is technically a respin of the die used in the former. A GeForce GTX 560M is basically the same hardware as a desktop GeForce GTX 550 Ti. A Radeon HD 6850 offers maybe 80% of the performance of a GeForce GTX 560 Ti, while a GeForce GTX 550 Ti offers maybe 55% of that performance. A clear win for the higher end Nvidia card, right?
Not so much when you look at clock speeds. A GeForce GTX 560 Ti has a core clock speed of 822 MHz and a memory clock speed of 1001 MHz. A GeForce GTX 485M cuts those to 575 MHz and 750 MHz, respectively. That means you get less than 75% of the memory bandwidth and 70% of everything else. Meanwhile, a Radeon HD 6850 has clock speeds of 775 MHz core and 1000 MHz memory. A Radeon HD 6970M only cuts these to 680 MHz and 900 MHz, respectively. That means you get 90% of the memory bandwidth and 88% of everything else. Multiply those by the desktop Radeon card getting about 80% of the performance of the desktop GeForce card and you get that the GeForce GTX 485M and the Radeon HD 6970M are essentially tied.
Meanwhile, the GeForce GTX 550 Ti is clocked at 900 MHz core and 1026 MHz memory. The GeForce GTX 560M cuts those to 775 MHz and 625 MHz, respectively. That means you get about 61% of the memory bandwidth and 86% of everything else. The memory bandwidth for the desktop card is actually exaggerated by the mismatched memory channels, but the laptop GeForce GTX 560M still loses more of the performance of the desktop card than the Radeon HD 6970M does.
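To sanity-check that chain of estimates, here is the same arithmetic as a short Python sketch. The clock speeds are the ones listed above, the 80% and 55% desktop performance ratios are rough estimates, and weighting the core and memory scaling evenly is my own simplification rather than anything rigorous:

# Estimate laptop GPU performance from desktop relatives and clock scaling.
# Desktop performance is relative to a desktop GTX 560 Ti (= 1.00).
cards = {
    # name:     (desktop core, desktop mem, laptop core, laptop mem, desktop perf)
    "GTX 485M": (822, 1001, 575, 750, 1.00),
    "HD 6970M": (775, 1000, 680, 900, 0.80),
    "GTX 560M": (900, 1026, 775, 625, 0.55),
}

for name, (dcore, dmem, lcore, lmem, perf) in cards.items():
    # even weighting of core and memory scaling -- a crude simplification
    scale = (lcore / dcore + lmem / dmem) / 2
    print(f"{name}: ~{perf * scale:.2f}x a desktop GTX 560 Ti")

# Prints roughly 0.72, 0.71, and 0.40: the GTX 485M and Radeon HD 6970M
# essentially tied, with the GTX 560M at a bit under 60% of either.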
Meanwhile, in a desktop, the Radeon HD 6850 and GeForce GTX 550 Ti have comparable power consumption. What happens when you give them comparable underclocks for a laptop? My guess is that the GeForce GTX 560M probably uses less power than the Radeon HD 6970M, largely because the latter is loaded up with 2 GB of video memory. But the fact that the Radeon HD 6970M's power consumption is most comparable to that of a laptop card with 60% of its performance, rather than one with nearly the same performance, is a huge win for AMD in performance per watt.
Maybe you found some synthetic benchmark that is highly favorable to Nvidia's architecture and makes the Radeon HD 6970M perform more like a GeForce GTX 560M and less like a GeForce GTX 485M. Or maybe you were looking at power consumption numbers. (Just kidding.) But that's not typical of actual gaming performance.
I have an extra GeForce GTX 460 and 16 GB (4x4GB) of Corsair RAM lying around in my closet... I contacted a local computer shop to see what kind of rig they could build me for a reasonable price. Hopefully I will be able to get both a desktop and a laptop.
Thank you for your time and knowledge!
Not to get off topic too much, but here is where I saw the benchmarks:
http://www.notebookcheck.net/AMD-Radeon-HD-6970M.43077.0.html
In games there are some where the 485M does better and others where it's only slightly better, but it basically wins in everything.
It's been about 2 years since I used a Radeon at work (silly, but we build so-so gaming desktops for random fun on our conference room LCDs), so it's very possible that they have worked out their kinks. AMD has been sucking for a long time, so I would love to see them compete with Nvidia personally (which it sounds like they are, from your words, as I realize you are very in the know -- not kidding). So perhaps when I build my new desktop in the Sept-Nov timeframe I will give AMD a look again. I used to have the ATI Radeon 9800 Pro back in the day and loved it, but a ton has changed since then.
Careful, as they might well massively overcharge you, or use cheap junk parts that will make the system unreliable. For a desktop, the ideal thing is to build your own, especially if you already have some of the parts. If you have the parts, then assembling them is easier than you might think. If you don't know what parts to get, then I could help with that. It looks like your university will sell you a Windows 7 Ultimate license for $80, and you might need to build your own to take advantage of that.
If you can't or won't build your own, then the next best thing is getting one built to order from a site that will tell you exactly what parts they'll use. I guess you could get one with integrated graphics, and then add your own video card. You'd have to pick a case and power supply around the video card that you're going to add, though. If you buy a prebuilt computer, the case and power supply won't be able to handle your discrete card. For integrated graphics, that's most easily done with an AMD 880G or Intel Z68 chipset. For memory, you could find somewhere that will ship it with a single 2 GB module, then pull it out and put your own memory in.
Though if you happen to have a video card and 16 GB of system memory lying around, I'd somewhat suspect that you'd have peripherals, too, and that brings down the cost if you don't have to buy those.
Are you looking at the game benchmarks on that link? Because that's not what I see there. I see nine situations where both cards are listed and either the Radeon HD 6970M or the GeForce GTX 485M is below 60 frames per second. Each wins four of those, and they tie in the other one.
Or are you basing your recommendation on the belief that 200 frames per second is somehow better than 150, even though your monitor can only display 60?
Well, according to the requirements, he needs a laptop with enough power to use AutoCAD, 3D Studio Max, Adobe Creative Suite (Photoshop and Illustrator), and Microsoft Office. Now, those are fairly hefty applications (especially 3DS Max), and if he can run those on a laptop, he shouldn't have any problems playing games on the same 15.6" screen (1440x900 most likely).
He's in a fairly unique situation where his laptop won't just be used for taking notes. He will likely need to actually do work on the laptop during collaboration and class projects, so a Brazos laptop is out of the question. Fortunately, I think Llano has enough processing power to run the required applications and still allow for the battery life needed for portability. A Llano with 8 to 16 gigs of RAM and an SSD may be his best option for the overall best tool that meets his requirements.
Regardless of what a monitor can display or the human eye can perceive, power is power. I would imagine something that can put out 200 fps would last a bit longer than a card that can do 150, no? :P
Can't believe you would argue that 200 fps isn't better than 150 fps lol. Just kinda common sense there, but I am sure you will find an argument for that too! Honestly, I valued your opinion a bit more until you said that lol.
Suppose that at certain graphical settings, card A can do 200 frames per second, and card B can do 150 frames per second. If you increase the graphical settings a lot, then card A can do 40 frames per second and card B can do 50 frames per second. Which card is better? If you take a blind average of synthetic benchmarks, you'll prefer card A. If real-world performance is all that matters, you'll prefer card B. So which is it?
I say that the difference at the low settings doesn't matter, but the difference at the high settings does. Card B is better, because it lets you turn settings higher while keeping adequate frame rates.
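Here is the same example in a few lines of Python, to make the point explicit. The frame rates are the hypothetical numbers above, and the 60 Hz cap stands in for what the monitor can actually display:

# Two hypothetical cards across two settings levels (frames per second).
fps = {
    "card A": {"low settings": 200, "high settings": 40},
    "card B": {"low settings": 150, "high settings": 50},
}
REFRESH = 60  # the monitor cannot display more than this anyway

for name, results in fps.items():
    raw = sum(results.values()) / len(results)
    capped = sum(min(v, REFRESH) for v in results.values()) / len(results)
    print(f"{name}: raw average {raw:.0f} FPS, capped average {capped:.0f} FPS")

# card A: raw average 120 FPS, capped average 50 FPS
# card B: raw average 100 FPS, capped average 55 FPS
# The blind average picks card A; capping at what the monitor can show picks card B.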
This isn't random. If you look through reviews for cards from the last two generations, you'll find many, many cases where a given GeForce card beats a given Radeon card at low settings when they're both fast enough that the difference doesn't matter, but the Radeon card wins at higher settings when they're slow enough that the difference does matter. You'll find very few cases where the reverse happens.
And it's not a fluke that it's this way, as it's due to architectural differences. Higher settings put more additional stress on some parts of a GPU than others. AMD intentionally beefed up the parts that take additional stress at higher settings, while deciding that, for the parts that don't take much additional stress at higher settings, being substantially over 60 frames per second before you hit a bottleneck in all real games is good enough. Nvidia did this to some degree, but didn't do it as well as AMD did.
The most famous example of this is tessellation. A GeForce GTX 580 has 16 tessellation units (part of the "polymorph engines"). Even a GeForce GTX 560M has four. Meanwhile, a top of the line Radeon HD 6970 only has two. AMD's top of the line card from the previous generation, the Radeon HD 5870, only had one. A synthetic tessellation benchmark can tell the difference quite clearly, and Fermi cards absolutely destroy Evergreen and Northern Islands cards there.
Yet even the tessellator in the Radeon HD 5870 can process hundreds of millions of triangles per second. At a resolution of 1920x1080, 60 frames per second, and tessellating to the degree that you have a separate triangle for each pixel on the screen, you're barely over a hundred million triangles per second. The hardware tessellator in even the bottom of the line Radeon HD 5450 would laugh at that workload.
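The arithmetic behind that claim, spelled out in Python for concreteness:

# Worst plausible real workload: one tessellated triangle per pixel,
# at 1920x1080 and 60 frames per second.
pixels_per_frame = 1920 * 1080       # 2,073,600 pixels
triangles_per_second = pixels_per_frame * 60
print(f"{triangles_per_second:,}")   # 124,416,000 -- barely over 100 million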
Yet if a real game tried to tessellate to that degree, every card on the market would completely choke because other hardware can't deal with the rasterization workload. Cypress, Cayman, Barts (Radeon HD 6970M, among others), GF104 (GeForce GTX 485M, among others), and GF114 (GeForce GTX 580M, among others) all have two raster engines, and they're all clocked somewhat similarly. When it comes time to actually process the triangles that you've created by tessellation in real games, Fermi's tessellation advantage is gone. Fermi's advantage only appears if you tessellate to the degree that you discard the overwhelming majority of the triangles without using them. That's something a synthetic tessellation benchmark might do, but it is completely stupid for real games, as it will cripple performance without making the game look any better. Which is why real games don't do that.
-----
"well, according to the requirements, he needs a laptop with enough power to use autocad, 3D studio max, adobe creative suite(photoshop and illustrator), and microsoft office. now, those are fairly heafty applications(especially 3DS max) and if he can run those on a laptop, he shouldnt have any problems playing games on the same 15.6" screen.(1440x900 most likely)"
There's a question of whether he needs to have a laptop, and also run all of those programs on his own computer. Or whether he needs to run all of those programs on the laptop in particular, while in class. If the latter, there's also the question of whether there's a place to plug the laptop in, or whether it will have to use the battery. If the programs don't need to be run that intensively on the laptop, then a Llano A4-based laptop for $500 will work just fine.
If he needs to run those graphical programs in class on the battery, then a high end gaming laptop simply isn't an option. It won't run graphically intensive stuff on the battery at all. In that case, he'd have to get a lower end gaming laptop, and then would also want a desktop for gaming. Llano is again easily the best option for the laptop there.
If he needs to run the graphical programs in class but can plug in the laptop, then the high end gaming laptop would work. There would still be a pretty good case for getting both the Llano-based laptop and also a gaming desktop. But the high end gaming laptop makes more sense in this situation. The university's officially recommended laptops don't include any high end gaming laptops, however.
That possibility certainly exists, but I would say that a good mom-and-pop computer shop is a good alternative to building it yourself, and much better than the lower end build-your-own sites. Especially if they will just put all your own parts together for a nominal fee ($50-100): you can pick out each and every part, and oftentimes they will be willing to service the machine as well.
The trick is finding a good one.
Windows has a special at the moment.
Buy a Win7 laptop, get an Xbox for free.
http://www.microsoft.com/Presspass/press/2011/may11/05-19MSPCXBOXPR.mspx
Now you are just confusing me - are you arguing for the 485M or the Radeon? Because basically every max settings benchmark shows the 485M owning the Radeon 6970M :P
Glad we agree - I don't even bother looking at benchmarks below high and ultra 'cause what's the point? Your point is exactly why the 485M is better lol. I am guessing you didn't even look at the benchmarks, but eh, whatever. Apparently I don't sit as high on my horse as you ;P
He is saying that anything above 60 FPS does not matter (and it really doesn't), so he is only looking at benchmarks which compare both the 485M and the 6970M AND have settings where the 6970M drops below 60 FPS.
Now, where exactly do you see "every" max settings benchmark for the 485M owning the 6970M? In Metro 2033, the 485M averages 48.1 FPS on high whereas the 6970M averages 50 FPS. Now, increase the settings to ultra, and the 485M manages to pull 16 FPS while the 6970M gets 18 FPS. Examples aren't proofs, but this one example already disproves your statement.
High and ultra by themselves don't matter. What you need to look at is frame rates. The difference between 20 and 25 FPS is noticeable. The difference between 150 and 200 FPS is not. So what if the 485M gets you those extra 50 FPS on relatively light loads? You won't even notice them, so "power is power" doesn't work here.
There's a reason Quizzical appears to sit on a high horse: he knows his stuff. When he states something, you know that what he says is applicable in real-world situations and that his knowledge is correct almost all the time. If he has given false information, he quickly acknowledges it and corrects himself. If he has actually started to get into the more technical aspects, then you can safely assume that he is correct.
That said, while he does appear to be arguing for the 6970M, his earlier post states that both cards have taken essentially the same number of wins in the games where they both were tested and achieved less than 60 FPS. It is then easy to assume that they are equal in power when it actually matters, but when you factor in the cost of the cards, the 6970M is far, far ahead of the 485M.
If you look at the game benchmarks on high and ultra, the 485M beats the 6970M by anywhere from 0% to 24% (depends on the game). If something is outperforming now, will it not last a little longer? I guess I am confused by you guys all saying that nothing above 60 FPS matters. If something can run much higher than that, won't it be able to stay strong longer down the road? You are investing in something that should be able to run games better down the road.
If I am investing money into a machine, I want to get the most life out of it before I upgrade... I am not going to buy a system that just barely gets me 60 FPS now and expect it to perform for years and years with games, etc. (Not saying the Radeon just barely gets 60 FPS in all games, just saying in general about getting a card that barely hits 60 FPS, since that is all that "matters".)
On average, in situations where the difference matters, the GeForce GTX 485M is a little bit faster than the Radeon HD 6970M. If you wanted to call it 5% faster, I wouldn't quibble. If you were going strictly by performance, you'd prefer the GTX 485M.
The issue is that performance isn't the only criterion. In order to get that extra 5% performance or whatever, it takes a lot more power. How much more? We don't really know. If we guess from the desktop cards, then maybe 20% or 30% more. Nvidia doesn't list a TDP for the GeForce GTX 485M, and even if they did, it probably wouldn't be an honest TDP comparable to AMD's claim of 100 W for the Radeon HD 6970M. The difference between 100 W and 120 W is no big deal in a desktop, but in a laptop, it sure is. That much extra power consumption in a laptop is enough that I'd say that the Radeon HD 6970M is clearly the better card for most people. Even the Radeon HD 6970M puts out more heat than you'd want from a laptop gaming card, which is why I'd rather see it come with 1 GB of video memory rather than 2 GB.
And then there is also the price tag. As Sager prices it, the GeForce GTX 485M costs $230 more than the Radeon HD 6970M. That's in line with what other e-tailers charge. That much more money for performance that is arguably a little better? If that's what you want, then find a Radeon HD 6990M, which is clearly faster than the GeForce GTX 485M, and tends to be a bit faster than even the GeForce GTX 580M. And it likely uses less power than either of those Nvidia cards, while also being cheaper to buy.
-----
I started typing up a post once recently with a title to the effect of, "AMD and Nvidia try to fry your laptop". I accidentally closed the browser and lost the text, so I didn't finish it. The basic dilemma is this. AMD has much better performance per watt and performance per mm^2 than Nvidia. For any arbitrary gaming laptop card that Nvidia could make, AMD can make one that gives better performance with lower power consumption and a lower price tag. On the other hand, for any arbitrary gaming laptop card that AMD can make, Nvidia can make one with higher absolute performance.
The problem is that they keep going back and forth, ratcheting up the power consumption and performance. AMD can release a new card with better performance than Nvidia, and then Nvidia can respond with a card with better performance yet, at the cost of much higher power consumption. AMD can release a new card with better performance yet, and less power consumption than Nvidia, but more power consumption than AMD's previous high end. And then Nvidia can go higher performance and power consumption yet.
Two generations ago, Nvidia basically had the high end to itself. This was partially because AMD's mobile drivers were a complete disaster (no driver updates for you, ever!), and partially because AMD's cards had idle power consumption that was too high. The Mobility Radeon HD 4870 was the highest performance laptop card, but it was still a terrible laptop card. With the high end basically all to itself, Nvidia could say: we'll get you the best performance that we can in a 75 W TDP, because a laptop video card really shouldn't put out more than 75 W.
Now that there is competition and the TDPs are ratcheting up, that 75 W barrier is long since surpassed. AMD claims a TDP of 100 W on the Radeon HD 6970M. I haven't seen any official TDP on the GeForce GTX 485M, GeForce GTX 580M, or Radeon HD 6990M, but it's a safe bet that they all pull a lot more power than the Radeon HD 6970M. It's no wonder that Asus and MSI are avoiding all of those cards. Now if only they'd use a Radeon HD 6870M with 1 GB of video memory, we'd be set and have good $1200-$1400 gaming laptops to choose from. But they won't, for some mysterious reason.
Now, don't get me wrong. I like that AMD and Nvidia are both putting products out. I just wish that they wouldn't go so overboard with the power consumption in the quest for a top performance "halo" card that no one should actually buy.
Ok, that post makes perfect sense to me and there really is nothing to argue against there. I have been looking at the AMD mobile cards, and on price and performance per watt they are definitely ahead of Nvidia. For absolute performance, Nvidia has been on top, but the 6990M is the best card coming, and it's cheaper than Nvidia's top card, and I would guess it uses less power as well.
So basically there are some trade-offs is what you are getting at :P Sure, Nvidia may have some cards that are better, but you pay more, use more power, etc. AMD has cheaper cards that use less power and in some cases are very comparable to Nvidia's best (and now, with the 6990M, better) in terms of performance.
Now I am just wishing my Clevo P170HM had the Radeon HD 6970M, 'cause gaming I get maybe 30 minutes of battery life. Maybe. And mine has a GTX 280M.
If I were to recommend a laptop to a friend today, I would tell them to wait for the 6990M to come out and get that instead of the GTX 485M or 580M by far... hope that comes out soon 'cause that is looking to be sick.
A Radeon HD 6970M also uses a lot more power than a GeForce GTX 280M, which had a TDP of 75 W. The Radeon HD 6990M is a fully functional Barts chip, rather than having two of the SIMD engines fused off like the 6970M does. The 6990M is also clocked higher. Basically, the 6990M is to the 6970M as the (desktop) 6870 is to the 6850, though there's also a bigger clock speed difference in the desktop parts.
Alienware is already selling the Radeon HD 6990M. In the M17x, they charge $300 more for a GeForce GTX 580M than for a Radeon HD 6990M. In the M18x, it's a difference of $700, for two GTX 580Ms in SLI rather than two 6990Ms in CrossFire. AVA Direct has the Radeon HD 6990M listed as (Pre-Order), but $299.50 cheaper than the GeForce GTX 580M.
Personally, I wouldn't want that much power consumption in a laptop. I wouldn't consider a gaming laptop before the currently unannounced 22 nm successor to Trinity that is presumably coming in 2013. But different people have different preferences, I suppose.
Why does the US always get the good shit....