The problem is that the chips just can't clock as high. The top clocked chip has a max turbo of 3.8 GHz, as compared to 4.4 GHz for Haswell. IPC is a little better, but not enough to make up for the lower clock speeds.
But they do use less power. It's a laptop chip, not a desktop chip. No wonder there were rumors that Intel wasn't going to release a desktop version at all.
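A rough way to see why the clock deficit wins out: scale each chip's turbo clock by a relative IPC factor. This is a minimal sketch, assuming an illustrative ~5% IPC gain for Broadwell (that figure is an assumption, not a measurement) and the turbo clocks quoted above:

```python
# Back-of-envelope single-thread estimate: effective speed ~ clock * relative IPC.
# The 1.05 factor for Broadwell's IPC gain is an assumed, illustrative number.
haswell   = 4.4 * 1.00   # 4.4 GHz turbo, IPC baseline
broadwell = 3.8 * 1.05   # 3.8 GHz turbo, assumed ~5% IPC uplift

print(f"Haswell effective:   {haswell:.2f} GHz-equivalent")    # 4.40
print(f"Broadwell effective: {broadwell:.2f} GHz-equivalent")  # 3.99
print(f"Broadwell deficit:   {1 - broadwell / haswell:.1%}")   # about 9%
```

Even with the IPC edge, the slower clock leaves the top Broadwell part behind the top Haswell part on single-threaded work.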
Comments
The intention is for Skylake to serve the desktop "high end" market.
Broadwell was intended purely for mobile/laptop, so like Quiz said, it's actually slower than Haswell. But that's intentional.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
At least it finally beats AMD APUs, though at 2x the price. It took Intel 3 years to get their GPUs up to speed (an Intel GPU guy said a while back they weren't really trying before, so it didn't count). Now if only they can actually release drivers :P
The 5775C runs GTA V at 140 fps on minimum settings.
What? Nobody won. I don't know if you have noticed, but consoles are on a downward spiral. Intel chips are in most servers around the world.
We got ourselves a comedian here...
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
So Broadwell beats Kaveri in games where you're CPU-limited? Some previous Intel graphics have beaten AMD when CPU-bound, too. The reviews I found tended to be running this at minimum or low settings, in which case, the GPU might not be a meaningful bottleneck. I'd like to see what happens if they test the same chips at high settings so that you know that the GPU is the bottleneck. Even if it's 10 fps for this chip and 8 fps for that one, at least you get some relative information on what the GPU can do.
As for how many years it took Intel to get their GPUs up to speed, we're on 17 years and counting. Not three. Definitely not three.
Earth apparently, time to jump ship.
Tom's Hardware's summary:
The laptop dual cores were slower than Haswell, too. Do you think that's also intentional?
I think that the underlying issue is that Intel simply couldn't get it to clock all that high. That might be caused by their troubled 14 nm process node, in which case, Skylake could easily share the same malady. For both Intel and AMD, the highest clocking chips came on 32 nm, even though both have since moved on to newer, better nodes.
Do I believe that a 192 shader part in Broadwell can hang with a 512 shader part in Kaveri in things that push the shaders hard with 32-bit floating point computations? In a word: no. But if you test in things that don't push the GPU much, or lean mostly on memory bandwidth rather than what goes on in the GPU proper, sure they can.
There's also the issue that Broadwell added 16-bit floating point capability. Some GPU architectures have done this as a way to have more throughput at the cost of reduced precision, especially in cell phones. Sometimes 16-bit really is enough, but sometimes it's not. If the drivers try to guess, and guess wrong, things go wrong--likely graphical artifacting. There are compelling reasons why modern GeForce and Radeon cards stay away from 16-bit floating point computations.
You could be correct, obviously new processes bring new challenges.
My understanding, and I could be wrong, is that they were going for power efficiency. So, while it might be slower clock for clock than Haswell, if it's doing the same amount of work for less power, then it's still a good thing for mobile. Most people aren't super concerned about performance with mobile so much as battery life.
I'll have to do some research, as it's been a few months since I read any in-depth Broadwell stuff, but I do remember the focus being on power efficiency (kind of similar to Nvidia with Maxwell).
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
The "Intel took 3 years" sarcasm aside, and Tom's Hardware's fake benchmarks aside, when the new A10-7870K and the 5775C go head to head at equal settings, the Broadwell chip ranges from matching the new Kaveri to roughly 8% faster than it, depending on the exact program.
Though I'm personally waiting for gamegpu.ru or some of those other Russian sites to do a comparison, because they're the only ones who test games at sensible settings.
Broadwell is faster on a per clock cycle basis than Haswell. The problem is that it can't clock as high.
Actually, it looks like they've got 8. I was thinking it was 4. So that means it's a 384 shader part.
But what's also important is not just how many shaders there are, but how they're accessed and what instructions they can do. AMD and Nvidia post a lot of details to help programmers figure out how to optimize code for their architectures. For example:
http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#maximize-instruction-throughput
Intel gives rather less detail than that. The most I could find is a claim that there are 8 shaders per EU, four of which only handle a few instructions; Intel doesn't say what the other four can do. Do I believe that the other 4 shaders in an EU can all do the expensive floating point operations (e.g., exponential, trig, reciprocal) at a rate of one per clock cycle? No. If they could, Intel would probably say so.
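For a rough sense of scale, here is the standard peak-FP32 formula (shaders x 2 FLOPs per FMA x clock), plugged with the shader counts discussed in this thread and assumed clock speeds. This is a minimal sketch, not measured numbers, and whether every ALU in an EU actually retires a full-rate FMA each clock is exactly the open question above:

```python
# Naive peak FP32 throughput: shaders * 2 FLOPs (one FMA) * clock in GHz.
# Shader counts come from this thread; the clocks (~1.15 GHz for Broadwell's
# GT3e, ~0.87 GHz for Kaveri's GPU) are assumptions for illustration.
def peak_gflops(shaders, clock_ghz, flops_per_clock=2):
    return shaders * flops_per_clock * clock_ghz

print(peak_gflops(384, 1.15))  # Broadwell if all 8 ALUs per EU do full-rate FMA: ~883
print(peak_gflops(192, 1.15))  # Broadwell if only 4 ALUs per EU do: ~442
print(peak_gflops(512, 0.87))  # Kaveri: ~891
```

If only half of each EU's ALUs handle full-rate FP32 math, the peak lands at roughly half of Kaveri's, which is why the per-clock capability of those other four ALUs matters so much.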
lol, 16-bit floating point calculations in a 3D game. Hurry, move this vertex 0.1 meters over 60 frames when you only have a significand of 1024. Still, most first generations on a new process node are typically slower, more expensive, and don't have headroom. As the process matures, they speed up. The same thing happened when Intel was moving to 22nm.
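To make the precision point concrete: FP16 has a 10-bit mantissa, so near a coordinate of 100.0 the gap between representable values is already 1/16, and a per-frame step of 0.1 m / 60 frames simply rounds away. A minimal numpy sketch; the 100 m starting coordinate is a made-up value for illustration:

```python
import numpy as np

step  = np.float16(0.1 / 60)   # ~0.001667 m per frame
pos16 = np.float16(100.0)      # vertex coordinate kept in half precision
pos32 = np.float32(100.0)      # same coordinate kept in single precision

for _ in range(60):            # try to move 0.1 m over 60 frames
    pos16 = np.float16(pos16 + step)
    pos32 = pos32 + np.float32(step)

print(pos16)  # still 100.0: each step is below half an FP16 ULP (0.03125 near 100)
print(pos32)  # ~100.1, as intended
```

Single precision handles the accumulation fine; half precision never moves the vertex at all.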
I don't think people really understand the power of having the major market share. Everyone builds around your hardware. What makes Intel good in the consumer space are two things. First, they are on a smaller process node, so they have more transistors to work with. Second, software developers build around their architecture and prevent things like thrashing in order to have stable performance for the majority of consumers. On the first point, AMD really isn't able to compete since they are currently a process node behind. But as Quizzical said, it's not really a factor, since the highest-clocked CPUs from both Intel and AMD came on 32nm. However, the second part is very true and can be seen in the professional market. Intel is not always the best choice for certain types of applications. The architecture in AMD's FX series, for example, is better for rendering 3D CGI.
Now, if all console games are running on AMD x86-64 processors, what do you think the chances are that those makers will also port the games to PC, since they will be x86 anyway? I think it's a pretty good chance. So to me, I see most games ending up fastest on Excavator cores. The only problem in the PC space right now is that those parts are all low-wattage and lack a decent clock.