
Broadwell desktop chips arrive: slower than Haswell, and more expensive, too.

Quizzical Member Legendary Posts: 25,531

The problem is that the chips just can't clock as high.  The top clocked chip has a max turbo of 3.8 GHz, as compared to 4.4 GHz for Haswell.  IPC is a little better, but not enough to make up for the lower clock speeds.
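
A rough back-of-the-envelope (Python), using the clocks above and assuming roughly 5% better IPC for Broadwell; the 5% is an illustrative figure, not a measured one:

    # Rough single-thread comparison: performance ~ IPC x clock.
    # Clocks are from the post above; the ~5% IPC uplift is an assumption.
    haswell   = 1.00 * 4.4   # Haswell max turbo (GHz), baseline IPC
    broadwell = 1.05 * 3.8   # Broadwell max turbo (GHz), ~5% higher IPC

    print(f"Broadwell vs. Haswell: {broadwell / haswell:.0%}")  # ~91%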

But they do use less power.  It's a laptop chip, not a desktop chip.  No wonder there were rumors that Intel wasn't going to release a desktop version at all.

Comments

  • Mavolence Member Uncommon Posts: 635
    Will it still be possible, a year from now, to purchase the Haswell i5s (the 4th generation) if one wanted to? Also, will there be a 5th-generation Haswell, or will it all be Broadwell from here on out?
  • Hrimnir Member Rare Posts: 2,415
    Originally posted by Mavolence
    Will it still be possible, a year from now, to purchase the Haswell i5s (the 4th generation) if one wanted to? Also, will there be a 5th-generation Haswell, or will it all be Broadwell from here on out?

    The intention is for Skylake to serve the desktop "high end" market.

    Broadwell was intended purely for mobile/laptop, so like Quiz said, it's actually slower than Haswell. But that's intentional.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • 13lake Member Uncommon Posts: 719

    At least it finally beats AMD APUs, though at 2x the price. It took Intel 3 years to get their GPUs up to speed (an Intel GPU guy said a while back that they weren't really trying before, so it didn't count). Now if only they can actually release drivers :P

     

    The 5775C runs GTA V at 140 fps on min settings :)

  • Mondo80 Member Uncommon Posts: 194
    AMD won over Intel years ago.  The PS4, Xbox One, and the Wii U all use AMD chips with integrated graphics.  Millions of each system have already been sold around the world, and developers are pushing to code specifically for AMD to get the most out of them.  It won't be too long before AMD plops down a 16-core chip with its own version of hyperthreading.
  • maybebaked Member Uncommon Posts: 305
    Originally posted by Mondo80
    AMD won over Intel years ago.  The PS4, Xbox One, and the Wii U all use AMD chips with integrated graphics.  Millions of each system have already been sold around the world, and developers are pushing to code specifically for AMD to get the most out of them.  It won't be too long before AMD plops down a 16-core chip with its own version of hyperthreading.

    What? Nobody won. I don't know if you have noticed, but consoles are on a downward spiral. Intel chips are in most servers around the world.

  • Hrimnir Member Rare Posts: 2,415
    Originally posted by Mondo80
    AMD won over Intel years ago.  The PS4, Xbox One, and the Wii U all use AMD chips with integrated graphics.  Millions of each system have already been sold around the world, and developers are pushing to code specifically for AMD to get the most out of them.  It won't be too long before AMD plops down a 16-core chip with its own version of hyperthreading.

    We got ourselves a comedian here...

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • Quizzical Member Legendary Posts: 25,531
    Originally posted by 13lake

    At least it finally beats AMD APUs, though at 2x the price. It took Intel 3 years to get their GPUs up to speed (an Intel GPU guy said a while back that they weren't really trying before, so it didn't count). Now if only they can actually release drivers :P

     

    The 5775C runs GTA V at 140 fps on min settings :)

    So Broadwell beats Kaveri in games where you're CPU-limited?  Some previous Intel graphics have beaten AMD when CPU-bound, too.  The reviews I found tended to be running this at minimum or low settings, in which case, the GPU might not be a meaningful bottleneck.  I'd like to see what happens if they test the same chips at high settings so that you know that the GPU is the bottleneck.  Even if it's 10 fps for this chip and 8 fps for that one, at least you get some relative information on what the GPU can do.
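
    As a toy model of why that matters (Python; every frame time below is invented for illustration): frame rate is capped by whichever of the CPU or GPU takes longer per frame, so at minimum settings two very different GPUs can post identical numbers:

        # Toy bottleneck model: fps is capped by the slower of the CPU's and
        # GPU's per-frame work.  All numbers are invented for illustration.
        def fps(cpu_ms, gpu_ms):
            return 1000.0 / max(cpu_ms, gpu_ms)

        cpu_ms = 7.0                      # fixed CPU cost per frame

        # Minimum settings: both hypothetical iGPUs outrun the CPU.
        print(fps(cpu_ms, gpu_ms=4.0))    # stronger iGPU -> ~142.9 fps
        print(fps(cpu_ms, gpu_ms=6.5))    # weaker iGPU   -> ~142.9 fps, identical

        # High settings: the GPU dominates and the real gap appears.
        print(fps(cpu_ms, gpu_ms=100.0))  # stronger iGPU -> 10.0 fps
        print(fps(cpu_ms, gpu_ms=125.0))  # weaker iGPU   -> 8.0 fps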

    As for how many years it took Intel to get their GPUs up to speed, we're on 17 years and counting.  Not three.  Definitely not three.

  • Heretique Member Rare Posts: 1,536
    Originally posted by DMKano
    Originally posted by Hrimnir
    Originally posted by Mondo80
    AMD won over Intel years ago.  The PS4, Xbox One, and the Wii U all use AMD chips with integrated graphics.  Millions of each system have already been sold around the world, and developers are pushing to code specifically for AMD to get the most out of them.  It won't be too long before AMD plops down a 16-core chip with its own version of hyperthreading.

    We got ourselves a comedian here...

     

    Indeed.

     

    This same type of "couldn't be more wrong" post happens in MMO discussion threads too, and I always wonder what world these folks live in.

     

    Earth apparently, time to jump ship.

  • gervaise1 Member Epic Posts: 6,919

    Tom's Hardware's summary:

    • a 14nm manufacturing process gives Intel a distinct advantage, which manifests as four IA cores fast enough for desktop workloads and a significantly more complex graphics engine able to hang with many mainstream add-in cards, all crammed into a modest 65W TDP.
    And by "hang with mainstream" the review means as good as or better than, e.g., an overclocked R7 250X with GDDR5 or a GTX 560 (non-Ti).
     
    AnandTech says basically the same thing (the graphics part is fast), although they haven't crunched the numbers.
     
    So the price basically includes a mainstream graphics card; it needs less power, so less heat, a simpler fan solution, and smaller case options.... these could catch on. The chips may be unlocked as well, it seems.
     
    Depends of course on Skylake. Not 100% clear, though, whether all Skylake variants will use the new GPU solution. Looks like the fastest one will, with 50% more "execution units". Other variants.... maybe not.
  • Quizzical Member Legendary Posts: 25,531
    Originally posted by Hrimnir
    Originally posted by Mavolence
    Will it still be possible, a year from now, to purchase the Haswell i5s (the 4th generation) if one wanted to? Also, will there be a 5th-generation Haswell, or will it all be Broadwell from here on out?

    The intention is for Skylake to serve the desktop "high end" market.

    Broadwell was intended purely for mobile/laptop, so like Quiz said, it's actually slower than Haswell. But that's intentional.

    The laptop dual cores were slower than Haswell, too.  Do you think that's also intentional?

    I think that the underlying issue is that Intel simply couldn't get it to clock all that high.  That might be caused by their troubled 14 nm process node, in which case, Skylake could easily share the same malady.  For both Intel and AMD, the highest-clocking chips came on 32 nm, even though both have since moved on to newer, better nodes.

  • Quizzical Member Legendary Posts: 25,531
    Originally posted by gervaise1

    Tom's Hardware's summary:

    • a 14nm manufacturing process gives Intel a distinct advantage, which manifests as four IA cores fast enough for desktop workloads and a significantly more complex graphics engine able to hang with many mainstream add-in cards, all crammed into a modest 65W TDP.
    And by "hang with mainstream" the review means as good as or better than, e.g., an overclocked R7 250X with GDDR5 or a GTX 560 (non-Ti).
     
    AnandTech says basically the same thing (the graphics part is fast), although they haven't crunched the numbers.
     
    So the price basically includes a mainstream graphics card; it needs less power, so less heat, a simpler fan solution, and smaller case options.... these could catch on. The chips may be unlocked as well, it seems.
     
    Depends of course on Skylake. Not 100% clear, though, whether all Skylake variants will use the new GPU solution. Looks like the fastest one will, with 50% more "execution units". Other variants.... maybe not.

    Do I believe that a 192 shader part in Broadwell can hang with a 512 shader part in Kaveri in things that push the shaders hard with 32-bit floating point computations?  In a word:  no.  But if you test in things that don't push the GPU much, or lean mostly on memory bandwidth rather than what goes on in the GPU proper, sure they can.

    There's also the issue that Broadwell added 16-bit floating point capability.  Some GPU architectures have done this as a way to have more throughput at the cost of reduced precision, especially in cell phones.  Sometimes 16-bit really is enough, but sometimes it's not.  If the drivers try to guess, and guess wrong, things go wrong--likely graphical artifacting.  There are compelling reasons why modern GeForce and Radeon cards stay away from 16-bit floating point computations.
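
    If you want to see concretely where IEEE half precision (a 10-bit significand) gives out, here is a minimal NumPy sketch; each line shows a value fp16 cannot faithfully hold:

        # Half precision runs out of resolution quickly.
        import numpy as np

        print(np.float16(1000.1))     # 1000.0 -- fp16 spacing near 1000 is 0.5
        print(np.float16(2048) + 1)   # 2048.0 -- the +1 falls below the spacing and is lost
        print(np.float16(65504) * 2)  # inf    -- overflow; fp16 maxes out at 65504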

  • Hrimnir Member Rare Posts: 2,415
    Originally posted by Quizzical
    Originally posted by Hrimnir
    Originally posted by Mavolence
    Will it still be possible, a year from now, to purchase the Haswell i5s (the 4th generation) if one wanted to? Also, will there be a 5th-generation Haswell, or will it all be Broadwell from here on out?

    The intention is for Skylake to serve the desktop "high end" market.

    Broadwell was intended purely for mobile/laptop, so like Quiz said, it's actually slower than Haswell. But that's intentional.

    The laptop dual cores were slower than Haswell, too.  Do you think that's also intentional?

    I think that the underlying issue is that Intel simply couldn't get it to clock all that high.  That might be caused by their troubled 14 nm process node, in which case, Skylake could easily share the same malady.  For both Intel and AMD, the highest-clocking chips came on 32 nm, even though both have since moved on to newer, better nodes.

    You could be correct; obviously new processes bring new challenges.

    My understanding, and I could be wrong, is that they were going for power efficiency.  So, while it might be slower clock for clock than Haswell, if it's doing the same amount of work for less power, then it's still a good thing for mobile.  Most people aren't super concerned about performance with mobile so much as battery life.

    I'll have to do some research, as it's been a few months since I read any in-depth Broadwell stuff, but I do remember the focus being on power efficiency (kind of similar to Nvidia with Maxwell).

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • 13lake Member Uncommon Posts: 719
    Originally posted by Quizzical

    So Broadwell beats Kaveri in games where you're CPU-limited?  Some previous Intel graphics have beaten AMD when CPU-bound, too.  The reviews I found tended to be running this at minimum or low settings, in which case, the GPU might not be a meaningful bottleneck.  I'd like to see what happens if they test the same chips at high settings so that you know that the GPU is the bottleneck.  Even if it's 10 fps for this chip and 8 fps for that one, at least you get some relative information on what the GPU can do.

    As for how many years it took Intel to get their GPUs up to speed, we're on 17 years and counting.  Not three.  Definitely not three.

    Intel 3-year sarcasm aside, and Tom's Hardware's fake benchmarks aside, when the new A10-7870K and the 5775C go head to head on level settings, the Broadwell ranges from matching the new Kaveri to being ~8% faster than it, depending on the exact program.

     

    Though I'm personally waiting for gamegpu.ru or some of those other Russian sites to do a comparison, because they're the only ones who use sensible settings for games.

  • Quizzical Member Legendary Posts: 25,531
    Originally posted by Hrimnir
    Originally posted by Quizzical
    Originally posted by Hrimnir
    Originally posted by Mavolence
    Will it still be possible, a year from now, to purchase the Haswell i5s (the 4th generation) if one wanted to? Also, will there be a 5th-generation Haswell, or will it all be Broadwell from here on out?

    The intention is for Skylake to serve the desktop "high end" market.

    Broadwell was intended purely for mobile/laptop, so like Quiz said, it's actually slower than Haswell. But that's intentional.

    The laptop dual cores were slower than Haswell, too.  Do you think that's also intentional?

    I think that the underlying issue is that Intel simply couldn't get it to clock all that high.  That might be caused by their troubled 14 nm process node, in which case, Skylake could easily share the same malady.  For both Intel and AMD, the highest-clocking chips came on 32 nm, even though both have since moved on to newer, better nodes.

    You could be correct; obviously new processes bring new challenges.

    My understanding, and I could be wrong, is that they were going for power efficiency.  So, while it might be slower clock for clock than Haswell, if it's doing the same amount of work for less power, then it's still a good thing for mobile.  Most people aren't super concerned about performance with mobile so much as battery life.

    I'll have to do some research, as it's been a few months since I read any in-depth Broadwell stuff, but I do remember the focus being on power efficiency (kind of similar to Nvidia with Maxwell).

    Broadwell is faster on a per clock cycle basis than Haswell.  The problem is that it can't clock as high.

  • 13lake Member Uncommon Posts: 719
    Do Intel EUs have 8 or 10 shaders in them?  The Iris Pro 6200 is either 384 or 480 shaders, depending on the configuration.
  • Quizzical Member Legendary Posts: 25,531
    Originally posted by 13lake
    Do Intel EUs have 8 or 10 shaders in them?  The Iris Pro 6200 is either 384 or 480 shaders, depending on the configuration.

    Actually, it looks like they've got 8.  I was thinking it was 4.  So that means it's a 384 shader part.

    But what's also important is not just how many shaders there are, but how they're accessed and what instructions they can do.  AMD and Nvidia post a lot of details to help programmers figure out how to optimize code for their architectures.  For example:

    http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#maximize-instruction-throughput

    Intel gives rather less detail than that.  The most I could find is a claim that there are 8 shaders, four of which only have a few instructions, and the other four of which it doesn't say what they have.  Do I believe that the other 4 shaders in an EU can all do the expensive floating point operations (e.g., exponential, trig, reciprocal) at a rate of one per clock cycle?  No.  If they could, Intel would probably say so.
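
    For scale, here's a back-of-the-envelope (Python) on what 8 shaders per EU would mean for peak throughput; the 48 EU count is Iris Pro 6200's, while the 1.15 GHz graphics clock and the one-FMA-per-shader-per-cycle rate are assumptions for illustration:

        # Peak FP32 estimate from shader count and clock, assuming every
        # shader issues one fused multiply-add (2 flops) per cycle.
        eus            = 48     # Iris Pro 6200 EU count
        shaders_per_eu = 8      # per the discussion above
        clock_ghz      = 1.15   # assumed max graphics clock

        shaders = eus * shaders_per_eu
        print(shaders)                  # 384
        print(shaders * 2 * clock_ghz)  # ~883 GFLOPS, best case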

  • Cleffy Member Rare Posts: 6,414

    lol, 16-bit floating point calculations in a 3D game.  Hurry, move this vertex 0.1 meters over 60 frames when you only have a significand of 1024.  Still, most first generations on a new process node are typically slower, more expensive, and don't have headroom.  As the process matures, they speed up.  The same thing happened when Intel moved to 22nm.
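
    Running that vertex example literally in NumPy (a sketch; the vertex is assumed to start at 1.0 m) shows the drift:

        # Move a vertex 0.1 m over 60 frames, accumulating the per-frame
        # step in half precision vs. double precision.
        import numpy as np

        step16, pos16 = np.float16(0.1 / 60), np.float16(1.0)
        step64, pos64 = 0.1 / 60, 1.0
        for _ in range(60):
            pos16 = pos16 + step16   # each add rounds to the fp16 grid
            pos64 = pos64 + step64

        print(pos64)  # ~1.1, as intended
        print(pos16)  # ~1.117 -- roughly 1.7 cm of drift in one second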

    I don't think people really understand the power of having the major market share.  Everyone builds around your hardware.  What makes Intel good in the consumer space are two things.  First, they are on a smaller process node, so they have more transistors to work with.  Second, software developers build around their architecture and prevent things like thrashing in order to have stable performance for the majority of consumers.  On the first part, AMD really isn't able to compete, since they are currently a process node behind.  But as Quizzical said, it's not really a factor, since the fastest CPUs from Intel and AMD are on 32nm.  However, the second part is very true and can be seen in the professional market.  Intel is not always the best choice for certain types of applications.  The architecture in AMD's FX series, for example, is better for rendering 3D CGI.

    Now, if all console games are running on AMD x86-64 processors, what do you think are the chances those makers will also port the games to PC, since they will be x86 anyway?  I think it's a pretty good chance.  So I see most games being fastest on Excavator cores.  The only problem in the PC space right now is that those are all low-wattage parts and lack a decent clock.
