
Kaveri Reviews are out

Ridelynn Member EpicPosts: 7,383

Umm... I'm sorry for anyone who was waiting for one of these.

Yes, the integrated graphics are nice, and much improved over previous APUs. New heterogeneous instructions allow mixed CPU/GPU instructions to be performed very efficiently, and all together that makes for a very compelling CPU.

But raw CPU performance is... not really that much different. Sometimes it's better than Richland by a little, sometimes not by a little.

For existing games, and probably all PC games that don't utilize heterogeneous instructions or Mantle (which will probably be most of them), you'll see better GPU performance, but barely faster CPU performance over Richland.

It's still a very nice part for a budget system. And the GCN cores make it very compelling, and better for gaming - as long as you aren't CPU or RAM speed limited. But for anyone expecting 15-20% on the CPU side, well, you only see that with heterogeneous loads, and nothing really supports that yet.
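To put a number on why heterogeneous loads are where the gains live, here's a back-of-the-envelope sketch - all figures are illustrative assumptions, not measured Kaveri numbers - of the copy penalty that HSA-style shared memory removes:

```python
# Toy cost model: offloading work to a GPU behind a bus (discrete card)
# versus an HSA-style unified address space (no copies). Illustrative only.

def offload_time_ms(data_mb, compute_ms, bus_gb_s=None):
    """Total time for one offloaded kernel, in milliseconds.

    bus_gb_s=None models unified memory (zero-copy); a number models a
    discrete card whose data must cross a bus of that bandwidth twice.
    """
    copy_ms = 0.0
    if bus_gb_s is not None:
        # Data crosses the bus twice: host -> GPU, then results back.
        copy_ms = 2 * (data_mb / 1024.0) / bus_gb_s * 1000.0
    return copy_ms + compute_ms

# A 1 ms kernel on 64 MB of data over a ~8 GB/s bus is dominated by copies,
# which is exactly the penalty heterogeneous instructions aim to remove.
discrete = offload_time_ms(64, compute_ms=1.0, bus_gb_s=8.0)  # 16.625 ms
unified = offload_time_ms(64, compute_ms=1.0)                 # 1.0 ms
```

The point isn't the exact numbers; it's that for short kernels the bus copies dominate, so removing them changes which workloads are worth offloading at all.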

Comments

  • syntax42 Member UncommonPosts: 1,385
    Originally posted by Ridelynn

    It's still a very nice part for a budget system. And the GCN cores make it very compelling, and better for gaming - as long as you aren't CPU or RAM speed limited. But for anyone expecting 15-20% on the CPU side, well, you only see that with heterogeneous loads, and nothing really supports that yet.

    Do the new console APUs support that feature?  If so, I would expect development to support it on the PC in the near-future.

  • TheLizardbones Member CommonPosts: 10,910

    What about performance in the future? Given that both next-gen consoles are using AMD's APUs, are future games likely to take advantage of both the CPU and GPU having full access to all the available memory, and things like that?

    **

    :-)

    syntax beat me to the question.

    I can not remember winning or losing a single debate on the internet.

  • sacredfool Member UncommonPosts: 849

    Yes, both the XBO and the PS4 feature HSA and hUMA, or whatever it is AMD calls it.

    I haven't heard anything about games being developed that use it, though - but then, I don't read much about games other than MMOs. Still, even if such games exist, my bet is we will see new APUs from AMD before they are released.


    Originally posted by nethaniah

    Seriously Farmville? Yeah I think it's great. In a World where half our population is dying of hunger the more fortunate half is spending their time harvesting food that doesn't exist.


  • Cleffy Member RarePosts: 6,414

    I was not expecting much improvement in the CPU's efficiency. In most applications it's pretty much a dual core that runs a bit slower, since all that quad-core design goes unused. Under heterogeneous loads, or loads that make use of the CPU's design, it will be faster - but considering AMD's position, that's just not being adopted under typical x86 development. It might change with 64-bit, since that isn't dominated by the Intel compiler. At this point most applications should be developed in 64-bit on Windows machines anyway. I guess this explains why AMD has not offered a refresh of the FX processors: no point if the per-core efficiency is the same. Also, it's worth noting in the reviews that the A10-6800K runs 400 MHz faster than the A10-7850K, so per-clock efficiency probably has improved.

    The really interesting aspect of Kaveri is with games, and it's something we will not see in the current reviews, since there are no motherboards yet that support some of Kaveri's big selling points, like the GDDR5 memory controller. GDDR5 would be a pretty good building block for a Steam Box or other purpose-built gaming machine.

  • sacredfool Member UncommonPosts: 849
    Originally posted by Cleffy

    The really interesting aspect of Kaveri is with games, and it's something we will not see in the current reviews, since there are no motherboards yet that support some of Kaveri's big selling points, like the GDDR5 memory controller. GDDR5 would be a pretty good building block for a Steam Box or other purpose-built gaming machine.

    I admit I am biased toward Intel/Nvidia, and have been for years, but even if I look at it objectively...

    The Kaveri APUs are something special, but they are not useful for anything other than entry-level gaming. What's more important, that won't change - AMD's architecture is hitting its limits and they can't do anything about it. No amount of software is going to change that, be it Mantle or games starting to take advantage of hUMA.

    Kaveri looks like a nice future-proof system for those who want to spend $500, or who want to be able to play on their work PC. It's hard to justify a GTX 640 to your boss, but buying that Kaveri APU is totally innocent... honest! LAN party anyone? ;)

    Still, a Kaveri system won't be a gaming machine without a dedicated GPU, and if you get a dedicated GPU you can just as well get an i5.

    Kaveri is actually surprisingly good; I expected less. And I do see a use for Kaveri outside of gaming: these things will probably be great for laptops and office PCs.




  • Quizzical Member LegendaryPosts: 25,531

    Kaveri is like Llano:  a laptop part through and through.  And it will be a major advance there, as it can bring power consumption way down as compared to previous parts without giving up much performance.

    Unfortunately, like Llano, it isn't able to squeeze that much more performance out of giving it a lot more power headroom.  100 W in a desktop is not a big deal, and Richland was able to get vastly more performance out of 100 W than 45 W.  Kaveri, not so much.

    Before, I thought it was strange that AMD wasn't planning an FX-series chip with Steamroller cores.  Now I don't.  It's just not enough of an advance to justify making a whole new chip, especially when it needs a bunch of server stuff to be worthwhile.

    What I do think is strange, though, is that apparently Kaveri isn't coming to laptops for quite a while.  That's exactly where Kaveri is needed, and badly.

  • Cleffy Member RarePosts: 6,414

    The real interesting thing with Kaveri is the plan for AMD to release the same CPU that's in the PS4 to its partners for desktop PCs.

    It's hard to say AMD's architecture is hitting its limits when it's not even being utilized. The main issue with AMD's hardware is that there is barely any support for it. Given that x86-64 is an AMD architecture that beat out Intel's, the long-delayed rollout of 64-bit applications will have a noticeable impact on AMD performance. Also, the adoption of heterogeneous computing will mean that the single 128-bit FPU shared by each pair of cores is made up for by the massive floating-point processing power of the 500+ stream processors.
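For scale on that last point, the arithmetic looks roughly like this (the GPU figures are the commonly quoted A10-7850K specs; the CPU-side model of the shared FMA pipes is my assumption about Steamroller, not an official figure):

```python
# Peak single-precision FLOPS implied by the commonly quoted A10-7850K specs.
# The CPU-side model (two modules, each with a pair of shared 128-bit FMA
# pipes) is an assumption, not an official figure.

def peak_gflops(units, clock_ghz, flops_per_unit_per_clock):
    return units * clock_ghz * flops_per_unit_per_clock

# GPU: 512 GCN stream processors, ~720 MHz, one FMA (2 FLOPs) per clock each.
gpu_gflops = peak_gflops(512, 0.720, 2)       # ~737 GFLOPS
# CPU: 2 modules x 2 FMA pipes, ~3.7 GHz, 4 floats wide x 2 FLOPs per FMA.
cpu_gflops = peak_gflops(2 * 2, 3.7, 4 * 2)   # ~118 GFLOPS
```

On paper the GPU side has better than six times the floating-point throughput of the CPU side, which is the whole pitch for heterogeneous compute.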

  • Quizzical Member LegendaryPosts: 25,531
    Originally posted by Cleffy

    The real interesting thing with Kaveri is the plan for AMD to release the same CPU that's in the PS4 to its partners for desktop PCs.

    What are you talking about?  That doesn't even make sense.  The PS4 chip would make an awful desktop chip because the single-threaded CPU performance is so low.  The CPU cores in Kaveri have nothing to do with those in the PS4.

  • Ridelynn Member EpicPosts: 7,383

    PS4/XBone are Jaguar-based cores, evolved from the low-power Bobcat design. Kaveri has Steamroller-based cores, which is based loosely on the higher performance FX-line.

    Both use GCN for the GPU side though.

  • sacredfool Member UncommonPosts: 849
    Originally posted by Quizzical
    Originally posted by Cleffy

    The real interesting thing with Kaveri is the plan for AMD to release the same CPU that's in the PS4 to its partners for desktop PCs.

    What are you talking about?  That doesn't even make sense.  The PS4 chip would make an awful desktop chip because the single-threaded CPU performance is so low.  The CPU cores in Kaveri have nothing to do with those in the PS4.

    I am not sure what he meant either... I doubt even Steam Boxes will use Jaguar, though that is more probable. 

    And Cleffy, how much do you wanna bet new software is/isn't going to save AMD's Kaveri architecture?




  • Cleffy Member RarePosts: 6,414

    lol, ok, it's not going to be a Kaveri chip, but they are releasing that chip to its partners. The chip would clearly be used in a budget gaming machine or a small form factor machine. Of course it would be bad in normal workloads.

    I think there is a simple fundamental reason why AMD will perform better at 64-bit. Most 32-bit applications are compiled using the Intel compiler, which favors Intel's architecture. People still use the Intel compiler for x86-64, but that is still a more favorable compile for AMD than 32-bit was. There will still be a persistent performance difference due to Intel being a process node smaller, but it's not as bad as with 32-bit applications.

    Also, adoption is hurting AMD more than anything. Pretty much half the CPU side is useless under most workloads because software does not know how to utilize the cores.

  • TheLizardbones Member CommonPosts: 10,910


    Originally posted by sacredfool

    Originally posted by Quizzical

    Originally posted by Cleffy

    The real interesting thing with Kaveri is the plan for AMD to release the same CPU that's in the PS4 to its partners for desktop PCs.
    What are you talking about?  That doesn't even make sense.  The PS4 chip would make an awful desktop chip because the single-threaded CPU performance is so low.  The CPU cores in Kaveri have nothing to do with those in the PS4.
    I am not sure what he meant either... I doubt even Steam Boxes will use Jaguar, though that is more probable. 

    And Cleffy, how much do you wanna bet new software is/isn't going to save AMDs Kaveri architecture? 



    Low-end Steam Machines start with an A8 APU and an R9 270 discrete video card, and run $499.

    **

    Wait, it's an A6, not an A8.

    http://www.engadget.com/2014/01/06/cyberpowerpc-steam-machines/


  • TheLizardbones Member CommonPosts: 10,910

    So here's a question. I currently have an Intel Core2 Duo Q660 cpu, and an AMD HD5770 card with 1GB of memory.

    I can get an i5-4430 Haswell or an AMD A10-7850k for the same price. I can get a motherboard and memory for either at the same price. In a pinch, I could use the HD5770 card with either if that would be a better graphics option, though that doesn't seem likely.

    Which will give a better experience in the short term, if what I'm doing is playing video games, and possibly using Microsoft Office? I rarely play the latest and greatest game that pushes the boundaries of what video cards can do, but it's not impossible that I would. I plan on buying a low- to mid-range video card at some point in the future. Does that change which of these chips is the better purchase?

    **

    The short term option involves not purchasing a video card right now.


  • syntax42 Member UncommonPosts: 1,385
    Originally posted by lizardbones

    So here's a question. I currently have an Intel Core2 Duo Q660 cpu, and an AMD HD5770 card with 1GB of memory.

    I can get an i5-4430 Haswell or an AMD A10-7850k for the same price. I can get a motherboard and memory for either at the same price. In a pinch, I could use the HD5770 card with either if that would be a better graphics option, though that doesn't seem likely.

    The i5-4430's integrated graphics are probably on par with, or worse than, the HD 5770.  If you go that route, plan to get a real graphics card in the future.

    The AMD APUs have much better integrated graphics, with a CPU that isn't nearly as good as its Intel competition.  They are good enough for gaming, though.  I have an A8-4500M in my laptop and it plays SWTOR at low-to-medium graphics settings, getting 60 FPS in all but the most crowded or graphically intense areas.

    If you're looking for a short-term solution, go with the AMD APU.  If you're looking for a long-term solution, either could work, as you might be able to replace the APU in the future without spending much money.  The advantage of the i5 is that it offers a lot more CPU power and you could pair it with a real GPU for a gaming experience that is close to what today's $1000 machines offer.

  • jdnewell Member UncommonPosts: 2,237
    Personally I would go with the i5 and use the 5770 until you can afford to buy a better GPU. The 5770 will be fine to play games on until an upgrade is bought, IMO.
  • TheLizardbones Member CommonPosts: 10,910


    Originally posted by syntax42
    Originally posted by lizardbones

    So here's a question. I currently have an Intel Core2 Duo Q660 cpu, and an AMD HD5770 card with 1GB of memory. I can get an i5-4430 Haswell or an AMD A10-7850k for the same price. I can get a motherboard and memory for either at the same price. In a pinch, I could use the HD5770 card with either if that would be a better graphics option, though that doesn't seem likely.
    The i5-4430's integrated graphics are probably on par with, or worse than, the HD 5770.  If you go that route, plan to get a real graphics card in the future.

    The AMD APUs have much better integrated graphics, with a CPU that isn't nearly as good as its Intel competition.  They are good enough for gaming, though.  I have an A8-4500M in my laptop and it plays SWTOR at low-to-medium graphics settings, getting 60 FPS in all but the most crowded or graphically intense areas.

    If you're looking for a short-term solution, go with the AMD APU.  If you're looking for a long-term solution, either could work, as you might be able to replace the APU in the future without spending much money.  The advantage of the i5 is that it offers a lot more CPU power and you could pair it with a real GPU for a gaming experience that is close to what today's $1000 machines offer.




    I am leaning towards the Intel proc, because I know it's a stronger processor, and because my experience with my existing hardware has been so good. My 5770 doesn't seem to be struggling a lot with the stuff I'm doing now anyway. I think I was just hoping that the AMD part would be some kind of miracle piece of hardware.


  • Ridelynn Member EpicPosts: 7,383

    I wouldn't hold my breath for PC games to incorporate the heterogeneous stuff - that's really there to allow mixed CPU/GPU calculations. Things like OpenCL would see a big benefit, and it may be that AMD includes some of it in Mantle so that it's transparent and could be used in games, but Mantle isn't DirectX, and I don't see many developers forgoing DirectX for Mantle, at least not en masse.

    I think those instructions are more for the HPC side of the house, where these little APUs make for excellent low-power microserver options. You get the mixed CPU/GPU capability, plus the heterogeneous instructions that reduce the penalty when your code has to switch from CPU to GPU for whatever reason. That is really compelling for the high-performance crowd - the kind that would otherwise be eyeballing Tesla or Phi, but don't quite have the budget, power, or facilities to handle it. A rack full of micro-sized APUs could put out some decent teraflops without needing its own power plant or private investment firm, and the heterogeneous instructions eliminate some significant bottlenecks that traditional co-processors have when they communicate over a bus rather than intra-die. It's not aimed at replacing Tesla/Phi - those are high-end parts commanding a high-end price tag - but at filling the gap for teams that could use some of that capability without the budget. That's a significant audience; it's just not the gaming audience.

    But for gaming, the GPU gets used mostly for graphics, not mixed CPU/GPU calculations - at least with today's gaming engines. That could change, but I don't think it will have a huge impact, because the bulk of your GPU cycles are still going to be dedicated to just producing your graphics.
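Napkin math for the microserver scenario above - node count, wattage, and per-node throughput are all assumed figures for illustration, not any vendor's numbers:

```python
# Rough rack-level estimate for a pile of low-power APU micro-nodes.
# Every number here is an assumption chosen for easy arithmetic.

def rack_estimate(nodes, gflops_per_node, watts_per_node):
    """Return (total TFLOPS, total kW) for a rack of identical nodes."""
    return nodes * gflops_per_node / 1000.0, nodes * watts_per_node / 1000.0

# Say 100 micro-nodes, each a 45 W APU good for ~700 peak SP GFLOPS:
tflops, kilowatts = rack_estimate(100, 700, 45)   # 70 TFLOPS in 4.5 kW
```

Tens of teraflops in a few kilowatts is exactly the "no power plant required" territory described above.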

  • Quizzical Member LegendaryPosts: 25,531
    Originally posted by lizardbones

    So here's a question. I currently have an Intel Core2 Duo Q660 cpu, and an AMD HD5770 card with 1GB of memory.

    I can get an i5-4430 Haswell or an AMD A10-7850k for the same price. I can get a motherboard and memory for either at the same price. In a pinch, I could use the HD5770 card with either if that would be a better graphics option, though that doesn't seem likely.

    Which will give a better experience in the short term, if what I'm doing is playing video games, and possibly using Microsoft Office? I rarely play the latest and greatest game that pushes the boundaries of what video cards can do, but it's not impossible that I would. I plan on buying a low- to mid-range video card at some point in the future. Does that change which of these chips is the better purchase?

    **

    The short term option involves not purchasing a video card right now.

    As compared to Kaveri, the Radeon HD 5770 could be anywhere from roughly even to dramatically faster, depending on how limited by memory bandwidth you are.  Intel HD Graphics 4600 is much, much slower than either.  The only reason I could see you using the integrated graphics much is if either the video card dies and you need a backup, or if you want to do some GPU compute on Kaveri--which is unlikely.

    So I'd actually recommend neither.  The Core i5-4430 takes away too much clock speed.  Either pay more for a higher bin (3.9 GHz turbo versus 3.2 GHz) or save yourself a bunch of money by going with a cheaper AMD processor.
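For reference, the bandwidth gap behind the Kaveri-versus-HD 5770 comparison is easy to work out from the spec-sheet numbers (dual-channel DDR3-2133 for Kaveri, 128-bit GDDR5 at 4.8 Gbps effective for the 5770):

```python
# Peak DRAM bandwidth = transfer rate x bus width x channels.

def bandwidth_gb_s(transfers_mt_s, bus_bits, channels=1):
    return transfers_mt_s * 1e6 * (bus_bits // 8) * channels / 1e9

kaveri_ddr3 = bandwidth_gb_s(2133, 64, channels=2)  # dual-channel DDR3-2133
hd5770_gddr5 = bandwidth_gb_s(4800, 128)            # 128-bit GDDR5 @ 4.8 Gbps
# ~34 GB/s versus ~77 GB/s: the discrete card has more than double the
# bandwidth to feed its GPU, which is why it can pull dramatically ahead.
```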

  • Quizzical Member LegendaryPosts: 25,531
    Originally posted by Ridelynn

    I wouldn't hold my breath for PC games to incorporate the heterogeneous stuff - that's really there to allow mixed CPU/GPU calculations. Things like OpenCL would see a big benefit, and it may be that AMD includes some of it in Mantle so that it's transparent and could be used in games, but Mantle isn't DirectX, and I don't see many developers forgoing DirectX for Mantle, at least not en masse.

    I think those instructions are more for the HPC side of the house, where these little APUs make for excellent low-power microserver options. You get the mixed CPU/GPU capability, plus the heterogeneous instructions that reduce the penalty when your code has to switch from CPU to GPU for whatever reason. That is really compelling for the high-performance crowd - the kind that would otherwise be eyeballing Tesla or Phi, but don't quite have the budget, power, or facilities to handle it. A rack full of micro-sized APUs could put out some decent teraflops without needing its own power plant or private investment firm, and the heterogeneous instructions eliminate some significant bottlenecks that traditional co-processors have when they communicate over a bus rather than intra-die. It's not aimed at replacing Tesla/Phi - those are high-end parts commanding a high-end price tag - but at filling the gap for teams that could use some of that capability without the budget. That's a significant audience; it's just not the gaming audience.

    But for gaming, the GPU gets used mostly for graphics, not mixed CPU/GPU calculations - at least with today's gaming engines. That could change, but I don't think it will have a huge impact, because the bulk of your GPU cycles are still going to be dedicated to just producing your graphics.

    There's some cool stuff that you can do if you can assume that transferring data between the CPU and GPU is trivial in both directions, rather than having to go through a PCI Express bus.  But if you assume everyone has that, then your game chokes on anyone who uses a discrete video card--even a high end card.  We might see it some on PS4 and Xbox One, and maybe some PC ports of console games will have it as an optional code path.  But it's not going to be widespread in PC gaming until discrete video cards are all but dead.  Which could easily happen sometime around "never".  So basically, I agree with your first paragraph.

    As for HPC, what you need depends tremendously on what algorithms you need to run.  If you don't need FMA, video cards lose a lot of their appeal.  Same if you need a lot of integer computations or a good bit of branching.  If you have an algorithm that mixes portions that need high single-threaded performance (should run on CPU) with SIMD-friendly no-branching floating-point operations (should run on GPU) appropriately, then Kaveri might well be a miracle chip.  I don't know if any such algorithms happen to exist in the wild, though.  Still, it's probably possible to write a synthetic where a 45 W Kaveri gets more than double the performance of any non-Kaveri single-socket system you could build--even comparing it to Xeon E7 plus Tesla, Fire Pro, or Xeon Phi.

  • Barbarbar Member UncommonPosts: 271

    The Gigabyte Steam Machine stands out as the only one, AFAIK, that hasn't got a dedicated GPU. Instead it has adopted a different strategy: use a large i7 in order to get the best integrated graphics Intel can offer - Iris Pro 5200, I think it's called.

    http://www.tomshardware.com/news/gigabyte-brix-pro-steam-machine,25704.html

    I think Kaveri could be interesting here. I'm surprised that the Anandtech review shows the Iris doing so well, but surely there is money to be saved by ditching the i7 and using Kaveri. You lose CPU power of course, but would this user segment even notice?

  • Quizzical Member LegendaryPosts: 25,531
    Originally posted by Barbarbar

    The Gigabyte Steam Machine stands out as the only one, AFAIK, that hasn't got a dedicated GPU. Instead it has adopted a different strategy: use a large i7 in order to get the best integrated graphics Intel can offer - Iris Pro 5200, I think it's called.

    http://www.tomshardware.com/news/gigabyte-brix-pro-steam-machine,25704.html

    I think Kaveri could be interesting here. I'm surprised that the Anandtech review shows the Iris doing so well, but surely there is money to be saved by ditching the i7 and using Kaveri. You lose CPU power of course, but would this user segment even notice?

    Kaveri has a much faster GPU than the Iris Pro, but the latter has a 128 MB cache that can be used for GPU cache and massively save on memory bandwidth.  If you're mostly bandwidth-limited, then that's how the Iris Pro can win.  Pair both chips with 1066 MHz DDR3 and you'd probably see the Iris Pro win in most games.
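A toy model of that cache effect (the ~50 GB/s eDRAM figure is the commonly cited one; the hit rate and DRAM numbers are purely assumptions for illustration):

```python
# Average bandwidth seen by a GPU backed by a large cache plus slow DRAM.
# Hit rate and bandwidth figures are illustrative assumptions.

def effective_bandwidth(dram_gb_s, cache_gb_s, hit_rate):
    """Blend cache and DRAM bandwidth by the fraction of hits."""
    return hit_rate * cache_gb_s + (1.0 - hit_rate) * dram_gb_s

slow_dram = 17.0  # roughly dual-channel DDR3-1066, GB/s
no_cache = effective_bandwidth(slow_dram, 50.0, hit_rate=0.0)    # 17.0
with_edram = effective_bandwidth(slow_dram, 50.0, hit_rate=0.6)  # ~36.8
```

With slow memory behind it, even a modest hit rate in a 128 MB cache roughly doubles the bandwidth the GPU effectively sees, which is how Iris Pro can win when both chips are bandwidth-starved.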

  • Barbarbar Member UncommonPosts: 271
    Originally posted by Quizzical
    Originally posted by Barbarbar

    The Gigabyte Steam Machine stands out as the only one, AFAIK, that hasn't got a dedicated GPU. Instead it has adopted a different strategy: use a large i7 in order to get the best integrated graphics Intel can offer - Iris Pro 5200, I think it's called.

    http://www.tomshardware.com/news/gigabyte-brix-pro-steam-machine,25704.html

    I think Kaveri could be interesting here. I'm surprised that the Anandtech review shows the Iris doing so well, but surely there is money to be saved by ditching the i7 and using Kaveri. You lose CPU power of course, but would this user segment even notice?

    Kaveri has a much faster GPU than the Iris Pro, but the latter has a 128 MB cache that can be used for GPU cache and massively save on memory bandwidth.  If you're mostly bandwidth-limited, then that's how the Iris Pro can win.  Pair both chips with 1066 MHz DDR3 and you'd probably see the Iris Pro win in most games.

    Well, I don't know what the setup was, but I was referring to the Anandtech review posted, where the Iris seems on par with Kaveri.

    http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/13

    http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/12

  • Quizzical Member LegendaryPosts: 25,531
    Originally posted by Barbarbar
    Originally posted by Quizzical
    Originally posted by Barbarbar

    The Gigabyte Steam Machine stands out as the only one, AFAIK, that hasn't got a dedicated GPU. Instead it has adopted a different strategy: use a large i7 in order to get the best integrated graphics Intel can offer - Iris Pro 5200, I think it's called.

    http://www.tomshardware.com/news/gigabyte-brix-pro-steam-machine,25704.html

    I think Kaveri could be interesting here. I'm surprised that the Anandtech review shows the Iris doing so well, but surely there is money to be saved by ditching the i7 and using Kaveri. You lose CPU power of course, but would this user segment even notice?

    Kaveri has a much faster GPU than the Iris Pro, but the latter has a 128 MB cache that can be used for GPU cache and massively save on memory bandwidth.  If you're mostly bandwidth-limited, then that's how the Iris Pro can win.  Pair both chips with 1066 MHz DDR3 and you'd probably see the Iris Pro win in most games.

    Well, I don't know what the setup was, but I was referring to the Anandtech review posted, where the Iris seems on par with Kaveri.

    http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/13

    http://www.anandtech.com/show/7677/amd-kaveri-review-a8-7600-a10-7850k/12

    AnandTech used 2133 MHz memory, which is how Kaveri managed to mostly beat Iris Pro in spite of having far less bandwidth available.  Having huge GPU caches is where the industry is headed, but on 28 nm, it's still very expensive.  At 14/16 nm, we'll see it on a lot of chips, ranging all the way from tablet SoCs to top of the line discrete video cards.

  • Barbarbar Member UncommonPosts: 271

    Maybe I'm not reading it right, because it seems Iris is beating Kaveri in some of the Anandtech benches. But in their only HD bench they crank every setting to extreme and bring all the test subjects more or less to a standstill. I mean, what do they want the reader to conclude from showing 9.92 FPS versus 12.80 FPS?

    I think both Hardwarecanucks and Hardwareheaven reviews are making much more sense.

    http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/65031-amd-kaveri-a10-7850k-a8-7600-review-24.html

    http://www.hardwareheaven.com/reviews/1918/pg8/amd-a10-7850k-kaveri-apu-review-featuring-gigabyte-g1sniper-a88x-call-of-duty-ghosts.html
