Zambezi, disappointment or expected?

Zezda | Member Uncommon | Posts: 686

http://hardocp.com/article/2011/10/11/amd_bulldozer_fx8150_desktop_performance_review

http://www.tomshardware.co.uk/fx-8150-zambezi-bulldozer-990fx,review-32295.html

 

So initial reviews of their top FX model show terrible IPC and terrible power consumption. It performs well when given properly threaded loads, but at the same clock speed and thread count as Intel's chips it gets left so far behind it really isn't funny.

Now that's all well and fine, but seeing as Intel's Ivy Bridge chips are not too far off, just how long does AMD expect these chips to remain competitive? Ivy Bridge is planned to increase core counts to 6 or 8 cores (that's 12 and 16 threads), along with a projected 20% performance increase over Sandy Bridge (and the GPU improvement is supposed to be fairly big as well).

 

What is everyone's impression of the new Zambezi parts? I understand that things are moving towards multi-threaded apps, but I can't help thinking AMD jumped the gun on this and killed their IPC for the modular design, when the modular design still isn't head and shoulders above the previous one.

 

[Edit]

I also forgot that Socket 2011 is supposed to have quad-channel RAM rather than dual channel, or even triple channel like Socket 1366 had. So the memory platform should be vastly improved (not that memory has had a huge impact on anything in years).


Comments

  • Yalexy | Member Uncommon | Posts: 1,058

    These parts are good for servers or machines meant for rendering; other than that, you're better off with Intel's i5-2500K at the same price.

  • Zezda | Member Uncommon | Posts: 686

    Originally posted by Yalexy

    These parts are good for servers or machines meant for rendering; other than that, you're better off with Intel's i5-2500K at the same price.

    I think the problem is that for rendering you're better off doing it on the GPU, and if you're doing anything like Bitcoin or highly specialized number-crunching programs, again you're better off doing it on the GPU.

    I can see it being useful in the server market, where they expect constantly high use of all cores. But having said that, will it still hold up against Ivy Bridge when that lands?

  • Yalexy | Member Uncommon | Posts: 1,058


    Originally posted by Zezda


    Originally posted by Yalexy
    These parts are good for servers or machines meant for rendering; other than that, you're better off with Intel's i5-2500K at the same price.

    I think the problem is that for rendering you're better off doing it on the GPU, and if you're doing anything like Bitcoin or highly specialized number-crunching programs, again you're better off doing it on the GPU.
    I can see it being useful in the server market, where they expect constantly high use of all cores. But having said that, will it still hold up against Ivy Bridge when that lands?

    I'm talking about rendering in Cinema 4D, 3ds Max or the like, and those renders are 100% done on the CPU so far. To get the best results (shortest render time) there, you want the most cores at the highest clock speeds.
    Because of this, we run a dedicated render machine at work with 24 cores (4x Opteron 6136) on a badass Tyan board sporting 128GB of RAM. And now laugh at the GPU, which is a single ATI FirePro V7800 ^^

  • Barbarbar | Member Uncommon | Posts: 271

    The hardware sites surely aren't impressed. The guy from AnandTech gets all mushy, and water flows from his eyes:

    "The good news is AMD has a very aggressive roadmap ahead of itself; here's hoping it will be able to execute against it. We all need AMD to succeed. We've seen what happens without a strong AMD as a competitor. We get processors that are artificially limited and severe restrictions on overclocking, particularly at the value end of the segment. We're denied choice simply because there's no other alternative. I don't believe Bulldozer is a strong enough alternative to force Intel back into an ultra competitive mode, but we absolutely need it to be that. I have faith that AMD can pull it off, but there's still a lot of progress that needs to be made. AMD can't simply rely on its GPU architecture superiority to sell APUs; it needs to ramp on the x86 side as well—more specifically, AMD needs better single threaded performance. Bulldozer didn't deliver that, and I'm worried that Piledriver alone won't be enough. But if AMD can stick to a yearly cadence and execute well with each iteration, there's hope. It's no longer a question of whether AMD will return to the days of the Athlon 64, it simply must. Otherwise you can kiss choice goodbye."

    The guy from bit-tech gets really, really angry:

    "We therefore feel totally vindicated that at no point did we recommend any bit-tech reader to buy a Socket AM3+ motherboard ‘to get ready for Bulldozer.’ We merely reviewed these boards on the premise that they were new and that people might wish to buy one as an upgrade for a Phenom II system – we had no idea whether Bulldozer would be good, bad or indifferent, so we urged caution. Turns out we were right: the FX-8150 is a stinker. "      

     

  • Zezda | Member Uncommon | Posts: 686

    After reading more and more about it, I'm getting less and less impressed, to be honest.

     

    I can see where they're going with it, but I think going for cores first and IPC second will come off worse than chasing IPC first and cores second. Although I think they gave up chasing pure efficiency years ago and instead opted to get more cores out the door.

     

    What really worries me is that I think Intel will have a relatively easy time putting their considerable IPC advantage into setups with more and more cores, while AMD are going to struggle to increase the efficiency of their existing 'many core' strategy. I mean, looking at how much clock-for-clock performance they have lost compared to even Thuban, and how deep they have made their pipeline, I can't see that being easy to work through.

  • Quizzical | Member Legendary | Posts: 25,499

     

    It's a server part, folks.

    An FX-8150 sometimes manages to hang with a Core i7-2600K in programs that can push eight cores.  But that's not what we wanted from it.  AMD needed to handily beat the 2600K in well-threaded workloads, and be not that much worse than the Core i5-2500K in lightly threaded ones.  Instead, it offers single-threaded performance comparable to a Phenom II X4 980 or Phenom II X6 1100T.

    In fairness to AMD, those are higher-clocked Phenom II parts than the Phenom II X4 955 that people commonly buy today.  So it's not like the FX-8150 is no progress at all.  But it's not what AMD needed in order to be competitive in the desktop space.

    The best case for AMD is that Bulldozer modules will turn out like VLIW5:  a forward-looking architecture for which the initial part (Radeon HD 2900 XT) was hot, late, and slow, but which contained some important innovations that would eventually be vindicated.  AMD had to go through the Radeon HD 2900 XT in order to get to the Radeon HD 4000, 5000, and 6000 series, all of which were better than their respective Nvidia contemporaries.

    And that could yet happen.  Rumors say that there is at least one glitch that hurts IPC and will be fixed by a respin in a few months.  A respin and/or process node improvements could yield higher clock speeds at lower voltages.  Not much software uses AVX or FMA4 yet, but that is coming.

    Even so, AMD getting better doesn't necessarily mean gaining on Intel.  Sandy Bridge has AVX, too, and will see more benefit from it than Bulldozer.  Intel's 32 nm process node is mature, but they're looking to move to 22 nm soon, so they'll see process node improvements, too.

    Most critically, Windows 7 doesn't know what a Bulldozer module is, and can't schedule threads appropriately for it.  Treating an FX-8150 as either eight independent cores or four cores plus hyperthreading (e.g., a Core i7-2600K) is wrong.  I don't know which of the two Windows 7 does, but either one will hurt performance.
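
    To make that concrete, here is a toy Python sketch of the two placement policies (my own illustration, not anything from Microsoft or AMD), assuming the usual numbering where cores 0-1 share module 0, cores 2-3 share module 1, and so on:

        # Toy model of thread placement on an FX-8150's four modules.
        CORES_PER_MODULE = 2   # paired cores share a front end and FPU
        MODULES = 4

        def naive(n_threads):
            # Fill cores in numeric order, packing both cores of a
            # module before touching the next one.
            return list(range(MODULES * CORES_PER_MODULE))[:n_threads]

        def module_aware(n_threads):
            # One thread per module first; double up only once every
            # module already has a thread.
            order = [m * CORES_PER_MODULE + slot
                     for slot in range(CORES_PER_MODULE)
                     for m in range(MODULES)]
            return order[:n_threads]

        for n in (2, 4, 8):
            print(n, "| naive:", naive(n), "| module-aware:", module_aware(n))

    With four threads, the module-aware policy lands on cores [0, 2, 4, 6], so no thread shares a module; the naive policy packs them onto [0, 1, 2, 3], where every thread fights a neighbor for the shared front end and FPU while half the chip sits idle.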

    The FX-8150 already fares better in moderately threaded workloads in Windows 8 than in Windows 7.  This is remarkable in part because Sandy Bridge fares worse in Windows 8.  Now, there is a lot of optimization work left for Microsoft to do, so Sandy Bridge will probably run Windows 8 at least as well as 7.  But it's not hard to imagine Bulldozer seeing even bigger improvements from that same optimization work.  Even so, being 80% as fast as a Core i5-2500K rather than 70% doesn't really fix the problem.

    Such improvements would be good news for AMD shareholders, but don't benefit people wanting to run an FX-8150 in Windows 7 today.  Even if VLIW5 was eventually a good idea, the Radeon HD 2900 XT never did become a good card.

    For the last few months, I've advised people assembling a computer on a tight budget to get a Socket AM3+ system, to have the opportunity to upgrade in the future.  I don't regret that, as the alternative on smaller budgets was a Socket AM3 system that is no better today.

    The FX-8150 isn't the upgrade that you're looking for.  People looking to upgrade the processor two or three years from now will benefit from whatever respins or process node improvements happen before then.  They'll likely be able to get a much better processor than the FX-8150, and cheaper, too.

    That's the optimistic scenario for AMD, however.  The worst case is that this is AMD's NetBurst, except without the process node advantage.  NetBurst used a deeper pipeline to achieve high clock speeds, at the expense of not doing that much per clock cycle.  The high clock speeds meant too much heat, and that meant the clock speeds couldn't go as high as Intel wanted.  And that meant that the Pentium 4 lost to the Athlon 64, in spite of higher clock speeds.

    Bulldozer seems to have the same problems, and for about the same reasons.  Even so, a 19 stage Bulldozer pipeline is a long way from NetBurst's 30.  More typical for recent high-performance x86 processors is 14-16.

    Intel was eventually forced to ditch NetBurst entirely.  The Core 2 Duo that launched in 2006 was architecturally closer to 1995's Pentium Pro than to 2005's Pentium D.  Some of NetBurst's innovations did eventually make their way into Sandy Bridge, so it wasn't a total loss for Intel.  But it was a disaster.

    And that is even though NetBurst enjoyed Intel's process node advantage.  Bulldozer doesn't have that luxury, but is stuck with AMD's process node disadvantage.

    That's not to say that Bulldozer is useless today, however.  As I said at the start, it's a server part.  It's not just for fun that there are four HyperTransport links (three of which are disabled) in the desktop part.  Server workloads that can push 16 cores in Opteron Interlagos don't need those cores to all run at 4.2 GHz in order to perform well.  More cores clocked lower beats fewer cores clocked higher if you can put all of the cores to good use.

    Opteron Interlagos will likely be a worthy successor to Opteron Magny-Cours.  But then, the latter was a terrible gaming part, too.

  • Zezda | Member Uncommon | Posts: 686

    @ Quiz,

     

    I know what you're saying about it being more server-oriented, but what do you think about AMD dropping way, way behind in IPC?

     

    IMO it is going to start really hurting them when Intel comes out with their new 6- and 8-core CPUs around the corner, in combination with quad-channel RAM. I mean, AMD says these are 8 'cores', but the reality is that there are a lot of shared resources in there to make that core count happen. If Intel dropped an 8-core, 16-thread version of Sandy Bridge on the new process node without any performance enhancement at all, it would still absolutely crush AMD's new Bulldozer parts in threaded apps and still retain the IPC advantage. First instinct tells me that even on the new node Intel's 8-core version of Sandy Bridge would have a huge die, and AMD might be able to pull ahead in terms of die-size efficiency, but Bulldozer itself is pretty huge.

    If Intel did something like that, just how much would the additional cost turn off the server market?

  • JayFiveAlive | Member Uncommon | Posts: 601

    This is a huge disappointment for me. I was waiting for BD to release before building a new rig. Early leaked benchmarks looked bad, but I was optimistic. Now that it's actually out and we have benchmarks from the real hardware testers... wow. WTF. :( In a few cases it is indeed a better proc, but in many cases it's worse than the 2500K, and even stranger, the 1100T beats it in some cases. WTF.

     

    Here's hoping their stock tanks a bit more so I can buy some, lol. I think AMD still has it in them, but BD = flop, IMO. When Intel drops its next CPUs, AMD is going to be way behind. Come on AMD, get it together!!! We believe in you!! Until then, Intel - keep up the great work. Maybe you will cause AMD to get their stuff together.

  • Ridelynn | Member Epic | Posts: 7,383

    Pretty much what I expected. I was skeptical early on when everyone was saying it was going to beat Sandy Bridge - I figured at best they'd match it, and in some cases it does. Despite it being a server part, I don't think we'll see much change even when the desktop-oriented part arrives: at best it will still match Sandy Bridge, and I think it will struggle to do that.

    The budget platform basically stays the same, and it pretty well explains why AMD cut the price by about $100.

    I'm not an AMD or Intel fanboy; I've owned several CPUs of both brands. I just like healthy competition, because it drives down prices and increases performance across the board.

    I am glad AMD priced them competitively with regard to performance. That keeps the CPU relevant even if it isn't the top performer. You don't need to be the top dog to play in the yard; you just need to be in line on price vs. performance - and their GPU line has shown that to be a very acceptable strategy.

    We can hope that the next iteration of Bulldozer will show some tangible improvements. Quiz's reference to the 2900 XT was exactly what I was thinking of when I saw the initial benchmarks: a little growing pain today for a better roadmap and overall gains tomorrow. Everyone thought Intel was going to die when they introduced the Core architecture clocked nearly a full 2 GHz slower than the Pentium D - especially after Intel had been pushing GHz for so many years - but that turned out to be the best thing Intel has done probably since the introduction of the Pentium and its own launch woes (I remember videos of people popping popcorn on a P5; it was the first CPU where you really needed an active heatsink, and it took a while for the community to accept that fact).

    Hopefully it's a little pain and embarrassment now, and a bit of a loss on the high-end bins, in exchange for future gains. But only time will tell if that is the case, and it could take years for the technology to mature (Itanium still hasn't really done anything).

  • drbaltazar | Member Uncommon | Posts: 7,856

    Come on, the 4170 was also tested, geez! It's clocked at 4.2 GHz. The issue this time around is also at MS's end: Windows 7 doesn't understand a lot of the stuff AMD uses, or what Bulldozer wants W7 to do. And from the look of things, it looks like MS won't patch it. They might, I say might, have a fix in W8, but it isn't a given.

    I wonder how this proc does on Linux, though! Linux devs are very agile and can turn on a dime to support all those features that aren't working or are bugged in Windows.

    Hell, it might even be great with Mac OS, lol. One thing is sure: it isn't going fast anytime soon on the Windows side!

  • JayFiveAlive | Member Uncommon | Posts: 601

    The thing is, why build a proc that the OS doesn't support for optimal use? AMD have released their CPU roadmap, and they will have a new CPU out by the time Windows 8 arrives. It seems rather silly/pointless to me. I guess I hope they are at least building future CPUs on the BD technology that took them so long.

  • drazzah | Member Uncommon | Posts: 437

    It's just an excuse from AMD; the whole FX line is a bust. A new architecture that is slower than the ancient Phenom II architecture. Sad but true.


  • cooms | Member | Posts: 219

    Let's just say my Christmas list went from an AMD FX 8-core, an AM3+ motherboard, and a Radeon 6670 to an i5, an Intel mobo compatible with Ivy Bridge, and an Nvidia 550 Ti.

     

    To me it seems AMD was trying too hard to beat Sandy instead of coming up with real game-changer ideas. APUs, I believe, come close; just think about how amazing they could be with an option that had 6 cores and an integrated GPU about as powerful as a 6800+ or something to that extent. I think AMD could have taken even more of the budget market AND dominated laptops if they found ways to keep something like that from overheating.

  • drbaltazar | Member Uncommon | Posts: 7,856

    I was gonna go with the 8150, but I'll wait for Ivy Bridge! I'll just update to a 980 temporarily, since Ivy is about a year out!

  • Quizzical | Member Legendary | Posts: 25,499

    Originally posted by cooms

    Let's just say my Christmas list went from an AMD FX 8-core, an AM3+ motherboard, and a Radeon 6670 to an i5, an Intel mobo compatible with Ivy Bridge, and an Nvidia 550 Ti.

     

    To me it seems AMD was trying too hard to beat Sandy instead of coming up with real game-changer ideas. APUs, I believe, come close; just think about how amazing they could be with an option that had 6 cores and an integrated GPU about as powerful as a 6800+ or something to that extent. I think AMD could have taken even more of the budget market AND dominated laptops if they found ways to keep something like that from overheating.

    I get deciding to go with an Intel processor.  But why switch the video card?  AMD is still ahead of Nvidia there, and with AMD launching new cards this year or very early next year and Nvidia not doing so, that gap is only going to widen.  With the discontinued, clearance-priced GeForce GTX 460s disappearing, by Christmas it will probably be hard to justify buying an Nvidia card cheaper than the ~$180 GeForce GTX 560.  If AMD executes on Southern Islands as well as they did with Evergreen, it could be hard to justify buying any Nvidia card at all.

    To the contrary, AMD is very much trying to go for game-changing products.  Bobcat and Llano are exactly that in some markets.  Indeed, Bobcat is so good that when Intel launched its competitor, Cedar Trail Atom, they didn't even bother to herald the launch with a press release, let alone review samples.  Had everything gone as planned for Llano and Zambezi, they would very much be revolutionary products.

    How exactly do you propose to feed data to six x86 cores and an integrated Barts GPU while keeping costs and power down?  Barts has quad-channel GDDR5.  Try to match that bandwidth with DDR3 and you're looking at about 10-12 memory channels.  Motherboard manufacturers are finding it awkward to lay out the traces for four channels in Sandy Bridge-E.  And that's in a full-size ATX motherboard that goes in a mid-tower case.  In a laptop it's harder.  Solder GDDR5 onto the motherboard and power consumption soars, both at idle and at load.
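
    As a rough sanity check on that 10-12 figure, here is the back-of-the-envelope math in Python (the GDDR5 and DDR3 rates are my own approximations, not numbers pulled from any review in this thread):

        # Approximate peak-bandwidth arithmetic; all figures assumed.
        gddr5_bw = 256 * 4.2 / 8              # Barts: 256-bit bus, ~4.2 Gbps/pin -> ~134 GB/s
        ddr3_per_channel = 1600 * 8 / 1000    # 64-bit DDR3-1600 channel -> ~12.8 GB/s

        print(round(gddr5_bw), "GB/s of GDDR5 is worth about",
              round(gddr5_bw / ddr3_per_channel, 1), "channels of DDR3-1600")

    That works out to roughly 10.5 channels of DDR3-1600, or closer to 12.5 with DDR3-1333, consistent with the 10-12 range above.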

  • Rabiator | Member | Posts: 358

    I call it a disappointment.

    From the reviews I've read so far, it seems that the FX needs its higher clock speed and two extra (integer) cores to be slightly faster than the six-core Thuban. In single-threaded applications at the same clock speed, it is actually a bit slower than its predecessor.

    So if AMD had just shrunk the Thuban to 32 nm structures and increased its clock speed, the result might have been better and cheaper to manufacture.

    For the sake of AMD and competition in general, I hope they can recover from this flop like Intel did with the Pentium 4. I mean the "Northwood" that came before the awful "Prescott"; that one was actually a decent CPU for its time.

  • Rabiator | Member | Posts: 358

    Originally posted by Quizzical

    To the contrary, AMD is very much trying to go for game-changing products.  Bobcat and Llano are exactly that in some markets.  Indeed, Bobcat is so good that when Intel launched its competitor, Cedar Trail Atom, they didn't even bother to herald the launch with a press release, let alone review samples.  Had everything gone as planned for Llano and Zambezi, they would very much be revolutionary products.

    How exactly do you propose to feed data to six x86 cores and an integrated Barts GPU while keeping costs and power down?  Barts has quad-channel GDDR5.  Try to match that bandwidth with DDR3 and you're looking at about 10-12 memory channels.  Motherboard manufacturers are finding it awkward to lay out the traces for four channels in Sandy Bridge-E.  And that's in a full-size ATX motherboard that goes in a mid-tower case.  In a laptop it's harder.  Solder GDDR5 onto the motherboard and power consumption soars, both at idle and at load.

    IMHO the APU in its current form is only good for low-end to mid-range graphics, due to the problems with memory bandwidth. Llano is already pushing the limits, performing worse than a Phenom II X4 plus a discrete graphics card that looks similar on paper (say, an HD 5570).

    Things may change once you can put a GB of video RAM into the APU. Until then, I'll stick to separate parts for my PCs.

  • Quizzical | Member Legendary | Posts: 25,499

    Originally posted by Rabiator

    So if AMD had just shrunk the Thuban to 32 nm structures and increased its clock speed, the result might have been better and cheaper to manufacture.

    But could they have increased the clock speed?  They already did shrink Stars cores to 32 nm.  Look at the top bin clock speeds:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16819103942

    2.9 GHz and no turbo.  Both the Deneb and Thuban versions at 45 nm hit 3.7 GHz, the latter with turbo.

    It's entirely plausible that a year from now, AMD will be selling Piledriver cores with better IPC than Thuban, a stock clock speed of 4.2 GHz, turbo up to 5 GHz, lower real-world power consumption than Zambezi, and Windows 8 squeezing more performance out of moderately-threaded programs via better thread scheduling.  That would very much be a nice product, competitive with both Sandy Bridge-E and Ivy Bridge.

    Now, that's hardly a done deal, but it's highly plausible.  If AMD just kept trying to shrink Stars, they'd only fall ever further behind Intel.  The Bulldozer approach at least gives them a chance of being competitive.  They aren't there today, but there's at least a plausible route for them to get there.

  • drbaltazar | Member Uncommon | Posts: 7,856

    The issue here is MS: a lot of features are bugged or just plain aren't understood by W7. Until AMD and MS get together and talk about how to link W7 to FX's new way of doing things, nobody is going anywhere. As for W7 or W8, MS is far enough along that I doubt they can add support for FX's new features. And as we all know, MS products have a tendency to fall back to very conservative settings when Windows doesn't like, or doesn't understand, what it is seeing. I wouldn't be surprised if W7 or W8 is throttling FX by 50% because it doesn't understand what FX is asking it to do! We might even be lucky it works at all.

    And as we all know, debugging software for something like a processor isn't a fun job! We'll have to wait and see what the issue is; I highly doubt there is nothing wrong. Something between what Windows receives and what AMD sends isn't being understood by Windows.

  • Volgore | Member Epic | Posts: 3,872

    Too bad... seems like this is just another of AMD's cases of "...under certain given circumstances it performs rather well."

    I was going to wait for the Bully to hit the market, but I got a very good deal on a 2500K with board and RAM which I couldn't resist. Here I am at a stable 4.8 GHz, and checking those Zambezi reviews, I feel quite comfortable with my choice.

  • levin70 | Member | Posts: 87

    I would take what Quiz alluded to even further.  The chipsets appear to have been driven by the server guys from start to finish.  Why, I have no idea, but if you look at what the BD cores will be able to scale to in the not-too-distant future on the server side, it just appears from my standpoint that these chips were designed for servers and the desktop market was an afterthought.

  • Quizzical | Member Legendary | Posts: 25,499

    The server platforms (Sockets C32 and G34) are totally different from the desktop chipsets (970, 990X, and 990FX).  The desktop chipsets are perfectly sensible for desktops (and more so than Intel's H61, H67, P67, Z68 mess), with the only real flaw being the lack of USB 3.0 support in the chipset.

  • Cleffy | Member Rare | Posts: 6,414

    Just what I was hoping for: a drop-in chip for a dual-socket G34 board with superior rendering capabilities.

    I think AMD has offered up a good chip when you put it into context.  In single-threaded apps, it performs about the same as a Thuban core.  In multi-threaded apps it's better.  The thing to remember is that single-threaded apps don't require anything more than a Thuban anyway.  Right now, in the context of gaming, poorly threaded games are more graphics-bound than processor-bound.  In multi-threaded games like RTSes, you will see an advantage with Bulldozer.  Over the last decade, the need for the most powerful CPU has dwindled in the consumer sector.

    The thing that makes AMD still viable is the entire platform.  When it comes to integrated graphics, any AMD chip is better than even the most powerful Intel.  When it comes to modern features, you have more PCIe lanes running through AMD chipsets than Intel's.  My generation-old 880FX has SATA 6Gb/s and USB 3.0, which are still difficult to come by on Intel-branded motherboards.

    That said, I don't think the current crop of Zambezi will do well.  I think the APU versions will sell better, and there is definitely a bug in them that keeps them from working properly.

  • Ridelynn | Member Epic | Posts: 7,383


    Originally posted by Cleffy
    That said, I don't think the current crop of Zambezi will do well.  I think the APU versions will sell better, and there is definitely a bug in them that keeps them from working properly.

    I think you are 100% correct here. I think the APU version will be an entirely different animal and will do quite well; everyone looking at a budget option will have it in spades, even more so than Llano has managed.

  • duelkore | Member | Posts: 228

    I have been drinking, but here goes. What AMD failed to realize is that the bulk of corporations do not like to invest in new hardware. They don't understand it; they don't care. They don't want to buy a new server unless they have to, and that's only new startups, or when shit just broke. At least that is my experience. Considering that most servers are print servers, Exchange servers, or centralized user shares... what do they really need increased performance for? Most of my customers have a separate server for each role I just mentioned. I think I have 20+ clients that use dual-core servers with no issues at all. Hell, I have clients that use single-core Exchange servers. It gets the job done when I need to add AD/Exchange users/mailboxes, etc.

     

    What they failed to realize on the GPU/CPU thing is that this is really geared towards office workers. Most office workers, me included, require at least two monitors. Many programmers, web designers, etc. require three or more. This means you need to buy an adapter of sorts; this could be a USB-to-VGA/DVI or a Y connector. Usually the cheapest solution is to just buy a cheap video card with dual outputs, or even the crappy DVI plus DisplayPort or HDMI. That totally negates the money saved on a CPU-with-GPU solution.
