Last year, we saw the first decent x86 tablet chips show up: AMD Temash and Intel Bay Trail Atom. Intel offered better CPU performance while AMD offered better GPU performance. That seems to have mostly been the story in other markets for as long as APUs have existed.
As with Kabini and Temash, Beema and Mullins are the same chip. Beema is the 15 W laptop version, while Mullins is the 4.5 W tablet version--albeit also usable for cheap, passively cooled laptops. Neither the CPU nor the GPU is substantially changed from the previous generation; while Beema/Mullins uses Puma+ cores rather than Kabini/Temash's Jaguar cores, there isn't much difference between them--and in particular, at the same clock speed, they'll offer identical performance. The GPU, meanwhile, is still AMD's latest and greatest GCN architecture.
So Beema and Mullins are no big deal, right? Hardly. I'm going to go out on a limb and predict that when it launches, Mullins will not only destroy Bay Trail Atom in both CPU and GPU performance, but it will beat Intel's next generation Cherry Trail Atom in both CPU and GPU performance, too.
How can that be, if the CPU and GPU are scarcely changed from an architecture that lost on the CPU side to Bay Trail? Two words: turbo boost. (That's actually far from the full story, but it's the easiest thing to explain.) The benefits of turbo are well-established, and Temash was conspicuously missing it. AMD's explanation was that with the rush to push something out the door and get Jaguar cores ready for the PS4 and Xbox One--both platforms in which turbo is irrelevant, as you can easily run the CPU cores at max speed all of the time--AMD didn't have time to build a good platform around its CPU and GPU.
With Beema and Mullins, that changes. Kabini offered CPU clock speeds of 1.5 GHz at 15 W and 2.0 GHz at 25 W. Beema will offer CPU turbo clock speeds up to 2.4 GHz at 15 W. Temash offered CPU clock speeds that wouldn't go over 1 GHz, and the quad core version was 8 W. Mullins will offer a quad core CPU with turbo up to 2.2 GHz in a tablet-friendly 4.5 W.
Now, those are turbo speeds, of course. Mullins won't let you run all four cores at 2.2 GHz while pushing the GPU hard. But it will let you push one core to 2.2 GHz while basically everything else is idle. And what programs would ever want to do that? Oh, just about all of them except for games.
Recall that Jaguar cores already offer vastly higher IPC than Silvermont Atom cores--meaning, at the same clock speed, they'll run about 50%-60% faster. When the comparison was a Temash chip with the clock speed stuck at 1.0 GHz versus a Bay Trail Atom chip that could turbo up to 2.4 GHz, the latter won in CPU performance, even with a huge IPC disadvantage. Mullins letting the CPU clock up to 2.2 GHz will handily destroy that and more than double the single-threaded performance of Temash.
And that's such a big jump that it's very unlikely that even Intel's next generation Cherry Trail Atom will be able to catch it. With Intel focusing on a die shrink to 14 nm and unlikely to offer much in the way of IPC improvements, they'd probably need to push clock speeds to somewhere in the ballpark of 3.5 GHz to catch AMD in single-threaded performance. With further die shrinks as likely to reduce maximum clock speeds as increase them, it's a pretty safe bet that that's not going to happen short of liquid nitrogen overclocking--and possibly not even then.
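For what it's worth, the back-of-the-envelope arithmetic behind those claims can be sketched out. This is a rough model, not a benchmark: it assumes a flat ~1.5x IPC ratio (the low end of the 50%-60% figure above) and ignores memory, turbo residency, and everything else that matters in practice.

```python
# Back-of-the-envelope single-thread model: performance ~ IPC x clock speed.
# The 1.5x IPC figure is an assumed round number from the article's
# 50-60% Jaguar-vs-Silvermont estimate, not a measured result.

JAGUAR_IPC_ADVANTAGE = 1.5

def relative_single_thread(amd_ghz: float, intel_ghz: float,
                           ipc_ratio: float = JAGUAR_IPC_ADVANTAGE) -> float:
    """AMD single-thread performance as a multiple of Intel's."""
    return (amd_ghz * ipc_ratio) / intel_ghz

# Temash stuck at 1.0 GHz vs. Bay Trail turboing to 2.4 GHz:
temash_vs_baytrail = relative_single_thread(1.0, 2.4)   # 0.625: Bay Trail wins

# Mullins turboing one core to 2.2 GHz vs. the same Bay Trail:
mullins_vs_baytrail = relative_single_thread(2.2, 2.4)  # 1.375: Mullins wins

# Clock speed Intel would need, at unchanged IPC, to catch Mullins:
clock_needed_ghz = 2.2 * JAGUAR_IPC_ADVANTAGE           # ~3.3 GHz
```

Plug in an IPC ratio at the top of the 50%-60% range instead and the required Intel clock lands right around the 3.5 GHz ballpark.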
And why do I say that AMD will beat Intel on the GPU side, too? Have you seen how badly AMD beat Intel on the GPU side last generation? Do I believe that a die shrink to 14 nm will let Intel make up that gap? In a word: no. Intel will gain in GPU performance, but not catch AMD. And that's still ignoring the issue of video drivers, where AMD holds a commanding lead over Intel.
Now, Intel will beat AMD in highly-threaded CPU performance. Intel will be able to run four CPU cores at 2.4 GHz or whatever simultaneously and indefinitely. Push all four CPU cores in Mullins and you'll be lucky to get half of that clock speed over extended periods of time. But an FX-8350 handily beats a Core i5-4670K in highly-threaded CPU performance, too. That doesn't mean it's a better CPU overall. And that's even though there are many desktop applications that can put many CPU cores to good use; in tablets, there are rather fewer. If you're writing an application that needs enough CPU performance that threading your code is a huge deal, you're probably not targeting tablets.
Bay Trail Atom also tends to use less power than AMD Temash at typical (mostly idle) loads, and will probably tend to use less than AMD Mullins, too. Cherry Trail Atom will almost certainly use less yet. In tablets, that matters, so it's not like it's a total victory in all phases for AMD. But Mullins-based tablets will probably offer a few hours of battery life under heavy load, and much more than that at more typical non-gaming loads.
Still, if the Intel-based tablet offers 8 hours of battery life and the AMD-based tablet offers 6, do you care about that difference? (Obviously battery life depends on a number of other factors--including the battery itself.) Maybe you do, but maybe not. If the AMD Mullins-based tablet offers plenty of battery life for you, getting markedly higher performance in everything is an easy call.
That said, Mullins isn't going to be as big an advance in tablet gaming. It will offer by far the best single-threaded tablet performance that the world has ever seen, at least if you exclude the ridiculous Ivy Bridge- and Haswell-based tablets. It will offer four CPU cores. It will offer a very capable GPU. But you know what it won't offer? The ability to let several CPU cores and the GPU run all out simultaneously for extended periods of time. That, of course, is what games like to do.
For that, we'll need to wait for die shrinks. Remember that above, I'm comparing a 28 nm Mullins chip to both a 22 nm Bay Trail Atom and 14 nm Cherry Trail Atom. But die shrinks are coming. AMD could realistically shrink to 20 nm next year and 14 or 16 nm the year after. Even if Intel shrinks to 10 nm in 2016 (which is hardly guaranteed, especially for Atom), the gap between 14 and 10 nm is vastly smaller than between 28 and 14. Intel is going to need a massive overhaul to both its CPU and GPU architectures to be competitive with AMD in tablets in the very near future.
Comments
I wanna see these benchmarked vs Tegra with power consumption.
So far I have yet to find concrete Tegra K1 watts. Only the Nvidia marketing info which I don't trust.
Nvidia marketing on Tegra K1 basically amounts to saying:
1) it can give high performance
2) it can be low power
And hoping that people think it can do both at once, not realizing that the former means high clock speeds and the latter low clock speeds.
Even so, I'm not sure how low of power the Tegra K1 can realistically go, as 1 Kepler SMX is an awful lot of GPU for a low power device. That's 50% more shaders than AMD is willing to put into Beema/Mullins, and Kepler tends to use more power per shader than AMD's GCN architecture. To be fair, Kepler also offers a little more performance per shader than AMD's GCN, but Beema/Mullins is targeting higher performance ranges and bigger form factors than Tegra. AMD has no ambitions of putting Beema/Mullins into cell phones; Beema is the laptop variant and Mullins the tablet variant. Meanwhile, Tegra's markets are tablets at the high end and cell phones at the low end; a laptop based on Tegra can't run Windows 7/8 or Mac OS X.
I just don't see the market for x86 tablets.
I know a lot of people "say" they would love one, if only it... was just like their laptop/desktop.
Still, it's nice to see the technology continuing to evolve in that direction, and I'm positive that something good will come of it - I just don't think chasing the ARM tablet market will ultimately be the driver.
Really, I think the best part of this is how it raises the bar for entry-level graphics capabilities in those "Blue Light Special" cheap notebooks & subcompacts, not in how many tablets it will probably get used in.
I will say... the difference between 6 hours, 8 hours, and the 10-12 that the current generation of ARM-based tablets can get under "typical" work loads is ~huge~.
For a tablet, battery life is arguably the most important specification, and certainly an extremely important one even if you don't regard it as the most important.
Desktop PCs are always tethered via cords.
Laptops are commonly tethered - most people will plug in their laptop if they have the option, regardless of whether they actually need to.
Tablets are rarely tethered -- the only time I see it commonly out in the wild is if it's being used as a POS/register/kiosk, or the user has absolutely no battery left at all and is trying to use it and charge it at the same time.
For many short/overnight business trips, our guys at the office will just take their tablets for email/slideshows/light work, and leave the laptop at home. Including watching a movie or playing Solitaire on the flight there and back, and using it all day on the trip, a <2 year old tablet can typically handle an entire overnight business trip without a charge until they get back home. And that's why tablets are so popular: they replace the entire laptop bag.
That's the difference between a 6 or 8 hour, and a 12 hour battery.
Sure, if you needed to crunch a huge spreadsheet or compile a program or whip up a database, that lower powered ARM tablet isn't the tool for the job. But then again, if I needed to do any of that, any tablet probably isn't the right tool - I'd want a good bit of screen real estate, a full keyboard/mouse, easy switching between documents and other resources/programs, and probably several other amenities that don't typically come with the tablet form factor regardless of what performance that tablet has under the hood.