https://thestack.com/iot/2016/02/05/intel-william-holt-moores-law-slower-energy-efficient-chips/
Intel has said that new technologies in chip manufacturing will favour
better energy consumption over faster execution times – effectively
calling an end to ‘Moore’s Law’, which successfully predicted the
doubling of density in integrated circuits, and therefore speed, every
two years.
Intel: “We’re going to see major transitions. The new technology will be fundamentally different. The best pure technology improvements we can make will bring
improvements in power consumption but will reduce speed.”
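To put the "doubling every two years" cadence in plain numbers, here is a rough sketch (the 1971 Intel 4004 starting point and the simple exponential form are my own illustrative choices, not something from the article):

```python
# Rough Moore's-law projection: transistor count doubles every two years.
# The 1971 starting point (Intel 4004, ~2,300 transistors) is purely illustrative.
start_year, start_count = 1971, 2_300
for year in (1991, 2001, 2011, 2016):
    doublings = (year - start_year) / 2          # one doubling per two years
    projected = start_count * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors")
```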
Everyone who follows this tech a bit saw this coming though: the benefits from FinFET are over, EUV is years behind, and ASML's stock took a massive hit.
The investment needed for more speed is way too high, and with PC sales tanking there is simply no money to keep Moore's law alive.
There is still a lot of money to be found in the mobile market, it's a very profitable market, and Intel's focus is shifting there. Also, "the internet of things" that Intel mentions, basically putting chips in... everything... is a growing market.
Comments
Stock traders knew this news was coming when the costs and delays of EUV became apparent.
"We wasted billions upon billions to try to produce something faster but managed only 5% in some specific tasks and now we have to spin it to look good"
And all those who said Intel was just "holding off faster chips", lol. Nope. Nothing new from Intel for the foreseeable future (which everyone with at least a modicum of interest in the area knew), except "let's change the socket every year and not update previous platforms, there's enough suckers out there".
Which is bullshit, they forgot something very important: they become more energy efficient too...
Moore never spoke about speed, no clue where they got that from.
He was talking about the number of transistors in an integrated circuit.
Dunno who wrote the article, but they're a complete moron.
Here's a little tip.
"NVIDIA Pascal GPU Feature 17 Billion Transistors, Almost Twice The Transistors of Fiji"
Or does Moore's law only go for Intel CPUs? (rotfl)
Fury X has a total of 8.9 Billion transistors.
GTX Titan X comes with 8.0 Billion transistors.
(the Xbox CPU has 5.0 billion btw; quick doubling check below)
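A quick back-of-envelope check of those numbers, as a sketch using only the transistor counts quoted above:

```python
import math

# Back-of-envelope check of the "almost twice the transistors" headline,
# using only the figures quoted above (Fiji 8.9B, Pascal 17B).
fiji, pascal = 8.9e9, 17e9
ratio = pascal / fiji
print(f"Pascal vs Fiji: {ratio:.2f}x, i.e. about {math.log2(ratio):.2f} doublings")
```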
Article: "effectively calling an end to "Moore’s Law’, which successfully predicted the doubling of density in integrated circuits, and therefore speed"
Nothing wrong with the article.
4k isn't enough to entice those people to upgrade, I'm a gamer and I don't even care about it, I'd rather have 1080P at 4x the framerate
Intel has been very active in USB 3.1 and NVMe SSD, but that won't be enough for people to upgrade their PC.
But you have to understand that Intel already got those benefits: FinFET was introduced with Ivy Bridge and Haswell, and it took Intel from 32nm to 22nm.
Those benefits are already "used up" by Intel. Nanowire FET and TFET are more focused on energy efficiency, and nowhere near as promising as FinFET.
FinFET is not a new technology that just came around the corner, it has been around a long time; DARPA was talking about multigate transistors decades ago. You finally see it appearing in GPUs, but Intel has long enjoyed those benefits, so that increase in transistors is already "used up" by Intel.
Basically the GHz speed won't be going up...?
Sorry for the ridiculously stupid question >.<
(the internet of things is putting a chip in everything, so your coffee maker can talk to your PC)
One of the reasons is that it has become incredibly costly to keep making CPUs faster, and the market supporting PC CPUs has kinda been undermined by smartphones and mobile devices (PC sales have plummeted).
They see a future in making chips more power efficient, instead of more powerful.
(both have 1.4 billion transistors)
"One of the reasons is that it has become incredibly costly to keep making CPU faster, and the market to support PC CPU has kinda been undermined by smartphones and mobile devices (PC sales have plummeted).
Ah okay, and smartphones don't have ICs? :pleased:
"SPARC M7": 10 billion transistors btw.
http://siliconangle.com/blog/2015/10/26/oracle-debuts-first-systems-with-10-billion-transistor-sparc-m7-chip/
This all took place around the time WoW and Vanguard released, and since then my gaming laptops have been upgraded to take advantage of better graphics processing capabilities, not so much CPU processing.
On the business side, back in the 90s spreadsheets could take so long to recalculate that people put up dummy "please wait" screens to hide the fact they were playing games at work.
Now no matter how large my spreadsheets get, or how many I have open, recalculating takes only an instant, with the only real limitations or slowdowns being the time it takes to open them from a network drive.
No need to increase CPU speeds until a new product comes out that needs the extra power, so I can see why Intel is making this move.
"True friends stab you in the front." | Oscar Wilde
"I need to finish" - Christian Wolff: The Accountant
Just trying to live long enough to play a new, released MMORPG, playing New Worlds atm
Fools find no pleasure in understanding but delight in airing their own opinions. Pvbs 18:2, NIV
Don't just play games, inhabit virtual worlds™
"This is the most intelligent, well qualified and articulate response to a post I have ever seen on these forums. It's a shame most people here won't have the attention span to read past the second line." - Anon
So no real reason to try to get the most out of games on PC, porting is so much cheaper.
Those PC vs PS4 vs Xbox One comparison videos on Youtube...are a bit depressing to look at sometimes, often they look exactly the same on all 3 platforms (with some ridiculously minor differences). There's the FPS difference, but ok, graphically they look the same.
GHz just refers to how many cycles per second it's rated at, in hertz: MHz = millions of cycles per second, GHz = billions of cycles per second.
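In plain numbers (the 3.5 GHz figure is just an example, nothing Intel-specific):

```python
# Just the unit conversion: clock rate in hertz = cycles per second.
clock_ghz = 3.5                       # example clock speed, purely illustrative
cycles_per_second = clock_ghz * 1e9   # 1 GHz = 1,000,000,000 Hz
print(f"{clock_ghz} GHz = {cycles_per_second:,.0f} cycles per second")
```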
Ignore Kyoris's doom and gloom, however; it bears no relation to actual hardware sales, only to complete-unit sales, which are not the entirety of the PC market by a long shot. Intel's and AMD's CPU sales are not solely dependent on PC unit sales, and the same goes for Nvidia's and AMD's GPU sales.
not hard to increase transistor count if your die size is the size of a small pancake
http://www.pocket-lint.com/news/126289-intel-claims-that-by-2026-processors-will-have-as-many-transistors-as-there-are-neurons-in-a-brain
Btw, that SPARC M7 CPU reminds me of my DEC Alpha RISC CPU I have somewhere...
Found it; it even has a motherboard on it and it works too, even has PCI slots :pleased:
(and that's 1992: 1.7 million transistors)
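Funnily enough, that Alpha-to-SPARC jump lines up with the two-year doubling rule pretty well; a rough sketch using just the figures quoted in this thread (the 2015 date for the SPARC M7 comes from the article linked above):

```python
import math

# Rough check against the two-year doubling rule, using figures quoted here:
# DEC Alpha, 1992, ~1.7 million transistors vs. SPARC M7, 2015, ~10 billion.
t0, n0 = 1992, 1.7e6
t1, n1 = 2015, 10e9
doublings = math.log2(n1 / n0)
print(f"{doublings:.1f} doublings over {t1 - t0} years, "
      f"about {(t1 - t0) / doublings:.2f} years per doubling")
```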
Just put a card in an open slot and you have USB 3.1, no need to change your mobo.
(unless you filled your PCI Express slots with dual GPU cards, in which case you're out of luck lol)
But....I have never really felt I needed a faster SSD or USB slot. The only time I really transfer data over USB, like large data, is when I plug in an SD card reader to transfer pictures from my DSLR...and that's fast enough.
A back up through USB is something I only do like once a month, it's fast enough.
For general computing, it doesn't seem to be faster, it actually seems significantly slower.
It is only for very specific algorithms, on very specific problems, that you see quantum speedup.
IMEC made the first 5nm chips thanks to EUV: http://www.electronicsweekly.com/news/business/manufacturing/imec-and-cadence-tape-out-first-5nm-ic-2015-10/
But EUV got delayed, a lot, the machines are more expensive than anticipated, and not too many are buying them yet, which is why you saw ASML's stock tumble like crazy.
Also, uptime was low at first (uptime is how long a machine keeps running and putting out chips); it's better now, at 70%.
ASML EUV machine: (image)
We don't upgrade because 1) We don't really feel the need to (0 hype about VR or 4K gaming) and 2) The value of the US dollar is so damn high, the electronics prices have almost doubled in our country.
Can't wait for the next dollar crash. xD
Hopefully the upcoming AMD architecture will be able to compete with Intel so they lower their prices. $600 for a CPU is a bit over the top, given you can get a 980 Ti for that much money, or, you know, the latest iPhone with all of its camera and display gimmicks and processing and graphics power.
Increasing the resolution by 4x = about 1/3 of the FPS.
When we went from 800*600 to Full HD, we had the increases in hardware speed to support the move. And the increase in resolution was easily apparent. That's not the case with 4k.
From my "4k is pointless" thread, increasing resolution by 4 drops your framerate by about 70%:
http://www.pcper.com/reviews/Systems/Quad-Core-Gaming-Roundup-How-Much-CPU-Do-You-Really-Need
Just compare different CPU results from 1080 -> 1440, and those are already old and outdated games.
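Just to lay out the arithmetic behind that rule of thumb, as a sketch (the 90 FPS baseline is my own illustrative number; real scaling depends on the game and how GPU-bound it is):

```python
# Pixel-count sketch behind the "4x the resolution ~= 1/3 the FPS" rule of thumb.
res_1080p = 1920 * 1080
res_4k = 3840 * 2160
pixel_ratio = res_4k / res_1080p      # exactly 4x the pixels
fps_1080p = 90                        # illustrative baseline, my own assumption
fps_4k = fps_1080p / 3                # the ~1/3 rule of thumb from the post above
drop = 100 * (1 - fps_4k / fps_1080p)
print(f"{pixel_ratio:.0f}x the pixels: {fps_1080p} FPS -> {fps_4k:.0f} FPS (~{drop:.0f}% drop)")
```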
Also, Mantle results, as a preview of DX12 and Vulkan, make CPUs even less important. Where there was a CPU bottleneck, Mantle helped a lot, bringing a $65 CPU and a $400 CPU within 30% of each other at 1080p. At 4K there would be zero difference.
Unless you're doing some tri- or 4-way high-end GPU setup... no, you don't have to worry about the CPU at all.