What is it that is discovered every year that allows processors and GPUs to keep getting better? Why is improvement so steady?

moar transistors

Chineses.

wut?

finFET

>What is it that is discovered every year?
That if they don't change the socket again soon, their profits are going to drop.

>Why is improvement so steady?
If your competitor is 10 years behind, then you don't need to invest in revolutionary tech to maintain your edge. So you improve your process a little bit and that lets you squeeze a little more shit on there every year for that 5% gain.

The main limitation in the performance of CPUs is that signals can only travel a certain physical distance in a certain time, and you can never make them faster than that, so a smaller processor is better than a larger one because the distances are shorter. What's happening is that since the big bang everything is expanding, but at the same time the energy in the universe remains constant, meaning that the same energy is now covering more space in less time. Due to quantum physics and astrophysics, processors are getting more and more powerful as the universe expands.
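To be fair, the first sentence is the one real part: even light in a vacuum doesn't get far in a single clock cycle, and on-chip signals are slower than that. Quick back-of-the-envelope numbers in Python (clock speeds picked arbitrarily for illustration):

C = 299_792_458  # speed of light in vacuum, m/s
def distance_per_cycle_cm(clock_ghz):
    # upper bound on how far any signal can travel in one clock period
    return C / (clock_ghz * 1e9) * 100
for ghz in (1, 3, 5):
    print(f"{ghz} GHz: at most ~{distance_per_cycle_cm(ghz):.0f} cm per cycle")

At 5 GHz that's roughly 6 cm per cycle as an absolute upper bound, which is part of why chips stay small. The expanding-universe bit, not so much.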

OP here, that must be why you get smarter with age.

why? because people at university are doing their homework on how to defeat quantum tunneling

Nothing. They already have all they need. They're just drip feeding it to you to get the most money possible.

...

This.
I had a chat with my machine learning professor in uni, the one with industry ties, and he says he's 110% sure that nvidia is sitting on a huge pile of intellectual property, and that they'll only pull something out of it when they think AMD has made an improvement.

tech advancement is actually going to slow down severely in the next 5 years or so because we've hit the limit of usefulness in silicon and can no longer get any more "space" out of it. the next step is to find something that replaces silicon, but we have NO idea what that could be or if there even is one

>What is it that is discovered every year that allows processors and GPUs to keep getting better?

More money goyim.

quantum computing

It's definitely not what you think it is.

...

KEK

You're fucking joking. Do you know what the current record is for the largest computation ever done on a quantum computer? 3x5 = fucking 15, man. And that's on a processor created strictly with that one computation in mind... you expect that shit to just magically take over and surpass current standards in 5 years when it can't even outperform an abacus?
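For scale, the classical version of that record computation is a few lines of trial division (15 is used here just to mirror the claim above):

def smallest_factor(n):
    # naive trial division; instant for 15 on any CPU, which is the point
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime
n = 15
p = smallest_factor(n)
print(f"{n} = {p} x {n // p}")  # prints 15 = 3 x 5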

Development of hardware is extremely expensive. Incremental changes occur because nobody wants to waste a release cycle by fucking up an update. They incrementally experiment with slightly different techniques to increase yield and power.

Tech companies have a hard enough time migrating software from one form to another, hardware is much harder than that.

Not that guy, but it wouldn't surprise me if in the next two decades it becomes more approachable in the mainstream.

Just in the last five years, people went from saying it's all smoke and mirrors to playing skeptic when the first proof-of-concept quantum CPU came to fruition. Breakthroughs have been made in improving the accuracy of quantum processors, and memory to store qubit states exists now. There have even been some developments towards significantly reducing the cooling requirements for a quantum processor.

The Intel 4004 started out in basic calculators, and look at where R&D over several decades has brought it. Quantum computing is not going to take over in five years and it might not even end up in consumer hardware, but we've always been bad at predicting the future of technology, so given a development cycle like the one our current mainstream contenders use, I'm sure it'll either come into the same light or something developed off of it will instead.

...

The only actual right answer. Source: I have designed CPUs at IBM, Intel, and Sun.

>What is it that is discovered every year that allows processors and GPUs to keep getting better?

But they're hardly getting any better; it's like they have chips ready for the next 10 years but they're holding them back because marketing

The problem isn't the viability of quantum computing. The problem is that there is absolutely no evidence that it's actually any better than current computers at general-purpose tasks, and all evidence points to the fact that at BEST it matches them with specific optimizations.

Of course I could be terribly wrong, because similar stuff was said about your standard von Neumann computers, but it seems very unlikely that quantum computers will be of any use outside of a specific set of problems (but good god they will be amazing at those problems).

Sitting on a huge pile of IPs is one thing, designing useful functions around them in a high performance computing device is another entirely.

Both Nvidia and Intel made plenty of shitty choices in the past even when they had infinite money to throw at the design. See Geforce FX, Itanium, Netburst, any Intel GPU since the 90s till now, Fermi...

is this what you do, just go on the internet and tell lies?

I doubt it, or they'd just sell the dankest shit at an exorbitant price/perf, because people with e-peen syndrome will buy it.

Unless you mean more GameWorks closed-source, bribed-in stuff that makes games run worse with no real visual improvement (that couldn't be obtained otherwise).

>Geforce FX
That's a fairly popular series of GPUs with people building older systems for late 90s and early 00s games due to their features and compatibility. They have the features many late 90s games require, and the performance to make them run at higher resolutions and higher framerates.

>Fermi
Seen people here talk about those GPUs as being among nvidia's better ones for async compute. Don't know if there is any truth to that.

Even tho these chips might have sucked back then, they've become somewhat interesting in later years.

Everything has already been discovered, they slowly unlock it little by little so you have an incentive to throw money into jewtels pockets.

the only physical parameter is the transistor size, which has kept shrinking almost exponentially throughout the last 40 years. This will end due to physical limits in 5 or 6 years.

Since more and more transistors can be put in the same area, you can get more functionality at the same cost or less than a year or two ago (rough scaling math after this post).

Since you fuckers spend most of the time watching shitty youtube videos and playing games, the functionality has been shifting mainly towards serving graphics.

Other functionality has been added over the years: floating point units, multiple levels of caching, native encryption/decryption modules.

Maybe in the coming years networks-on-chip will be a key player in continuing to improve performance without relying on physical scaling, but since typical user applications on tablets and mobile phones don't need that much parallelism, computer tech will stagnate for the normal user when it comes to processor improvements within a couple of years.
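Rough idea of why the shrinking mentioned above matters so much, assuming ideal scaling (real process nodes don't scale this cleanly, and node names are partly marketing these days, so treat the numbers as illustrative):

def relative_density(feature_nm, reference_nm=28):
    # idealized: transistors per area scale as 1 / (feature size)^2
    return (reference_nm / feature_nm) ** 2
for node in (28, 14, 7):
    print(f"{node:>2} nm: ~{relative_density(node):.0f}x the density of the 28 nm reference")

Halving the feature size roughly quadruples how many transistors fit in the same area, which is where the "more functionality at the same cost" comes from.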

Not sure if trolling or you're actually this retarded. By your logic the same processor would automatically get faster as it gets older.

...

>be tech illiterate
>decide to check out Cred Forums
>see this thread and think "yeah sure, this looks vanilla enough for me"
>see this post
What the fuck is wrong with you

tl;dr
>More transistors = more power
>Smaller transistors = more power density
>Smaller transistors = more powerful chips

So the name of the game is to try to shrink transistor sizes. The "__ nm" on a chip refers to transistor size.

This dude named Moore observed that transistor density doubled every two years, and predicted this trend would continue. It turned into the basic roadmap for the industry. Companies shoot for following Moore's Law, as it's called (back-of-the-envelope doubling math after this post).

Thing is shrinking transistors is fucking hard. You need to be continually advancing in materials science and lithography technology.

Recently the industry has started to lag behind Moore's Law, as it has become extremely difficult and expensive to shrink transistor sizes. You start running into quantum problems and all sorts of weird shit. Some people predict transistor sizes will soon stop shrinking, as it will no longer be feasible to do so.

If we reach such a point, advances in chip speeds would need to come from increasingly complex 3D architecture on the chip.
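Back-of-the-envelope version of the doubling Moore's Law describes, just to show how it gets used as a roadmap (the starting density is a made-up placeholder, not real process data):

def projected_density(start, years, doubling_period=2):
    # Moore's-law-style projection: doubles every doubling_period years
    return start * 2 ** (years / doubling_period)
start = 100e6  # hypothetical transistors per mm^2 in year 0
for years in (2, 4, 10):
    print(f"after {years:>2} years: ~{projected_density(start, years):.3g} per mm^2")

That exponential is exactly what gets hard to keep up with once shrinking stalls.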

So in a couple of years I can just buy the best processor and be set for life?

>If we reach such a point, advances in chip speeds would need to come from increasingly complex 3D architecture on the chip.

this only works because we are stuck on silicon, and the theory is based on transistors, not overall computing

So therefore we must find something better than silicon.

software has been lagging behind hardware advances since the Core 2 Duo days
Only recently have six cores become usable in AAA+++ games

we have not moved beyond base clocks in the 3.0 GHz to ~5 GHz range, and it is not possible to be stable above 5.0 GHz on silicon at this moment.

the highest IPC Intel has reached on silicon was on their 4-core 6700K

the high core count parts are still unstable; see IPC and multicore scores vs older architectures

silicon transistors should be abandoned should the end of Moore's law come to pass,
which in fact has already happened for base clocks and IPC, but we still have energy consumption and core count left to improve (rough throughput math in the sketch below).

Better not only in hardware but in software as well.
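The throughput math referenced above, as a first-order sketch: performance is roughly cores x clock x IPC, so with clocks and IPC flat, cores are the main lever left. All numbers are made up for illustration; real chips don't scale linearly because of memory bottlenecks and imperfect multicore scaling.

def throughput(cores, clock_ghz, ipc):
    # crude first-order model: instructions retired per nanosecond
    return cores * clock_ghz * ipc
old = throughput(cores=4, clock_ghz=4.0, ipc=1.0)  # hypothetical older quad-core
new = throughput(cores=8, clock_ghz=4.0, ipc=1.1)  # same clock, more cores, small IPC bump
print(f"relative gain: {new / old:.2f}x")  # ~2.20x, none of it from clock speed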

OP here. Just tried reading this. quora.com/What-are-some-alternatives-to-silicon-for-making-transistors

This is pathetic; I don't know enough chemistry, quantum physics, or computer engineering to understand it.

Good thing I'm going back to school soon.

>quora.com/What-are-some-alternatives-to-silicon-for-making-transistors

it basically says this:
currently the only way to keep silicon competitive is to bond it with better materials to lower its resistive properties and make faster CPUs. he also goes on to mention that this is almost at its end and there are few ways left to bond silicon any better than before.

then some ramblings about alternatives not being as strong, durable, or cheap as silicon

In the end these scientists just don't want to move on from silicon because the alternatives are too hard to work with at the moment

he goes on to say that in 40 years they'll have to do something about silicon, but until then it will have to be "good enough"

I don't expect the industry to move away from silicon for a while. There's just too much knowledge built around it. There's so much we know about silicon that pertains to our ability to process it at the current scale without significant faults or errors. Going away from silicon would mean throwing all that out the window.

There's also the economies of scale built around silicon that makes it (relatively) cheap.

And we've tried other materials, they just didn't pan out. In the 80s everyone thought GaAs was the future. Direct band gap, greater electron mobility, all that. But it just hasn't become feasible for anything other than niche applications where cost doesn't matter.

the technology exists already though; that cpu they are going to be selling in 2030? they can already make it now and they do, just not for the civilian market.

the i7 Extreme is a top-end CPU... for the civilian market. but of course, intel can and does make CPUs that are X amounts more powerful, and they will sell them to you in about a decade.

It already is slowing down.
CPUs haven't been able to keep up with Moore's law for 2 years or so now.

That is because a phenomenon called quantum tunneling limits how much smaller transistors can get.
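The reason tunneling bites as things shrink is that leakage grows exponentially as the barrier gets thinner. A crude rectangular-barrier estimate (the 1 eV barrier height and the widths are assumptions for illustration, not real gate-oxide numbers):

import math
M_E = 9.109e-31   # electron mass, kg
HBAR = 1.055e-34  # reduced Planck constant, J*s
EV = 1.602e-19    # 1 eV in joules
def tunneling_probability(width_nm, barrier_ev=1.0):
    # T ~ exp(-2*k*d) with k = sqrt(2*m*dE)/hbar for a rectangular barrier
    k = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * k * width_nm * 1e-9)
for width in (3.0, 2.0, 1.0):
    print(f"{width} nm barrier: T ~ {tunneling_probability(width):.1e}")

Going from a 3 nm barrier to a 1 nm barrier raises the tunneling probability by roughly nine orders of magnitude in this toy model, which is why leakage blows up at small geometries.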

>current computing grinds to a halt
>companies trying to find a way to gain an edge
>start investing more and more in quantum
>eventually fully dedicated to investing in quantum
Yes
Unless something easier and more accessible shows up, that is

Because GPUs rely on parallelization while CPUs don't. CPUs need to perform complex tasks in a relatively sequential manner, which means they benefit from more complex instructions and higher clock speeds. Adding more cores only helps so much. GPUs can just add more cores (stream processors), which makes them faster as long as the computations they do are highly parallelizable.
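A toy contrast of the two workload shapes described above (illustrative only; a real GPU runs the parallel case across thousands of stream processors, not a Python process pool):

from multiprocessing import Pool
def serial_chain(values):
    # CPU-style: each step depends on the previous result, so it can't be split up
    acc = 0
    for v in values:
        acc = (acc * 31 + v) % 1_000_003
    return acc
def scale(v):
    # GPU-style: every element is independent, so the work fans out trivially
    return v * 2
if __name__ == "__main__":
    data = list(range(1_000))
    print(serial_chain(data))
    with Pool(4) as pool:  # stand-in for "lots of stream processors"
        print(sum(pool.map(scale, data)))

The first function only goes as fast as one core's clock and IPC allow; the second gets faster roughly in proportion to how many cores you throw at it.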