What's stopping us?

prabha_friend

Once, components were very large and heavy, like hard disks and other parts. Now so many of them are put inside a tiny block. But why can only a few processors be put inside a block now? Why not thousands of them? What's stopping us?
 

The_Doc_Man

We are approaching a limit with traditional CPU chips.
(A) the tools we use to burn paths on silicon are limited by the wavelength of the light involved, which is now down at the nanometer scale (1 nanometer = 10 angstroms = one billionth of a meter)
(B) approaching current-flow levels where we are stretching the "law of large numbers" (a statistical thing)
(C) heat dissipation because micro-chips are not superconductors.
(D) Path lengths for data flow

(A) Silicon chips are laid out using a light-sensitive coating so that we can do aluminum deposition along a pre-burned chip path. The light we use to photo-etch lines on a silicon wafer can only be focused to a certain thickness, even using lasers and really precise micro-lenses. That is, there are limits to how narrowly a beam of light can be focused - and we are reaching that point. We are already looking at chips stacked one on top of the other to jam things closer together (but see (D) below). The circuits laid down by photo-etching can only get down to a certain size before things start to blur or, worse, the etched lines get too small to carry a current.

The lines we can burn aren't everything, though. Those lines have to form a transistor, and the limit for the moment seems to be about 7 nanometers for a single transistor. Below that size, see (B) below. Another part of the problem is that the electric fields involved in the circuits can "bleed": the electric field of component 1 can affect the field of component 2 if they are too close together. Some authorities suggest that if two microchip transistors are closer together than 45 nanometers, they can influence each other as an extraneous field rather than as a desired effect.
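To put rough numbers on that focusing limit, here is a quick Python sketch using the standard lithography resolution relation (the Rayleigh criterion, CD = k1 x wavelength / NA). The wavelength and numerical-aperture values below are illustrative assumptions, not any particular fab's recipe:

Code:
# Rayleigh criterion for photolithography: the smallest printable
# feature (critical dimension) scales with the light's wavelength.
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    # k1 ~ 0.25 is the usual practical lower bound for the process factor
    return k1 * wavelength_nm / numerical_aperture

# Illustrative comparison: 193 nm deep-UV immersion vs. 13.5 nm EUV
print(min_feature_nm(193.0, 1.35))   # ~36 nm
print(min_feature_nm(13.5, 0.33))    # ~10 nm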

Just for comparison, an atom of silicon is a bit over 2 angstroms across, so that 7 nm transistor is 70 angstroms across - only about 30 to 35 silicon atoms wide.
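The same comparison worked out in a few lines of Python (the atom size used is the approximate covalent diameter of silicon):

Code:
# How many silicon atoms fit across a 7 nm transistor
transistor_nm = 7.0
si_atom_angstrom = 2.2                    # approximate covalent diameter of Si
width_angstrom = transistor_nm * 10.0     # 1 nm = 10 angstroms -> 70
print(width_angstrom / si_atom_angstrom)  # ~32 atoms wide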

(B) Our circuits are working at nanoampere levels. One ampere is about 6.24 x 10^18 electrons per second, so a nanoamp is about 6.24 x 10^9 - roughly 6 billion electrons per second. A lot of the laws of electric flow are based on statistical mechanics, which means they in turn depend on the laws of statistics. The law of large numbers usually talks about the average of individual events; in the case of electric flow, the individual events are the behaviors of the individual electrons flowing through ANY circuit in a statistically similar way. When you reduce the current flowing through a circuit (in order to reduce the heat generated by that circuit's resistance), you start to gamble with circuit behavior and reliability. See, as a separate reading topic, electric circuit settling time - which among other things relates to "ringing" in a circuit, when its voltage profile oscillates briefly before settling into a steady state. The smaller the circuit, the more susceptible it is to ringing.
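A quick sketch of why those small numbers matter: at nanoamp currents and gigahertz clocks, only a handful of electrons move per clock tick - far too few for statistical averages to stay smooth. The current and clock values here are just illustrative assumptions:

Code:
# Electrons per clock cycle at nanoamp current levels
E_CHARGE = 1.602e-19              # charge of one electron, in coulombs
current_amps = 1e-9               # 1 nanoampere (illustrative)
clock_hz = 3e9                    # 3 GHz clock (illustrative)

electrons_per_second = current_amps / E_CHARGE   # ~6.24e9
electrons_per_cycle = electrons_per_second / clock_hz
print(electrons_per_cycle)        # ~2 electrons per cycle!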

(C) Remember that we are pumping a lot of current through the computer chip as a whole, as shown by how hot chips can get - the old P = (I^2)(R) rule for power dissipation. Shrinking a trace also shrinks its cross-section, which raises its resistance, so the same current produces more heat in less area. If we make the traces TOO much smaller, they won't dissipate heat fast enough and will melt.
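For the curious, the same P = (I^2)(R) arithmetic in Python. The current and resistance values are made up for illustration, but they show how doubling a trace's resistance doubles the heat from the same current:

Code:
# Joule heating: power dissipated as heat is P = I^2 * R
def heat_watts(current_amps, resistance_ohms):
    return current_amps ** 2 * resistance_ohms

# A thinner trace has a smaller cross-section and thus higher resistance
print(heat_watts(1e-3, 100.0))    # 1 mA through 100 ohms -> 1e-4 W
print(heat_watts(1e-3, 200.0))    # same current, thinner trace -> 2e-4 W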

(D) Path length becomes an issue because, essentially, a modern computer chip is like a maze, and current has to flow through its wires. (OK, for the purists, there are issues with the term "current flow" - but it is a common way to discuss what actually happens, so let's not divert.) If we added more CPU area to the chip, we would still need to get the electrical signals out of there in a timely manner, and current doesn't flow through those chips as the crow flies. It travels along specific circuit paths.

It is estimated that one square millimeter of a microchip represents about 1.75 meters of circuitry (i.e., if you grabbed one end of a circuit on the chip and unraveled it as if it were a continuous piece of wire). The FASTEST a signal could EVER go is the speed of light (though a real circuit can't reach that, because resistance and other factors intervene). It takes light about 5.8 nanoseconds to travel 1.75 meters; everything else will be SLOWER than that. If you make the maze on the chip smaller, the unraveled circuit won't get that much shorter, so the signal can't travel that much faster. And adding more circuitry only makes things slower, not faster, because more circuitry means more things to connect - which means the path from point 1 to point 2 gets LONGER. So there is a point of diminishing returns here.
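Here is that light-speed floor worked out in Python. The 1.75 m figure is the estimate from above; the 4 GHz clock is an illustrative assumption for comparison:

Code:
# Absolute lower bound on signal latency through 1.75 m of unraveled wiring
SPEED_OF_LIGHT = 299_792_458.0    # meters per second, in vacuum
path_meters = 1.75
delay_seconds = path_meters / SPEED_OF_LIGHT     # ~5.8e-9 s

clock_hz = 4e9                    # a 4 GHz clock, for comparison
print(delay_seconds * 1e9)        # ~5.8 ns
print(delay_seconds * clock_hz)   # ~23 clock cycles lost in transit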

Now... the short answers:
(A) Limits on the manufacturing process, specifically the precision with which photo-etched lines can be laid down.
(B) A statistically low number of electrons in some circuits.
(C) Issues with heat dissipation.
(D) Speed limits on signal propagation through the circuits.

Some promising techniques are cropping up using light-wave (photonic) circuits, which would be faster and at the same time cooler than traditional chips. We also cannot forget quantum computers, which turn the traditional computing paradigm on its head. But we may have reached a limit on traditional silicon chips.
 
