Moore’s Law is an observation about the growth in the number of transistors on integrated circuits. In 1965, Gordon Moore, then at Fairchild Semiconductor and later a co-founder of Intel, observed that the number of transistors that could fit on an integrated circuit had roughly doubled every year since the technology’s invention, and he predicted that this trend would continue for at least another decade. Adding transistors involves shrinking each transistor so that more fit in the same area, rather than increasing the size of the circuit itself. This is why computer engineers have been able to create devices over time that are simultaneously smaller and more powerful than their predecessors.
Fellow leaders in the computer science community began to refer to Moore’s observation as a “law,” even though it is an empirical trend rather than a law of nature. In 1975, Moore revised his projection, noting that the pace of doubling had slowed: transistor counts would now double roughly every two years instead of every year. Moore’s insight has held true over many decades and has enabled nearly every subsequent innovation in the computer industry.
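The revised two-year doubling can be sketched as simple compound growth. The snippet below is illustrative only: it assumes a clean doubling exactly every two years, starting from the 2,300 transistors of the 1971 Intel 4004 as a convenient baseline.

```python
def transistors(year, base_year=1971, base_count=2300, period=2):
    """Project a transistor count under a strict doubling every `period` years.

    Illustrative sketch of Moore's 1975 formulation; the baseline of
    2,300 transistors is the Intel 4004, used here only as an example.
    """
    doublings = (year - base_year) // period
    return base_count * 2 ** doublings

# Twenty years contain ten doublings, multiplying the count by 2**10 = 1024.
print(transistors(1991))  # 2300 * 1024 = 2,355,200
```

Real chips never tracked the curve this exactly, but the exponent is the point: a modest-sounding doubling period compounds into a thousandfold increase within two decades.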
Why is Moore’s Law important?
Moore’s Law has allowed the computer industry, and semiconductor manufacturers in particular, to forecast advances in circuit engineering and the relative profitability of each advance. It created a roadmap for the spread of computer technology into consumer electronics, artificial intelligence, and supercomputers. In effect, Moore’s Law also predicted the increasing affordability and accessibility of computing, since the cost per transistor falls as more transistors fit on a single chip.
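The affordability argument is simple arithmetic: if the cost of fabricating a chip stays roughly constant while the transistor count doubles, the cost per transistor halves. The figures below are hypothetical, chosen only to make the ratio visible.

```python
CHIP_COST = 100.0  # hypothetical fixed fabrication cost per chip, in dollars

def cost_per_transistor(transistor_count, chip_cost=CHIP_COST):
    """Spread a fixed chip cost across its transistors (illustrative only)."""
    return chip_cost / transistor_count

# Each doubling of density halves the per-transistor cost.
for count in (1_000, 2_000, 4_000):
    print(f"{count:>5} transistors -> ${cost_per_transistor(count):.4f} each")
```

Real fabrication costs are not fixed per chip (newer process nodes are more expensive to build), but over the long run the density gains have dominated, which is what made each generation of computing cheaper per unit of capability.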
The end of Moore’s Law
In recent years, Moore’s Law has slowly fallen out of relevance. Moore’s prediction concerned the pace of innovation, and that pace has slowed, just as Moore expected. Most recently, semiconductor foundry TSMC announced that it plans to put a 3nm (nanometer) process into production sometime in 2022. By comparison, the diameter of a single atom measures somewhere between 0.1 and 0.5 nanometers, so there is a hard physical limit to how small a single transistor can become. Some industry experts have theorized that this trend will create a shift in the way chips are used: rather than a one-size-fits-all approach, chips will be built for highly specialized purposes so that their computing power can be applied more efficiently.