Moore’s Law Definition & Meaning

Moore’s Law is an observation about the growth in the number of transistors on integrated circuits. In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on an integrated circuit had doubled every year since the integrated circuit was invented, and he predicted that this trend would continue for at least another decade; decades later, he and others suggested it might hold only until around the mid-2020s. Adding transistors does not require a larger circuit; instead, the area each transistor occupies shrinks by roughly half with each generation, making room for more. This is why computer engineers have been able to create devices over time that are simultaneously smaller and more powerful than their predecessors.

Other leaders in the computing industry began to refer to Moore’s observation as a “law,” even though it describes an empirical trend rather than a law of nature. In 1975, Moore revised his prediction, stating that the transistor count would double every two years rather than every year, to reflect a slight slowdown in the pace of doubling. Moore’s insight has held true over many decades and has enabled nearly every subsequent innovation in the computer industry.
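The revised rule can be expressed as simple compound doubling. The short Python sketch below is illustrative only: the starting point (the Intel 4004 from 1971, with roughly 2,300 transistors) is an assumed baseline that does not come from this article, and the two-year doubling period is the revised figure described above.

```python
# Illustrative sketch of Moore's Law as compound doubling.
# Assumption: start from the Intel 4004 (1971, ~2,300 transistors)
# and apply the revised rule of doubling every two years.

def projected_transistors(year, base_year=1971, base_count=2_300,
                          doubling_period_years=2):
    """Project the transistor count per chip under Moore's Law."""
    doublings = (year - base_year) / doubling_period_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Running the sketch projects counts in the tens of billions by 2021, which is roughly the order of magnitude of today’s largest chips, showing how powerful a simple doubling rule is as a forecasting tool.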

Why is Moore’s Law important?

Moore’s Law has allowed the computer industry, and semiconductor manufacturers in particular, to forecast advances in circuit engineering and the relative profitability of each advancement. It created a roadmap for the spread of computing technology into consumer electronics, artificial intelligence, and supercomputing. In effect, Moore’s Law also predicted the increasing affordability and accessibility of computer technology, since the cost per transistor falls as more transistors fit on a single chip.
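The cost argument can be illustrated with hypothetical numbers: if the cost of manufacturing a chip stays roughly constant while the transistor count doubles each generation, the cost per transistor halves each generation. The figures in the sketch below ($50 per chip, one million transistors to start) are assumptions for illustration, not industry data.

```python
# Hypothetical illustration: with a roughly constant per-chip cost and a
# transistor count that doubles each generation, the cost per transistor
# is cut in half every generation.

chip_cost_usd = 50.0        # assumed fixed manufacturing cost per chip
transistors = 1_000_000     # assumed starting transistor count

for generation in range(5):
    cost_per_transistor = chip_cost_usd / transistors
    print(f"Gen {generation}: {transistors:>10,} transistors, "
          f"${cost_per_transistor:.8f} per transistor")
    transistors *= 2        # Moore's Law doubling per generation
```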

The end of Moore’s Law

In recent years, Moore’s Law has gradually lost relevance. Moore’s prediction concerned the pace of innovation, and that pace has slowed, just as he expected. Most recently, semiconductor foundry TSMC announced that it plans to begin producing chips on a 3nm (nanometer) process sometime in 2022. By comparison, the diameter of a single atom measures somewhere between 0.1 and 0.5 nanometers, so there is a hard physical limit to how small a single transistor can become. Some industry experts have theorized that this trend will shift how chips are used: rather than a one-size-fits-all approach, chips will be designed for highly specialized purposes so that computing power can be applied more efficiently.
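A back-of-the-envelope calculation suggests how little headroom remains. Treating the marketed “3nm” node name as if it were a literal feature size (a simplification) and taking roughly 0.2 nm as a representative atomic diameter within the 0.1 to 0.5 nanometer range cited above, only a handful of halvings are possible before features reach atomic scale.

```python
# Back-of-the-envelope check: how many halvings separate a 3 nm feature
# size from atomic dimensions (~0.2 nm, within the 0.1-0.5 nm range above)?
# The node name is treated as a literal feature size purely for illustration.

feature_nm = 3.0
atom_diameter_nm = 0.2   # assumed representative atomic diameter
halvings = 0

while feature_nm / 2 >= atom_diameter_nm:
    feature_nm /= 2
    halvings += 1

print(f"Only about {halvings} more halvings before features approach "
      f"atomic scale ({feature_nm:.2f} nm).")
```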
