The binary system in computing uses the base 2 number system to power the majority of computational processes. Unlike the base 10 system widely used in everyday mathematics, base 2 uses only two digits: 0 and 1. In a computer system, 0 and 1 represent the two states of an electrical signal: off and on. At their most fundamental level, then, all computing processes and data are made of combinations of 0s and 1s. The first electronic binary computer was developed beginning in 1939 by John Atanasoff, an American professor at Iowa State College, with his graduate student Clifford Berry.
Bits, bytes, and sequences of bits
In computing, the digits 0 and 1 are known as bits (short for binary digits). A bit is the smallest possible unit of data in a computer system.
A sequence of bits is called a string or array. Typically, eight bits make up a byte, another unit of data storage. However, depending on the system or hardware, the number of bits in a byte can vary. An octet is a byte that contains exactly eight bits.
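As a small sketch of the idea, eight bits read as a base 2 number yield one byte value (the character `'A'` here is just an illustrative choice):

```python
# Eight bits, interpreted as a base 2 number, form one byte value.
bits = "01000001"      # a string of 8 binary digits
value = int(bits, 2)   # parse the string in base 2
print(value)           # 65
print(chr(value))      # 'A' -- the ASCII character this byte encodes
```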
A byte is the smallest unit of data that can be addressed in computer memory. Bytes are therefore known as “addressable units,” and computer memory is called “byte-addressable” when the computer’s central processing unit can address a single byte for processing.
To make discussing the number of bits easier, prefixes for multiples of a thousand are used. These include:
- 1 kilobit (approximately 1,000 bits)
- 1 megabit (approximately 1,000 kilobits)
- 1 gigabit (approximately 1,000 megabits)
- 1 terabit (approximately 1,000 gigabits)
Now why did we say “approximately” 1,000? Most numbering conventions use base 10, in which each prefix multiplies by a power of 10. But the binary system uses base 2, so its natural multiple is 2^10. Strictly speaking, under base 2 numbering, one kilobit equals 1,024 bits, not 1,000.
Some technology experts hold these conventions lightly, and some do not. It’s common to see bits or bytes expressed as either multiples of 1,000 or of 1,024, and each convention is in wide use. (The IEC prefixes kibi-, mebi-, gibi-, and tebi- were introduced to refer unambiguously to the 1,024-based multiples.)
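A quick sketch of how far apart the two conventions drift as the prefixes grow:

```python
# Decimal (base 10) vs. binary (base 2) readings of the "kilo" prefix.
KILOBIT_DECIMAL = 1_000   # SI convention
KILOBIT_BINARY = 1_024    # base 2 convention (2 ** 10)

megabit_decimal = KILOBIT_DECIMAL ** 2   # 1,000,000 bits
megabit_binary = KILOBIT_BINARY ** 2     # 1,048,576 bits

# The gap widens at every step up the prefix ladder.
print(megabit_binary - megabit_decimal)  # 48576
```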
The binary system and data transmission
Bits are a measurement of digital data, and they are also used to quantify how much data is being transmitted or streamed over a unit of time (typically one second). This measurement is called bitrate. In media streaming, bitrate is used as a measure of sound or image quality. Higher bitrates indicate higher-definition audio or video quality.
Bitrate—for example, megabits per second—is abbreviated Mb/s or, more commonly, Mbps. Megabytes per second, in contrast, is abbreviated MB/s. Bitrate measures the amount of data being transmitted over time, while bytes measure amounts of data being stored.
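The arithmetic behind the two abbreviations can be sketched briefly: because a byte is eight bits, a link rated in megabits per second moves one-eighth that many megabytes per second (the 100 Mbps link and 500 MB file below are illustrative values):

```python
# Convert a link speed in megabits per second (Mbps) to
# megabytes per second (MB/s): one byte is eight bits.
def mbps_to_mb_per_sec(mbps: float) -> float:
    return mbps / 8

print(mbps_to_mb_per_sec(100))   # 12.5 MB/s on a 100 Mbps link

# Time to transfer a 500 MB file over that link:
seconds = 500 / mbps_to_mb_per_sec(100)
print(seconds)                   # 40.0
```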
Problems with the binary system
Binary code is difficult and time-consuming for developers and programmers to read. As a result, numbering systems like hexadecimal represent binary values in a more compact, readable form: each group of four bits corresponds to a single hexadecimal character (0–9 or A–F).
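The four-bits-per-character mapping can be sketched like this (the 16-bit value is an arbitrary example):

```python
# Each group of four bits maps to one hexadecimal character,
# so hex is a compact, human-readable view of binary data.
bits = "1101111010101101"     # 16 bits: 1101 1110 1010 1101
value = int(bits, 2)          # parse in base 2
print(hex(value))             # 0xdead -- four hex characters
print(format(value, "016b"))  # back to the original bit string
```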
One of the most common issues caused by the binary system is a data storage problem colloquially known as bit flipping. Bit flipping occurs when a 0 becomes a 1 or vice versa due to processing errors, power surges, or deliberate action. Bit flipping, as a manipulation performed intentionally by computer experts, can be useful. But accidental bit flipping, known as a soft error, can corrupt part or all of a file, rendering it useless. In other words, bit flipping can lead to data loss.
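A minimal sketch of why a single flipped bit matters: toggling one bit turns one byte into a different byte entirely (the byte chosen here is just the ASCII letter 'A'):

```python
# A "bit flip": one bit changing state turns one byte into another,
# which can silently corrupt stored data.
original = 0b01000001           # 65, the ASCII byte for 'A'
flipped = original ^ (1 << 1)   # XOR toggles bit 1: 0 becomes 1
print(chr(original))            # A
print(chr(flipped))             # C -- a different character entirely
```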
Soft errors can be caused by radiation striking a memory chip or by worn-out memory cells. Because bits are stored as electrical charge, if the charge in a memory cell degrades enough, a bit written as a 1 may eventually read back as a 0.
Bit flipping can also be used as a method of cyberattack, particularly against encrypted data. If an attacker flips bits in an encrypted message, the decrypted result changes in ways the recipient may not detect without an integrity check, and in some cipher modes the attacker can even alter the plaintext predictably without knowing the key.
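A toy sketch (not a real cipher; the key and message are invented for illustration) of why this matters: with a simple XOR keystream, flipping one ciphertext bit flips the same plaintext bit after decryption, so an attacker can change a message without knowing the key.

```python
# Toy XOR "stream cipher" -- for illustration only, NOT secure.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

key = bytes([0x5A] * 16)           # hypothetical fixed keystream
plaintext = b"PAY $100 TO BOB."
ciphertext = xor_bytes(plaintext, key)

# Attacker flips one bit of the ciphertext in transit (no key needed).
tampered = bytearray(ciphertext)
tampered[5] ^= 0b00000001          # toggle the low bit of byte 5

# The recipient decrypts and gets a silently altered message.
print(xor_bytes(bytes(tampered), key))  # b'PAY $000 TO BOB.'
```

This is why real systems pair encryption with integrity checks such as message authentication codes, which make such tampering detectable.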
The term has other meanings outside computing: in astronomy, a binary system is two stars that orbit each other under their mutual gravitational pull.