# How bits, bytes, ones, and zeros help a computer think

These computing units add up to make complex operations.

Computers today are capable of marvels and complex calculations. But if you break one of these problem-solving engines down to its essentials, at its heart you’ll find the most basic unit of memory: the bit. A bit is a tiny binary switch, and these switches underlie many of the fundamental operations computers perform. It is the smallest unit of memory, and it exists in exactly two states: on and off, otherwise known as one and zero. Bits can also represent information and values like true (one) and false (zero), and they are often called the language of machines.

Arranging these bits into clever and intricate matrices on semiconductor chips allows computer scientists to perform a wide variety of tasks, like encoding information and retrieving data from memory. As computer scientists stack more and more of these switches onto a processing unit, the switches can become unwieldy to manage, which is why bits are commonly organized into sets of eight, known as bytes.

## Bits vs. bytes

The number of states you can represent with bits grows exponentially. So if you have eight bits, or a byte, you can represent 256 states or values. Counting with bits works a little like counting on an abacus, but the column values are powers of two (128, 64, 32, 16, 8, 4, 2, 1). So while zero and one in the decimal number system correspond to zero and one in the binary number system, two in decimal is 10 in binary, three in decimal is 11 in binary, and four in decimal is 100 in binary. The biggest number you can make with a byte is 255, which in binary is 11111111, because it’s 128+64+32+16+8+4+2+1.
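You can check this decimal-to-binary correspondence yourself with a few lines of Python:

```python
# Each bit position in a byte is worth a power of two, just like
# abacus columns: 128, 64, 32, 16, 8, 4, 2, 1.
for decimal in [0, 1, 2, 3, 4, 255]:
    # "08b" formats the number as binary, padded to eight bits
    print(decimal, "->", format(decimal, "08b"))

# 255 is every switch turned on: 128+64+32+16+8+4+2+1
assert sum(2**position for position in range(8)) == 255
```

Running this prints `2 -> 00000010`, `4 -> 00000100`, and `255 -> 11111111`, matching the counting scheme described above.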

You can also represent more complex information with bytes than you can with bits. While bits can only be one or zero, bytes can store data such as characters, symbols, and large numbers.

Bytes are also commonly the smallest unit of information that can be “addressed.” That means that bytes can literally have addresses of sorts that tell the computer which cross wires (or cross streets, if you want to imagine a chip as a tiny city) to retrieve the stored value from. All programs come with pre-made commands, or operation codes, that correlate addresses with values, and values with variables. Different types of written codes can correlate the 256 states in a byte to items like letters. For example, the ASCII code for computer text (which assigns numeric values to letters, punctuation marks, and other characters) says that if you have a byte that looks like 01000100, or the decimal number 68, that corresponds to an uppercase “D.” By ordering bytes in interesting combinations, you can also use codes to make colors.
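The ASCII example above is easy to verify in Python, which exposes these byte-to-character mappings directly:

```python
# The byte 01000100 is decimal 68, which ASCII maps to uppercase "D".
byte = 0b01000100      # binary literal for the eight-bit pattern
print(byte)            # 68
print(chr(byte))       # D

# Going the other way: look up the numeric code for a character,
# then show it as an eight-bit pattern.
print(ord("D"))                  # 68
print(format(ord("D"), "08b"))   # 01000100
```

The same idea scales up: text encodings, color values, and file formats are all conventions for interpreting particular byte patterns.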

Bytes as a unit let you gauge how much memory different types of information take up. For example, if you were to type a note with 1,000 individual letters, that would take up 1,000 bytes of memory. Larger amounts are measured with units like kilobytes, megabytes, gigabytes, and terabytes, but here’s where it gets more complicated: because the industry historically counted in powers of two, a kilobyte is not always 1,000 bytes (as the metric prefix would have you assume).

In fact, a kilobyte is traditionally 2^10, or 1,024 bytes. The same goes for the other units of memory: the decimal prefixes are rough approximations of binary quantities. A gigabyte is slightly more than a billion bytes (2^30 is 1,073,741,824), and a terabyte is roughly a trillion bytes (2^40). Special binary prefixes, like kibi, mebi, and gibi, were later introduced to account for the difference, although many computer scientists still prefer to stick with the old naming system.

## Internet speed is measured in bits

Although data volume is measured in bytes (the largest hard drive in the world has around 100 terabytes of storage), data speeds, like those offered by internet companies telling you how fast certain services are, tend to be measured in bits. That’s because the internet shuttles data one single bit at a time.
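Because speeds are quoted in bits but files are sized in bytes, converting between the two takes a factor of eight. Here is a rough sketch, assuming a hypothetical 100-megabit-per-second connection and a decimal one-gigabyte file:

```python
# How long does a 1 GB file take on a 100 Mbps connection?
# (Illustrative numbers only; real transfers have protocol overhead.)
speed_bits_per_second = 100 * 10**6   # 100 megabits per second
file_size_bytes = 10**9               # 1 gigabyte, decimal
file_size_bits = file_size_bytes * 8  # 8 bits per byte

seconds = file_size_bits / speed_bits_per_second
print(seconds)   # 80.0
```

So a connection advertised as “100 megabits per second” moves only about 12.5 megabytes per second, and the download takes 80 seconds, not 10.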

Think of it like a stream of ones and zeros. For example, the bytes making up an email are chopped into their constituent bits on one end, transmitted (sometimes arriving out of order along the way), and reassembled on the other end.