
BINARY

There are only two binary values a transistor can represent: true and false.

We can use true and false to represent the values 1 and 0, respectively. 

In order to represent larger numbers, we just need to add more digits.

Adding digits works the same way as with decimal numbers.

With decimal numbers, there are only 10 possible values a digit can be: 0 through 9. To represent larger numbers, we just add more digits to the front.

For example, let's look at 263. It has 2 100's, 6 10's, and 3 1's.

Each column has a different multiplier. Here, the multipliers are 100, 10, and 1.

Each multiplier is 10 times larger than the one to the right.

This is because each column has 10 possible digits to work with, 0 through 9, after which you have to carry one to the next column.

Because of this, decimal notation is also called base-ten notation.

Binary works the same way. It's just base-two notation. There are only 2 possible digits in binary: 1 and 0. This means that each multiplier has to be 2 times larger than the one to the right. Instead of 100's, 10's and 1's, we now have 4's, 2's and 1's.

For example, let's look at the binary number 101. It has 1 four, 0 twos and 1 one. By adding them together, we have the number 5 in base ten.
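To make the place-value idea concrete, here is a small Python sketch (my own illustration, not part of the original explanation) that expands a string of digits by its column multipliers; the helper name place_value_sum exists only for this example.

# Expand a string of digits in a given base by its column multipliers.
def place_value_sum(digits, base):
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += int(digit) * base ** position   # 1's column, then base, base^2, ...
    return total

print(place_value_sum("263", 10))  # 2*100 + 6*10 + 3*1 = 263
print(place_value_sum("101", 2))   # 1*4 + 0*2 + 1*1 = 5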

In order to represent larger numbers, binary needs many more digits.

For example, let's use the binary number 10110111.

We can convert it to base-ten in the same way. We have 1 x 128, 0 x 64, 1 x 32, 1 x 16, 0 x 8, 1 x 4, 1 x 2, and 1 x 1. This adds up to 183.
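As a quick check, here is a Python sketch (an illustration of mine, not something the page relies on) that lists every multiplier for 10110111 and also uses Python's built-in base-2 parser:

# Convert 10110111 to base ten, column by column.
bits = "10110111"
multipliers = [128, 64, 32, 16, 8, 4, 2, 1]
value = sum(int(b) * m for b, m in zip(bits, multipliers))
print(value)               # 183
print(int("10110111", 2))  # 183, using Python's built-in base-2 conversion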

Math with binary numbers is also similar to base-ten.

Let's add the binary numbers 10110111 and 00010011.

  10110111
+ 00010011

Just like with base-ten, we start with the ones column. 1 + 1 = 2. However, the symbol for 2 in binary is 10. We put 0 as the sum and carry the 1 to the next column.

        1
  10110111
+ 00010011
         0

In the next column, 1 + 1, plus the 1 carried from the previous column, equals 3, or 11 in binary. We put 1 as the sum and carry 1 to the next column.

       11
  10110111
+ 00010011
        10

We repeat this process, just like with base-ten. We end up with 11001010, which is the number 202 in base-ten.

   11 111
  10110111
+ 00010011
  11001010
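As a sanity check on the worked example above, here is a small Python sketch (mine, not part of the original page) that adds two binary strings column by column, carrying exactly as described; the function name add_binary is just for illustration.

# Add two equal-length binary strings column by column, carrying as needed.
def add_binary(a, b):
    result = []
    carry = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # start at the ones column
        column = int(bit_a) + int(bit_b) + carry
        result.append(str(column % 2))   # the digit written in this column
        carry = column // 2              # the 1 carried to the next column, if any
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("10110111", "00010011"))  # 11001010
print(int("11001010", 2))                  # 202 in base-ten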

Each binary digit, 1 or 0, is called a bit. We were adding 8-bit numbers. With 8 bits, there are 256 possible values ranging from 0 to 255.

8 bits is such a common size in computing, it has its own unit: a byte. A byte is 8 bits.
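A quick way to see where 256 comes from (a sketch of mine, not from the page): each extra bit doubles the number of possible values.

# Each additional bit doubles the number of possible values.
for bits in [1, 2, 4, 8]:
    print(bits, "bits ->", 2 ** bits, "possible values")
# 8 bits (one byte) gives 2**8 = 256 values, ranging from 0 to 255.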

You may have heard of kilobytes, megabytes, gigabytes, and terabytes. 

These units have 2 different definitions, a base-ten definition and a binary definition.

According to the official base-ten definitions, 1 kilobyte is 1000 (10³) bytes, 1 megabyte is 10⁶ bytes, 1 gigabyte is 10⁹ bytes, and 1 terabyte is 10¹² bytes.

According to the binary definitions, 1 kilobyte is 1024 (2¹⁰) bytes, 1 megabyte is 2²⁰ bytes, 1 gigabyte is 2³⁰ bytes, and 1 terabyte is 2⁴⁰ bytes.

The official units that are supposed to be used for the binary definitions are the kibibyte, mebibyte, gibibyte, and tebibyte, respectively.
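The two families of definitions are easy to compare side by side; the short Python sketch below is my own illustration, not something defined by the page.

# Base-ten (SI) definitions versus binary definitions (with their official names).
base_ten = {"kilobyte": 10**3, "megabyte": 10**6, "gigabyte": 10**9, "terabyte": 10**12}
binary   = {"kibibyte": 2**10, "mebibyte": 2**20, "gibibyte": 2**30, "tebibyte": 2**40}

for (ten_name, ten_bytes), (two_name, two_bytes) in zip(base_ten.items(), binary.items()):
    print(f"{ten_name}: {ten_bytes:,} bytes    {two_name}: {two_bytes:,} bytes")
# kilobyte: 1,000 bytes    kibibyte: 1,024 bytes
# ... and so on, up to terabyte versus tebibyte.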

However, the base-ten units are commonly associated with the binary definitions. In fact, even Microsoft Windows uses the binary definitions for kilobyte, megabyte, gigabyte and terabyte.

This discrepancy in how kilobyte, megabyte, gigabyte, and terabyte are used is a source of confusion that has yet to be resolved.

Most modern computers are 32-bit or 64-bit computers: they operate on data in chunks of 32 or 64 bits. With 32 bits, there are almost 4.3 billion possible values. This wide range is one reason modern graphics can be so high quality, with millions to billions of possible colors.
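For a sense of scale, here is a small sketch (my own; the page doesn't spell out how colors are stored) showing how many values 32 bits give, and how many colors are available if 8 bits each are used for red, green, and blue, which is one common scheme.

# 32 bits give about 4.3 billion distinct values.
print(2 ** 32)   # 4294967296

# Assuming 8 bits each for red, green, and blue (a common scheme,
# not stated on this page), there are millions of possible colors.
print(2 ** 24)   # 16777216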

When representing numbers, most computers use the first bit to specify whether the number is positive (0) or negative (1) and then use the remaining 31 bits for the number. This allows a range of around plus or minus 2 billion. 

However, sometimes even this range isn't enough. 64-bit numbers allow an even wider range of numbers.
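Here is a sketch of the ranges mentioned above (my own illustration): with one bit reserved for the sign, 31 or 63 bits remain for the number itself.

# One bit for the sign leaves 31 (or 63) bits for the number itself.
for total_bits in (32, 64):
    value_bits = total_bits - 1
    print(f"{total_bits}-bit signed range: roughly +/- {2 ** value_bits:,}")
# 32-bit: roughly +/- 2,147,483,648 (about 2 billion)
# 64-bit: roughly +/- 9,223,372,036,854,775,808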

Computers must also deal with numbers that aren't whole, known as floating-point numbers. Using the most common standard, IEEE 754, these values are stored in a way similar to scientific notation, with an exponent and a coefficient. In a 32-bit floating-point number, the first bit specifies whether the number is positive or negative, the next 8 bits store the exponent, and the remaining 23 bits store the coefficient.
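The 1 / 8 / 23 split can be seen by inspecting the raw bits of a 32-bit float; the sketch below (mine, using Python's struct module) unpacks the number 5.25, which is 1.3125 × 2².

import struct

# Pack 5.25 as a 32-bit IEEE 754 float and look at its raw bits.
raw = struct.pack(">f", 5.25)                  # 4 bytes, most significant first
bits = "".join(f"{byte:08b}" for byte in raw)

sign, exponent, coefficient = bits[0], bits[1:9], bits[9:]
print(sign)         # 0        -> positive
print(exponent)     # 10000001 -> 129, i.e. an exponent of 2 after subtracting the bias of 127
print(coefficient)  # 01010000000000000000000 -> the fractional part of the coefficient 1.3125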

Computers also need a way to store text and other characters.

An early standard was ASCII, a 7-bit code that could store 128 different values. It could encode capital and lowercase letters, the digits 0 through 9, and various other symbols and commands. ASCII became widely used, allowing interoperability: the ability for different computers, built by different companies, to exchange data.
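Python exposes these codes directly; here is a small sketch (my own) printing a few of ASCII's 128 values as 7-bit binary.

# Each ASCII character corresponds to a 7-bit number.
for ch in ["A", "a", "0", " "]:
    print(repr(ch), "->", ord(ch), "->", format(ord(ch), "07b"))
# 'A' -> 65 -> 1000001
# 'a' -> 97 -> 1100001
# '0' -> 48 -> 0110000
# ' ' -> 32 -> 0100000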

However, ASCII was only designed for English, so as more countries began using computers, incompatible encodings emerged and caused problems exchanging data.

In order to solve these problems, Unicode was created as the universal encoding scheme.

Unicode has room for over a million codes; its most common encodings store each character using one or more 8-bit or 16-bit units.

This is enough for every single character from every language in use today, as well as space for math symbols and even emoji.
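Here is a sketch (my own illustration) showing a few Unicode code points and how one common encoding, UTF-8, turns them into bytes.

# Unicode assigns a code point to every character, including emoji.
for ch in ["A", "é", "中", "😀"]:
    print(repr(ch), hex(ord(ch)), ch.encode("utf-8").hex())
# 'A'  0x41    41
# 'é'  0xe9    c3a9
# '中' 0x4e2d  e4b8ad
# '😀' 0x1f600 f09f9880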

Other file formats, such as PNGs or MP4s, use binary numbers to encode the colors of pixels and the samples of sound in photos, movies, and music.

It is important to note that everything displayed on your computer is nothing but long sequences of 1's and 0's.
