How exactly does binary code work? - José Américo N L F Freitas

TED-Ed Animation

Let’s Begin…

Imagine trying to use words to describe every scene in a film, every note in a song, or every street in your town. Now imagine trying to do it using only the numbers 1 and 0. Every time you use the Internet to watch a movie, listen to music, or check directions, that’s exactly what your device is doing, using the language of binary code. José Américo N L F de Freitas explains how binary works.

Additional Resources for you to Explore

The binary system used in computers is a numeric system, like the decimal system we use in our day-to-day lives. The only difference between them is that decimal uses ten symbols to represent numbers (0 to 9), while binary uses only two (0 and 1). This page has an interesting parallel between the two systems, including the representation of rational numbers. This lesson illustrates how the binary system is used, along with other numeric systems. In the decimal system, any number can be represented: you just add digits as the number grows. Hence, a system with "only" ten symbols is enough to represent virtually infinite possibilities. Binary works the exact same way: if you need to represent more information, you can add extra bits to your memory. However, using a system with fewer symbols has a cost: a number written in binary requires around three times as many digits as its decimal representation.
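The digit-count comparison above is easy to verify with a few lines of Python (a small illustrative sketch):

```python
# Compare how many digits a number needs in decimal vs. binary.
# Since log2(10) ≈ 3.32, binary representations are roughly 3.3x longer.
for n in [9, 99, 999, 9999]:
    decimal_digits = len(str(n))
    binary_digits = len(bin(n)[2:])  # bin() prefixes "0b", so strip it
    print(f"{n}: {decimal_digits} decimal digits, {binary_digits} binary digits")
```

For example, 9999 takes four decimal digits but fourteen binary digits (10011100001111), a ratio of 3.5.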

Wouldn't it be better to use the widespread decimal system, then? Actually, we use binary computers because binary devices are easier to implement than "decimal devices." Currently, a computer's central processing unit (CPU) is made of electrical components called "transistors." Check out this lesson to see how transistors are made, how they operate, and how they have changed over the last century. Binary can also be implemented in many other ways: an optical fiber transmits data encoded as pulses of light, while hard disks store information using magnetic fields. You can check several examples of binary devices, including solutions that use more than one transistor to implement a single bit, here. The choice of technology depends on the application, and the most important parameters are cost, speed, size, and robustness.

Now that we know how to build devices that can represent numbers, we can expand their scope by mapping those numbers to a set of interest. A set of interest can be literally anything, such as letters of the alphabet or colors in a palette.
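This mapping idea can be sketched in a few lines of Python; the tiny letter and color tables below are made-up examples, not real standards:

```python
# The same stored number means different things depending on the mapping.
# Both tables here are hypothetical four-entry "sets of interest".
letters = {0: "A", 1: "B", 2: "C", 3: "D"}
palette = {0: "red", 1: "green", 2: "blue", 3: "black"}

stored_value = 0b10  # the device holds the binary number 10 (decimal 2)
print(letters[stored_value])  # prints "C"
print(palette[stored_value])  # prints "blue"
```

The bits themselves are identical in both cases; only the table chosen by the program gives them meaning.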

The effort to encode letters with industry standards began in the 1960s, when the American Standard Code for Information Interchange (ASCII) was developed. This encoding used 7 bits, which were mapped to English letters and special characters. The values corresponding to each symbol are here. As computers became more popular worldwide, the need for tables containing more symbols emerged. Nowadays, the most widely used encoding is UTF-8, which was created in 1993 and maps more than one million symbols to binary, including characters in all known alphabets and even emojis. You can navigate through the UTF-8 table here. In this encoding, each character takes one to four bytes to represent, as shown in the first table here.
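Python exposes both mappings directly, so the 7-bit ASCII codes and the one-to-four-byte UTF-8 lengths described above can be checked in a few lines:

```python
# ASCII assigns each character a 7-bit number; ord()/chr() expose the mapping.
print(ord("A"), format(ord("A"), "07b"))  # 65 1000001
print(chr(0b1000001))                     # prints "A"

# UTF-8 uses one to four bytes per character, depending on the symbol.
for ch in ["A", "é", "€", "😀"]:
    print(ch, len(ch.encode("utf-8")), "byte(s)")
```

Running this shows "A" taking one byte, "é" two, "€" three, and the emoji four.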

Colors are also mapped to binary sequences. Besides the RGB system, other systems were conceived to represent colors. The HSL system also uses three components: hue, which varies from 0 to 360, with "red" mapped to 0, "green" to 120, and "blue" to 240; saturation, which is the intensity of the colored component; and lightness, the "white level" of the color. The CMYK system uses four components, corresponding to the levels of cyan, magenta, yellow, and black in the pixel. This system is called "subtractive," meaning that as a component gets larger, the pixel emits less light. It is very convenient for printers, where the ink acts as a "filter" over the white canvas of the paper. In this paragraph, the name of each color system links to an interactive panel that lets you see the colors corresponding to each possible encoding. If you want to see how those systems compare with each other, check this website.
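The RGB-to-HSL relationship described above can be checked with Python's standard-library `colorsys` module (note that it works in the 0–1 range and returns components in hue, lightness, saturation order):

```python
import colorsys

# Pure red in RGB is (1, 0, 0); in HSL it is hue 0, full saturation, 50% lightness.
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(round(h * 360), s, l)  # 0 1.0 0.5

# Green sits at 120 degrees and blue at 240, matching the HSL description above.
print(round(colorsys.rgb_to_hls(0.0, 1.0, 0.0)[0] * 360))  # 120
print(round(colorsys.rgb_to_hls(0.0, 0.0, 1.0)[0] * 360))  # 240
```

The same color is just a different triple of numbers in each system, which is why converters between them are straightforward to write.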

It is possible to reduce the number of bits without losing information, optimizing data transmission or storage. Strategies that perform compression include run-length encoding, the Lempel-Ziv algorithm, and Huffman coding. The Lempel-Ziv algorithm replaces repeated patterns in the data with a token. Both the token and the pattern are added to the compressed file, so the decoder can accurately rebuild the original file. Although this post's discussion is not related to binary, it illustrates the Lempel-Ziv algorithm. Huffman coding counts the number of occurrences of each symbol in the file and creates a new binary encoding for each of those symbols. Symbols that appear more frequently receive shorter binary sequences, reducing the size of the file. ZIP files are created using a combination of these algorithms.
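The two ends of this spectrum are easy to demonstrate in Python. Below is a minimal run-length encoder (a hypothetical sketch, not the format any real archiver uses), followed by the standard-library `zlib` module, whose DEFLATE algorithm combines Lempel-Ziv matching with Huffman coding and underlies ZIP files:

```python
import zlib

def run_length_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of a repeated byte into (count, byte value) pairs."""
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)  # extend the current run
        else:
            runs.append((1, b))              # start a new run
    return runs

print(run_length_encode(b"aaaabbc"))  # [(4, 97), (2, 98), (1, 99)]

# DEFLATE (Lempel-Ziv + Huffman) shrinks repetitive data dramatically.
text = b"to be or not to be, that is the question " * 20
compressed = zlib.compress(text)
print(len(text), "->", len(compressed), "bytes")
```

Because the decoder can replay the counts and tokens exactly, both schemes are lossless: the original data is recovered bit for bit.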

Binary is a way to represent information, like any other language. Engineers use international standards to assign meaning to the states of binary devices, making it possible to represent letters, colors, and even sounds. Just as a painting of a pipe is not the pipe itself, the meaning of each bit is not embedded in the bit itself, but in the program that reads it.

About TED-Ed Animations

TED-Ed Animations feature the words and ideas of educators brought to life by professional animators. Are you an educator or animator interested in creating a TED-Ed Animation? Nominate yourself here »

Meet The Creators

  • Educator José Américo NLF Freitas
  • Director Qa'ed Mai
  • Script Editor Alex Gendler
  • Animator Qa'ed Mai
  • Associate Producer Bethany Cutmore-Scott, Elizabeth Cox
  • Content Producer Gerta Xhelo
  • Editorial Producer Alex Rosenthal
  • Narrator Addison Anderson
