Those ones and zeros might not look like anything to you, but in binary code the numbers are actually saying “Hello!” Science Friday’s Ariel Zych shares how you can write your own name using binary code.
Any code that uses just two symbols to represent information is considered binary code. Different versions of binary code have been around for centuries, and have been used in a variety of contexts. For example, Braille uses raised and unraised bumps to convey information to the blind, Morse code uses long and short signals to transmit information, and the example above uses sets of 0s and 1s to represent letters. Perhaps the most common use for binary nowadays is in computers: binary code is the way that most computers and computerized devices ultimately send, receive, and store information.
In computers, the two symbols used for binary code are 0 and 1, usually grouped in a specific sequence to represent information. Each 0 or 1 is called a “bit,” and different computers handle different numbers of bits at a time. For example, most web servers interpret bits in groups of 8, while an iPhone 6 is built on a processor that handles bits in groups of 64. “UTF-8” (which stands for “Unicode Transformation Format, 8-bit”) is a binary code built on 8-bit units: each letter, number, or symbol is represented by one or more unique groups of eight 0s and 1s. Common letters, digits, and symbols (for example *&^%$#@ and !) each fit in a single 8-bit group, while less common characters use sequences of several groups. UTF-8 is the version most commonly used by computers and the World Wide Web to represent information.
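If you have a computer handy, you can watch this 8-bit grouping happen. Here is a small sketch in Python (our choice of language, not something the article specifies) that encodes a single letter as UTF-8 and prints its bits:

```python
# Encode the letter "A" as UTF-8, then show each resulting byte
# as a group of eight 0s and 1s ("08b" means 8 binary digits).
data = "A".encode("utf-8")
bits = " ".join(format(byte, "08b") for byte in data)
print(bits)  # 01000001
```

The letter A comes out as a single group, 01000001, matching the example later in this activity.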
Take a look through the key below and try to spell something using UTF-8 binary code. Try your name! Find the 8-bit binary code sequence for each letter of your name, writing it down with a small space between each set of 8 bits. For example, if your name starts with the letter A, your first letter would be 01000001.
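If you would rather let a computer do the lookup, the whole activity can be sketched in a few lines of Python (the helper name `name_to_binary` is ours, chosen just for this illustration):

```python
def name_to_binary(name):
    """Return the UTF-8 bytes of `name` as space-separated 8-bit groups."""
    return " ".join(format(byte, "08b") for byte in name.encode("utf-8"))

# For example, the name "Ada" becomes three 8-bit groups:
print(name_to_binary("Ada"))  # 01000001 01100100 01100001
```

Try it with your own name, then check a few letters against the chart below to see that they match.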
Text-to-binary conversion chart (using a UTF-8 version of binary)