Bit is short for binary digit. A bit is a single digit, either a 1 or a 0, and it is the fundamental unit of information in computing, communications, and physics. Binary digits (bits) are stored within a computer's microchips by switching an electrical current "on" or "off": a 1 is represented by an "on" or high-voltage state, and a 0 is represented by an "off" or low-voltage state.
When you hit a key or click a mouse button, you send tiny electronic on/off signals to the computer. Each tiny electronic signal is one bit. The computer usually groups these tiny signals, or bits, into bigger chunks to work with: a series of eight bits strung together is a byte, and a byte typically represents one character (a letter, number, etc.) on your screen. A series of 1,024 bytes strung together is a kilobyte. You've probably noticed that most computer storage devices (such as disks) and software files (your documents, for instance) are measured in kilobytes or megabytes (a megabyte is 1,024 kilobytes). Those measurements refer to how many electronic on/off signals it took to create and store the information. The more information there is, the greater the number of kilobytes or megabytes.
For example, the letter A is stored as the eight bits 01000001.
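The letter-to-bits mapping above (01000001 for A comes from the ASCII standard, where A is code 65) can be sketched in a few lines of Python. The helper name `char_to_bits` is hypothetical, chosen just for this illustration:

```python
# A hypothetical helper: turn one character into its 8-bit binary string.
# ASCII assigns the letter "A" the decimal code 65, which is 01000001 in binary.
def char_to_bits(ch):
    """Return the 8-bit binary string for a single character."""
    return format(ord(ch), "08b")  # "08b" pads the binary form to 8 digits

print(char_to_bits("A"))  # 01000001
```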
Because bits are so small, they are rarely used as units of storage. Bytes, kilobytes (roughly a thousand, or exactly 1,024 bytes), megabytes (roughly a million, or 1,048,576 bytes), and gigabytes (roughly a billion, or 1,073,741,824 bytes) are more commonly used.
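Each unit in that list is 1,024 times the one before it, as this small Python sketch of the arithmetic shows:

```python
# Storage units: each step up multiplies by 1,024 (2 to the 10th power).
BYTE = 8                    # one byte is eight bits
KILOBYTE = 1024             # bytes in a kilobyte
MEGABYTE = 1024 * KILOBYTE  # 1,048,576 bytes
GIGABYTE = 1024 * MEGABYTE  # 1,073,741,824 bytes

print(MEGABYTE)  # 1048576
print(GIGABYTE)  # 1073741824
```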
Of course, all of these are based on the humble bit. The size of a file, the storage capacity of a disk, or the amount of computer memory can all be measured in bits.
More fun from Steve: You can represent any number as a series of bits; you just have to use the "base 2," or binary, counting system. For example, the number we write as 7 in our accustomed base 10, or decimal, system is written 111 in binary. Reading from the right, each place signifies a power of 2 rather than a power of 10; in this case, the digit on the right means 1, the middle digit means 2, and the one on the left means 4, for a total of 7. The expression 101 in binary equals 5 in decimal (4 plus 0 plus 1). In binary, three bits (digits) are enough to make any number between 0 and 7. Four bits cover everything from 0 to 15, eight bits take you up to 255, and 32 bits allow for numbers as large as 4,294,967,295.
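The binary arithmetic above can be checked directly in Python, which has built-in conversions between decimal and binary (n bits cover the values 0 through 2 to the n, minus 1):

```python
# Decimal 7 written in binary is 111.
print(format(7, "b"))   # 111

# Binary 101 read back as decimal: 1*4 + 0*2 + 1*1 = 5.
print(int("101", 2))    # 5

# n bits can represent every number from 0 up to 2**n - 1.
for n in (3, 4, 8, 32):
    print(n, "bits cover 0 to", 2**n - 1)
```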