Most computer operations involve manipulating data stored in various memories. The smallest unit of computer memory is called a bit. When a bit is set its value is taken to be 1, and when it is reset its value is taken to be 0; thus a bit can have one of two values, 0 or 1. In electronics, memory circuits are called flip-flops. A flip-flop is a memory circuit that behaves much like a switch. When a switch is set on, there is a high voltage at its output, and it remains on until it is reset, in which case the voltage at its output terminal becomes 0. Similarly, when a flip-flop is set its value is taken as 1, and when it is reset its value becomes 0. One flip-flop represents one bit of memory, and its output state can be manipulated according to the input signal to the flip-flop. Since bits are extremely small units of memory, in computer programs we often deal with groups of bits. A group of 8 bits is called a byte.
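Setting and resetting a bit can also be sketched in software, mirroring what a flip-flop does in hardware. The function and variable names below are illustrative, not from the text:

```python
# A sketch of "setting" and "resetting" a single bit in a byte,
# mirroring what a flip-flop does in hardware.

def set_bit(value, position):
    """Set (turn on) the bit at the given position."""
    return value | (1 << position)

def reset_bit(value, position):
    """Reset (turn off) the bit at the given position."""
    return value & ~(1 << position)

byte = 0b00000000          # all eight "flip-flops" reset
byte = set_bit(byte, 3)    # set bit 3 -> 0b00001000
print(format(byte, '08b'))
byte = reset_bit(byte, 3)  # reset bit 3 -> 0b00000000
print(format(byte, '08b'))
```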
A nibble is a cute little term for half of a byte, though hardly anyone uses the word anymore. One byte is eight bits, so a nibble is four bits. If you want to know what bits and bytes are, see their definitions under bit. (Nibble is sometimes spelled "nybble.")
A computer is a collection of digital electronic circuits in which memory circuits play a major role. The inputs to and outputs from the microprocessor are all expressed as voltages applied to specified sets of terminals meant for input and output, respectively. We may code the high voltage value (typically 5 V) as 1 and the low voltage value (typically 0 V) as 0. Thus, an instruction to the computer's microprocessor may be represented as a sequence of 0s and 1s (ones and zeros), such as 11001001. These ones and zeros are electronic signals that tell the computer what to do. Microprocessors are designed to carry out a large number of instructions; the manufacturers supply the set of instructions and their respective codes.
Byte-Addressing: Any memory addressing scheme in which the smallest value that can have its own unique address is a BYTE: in other words, each successive address identifies a different eight-bit quantity. This contrasts with various word-addressing schemes, where for example a 32-bit word might be the smallest addressable object.
Base64: A method of encoding binary information so that it can be transmitted across text-only communications systems such as Internet EMAIL (which will pass only ASCII characters in the range 32 to 126 decimal). Base64 is used by the MIME protocol to encode binary attachments to email.
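A minimal sketch of this with Python's standard base64 module: arbitrary bytes become a string of safe ASCII characters, and decoding recovers the original bytes exactly.

```python
import base64

# Binary data that is not printable ASCII (values chosen for illustration).
raw = bytes([0x00, 0xFF, 0x10, 0x80])

encoded = base64.b64encode(raw)    # only safe ASCII characters
decoded = base64.b64decode(encoded)

print(encoded.decode('ascii'))
assert decoded == raw              # the round trip is lossless
```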
A byte is a group of eight bits, and it can represent any number in the range 00000000 to 11111111 binary, or 0 to 255 decimal. The bit, short for binary digit, has been the basic unit of information within a computer since the early days of digital computing, equal to a 1 or a 0. The byte is used to measure both memory size (kilobytes, megabytes) and data transfer speed (kilobytes per second). A byte is eight individual electronic on/off signals, strung together to make a message that the computer can interpret. Bits are stored within the computer's microchips and are controlled by the flow of electrical currents; a 1 is represented by an "on" or high-voltage current, and a 0 is represented by an "off" or low current. A byte is formed by combining eight bits together to store the equivalent of one character. For example, the letter A (a single byte) is made up of the eight bits 01000001.
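The letter-A example can be checked directly in Python: ord() gives a character's numeric code, and format(..., '08b') prints the eight bits stored in its byte.

```python
code = ord('A')                 # numeric code of 'A' -> 65
bits = format(code, '08b')      # its eight-bit binary form -> '01000001'
print(code, bits)

assert code == 65
assert bits == '01000001'
assert int(bits, 2) == 65       # converting the bits back recovers 65
```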
Bit is short for binary digit. A bit is a single digit, either a 1 or a 0, and it is the fundamental unit of information in computing, communications and physics. Binary digits (bits) are stored within a computer's microchips by turning an electrical current "on" or "off"; a 1 is represented by an "on" or high-voltage current, and a 0 is represented by an "off" or low current.
The binary system is a method for working with numbers based on only two digits: 1 and 0 (binary is also known as "base two"). Binary numbers are the basis for computer storage. Input into the computer is changed into binary numbers that the computer can store and manipulate. A binary numbering system uses a series of 1s and 0s to represent any number. Non-numbers (such as the letter D) and characters (such as a question mark) are assigned an eight-digit binary number so that they too can be represented within the computer.
Arithmetic shift: A type of PROCESSOR INSTRUCTION that moves all the BITS making up a binary WORD one or more places to the left or right: bits that move off the end of the word are discarded and zeroes are introduced to fill the vacated positions. However, unlike the otherwise similar LOGICAL SHIFT operations, arithmetic shifts preserve the sign of a number by propagating the value of its MOST SIGNIFICANT BIT. Hence they can be used in calculations, offering a fast way to perform multiplication and division by powers of two.
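A small sketch of this in Python, whose << and >> operators on signed integers behave arithmetically: >> propagates the sign, so shifting a negative number keeps it negative, and shifts act as fast multiplication and division by powers of two.

```python
x = 12
print(x << 1)    # shift left one place:   12 * 2  = 24
print(x >> 2)    # shift right two places: 12 // 4 = 3

y = -12
print(y >> 2)    # sign is preserved: -12 >> 2 == -3

assert (x << 1) == x * 2
assert (y >> 2) == y // 4    # Python's >> floors, matching // for powers of two
```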
ASCII stands for American Standard Code for Information Interchange (pronounced 'as-key'). This is a standard set of characters understood by all computers, consisting mostly of letters and numbers plus a few basic symbols such as $ and %. It employs the 128 possible 7-bit integers to encode the 52 uppercase and lowercase letters and 10 numeric digits of the Roman alphabet, plus punctuation characters and some other symbols. The fact that almost everyone agrees on ASCII makes it relatively easy to exchange information between different programs, different operating systems, and even different computers.
Meaning of BCD: "Binary Coded Decimal" is a method that uses the binary digit 0 to represent "off" and 1 to represent "on". BCD has been in use since the first UNIVAC computer. Each digit is called a bit. Four bits are called a nibble, and one nibble is used to represent each decimal digit (0 through 9).
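A minimal sketch of BCD encoding: each decimal digit of a number is stored in its own four-bit nibble. The function name here is illustrative, not a standard API:

```python
def to_bcd(number):
    """Encode a non-negative integer as one 4-bit nibble per decimal digit."""
    return [format(int(digit), '04b') for digit in str(number)]

# Each decimal digit gets its own nibble: 5 -> 0101, 9 -> 1001, 3 -> 0011.
print(to_bcd(593))
```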
Bicycle = 2 wheels
Biplane = 2 wings
Binoculars = 2 eyepieces
EBCDIC (pronounced "ebb see dick"), short for Extended Binary Coded Decimal Interchange Code, is a coding system used to represent characters (letters, numerals, punctuation marks, and other symbols) in computerized text. A character is represented in EBCDIC by eight bits, or one byte. EBCDIC is mainly used on IBM mainframe and IBM midrange computer operating systems. Each byte consists of two nibbles, each four bits wide. The first nibble defines the class of character, while the second nibble defines the specific character inside that class.
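Python's standard codecs include cp500, one EBCDIC variant, so we can compare a character's ASCII and EBCDIC codes directly (the choice of cp500 as a representative EBCDIC code page is an assumption for illustration):

```python
text = 'A'
ascii_code = text.encode('ascii')    # b'\x41' -> 65
ebcdic_code = text.encode('cp500')   # cp500 is an EBCDIC code page -> b'\xc1'

print(hex(ascii_code[0]), hex(ebcdic_code[0]))
assert ascii_code[0] == 0x41
assert ebcdic_code[0] == 0xC1   # 'A' occupies a different code point in EBCDIC
```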
BINARY TO OCTAL CONVERSION
Convert 10110111.1_{2} to octal: group the bits in threes, moving outward from the binary point and padding with zeros as needed: 010 110 111 . 100 = 2 6 7 . 4. Therefore 10110111.1_{2} = 267.4_{8}.
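The grouping method for the integer part can be sketched in Python: pad the bit string to a multiple of three, split into groups of three, and read each group as one octal digit.

```python
bits = '10110111'                             # integer part of 10110111.1
padded = bits.zfill(-(-len(bits) // 3) * 3)   # pad to a multiple of 3 -> '010110111'
groups = [padded[i:i + 3] for i in range(0, len(padded), 3)]
octal = ''.join(str(int(group, 2)) for group in groups)

print(groups)    # groups of three bits
print(octal)     # the octal digits

assert octal == format(int(bits, 2), 'o')     # agrees with Python's built-in
```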
Hexadecimal is a number system in base 16. It utilizes the digits 0 through 9 and the letters A through F. The hex system is convenient to use in programming because it is compatible with the binary system and is easier to read and more compact. Two hexadecimal digits can represent one byte. For example, 2B7D equals 0010 1011 0111 1101 in binary.
Convert 10110111.1_{2} to hexadecimal: group the bits in fours, moving outward from the binary point and padding with zeros as needed: 1011 0111 . 1000 = B 7 . 8. Therefore 10110111.1_{2} = B7.8_{16}.
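The same grouping method works for hexadecimal with four bits per digit. This sketch converts the integer part of the example, then checks the 2B7D byte-pair example from the text in reverse:

```python
bits = '10110111'                  # integer part of 10110111.1
groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
hex_digits = ''.join(format(int(group, 2), 'X') for group in groups)

print(groups)        # groups of four bits
print(hex_digits)    # the hexadecimal digits

# The text's 2B7D example in reverse: its 16-bit binary form.
assert format(int('2B7D', 16), '016b') == '0010101101111101'
```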
ASCII: ASCII codes represent text in computers, communications equipment, and other devices that work with text. ASCII, pronounced "ask-ee", is the acronym for American Standard Code for Information Interchange. It is a set of characters which, unlike the characters in word-processing documents, allows no special formatting such as different fonts or bold, underlined, or italic text. ASCII is computer code for the interchange of information between terminals.