
What is Graphics Adapter?

By Dinesh Thakur

An EXPANSION CARD that enables a personal computer to create a graphical display. The term harks back to the original 1981 IBM PC, which could display only text and required such an optional extra card to ‘adapt’ it to display graphics.

The succession of graphics adapter standards that IBM produced throughout the 1980s (CGA, EGA, VGA) both defined and confined the graphics capability of personal computers until the arrival of IBM-compatibles and the graphical Windows operating system, which spawned a whole industry manufacturing graphics adapters with higher capabilities.

Nowadays most graphics adapters are also powerful GRAPHICS ACCELERATORS capable of displaying 24-bit colour at resolutions of 1280 x 1024 pixels or better. Providing software support for the plethora of different makes of adapter has become an onerous task for software manufacturers.

What is Graphics?

By Dinesh Thakur

Pictures on a computer display or the process of creating pictures on a computer display. The term came into use when there was still a distinction between computers that could display only text and those that could also display pictures. This distinction is lost now that almost all computers employ a GRAPHICAL USER INTERFACE.

Since a computer can ultimately deal only with binary numbers, pictures have to be DIGITIZED (reduced to lists of numbers) in order to be stored, processed or displayed. The most common graphical display devices are the CATHODE RAY TUBE inside a monitor, and the PRINTER, both of which present pictures as two-dimensional arrays of dots. The most natural representation for a picture inside the computer is therefore just a list of the colours for each dot in the output; this is called a BITMAPPED representation. Bitmapping implies that each dot in the picture corresponds to one or more bits in the computer’s video memory.

An alternative representation is to treat a picture as if it were composed of simple geometric shapes and record the relative positions of these shapes; this is called a vector representation (see more under VECTOR GRAPHICS). A typical vector representation might build up a picture from straight lines, storing only the coordinates of the endpoints of each line.

Vector and bitmapped representations have complementary strengths and weaknesses. Most modern output devices work by drawing dots, which makes it more efficient to display bitmapped images. There are a few, mostly obsolete, display technologies, such as the GRAPH PLOTTER or the VECTOR DISPLAY tube, that can draw lines directly, but it is more typical for a vector image to be first converted into a bitmap (a process called RENDERING) before it is displayed on a dot-oriented device. Rendering involves extra work for the computer, which is performed either by software or by special hardware called a GRAPHICS ACCELERATOR.

Vector representations are easily edited by moving, resizing or deleting individual shapes, whereas in a bitmap all that can be changed is the colour of the individual pixels. Vector images can easily be rotated and scaled to different sizes with no loss of quality, the computer simply multiplying the endpoint coordinates by a suitable factor. A bitmap, on the other hand, can be magnified only by duplicating each pixel, which gives an unsightly jagged effect.
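To make the difference concrete, here is a small Python sketch; the data structures and function names are purely illustrative, not taken from any graphics library. Scaling a vector image means multiplying endpoint coordinates, while magnifying a bitmap means duplicating pixels.

# Minimal sketch: scaling a vector image vs. scaling a bitmap.

def scale_vector(lines, factor):
    """Scale a vector image (a list of line segments) by multiplying
    every endpoint coordinate by the factor -- no quality is lost."""
    return [((x1 * factor, y1 * factor), (x2 * factor, y2 * factor))
            for (x1, y1), (x2, y2) in lines]

def scale_bitmap(pixels, factor):
    """Magnify a bitmap (a list of rows of pixel values) by an integer
    factor by duplicating each pixel -- this duplication is what
    produces the 'jagged' look described above."""
    out = []
    for row in pixels:
        wide_row = [p for p in row for _ in range(factor)]
        out.extend([wide_row] * factor)
    return out

# A one-segment vector image and a tiny 2 x 2 bitmap, both doubled in size.
print(scale_vector([((0, 0), (3, 1))], 2))   # [((0, 0), (6, 2))]
print(scale_bitmap([[1, 0], [0, 1]], 2))     # each pixel becomes a 2 x 2 block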

Vector representations are most suitable for pictures that are actually drawn on the computer (such as engineering drawings, document layouts or line illustrations), and for images that must be reproduced at various sizes (such as fonts). On the other hand, bitmaps are more suitable for manipulating photographs of real world scenes and objects that contain many continuously varying colours and ill-defined shapes – both scanners and digital cameras produce bit-mapped images as their output.

What is File Compression?

By Dinesh Thakur

A class of techniques for shrinking the size of a data file to reduce its storage requirement, by processing the data it contains using some suitably reversible algorithm. Compression methods tend to be more effective for a particular kind of data, so that text files will typically be compressed using a different algorithm from graphics or sound files.

 

For example, RUN-LENGTH ENCODING is very effective for compressing flat-shaded computer graphics, which contain many long runs of identical PIXEL values, but is quite poor for compressing tonally rich photographic material where adjacent pixels differ. Dictionary-based algorithms such as LEMPEL-ZIV COMPRESSION are very effective for compressing text, but perform poorly on binary data such as pictures or sound.

Compression may be applied automatically by an operating system or application, or applied manually using a utility program such as PKZIP, TAR, LHA or STUFFIT.

What is Object-oriented Graphics?

By Dinesh Thakur

Also known as vector graphics, object-oriented graphics are shapes represented with mathematical formulas. (This is very different from bitmapped graphics, in which the image is mapped to the pixels on the screen, dot by dot.)

In a program that uses object-oriented graphics, each separate element you draw - every circle, every line, and every rectangle - is defined and stored as a separate object. Each object is defined by its vector points, or end points. Because each graphic object is defined mathematically, rather than as a specific set of dots, you can change its proportions, make it larger or smaller, stretch it, rotate it, change its pattern, etc., without distorting the line width or affecting the object’s sharpness and clarity (the resolution).

Because each object is a separate entity, you can overlap objects in any order and change that order whenever you feel like it. To select a graphical object in an object-oriented graphics program, you usually just click on the object with the pointer. When you select it, a set of handles (little black squares) appears on or around the object (compare marquee). By dragging the handles, you can change the size or shape of the object or the curviness of any curved lines. You can also copy or cut a selected object to the Clipboard, or move it around on the screen, without disturbing any other object.

The resolution of object-oriented graphics is device independent. This means that if you print a graphic image to a printer that has a resolution of 300 dots per inch, the graphic will print at 300 dots per inch. If you print the same image to an imagesetter that has a resolution of 2540 dots per inch, the graphic will print at 2540 dots per inch. (Bitmapped graphics, though, always print at the resolution at which they were created.)

Fonts, or typefaces, can also be object-oriented, but they’re not usually referred to this way; instead, such fonts are known as outline fonts, scalable fonts, or vector fonts.

What is Graphical User Interface?

By Dinesh Thakur

A graphical user interface is fondly called a “GUI,” pronounced “gooey.” The word “graphical” means pictures; “user” means the person who uses it; “interface” means what you see on the screen and how you work with it. So a graphical user interface, then, means that you (the user) get to work with little pictures on the screen to boss the computer around, rather than typing in lines of code and commands.

(GUI) An INTERACTIVE outer layer presented by a computer software product (for example an operating system) to make it easier to use by operating through pictures as well as words. Graphical user interfaces employ visual metaphors, in which objects drawn on the computer’s screen mimic in some way the behaviour of real objects, and manipulating the screen object controls part of the program.

A graphical user interface uses menus and icons (pictorial representations) to choose commands, start applications, make changes to documents, store files, delete files, etc. You can use the mouse to control a cursor or pointer on the screen to do these things, or you can alternatively use the keyboard to do most actions. A graphical user interface is considered user-friendly.

The most popular GUI metaphor requires the user to point at pictures on the screen with an arrow pointer steered by a MOUSE or similar input device. Clicking the MOUSE BUTTONS while pointing to a screen object selects or activates that object, and may enable it to be moved across the screen by dragging, as if it were a real object.

Take, for example, the action of scrolling a block of text that is too long to fit onto the screen. A non-graphical user interface might offer a ‘scroll’ command, invoked by pressing a certain combination of keys, say CTRL+S. Under a GUI, by contrast, a picture of an object called a SCROLLBAR appears on the screen, with a movable button that causes the text to scroll up and down according to its position. Similarly, moving a block of text in a WORD PROCESSOR that employs a GUI involves merely selecting it by dragging the mouse pointer across it until the text becomes HIGHLIGHTED, then dragging the highlighted area to its intended destination.

There is now an accepted ‘vocabulary’ of such screen objects which behave in more or less similar ways across different applications, and even across different operating systems. These include: WINDOWS, ICONS, pull-down and pop-up MENUS, BUTTONS and button bars, check boxes, dialogues and tabbed property sheets. Variants of these GUI objects are used to control programs under Microsoft Windows, Apple’s MacOS, and on UNIX systems that have a windowing system such as Motif or KDE installed.

GUIs have many advantages and some disadvantages. They make programs much easier to learn and use, by exploiting natural hand-to-eye coordination instead of numerous obscure command sequences. They reduce the need for fluent typing skills, and make the operation of software more comprehensible and hence less mysterious and anxiety-prone. For visually oriented tasks such as word processing, illustration and graphic design they have proved revolutionary.

On the deficit side, GUIs require far more computing resources than older systems. It is usual for the operating system itself to draw most of the screen objects (via SYSTEM CALLS) to relieve application programs from the overhead of creating them from scratch each time, which means that GUI-based operating systems require typically 100 to 1000 times more working memory and processing power than those with old text-based interfaces.

GUIs can also present great difficulties for people with visual disabilities, and their interactive nature makes it difficult to automate repetitive tasks by batch processing. Neither do GUIs automatically promote good user interface design. Hiding 100 poorly-chosen commands behind the tabs of a property sheet is no better than hiding them among an old-fashioned menu hierarchy – the point is to reduce them to 5 more sensible ones.

Historically, the invention of the GUI must be credited to Xerox PARC, where the first GUI-based workstations – the XEROX STAR and XEROX DORADO – were designed during the 1970s. These proved too expensive and too radical for commercial exploitation, but it was following a visit to PARC by Steve Jobs that Apple released the LISA, the first commercial GUI computer, and later the more successful MACINTOSH. It was only following the 1990 release of Windows version 3.0 that GUIs became ubiquitous on IBM-compatible PCs.

What is Dithering?

By Dinesh Thakur

Dithering is a trick many graphic applications use to fool your eye into seeing a whole lot more colors (or grey tones) on the screen than are really there. The computer achieves this optical illusion by mixing together different colored pixels (tiny dots on the screen that make up an image) to trick the eye into thinking that a totally new color exists. For instance, since pixels are so tiny, if the computer intermingles a series of black dots with white dots, then you’re going to think you’re seeing gray.

 

Color dithering smoothes out images by creating intermediate shades between two or more extreme colors (called a blend). Dithering also makes the best use of the limited number of available colors, like when you open a 24-bit color image (millions of colors) on a computer that’s only capable of displaying 8-bit color (256 colors).

There is a halftone effect for black-and-white images called dithering. Rather than dots of varying sizes, a dithered image has dots or squiggles all the same size, arranged in such a way as to create the illusion of gray values.
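As a rough illustration, the following Python sketch applies ordered dithering with a standard 2 x 2 Bayer matrix to a flat mid-gray patch. The image data is invented, and real programs normally use larger matrices or error-diffusion methods such as Floyd-Steinberg, but the idea is the same: compare each pixel against a repeating pattern of thresholds.

import math  # not strictly needed; kept minimal on purpose

BAYER_2X2 = [[0, 2],
             [3, 1]]   # standard 2 x 2 Bayer threshold matrix

def ordered_dither(gray):
    """gray: rows of values 0..255; returns rows of 0 (black) or 1 (white)."""
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, value in enumerate(row):
            # Scale the matrix entry into the 0..255 range to get a threshold.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) * 255 / 4
            out_row.append(1 if value > threshold else 0)
        out.append(out_row)
    return out

# A flat mid-gray patch: about half the output pixels come out white,
# and from a distance the eye blends them back into gray.
patch = [[128] * 8 for _ in range(4)]
for row in ordered_dither(patch):
    print(row)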

What is Rendering?

By Dinesh Thakur

Since large drawings cannot fit in their entirety on display screens, they can either be compressed to fit, thereby obscuring details and creating clutter, or only a portion of the total drawing can be displayed. The portion of a 2D or 3D object to be displayed is chosen through specification of a rectangular window that limits what part of the drawing can be seen.

A 2D window is usually defined by choosing a maximum and minimum value for its x- and y-coordinates, or by specifying the center of the window and giving its maximum relative height and width. Simple subtractions or comparisons suffice to determine whether a point is in view. For lines and polygons, a clipping operation is performed that discards those parts that fall outside of the window. Only those parts that remain inside the window are drawn or otherwise rendered to make them visible.
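The point test really is just a pair of comparisons, as the short Python sketch below shows. The trivial accept/reject step for lines (the first stage of a full clipping algorithm such as Cohen-Sutherland) is included for illustration, with made-up window coordinates.

# Minimal sketch of window tests, assuming a window given by its minimum
# and maximum x and y coordinates.

def point_in_window(x, y, xmin, ymin, xmax, ymax):
    """Simple comparisons decide whether a point is in view."""
    return xmin <= x <= xmax and ymin <= y <= ymax

def clip_line_trivial(p1, p2, xmin, ymin, xmax, ymax):
    """Trivial accept/reject step used before a full clipping algorithm:
    accept a line whose endpoints are both inside the window, reject one
    that lies entirely to one side of it."""
    (x1, y1), (x2, y2) = p1, p2
    if point_in_window(x1, y1, xmin, ymin, xmax, ymax) and \
       point_in_window(x2, y2, xmin, ymin, xmax, ymax):
        return "accept"
    if (x1 < xmin and x2 < xmin) or (x1 > xmax and x2 > xmax) or \
       (y1 < ymin and y2 < ymin) or (y1 > ymax and y2 > ymax):
        return "reject"
    return "needs clipping"

print(clip_line_trivial((2, 2), (8, 5), 0, 0, 10, 10))    # accept
print(clip_line_trivial((-5, 1), (-1, 9), 0, 0, 10, 10))  # reject
print(clip_line_trivial((-5, 5), (5, 5), 0, 0, 10, 10))   # needs clipping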

What is coordinate?

By Dinesh Thakur

Built into your computer is a mapping system, or grid, complete with the ability to pinpoint any location or coordinate in the application window. This grid is laid out in the common x,y format-x being the horizontal units of measure starting from the left side of the screen, and y being the units starting from the top of the screen. It’s easy to see that 0,0 would be the upper left corner of the screen. Now, if you’re only using your computer for word processing, then you have no real use for knowing exactly where your cursor is. But in the painting and drawing world, knowing these coordinates is very helpful-to say the least-and it’s essential in a lot of instances. Nearly all graphic and page layout applications give you a separate window which shows the coordinates of where your cursor is located at any given moment. By watching your coordinates you can move, create, shape, or select objects or portions thereof with great precision.

What is Compression?

By Dinesh Thakur

The processing of a set of data in order to reduce its size. Compression may be performed both to reduce the amount of storage space occupied (say, to fit the data onto a single CD) and to reduce the time it takes to transmit (say, over a slow telephone line). Compressed data must be decompressed by reversing the process before it can be read or modified.

When you compress computerized information, you make it smaller (taking up less space on the disk), meaning that less data is needed to represent exactly the same information. Using a compression utility, you can compress files stored on disk so that they take up less disk space and leave more space for other files. Some programs have the ability to compress data that’s being held in memory, allowing the computer to keep more data in memory and thus spend less time retrieving data from the disk. And some modems and communications software can compress the data they send back and forth to one another. Since there’s less data, it takes less time to transfer them.

Keep in mind that even when you are archiving files you never want to compress your original and only file. All it takes is to lose one bit, one electronic signal, from a compressed file and that file is destroyed. Bits get lost all the time. In an uncompressed file, it’s a minor problem, but in a compressed file, it can be a catastrophe of considerable dimension.

There are many known compression/decompression ALGORITHMS, and the search for better ones has become commercially important, since most new communications technologies (such as digital television) can only work with effective data compression.

All compression algorithms work by uncovering and eliminating redundancy in the data, and there is an important distinction between those that preserve all the information in the data (lossless methods) and those that sacrifice some information for greater compression (lossy methods). Lossy algorithms such as JPEG and MPEG are suitable only for final delivery of data to end users, as the information losses accumulate each time the material is recompressed and decompressed.
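As a quick illustration of the lossless case, the following Python sketch uses the standard-library zlib module: redundant data shrinks considerably, and the decompressed output is bit-for-bit identical to the original.

# Minimal sketch of lossless compression with Python's built-in zlib module.

import zlib

text = b"AAAA" * 1000 + b"the quick brown fox " * 50   # highly redundant data
packed = zlib.compress(text)
restored = zlib.decompress(packed)

print(len(text), "->", len(packed), "bytes")   # far fewer bytes after compression
print(restored == text)                        # True: nothing was lost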

What is CODEC?

By Dinesh Thakur

CODEC is a shorthand way of saying “compressor/decompressor.” It refers to a variety of software products that determine how a movie file, such as QuickTime, should be condensed, or compressed, to save space on the hard disk and to make the movie run faster. You might choose a different CODEC for video images than you would for still photography images. The different choices strike a different balance between picture quality and the size of the file (how many megabytes it requires to store it on the hard disk).

What is CMYK?

By Dinesh Thakur

The acronym CMYK (pronounced as the individual letters: CM Y K) stands for the process colors cyan, magenta, yellow, and black. These four process colors are the transparent ink colors that a commercial press uses to recreate the illusion of a full-color photograph or illustration on the printed page. If you look at any printed color image in a magazine, especially if you look at it through a magnifying glass (a “loupe”), you will see separate dots of ink in each of the four colors. These four colors, in varying intensities determined by the dot size and space around the dot, combine together to create the wide range of colors you appear to see.

To get these four colors from the full-color image, the image must be separated into the varying percentages of each of the colors. There are several very sophisticated methods of doing this, and the result is a four color separation.

Desktop color systems and the powerful page layout and art programs are now capable of making four-color separations for us. I can also create a color in my publication to match a color in the photograph. For instance, if I want to print my headlines in the same slate blue as in the model’s tie, the computer could separate the headline color into the four different layers (sometimes called “plates”) as follows: 91% Cyan, 69% Magenta, 9% Yellow, and 2% Black. The photograph itself would be separated into its variations of CMYK. When these four percentages of transparent ink are printed on top of each other, the colors combine to make the full-color photograph and the slate blue of the headlines, all at the same time.
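For illustration only, here is the naive arithmetic for turning an RGB screen colour into four ink percentages in Python. Real separations are far more sophisticated (accounting for ink behaviour, black generation and so on), and the sample colour below is simply a made-up slate blue, not the one in the example above.

# Minimal sketch of the naive RGB-to-CMYK conversion.

def rgb_to_cmyk(r, g, b):
    """r, g, b in 0..255; returns C, M, Y, K as rounded percentages."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    k = 1 - max(r, g, b)            # black replaces the common component
    if k == 1.0:                    # pure black: avoid dividing by zero
        return (0, 0, 0, 100)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return tuple(round(v * 100) for v in (c, m, y, k))

print(rgb_to_cmyk(90, 110, 150))    # a made-up slate blue: roughly (40, 27, 0, 41)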

This is different from spot color, where each spot of color is a separate, opaque ink color out of a can, such as red or blue or peach.

What is CLUT?

By Dinesh Thakur

CLUT stands for color look-up table. A CLUT is a software palette or set of 256 colors (it’s actually a resource) that resides within the system software and most color-capable applications. On a computer with 8-bit color (those that are only capable of displaying a total of 256 colors), a CLUT is a necessary reference to let the computer know which 256 colors out of the available 16.7 million colors (24-bit color) it can use at one time. If you think of all those 16.7 million colors as being a big (ok, very big) box of crayons, you can visualize a CLUT as being a small box of handpicked colors that someone has handed you to work with. Many applications give you the option of choosing which 256 colors you want to work with. You often can set up your own palette for each particular file. For instance, if you were painting a picture of a man’s face, a palette of 256 different flesh tones would be more useful than a palette containing 256 colors found in the range between black and burgundy. Take the time to explore your particular application and its documentation for a variable palette feature.
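A rough Python sketch of the idea follows; the palette entries are invented for illustration. The image itself stores only small index numbers, and the CLUT turns each index into a full 24-bit colour when the image is displayed.

# Minimal sketch of a colour look-up table.

clut = [
    (0, 0, 0),        # index 0: black
    (255, 224, 189),  # index 1: a light flesh tone
    (224, 172, 105),  # index 2: a darker flesh tone
    (141, 85, 36),    # index 3: brown
    # ... up to 256 entries on an 8-bit display
]

indexed_image = [
    [1, 1, 2],
    [2, 3, 0],
]

# Expanding the indexed image to true RGB for display:
rgb_image = [[clut[i] for i in row] for row in indexed_image]
print(rgb_image[0][0])   # (255, 224, 189)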

What is CGM (Computer Graphics Metafile)?

By Dinesh Thakur

CGM stands for computer graphics metafile, which is an international standard file format for graphic images. Most CGM files are vector graphics, although it is possible to store raster graphics in the CGM format. The purpose of creating a standard is to enable users of different systems and different programs to exchange the same graphic file. It is extremely difficult, though, to create a standard so strict that it can work seamlessly everywhere. A CGM file created in one program may not necessarily be read by every other program.

The Windows metafile format (WMF) developed by Microsoft Corporation may eventually supplant CGM as the vector graphics standard.

What is double buffering?

By Dinesh Thakur

A technique called double buffering permits one set of data to be used while another is collected. It is used with graphics displays, where one frame buffer holds the current screen image while another acquires the bits that will make up the next image. When it is ready, the buffers are switched, the new screen is displayed, and the process continues. 

This reduces the minimum time between successive frames to the time required to switch buffers, rather than the time required to RENDER a whole frame, so avoiding a lengthy dark space between frames.

In a typical situation, a processor will be capable of producing data several orders of magnitude faster than a peripheral can accept it. In order to make most efficient use of the processor, the data will be placed in a buffer and its location made known to the peripheral. The peripheral then proceeds to empty the buffer while the processor is freed for other work.
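A minimal Python sketch of the display-oriented pattern is shown below, with stand-in draw and display functions rather than real graphics calls: the next frame is always rendered into the hidden buffer, and switching frames is just a cheap swap of references.

# Minimal sketch of double buffering.

WIDTH, HEIGHT = 8, 4

def render_frame(buffer, frame_number):
    """Draw the next image into the hidden (back) buffer."""
    for y in range(HEIGHT):
        for x in range(WIDTH):
            buffer[y][x] = (x + frame_number) % 2   # some changing pattern

def display(buffer):
    """Stand-in for sending the front buffer to the screen."""
    pass

front = [[0] * WIDTH for _ in range(HEIGHT)]
back = [[0] * WIDTH for _ in range(HEIGHT)]

for frame in range(3):
    render_frame(back, frame)     # build the next image off-screen
    front, back = back, front     # the cheap part: just swap the buffers
    display(front)                # the viewer never sees a half-drawn frame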

What is Bump Mapping?

By Dinesh Thakur

Bump Mapping: An extension of the technique of TEXTURE MAPPING to create more realistic 3D images, in which an additional BITMAP (the bump map) applied to a surface contains not colour data but small displacements to be applied to the surface normal at each point. When the image is rendered, these displacements alter the angles of reflected rays in such a way as to convey the illusion of surface relief, even though the surface actually remains completely smooth.

What is Bitmap and Bitmap Editor?

By Dinesh Thakur

Bitmap: A table of digital BITS used to represent, for example, a picture or a text character, each bit in the table being interpreted as the presence or absence of a screen PIXEL or a printed dot. The principle can be illustrated by the following table, which represents the letter Z as a 6 x 6 table of bits:

1 1 1 1 1 1
0 0 0 0 1 0
0 0 0 1 0 0
0 0 1 0 0 0
0 1 0 0 0 0
1 1 1 1 1 1
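You can “display” that table yourself with a few lines of Python, printing a character wherever a bit is 1 and a space wherever it is 0:

# Minimal sketch: treating the 6 x 6 table above as a bitmap of the letter Z.

Z_BITMAP = [
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
]

for row in Z_BITMAP:
    print("".join("#" if bit else " " for bit in row))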

Bitmap Editor: The generic name for programs, such as Adobe’s PHOTOSHOP, whose function is to create and manipulate BITMAPPED images: that is, files in which pictures are stored as a collection of individual PIXELS rather than as geometric descriptions. A bitmap editor enables its user to change the colour of individual pixels or whole areas of pixels at once, using a variety of tools that mimic the effect of paintbrushes, pencils, spray cans and more. Typical bitmap editors also support a variety of graphics FILTER operations, such as EDGE ENHANCEMENT, that can be applied automatically to the whole image.

What is Bitmapped Display?

By Dinesh Thakur

Bitmapped Display: Strictly, a display in which each PIXEL on the screen is represented by a single BIT stored in VIDEO MEMORY, which would limit its applicability to black-and-white images only. More frequently, however, the term is used to describe any display in which each pixel corresponds to a byte or word in video memory, which covers all contemporary computer colour displays. The term was coined in distinction to the now-obsolete VECTOR DISPLAY, which drew lines instead of pixels.

What is Bit Block Transfer?

By Dinesh Thakur

Bit Block Transfer (bitblt, bitblit): An operation used in computer graphics programming that moves a block of bits en masse from one location in memory to another. If these bits represent display pixels, the effect is to move part of an image from one place to another, and so bitblt is much used in graphical user interface code to display WINDOWS, ICONS and FONT characters quickly. Because this operation is used so extensively, many modern microprocessors provide special instructions to speed it up and a hardware GRAPHICS ACCELERATOR usually contains a dedicated unit called a BLITTER that performs the operation as quickly as possible.
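In pure Python the operation might be sketched like this; a real blitter does the same thing in hardware, usually combining the source and destination with a logical operation. The frame-buffer layout below is invented for illustration.

# Minimal sketch of a bit block transfer: copying a rectangular block of
# pixels from one position in a 2-D frame buffer to another.

def bitblt(buffer, src_x, src_y, width, height, dst_x, dst_y):
    """Copy a width x height block of pixels within the buffer."""
    # Read the source block first so an overlapping copy behaves correctly.
    block = [row[src_x:src_x + width]
             for row in buffer[src_y:src_y + height]]
    for dy, row in enumerate(block):
        buffer[dst_y + dy][dst_x:dst_x + width] = row

screen = [[0] * 10 for _ in range(6)]
screen[1][1:4] = [1, 1, 1]          # a small 'image' at (1, 1)
bitblt(screen, 1, 1, 3, 1, 6, 4)    # copy it to (6, 4)
for row in screen:
    print(row)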

What is bitmapped font?

By Dinesh Thakur

Bitmapped Font, Bitmap Font: A character FONT in which each individual letter form is stored as a table of PIXELS (a picture), in contrast to an OUTLINE FONT where each character is stored as a set of lines or strokes (a description of how to draw the character). Bitmapped fonts are fast and easy to RENDER onto a screen or printer – by simply copying the bits for the character – and for this reason were preferred on older computer systems (up to and including MS-DOS PCs) that used CHARACTER-BASED displays.

 

Bitmapped fonts render correctly only at the size they were created: to enlarge or reduce their characters involves duplicating or removing pixels, which gives the letters an unattractive jaggy appearance. In contrast, outline fonts can be scaled to any size (above a minimum) with little loss of quality and hence they have almost entirely displaced bitmapped fonts, except for applications such as instruments and hand-held computers with small fixed-size displays. Examples of bitmapped fonts include the fixed-pitch Courier and MS Serif fonts supplied with Windows.

All fonts (typefaces) that you see on the screen are bitmapped. That’s the only way the computer can display the typeface on the screen, since the screen is composed of dots (pixels). Some fonts have no other information to them than the bitmapped display you see on the screen, while other fonts have additional data that is used by the printer to print the typeface smoothly on a page (outline, or scalable fonts).

What is bitmap?

By Dinesh Thakur

A bitmap is an image or shape of any kind-a picture, a text character, a photo-that’s composed of a collection of tiny individual dots. A wild landscape on your screen is a bitmapped graphic, or simply a bitmap. Remember that whatever you see on the screen is composed of tiny dots called pixels. When you make a big swipe across the screen in a paint program with your computerized “brush,” all that really happens is that you turn some of those pixels on and some off. You can then edit that bitmapped swipe dot by dot; that is, you can change any of the pixels in the image. Bitmaps can be created by a scanner, which converts drawings and photographs into electronic form, or by a human artist (like you) working with a paint program.

A computer screen is made up of thousands of dots of light, called pixels (short for picture elements). A single pixel is composed of up to three rays of light, red, blue, and green, blended into a single dot on-screen. By combining these rays and changing their intensity, virtually any color can be displayed on-screen. The number of bits required to display a single pixel on-screen varies with the total number of colors a particular monitor can display. The larger the number of possible colors, the larger the number of bits required to describe the exact color needed. Regardless of the actual number of bits required, a bitmap is a series of these bits stored in memory, which form a pattern when read left to right, top to bottom. When decoded by the computer and displayed as pixels on-screen, this pattern forms the image of a picture.

The simplest bitmaps are monochrome, which have only one color against a background. For these, the computer needs just a single bit of information for each pixel (remember, a bit is the smallest unit of data the computer recognizes). One bit is all it takes to turn the dot off (black) or on (white). To produce the image you see, the bits get “mapped” to the pixels on the screen in a pattern that displays the image.

In images containing more than black and white, you need more than one bit to specify the colors or shades of gray of each dot in the image. Multicolor images are bitmaps also. An image that can have many different colors or shades of gray is called a “deep bitmap,” while a monochrome bitmap is known as a “bilevel bitmap.” The “depth” of a bitmap-how many colors or shades it can contain – has a huge impact on how much memory and/or disk space the image consumes. A 256-color bitmap needs 8 times as much information, and thus disk space and memory, as a monochrome bitmap.
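The memory arithmetic is easy to check with a short Python sketch; the 640 x 480 image size below is just an example.

# Minimal sketch: how bitmap depth affects memory consumption.

def bitmap_bytes(width, height, bits_per_pixel):
    """Memory needed for an uncompressed bitmap, in bytes."""
    return width * height * bits_per_pixel // 8

print(bitmap_bytes(640, 480, 1))    # monochrome:     38400 bytes
print(bitmap_bytes(640, 480, 8))    # 256 colours:   307200 bytes (8 times more)
print(bitmap_bytes(640, 480, 24))   # 16.7M colours: 921600 bytes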

The resolution of a bitmapped image depends on the application or scanner you use to create the image, and the resolution setting you choose at the time. It’s common to find bitmapped images with resolutions of 72 dots per inch (dpi), 144 dpi, 300 dpi, or even 600 dpi. A bitmap’s resolution is permanently fixed-a bitmapped graphic created at 72 dpi will print at 72 dpi even on a 300 dpi printer such as the LaserWriter. On the other hand, you can never exceed the resolution of your output device (the screen, printer, or what have you); even though you scanned an image at 600 dpi, it still only prints at 300 dpi on a LaserWriter, since that’s the LaserWriter’s top resolution.

You can contrast bitmapped images with vector or object-oriented images, in which the image is represented by a mathematical description of the shapes involved. You can edit the shapes of an object graphic, but not the individual dots. On the other hand, object-oriented graphics are always displayed or printed at the maximum resolution of the output device. But keep in mind that an object-oriented graphic is still displayed as a bitmap on the screen.

Bit-mapped fonts and bit-mapped graphics use pixels to form pictures or letters. However, because of the number of bits required to encode a single pixel, bit-mapped fonts and graphics consume a great deal of memory. Trying to create a perfect circle by coloring the squares on a piece of graph paper demonstrates the problems inherent in this method of displaying text and graphics. Because a computer screen is laid out in a grid of dots (pixels), like graph paper, a distortion will show up along the angled and curved lines in an image. This distortion is called “jaggies” or “aliasing.”

What is anode?

By Dinesh Thakur

The positively charged ELECTRODE that attracts ELECTRONS within a current-consuming device such as an electrolytic cell, discharge tube or valve. In a current-producing BATTERY, the anode is the electrode that receives electrons internally and hence is connected to the external negative terminal.

What is analogue or analog video?

By Dinesh Thakur

Analogue video: A video signal that is captured, transmitted and stored as a continuously varying voltage, rather than as a stream of bits as in digital video. Up until the advent of digital TV in the late 1990s, television worked by transmitting analogue video signals, and older video tape formats such as VHS, Betamax and U-matic all store analogue signals.

The disadvantage of analogue video is that it is prone to noise interference, while its advantage is its great density: a domestic 3-hour VHS cassette holds the equivalent of 16 gigabytes of digital data.

What is Aspect ratio?

By Dinesh Thakur

Aspect ratio is a fancy term for “proportion,” or the ratio of width to height: for example, 4:3 for a computer screen. For instance, if a direction in a software manual tells you to “hold down the Shift key while you resize a graphic in order to maintain the aspect ratio,” it simply means that if you don’t hold down the Shift key you will stretch the image out of proportion.

Some combinations of computers and printers have trouble maintaining the correct aspect ratio when the image goes from the screen to the printer, or when the image is transferred from one system to another, so the aspect ratio can be an important specification to consider when choosing hardware.

The aspect ratio of the screen determines the most efficient screen RESOLUTIONS and the most desirable shape for individual PIXELS, all of which may have to change upon the introduction of HIGH DEFINITION TELEVISION.

What is aliasing?

By Dinesh Thakur


Aliasing has two definitions, depending on whether you’re talking about pictures or sounds.

When a diagonal line or a curved arc drawn on the screen looks as if it was made out of bricks, when it looks like stair steps instead of a slide, the effect is technically called aliasing. Most of us would say it had the jaggies. It can be ameliorated by the technique of ANTIALIASING.

The phenomenon by which a digitized sound sample may pick up unwanted spurious frequencies. It affects digitally reproduced sound, which is the kind of sound your computer probably makes. (The beeps you hear from a standard PC speaker aren’t digital, but a sound board you plug into your PC creates sound digitally; the Mac has built-in digital sound.)

Digital sound is based on a sequence of numbers (digits) that are converted into sound waves by electronic circuits. The computer has to guess at what sound to make between each number in the sequence. If the time between each value is too long (if the “sampling rate” is low), you hear the mistaken guesses as a metallic, static distortion called aliasing. To squelch aliasing, you need a soundcard with a sampling rate of around 40 kilohertz (40,000 times a second) or higher.

Both these phenomena result from sampling the data at a frequency below its NYQUIST FREQUENCY: that is, displaying a line on a screen of too low a resolution or sampling the sound at too low a frequency. In fact aliasing is a potential source of distortion when sampling any form of data, another example being the optical illusion that wheels are rotating backwards often seen when old movies are shown on television.
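A small Python sketch of the sound case, with made-up frequencies: a 3000 Hz tone sampled at only 4000 samples per second (below its Nyquist rate of 6000) produces the same samples as a genuine 1000 Hz tone, which is the spurious “alias”.

# Minimal sketch of aliasing in sampled sound.

import math

def sample(freq_hz, rate_hz, n=8):
    """Return n successive samples of a cosine tone at the given sample rate."""
    return [math.cos(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

a = sample(3000, 4000)   # a 3000 Hz tone, sampled below its Nyquist rate (6000 Hz)
b = sample(1000, 4000)   # a genuine 1000 Hz tone at the same sample rate
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))   # True: the samples match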

What is 32-bit color?

By Dinesh Thakur

On a color monitor, each pixel has three dots arranged in a triad: one red, one green, and one blue dot. Each dot can deal with a maximum of 8 bits, which makes a total of 24 bits per pixel. With the possibility of combining the 256 levels of each of the three color dots, 24-bit color gives you the awesome potential of 16.7 million colors on your screen (256 x 256 x 256). Many of these colors differ so slightly that even the most acute observer couldn’t tell the difference between them. Simply stated: 16 million colors is more than enough. (How do you get black and white if there are three colored dots? If all dots are on, the pixel is white; if all dots are off, the pixel is black.)

Now, you will often hear of 32-bit color, which there isn’t, really. Those other 8 bits don’t offer any extra color, but they do offer the capacity for masking and channeling.

What is Grayscale?

By Dinesh Thakur

On a grayscale monitor, each pixel can accept from 1 to 8 bits of data, which will show from 2 to 256 shades of gray.

If there are 2 bits per pixel, there are four possible combinations of on and off: on/on, off/off, on/off, and off/on. Each of these combinations displays a different shade of gray (including black and white).

If there are 4 bits per pixel (2^4 = 16), you will have 16 levels of gray.

If there are 8 bits per pixel, there are 256 possible combinations (2^8). This is the maximum number of grays possible on any grayscale monitor, which is plenty because our eyes can’t distinguish more than that number of grays anyway.

What is Monochrome?

By Dinesh Thakur

If an item is monochrome, that means it uses only one color on a differently colored background. In a monochrome monitor, these pixels have only one color phosphor. The picture is created with, say, black dots (or lines) against a white background. A monochrome image pixel can have two values, on (white) or off (black), and this can be represented by 1-bit as either 0 or 1. Most printers are monochrome, meaning they only print black toner on white paper.

If an image is one-bit, that means 1 bit of information is sent to each pixel on the screen. That bit can turn the pixel on (white) or off (black). All 1-bit images are black-and-white. On a monochrome monitor, the pixels can’t deal with more than that one bit of data, so all you can ever get is black and white.

Data Compression – What is Data Compression? Explain Lossless Compression and Lossy Compression.

By Dinesh Thakur

Data compression is a function of the presentation layer in the OSI reference model. Compression is often used to maximize the use of bandwidth across a network or to optimize disk space when saving data.

There are two general types of compression algorithms:

 

1. Lossless compression

2. Lossy compression

Figure: Types of Compression

Lossless Compression

Lossless compression compresses the data in such a way that when data is decompressed it is exactly the same as it was before compression i.e. there is no loss of data.

Lossless compression is used to compress file data such as executable code, text files, and numeric data, because programs that process such data cannot tolerate mistakes in it.

Lossless compression will typically not compress a file as much as lossy compression techniques, and may take more processing power to accomplish the compression.

Lossless Compression Algorithms

The various algorithms used to implement lossless data compression are:

 

1. Run length encoding

2. Differential pulse code modulation

3. Dictionary based encoding

1. Run length encoding

• This method replaces the consecutive occurrences of a given symbol with only one copy of the symbol along with a count of how many times that symbol occurs. Hence the name ‘run-length’.

• For example, the string AAABBCDDDD would be encoded as 3A2B1C4D.

• A real-life example where run-length encoding is quite effective is the fax machine. Most faxes are white sheets with the occasional black text. So, a run-length encoding scheme can take each line and transmit a code for white and the number of pixels, then the code for black and the number of pixels, and so on.

• This method of compression must be used carefully. If there is not a lot of repetition in the data then it is possible the run length encoding scheme would actually increase the size of a file.
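A minimal Python sketch of run-length encoding follows. The textual output format “3A2B1C4D” just mirrors the example above; real schemes pack the counts into bits rather than characters. The last call also illustrates the warning in the final point: data with no runs actually grows.

# Minimal sketch of run-length encoding.

def rle_encode(data):
    if not data:
        return ""
    out = []
    run_char, run_len = data[0], 1
    for ch in data[1:]:
        if ch == run_char:
            run_len += 1                      # extend the current run
        else:
            out.append(f"{run_len}{run_char}")  # emit the finished run
            run_char, run_len = ch, 1
    out.append(f"{run_len}{run_char}")
    return "".join(out)

print(rle_encode("AAABBCDDDD"))               # 3A2B1C4D
print(rle_encode("W" * 12 + "B" + "W" * 4))   # 12W1B4W -- a fax-like line
print(rle_encode("ABCD"))                     # 1A1B1C1D -- longer than the input!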

2. Differential pulse code modulation

• In this method first a reference symbol is placed. Then for each symbol in the data, we place the difference between that symbol and the reference symbol used.

• For example, using symbol A as the reference symbol, the string AAABBCDDDD would be encoded as A0001123333, since A is the same as the reference symbol, B has a difference of 1 from the reference symbol, and so on.
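A minimal Python sketch of the same idea follows, encoding each character as its distance from the reference symbol and then decoding it back.

# Minimal sketch of differential encoding against a single reference symbol.

def dpcm_encode(data):
    reference = data[0]
    diffs = [ord(ch) - ord(reference) for ch in data]   # distance from reference
    return reference, diffs

def dpcm_decode(reference, diffs):
    return "".join(chr(ord(reference) + d) for d in diffs)

ref, diffs = dpcm_encode("AAABBCDDDD")
print(ref, diffs)                 # A [0, 0, 0, 1, 1, 2, 3, 3, 3, 3]
print(dpcm_decode(ref, diffs))    # AAABBCDDDD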

3. Dictionary based encoding

• One of the best known dictionary based encoding algorithms is Lempel-Ziv (LZ) compression algorithm.

• This method is also known as substitution coder.

• In this method, a dictionary (table) of variable length strings (common phrases) is built.

• This dictionary contains almost every string that is expected to occur in data.

• When any of these strings occur in the data, then they are replaced with the corresponding index to the dictionary.

• In this method, instead of working with individual characters in text data, we treat each word as a string and output the index in the dictionary for that word.

• For example, let us say that the word “compression” has the index 4978 in one particular dictionary; it is the 4978th word in /usr/share/dict/words. To compress a body of text, each time the string “compression” appears, it would be replaced by 4978.
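A minimal Python sketch of the idea follows, using a tiny invented dictionary rather than a real word list; both sender and receiver must share the same dictionary for the indexes to mean anything.

# Minimal sketch of dictionary-based encoding: replace each word by its index.

dictionary = ["a", "algorithm", "compression", "is", "lossless", "method"]
index_of = {word: i for i, word in enumerate(dictionary)}

def dict_encode(text):
    return [index_of[word] for word in text.split()]

def dict_decode(indexes):
    return " ".join(dictionary[i] for i in indexes)

codes = dict_encode("compression is a lossless method")
print(codes)                 # [2, 3, 0, 4, 5]
print(dict_decode(codes))    # compression is a lossless method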

Lossy Compression

Lossy compression is the one that does not promise that the data received is exactly the same as the data sent, i.e. some data may be lost.

This is because a lossy algorithm removes information that it cannot later restore.

Lossy algorithms are used to compress still images, video and audio.

Lossy algorithms typically achieve much better compression ratios than the lossless algorithms.

Audio Compression

• Audio compression is used for speech or music.

• For speech, we need to compress a 64-kbps digitized signal; for music, we need to compress a 1.411-Mbps signal.

 

• Two types of techniques are used for audio compression:

 

1. Predictive encoding

2. Perceptual encoding

Figure: Techniques of Audio Compression

Predictive encoding

• In predictive encoding, the differences between the samples are encoded instead of encoding all the sampled values.

• This type of compression is normally used for speech.

• Several standards have been defined such as GSM (13 kbps), G.729 (8 kbps), and G.723.3 (6.4 or 5.3 kbps).

Perceptual encoding

• Perceptual encoding is used to create CD-quality audio that requires a transmission bandwidth of 1.411 Mbps.

• MP3 (MPEG audio layer 3), a part of the MPEG standard, uses this perceptual encoding.

• Perceptual encoding is based on the science of psychoacoustics, a study of how people perceive sound.

• The perceptual encoding exploits certain flaws in the human auditory system to encode a signal in such a way that it sounds the same to a human listener, even if it looks quite different on an oscilloscope.

• The key property of perceptual coding is that some sounds can mask other sounds. For example, imagine that you are broadcasting a live flute concert and all of a sudden someone starts striking a hammer on a metal sheet. You will not be able to hear the flute any more. Its sound has been masked by the hammer.

• Such a technique explained above is called frequency masking-the ability of a loud sound in one frequency band to hide a softer sound in another frequency band that would have been audible in the absence of the loud sound.

• Masking can also be done on the basis of time. For example, even after the hammer stops striking the metal sheet, the flute will remain inaudible for a short period of time, because the ear turns down its gain when a loud sound starts and takes a finite time to turn it up again.

• Thus, a loud sound can numb our ears for a short time even after the sound has stopped. This effect is called temporal masking.

MP3

• MP3 uses these two phenomena, i.e. frequency masking and temporal masking to compress audio signals.

• In such a system, the encoder analyzes the spectrum and divides it into several groups. Zero bits are allocated to the frequency ranges that are totally masked.

• A small number of bits are allocated to the frequency ranges that are partially masked.

• A larger number of bits are allocated to the frequency ranges that are not masked.

• Based on the range of frequencies in the original analog audio, MP3 produces three data rates: 96 kbps, 128 kbps and 160 kbps.

How does the interactive graphics display work? Explain

By Dinesh Thakur

All operations on computers are in terms of 0’s and 1’s, and hence figures are also stored in terms of 0’s and 1’s. Thus a picture file, when viewed inside the memory, is no different from other files – a string of 0s and 1s. However, their treatment when they are to be displayed makes the difference. Pictures are actually formed with the help of a frame-buffer display, as follows.

 

A frame buffer display contains a frame buffer, a storage device that stores the image in terms of 0’s and 1’s, held as groups of 8 bits (or multiples of 8) along each row. The display controller reads the contents of the frame buffer one line at a time, converts the digital values to analog, and sends them to the screen, where the image is displayed. The following figure illustrates this graphics display system.

Figures can be stored and drawn in two ways – either by line drawing or by raster graphics methods. In the line-drawing scheme, figures are represented by equations – for example, a straight line can be represented by the equation y = mx + c, a circle by x² + y² = r², and so on. If (x, y) are representative points, then all the (x, y) value pairs that satisfy the equation form part of the figure, while those that do not lie outside the figure. Thus, to generate any figure, the equation of the figure must be known; all points that satisfy the equation are then evaluated, and these are the points to be illuminated on the screen.

 

A moving electron beam, as we know, illuminates the screen, or the monitor. Whenever the beam is switched on, the electrons illuminate the phosphorescent screen and display a point. In the line-drawing scheme, this beam is made to traverse the path of the figure to be traced, and we get the figure we need. For example, in the above-cited example, if the electron beam is made to move from a to b along the points, we get the line.

 

The raster scan mechanism uses a different technique and is often found more convenient to manipulate and operate with. In this case, a “frame buffer” (a chunk of memory) is made to store the pixel values. (Remember, the screen can be thought of as being made up of a number of horizontal rows of pixels (picture cells), each pixel representing a point on the picture. The number of such horizontal and vertical points determines the resolution: more points means a higher resolution and therefore a better picture.)

 

Typical resolutions are 640 x 480, 860 x 640, 1024 x 860, etc., where the figures indicate the number of pixels along each row and the number of rows, respectively. On a computer screen (unlike in standard mathematics), the top left-hand point indicates the origin, or the point (0,0), and distances are measured horizontally to the right and vertically downwards, as shown below.

Figure: Horizontal and vertical coordinate directions on the screen

Now, assuming a 1024 x 1024 point screen, any figure to be displayed must fit within this space. The “frame buffer” stores the “status” of each of these pixels – say, 0 indicates the pixel is off and hence is not part of the picture, and 1 indicates it is part of the picture and is to be displayed. This data is used to display the pictures.
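Putting the two ideas together, here is a small Python sketch: the frame buffer is just a grid of 0s and 1s, and drawing a line means switching on every pixel whose coordinates approximately satisfy y = mx + c. This is a simplified DDA-style approach; real systems typically use Bresenham’s algorithm, and the buffer size here is made up.

# Minimal sketch: a frame buffer as a grid of bits, plus simple line drawing.

WIDTH, HEIGHT = 16, 8
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def draw_line(x1, y1, x2, y2):
    """Sample the line equation at each x and switch the nearest pixel on."""
    m = (y2 - y1) / (x2 - x1)
    c = y1 - m * x1
    for x in range(x1, x2 + 1):
        y = round(m * x + c)
        frame_buffer[y][x] = 1

draw_line(1, 1, 14, 6)

# The "display controller" reads the buffer one row at a time:
for row in frame_buffer:
    print("".join("#" if bit else "." for bit in row))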

