
Computer Notes


Computer Graphics

What is Halftone?

By Dinesh Thakur

The process by which CONTINUOUS TONE photographs are reproduced in print (for example in newspapers and magazines) by reducing them to a grid of tiny dots. (The resulting image is also commonly known as a halftone.) Each individual dot is printed using a single coloured ink of fixed intensity, and the intensity of colour the reader perceives is controlled by varying the size and density of the dots to reveal more or less of the underlying white paper.

The superficial similarity between this use of dots in printing and the PIXELS that constitute computer images is misleading, since pixels actually vary in colour and intensity rather than size. When computer-generated pictures are prepared for printing, each pixel gets broken up into a sub-grid of halftone dots that simulate its tonal value spatially. The pixel and halftone grids may interfere with each other and cause the problem called SCREEN CLASH.
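The size-for-tone relationship described above can be sketched numerically. The following function is an illustrative sketch (not drawn from the article): it computes the radius an ink dot needs so that its area covers the right fraction of a square halftone cell, which is why darker tones produce visibly bigger dots.

```python
import math

def halftone_dot_radius(tone, cell_size=8):
    """Map a continuous tone (0.0 = white, 1.0 = solid black) to the
    radius of the ink dot inside a square halftone cell.

    The dot's AREA, not its radius, must be proportional to the tone,
    so we solve pi * r^2 = tone * cell_area for r.
    """
    cell_area = cell_size * cell_size
    return math.sqrt(tone * cell_area / math.pi)

# A mid-grey tone fills exactly half the 64-unit cell with ink:
r = halftone_dot_radius(0.5)
```

Perceived intensity then depends only on how much white paper the dot leaves uncovered, exactly as the entry describes.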

What is Graphics Adapter?

An EXPANSION CARD that enables a personal computer to create a graphical display. The term harks back to the original 1981 IBM PC, which could display only text and required such an optional extra card to ‘adapt’ it to display graphics.

The succession of graphics adapter standards that IBM produced throughout the 1980s (CGA, EGA, VGA) both defined and confined the graphics capability of personal computers until the arrival of IBM-compatibles and the graphical Windows operating system, which spawned a whole industry manufacturing graphics adapters with higher capabilities.

Nowadays most graphics adapters are also powerful GRAPHICS ACCELERATORS, capable of displaying 24-bit colour at resolutions of 1280 x 1024 pixels or better. Providing software support for the plethora of different makes of adapter has become an onerous task for software manufacturers.

What is Graphics?

Pictures on a computer display or the process of creating pictures on a computer display. The term came into use when there was still a distinction between computers that could display only text and those that could also display pictures. This distinction is lost now that almost all computers employ a GRAPHICAL USER INTERFACE.

Since a computer can ultimately deal only with binary numbers, pictures have to be DIGITIZED (reduced to lists of numbers) in order to be stored, processed or displayed. The most common graphical display devices are the CATHODE RAY TUBE inside a monitor, and the PRINTER, both of which present pictures as two-dimensional arrays of dots. The most natural representation for a picture inside the computer is therefore just a list of the colours for each dot in the output; this is called a BITMAPPED representation. Bitmapping implies that each dot in the picture corresponds to one or more bits in the computer’s video memory.

An alternative representation is to treat a picture as if it were composed of simple geometric shapes and record the relative positions of these shapes; this is called a vector representation (see more under VECTOR GRAPHICS). A typical vector representation might build up a picture from straight lines, storing only the coordinates of the endpoints of each line.

Vector and bitmapped representations have complementary strengths and weaknesses. Most modern output devices work by drawing dots, which makes it more efficient to display bitmapped images. There are a few, mostly obsolete, display technologies, such as the GRAPH PLOTTER or the VECTOR DISPLAY tube, that can draw lines directly, but it is more typical for a vector image to be first converted into bitmap (the process called RENDERING) before displaying it on a dot-oriented device. Rendering involves extra work for the computer, which is performed either by software or by special hardware called a GRAPHICS ACCELERATOR.

Vector representations are easily edited by moving, resizing or deleting individual shapes, whereas in a bitmap all that can be changed is the colour of the individual pixels. Vector images can easily be rotated and scaled to different sizes with no loss of quality, the computer simply multiplying the endpoint coordinates by a suitable factor. A bitmap, on the other hand, can be magnified only by duplicating each pixel, which gives an unsightly jagged effect.

Vector representations are most suitable for pictures that are actually drawn on the computer (such as engineering drawings, document layouts or line illustrations), and for images that must be reproduced at various sizes (such as fonts). On the other hand, bitmaps are more suitable for manipulating photographs of real world scenes and objects that contain many continuously varying colours and ill-defined shapes – both scanners and digital cameras produce bit-mapped images as their output.
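The contrast between scaling a vector image and magnifying a bitmap can be shown in a few lines of Python (the shapes and function names here are illustrative):

```python
def scale_vector_shape(lines, factor):
    """Scale a vector image: simply multiply every endpoint coordinate
    by the factor. No quality is lost, because the lines are re-rendered
    at the new size."""
    return [((x1 * factor, y1 * factor), (x2 * factor, y2 * factor))
            for (x1, y1), (x2, y2) in lines]

def magnify_bitmap(pixels, factor):
    """Magnify a bitmap the only way possible: duplicate each pixel,
    which produces the jagged 'staircase' effect at large factors."""
    return [[value for value in row for _ in range(factor)]
            for row in pixels for _ in range(factor)]

# A triangle stored as three line segments (endpoint coordinates only):
triangle = [((0, 0), (4, 0)), ((4, 0), (2, 3)), ((2, 3), (0, 0))]
big = scale_vector_shape(triangle, 10)   # endpoints now span 0..40

# A 2 x 2 bitmap magnified 2x: each pixel becomes a 2 x 2 block
tiny = [[0, 1], [1, 0]]
blocky = magnify_bitmap(tiny, 2)
```

Note that the vector version stores only six coordinate pairs no matter how large the triangle is drawn, while the bitmap grows quadratically with the magnification factor.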

What is Gouraud Shading?

An algorithm employed in 3D GRAPHICS to fool the eye into seeing as a smoothly curving surface an object that is actually constructed from a mesh of polygons. Gouraud’s algorithm requires the colour at each vertex of a polygon to be supplied as data, from which it INTERPOLATES the colour of every PIXEL inside the polygon: this is a relatively fast procedure, and may be made faster still if implemented in a hardware GRAPHICS ACCELERATOR chip. 
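The interpolation step can be sketched with barycentric weights, one common way to implement it (the function below is an illustrative sketch, not Gouraud's original formulation):

```python
def gouraud_shade(p, v0, v1, v2, c0, c1, c2):
    """Interpolate the colour at point p inside triangle (v0, v1, v2)
    from the vertex colours c0, c1, c2 using barycentric weights."""
    (x, y), (x0, y0), (x1, y1), (x2, y2) = p, v0, v1, v2
    # Signed area of the triangle (doubled); weights are sub-areas / area.
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
    w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
    w2 = 1.0 - w0 - w1
    return tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(c0, c1, c2))

# At the centroid of a triangle with pure red, green and blue vertices,
# the three colours mix equally, giving a mid grey:
colour = gouraud_shade((1, 1), (0, 0), (3, 0), (0, 3),
                       (255, 0, 0), (0, 255, 0), (0, 0, 255))
```

In practice the interpolation is done incrementally along scanlines, which is what makes it fast enough for hardware.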

What is File Compression?

A class of techniques for shrinking the size of a data file to reduce its storage requirement, by processing the data it contains using some suitably reversible algorithm. Compression methods tend to be more effective for a particular kind of data, so that text files will typically be compressed using a different algorithm from graphics or sound files.

 

For example RUN-LENGTH ENCODING is very effective for compressing flat-shaded computer graphics, which contain many long runs of identical PIXEL values, but is quite poor for compressing tonally rich photographic material where adjacent pixels are different. Dictionary-based algorithms such as LEMPEL-ZIV COMPRESSION are very effective for compressing text, but perform poorly on binary data such as pictures or sound.
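A minimal run-length encoder shows why flat-shaded graphics compress so well, and why the scheme is losslessly reversible (an illustrative sketch):

```python
def rle_encode(pixels):
    """Run-length encode a row of pixel values as (value, count) pairs.
    Long runs of identical pixels collapse to a single pair."""
    runs = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Reverse the encoding exactly -- RLE is fully reversible."""
    return [v for v, n in runs for _ in range(n)]

# 16 pixels of flat shading collapse to just two pairs:
flat = ['white'] * 12 + ['black'] * 4
runs = rle_encode(flat)        # [('white', 12), ('black', 4)]
assert rle_decode(runs) == flat
```

On photographic material, where adjacent pixels differ, almost every run has length 1 and the "compressed" output can actually be larger than the input.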

Compression may be applied automatically by an operating system or application, or applied manually using a utility program such as PKZIP, TAR, LHA or STUFFIT.

What is VGA (video graphics array)?

VGA, which stands for video graphics array, is currently the most popular standard for PC screen display equipment. Technically, a VGA is a type of video adapter (circuitry in the computer that controls the screen). IBM developed the VGA for its PS/2 line of computers (the name “Video Graphics Array” is an IBM trademark), but loads of other manufacturers make VGA add-in boards (that plug into a slot in the PC) and VGA chips (in some PCs, these VGA chips are built right into the main part of the computer, the motherboard). A VGA monitor is a monitor that works with a VGA adapter.

A standard VGA system displays up to 640 x 480 pixels (little dots) on the screen, with up to 16 different colors at a time. In the lower resolution, 320 x 200 pixels, the screen can show up to 256 colors at once. These specifications are much better than the older video adapter standards, the CGA and EGA, but they’re not good enough for many people. If you’re buying a new system or replacing an older video adapter, make sure you get a “Super VGA” adapter, which can handle higher resolutions (800 x 600 or higher) and many more colors. Remember, though, that the higher the resolution and the more colors you have to work with, the slower the display will function, and the more memory you’ll need on the card.
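The memory cost mentioned above follows directly from arithmetic. A small sketch, assuming the card stores one whole frame with just enough bits per pixel to index the available colours:

```python
def video_memory_bytes(width, height, colours):
    """Bytes of video memory needed to hold one frame, where each
    pixel stores log2(colours) bits (rounded up to whole bits)."""
    bits_per_pixel = (colours - 1).bit_length()
    return width * height * bits_per_pixel // 8

# Standard VGA, 16 colours: 640 * 480 * 4 bits = 150 KB
vga = video_memory_bytes(640, 480, 16)       # 153600 bytes
# Super VGA, 256 colours:   800 * 600 * 8 bits = ~469 KB
svga = video_memory_bytes(800, 600, 256)     # 480000 bytes
```

Doubling both the resolution and the colour depth quickly multiplies the memory requirement, which is why higher modes need more memory on the card.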

Unlike EGA and CGA monitors, VGA monitors are analog devices, meaning they can display an infinite range of colors (the number of colors you see is limited by the VGA adapter, not the monitor).

When you’re shopping for a VGA monitor, keep several points in mind. First, if you want to use higher resolutions than the VGA standard of 640 x 480, you need a multiscan monitor; a plain VGA monitor will not work at higher resolutions. Second, some VGA monitors give a sharper image than others. Partly, this depends on the dot pitch: a monitor with a smaller dot pitch (like .28mm) will have better image clarity than one with a larger dot pitch (like .39mm).

A VGA monitor requires an interface card and a cable. You need to know how much memory is on the card. You may want to add more memory, especially if you plan to create and use complex graphic or photographic images. VGA is the current standard in monitors, and as such is usually the most readily available.

Explain vector vs. raster graphics.

Vector graphics are stored in the computer as a set of mathematical formulas describing the shapes that make up each image. When you display a vector graphic on the screen or print it, these formulas are converted into the patterns of dots you can see. Because the dots are not specified until you display or print the graphic, you can change the size of the image without any loss of quality, and the image will always appear at the highest resolution of whatever screen or printer you’re using. The term vector graphics means exactly the same thing as object-oriented (or just object) graphics.

The contrasting term is raster graphic (the terms raster and bitmapped are synonymous). In a raster graphic, the actual dots that make up the image you see are defined when the graphic is created, so the resolution is fixed; changing the size will make the image look coarse or muddy. See paint program for an example illustrating the fixed resolution.

Most Macintosh people use the terms object-oriented and bitmapped rather than vector and raster. Most PC people use both pairs of terms interchangeably.

What is Ray Trace?

Ray tracing is an incredibly complex method of producing shadows, reflections, and refractions in high-quality, three-dimensionally simulated computer graphics. Ray tracing calculates the brightness, the reflectivity, and the transparency level of every object in the image. And it does this backwards. That is, it traces the rays of light back from the viewer’s eye to the object off which the light bounced from the original light source, taking into consideration along the way any other objects the light bounced off or refracted through.
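The core geometric step of that backward trace is finding where a ray from the eye first hits an object. A minimal sketch for the simplest object, a sphere (an illustrative implementation, not the article's):

```python
import math

def intersect_sphere(origin, direction, centre, radius):
    """Trace a ray from the eye: return the distance along the
    (normalised) direction vector to the nearest hit on a sphere,
    or None if the ray misses it."""
    ox = origin[0] - centre[0]
    oy = origin[1] - centre[1]
    oz = origin[2] - centre[2]
    dx, dy, dz = direction
    # Substitute the ray equation into the sphere equation; this gives
    # a quadratic t^2 + b*t + c = 0 in the distance t along the ray.
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t > 0 else None

# Eye at the origin looking down +z at a unit sphere centred 5 units away:
hit = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)  # t == 4.0
```

A full ray tracer repeats this test against every object for every pixel, then spawns further rays at each hit point for shadows, reflections and refractions, which is why the method is so expensive.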

What is Rasterize or Rasterizing?

To understand what rasterizing does, first you need to know a little about the images in the computer: Bitmapped (raster) graphics and fonts are created with tiny little dots. Object-oriented (vector) graphics and fonts are created with outlines. Output devices, like printers (except for some plotters) and monitors, can only print or display images using dots, not outlines. This means that when an object-oriented graphic or font is output to a printer that prints in dots per inch (as most of them do) or to a monitor that displays in pixels (as most of them do), the outlines must be turned into dots. This process of turning the outlines of the objects into dots is called rasterizing. Everything you see on your monitor has been rasterized. Everything you print has been rasterized.
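The outline-to-dots step can be sketched for the simplest outline, a straight line. The function below is an illustrative miniature of the idea (a simple DDA walk), not what any particular RIP actually runs:

```python
def rasterize_line(x0, y0, x1, y1):
    """Turn the outline of a straight line into a list of dots (pixel
    coordinates) by stepping along it in equal increments and rounding
    each position to the nearest grid point."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    return [(round(x0 + (x1 - x0) * i / steps),
             round(y0 + (y1 - y0) * i / steps)) for i in range(steps + 1)]

dots = rasterize_line(0, 0, 4, 2)
# [(0, 0), (1, 0), (2, 1), (3, 2), (4, 2)]
```

A real rasterizer does the same thing for curves and filled shapes, at 300 to 2540 dots per inch instead of this toy grid.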

When you print object-oriented and PostScript images and fonts to an image setter, the information about building those outlines goes through a RIP (raster image processor), a piece of hardware that stands between the computer and the image setter (printer). The software in the RIP turns the outlines into the dots that the image setter will print, at resolutions like 1270 or 2540 dots per inch.

When you output (print) to a laser printer that understands PostScript, the computer chip inside the PostScript printer rasterizes the images so they can be printed in dots, usually at a resolution of 300 to 600 dots per inch.

When the image is displayed (output) on the monitor, it has actually been rasterized so that it could be created out of the pixels on the screen.

And if you have a non-Postscript printer, you should read the definition for Adobe Type Manager to better understand how that software rasterizes your fonts, both to the monitor and to the printer.

What is Object-oriented Graphics?

Also known as vector graphics, object-oriented graphics are shapes represented with mathematical formulas. (This is very different from bitmapped graphics, in which the image is mapped to the pixels on the screen, dot by dot.)

In a program that uses object-oriented graphics, each separate element you draw (every circle, every line, and every rectangle) is defined and stored as a separate object. Each object is defined by its vector points, or end points. Because each graphic object is defined mathematically, rather than as a specific set of dots, you can change its proportions, make it larger or smaller, stretch it, rotate it, change its pattern, and so on, without distorting the line width or affecting the object’s sharpness and clarity (the resolution). Because each object is a separate entity, you can overlap objects in any order and change that order whenever you feel like it. To select a graphical object in an object-oriented graphics program, you usually just click on the object with the pointer. When you select it, a set of handles (little black squares) appears on or around the object (compare marquee). By dragging the handles, you can change the size or shape of the object or the curviness of any curved lines. You can also copy or cut a selected object to the Clipboard, or move it around on the screen, without disturbing any other object.

 The resolution of object-oriented graphics is device independent. This means that if you print a graphic image to a printer that has a resolution of 300 dots per inch, the graphic will print at 300 dots per inch. If you print the same image to an image setter that has a resolution of 2540 dots per inch, the graphic will print at 2540 dots per inch. (Bitmapped graphics, though, always print at the same resolution.)

Fonts, or typefaces, can also be object-oriented, but they’re not usually referred to this way; instead, such fonts are known as outline fonts, scalable fonts, or vector fonts.

What is Multiscan?

Multiscan refers to a type of computer monitor that automatically matches the synchronizing signals sent from the computer’s video adapter (the video circuitry). On a standard television-type monitor, the image you see is formed by a single beam of electrons scanning lickety-split across the picture tube. The beam starts at one corner, traces a narrow horizontal line, then moves down a bit and traces the next line. The speed with which the beam travels horizontally and vertically (the horizontal and vertical “scan frequencies”) must match the synchronizing signals from the computer’s video circuits.

 

The problem is, the synch signals vary with each type of video adapter for PCs (EGA, VGA, Super VGA, and so on). Since the scan rate is fixed in an ordinary monitor, you can only use the monitor with one type of video adapter; a VGA monitor only works with a VGA adapter, and so on. By contrast, a multiscan monitor will work with many different types of adapters, within limits.

When you buy, be sure your monitor’s range of scan frequencies matches all the adapters you may use it with. At a minimum, it should have a 50-75 Hz (hertz, times per second) vertical frequency and a 30-50 kHz (kilohertz) horizontal frequency. The vertical frequency measures how fast the entire screen is “repainted,” and is also called the refresh rate. You should also insist on a variable frequency monitor, one that can match any frequency within those ranges, rather than one that simply operates at several different but fixed frequencies. And, by the way, multiscan monitors are more expensive than fixed-scan rate monitors.
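The two frequencies are linked by simple arithmetic: the beam must trace every line of the frame, once per refresh, plus a few "invisible" lines for vertical blanking. A rough sketch (the 5% blanking overhead here is an illustrative assumption, not a figure from the article):

```python
def horizontal_khz(visible_lines, refresh_hz, blanking_overhead=0.05):
    """Approximate horizontal scan frequency in kHz: lines per frame
    times frames per second, padded by a small vertical-blanking
    allowance (the overhead fraction is an assumption)."""
    total_lines = visible_lines * (1 + blanking_overhead)
    return total_lines * refresh_hz / 1000.0

# A 480-line image refreshed 60 times per second needs roughly 30 kHz:
khz = horizontal_khz(480, 60)
```

This is why a monitor quoted at 30-50 kHz horizontal and 50-75 Hz vertical covers the common adapter modes of the day: higher resolutions or faster refresh rates push the required horizontal frequency up proportionally.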

There’s much less inconsistency in the Macintosh world, so a multiscan monitor isn’t so important. But many of them will work with a Mac.

What is Monitor?

Monitor is another word for the computer screen. But “monitor” encompasses the whole piece of equipment, rather than just the screen part that you look at. You also might hear a monitor called a display, as in “Oooh, I got a new two-page display,” or a VDT (video display terminal), as in newspaper journalism, or a CRT (cathode ray tube), which is the technical term for a picture tube. However, flat panel screens like LCDs are not referred to as monitors, even if they’re housed externally from a computer.

 

Some monitors are built right into the computers, like in the small Macintoshes. When you purchase a larger Macintosh or most other kinds of computers, you must buy the monitor separate from the computer itself (that’s why they’re called “modular”). Monitor size is measured like a television, from one corner to the diagonally opposite corner.

Some monitors are monochrome, meaning they can show only one color on a background, like black on white (Macs), green on black, or amber on black (PCs). Grayscale monitors can display different shades of gray, rather than imitating the different shades with combinations of black and white dots. And there are many different color monitors. A color monitor can display any of several levels of resolution and can display varying numbers of colors, determined by several factors, such as the amount of memory in the computer or the type of card that is controlling the monitor. See the section in Appendix A on how to read a computer monitor advertisement.

What is Low Resolution?

If an image is displayed on your screen or printed on the page in low-res (short for low resolution), that means you are seeing a low-grade version of it. Some graphics are just low-resolution to begin with, such as graphics made in the paint file format at 72 dots per inch. Some graphics are created as complex, high-resolution images, but you may choose to display them on the screen or print them in low-res just to save time, since it takes longer for a screen or a printer to create the high-resolution version.

 

For instance, if you are producing a brochure on your computer and in the brochure you have several high-resolution photographs in full color, it can take a long time to turn pages or change views. So you can choose to view these images in low resolution while you are working, just so you can move around the screen faster. You can choose to print them in low resolution just so you get an idea of the look of the brochure without waiting to reproduce the entire high resolution images.

The lower the resolution, the less information there is in a given amount of space (in a square inch, for instance). It may mean that each pixel in that square inch of the screen is not providing enough information to resolve the image clearly, or it may mean there are fewer printed dots per inch on the page.

What is LED (light emitting diode)?

LED stands for light emitting diode. You know those little lights on your computer, usually near the hard disk, that flash while the computer is working? Those are LEDs. They work on the principle of electroluminescence, which refers to substances that glow when you apply electricity. LEDs were used in digital watches, but now all digital watches use LCDs, because LCDs take less power.

LEDs are ordinary diodes (the most basic electronic component) that, due to their composition, happen to glow red, green, or amber when energized by a couple of volts. They use less power than incandescent bulbs and last over 100,000 hours.

What is Interlaced or Non-Interlaced Monitors?

In a standard television-like computer monitor, an image is produced on the screen by a beam of electrons sweeping rapidly across the surface of the picture tube, lighting up the screen as it passes. Starting at the top, the beam traces one horizontal row across the screen, shifts down a bit and does another row, and so on, until the full height of the screen has been covered.

 

In an interlaced monitor, the electron beam takes two passes to form a complete image: it skips every other row on the first pass, and then goes back and fills in the missing rows. A non-interlaced monitor does the whole job in one pass, tracing each row consecutively. Interlaced monitors are easier to build and therefore cheaper but, as you can guess, they aren’t as good as non-interlaced monitors. The problem is that, all things being equal, it takes twice as long to create the complete screen image on an interlaced monitor. That’s long enough to spoil the illusion that you’re looking at a steady picture, and the image on the screen flickers annoyingly.

What is Greyscale?

Some computer screens are greyscale, rather than plain black-and-white (monochrome). On a black-and-white screen, there is only one bit of information being sent to each pixel (dot), so the pixels on the screen are either on (white) or off (black). On a greyscale monitor, anywhere from 2 to 16 bits of information are sent to each pixel, so it is possible to display grey tones in the pixels, rather than just black or white.

 

The gray tones are the result of some of the bits being on and some being off. If the monitor uses 4 bits, there are 16 possible combinations of on and off, so there are 16 possible shades of gray.
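The bits-to-shades arithmetic is simply powers of two; a short sketch (illustrative function names):

```python
def grey_shades(bits_per_pixel):
    """Number of distinct grey levels a display can show when each
    pixel is driven by this many bits."""
    return 2 ** bits_per_pixel

def grey_level(value, bits_per_pixel):
    """Map an n-bit pixel value to an intensity between 0.0 (black)
    and 1.0 (white)."""
    return value / (grey_shades(bits_per_pixel) - 1)

assert grey_shades(1) == 2     # monochrome: black or white only
assert grey_shades(4) == 16    # 4 bits give 16 possible shades
assert grey_shades(8) == 256
```

Each extra bit per pixel doubles the number of available shades, which is why 8-bit grayscale (256 levels) became the common standard for scanned photographs.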

A grayscale is also one variety of TIFF (tagged image file format). When you scan an image as a grayscale, each dot on the screen can register a different gray value. A grayscale tries to approximate the continuous gray tone of photographs.

What is Graphical User Interface?

A graphical user interface is fondly called a “GUI,” pronounced “gooey.” The word “graphical” means pictures; “user” means the person who uses it; “interface” means what you see on the screen and how you work with it. So a graphical user interface means that you (the user) get to work with little pictures on the screen to boss the computer around, rather than typing in lines of code and commands.

(GUI) An INTERACTIVE outer layer presented by a computer software product (for example an operating system) to make it easier to use by operating through pictures as well as words. Graphical user interfaces employ visual metaphors, in which objects drawn on the computer’s screen mimic in some way the behaviour of real objects, and manipulating the screen object controls part of the program.

A graphical user interface uses menus and icons (pictorial representations) to choose commands, start applications, make changes to documents, store files, delete files, etc. You can use the mouse to control a cursor or pointer on the screen to do these things, or you can alternatively use the keyboard to do most actions. A graphical user interface is considered user-friendly.

The most popular GUI metaphor requires the user to point at pictures on the screen with an arrow pointer steered by a MOUSE or similar input device. Clicking the MOUSE BUTTONS while pointing to a screen object selects or activates that object, and may enable it to be moved across the screen by dragging as if it were a real object.

Take, for example, the action of scrolling a block of text that is too long to fit onto the screen. A non-graphical user interface might offer a ‘scroll’ command, invoked by pressing a certain combination of keys, say CTRL+S. Under a GUI, by contrast, a picture of an object called a SCROLLBAR appears on the screen, with a movable button that causes the text to scroll up and down according to its position. Similarly, moving a block of text in a WORD PROCESSOR that employs a GUI involves merely selecting it by dragging the mouse pointer across it until the text becomes HIGHLIGHTED, then dragging the highlighted area to its intended destination.

There is now an accepted ‘vocabulary’ of such screen objects which behave in more or less similar ways across different applications, and even across different operating systems. These include: WINDOWS, ICONS, pull down and pop-up MENUS, BUTTONS and button bars, check boxes, dialogues and tabbed property sheets. Variants of these GUI objects are used to control programs under Microsoft Windows, Apple’s MacOS, and on UNIX systems that have a windowing system such as Motif or KDE installed.

GUIs have many advantages and some disadvantages. They make programs much easier to learn and use, by exploiting natural hand-to-eye coordination instead of numerous obscure command sequences. They reduce the need for fluent typing skills, and make the operation of software more comprehensible and hence less mysterious and anxiety-prone. For visually-oriented tasks such as word processing, illustration and graphic design they have proved revolutionary.

On the deficit side, GUIs require far more computing resources than older systems. It is usual for the operating system itself to draw most of the screen objects (via SYSTEM CALLS) to relieve application programs from the overhead of creating them from scratch each time, which means that GUI-based operating systems require typically 100 to 1000 times more working memory and processing power than those with old text-based interfaces.

GUIs can also present great difficulties for people with visual disabilities, and their interactive nature makes it difficult to automate repetitive tasks by batch processing. Neither do GUIs automatically promote good user interface design. Hiding 100 poorly-chosen commands behind the tabs of a property sheet is no better than hiding them among an old-fashioned menu hierarchy; the point is to reduce them to 5 more sensible ones.

Historically, the invention of the GUI must be credited to Xerox PARC, where the first GUI-based workstations – the XEROX STAR and XEROX DORADO – were designed during the 1970s. These proved too expensive and too radical for commercial exploitation, but it was following a visit to PARC by Steve Jobs in 1979 that Apple released the LISA, the first commercial GUI computer, and later the more successful MACINTOSH. It was only following the 1990 release of Windows version 3.0 that GUIs became ubiquitous on IBM-compatible PCs.

What is FatBits?

On the Macintosh, some programs let you edit bitmapped graphics as FatBits. In FatBits mode, the individual dots, or pixels, making up the image are blown up so you can work with them easily, one at a time. If you see stray dots in an image you’ve scanned, or if a line in a picture is just slightly too thick or too skinny, it’s almost impossible to make precise changes working at normal size.

 

Traditionally, you can get into FatBits mode by selecting the pencil tool, holding down the Command key, and clicking on the image at the spot you want to see enlarged. You might also have a magnifying glass tool, or a menu command for FatBits or a command to Enlarge. Sometimes to get out of FatBits you can hold down the Option key and click.

What is Emulsion?

On a piece of photographic film, such as the kind you use to shoot photographs, one side of the film is coated with a layer of chemicals called the emulsion. This is the side that absorbs the light, and the emulsion is scratchable and dull. The non-emulsion side of film looks shinier and is more difficult to scratch. You can see the emulsion side on any negative you have hanging around.

 

The kind of film that comes out of image setters or that a pressperson uses to print your brochure also has an emulsion side. If you are creating something on your computer that will be output onto film (rather than onto plain paper from your personal printer or onto resin-coated paper from the image setter), you need to know whether the film should be output emulsion side up or down. The only person who can tell you the correct answer is the pressperson who will be printing the final job. She will say, “I need your film right reading emulsion side up” (RREU) or maybe “right reading emulsion side down” (RRED). This means that if you were to lay the film on a light table as if you were reading it properly (reading it right) the emulsion would be up, or facing you, or it would be down, on the side of the film away from you.

What is Duotone?

A typical black-and-white photograph uses only one color. In a duotone, though, the black-and-white photograph (or other artwork) is reproduced using two colors. Perhaps it’s black and brown, or black and grey, or dark grey and a rusty color. Halftone images are generated for the photograph, one slightly underexposed and one slightly overexposed, and the two are printed one on top of the other. The result can be an incredibly rich, powerful image, much richer and more interesting than the image with one color. The artist/designer has control over the values and percentages of the two different colors. It is also possible to make “tritones” using three different colors, and “quadtones” using four different colors.

 

A duotone (or tritone or quadtone) does not refer to the use of spot color for the second color; that is, if you take a photograph and color the lady’s dress pink, that’s not a duotone, even though there are two colors in the image.

What is Dot Pitch?

The dot pitch of a color monitor measures the size of the tiny individual dots of phosphorescent material that coat the back side of the picture tube’s face. The dot pitch helps determine how sharp the image looks, independent of the resolution (which is measured in pixels). A smaller dot pitch is better.

 

Here’s the technical scoop: Each point of light on a color monitor is formed from a triad of three separate dots of phosphor: one that glows red, one green, and one blue (the color you finally see depends on how intensely each dot in the triad is excited by the picture tube’s electronic beam). The dot pitch is the vertical distance between the centre of one dot and the next like-colored dot directly above or below it (the way the dots are arranged, pairs of like-colored dots are always two rows apart). The farther apart the centres of the dots are, the bigger the dots and the fuzzier the image. All other things being equal, a monitor with a smaller dot pitch is preferable to one with a larger dot pitch, though other factors are more important in determining image sharpness below a certain dot pitch threshold.

A dot pitch of .28 mm or smaller is ideal for 14- or 15-inch monitors; a dot pitch of .31mm or less for 17- to 20-inch monitors. Resolution, in pixels, is determined by the video circuitry in your computer. Depending on the resolution and on the dot pitch, a single pixel may occupy 4 to 16 separate phosphor triads.
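The "4 to 16 triads per pixel" figure can be estimated by comparing the physical width of one pixel with the dot pitch. A rough sketch (the simple square-grid model and the 265 mm visible-width figure are illustrative assumptions; real CRT dot layouts are staggered):

```python
def triads_per_pixel(screen_width_mm, h_resolution, dot_pitch_mm):
    """Rough estimate of how many phosphor triads a single pixel spans:
    divide the physical width of one pixel by the dot pitch, then
    square it to count triads in both dimensions."""
    pixel_mm = screen_width_mm / h_resolution
    across = pixel_mm / dot_pitch_mm
    return across * across

# A 14-inch monitor (~265 mm visible width) at 640 x 480 with .28 mm pitch:
n = triads_per_pixel(265, 640, 0.28)   # roughly 2 triads per pixel
```

At lower resolutions or larger screens the same formula gives the higher triads-per-pixel counts, and once each pixel spans several triads, shrinking the dot pitch further stops improving apparent sharpness.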

What is Dot Gain?

Whenever a photograph, painting or drawing containing many colors or gray tones is printed, the colors and tones must be simulated with tiny dots. Dot gain refers to an increase in the size of these dots when they are actually printed on the paper by the printing press. The dots can increase in size rather dramatically once the ink hits the paper, depending on the characteristics of the press, the absorbency of the paper, and the nature of the ink that is used.

 

The effect of dot gain on the final printed piece can be an increase in color intensity, because more ink is put on the paper (by the press) than was called for when the image was output, making the colors look darker than intended. Because the dots have gained in size, the final printed image can look not only darker, but muddy, low in contrast, and blurry.

The best of the color separation utilities offer a way to compensate for dot gain by adjusting the color curves when the film is imaged, but sometimes the artist has to compensate for dot gain manually within the software application used to create the image. It helps if the artist, the commercial press, and the service bureau work together closely so they can adjust for the quirks of each other’s equipment and compensate for the dot gain effectively.
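The idea behind curve-based compensation can be sketched numerically. This is a minimal model, not any particular separation utility's algorithm: it assumes dot gain peaks at 50% coverage (a common rule of thumb) and searches for the smaller film dot that will print at the desired size:

```python
# Minimal sketch of midtone dot-gain compensation (illustrative model,
# not a real press profile). Assumption: gain is parabolic, peaking at
# 50% dot area, with `midtone_gain` extra coverage at the peak.

def printed_dot(nominal, midtone_gain=0.20):
    """Predicted dot area on paper for a nominal film dot area (both 0..1)."""
    return nominal + 4 * midtone_gain * nominal * (1 - nominal)

def compensate(target, midtone_gain=0.20, steps=1000):
    """Find the nominal dot area whose printed result lands closest to target."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates,
               key=lambda n: abs(printed_dot(n, midtone_gain) - target))

# To end up with 50% coverage on a press with 20% midtone gain,
# the film must carry a noticeably smaller dot:
print(compensate(0.50))
```

The compensation curve is just the inverse of the press's gain curve: darker regions are pulled back before output so the ink spread brings them up to the intended tone rather than past it.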

What is Dithering?

By Dinesh Thakur

Dithering is a trick many graphics applications use to fool your eye into seeing far more colors (or gray tones) on the screen than are really there. The computer achieves this optical illusion by mixing together different colored pixels (tiny dots on the screen that make up an image) to trick the eye into thinking a totally new color exists. For instance, since pixels are so tiny, if the computer intermingles black and white dots, you’re going to think you’re seeing gray.

 

Color dithering smoothes out images by creating intermediate shades between two more extreme colors (called a blend). Dithering also makes the best use of the limited number of available colors, like when you open a 24-bit color image (millions of colors) on a computer that’s only capable of displaying 8-bit (256 colors).

Dithering can also produce a halftone-like effect in black-and-white images. Rather than dots of varying sizes, a dithered image has dots or squiggles that are all the same size, arranged in such a way as to create the illusion of gray values.
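One classic way to do this is ordered dithering with a Bayer threshold matrix. The sketch below (a minimal illustration, not a specific application's implementation) turns flat gray values into pure black-and-white patterns whose average brightness approximates the original tone:

```python
# Minimal sketch of ordered dithering with a 2x2 Bayer matrix: each
# gray value (0..255) is compared against a repeating threshold
# pattern, producing black/white dots whose density mimics the tone.

BAYER_2X2 = [[0, 2],
             [3, 1]]  # threshold ranks within each 2x2 cell

def dither(gray, width, height):
    """Return a width x height grid of 0/1 from a flat grayscale list."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Convert the cell rank into a threshold in 0..255.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * 255
            row.append(1 if gray[y * width + x] > threshold else 0)
        out.append(row)
    return out

# A flat 50%-gray patch becomes a checkerboard: half the dots are on,
# so from a distance the eye averages it back to gray.
patch = dither([128] * 16, 4, 4)
for row in patch:
    print("".join("#" if v else "." for v in row))
```

Larger Bayer matrices (4x4, 8x8) give more distinct tone levels; error-diffusion methods such as Floyd–Steinberg trade the regular pattern for a more organic-looking grain.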

What is Texture mapping?

By Dinesh Thakur

Early computer-generated images used shaded objects that had unnaturally smooth surfaces. Producing a textured surface with the techniques discussed so far would require creating an excessive number of surface pieces that follow all of the complexities of the texture. Texture mapping avoids this explosion of surfaces: it is a technique for painting scanned images of a texture onto the object being modeled. An associated technique, called bump mapping, improves the appearance of the texture still further.
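The core step is small: each surface point carries (u, v) texture coordinates in [0, 1], and the renderer looks up the corresponding texel in the scanned image. A minimal nearest-neighbour sketch (the 4x4 texture and its colors are illustrative assumptions):

```python
# Minimal sketch of the texture-lookup step in texture mapping.
# The 4x4 RGB "scanned texture" below is an illustrative stand-in.

TEX_W, TEX_H = 4, 4
texture = [[(x * 64, y * 64, 128) for x in range(TEX_W)]
           for y in range(TEX_H)]

def sample(u, v):
    """Nearest-neighbour texel lookup for u, v in [0, 1]."""
    x = min(int(u * TEX_W), TEX_W - 1)  # clamp so u == 1.0 stays in range
    y = min(int(v * TEX_H), TEX_H - 1)
    return texture[y][x]

# During rasterization, (u, v) is interpolated across the surface and
# each pixel fetches its color from the texture instead of the geometry:
print(sample(0.0, 0.0))   # top-left texel
print(sample(0.9, 0.9))   # bottom-right texel
```

Bump mapping works similarly but the looked-up value perturbs the surface normal rather than the color, so the lighting, not the silhouette, suggests the bumps.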

What is Flat shading?

By Dinesh Thakur

A problem with the Painter’s and Z-Buffer Algorithms is that they ignore the effects of the light source and use only the ambient light factor. Flat shading goes a bit further and includes the diffuse reflections as well. For each of the planar pieces, an intensity value is calculated from the surface normal, the direction to the light, and the ambient light and diffuse coefficient constants. Since none of these changes at any point on the piece, all of the pixels in that piece will have the same intensity value. The resulting image will appear to be faceted, with ridges running along the boundaries of the pieces that make up an object.
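The per-piece intensity calculation described above can be sketched directly: ambient term plus a diffuse term from Lambert's cosine law, computed once per polygon. The coefficient values here are illustrative assumptions:

```python
# Minimal sketch of flat shading: one intensity for a whole planar
# piece, from the surface normal, the direction to the light, and
# constant ambient/diffuse coefficients (values are illustrative).
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_shade(normal, to_light, ambient=0.2, diffuse=0.7):
    """Intensity for an entire polygon: ambient + diffuse * max(0, N.L)."""
    n, l = normalize(normal), normalize(to_light)
    n_dot_l = sum(a * b for a, b in zip(n, l))
    return ambient + diffuse * max(0.0, n_dot_l)

# A face pointing straight at the light is brightest (ambient + diffuse);
# a face edge-on or facing away gets only the ambient term:
print(flat_shade((0, 0, 1), (0, 0, 1)))
print(flat_shade((1, 0, 0), (0, 0, 1)))
```

Because every pixel of a piece gets this single value, adjacent pieces with different normals meet in a visible intensity step — the faceted ridges the paragraph describes. Gouraud and Phong shading remove them by interpolating across the piece instead.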


Dinesh Thakur is a Technology Columinist and founder of Computer Notes.

Copyright © 2025. All Rights Reserved.
