The PC Video Hardware FAQ

This is just a collection of tidbits of information from old archived copies of this FAQ, such as this version, last modified in 1997 and archived in 1998: https://web.archive.org/web/19980203125934/http://www.heartlab.rri.uwo.ca/vidfaq/part1.html

How can I hook more than one monitor to my video card?

The following discussion assumes that you want to display the same video signal on a number of monitors. If instead you want to use two or more monitors to increase your screen real estate, refer to the section "Can I use two video cards in the same system?".

The best way to do this is to purchase a commercial VGA signal splitter or video distribution amplifier. These are not cheap, but they will provide the best results. A video splitter designed for VGA or SVGA will include the proper high bandwidth video amplifiers as well as the proper cable termination and shielding.

Someone may suggest that you just cut and splice a couple of VGA cables together, but this won't provide good results. Major problems relate to cable termination and interference.

In order for the video to be sharp and clear without ghosting or ringing, the video cable must be treated as a transmission line. What this means from a practical point of view is that it must use high quality coaxial cable, multiple monitors must be daisychained and not star connected, and the proper terminating resistors must be put only at the very end.

Another problem is that video signals operate at high frequencies, and as a result they can cause interference with neighbouring electronic devices, and even the monitor itself. In fact, the video cable can, when designed improperly, act like a nice big antenna. To minimize the interference emanating from the cable, considerations like conductor material, length, shielding, connectors and chokes are taken into account. Chokes are those (usually cylindrical) objects that are located at the ends of many video cables.

The result of a good cable design is an impedance matched circuit, which causes a minimum amount of interference, and provides a clean crisp signal to the monitor.

If you know enough about electronics, and the monitors and video card in question, then go ahead and design and build a splitter. If you don't, you may cause additional problems. Basic rules for a cable-only solution:

  1. Use high quality 75 ohm coax - RG59 is a generic part number but many variations are available.
  2. Multiple monitors must be daisychained and not split in a star configuration.
  3. Only the last monitor should have its 75 ohm terminating resistors in place. They should be removed from all other monitors or if they have switches, set for HiZ.
  4. Pay attention to the grounds - signal returns. Keep the stubs - the connections to intermediate monitors - as short as possible.

This will work quite well for workstation monitors - those with BNC coax connectors. Most PC monitors with the 15 pin VGA connectors do not have any means of disconnecting the terminating resistors without actually doing some desoldering - which you really should not attempt unless you are familiar with the safety issues involved in working inside a monitor.

If you decide to build an active video splitter which uses video amplifiers, be aware that the video and sync voltage levels are different in a PC: the video is typically 0.7 V p-p, and the syncs are typically TTL level (5 V p-p), so the splitter or amplifier must be able to handle both levels. Finally, pay attention to the video bandwidth capability of the splitter/amp if you care about preserving image detail.

As noted, a better solution is to buy an active video splitter. This will include the proper high bandwidth video amplifiers and termination.

Can I use my CGA/EGA/VGA monitor as a TV?

CGA and EGA monitors are digital, rather than analog like televisions and more modern monitors, usually making them incompatible with TV. Television signals contain all colour information along with syncs on one conductor. In addition, there are two types of television signals - the RF that comes in from cable or an antenna, and composite. The line-in/out on a VCR is a composite signal, and doesn't contain all of the different channel information that an RF cable signal does.

The original CGA monitors accept a digital TTL (RGBI) signal, which uses different voltage levels from composite video, so a composite source can't be connected to the TTL input directly. Some CGA (and perhaps EGA?) monitors also have composite-in jacks and circuitry inside them to display a composite signal. If you have one of these, then you can feed it a composite video signal from a VCR, laser disc player or other composite video source.

Since the VGA/SVGA monitor was introduced, computers have used an RGB video signal, with separate horizontal and vertical syncs. This means that five separate wires are used to carry the video signal from the computer to the monitor. In order to display a TV signal on a VGA monitor, signals for all five wires have to be derived from one, the so-called composite TV signal. This involves some electronic circuitry, so it can't be accomplished simply by attaching all of the wires together.

Because of the demands of higher pixel addressabilities and refresh rates, VGA and newer monitors run at horizontal scan rates of 30 kHz or higher, roughly double that of composite video (15.7 kHz). Basically, these newer monitors are unable to sync to a low enough frequency to display broadcast (NTSC or PAL) video. The end result is that it is not feasible to use a VGA or better monitor to display a television signal. The only real alternative is to purchase a TV card for your computer which allows you to display a television signal on your monitor. Personally, I'd rather spend the money on a small TV than look at a four inch window on my already cramped computer monitor.

What kinds of monitors are available?

Since a large variety of monitor types is available, only some of the more common ones are listed here, along with their most common applications. In fact, it's difficult to define exactly what a 'kind of monitor' means. There are grayscale and colour, analog and digital, flat and not. I'll try to give some general answers.

Monochrome, Grayscale and Colour

This one's easy. Monochrome monitors can display two colours, usually black and one of white, green or amber. Grayscale monitors display only intensities between white and black. Colour monitors display combinations of red, green and blue, each in an independent intensity. Even though each colour is displayed only in one frequency (the frequency of light that a particular type of phosphor emits when excited) the combination of the three colours in different intensities fools the eye such that it perceives a full range of colours.

Analog and Digital - [From: Michael Scott and Sam Goldwasser]

Today, digital monitors are much less common than analog though in the days of CGA and EGA the situation was reversed. Digital does _not_ mean that the monitor has digital controls. Rather, it indicates that the monitor accepts a digital input signal. Examples of digital monitors include early monochrome, the IBM EGA and CGA. Digital monitors are limited by their internal hardware as to the number of colours that they can display. Most digital monitors use TTL signals (Transistor Transistor Logic). Note that some sales persons will call a new analog monitor 'digital', in reference to the controls. Strictly speaking they are wrong - see "Analog vs. Digital Controls" below.

Analog colour monitors can display an unlimited range of colours, since they accept an analog video signal. This means that the horizontal and vertical syncs, and the actual video signals (usually red, green and blue), are analog. The total number of colours that a given computer system with an analog colour monitor can display is limited by the video card, not the monitor. It is rare for video cards to use digital-to-analog converters capable of generating more than 256 intensities per colour, so it is rare for systems to be able to display more than 256 x 256 x 256 = 16.7 million colours. Analog monitors can have digital controls on the front panel, and have digital circuitry inside. The vast majority of monitors currently in use are analog, as they are more flexible than the digital variety and typically cost less.

Most graphics cards put out an analog _or_ digital signal but not both. Similarly, most monitors accept an analog _or_ digital signal. It is feasible, however, to convert a digital video signal to analog and vice versa, though building such a device requires considerable electronics knowledge.

Shadow Masks and Aperture Grilles

By far the most common type of monitor uses a shadow mask, a fine metal grid which allows the electron beams for red, green and blue to strike only their proper phosphor dots. One alternative to this design is the aperture grille, which uses fine vertical wires for the same purpose. Sony first used the aperture grille in their Trinitron line.

Which one is better is not clear cut and is largely a matter of personal preference. Note that one complaint of Trinitron users is the presence of 1 or 2 very fine, almost invisible, horizontal stabilizing wires apparently needed to keep the fine aperture grille wires from moving out of place. You need to decide whether these will prove an unacceptable distraction. Trinitrons are usually considered to be brighter and sharper - but this is not always the case.

Analog vs. Digital Controls - [From: Michael Scott]

An analog monitor can have either analog (dials or knobs) or digital (buttons, sometimes with a dial) controls for brightness, contrast, screen size and position, pincushioning and trapezoidal shape, among others. Also, digital controls tend to be associated with a monitor's ability to store factory and user calibrations for image size and centering when operated at common video modes. This is desirable for a user who switches between DOS and Windows applications often, so they don't have to be bothered with readjusting these controls after each change. Analog controls have the benefit of being infinitely adjustable, while digital controls are limited to a number of discrete steps for each adjustment.

Flat Panel vs. Conventional Tubes

Cathode ray tubes (CRT's) are the most common, inexpensive and best performing displays available for most users. Variations of CRT's exist including older designs with double curvature, some with only curvature in the horizontal plane (like Sony Trinitrons) and others which are called flat screen.

Flat panel displays are usually used in laptops because of their small size, but are expensive to manufacture and don't provide the high refresh rates and bright colours that conventional CRT technology provides. Flat panel displays range from monochrome LCD (Liquid Crystal Display) to dual scan colour to active matrix colour. Because of the difficulty of manufacturing these displays, and the fact that currently their primary application is in laptops where the maximum display size is usually less than eleven inches, high resolution flat panel displays are rare and expensive. In future, it's very likely that flat panel displays will replace conventional CRT technology for many home and business computer users.

What types of flat-panel displays are available?

Flat-Panel Display (FPD) technology is evolving rapidly, so I will only touch on the most common current types of displays. There are other types of displays still in use, though the most common ones are based on LCD (Liquid Crystal Display) or PDP (Plasma Display Panel) technology. Currently, FPD's are expensive due to the difficulty of manufacturing (typically ~65% yield, i.e. roughly 1 in 3 panels is discarded) and the relatively small number of units sold. As manufacturing techniques improve and volume increases, prices will drop. In fact, in 1995, yields are up, volumes are up, _and_ factory capacity has expanded to the point where prices are dropping significantly this year. It appears there will be an oversupply of panels this year. However, prices are still not down to the point where they can compete with CRT monitors in desktop applications.

[From: Michael Scott]

The vast majority of FPD's are addressed in a matrix fashion, such that a given pixel is activated by powering the corresponding row and column. This means that an individual LCD element is required for each display pixel, unlike a CRT which may have several dot triads for each pixel.

LCD displays consist of a layer of liquid crystal sandwiched between two polarizing plates. The polarizers are aligned perpendicular to each other, so that light incident on the first polarizer would be completely blocked by the second one. The liquid crystal is a conducting matrix with cyanobiphenyls (long rod-like molecules) that are polar and will align themselves with an applied electric field. The neat feature of these molecules is that they shift incoming light out of phase when at rest. Light exiting the first polarizer passes through the liquid crystal matrix, is rotated out of phase by 90 degrees, and then passes through the second polarizer. Thus, unpowered LCD pixels appear bright. When a field is applied across the crystal matrix, the cyanobiphenyls align themselves parallel to the direction of light and no longer shift it out of phase; the light is then blocked by the second polarizer and the pixel appears black.

So, basic LCD technology can generate bright or dark pixels, like a monochrome (not grayscale!) monitor. In order for the eye to see shades of gray, the LC activation time is modulated: a pixel that is activated 50% of the time will appear as 50% gray. The number of shades that can be generated without visible flicker is limited by the response time of an LC element - typically 16 shades, although some display manufacturers claim 64 or more.
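
As a rough illustration (not from the original FAQ), the duty-cycle trick can be sketched in a few lines of Python. The 16-shade figure comes from the paragraph above; the error-accumulation scheme is an illustrative choice of mine:

  # Sketch: faking grayscale on a binary LCD element by temporal modulation.
  def drive_pattern(shade, shades=16, frames=16):
      """On/off states over `frames` refresh periods; the fraction of 'on'
      (dark) frames approximates shade / (shades - 1)."""
      duty = shade / (shades - 1)
      acc, pattern = 0.0, []
      for _ in range(frames):
          acc += duty
          if acc >= 1.0:          # Bresenham-like error accumulation
              pattern.append(1)
              acc -= 1.0
          else:
              pattern.append(0)
      return pattern

  print(drive_pattern(8))   # about half the frames driven -> ~50% gray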

Most colour LCD's use red, green and blue sub-pixels, similar to the way that CRT's use coloured dots of phosphor. The concept is the same; that when viewed from a distance, the human eye will perceive the three sub-pixels as a single colour. Obviously, this requires three times as many discrete elements as would a monochrome display of the same resolution. A second method of implementing colour uses a subtractive CYM (Cyan Yellow Magenta) system where white light is generated at the back plane. The light then passes through each of three LC layers, each one blocking one of the three colours. By activating the LC layers in different combinations, a variety of colours can be produced.

Common to all LCD displays is the requirement for either high ambient light levels, or bright backlighting, since liquid crystals don't generate light - they can only block it. Typically, LCD's allow 5-25% of incoming light (i.e. from the backlight source) to pass through. The result of this is that LCD technology requires a significant amount of energy, and this is an important consideration in lightweight laptop design.

Specific types of LCD's

Passive Matrix (twisted-nematic) LCD's

PM LCD's come in several types, including supertwisted nematic, double supertwisted nematic and triple supertwisted nematic. The original PM LCD's had a very limited viewing angle and poor contrast. Super and double supertwisted nematic designs provide an increased viewing angle and better contrast. The triple supertwisted design implements the subtractive CYM colour model mentioned above. PM designs are addressed in matrix fashion, so a VGA PM display would require 640 transistors horizontally and 480 vertically. Rows of pixels are activated sequentially by activating the row transistors while the appropriate column transistors are activated. This means that a given row is activated for only a short time during a screen refresh, resulting in poor contrast. Some implementations of PM technology break the screen into two parts, top and bottom, and refresh them independently, resulting in better contrast. These are called dual scan PM LCD's. In addition, PM displays suffer from very slow response times (40-200 ms), which is inadequate for many applications. Aside from their performance shortcomings, PM displays are inexpensive - their relatively low number of discrete components reduces manufacturing complexity and increases yields. Note that while dual scan displays are better than the original PM LCD's, they still don't have the high refresh rates and brightness of active matrix LCD's.

Active Matrix LCD's

Instead of using one switch (transistor) for each row and column, AM LCD's dedicate one switch to each pixel. This results in a more complex display which requires a larger number of discrete components, and therefore costs more to manufacture. An AM display is basically a large integrated circuit (IC). The benefits over the PM design are significant: pixels can be activated more frequently, giving better contrast and control over modulation, and AM technology can produce higher resolution displays that can generate more, and brighter, colours. The main types of AM LCD's are TFT (Thin-Film Transistor), MIM (Metal-Insulator-Metal) and PALC (Plasma Addressed Liquid Crystal).

Ferroelectric LCD's

FE LCD's use a special type of LC which holds its polarization after being charged. This reduces the required refresh rate and flicker. Also, FE LCD's have a fast response time of 100ns. Although they are very difficult to manufacture, and therefore expensive, FE LCD's may provide AM quality at PM prices in future.

Plasma Display Panels

PDP's have been under development for many years, and provide a rugged display technology. A layer of gas is sandwiched between two glass plates. Row electrodes run across one plate, while column electrodes run up and down the other. By activating a given row and column, the gas at the intersection is ionized, giving off light. The type of gas determines the colour of the display. Because they have excellent brightness and contrast and can easily be scaled to larger sizes, PDP's are an attractive technology. However, their high cost and historical lack of grayscale or colour have limited their applications. Advancements in colouring technology have allowed some manufacturers to produce large full-colour PDP's, and in future, large colour PDP's will be more common in workstation and HDTV applications.

What do those monitor specifications mean?

Like so many other areas in high-technology, a bewildering array of models are available, and along with them comes a list of specifications. There are a few that will help you understand more about the differences between specific models.

[Thanks to Bill Nott for straightening me out on bandwidth and dot clock]

Bandwidth: This is a measure of the total amount of data that the monitor can handle in one second, and is measured in megahertz (MHz). The bandwidth of a monitor is limited by the design of the video amplifiers. It is generally desirable to match the bandwidth of the monitor with the dot clock of the video controller to take full advantage of both devices. See 'Dot Clock' below and 'How do I calculate the minimum bandwidth required for a monitor?'.

Dot Clock: This is the clock frequency (in MHz) used by the video controller chip, sometimes termed the pixel rate. Many newer graphics processors have variable dot clocks, but usually only the highest is quoted in specifications. It is a measure of the maximum throughput that a video controller can sustain. A higher dot clock generally means that higher screen addressabilities, colour depths and vertical refresh rates are possible. If you want to know the _approximate_ maximum dot clock for your video card and it isn't specified, you can calculate an approximate value (which tends to overestimate) as outlined in "How do I calculate the minimum bandwidth required for a monitor?"
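
That calculation isn't reproduced in this excerpt, but a common rule of thumb is to multiply the visible pixel rate by roughly 1.3 to allow for horizontal and vertical blanking. A minimal Python sketch, assuming that factor:

  # Sketch: rough dot clock / bandwidth estimate for a video mode.
  def approx_dot_clock_mhz(h_pixels, v_pixels, refresh_hz, overhead=1.3):
      """Visible pixels per second, inflated for blanking overhead."""
      return h_pixels * v_pixels * refresh_hz * overhead / 1e6

  print(round(approx_dot_clock_mhz(1024, 768, 72), 1))   # ~73.6 MHz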

Horizontal Scan Rate (HSR): This is a measure of how many scanlines of pixel data the monitor can display in one second. The electron gun has to scan horizontally across the screen and then return to the beginning of the next line, ready to scan again. It is controlled by the horizontal sync signal which is generated by the video card, but is limited by the monitor. If the incoming signal's scan rate is too high, it exceeds the monitor's ability to deflect and modulate the electron gun, and the signal will be displayed incorrectly and/or the monitor may be damaged. VGA and SVGA monitors must have a minimum HSR of 31.5 kHz to be able to display the corresponding horizontal resolutions. Now we begin to see how the vertical refresh rate and the horizontal scan rate are related.

Refresh Rate (also Vertical Refresh Rate or Vertical Scan Rate): This measures the maximum number of frames that can be displayed on the monitor per second at a given pixel addressability (resolution). It is controlled by the vertical sync signal coming from the video card. The vertical sync tells the monitor to position the electron gun(s) at the upper left corner of the screen, ready to paint another frame. The maximum rate for a given monitor is dependent on the frequency capability of the vertical deflection circuit and the pixel addressability, since higher addressabilities require a higher horizontal scan rate. For example, a monitor which can provide 72Hz refresh rate at 800x600 may only be capable of 60Hz refresh at 1024x768. In order to be considered a VGA or SVGA monitor, the unit must provide a minimum vertical refresh rate of 60Hz. In general, higher is better, but there is no point in paying more for a video card and monitor which are capable of higher refresh rates if you won't notice a difference. 60 Hz is adequate for most people, but others are bothered by flicker and prefer 72 Hz or faster to reduce eye strain. The minimum acceptable refresh rate for you may also depend on the screen resolution and monitor size. In general, higher addressabilities require higher refresh rates to prevent flicker from becoming noticeable.

A monitor's maximum vertical refresh rate is limited by how fast it can direct the electron beam over all of the picture elements on the monitor. This involves moving the electron beam in the same manner as you would read the words in a book, left to right, top to bottom. It is limited by the maximum HSR, which determines the maximum horizontal pixel addressability the monitor can display and the number of scanlines (i.e. vertical addressability). For example, to display a screen with an addressability of 640 pixels horizontally and 480 vertically, a monitor with a HSR of 31.5kHz would take 480/31.5k = 15.2 ms to scan the entire screen once. In one second, this monitor could be refreshed 1000ms/15.2ms = 65.6 times. However, the vertical sync - movement of the electron gun to the upper left corner of the screen - requires some time, so the resulting vertical refresh rate is only 60 Hz.
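
The same arithmetic in a short Python sketch; the 'ideal' figure ignores vertical blanking, which is why the realized rate in the example above is 60 Hz rather than 65.6 Hz:

  # Sketch: upper bound on vertical refresh rate from the horizontal scan rate.
  def max_ideal_refresh_hz(hsr_khz, scanlines):
      """Refreshes per second if every scan period drew a visible line."""
      return hsr_khz * 1000.0 / scanlines

  print(round(max_ideal_refresh_hz(31.5, 480), 1))   # 65.6; ~60 Hz in practice
  # Vertical retrace consumes the remaining scan periods of each frame.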

Built into the HSR and vertical refresh rate are the horizontal and vertical blanking intervals, respectively. During horizontal blanking, the electron beam is moved back across the screen from the right end of one scan line to the beginning of the next scan line on the left of the screen. This occurs once for each scan line displayed. The vertical blanking interval occurs after the last scan line is displayed, and the electron beam is directed back to the upper left corner of the screen to begin displaying the next screen image.

Interlacing: Interlacing is a holdover from television standards which use it as a way of putting more information on the screen than would otherwise be possible. Original television technology could handle thirty full frames of video per second. However, a 30 Hz refresh rate results in highly annoying flicker, so the video signal is divided into two fields for each frame. This is accomplished by displaying first the odd scanlines (i.e. 1,3,5, etc.) for 1/60 of a second, and then displaying the even scanlines for the next 1/60 of a second. Your brain can integrate the two fields, and the result is a higher effective resolution and lower flicker. Ideally however, you want to display a frame of video information at full resolution - i.e. have one horizontal scanline for each horizontal line of pixels and display it at a high enough refresh rate that flickering is not an issue. Fortunately, modern monitor technology is capable of non-interlaced (NI) display at high vertical refresh rates. Many non-interlaced monitors can only work in non-interlaced mode up to a maximum pixel addressability, above which they revert to interlaced mode. For this reason, it is important that you ensure that the monitor you buy is capable of non-interlaced display at the maximum addressability and vertical refresh rate that you want to use. Typically, interlaced computer monitors refresh at about 87Hz, or 43.5 full frames per second. Interlaced displays can result in annoying flicker, especially noticeable with thin horizontal lines because the scanline is alternating between the line and background colours. It's very noticeable if you look at the top or bottom edge of a window on an interlaced monitor.

Dot Pitch: Images on a computer monitor are made up of glowing blobs of phosphor. On colour monitors, the smallest discrete picture element consists of three phosphor blobs, one each of red, green and blue. These elements are called dot triads. On most monitors the blobs are arranged in rows and columns, often with every other row staggered:

R G B R G B          R - Red
 B R G B R G         G - Green
R G B R G B          B - Blue
 B R G B R G

So, in the above example, a shape like the following might be a dot triad:

R G
 B

The dot pitch is measured as the shortest diagonal distance between the centers of any two neighbouring dot triads. This is the same as the shortest diagonal distance between any two phosphor blobs of the same colour. As dot pitch decreases, smaller objects can be resolved.

Resolution: First, the correct term that _should_ be used in place of resolution for most computer video discussion is pixel addressability. This is because in actuality, when we talk about 'resolution' being say, 640x480, we are referring to how many pixels can be addressed in the video frame buffer. Resolution should actually be defined as the smallest sized object that can be displayed on a given monitor, and so is really more closely related to dot pitch. So, two definitions are given here. The first is technically more correct, while the second is the more common interpretation (though strictly incorrect).

The technically correct answer:

[From: Bill Nott]

Resolution: The ability of a monitor to show fine detail, related mostly to the size of the electron beam within the CRT, but also to how well the focus is adjusted, and whether the video bandwidth is high enough. Note that the dot pitch of a CRT is generally an indication of the tube's resolution ability, but only because manufacturers try to maintain a spot size sufficiently larger than the dot pitch to prevent Moiré patterning from appearing.

The more mainstream usage:

This refers to the maximum number of pixels which can be displayed on the monitor at one time, and is expressed as (number of horizontal pixels) by (number of vertical pixels) i.e. 1024x768. While a higher maximum resolution is, in general, a good thing, keep in mind that as the resolution gets higher, the pixel size gets smaller. The resolution capability of a monitor puts practical limits on the maximum pixel addressability a user may want to use. You may notice that most addressabilities are in the ratio of 4:3. This is also a holdover from television technology which uses the same 4:3 aspect ratio. As a result, monitor size can be quoted with one diagonal measure, since the horizontal and vertical sizes can be calculated from the 4:3 ratio. In future, HDTV (High Definition Television) will use 16:9 (the same aspect ratio as used in movie theatres) and this may spill over into computer technology.
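
Because the aspect ratio fixes the proportions, the horizontal and vertical dimensions follow from the diagonal alone. A small Python sketch (the 17" input is just an example):

  # Sketch: recover width and height from a diagonal measure and aspect ratio.
  import math

  def screen_dims(diagonal, aspect_w=4, aspect_h=3):
      unit = diagonal / math.hypot(aspect_w, aspect_h)   # length per ratio unit
      return aspect_w * unit, aspect_h * unit

  w, h = screen_dims(17.0)            # a nominal 17" 4:3 tube
  print(round(w, 1), round(h, 1))     # 13.6 x 10.2 (inches)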

The following are recommendations:

Monitor Size	14"	15"	17"	20"

Resolution
640x480		A	A	B	B	
800x600		C	A	A	B	
1024x768	D	C	A	A	
1280x1024	D	D	C	A	

Legend:	A - Optimal
	B - Grainy, pixels become visible
	C - Usable, but objects become small and fine detail becomes
	    less distinct
	D - Not Recommended, objects are difficult to see and fine
	    detail can not be perceived

These are only recommendations. Personally, I can only afford a 14" NI monitor, and I run it at 1024x768. Objects are small, but my vision is 20/20 :-).

[From: Sam Goldwasser]

Keep in mind that there is also a very wide variation in the quality of the images between manufacturers and between models. Many factors contribute to this variation, including video amplifier bandwidth, sharpness of the electron beam (focus), dot pitch of the CRT shadowmask (or line pitch of a Trinitron's aperture grille), stability of the power supplies, bandwidth of the video card, quality of the cables, etc.

[From: Bill Nott]

Note: Many monitors are able to operate (synchronize, and present an image) at pixel addressabilities beyond their resolution capabilities. When operated in this way, fine detail (single pixels) within the image may not be perceptible by the user.

[From: Bill Nott and Michael Scott]

Size: Monitor sizes are typically quoted in inches, measured across the diagonal of the monitor, i.e. the longest possible measurement. Industry practice has been to list the size of the picture tube as the size of the monitor, but this has led to some problems. For example, a tube may measure 17" across the diagonal, but due to glass thickness and the fact that the tube is encased in the monitor housing, the viewable area is only 15.5". So, just because two monitors are advertised as being the same size doesn't mean that they have the same viewable area.

Part of the source of this inconsistency is that the monitor _tube_ manufacturers do not specify image performance such as focus and convergence up to the extreme edge of the phosphor, so the image size is adjusted to that which the tube supplier specifies. (Many monitors today provide the possibility of adjusting the image size larger than this, but may neglect to tell the user to expect image quality degradation beyond the calibrated image size.)

Some users may have allowed themselves to think (or wish) that the size designation should refer to the image size, but this has never been true. Regardless, within the US, the Federal Trade Commission (the body which brought standardization to the TV industry with use of the "V" terminology) is working to produce a standard for computer monitors. Some vendors actually quote viewable area in addition to the tube size, but this is not provided by all vendors yet. Until then, caveat emptor - take a measuring tape with you when you go shopping.

What is a shadow mask?

Monitors work by aiming a beam of electrons at a blob of phosphor, which in turn glows. This glow is what we perceive as a pixel on the screen. Your standard colour monitor has three dots (a dot triad) at each location on the screen: red, green and blue. There is a corresponding electron gun for each colour which emits an electron beam of varying intensity - this corresponds to colour brightness. To ensure that the electrons from each gun strike the corresponding phosphor, a 'shadow mask' is used. Because the three electron beams arrive at slightly different angles (from the three separate electron guns), it is possible to construct and align the shadow mask such that the electron beam from one gun will strike the correct phosphor dot, while the other two phosphors remain in shadow. This way, the intensity of red, green and blue can be separately controlled at each dot triad location.

The shadow mask is usually made of Invar (64% iron & 36% nickel), a thin plate with small holes punched in it. Only about 20-30% of the electron beam actually passes through the holes in the mask and hits the screen phosphor, so the rest of the energy is dissipated as heat from the mask. As a result, shadow mask monitors are prone to colour purity problems as they heat up, due to slight shifts in the position of the holes relative to the phosphor dots. Shadow masks - or their equivalent - have made mass production of CRT's possible.

What's the difference between fixed frequency and multisynchronous monitors?

[From: Michael Scott and Bill Nott]

There are two primary measures of the maximum effective pixel addressability and refresh rate that a monitor is capable of. The maximum rate that a monitor can refresh the screen is measured in Hertz (cycles/second) and is called the vertical refresh rate (or vertical scan rate). The horizontal scan rate is the number of times that the monitor can move the electron beam horizontally across the screen, then back to the beginning of the next scan line in one second. Most early analog monitors were fixed frequency, meaning that they were intended to work only at one specific vertical refresh rate (often 60 Hz) and one horizontal rate (often this is expressed as a number of pixels, but this isn't really the same). Most older SUN, SGI and other workstation monitors were of this type. Generally, these monitors are limited in their applications, since they require that the incoming video signal falls within narrow timing specifications.

These types of monitors also typically use composite video signals (with sync on green), so they are not compatible with most of today's PC graphics controllers. Also note that even if the composite video signal issue is overcome, there are additional issues related to attempting to use such monitors with a PC. Among these are DOS text mode support, and radiated emissions compliance. See "How can I get a fixed frequency (RGB) monitor to work on my PC?" below.

Due in part to the desire to produce more flexible monitors (i.e. fewer different models), the lack of PC SVGA/EVGA/etc. video standards, the recognition of an emerging trend toward higher pixel addressability formats within the computer industry, and a desire to provide an upward migration path for new customers, vendors started to produce monitors capable of syncing to video signals within a range of frequencies. Such monitors are called multisynchronous, or Multisync. Multisync is actually an NEC trademark, though it has become a generic term for a monitor which is capable of syncing to more than one video frequency. The meaning of multisynchronous has become somewhat muddled. To truly be multisynchronous, a monitor should be able to sync to any frequency of incoming video signal (within reason, of course). However, many so-called multisynchronous monitors can only sync to a number of discrete frequencies (usually 3 or 4).

If the video signal supplied to such a monitor is within the range of its deflection circuits, the image will be displayed; otherwise, the image may be either not synchronized or completely blanked. It is also possible to harm some monitors of this type by applying a video signal outside their ranges, if protective measures were not designed in. Thus, such a monitor will usually operate at the most common video modes, but may not operate at less common modes. This type of monitor may be referred to as a 'banded' design. A continuous frequency design should operate at any frequency within the specified range.

How do I calculate how much VRAM/DRAM I need?

This discussion only deals with calculating the minimum amount of RAM you will require _on your video card_ and is not related to main system RAM. The following calculations will tell you the minimum amount of RAM necessary, but some video cards do not use all of their RAM for the frame buffer (area that stores screen information). In particular, some Windows accelerator cards use some of their memory to store font or other graphical information. As a result, some cards with 2 Megs of video memory will not be able to display the higher pixel addressabilities and colour depths that you might expect.

There are two things that have to be decided in order to determine how much video RAM is required for a given pixel addressability. The first is the screen addressability in pixels and the second is the colour depth in bits. Before you go out and purchase a video card and/or extra RAM, make sure that the card is capable of the pixel addressability and number of colours that you want. Often cards are advertised as 1280x1024 and up to 16.7 million colours, _not_ 1280x1024 _at_ 16.7 million colours.

Standard pixel addressabilities available are:

  • 640x480, 800x600, 1024x768, 1280x1024 & 1600x1200

Less commonly, 1152x864 and 1600x1280 are supported.

For an idea of pixel addressabilities appropriate for your monitor, see "What pixel addressabilities are best for my monitor". Colour depth information is provided in "How does colour depth relate to the number of colours?".

To calculate the amount of video memory you need, simply multiply:

(horizontal addressability) * (vertical addressability) * (pixel depth)/8

So, for 1024x768 and 256 colours (that's 8 bit):

1024 * 768 * 8/8 = 786432 bytes i.e. a 1 Meg card will suffice

and for other configurations:

640x480x24 bit colour = 921600 (min. 1 Meg card)
800x600x16 bit colour = 960000 (min. 1 Meg card)
800x600x24 bit colour = 1440000 (min. 2 Meg card)
1024x768x16 bit colour = 1572864 (min. 2 Meg card)
1024x768x24 bit colour = 2359296 (min. 4 Meg card)
1280x1024x8 bit colour = 1310720 (min. 2 Meg card)
1280x1024x24 bit colour = 3932160 (min. 4 Meg card)
1600x1200x24 bit colour = 5760000 (min. 6 Meg card)

Note that many truecolour implementations (24 bit colour) use 32 bit long words. For these chipsets/modes you will have to use a pixel depth of 32 in the above calculation i.e. 24 bit colour may not be available at 1280x1024 with some 4 Meg cards.
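
The calculation above is easy to automate. A minimal Python sketch, including the 32 bit long word caveat (the packed_32bit flag is an illustrative name of mine, not a real driver option):

  # Sketch: minimum video RAM for a mode, per the formula above.
  def vram_bytes(h, v, depth_bits, packed_32bit=False):
      if depth_bits == 24 and packed_32bit:
          depth_bits = 32      # some chipsets store 24-bit pixels in 32 bits
      return h * v * depth_bits // 8

  print(vram_bytes(1024, 768, 8))                       # 786432 -> 1 Meg card
  print(vram_bytes(1280, 1024, 24))                     # 3932160 -> 4 Meg card
  print(vram_bytes(1280, 1024, 24, packed_32bit=True))  # 5242880 -> over 4 Meg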

How does a video accelerator work, and will one help me?

The term accelerator is used so frequently that it has lost much of its meaning. This section is intended to explain how a video card with special-purpose video acceleration works - typically called a 'Windows accelerator' or 'coprocessed' card. In a general sense, the principles here can be applied to 2D, 3D and digital video acceleration. For more specific information about 3D and digital video acceleration, see "How does a 3D graphics accelerator work?" and "What does a video codec do?". Before we get into acceleration, we have to understand how a VGA card works.

A VGA card is a simple display adapter with no processing capability. All the thinking is done by the CPU, including writing and reading of text, and drawing of simple graphics primitives like pixels, lines and memory transfers for images.

Programs like most DOS-based word processors run in VGA text mode while graphics-based programs like games run in graphics mode. Microsoft Windows 3.1 runs in VGA graphics mode by default, meaning that every pixel you see as part of the background, a window or a text character had to be written using basic VGA calls. As you can imagine, the low-level nature of the VGA command set means that many commands are required to do something as simple as moving or closing a window. To move a window, the VGA commands might go something like this:

  • Block transfer to store window contents in PC RAM
  • Solid rectangle fill (to blank window - cosmetic)
  • Block transfer to put window in new location in VGA RAM
  • Block transfer or Write pixel to rewrite background behind old window location.

Clearly, an enormous amount of data must move from the VGA card, along the bus, into the CPU, and on into memory, and vice versa. This has to occur because the VGA card has no processing capability of its own; it relies on the CPU. Now we are in a position to understand how a graphics accelerator works.

A VGA card has its own memory and digital-to-analog converter (DAC), but can't actually process data. Accelerated video cards have their own processor, and therefore are called video coprocessors. This means such a card can perform many video operations by itself, with only minimal input from the CPU. Let's go back to our example of moving a window.

Assume our 'accelerated' card can keep track of:

  • the background fill pattern
  • the location and contents of rectangular regions, i.e. windows
  • and has adequate memory to store them.

To move a window, the CPU has to transmit something like:

  • 'move window' instruction
  • window ID
  • location to move to

At this point, the video card can perform all of the operations the CPU would have had to with a VGA card. This frees the bus and CPU to execute other tasks, and speeds up video operations as they're all done on the video card. Why is this faster? Unlike VGA mode, where every pixel has to be moved to and from the card via the bus and CPU, the accelerated card can perform the same operations with instructions consisting of only a few bytes being transferred along the bus. This results in an enormous performance gain for most common graphics operations including bitmap and pixmap transfers and painting, movement of sprites and icons, opening and closing of windows, filling with solid colours and patterns, line drawing, polygon painting, etc. As a result, even an ISA bus accelerator video card can provide blistering speed improvements over VGA in graphical environments like Windows 3.1, OS/2, X Windows (i.e. XFree86) and AutoCAD. Some operations like animations or raw video playback which require large block transfers at high rates will benefit less from accelerator cards.
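
To make the contrast concrete, here is an illustrative Python sketch; the command tuple and byte counts are invented for illustration and do not correspond to any real chipset's interface:

  # Sketch: bus traffic for a window move, dumb frame buffer vs. coprocessor.
  W, H, BYTES_PER_PIXEL = 400, 300, 1          # a 400x300 window, 8-bit colour

  # VGA-style: the CPU reads the window out of video RAM and writes it back
  # at the new position, so every pixel crosses the bus twice.
  vga_traffic = 2 * W * H * BYTES_PER_PIXEL    # 240000 bytes

  # Accelerated: the CPU sends a tiny command and the card blits the window
  # within its own memory.
  command = ("MOVE_WINDOW", 42, (128, 64))     # opcode, window ID, new x, y
  accel_traffic = 16                           # a few bytes for the command

  print(vga_traffic, "bytes vs", accel_traffic, "bytes")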

Some newer accelerator cards include functions for 3D graphics rendering like polygon shading, coordinate manipulation and texture mapping. Others provide on-the-fly magnification of video clips so that those MPEG movies don't appear in a box that's three inches wide and two inches high on your screen.

However, keep in mind that the implementation of a given video coprocessor is proprietary. This means we're tied to a system where every video accelerator has a set of proprietary drivers which interpret video commands. Different drivers are required for each operating system or software program that wishes to take advantage of acceleration functions. Some 3D graphics standards like SGI's OpenGL and PHIGS are being integrated into workstation video hardware, and perhaps in the future a 3D (or even 2D!) standard will be accepted by PC component manufacturers to provide a consistent set of video instructions for accelerated hardware.

What does a video codec do?

Anybody who has played back a movie on their computer knows that the video is choppy and low resolution. The reason is that current PC technology simply can't handle the amount of data required to display uncompressed full-screen video. To understand why, we just have to look at the amount of data contained in a video clip. If we want to record a standard video signal for digital playback, we have to digitize it at about 640x480 pixels/frame. At a refresh rate of 30 fps (frames per second), and true colour (16.7 million), we would be pumping 640x480x30x3 = approximately 28 Mbytes/s through our computer. At that data rate, a 650 Mbyte CDROM would hold only about 23 seconds of video! CDROM reader and hard drive technologies don't allow us to transfer data at such high rates, so in order to display digital video it is compressed for storage.
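
The arithmetic above, as a quick sketch:

  # Sketch: uncompressed video data rate vs. CD-ROM capacity.
  rate = 640 * 480 * 30 * 3            # bytes/s: pixels x frames/s x bytes/pixel
  cdrom = 650 * 10**6                  # a 650 Mbyte CD-ROM
  print(rate / 10**6)                  # ~27.6 Mbytes/s
  print(round(cdrom / rate, 1), "s")   # ~23.5 seconds of video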

Compressed video streams are read from a hard drive or CDROM, then are decompressed before being displayed. This decompression is very CPU intensive, and displaying the resulting video pushes the limits of the peripheral bus (usually ISA, VLB or PCI) and video cards. If any of the hard drive/CDROM reader, CPU, bus or video card can't keep up with the high amount of data, the video clip appears choppy, or is displayed very small.

The software or hardware that performs the decompression (or compression when recording video) is called a codec (compression-decompression). Dedicated hardware codecs are available either as add-in cards or are integrated into video cards. The advantage of such hardware is that it is optimized specifically for the quick decompression and display of video data, so it can provide higher frame rates and larger images than a computer using a purely software-based codec routine. Hardware codecs also reduce the computing load on the system CPU, allowing it to perform other tasks.

Several types of compressed video formats exist, including MPEG (Motion Pictures Experts Group), AVI, MOV, Indeo, MS-Video, Cinepak and Quicktime. In addition, different versions of these formats exist, some incorporating sound. Under optimal conditions, some of these formats can provide compression ratios of up to 100:1 while still providing good quality video.

Some hardware codecs are optimized to work best with a particular video format, but most support the basic operations required to display compressed digital video streams.

Any given digital video accelerator may support some or all of the following operations:

Codec - Decompression of compressed video from various formats.

Colour space conversion - Conversion of the video signal from YUV colour space to computer-display-compatible RGB. The YUV colour space is derived from the composite video signal that is the source of most video clips.
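
The conversion itself is a simple linear transform per pixel. A sketch using commonly quoted analog YUV coefficients (the exact constants vary between standards and implementations):

  # Sketch: YUV -> RGB conversion for one pixel (illustrative coefficients).
  def yuv_to_rgb(y, u, v):
      r = y + 1.140 * v
      g = y - 0.395 * u - 0.581 * v
      b = y + 2.032 * u
      return tuple(min(255, max(0, round(c))) for c in (r, g, b))

  print(yuv_to_rgb(128, 0, 0))   # zero chroma -> neutral gray (128, 128, 128)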

Image clipping, filtering and scaling - Filtering reduces the amount of graininess in the image. Scaling can be of different types:

  • Pixel replication - This simply means that pixels are doubled in both the x and y directions - a 320x240 image is displayed as a 640x480 image with larger pixels. This results in poor quality video.
  • Pixel interpolation - Uses an image processing filter (usually an averaging algorithm) to interpolate pixel values. This provides a smoother image than direct pixel replication (both approaches are sketched below).
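
A minimal sketch of the two methods on a one-dimensional row of gray values; a real scaler applies the same idea along both axes:

  # Sketch: 2x upscaling by pixel replication vs. pixel interpolation.
  def replicate(row):
      return [p for p in row for _ in (0, 1)]       # each pixel doubled: blocky

  def interpolate(row):
      out = []
      for a, b in zip(row, row[1:] + row[-1:]):     # repeat the final pixel
          out += [a, (a + b) // 2]                  # original, then the average
      return out

  row = [0, 100, 200]
  print(replicate(row))     # [0, 0, 100, 100, 200, 200]
  print(interpolate(row))   # [0, 50, 100, 150, 200, 200] - smoother ramp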

Some of the new video cards provide a degree of hardware acceleration for video playback, while others claim to provide full-screen 30 fps video but don't have the necessary hardware. My advice is to test drive any card that you are considering in a machine that is similarly configured to your own before buying.

How does a 3D graphics accelerator work?

As you know, the vast majority of computer displays are two-dimensional. As a result, most of the objects which are represented on computers are also 2D. Examples of 2D objects include text, images and animations. Of course, most of the world is 3D, so there are obvious advantages in being able to represent real-world objects in a realistic way.

The 3D representation that I'm referring to here is really surface modeling, but it involves true 3D objects. This shouldn't be confused with games like Doom or Wolfenstein 3D, which are really just souped-up 2D engines.

The way that 3D objects are traditionally represented is using a meshwork of polygons - usually triangles - to describe their outside surface. If enough polygons are used, then even curved surfaces can look smooth when projected onto the computer display. The minimum parameters which have to be defined to describe a 3D object and its view are: the coordinates of the object's polygon vertices (corners); polygon (or vertex) normals (to tell us which side of the polygon points out of the object and which faces inside, and for shading purposes); reflection characteristics of the polygonal surfaces; the coordinates of the viewer's location; the location and intensity of the light source(s); and the location and orientation of the plane onto which the 3D scene will be projected (i.e. the computer screen). Once all of this information is available, the computer projects the 3D scene onto the 2D computer screen. This process is called rendering, and involves equations for tracing from the viewer through the scene, equations for determining how light is reflected from light sources, off of objects and back to the viewer, and algorithms for determining which objects in the scene are visible and which are obscured. Often, depth cueing is also performed to make distant objects darker, giving more of a 3D feel.
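
As a taste of what is involved, here is a sketch of the simplest piece of the pipeline, a pinhole perspective projection of a single vertex; the focal length and screen centre are illustrative assumptions:

  # Sketch: perspective projection of a camera-space vertex onto the screen.
  def project(x, y, z, focal=256.0, cx=320, cy=240):
      """Map (x, y, z), z > 0, to screen coordinates; distant points land
      nearer the screen centre."""
      return cx + focal * x / z, cy - focal * y / z   # screen y grows downward

  print(project(1.0, 1.0, 4.0))   # (384.0, 176.0)
  print(project(1.0, 1.0, 8.0))   # (352.0, 208.0) - same point, farther away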

The point of this description is to impress upon you that the 3D rendering process is highly complex, and involves an enormous number of computations, even for simple scenes with few objects and light sources and no shading. The addition of shading often more than doubles computational time. If the computer's CPU had to perform all of these operations, then rendering a scene would be very sluggish, and things like real-time renderings (i.e. for games or flight simulators) would not be possible.

Happily, new 3D graphics card technology relieves the CPU of much of the rendering load. 3D operations are accelerated in a similar manner as standard windowing operations are for, say, Windows 3.1. The application program is written using a standard 3D graphics library like OpenGL, RenderMan or another. A special-purpose driver, written specifically for that 3D graphics card, handles all requests made through the 3D graphics library interface and translates them to the hardware. Using a software driver adds an additional layer between the application and the video card, and as a result is slower than accessing the hardware directly. However, most 3D video hardware is proprietary, which means that without a driver, an application developer would have to write a version of their program for each 3D graphics card available. An additional advantage of having a driver is that if a new 3D graphics standard is released, or an old one is updated, a new driver can be written to support the new standard.

For the 3D rendering example above, the rendering process can be sped up through the use of the special-purpose hardware on the video card. Instead of the main CPU having to perform all of the operations necessary to calculate the colour and intensity of each pixel being rendered, all of the 3D scene information can be sent directly to the video card in its raw form. Polygon vertices and normals, surface characteristics, and the location of the viewer, light sources and projection plane are all off-loaded to the 3D video card. Then the video card, which is optimized to perform 3D operations, can determine what image is displayed and dump it to the screen, while the system CPU is free to perform other tasks.

Should I have BIOS shadowing on?

The code which tells the computer how to access the video card is stored on the video card itself in a ROM (Read Only Memory) chip. When the computer wants to access the video card, it uses the video BIOS (Basic Input/Output System) routines on the ROM chip(s). The only real problem with this is that ROM chips run more slowly than the DRAM used for main system RAM. As a result, most (if not all) modern BIOS setup utilities (sometimes referred to as CMOS) allow the video BIOS to be copied to a section of main system DRAM (this is the shadowing). This has the benefit of speeding up video operations between the CPU and video card, because the video BIOS 'instructions' can be read more quickly from the shadow RAM, and the disadvantage of using a relatively small block of upper memory (the chunk of memory located above 640k and below 1 Meg).

When video BIOS shadowing is turned off, some systems and memory managers allow you to use that chunk of memory to load TSR's (i.e. mouse driver, cdrom driver), which may allow you to free up some additional conventional memory. When turned on, video operations will be performed faster, at the expense of a chunk of upper memory. Unless you're tight for upper memory or have a compatibility problem, try running with shadowing on.

What is VGA, and how does it work?

OK, the answer to this one could easily be a book (actually, see the references because it _is_ a book or several). I'll give a very cursory overview of what the VGA is capable of.

The Video Graphics Array is a standard established by IBM to provide colour graphics at higher pixel addressabilities than are available with EGA. In fact, VGA is a superset of EGA, incorporating all EGA modes.

The VGA consists of several sub-systems, including the graphics controller, display memory, serializer, attribute controller, sequencer and CRT controller. Basically, the CPU performs most of the work, feeding pixel and text information to the VGA.

  • Graphics Controller: Can perform logical functions on data being written to display memory.
  • Display Memory: A bank of 256k DRAM divided into 4 64k colour planes. It is used to store screen display data.
  • Serializer: Takes display data from the display memory and converts it to a serial bitstream which is sent to the attribute controller.
  • Attribute Controller: Contains the colour LUT (Look Up Table) which determines what colour will be displayed for a given pixel value in display memory (see the sketch after this list).
  • Sequencer: Controls timing of the board and enables/disables colour planes.
  • CRT Controller: Generates syncing and blanking signals to control the monitor display.
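
As a toy illustration of the attribute controller's LUT (the palette entries are invented for the example), the translation from pixel values to displayed colours amounts to:

  # Sketch: a colour look-up table mapping pixel values to RGB intensities.
  palette = {0: (0, 0, 0), 1: (170, 0, 0), 15: (255, 255, 255)}   # toy entries

  def pixels_to_rgb(pixels):
      return [palette.get(p, (0, 0, 0)) for p in pixels]

  print(pixels_to_rgb([0, 1, 15]))
  # [(0, 0, 0), (170, 0, 0), (255, 255, 255)]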

It is beyond the scope of this FAQ to describe the functionality of these components in detail, so for further reading consult Sutty & Blair (see References).

VGA provides very low-level graphics commands. This, combined with the fact that a VGA card has a frame buffer but no real processing power, means that the PC's CPU has to do most of the graphics number crunching. As a result, the VGA speed of a given computer is highly dependent on the CPU speed, and the two cannot be uncoupled. Basically this renders VGA speed comparisons between video cards installed in systems which use different processors meaningless. Also, the VGA performance of a video card _can not_ be used to estimate how fast that card will be in another video mode (i.e. SVGA, Windows 3.1, etc).

VGA is really an outdated standard, but in fact, all PC's today still boot in a VGA text mode (see table below) and there is no indication that this will change in the near future. Most DOS games still use VGA because of its universality. While most GUI users think of 800x600 as a minimum pixel addressability, most DOS games use only a 320x200 pixel mode. Now, a number of SVGA games (640x480 with >16 colours, or higher resolutions) are being released. However, the larger number of pixels being displayed requires a faster processor, and sometimes even a fast Pentium can appear sluggish.

The VGA modes are:

Mode (Hex)	Type	Resolution	Chars	Colours
0,1	text	360x400		40x25	16
2,3	text	720x400		80x25	16
4,5	gfx	320x200		40x25	4
6	gfx	640x200		80x25	2
7	text	720x400		80x25	mono
D	gfx	320x200		40x25	16
E	gfx	640x200		80x25	16
F	gfx	640x350		80x25	mono
10	gfx	640x350		80x25	16
11	gfx	640x480		80x30	2
12	gfx	640x480		80x30	16
13	gfx	320x200		40x25	256

The next 'standard' (and hopefully it will be widely adopted) is VESA SVGA, which provides standard SVGA modes (pixel addressabilities & colour depths), registers and refresh rates.

What is the pinout for a standard VGA/PGA/EGA/CGA connector?

Standard 15 pin D-Sub VGA connector pinout

___________________________________________________
\                                                 /
 \        1       2       3       4       5      /
  \                                             /
   \  6       7       8       9       10       /
    \                                         /
     \   11      12      13      14      15  /
      \_____________________________________/

Pin #	Description

1	Red Video
2	Green Video
3	Blue Video
4	Sense 2  (Monitor ID bit 2)
5	Self Test (TTL Ground)
6	Red Ground
7	Green Ground
8	Blue Ground
9	Key - reserved, no pin
10	Logic Ground (Sync Ground)
11	Sense 0 (Monitor ID bit 0)
12	Sense 1 (Monitor ID bit 1)
13	Horizontal Sync
14	Vertical Sync
15	Sense 3 - often not used

Compaq (and perhaps some other companies) use the "Sense" lines as a way of telling what kind of monitor is connected. Newer monitors with DDC (also called Plug-n-play) use some of these pins.

[From: Ashok Cates]

The ID bit pins in the 15 pin connector are shorted/left open to identify the type of monitor. I don't think they are very important anymore, as most cards have software to set resolutions, refresh rates etc. However, I think their functions are:

ID bit 0 and ID bit 2 grounded: Dual frequency analog color interlaced (8514 or compatible) or variable frequency analog color interlaced.

ID bit 0 grounded, ID bit 2 not connected: Fixed frequency analog color (8512, 8513, or compatible) or variable frequency analog color non-interlaced.

ID bit 0 not connected, ID bit 2 grounded: Fixed frequency analog monochrome (8503 or compatible) or variable frequency analog monochrome.

  • ID bit 1 and ID bit 2 are usually connected together.
  • Monitor model numbers are for IBM monitors.
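
A hedged sketch of the decoding just described ('gnd' for grounded, 'nc' for not connected; the table simply mirrors the three cases above):

  # Sketch: decode the VGA monitor ID sense lines (ID bit 0, ID bit 2).
  MONITOR_ID = {
      ("gnd", "gnd"): "dual or variable frequency analog colour, interlaced",
      ("gnd", "nc"):  "fixed frequency analog colour (8512/8513) or "
                      "variable frequency analog colour, non-interlaced",
      ("nc", "gnd"):  "fixed or variable frequency analog monochrome (8503)",
  }

  def identify(id_bit_0, id_bit_2):
      return MONITOR_ID.get((id_bit_0, id_bit_2), "unknown or no monitor")

  print(identify("gnd", "nc"))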


Standard 9 pin D-Sub PGA/EGA/CGA connector pinout

[From: Michael Scott]

_______________________
\                     /
 \ 1   2   3   4   5 /
  \                 /
   \ 6   7   8   9 /
    \_____________/


                IBM Adapters

Pin Assignment  CGA             EGA             PGA		VGA	
                TTL 16 colours  TTL 16/64 col.  Analogue	Analogue

1               GND             GND             Red		GND
2               GND             Secondary Red   Green		GND
3               Red             Primary Red     Blue		Red
4               Green           Primary Green   Composite Sync	Green
5               Blue            Primary Blue    Mode Control	Blue
6               Intensity       Secondary Green Red GND		GND
                                /Intensity			
7               not used        Secondary Blue  Green GND	not used
8               H. Sync         H. Sync         Blue GND	H. Sync
								/Comp. Sync
9               V. Sync         V. Sync         GND		V. Sync

What are VGA/SVGA/UVGA/8514/a/XGA?

The wonderful thing about PC's is that there are standards for so many different things. The problem is that every company has their own standards ;-). The lack of a widely accepted standard for >VGA pixel addressabilities is causing plenty of problems for manufacturers, system builders, programmers and end users. As a result, each vendor must provide specific drivers for each supported operating system for each of their cards. In the list above, VGA, 8514/a and XGA are standards established by IBM, and have been accepted to a greater (VGA), lesser (XGA) or even much lesser (8514/a) degree. The reason for this may be a backlash against IBM (due to royalty demands) or that video card vendors were not satisfied with the suggested standards.

For a more detailed discussion of VGA, see 'What is VGA, and how does it work?'

The 8514/a was the next graphics offering from IBM and provides three new video modes that are not available from the VGA controller. Computers with 8514/a hardware must also have a VGA controller, as the 8514/a does not support VGA video modes. The additional modes are:

Type	Pixel Addressability	Max. # Colours	Characters
gfx	640x480		256		80x34
gfx	1024x768	256		85x38  (interlaced)
gfx	1024x768	256		146x51 (interlaced)

The 8514/a also has some smarts, as it is capable of performing video memory transfers, drawing lines and extracting rectangular areas of the display image. These are so-called accelerated features.

The XGA has superseded the 8514/a. It was the first IBM display adapter to use VRAM, and can be configured with 512k or 1 Meg. Like the 8514/a, the XGA has accelerated features which make it faster than standard VGA for some operations. The new modes XGA introduced are:

Mode	Type	Pixel Addressability	Max. # Colours	Characters
14	text	1056x400	16		132x25
-	gfx	640x480		256/65536*	-
-	gfx	1024x768	16/256*		-

*512k/1 Meg configurations

SVGA & UVGA

SVGA and UVGA are not established standards, and so their meanings vary depending on manufacturer. The VESA VGA BIOS Extensions are the closest thing to an 'SVGA' standard. Most video cards currently available are called SVGA (Super VGA), which basically means that the card provides a superset of standard VGA calls and capabilities - anything better than 640x480 and 16 colours is an SVGA mode. Some suggest that SVGA covers 800x600 modes, while UVGA (Ultimate VGA) refers to 1024x768. However, the absence of any real standard renders the term SVGA quite useless, and the term UVGA is not used frequently.

The result of having no SVGA standard is that there are many (>10 !) different SVGA chipsets available, and none of them use a common programming interface. Many provide video acceleration capabilities, which free the system CPU to do other tasks, i.e. hardware cursor, BitBlt, etc. However, to use the SVGA video modes and advanced features, each chipset requires its own driver. This is why video drivers are required for Windows 3.1, Windows 95, OS/2 & XFree86. These drivers, combined with accelerated hardware, can provide enormous increases in video performance.

If you are looking for a machine and would like SVGA capabilities, don't accept that a given video card or monitor is adequate just because it is advertised as supporting SVGA. Instead, decide what maximum pixel addressabilities and colour depths you want to use, and at what vertical refresh rates, and ensure that the models you are looking at provide those capabilities, and that software drivers are available for the operating systems and programs you will be using.

I want to add an MPEG card to my system. How does it work?

The Motion Pictures Experts Group (MPEG) has released a series of standards which describe a lossy digital video compression technique. In some cases, MPEG can reach compression rates of 100:1. It works by removing redundant information and details that most people would generally miss, and in later versions storing only the differences between successive frames.

When an MPEG video clip is viewed on the screen, the video stream must be decoded on-the-fly. If done in software, this operation can be quite demanding of the system CPU. An alternative is to have a dedicated coprocessor do the MPEG decoding, then feed the resulting video stream to the video card. Because this type of coprocessor is dedicated to MPEG decoding, it can be optimized to perform the operation very fast, and can also be used to scale up the size of the resulting video with little or no degradation in performance. Even a relatively small 320x200 video displayed at 30 frames per second requires a bandwidth of 15.4 million bits per second (320 x 200 pixels x 30 frames/s x 8 bits/pixel). Dumping all of that data down the peripheral bus (ISA, VLB, PCI, etc.) would seriously decrease the bandwidth available for other purposes like disk I/O. As a result, many video card manufacturers incorporated a feature connector on their VGA cards. This connector gives direct access to video display memory, allowing high frame rate video to be dumped to the monitor. One limitation of the feature connector is that it can only provide 8-bit (256 unique) colour.

If you're planning on using your PC as a VCR, you'll be disappointed with an MPEG card playing the cdrom version of your favourite film. The resolution will be inferior to that provided by your television. If you want to get smoother video playback and/or free-up your CPU for other tasks, then the addition of an MPEG decoder card may be worth the cost.