Chapter Contents

 

4.0 Imaging

 

4.1 The Human Eye

4.1.1 Rods

4.1.2 Cones

 

4.2 Color Coordinate Systems

4.2.1 RGB Color Space

4.2.2 Chromaticity Diagrams

 

4.3 Forming an Image

4.3.1 Pinhole Camera

4.3.2 A Standard Camera

 

4.4 Resolution

4.4.1 Video Display Resolution

4.4.2 Bandwidth Requirement

 

4.5 Video Cameras

4.5.1 Camera Tubes

4.5.2 Charge Coupled Devices (CCD)

4.5.3 Gamma Correction

4.5.4 Color Separation

 

Assignment Questions

 

For Further Research

 

 

4.0   Imaging

 

Objectives

The purpose of this brief section is to:

•   Examine human vision

•   Examine how an image is formed

•   Determine the image resolution

•   Examine the various components found in a video camera

•   Examine the basic characteristics of color

 

The most common way to form an image is by means of a lens. However, once it is created, it must be translated into an electrical signal for processing or transmission.

4.1     The Human Eye

Although vision has been studied for many years, it is still poorly understood. It is extremely difficult to make quantitative measurements on the eye for obvious reasons. As a result, almost all the data collected is subjective. It is also difficult to reduce this data to simple and meaningful graphical form.

Click here for a diagram of the human eye.

The lens is primarily responsible for forming an image on the retina. The retina consists of light sensitive photoreceptors. Pigments in the photoreceptors are broken down by light and create a potential. When a threshold is reached, nerve impulses begin to travel towards the brain.

The two types of photoreceptors in the human eye are the rods and cones. The rods are very sensitive to light, while the cones are sensitive to its wavelength or color. A single photon of light is enough to cause a rod in the human eye to respond, while a cone needs several photons before it is activated. This is why rods are more important than cones for vision in low light conditions.

Perceived color depends on light interacting with the components of the human visual system. The perception of color is also dependent on the viewing environment; objects appear to change color under different lighting conditions.

Experimental evidence suggests that the eye contains three types of cone shaped sensors, each of which is sensitive to a different range of colors. Since not all people have exactly the same vision, and all experiments have a degree of subjectivity, one can only speak of the ‘average’ response. It appears that the average eye can see some colors more ‘easily’ than others.

The average eye can see wavelengths ranging from 380 nm, corresponding to bluish colors, to 780 nm, corresponding to reddish colors. A combination of three primaries within this range can be used to create the perception of a particular color; this is known as tristimulus color. Unfortunately, no set of three primaries can produce every color.

 

                     Rods                                           Cones
Shape                Rod-shaped outer segment                       Cone-shaped outer segment
Connections          Many rods connect with one bipolar neurone     Only a single cone cell per bipolar cell
Visual acuity        Low                                            High
Visual pigment(s)    Rhodopsin (no color vision)                    Three types of iodopsin, responding to red, blue and green
Frequency            120 million per retina (about twenty times     7 million per retina (about twenty times
                     more common than cones)                        less common than rods)
Distribution         Found evenly all over the retina               Found all over the retina, but much more concentrated
                                                                    in the center, particularly the yellow spot (fovea centralis)
Sensitivity          Sensitive to low light intensities             Sensitive to high light intensities
                     (used in dim light)                            (used in bright light)
Overall function     Vision in poor light                           Color vision and detailed vision in bright light

 

4.1.1    Rods

Rods contain a reddish-purple compound called rhodopsin. This consists of opsin, a protein, and retinene, a light-absorbing compound. When retinene is exposed to light, it changes shape and causes the rhodopsin to break down. The free opsin acts as an enzyme and, through a series of reactions, lowers the level of a messenger molecule called cyclic GMP.

The fall in cyclic GMP closes membrane pores in the rod cell, making the inside of the cell more negative. This change in membrane potential alters the signals sent to nearby nerve cells, and nerve impulses then pass to the brain for decoding. In the absence of light, the original retinene molecule reforms and recombines with opsin to form rhodopsin.

Rhodopsin is very sensitive to light, so rods are mainly used in dim light. In strong light, rhodopsin breaks down more quickly than it can be re-formed and the eye loses sensitivity.

4.1.2    Cones

Cone cells contain the photosensitive pigment iodopsin. It is made up of photopsin and retinene. Cone cells are stimulated in the same way as rod cells. There are, however, three different types of cones, each with a slightly different pigment and responding to a different wavelength of light. They correspond to red, green and blue. The eye has a blind spot where the optic nerve leaves the eye, since this region contains no receptor cells.

The perceived color depends on which cones are stimulated. When all the cones are fully stimulated, we see ‘white’. When very few are stimulated, we see black. Stimulating the three types in different proportions produces all the other perceived colors.

The visual processing, which the brain then performs, is extremely complex. The occipital lobes at the back of the brain deal with visual information. However, sensory information is processed at various points along the nerve pathway before it reaches the brain. Within the brain, it is processed even further. The details of this process are not understood.

The estimated spectral sensitivity of cones[1] is quite interesting. For example, although the eye responds to three dominant wavelengths or colors, we perceive hundreds of thousands of different colors. This suggests that all the colors seen by the eye are actually combinations of three primary colors associated with the cones. This phenomenon is known as trichromacy.

Artists have known for a very long time that mixing certain primary colors could create new colors. This tells us that the exact wavelength of the three colors used to produce a specific perceived color is not very important.

4.2     Color Coordinate Systems

Principles of Color Management by Hutchison Consulting

 

Color can significantly enhance information content. Unfortunately, there is no simple way to precisely specify color. It is also difficult to translate quantized color parameters between various presentation devices. This is especially true when trying to match colors from a light-emitting CRT to light-reflecting hardcopy print. The range of colors that a device can produce is known as its gamut.

Combinations of different wavelengths or colors can produce the same perceived color. This is known as metamerism. It is the existence of metamers that makes color photography, printing and television, possible.

Depending upon the application, almost any three colors can be defined as primaries. The only criterion is that while a wide range of colors can be derived from a combination of the three primaries, a primary cannot be derived from a combination of the other two.

Primary colors fall into two broad classifications, additive [red, green, blue] and subtractive [cyan, magenta, yellow]. Both types of primaries are complementary to each other. The distinction is easy to observe when combining colors. When subtractive primaries are combined, the result is black.

Subtractive primaries are used in printing, painting, photography etc. Combining these primaries results in black because they work on the subtractive principle and absorb light. Each color pigment appears as it does, because it absorbs the other wavelengths. Thus combining all of the pigments eliminates all wavelengths resulting in black. To make economical use of primary pigments, black is applied as a separate pigment in the four-color printing process.

Additive primaries are used in television, and when they are mixed, the result is white. This happens because white light itself is composed of all sorts of wavelengths or colors.

In the development of color television, it became very important to determine just how sensitive the human eye is to the additive primary colors. Some experiments have shown that the green receptors are the most sensitive, and the blue ones, the least sensitive. Besides being optical sensors, the eyes are complex optical filters.

Relative Human Eye Response to Primary Colors

There are several color schemes used in computer and video systems, including RGB, HSI, CIE, and NTSC YIQ.

CIE† 

The 1931 CIE standard is used to map spectral distribution to tristimulus values. The specification occupies a three dimensional XYZ space. A two dimensional map called the CIE chromaticity standard is used to show the hues, ignoring luminance. The CIE chromaticity map shows the plane X + Y + Z = 1, so in the two dimensional space x = X / (X + Y + Z) and y = Y / (X + Y + Z). The CIE chromaticity chart is often used to show the gamut of other color specifications as a subset of CIE space.
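As a quick sketch of this projection (the sample values below are made up for illustration, not standard data), the following converts an XYZ triple to its (x, y) chromaticity coordinates:

import_needed = None  # no imports required

def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values onto the X + Y + Z = 1 plane."""
    total = X + Y + Z
    if total == 0:
        return 0.0, 0.0          # black has no defined chromaticity
    return X / total, Y / total  # z = 1 - x - y is implied

# Example: a hypothetical sample with X = 0.31, Y = 0.32, Z = 0.37
x, y = xyz_to_xy(0.31, 0.32, 0.37)
print(f"x = {x:.3f}, y = {y:.3f}")   # lands near the 'C white' region of the diagram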

RGB

The RGB color specification is used in the computer industry. Specified percentages of red, green and blue primaries are added together to create the desired color. These three primaries allow the broadest color gamut possible with a three-primary system. This system is readily implemented in hardware, but is not intuitive in specifying color.

RGB is a component video signal. This specification is the same as that used by video display hardware such as monitors and color TVs, but the signal used to drive such devices is usually a composite signal.

HSI†

The HSI model is a more intuitive model for specifying colors. Hues such as red, yellow, etc. are represented as positions around a color wheel. Red may correspond to 0°, green to 120°, and blue to 240°.

Saturation is color purity. Low saturation (<20%) results in gray, regardless of the hue. Saturation of 40 - 60% creates pastels. Saturation >80% produces vivid colors.

Intensity or luminance is the brightness of a color and ranges from 0% (black) to 100% (white).

Similar specifications include hue-saturation-value (HSV), luminance-hue-saturation (LHS or HSL), and others.
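As a rough illustration of hue-angle coordinates, Python's standard colorsys module converts RGB into the closely related HSV space; note this is HSV rather than HSI proper, so the numbers will differ slightly from a true HSI implementation:

import colorsys

# Pure red, a desaturated pastel, and a gray, all as (R, G, B) in 0..1
for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("pastel", (0.9, 0.6, 0.6)),
                  ("gray", (0.5, 0.5, 0.5))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Hue is returned as a fraction of a full turn; multiply by 360 for degrees
    print(f"{name:6s} hue={h*360:5.1f} deg  saturation={s:4.2f}  value={v:4.2f}")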

NTSC YIQ

The NTSC video standard uses a color specification consisting of luminance (Y) and two color difference signals called in-phase and quadrature (I and Q). Luminance measures total brightness or intensity and is defined by the CIE standard to be

Y = 0.59 Green + 0.30 Red + 0.11 Blue

This is also the accepted formula for converting color into grayscale. The color difference signals are derived from the differences between the luminance signal and the red and blue components (R - Y and B - Y).

Such color-difference systems use much less bandwidth than straight RGB transmission with little loss of perceived information. This scheme has the added advantage that the color difference signals can be bandwidth-restricted and added as subcarriers to a primary luminance signal, which allowed backward compatibility with black and white television signals when color TV came into widespread use.
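A minimal sketch of the luminance / color-difference idea, using the rounded coefficients quoted above; the actual NTSC I and Q signals are rotated, scaled and bandwidth-limited versions of these differences:

def rgb_to_luma_diff(r, g, b):
    """Return luminance Y and the two color-difference signals (B-Y, R-Y).

    r, g, b are assumed to be gamma-corrected values in the range 0..1.
    """
    y = 0.30 * r + 0.59 * g + 0.11 * b
    return y, b - y, r - y

# Saturated red: Y = 0.30, B-Y = -0.30, R-Y = 0.70
print(rgb_to_luma_diff(1.0, 0.0, 0.0))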

4.2.1 RGB Color Space

http://www.color.org/sRGB.html

http://www.cgsd.com/index.html

http://www.inforamp.net/~poynton/notes/colour_and_gamma/ColorFAQ.html

http://www.cs.rit.edu/~ncs/color/a_spaces.html

http://www.aols.com/colorite/colorite1.html

http://www.cs.fit.edu/wds/classes/cse5255/cse5255/davis/

http://www.linocolor.com/colorman/sp.htm

http://hyperphysics.phy-astr.gsu.edu/hbase/vision/cie.html

 

As a result of ‘average’ eye response experiments, the luminance or brightness component of vision [Y] has been defined in terms of its three chromatic RGB components:
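In its commonly quoted rounded form (exact coefficients vary slightly between standards):

Y = 0.30 R′ + 0.59 G′ + 0.11 B′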

The primed terms denote the gamma-corrected values.

The wavelengths associated with the standard primary colors today are:

•   Red             700 nm

•   Green          546.1 nm

•   Blue            435.8 nm

These three wavelengths do not necessarily correspond to the wavelengths to which the eye is most sensitive. Almost any three separated colors are sufficient to provide a whole range of visual chromatic stimuli.

4.2.1.1 Computer Color Space

Because standard TV uses analog signals, it can display an essentially continuous range of colors. However, since computers digitize video signals for storage, the range of colors becomes quantized, or limited to discrete values.

Most computer systems digitize each of the RGB components into an 8-bit byte. Thus a single pixel is composed of 24 bits. This allows the computer to theoretically create 2^24 or 16,777,216 colors. This is often referred to as true color. Unfortunately, many computers are not set up to display this range.

Computers use a color palette to translate the image pixel value to the displayed RGB value. Rather than being 24 bits wide, the palette index is often 8 bits wide. This means that the monitor will only display 2^8 or 256 colors at a time. This reduces the memory requirements of the video card and speeds up the display. More expensive systems will have a variety of palette settings.

If the computer settings do not allow certain colors to be displayed, the software makes a substitution for the nearest color in the palette.
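A minimal sketch of 24-bit pixel packing and nearest-color palette substitution; the 256-entry grayscale palette here is purely illustrative:

def pack_rgb(r, g, b):
    """Pack three 8-bit components into a single 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

palette = [pack_rgb(i, i, i) for i in range(256)]  # an illustrative grayscale palette

def nearest_palette_index(r, g, b):
    """Substitute the nearest palette color using a simple squared-distance search."""
    def dist(entry):
        er, eg, eb = (entry >> 16) & 0xFF, (entry >> 8) & 0xFF, entry & 0xFF
        return (er - r) ** 2 + (eg - g) ** 2 + (eb - b) ** 2
    return min(range(len(palette)), key=lambda i: dist(palette[i]))

print(hex(pack_rgb(255, 128, 0)))          # 0xff8000 - one of 2**24 possible values
print(nearest_palette_index(200, 90, 40))  # index of the closest grayscale entry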

In some cases, the application program and computer platform also reduce the size of the color palette. Netscape, for example, is capable of displaying only 216 colors on a Windows operating system with a 256-color palette. However, it is possible to increase the perceived range of colors by means of dithering. The color palette on the left contains all of Netscape’s non-dithered colors on the Mac OS, while the right palette contains the colors as seen on a Windows OS.

4.2.1.2 Dithering

Dithering creates additional colors and shades from an existing palette by interspersing pixels of different colors. These can be distributed either randomly or regularly. The higher the resolution of the display, the smoother the dithered color will appear to the eye.

Dithering can be used to create backgrounds, fills and shading, and halftones for printing. It is also used in conjunction with anti-aliasing in order to make jagged lines appear smoother.

The following links contain an image in true color, dithered color, and substituted color.

Dithering in high-resolution images causes undesirable artifacts. However, it can be very effective in low-resolution imaging.
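A minimal sketch of ordered dithering, using the standard 2x2 Bayer threshold pattern to reduce 8-bit gray values to black or white pixels; the input row is made up for illustration:

# 2x2 Bayer threshold matrix, scaled to the 0..255 range
BAYER_2X2 = [[0, 128],
             [192, 64]]

def dither_row(values, row_index):
    """Dither one row of 8-bit grayscale values to 0 (black) or 255 (white)."""
    out = []
    for col, v in enumerate(values):
        threshold = BAYER_2X2[row_index % 2][col % 2]
        out.append(255 if v > threshold else 0)
    return out

# A flat mid-gray (value 100) row: the interspersed pattern simulates the gray
print(dither_row([100] * 8, 0))  # [255, 0, 255, 0, ...]
print(dither_row([100] * 8, 1))  # [0, 255, 0, 255, ...]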

4.2.2    Chromaticity Diagrams

Newton examined this phenomenon from the standpoint of light rays instead of pigments, and developed the color triangle as a method of classifying and specifying colors. This triangle is also known as a normalized chromaticity diagram. It is a somewhat arbitrary, but nevertheless, useful tool.

Normalized Chromaticity Diagram

When the visible light spectrum is plotted on the standard or normalized chromaticity diagram[2], the locus passes out of the first quadrant.

Only those colors within the spectrum locus boundary are visible to the eye. The straight line joining the two ends of the visual spectrum is sometimes referred to as the purple boundary. The alychne is the line along which colors would lie if they could have zero luminance.

This graph shows that a whole range of colors extends outside the normalized chromaticity triangle. Furthermore, the colors in the other quadrants are defined in terms of negative values on the red and green axes. By redefining the axes, all values on the graph can be kept positive. For this reason, the chromaticity diagram was redrawn in 1931 by the CIE† using new stimuli arbitrarily labeled X, Y, and Z.

This graph now describes all the visible colors in terms of positive quantities.

The center of the chromaticity diagram is daylight or C white light. It is interesting to note that the area defined as white is quite large, indicating that it has many different hues. The locus of constant purity corresponds approximately to saturation, and is an attempt to quantify a subjective quantity.

Most studio monitors are adjusted to D6500 white instead of C white.

 

Chromaticity Coordinates

Color           x         y
C White         0.310     0.316
D6500 White     0.3127    0.3290
Red             0.67      0.33
Green           0.21      0.71
Blue            0.14      0.08

 

The eye is not equally receptive to all colors, and it is easier to detect subtle changes in some colors than in others. It is not necessary to provide a high degree of detail in those colors which the eye cannot fully appreciate. This has a significant impact later when determining the bandwidth requirements of the broadcast chroma signals.

When electrons bombard a phosphor, it tends to glow. Depending on its exact chemical composition, it can glow in different colors and have different degrees of persistence. Different organizations have standardized on slightly different phosphors to reproduce the primary colors. By varying the amount of electron excitation, any color located within the triangle formed by connecting the three primary values can be displayed. Colors lying outside the triangle are not reproducible. The locations of various phosphors on the chromaticity diagram are found at this link.

The eye adapts to a wide range of ambient light conditions, whereas cameras cannot. This means that if one were to setup a television system and rely solely on chromaticity values determined by some instrument, under certain lighting conditions the colors would appear wrong. This is observed in film, where an indoor scene may appear yellow if not illuminated properly. Consequently, filters or additional light sources are needed to compensate.

From this very cursory introduction, we note that the eye is an extremely complex organ, that it is still poorly understood, and that no machine comes close to its overall performance.

4.3     Forming an Image

There are many kinds of imaging devices or cameras in use today, but they all have the same tasks. The first is to create an optical image and the second is to convert the image to some other medium such as film or an electrical signal. An optical image is most often created by means of a lens assembly but it can also be done by a small pinhole.

4.3.1    Pinhole Camera

Camera is Latin for vault or chamber. The simplest of all cameras is the camera obscura or dark chamber. If a small hole is made in the chamber, an image will appear on the far wall. This image will be inverted from top to bottom and left to right.

If a photosensitive film is placed where the images form, a photograph can be taken. The word photograph is taken from two Greek words photo meaning light and graph meaning writing or drawing.

This type of imaging is very limited. Since the hole must be quite small, very little light is available to make the image. Furthermore, distortion occurs if the edge of the hole is not well defined.

4.3.2    A Standard Camera

Digital Camera technology by DuncanTech

http://www.duncantech.com/Default.htm

 

To overcome the limitations of a pinhole camera, a lens is used to collect light and form the image.

Since a lens opening is much larger than a pinhole, it admits more light and creates a brighter image. The lens duplicates the pinhole by directing the bulk of the light entering it to converge at a single point.

The lens may introduce a certain amount of distortion to an image, such as:

Spherical aberration:  rays parallel to the axis meet at different focal points. This can be corrected by combining a convex and concave lens.

Coma:  This is a function of the lens thickness as it varies from the center outward. Parallel rays at an angle to the principal axis will not all pass through the lens with the same intensity. Images produced by marginal rays are darker, so the image appears to have a comet-like tail.

Astigmatism:  occurs when oblique rays produce a line focus instead of a point.

Field curvature:  with a convex lens, the outer edges of the image tend to appear to move towards the center.

Chromatic aberration:  since the focal length of a lens is dependent on the refractive index, which is in turn dependent on wavelength, a color separation of images can occur.

4.4     Resolution

In order to reproduce or transmit an image, it is necessary to break it up into its constituent parts. The greater the detail required in the final reproduction, the finer the individual picture elements must be. There are practical limits to this process. At some point, the human eye is unable to detect any more detail even if it is provided. At the other extreme, if not enough detail is obtained, it is impossible to recognize the object. Consequently, one must take into account certain factors in attempting to determine an acceptable resolution. These factors include:

•   The purpose for the image

•   Image medium

•   Available bandwidth

•   Signal power

•   Cost

Modern printing, whether it is in black and white or color, is accomplished by depositing small ink dots on paper. Either the density or size of the dots is altered to vary the shading or color of the image. In the case of full color printing, the relative size of three primary pigment dots is altered to provide the full range of colors.

4.4.1    Video Display Resolution

Most video images consist of horizontal lines displayed on a CRT†. To create the illusion of motion, 30 frames per second are displayed. To overcome the flickering problem associated with this rate, each frame is decomposed into even and odd line fields. The even and odd fields are transmitted sequentially but are interlaced on the display tube. Consequently 60 partial images or fields are displayed every second.

In North America, the displayed image is composed of:

•   525 horizontal lines in a picture frame

•   30 frames or 60 fields per second

•   Between 482 and 495 lines may be visible, at the discretion of the TV station

•   Typically the number of visible lines is 485

•   The CRT aspect ratio is 4:3

•   For equal horizontal and vertical resolution, there must be the equivalent of (4/3) x 485 ≈ 647 vertical lines

Color display tubes have slightly lower resolution than monochromatic ones, because the image must be further broken down into a triad of primary color elements, as in color printing.

Video Sync Formats

 

4.4.2    Bandwidth Requirement

Each horizontal line has a duration of 63.5 µSec and is broken down into two parts. The visible part of the line is about 53 µSec long, followed by a blanking pulse about 10.5 µSec long. The blanking pulse masks the beam retrace.

Since the maximum display resolution is 647 equivalent vertical lines, the time duration of a pixel or picture element can be determined:

53 mSec/647  = 82.69 nSec

The most detailed image would consist of alternating black and white pixels. The fundamental frequency of such an image would therefore be:

1/(2x82.69 nSec)  =  6.047 MHz

In reality, this type of image could never be broadcast since, if the image moved by a pixel, it would cause an image luminance reversal. Consequently, the image resolution is reduced by what is known as the Kell factor (0.7). The actual signal bandwidth required is 0.7 x 6 MHz = 4.2 MHz, and this is practically reduced even further to 4 MHz, since very few scenes require such high resolution.

Putting all of this together into one mathematical expression results in:
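In one common form (the exact arrangement of the terms varies between references):

Bandwidth = 1/2 x Kell factor x aspect ratio x (visible lines)^2 x frame rate x (total line time / active line time)

With the values used above (0.7, 4/3, 485 lines, 30 frames per second, and 63.5/53), this works out to roughly 4 MHz, in agreement with the figure quoted.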

From this expression, we see that bandwidth is proportional to the square of the number of lines. This leads to the basic problem associated with high definition TV: to increase the image definition, more lines are needed, but the required bandwidth quickly becomes excessive.

NTSC Video Channel Signal Distribution

4.5     Video Cameras

Digital Video – Basic Principles

http://www.infosyssec.org/infosyssec/cctv_.htm

http://www.tubenet.com/camera.html

http://www.tvcameramen.com/

 

Video Camera Slide Show

 

A basic video camera consists of:

•   Lens assembly

•   Camera tube or equivalent

•   Synchronization circuits

•   Gamma correcting circuit

•   Video amplifier

•   Viewfinder

4.5.1    Camera Tubes

http://www.tubenet.com/camera.html

 

Iconoscope (1933)
•   Photoemissive
•   Launched NBC in 1939

Image iconoscope (1934)
•   The first tube to include internal image intensification and low velocity scanning
•   Linear response
•   This device started the European TV networks

Image orthicon (1946)
•   Photoemissive, high sensitivity
•   Typical face sizes: 76 & 114 mm
•   Became the standard tube in TV broadcasting

Vidicon (1952)
•   Antimony trisulfide photoconductor is deposited on a thin transparent substrate
•   Resistance changes as a function of light intensity
•   Relatively low sensitivity, has dark current
•   Widely used for live pickup
•   Typical face sizes: 13, 17, 25, & 38 mm
•   γ = 0.65

Plumbicon (1962)
•   Lead oxide sandwiched between P and N semiconductors
•   Uses reverse biased junctions, no dark current
•   Diode gun plumbicon introduced in 1979, and widely used
•   γ = 1

Saticon (1973)
•   P-type selenium arsenic sandwiched between antimony trisulfide on the scanning side, and layers of selenium tellurium and N-type selenium arsenic on the image side
•   Uses reverse biased junctions, no dark current
•   γ = 1

Newvicon (1974)
•   Target is made of zinc selenide & zinc cadmium telluride
•   Most sensitive (20 x vidicon, 2 x silicon vidicon)

Chalnicon (1972)
•   Target is made of cadmium selenide
•   Good resolution & sensitivity, low dark current, high lag

Silicon diode vidicon (1967)
•   Process patented in 1958
•   Contains > 500 K diode islands etched into n-type phosphorus-doped silicon, oxidized and diffused with boron
•   Sometimes referred to as a solid state camera tube
•   High target capacitance and dark current cause lag
•   Greater infrared sensitivity
•   Used for surveillance

FPS Vidicon
•   FPS - focus, protection, and scan
•   First tube to use electrostatic deflection (all previous tubes used magnetic deflection)
•   Contains about 1/3 of the parts of a standard vidicon
•   Uses a silicon diode array target for military missile tracking applications
•   Also made with antimony trisulphide or lead oxide targets for broadcast applications

CCD (1980)
•   Broadcast quality devices typically have 492 rows
•   Extremely sensitive
•   Used in astronomy and surveillance

 

The early camera tubes were complex, photoemissive, and very sensitive, but often suffered from marginal S/N. Later tubes were relatively simple, photoconductive, more rugged and smaller, but less sensitive.

Dark current is the current that flows from the signal output when the camera tube is capped. This value should ideally be zero, since the signal output currents of photoconductive tubes are measured in nanoamps.

4.5.1.1     Vidicon Tube

The vidicon tube is a relatively simple and rugged device. The term is often used generically to refer to a wide range of imaging tubes that have a very similar construction.

An image is focused on a photoconductive layer. The conductivity of this layer is proportional to the light intensity.

An electron beam is accelerated to the target, but is nearly stopped by grid 4. The beam scans the target and deposits a uniform charge over it.

Since the conductivity of the target is light dependent, a current proportional to the image intensity is produced. The image current is passed through a resistor, thus generating a video voltage.

4.5.1.2     Image Orthicon Tube

There are a number of different kinds of orthicon tubes. In all cases, a low velocity beam strikes the target orthogonally during the scanning process.

The image orthicon tube is a marvel of ingenuity. It is a very sensitive tube and is ideal for studio applications. It is composed of three sections:

•   Image section

•   Scanning section

•   Multiplier

Imaging Section

A lens assembly is used to focus an image onto a photo emissive layer deposited on a glass plate known as a translucent photocathode. This cathode emits electrons proportional to light intensity.

The electrons are drawn off the cathode by an accelerator grid and move towards a positive collector mesh. This mesh has approximately 2 million holes per square inch. The electrons pass through the mesh and strike the target. The target is 3 µm thick and placed 40 µm from the mesh.

This process results in an image being transferred from the translucent photocathode to the target. The target emits, on average, 5 secondary electrons for each incident electron, thus creating an intensified, positively charged image. The secondary electrons attempt to return to the accelerator grid but instead are collected by the mesh.

The collected electrons are returned to the photoemissive cathode, thus preventing it from being depleted. The target, on the other hand, has a high resistivity and retains the positive image.

Scanning Section

An electron beam is accelerated towards the target by grids 3 and 4. Decelerator grid 5 drops the beam velocity to zero as it reaches the target.

The target captures some of the beam electrons since it contains a positive image, thus effectively erasing the image. The electrons that are not captured are accelerated back to a photo detector by grids 3 and 4.

The return beam current is diverted to an electron multiplier section where the current is multiplied several hundred times. Passing the multiplier output current through a resistor creates a video signal.

4.5.1.3     Single Tube Color Camera

In 1953 RCA constructed a single tube color camera by using red, green and blue filter stripes on the vidicon, each of which had a separate output. This tube was not successful. Many years later, Hitachi solved the problems associated with this tube by making significant improvements in manufacturing and semiconductor technology.

JVC currently makes a single tube color camera having green, cyan, and white filters. Unlike other attempts, this tube has a single output. The green filter passes only green light, cyan passes green and blue light, and white passes green, blue and red. The luminance signal is obtained by putting the resultant stepped signal through a low pass filter to obtain the average response. The chroma signals are created through a more elaborate signal processing arrangement.

4.5.2    Charge Coupled Devices (CCD)

Technical Overview: CCD Technology by Kodak

CCD Image Sensors and Analog to Digital Conversion by Texas Instruments

 

CCDs are solid state imaging devices based on MOS technology. An image is focused on a light-sensitive capacitor array. The array used in the television industry typically consists of 492 rows and 510 columns. A charge proportional to the light intensity is produced at each element when an image is focused on the array. The charge is then clocked off as a video signal.

CCDs are widely used in camcorders as well as security, astronomy, and industrial vision applications. Shutter speeds of 1/1000 of a second are possible, thus supporting high-resolution freeze frame applications. The only major area where CCDs are not to be found is in a broadcast studio.

The two principal methods for extracting the charge image from the CCD are frame transfer and interline transfer. There is also the frame interline transfer method, which combines features of both.

Frame Transfer

The charge built up in the imaging area is clocked into the storage area during vertical blanking. Each cell acts as an imaging point and shift register. In order to prevent vertical blurring during the shifting process, a mechanical shutter is used to mask the imaging array during the transfer interval. The need for a mechanical shutter somewhat negates the purpose of a solid-state camera.

Interline Transfer

During the vertical blanking period, the image is shifted sideways into a storage register. The registers then shift the image down one position during the horizontal interval. Since the storage registers reduce the available imaging area, the sensitivity of the camera is reduced.

Since adjacent imaging cells do not affect each other, there is no lag or blooming. The sensors can also operate over a very wide range of lighting conditions. To further expand this capability, CCDs may be operated linearly for normal conditions, or dynamic contrast control can be used to compress highlights.

There are two methods used to create interlacing: frame integration and field integration. In frame integration, the CCD requires 2 fields to build up a charge image. Alternate lines are then clocked out to create even and odd fields. Field integration involves combining adjacent lines to simulate even and odd fields. The odd field is comprised of lines 1+2, 3+4, etc., and the even field of lines 2+3, 4+5, etc.
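A minimal sketch of the field-integration pairing just described; the line values are illustrative only:

def field_integration(lines):
    """Combine adjacent CCD lines to form the odd and even interlaced fields.

    'lines' is a list of scan lines (line 1 is lines[0]).  The odd field pairs
    lines 1+2, 3+4, ...; the even field pairs lines 2+3, 4+5, ...
    """
    odd = [lines[i] + lines[i + 1] for i in range(0, len(lines) - 1, 2)]
    even = [lines[i] + lines[i + 1] for i in range(1, len(lines) - 1, 2)]
    return odd, even

# Six illustrative lines, each holding a single brightness value
odd, even = field_integration([10, 20, 30, 40, 50, 60])
print(odd)   # [30, 70, 110]  -> lines 1+2, 3+4, 5+6
print(even)  # [50, 90]       -> lines 2+3, 4+5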

4.5.2.1     Color CCD Sensor[3]

A recent development is the advent of color CCDs used in 8mm camcorders. This device uses a color filter mask over the array so that only a single sensor is needed instead of the usual 3.

Frame integration and interline transfer are used to create interlacing. Green appears on every line, but red and blue appear on alternate lines. Delay lines are used to provide vertical correlation, thus allowing luminance and chrominance signals to be derived over two lines.

By placing filters in front of the sensing elements, it is possible to create a single sensor color camera. Since only one color is available at any instant when clocking out the signal in the CCD array, signal processors and delay lines are needed to construct the standard luminance and chroma signals. This technique is currently being used in 8 mm video recorders.

4.5.3    Gamma Correction

It is essential that optical linearity be maintained between the camera and display tube. This is relatively easy to do in a B&W system, but becomes more complex with the introduction of color.

Equal increments of control voltage do not provide equal increments in brightness on a CRT, because of non-linear grid and plate characteristics.

The overall relationship between the optical input and the optical output is known as gamma (γ).
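In its usual form, with light levels normalized to the range 0 to 1:

light output = (light input)^γ

where γ is the product of the individual exponents of the camera tube, any correction amplifier, and the display; an overall linear system requires this product to equal 1.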

              

A gamma amplifier, a non-linear amplifier placed after the camera tube, is used to produce an overall linear response. Each camera tube has a different light transfer characteristic[4].

The situation becomes even more complex when it is realized that imaging devices do not have flat spectral response characteristics[5].
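A minimal sketch of gamma pre-correction at the camera, assuming a display exponent of 2.2; actual exponents differ between tube and display types, as noted above:

DISPLAY_GAMMA = 2.2  # assumed CRT exponent for this example; real devices vary

def gamma_correct(v, display_gamma=DISPLAY_GAMMA):
    """Pre-distort a normalized (0..1) camera signal so that the
    camera -> CRT chain is linear overall: (v ** (1/g)) ** g == v."""
    return v ** (1.0 / display_gamma)

signal = 0.5                       # mid-gray scene luminance
corrected = gamma_correct(signal)  # value actually transmitted
displayed = corrected ** DISPLAY_GAMMA
print(round(corrected, 3), round(displayed, 3))  # 0.73 transmitted, 0.5 back on screen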

4.5.4    Color Separation

http://www.novia.net/~ereitan/Color_Cameras.html

 

Most color cameras contain a dichroic prism and three camera tubes. A dichroic prism or mirror separates a single image into three images, each of a different color.

There are several different ways to define color space, depending upon the method used to create color. Optically based systems such as display tubes use a color additive method. This means that adding primary colors together creates white. The simplest color space of this type is simply RGB, or red, green, blue.

Subtractive processes create color by combining pigments. This means that adding primary colors together creates black. The simplest of these is the CMY color space, or cyan, magenta, yellow. Since text is often black, adding a black pigment to the color space can save a great deal of primary color pigment. Hence the CMYK color space is used in four-color printing.
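A minimal sketch of the additive/subtractive relationship just described; the simple black-generation rule shown (K = minimum of C, M, Y) is only one of several used in practice:

def rgb_to_cmyk(r, g, b):
    """Convert additive RGB (0..1) to subtractive CMYK (0..1)."""
    c, m, y = 1 - r, 1 - g, 1 - b      # complementary subtractive primaries
    k = min(c, m, y)                   # pull the common component into black ink
    if k == 1.0:                       # pure black: avoid dividing by zero
        return 0.0, 0.0, 0.0, 1.0
    return ((c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # red   -> (0, 1, 1, 0)
print(rgb_to_cmyk(0.0, 0.0, 0.0))  # black -> (0, 0, 0, 1)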

The color space is of necessity determined by the media used to create the image. Consequently, the color space for monitors, film, printing etc is different. In order to be able to reproduce images in multimedia formats, the relationship between these spaces must be examined.

4.5.4.1     Chroma Signals

Consumer Analog RGB & YUV Video Formats by Harris

YCbCr to RGB Considerations by Harris

Composite Signal Separation by Harris

 

In order to create three primary color images, it is first necessary to create three identical images. The images are then passed through an optical filter that passes only the red, green, or blue component.

Test Card Scan

In the computer industry, it would be possible to keep these three signals separate and apply them directly to the CRT control grids. However, in broadcast applications this was not practical, since color TV would then not have been backward compatible with B&W TV.

For this reason, the camera tube RGB signals are sent to a matrix to generate the color difference signals. These in turn can be manipulated in such a way as to be backward compatible. The matrix output resembles:
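Using the luminance weighting given earlier, and omitting the scaling factors applied in the actual encoder:

Y     =  0.30 R + 0.59 G + 0.11 B
R - Y =  0.70 R - 0.59 G - 0.11 B
B - Y = -0.30 R - 0.59 G + 0.89 B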

The matrix color difference signals are then applied to a chroma modulator.

Chroma Modulator

Summing the color difference signals in the above modulator translates the chroma signals into polar quantities.

This form of modulation is known as QAM†, and is widely used in communications. Everything from modems to satellite links uses a digital version of QAM.

In this particular application, the output phase corresponds to hue and the amplitude to saturation.
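A minimal sketch of this polar conversion, using the unscaled color-difference values from the table that follows; for saturated red it reproduces the magnitude and phase listed there:

import math

def chroma_polar(b_minus_y, r_minus_y):
    """Return (magnitude, phase in degrees) of the quadrature-modulated chroma."""
    magnitude = math.hypot(b_minus_y, r_minus_y)
    phase = math.degrees(math.atan2(r_minus_y, b_minus_y)) % 360
    return magnitude, phase

# Saturated red: B-Y = -0.30, R-Y = 0.70
print(chroma_polar(-0.30, 0.70))  # about (0.765, 113.2)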

 

 

 

Color Difference Signals

Color     Y       B-Y      R-Y      Magnitude    Phase
Red       0.30    -0.30     0.70    0.76518      113.2°
Green     0.59    -0.59    -0.59    0.83439      225°
Blue      0.11     0.89    -0.11    0.89677      353°

 

The complementary colors are 180° out of phase with the primary colors, but have the same magnitude. The Y value for a complementary color is 1 minus the primary value.

                i.e. for cyan:                          Y = 1 - Red  = 1 - 0.3 = 0.7

The magnitude and phase of the chroma signal are superimposed on the Y signal.

Vector Scope Display of the Color Bar Test Pattern

Dichroic Mirror Color Separation

Dichroic mirrors are made by coating glass with alternating layers of high and low index of refraction materials. The layer thickness must be 1/4 wavelength of the color to be reflected. Each mirror reflects almost all of the color it is tuned for, and passes through about 90% of the remainder. This high degree of reflection occurs when the angle of incidence to the mirror is 38°.
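As a worked example of the quarter-wave rule (the refractive index here is an assumed value for a typical high-index coating, not a figure from the source): a layer tuned to reflect green light at 550 nm in a material with refractive index n = 2.3 would be t = λ / (4n) = 550 nm / (4 x 2.3) ≈ 60 nm thick.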

Typical Dichroic Mirror Response[6] 

Dichroic prisms can also separate color images. The angles and refractive indices are chosen to allow for partial reflection and refraction. The surfaces can then be coated to filter out the appropriate wavelength. Each image is focused on a standard camera tube. In this way, three signals are created, each corresponding to a separate color.

Dichroic Prism Color Separation[7]

The distance between the lens and image is known as the back focal distance. This distance must be large since a dichroic element is placed between the two. For this reason, the lens assembly of a video camera is somewhat different from that of a film camera. This limitation also makes it difficult to design a wide-angle lens for video applications.

4.6     Hubble Space Telescope

HST Primer

 

Assignment Questions  

 

Quick Quiz

1.     The [rods, cones] of the human eye detect luminance but not color.

2.     The image is focused on a photo (emissive, conductive) layer in the Image Orthicon tube, and is focused on a photo (emissive, conductive) layer in a Vidicon tube.

3.     The ____________________ camera tube scans a charge image created by secondary electron emission.

4.     Label the following drawing.

5.           Fill in the following table:

Movies

•   Number of frames per second: __________

NTSC TV

•   Number of frames per second: __________

•   Number of lines per frame: __________

•   Number of visible lines per frame: __________

•   Actual signal bandwidth: __________

•   Broadcast video channel bandwidth: __________

 

 

6.     The back focal distance for a 35 mm film camera lens is [greater than, equal to, less than] that of a video camera lens.

7.     The sensitivity of color video cameras is less than that of B&W cameras because the dichroic element acts as a video attenuator. [True, False]

8.     A color CCD array uses [interlace, progressive] scanning.

Analytical Problems

1.     Sketch the luminance signal for one frame if a TV camera that scans only 9 horizontal lines per frame progressively scans the following test cards.

Composition Questions

1.     What is a dichroic prism?

2.     What modulation scheme is used for the luminance signal, and why?

3.     What is gamma correction and where is it used?

 

For Further Research

 

Video Techniques [2nd ed.], Gordon White, Heinemann Professional Publishing, 1988

Video Engineering. Andrew F Inglis, McGraw-Hill, 1993

Electronic Cinematography, Harry Mathias & Richard Patterson, Wadsworth Publishing Company, 1985

Special Report: Towards an Artificial Eye, IEEE Spectrum, May 1996

 

The Human Eye:

http://www.campus.bt.com/CampusWorld/pub/ScienceNet/database/Biology/Senses/b00514c.html

http://www.coopervision.com/cv/norma

http://www.campus.bt.com/CampusWorld/pub/ScienceNet/qpages/engq.html

 

Digital Cameras

http://www.shortcourses.com/

http://www.dvcco.com/

 

Video Capture Cards

http://www.yahoo.com/Business_and_Economy/Companies/Computers/Hardware/Components/Video_Cards/

 

Multimedia

http://www.bsu.edu/classes/czerwinska/wfm/

 

CCDs:

http://www.dalsa.com/basics.htm

 

Camcorder:

http://www.philipsmagnavox.com/product/pv331cam.html

http://www.videomaker.com/

http://www.igd.fhg.de/www/projects/icib/tv/org/smpte/s17.42/dia-fr.html

 

3D:

http://www.3d-web.com/

http://www.stereoscopy.com/

http://www.vis.colostate.edu/bulletins/anim_intro.html

 

Color:

http://168.143.225.37/redpage.asp

http://www.color.org/

http://www.tru-color.com/tru-color/

http://www.connect.hawaii.com/hc/webmasters/Netscape.colors.html

http://www.phoenix.net/~jacobson/rgb.html

http://www.stars.com/Authoring/Graphics/Colour/Resources.html

http://www.color.com/

 

Television:

http://www.novia.net/~ereitan/index.html

http://www.pharis-video.com/

 



[1]       Television Engineering Handbook, K. Blair Benson, ed., FIG. 2-1

†       Commission Internationale de l'Eclairage

†       Hue Saturation Intensity

[2]       Television Engineering Handbook, K. Blair Benson, ed., FIG. 2-7

†       Commission Internationale de l’Eclairage

†       Cathode Ray Tube

[3]       Video Techniques, Gordon White, 1988, FIG 154

[4]       Fig. 11-35, Television Engineering Handbook, Benson

[5]       Fig. 11-34, Television Engineering Handbook, Benson

†       Quadrature Amplitude Modulation

[6]       Based on  fig.3-16, Television Engineering Handbook, K. Blair Benson

[7]       Based on  fig.14-41, Television Engineering Handbook, K. Blair Benson