Machine Vision and Imaging Library Comprehensive Machine Vision Glossary of Terms

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
0-9
1D – One dimensional.
2D – Two dimensional.
3D – Three dimensional.
A
Aberration – The failure of an optical lens to produce an exact point-to-point correspondence between the object and its resulting image. Various types are chromatic, spherical, coma, astigmatism and distortion.
Absorption – The loss of light of certain wavelengths as it passes through a material and is converted to heat or other forms of energy. (-)
Accuracy – The extent to which a machine vision system can correctly measure or obtain a true value of a feature. The closeness of the average value of the measurements to the actual dimension.
Active Illumination – Lighting a scene with a light source coordinated with the acquisition of an image. Strobed flash tubes, pulsed lasers and scanned LIDAR beams are examples.
Algorithm – A set of well-defined rules or procedures for solving a problem or providing an output from a specific set of inputs.
Alpha Risk (α-risk) – The risk of rejecting good product.
Ambient light – Light which is present in the environment of the imaging front end of a vision system and generated from outside sources. This light, unless used for actual scene illumination, will be treated as background noise by the vision system.
Analog – A smooth, continuous voltage or current signal or function whose magnitude (value) is the information. From the word “analogous,” meaning “similar to.”
Analog-to-Digital Converter (A/D) – A device which converts an analog voltage or current signal to a discrete series of digitally encoded numbers (signal) for computer processing.
Architecture – For a vision system, the hardware organization designed for high speed image analysis.
Area – Portion or area of the image to be analyzed. Area analysis measures the number of pixels which fall in a specified range of gray levels for the feature of interest.
Area Array Camera – A solid state imaging device with both rows and columns of pixels, forming an array which produces a 2D image.
Array Processor – A specially designed vision engine peripheral which attaches to the host to speed up arithmetical calculations by using parallel processing techniques. The host manages image data access and analysis results.
Artifact – An artificially created structure (by accident or on purpose), form or shape, usually part of the background, used to assist in measurement or object location.
Artificial Intelligence – The capability of a computer to perform functions normally attributed to human intelligence, such as learning, adapting, recognizing, classifying, reasoning, self-correction and improvement. Rarely found connected to vision systems.
ASIC – An acronym for Application Specific Integrated Circuit. All vision system elements including firmware can be integrated onto one ASIC.
Aspect ratio – The ratio of the width to the height of a frame of a video image. The U.S. television standard is 4:3 or 1.333
Astigmatism – A defect in a lens which causes blur or imperfect image results, since the rays from a given point fail to meet at the focal point. (-)
Asynchronous – A camera characteristic which allows the return to top-of-frame to occur on demand, rather than synchronously following the 60 Hz power line scanning frequency.
Attribute List – List of distinguishing features which are selected for IP calculation.
Autofocus – The ability of an imaging system to control the focus of the lens to obtain the sharpest image on the detector. Edge crispness is a typical control variable.
B
Background – The part of a scene behind the object to be imaged. (-)
Backlighting – Placement of a light source behind an object so that a silhouette of that object is formed. It is used where outline information of the object and its features is important rather than surface features.
Backpropagation – A training technique which adjusts the weights of the hidden and input layers of a neural net to force the correct decision for a given feature vector data input set.
Baffle – A type of shield that prohibits light from entering an optical system. (-)
Bandpass Filter – An absorbing filter which allows a known range of wavelengths to pass, blocking those of lower or higher frequency. (2)
Bar Code – An identification system that employs a series of machine-readable lines of varying widths of black and white. Usually read with a laser scanner. (-)
Bar Code (2D) – An arrangement of rectangles and spaces that contains far more information than a traditional bar code. (-)
Barrel Distortion – An optical imperfection which causes an image to bulge convexly on all sides similar to a barrel. (-)
Beamsplitter – An optical device which divides one beam into two or more separate beams. A simple coated piece of glass in the optical path might reflect 60% of the light down onto the object, while allowing the other 40% to pass. (2)
Beta Risk (β-risk) – The risk of accepting bad or defective product.
Binary – An image with pixel values of one or zero.
Binary image – A black and white image whose data is represented as a single bit either zeros or ones, in which objects appear as silhouettes. The result of backlighting or thresholding.
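As a sketch, the thresholding that produces a binary image can be written as below; the image layout (lists of rows of 0–255 gray values), the `threshold` helper name and the cutoff of 128 are illustrative assumptions, not part of any standard:

```python
# Minimal thresholding sketch: map each gray pixel to 1 (object) or
# 0 (background). The 128 cutoff is an arbitrary illustrative choice.
def threshold(image, level=128):
    return [[1 if p >= level else 0 for p in row] for row in image]

gray = [[200, 40], [130, 90]]
binary = threshold(gray)
print(binary)  # [[1, 0], [1, 0]]
```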
Bit – An acronym for a Binary digit. It is the smallest unit of information which can be represented. A bit may be in one of two states, on or off, represented by a zero or a one.
Bit Map – A representation of graphics or characters by individual pixels arranged in rows and columns. Black and white require one bit, while high definition color up to 32. (-)
Blanking – The time during a raster scan retrace when the video signal is suppressed. (-)
Blob – A single, connected region in a binary or grayscale image. (2)
Blob Analysis – Identification of segmented objects in an image based on their geometric features (ie area, length, number of holes). (SRI) (2)
Borescope – A device for internal inspection of difficult access locations such as pipes, engines and rifle barrels. Its long narrow tube contains a telescope system with a number of relay lenses. Light is provided via the optical path or fiber bundles. A 45 degree mirror at the end allows inspection of tube walls.
Boundary – The line formed by the joining of two image regions, each having a different light intensity. The edge of a region or object.
Bounding Box – The four coordinates which define a box around the object parallel to the major and minor axis. (SRI feature)
Brewster’s Angle – The angle at which incident light, by reflecting at a boundary between two media of different refractive indices (ie air/glass or air/water), becomes plane polarized. For air/glass it is about 56 degrees.
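The angle follows directly from the two refractive indices as θ_B = arctan(n₂/n₁); a small sketch (the `brewster_angle` helper name is hypothetical):

```python
import math

# Brewster's angle: theta_B = arctan(n2 / n1).
# For air (n1 = 1.0) to glass (n2 ~ 1.5) this gives about 56.3 degrees.
def brewster_angle(n1, n2):
    return math.degrees(math.atan(n2 / n1))

print(round(brewster_angle(1.0, 1.5), 1))  # 56.3
```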
Brightness – The total amount of light or incident illumination on a scene or object per unit area. Also called intensity.
Bus – A set of parallel conductors which allow devices attached to it to communicate with the CPU. The bus consists of three main parts: control lines, address lines and data lines. Control lines allow the CPU to specify which operation an attached device should perform, i.e. read or write. The address lines allow the CPU to reference certain (memory) locations within the device. The meaningful data to be sent to or retrieved from a device is placed onto the data lines.
Byte – Eight bits of digital information. A byte has values from 0 to 255, and is the unit most commonly used to represent the gray scale value of one pixel.
C
C-mount – A threaded means of mounting a lens to a camera.
Calibration – 1. A measurement or comparison against a standard. 2. The determination of any equipment deviation from a standard source so as to ascertain the proper correction factors.
CCD – Charge Coupled Device. A photo-sensitive image sensor implemented with large scale integration technology.
CCD – (Frame Transfer) The entire image is transferred from the sensing area to a storage area on chip. Data (charge) is read out from the storage area in a full frame mode. This workhorse of the industry is also capable of non-RS-170 operation.
CCD – (Interline Transfer) Data (charge) is transferred simultaneously out by odd and even lines or fields directly from the image sensors to their corresponding sensor registers. The output from the camera is always one field behind the image being captured.
Centroid – Points that are, respectively, the center of a given area or midpoint of a given line segment.
Character – A single letter, digit or punctuation symbol requiring one byte storage. (-)
Character Recognition (OCR) – Imaging and recognizing individual text characters in a scene. Also called Optical Character Recognition. (-)
Character Verification (OCV) – Imaging and verifying the correctness, quality and legibility of known text characters in an image. Also Optical Character Verification. (-)
Child – Computer programming term. In data structures, any node in a tree except the root; a direct descendant of a given node.
Chroma – The quality of a color including both the hue and saturation. Not present in gray. (-)
CID – Charge Injection Device – A photo-sensitive image sensor implemented with large scale integration technology. Based on charge injection technology, a CID can be randomly addressed, non-destructively read, can be subscanned in a small region and is less susceptible to charge overflow from bright pixels to neighbors. The pixel structure is contiguous with maximum surface to capture incident light which is useful for sub-pixel measurement.
CIE – An acronym for a chromaticity coordinate system developed by the Commission Internationale de l’Eclairage, the international commission on illumination. In the CIE system, a plot of ratios (x, y and z) of the three standard primary colors (tristimulus values) to their sum. The most common diagram is the 2 dimensional CIE (x,y).
Classification – Assignment of image objects to one of two or more possible groups. Decisions are made by evaluating features either 1) structurally, based on relationships, or 2) statistically. For example, 1) a penny is round, has a certain diameter (+/- a tolerance) and has a histogram with a given mean value; or 2) statistically, the object is measured a number of times, then the average and standard deviation are recorded. After training, the features are weighted based on their significance in object identification. For multiple features, absolute values are used.
Closing – A dilation followed by an erosion. A morphological operator useful to close holes and boundaries.
Coaxial Illumination – Front lighting with the illumination path running along the imaging optical axis and usually introduced with a 45 degree angle beam splitter.
Coherent Fiber Optics – A bundle of optical fibers with the input and output spatial x-y relationship maintained, resulting in near spatially correct image transmission.
Collimate – To produce light with parallel rays. (-)
Collimated Lighting – Radiation from a given point with every light ray considered parallel. In actuality, even light from a very distant point source (ie a star) diverges somewhat. Note that all collimators have some aberrations.
Color – A visual object attribute which may be described by a “coordinate system” such as hue, saturation and intensity (HSI), CIE or LAB. Wavelengths in the visible part of the electromagnetic spectrum to which retinal rods respond.
Color Space – A two or three dimensional space used to represent an absolute color coordinate. RGB, HSI, LAB and CIE are all representations of color spaces.
Color Temperature – A colorimetric concept related to the apparent visual color of a source, but not its actual temperature.
Colorimetry – Techniques used to measure color of an object or region and to define the results in a comparison or coordinate system.
Composite Video – A television signal which is produced by combining both a video or picture signal with horizontal and vertical synch and blanking signals. (-)
Condenser Lens – Used to collect and redirect light for the purpose of illumination. Often used to collect light from a small source and project even light onto an object.
Connectivity Analysis – A Stanford Research Institute routine used to determine which pixels are interconnected and part of the same object or region. The results are used for blob analysis.
Contrast – The difference of light intensity between two adjacent regions in the image of an object. Often expressed as the difference between the lightest and darkest portion of an image. Contrast between a flaw or feature and its background is the goal of illumination. (2)
Contrast Enhancement – Stretching of the gray level values between dark and light portions of an image to improve both visibility and feature detection.
Convolution – Superimposing a m x n operator (usually a 3×3 or 5×5 mask) over an area of the image, multiplying the points together, summing the results to replace the original pixel with the new value. This operation is often performed on the entire image to enhance edges, features, remove noise and other filtering operations.
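A minimal sketch of the operation, assuming the image is a list of rows of gray values; border pixels are left unchanged for simplicity, and the `convolve3x3` helper name is illustrative:

```python
# 3x3 convolution: slide the mask over each interior pixel, multiply
# point-by-point, sum, and replace the center pixel with the result.
def convolve3x3(image, kernel):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]          # copy; borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for j in range(3):
                for i in range(3):
                    acc += image[y + j - 1][x + i - 1] * kernel[j][i]
            out[y][x] = acc
    return out

# 3x3 box (mean) filter: every weight is 1/9 -- a simple smoothing mask.
box = [[1 / 9] * 3 for _ in range(3)]
img = [[9, 9, 9], [9, 0, 9], [9, 9, 9]]
smoothed = convolve3x3(img, box)
print(round(smoothed[1][1]))  # 8  (eight 9s plus one 0, divided by 9)
```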
Correlation – A mathematical measure of the similarity between images or areas within an image. Pattern matching or correlation of an X by Y array size template to the same size image, produces a scaler number, the percentage of match. Typically, the template is walked through a larger array to find the highest match.
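A sketch of walking a template through a larger array and scoring every position; the normalized sum-of-products score, exhaustive search, and `best_match` helper below are illustrative simplifications:

```python
# Template matching by exhaustive search: score every placement of the
# template inside the image; a score of 1.0 is a perfect match.
def best_match(image, template):
    th, tw = len(template), len(template[0])
    best_score, best_pos = -1.0, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            num = den_i = den_t = 0.0
            for j in range(th):
                for i in range(tw):
                    p, t = image[y + j][x + i], template[j][i]
                    num += p * t
                    den_i += p * p
                    den_t += t * t
            score = num / ((den_i * den_t) ** 0.5 or 1.0)
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

img = [[0, 0, 0, 0],
       [0, 5, 9, 0],
       [0, 9, 5, 0],
       [0, 0, 0, 0]]
tpl = [[5, 9],
       [9, 5]]
pos, score = best_match(img, tpl)
print(pos)  # (1, 1) -- the template's top-left landing spot
```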
CPU – An acronym for Central Processing Unit. A VLSI chip such as the 80486 or Pentium.
Cross section – A 3D profile of a slice of an object.
D
Darkfield Illumination – Lighting of objects, surfaces or particles at very shallow or low angles, so that light does not directly enter the optics. Objects are bright with a dark background. This grazing illumination causes specular reflections from abrupt surface irregularities.
Data Reduction – The process of lowering the data content of a pixel or image such as thresholding or run length encoding. (-)
Decision Tree – A structural classification technique based on relationships of feature measurements. Useful for differentiating a number of objects.
Dedicated System – Refers to a system which is configured for a specific application. Able to function when plugged in with no further development. Also called turnkey.
Depth-of-field – The range of an imaging system in which objects are in focus.
Depth Perception – The perception of solidity of a visual object and its location in the spatial field, through the fusion in the brain of the two slightly dissimilar images from the two eyes.
Dichroic Filter – A filter used to transmit light based on its wavelength, rather than on its plane of vibration. Transmits one color, while reflecting a second when illuminated with white light. Often used in heads-up displays. (2)
Diffraction Pattern Sampling – Inspection by comparing portions of the interference pattern formed on a screen or special sensor from light waves diffracted by object edges. (-)
Diffuse Reflection – Light which bounces off an object surface in many different directions. Light radiated from a matte surface is highly diffused.
Diffused lighting – Scattered soft lighting from a wide variety of angles used to eliminate shadows and specular glints from profiled, highly reflective surfaces.
Digital Camera – The newest generation of video camera, which transforms visual information into pixels and translates each pixel’s level of light into a number inside the camera.
Digital-to-Analog Converter – A VLSI circuit used to convert digital computer processed images to analog for display on a monitor. DAC is the acronym.
Digital Image – A video image converted into pixels. The numeric value of each pixel’s value can be stored in computer memory for subsequent processing and analysis.
Digital Signal Processor (DSP) – A VLSI chip designed for ultra high speed arithmetic processing. Often imbedded in a vision engine. TI’s TMS320C40 is the industry standard.
Digitization – Sampling and conversion of an incoming video or other analog signal into a digital value for subsequent storage and processing.
Dilation – A morphological operation which moves a probe or structuring element of a particular shape over the image, pixel by pixel. When an object boundary is contacted by the probe, a pixel is preserved in the output image. The effect is to “grow” the objects.
Dispersion – Separation of a beam of light into its wavelength components, each of which travel at slightly different speeds. Also called chromatic dispersion.
Dust – Finely divided, dry, solid matter of silt- and clay-sized earthy particles, less than 0.0625 millimeter in diameter.
Dynamic Range – The range in signal amplitude over which a communication receiver or audio amplifier is capable of operating while producing an acceptable output; usually expressed in decibels.
E
Edge – A change in pixel values exceeding some threshold amount. Edges represent borders between regions on an object or in a scene.
Edge Detection – The ability to determine the edge of an object.
Edge Operator – Templates for finding edges in images.
Electrical Noise – 1. An unwanted, often random disturbance to a signal that tends to obscure the signal’s information content; caused primarily by the random thermal motions of particles in the system. 2. Any signal disturbance that interferes with the operation of a system. 3. Any random disturbance that obscures the clarity of a signal.
Electro-magnetic Spectrum – The total range of wavelengths, extending from the longest (radio) to the shortest (gamma rays) which can be physically generated. This entire spectrum is potentially useful for imaging, well beyond just the visible spectrum.
Encoder (Shaft or position) – Provides rotation information for control of image acquisition, especially for moving web processes. Outputs either pulses for counting or BCD parallel with absolute position information.
Endoscope – A medical instrument used to view inside the human body. It may use borescope optics or coherent fibers to relay the image to the eye or camera. Illumination is provided by a non-coherent bundle of optical fibers.
Erosion – The converse of the morphology dilation operator. A morphological operation which moves a probe or structuring element of a particular shape over the image, pixel by pixel. When the probe fits inside an object boundary, a pixel is preserved in the output image. The effect is to “shrink or erode” objects as they appear in the output image. Any shape smaller than the probe (ie noise) disappears.
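The dilation, erosion and closing entries above can be sketched for binary images as follows; the 3×3 square probe, the list-of-rows 0/1 image layout, and the helper names are illustrative assumptions (the probe is simply clipped at the image borders):

```python
# Binary morphology sketch with a 3x3 square structuring element (probe).
def _probe(image, y, x):
    """The 3x3 neighborhood of (y, x), clipped to the image bounds."""
    h, w = len(image), len(image[0])
    return [image[j][i]
            for j in range(y - 1, y + 2)
            for i in range(x - 1, x + 2)
            if 0 <= j < h and 0 <= i < w]

def dilate(image):
    # A pixel is set when the probe touches any object pixel ("grow").
    return [[1 if any(_probe(image, y, x)) else 0
             for x in range(len(image[0]))]
            for y in range(len(image))]

def erode(image):
    # A pixel survives only when the probe fits inside the object ("shrink").
    return [[1 if all(_probe(image, y, x)) else 0
             for x in range(len(image[0]))]
            for y in range(len(image))]

ring = [[1, 1, 1],
        [1, 0, 1],   # a one-pixel hole
        [1, 1, 1]]
closed = erode(dilate(ring))    # closing = dilation followed by erosion
print(closed[1][1])  # 1 -- the hole has been filled
```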
Extension Tube – A cylindrical threaded tube used to change the magnification, effective focal length and field of view of a lens when inserted between the lens and imaging sensor.
F
F-number or f-stop – The ratio of the focal length to the lens aperture diameter. The smaller the f-number, the larger the aperture, the brighter the image and the narrower the depth-of-field. (-)
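As a quick worked example of the ratio (the `f_number` helper name is hypothetical):

```python
# f-number = focal length / aperture diameter.
# A 50 mm lens with a 25 mm aperture is an f/2 lens.
def f_number(focal_length_mm, aperture_mm):
    return focal_length_mm / aperture_mm

print(f_number(50, 25))  # 2.0
```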
Fast Fourier Transform – Produces a new image which represents the frequency domain content of the spatial or time domain image information. Data is represented as a series of sinusoidal waves.
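For illustration, the underlying transform can be sketched as a direct DFT on a 1-D signal; the FFT algorithm computes the same result in O(N log N) rather than the O(N²) shown here, and the `dft` helper name is illustrative:

```python
import cmath
import math

# Direct discrete Fourier transform of a 1-D signal (for clarity; the
# FFT produces identical output much faster).
def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A sinusoid with one cycle across 8 samples concentrates its energy in
# frequency bin 1 (and its mirror, bin 7), each with magnitude N/2 = 4.
sig = [math.sin(2 * math.pi * t / 8) for t in range(8)]
spectrum = dft(sig)
print(round(abs(spectrum[1]), 1))  # 4.0
```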
Features – Simple image data attributes such as pixel amplitudes, edge point locations and textural descriptors, center of mass, number of holes in an object with distinctive characteristics defined by boundaries or regions.
Feature Extraction – Determining image features by applying feature detectors to distinguish or segment them from the background.
Feature Vectors – A set of features of an object (such as area, number of holes, etc) that can be used for its identification or inspection.
Fiber Optics – Light source or optical image delivery via a long, flexible fiber(s) of transparent material, usually bundled together. Light is transmitted via internal reflection inside each fiber. Coherent fiber optics are spatially organized so images can be relayed.
Fiberscope – An optical instrument similar to a borescope, but uses a flexible, coherent fiber or bundle (usually silicon), an objective lens and an eyepiece or camera.
Fiducial – A line, mark or shape used as a standard of reference for measurement or location.
Field – One of the two parts of a television frame in an interlaced scanning system. The odd plus the even field comprise one video frame. A field is scanned every 1/60th of a second.
Field-of-view – The 2D area which can be seen through the optical imaging system. (FOV)
Filtering – The use of an optical filter for picture or color enhancement in front of the camera lens or light source. Also analog or digital image processing (IP) operations to enhance or modify an image. May be linear & non-linear.
Filter – A device or process that selectively transmits frequencies. In optics, the material either reflects or absorbs certain wavelengths of light, while passing others. (2)
Firmware – Software hard coded in non-volatile memory (ROM), usually to increase speed.
Fixture – A device to hold and locate a workpiece during processing or inspection operations.
Fluorescence – The emission of light or other electromagnetic radiation at longer wavelengths by matter as a result of absorption of a shorter wavelength. The emission lasts only as long as the stimulating irradiation is present.
Focal Length – The distance from a lens’ principal point to its corresponding focal point, where incoming parallel rays converge.
Focal Plane – Usually found at the image sensor, it is a plane perpendicular to the lens axis at the point of focus (-).
Focus – The point at which rays of light converge for any given point on the object in the image. Also called the focal point.
Focus Following – A ranging and tracking technique that uses image processing to measure object range based on best focus.
Fourier Domain Inspection – Evaluation of the fourier transform (frequency information) of a 2D spatial image for features of interest. (-)
Frame – The total area scanned in an image sensor while the video signal is not blanked. In interlaced scanning, two fields comprise one frame. Frame rate is typically 30 Hz.
Frame Buffer – Image memory in a frame grabber.
Frame Grabber – A device that interfaces with a camera and, on command, samples the video, converts the sample to a digital value and stores that in a computer’s memory.
Front End System – The object, illumination, optics and imager blocks of a vision system. Includes all components useful to acquire a good image for subsequent processing.
Front Lighting – The use of illumination on the camera side of an object so that surface features can be observed.
G
Gaging – In machine vision, non-contact dimensional examination of an object.
Gamma (γ) – The numeric value for the degree of contrast in a television picture. The exponent in the power law relating output to input signal magnitude. A non-linear characteristic of camera tubes.
Glints – Shiny, specular reflections from smooth objects or surfaces.
Global Method – An image processing operation uniformly applied to the whole image. (-)
Gradient – The rate of change of pixel intensity (first derivative).
Gradient Space – A matrix containing values for the rate of change of pixel values or gray level intensity of the image.
Gradient Vector – The orientation and magnitude of the rate of change in intensity at a point or pixel location in the image.
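A sketch of computing the gradient vector at one interior pixel with the common Sobel operators (an illustrative choice of edge operator; the `gradient` helper name is hypothetical):

```python
import math

# Sobel masks estimate the horizontal (gx) and vertical (gy) rates of
# change of intensity; together they give magnitude and orientation.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient(image, y, x):
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = image[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * p
            gy += SOBEL_Y[j][i] * p
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# A vertical step edge: intensity jumps from 0 to 9, left to right.
img = [[0, 0, 9]] * 3
mag, angle = gradient(img, 1, 1)
print(mag, angle)  # 36.0 0.0 -- the gradient points across the edge
```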
Grating – An optical element with an even arrangement of rods or stripes with spaces between them for light to pass. Its ability to separate wavelengths is expressed in line pairs per millimeter, for example. A moire grating of parallel dark and light stripes is an example. Also used for structured light projection. (2)
Gray level – A quantized measurement of image irradiance (brightness), or other pixel property typically in the range between pure white and black.
Grayscale Image – An image consisting of an array of pixels which can have more than two values. Typically, up to 256 levels (8 bits) are used for each pixel.
GUI – An acronym for Graphical User Interface. Pronounced “gooie.” A Windows based user interface screen or series of screens allowing the user to point-and-click to select icons rather than typing commands.
H
Halogen lamp – An incandescent lamp containing a halogen gas such as iodine; filament material which evaporates is continually redeposited on the filament by the halogen cycle.
Hardware – Electronic integrated circuits, boards and systems used by the system.
HDTV – High Definition TV, a proposed broadcast standard that doubles the current 525 lines per picture to 1,050 lines and increases the screen aspect ratio from 12:9 to 16:9. The typical TV resolution of 336,000 pixels would increase to about 2 million. (-)
Height/Range – Object profile is usually measured by changes in range or distances from the sensor. 3D techniques are usually used.
High Pass Filter – Passes detailed high frequency image information, while attenuating low frequency, slow changing data.
High Speed Imaging – Image capture near, at or above 1800 parts per minute. (30 parts per second) (-)
Histogram – A graphical representation of the frequency of occurrence of each intensity or range of intensities (gray levels) of pixels in an image. The height represents the number of observations occurring in each interval. (2)
Histogram Analysis – Determination of the presence or absence of a feature or flaw based on the histogram values in a certain gray scale region.
Histogram Equalization – Modification of the histogram to evenly distribute a narrow range of image gray scale values across the entire available range.
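A sketch of the technique for a tiny 3-bit (0–7) image, using the cumulative distribution of the histogram; the `equalize` helper name and 8-level depth are illustrative assumptions:

```python
# Histogram equalization: spread a narrow range of gray values across
# the full available range via the cumulative distribution (CDF).
def equalize(pixels, levels=8):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = min(c for c in cdf if c > 0)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

flat = [2, 2, 3, 3, 4, 4]     # values crowded into 2-4
print(equalize(flat))         # [0, 0, 4, 4, 7, 7] -- stretched to 0-7
```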
Holography – Optical recording of the interference pattern from two coherent waves which forms a 3 dimensional record or hologram. (-)
Hough Transform – A global parallel method for locating both curved and straight lines. All points on the curve map into a single location in the transform space.
HSI Conversion – A mathematical conversion from the color RGB space to hue, saturation and intensity values.
HSI – An acronym for the Hue-Saturation-Intensity color representation. A mathematical conversion from RGB. Often used for machine vision analysis.
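One common RGB-to-HSI formulation can be sketched as below; conventions for HSI vary, so treat this as an illustrative version assuming 0–1 channel inputs and hue in degrees:

```python
import math

# RGB -> HSI conversion sketch (one common formulation):
#   I = mean of the channels, S = 1 - min/I, H from an arccos of the
#   channel differences, reflected when blue dominates green.
def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, full saturation
```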
Hue – One of the three properties of HSI color perception. A color attribute used to express the amount of red, green, blue or yellow a certain color possesses. White, gray and black do not exhibit any hue.
Hueckel Operator – An edge finding operator which fits an intensity surface to the neighborhood of each pixel and selects surface gradients above a specified threshold.
Hybrid Electro-Optic Sensor – A silicon sensor fabricated in a configuration to match spatial information generated by the imaging system, such as a PSD (position sensitive detector), concentric rings, pie shapes and others.
Hz – An abbreviation for Hertz or cycles per second. Often used with metric prefixes such as kHz or MHz for kilohertz and megahertz respectively. (-)
I
Illumination – Normally a wavelength or range of wavelengths of light or visible light used to enhance a scene so the detector, normally a camera, can produce an image.
Image – Projection of an object or scene onto a plane (ie screen or image sensor). (-)
Image Analysis – Evaluation of an image based on its features for decision making. (-)
Image Capture – The process of acquiring an image of a part or scene, from sensor irradiation to acquisition of a digital image.
Image Distortion – A situation in which the image is not exactly true to scale with the object scale.
Image Enhancement – Image processing operations which improve the visibility of image detail and features. Usually performed for humans.
Image Formation – Generation of an image of an object or scene on the imaging sensor. It includes effects from the optics, filters, illumination and sensor itself.
Image Intensifier – Usually an electron tube equipped with a light sensitive electron emitter at one end and a phosphor screen at the other. Used to provide electron gain for imaging in low light conditions such as night vision.
Image Memory – An internal, high speed, large capacity storage area on a frame grabber card or in a computer dedicated to image retention.
Image Plane – The plane surface of the imaging sensor, perpendicular to the viewing direction, at which the optics are focused.
Image Processing – Digital manipulation of an image to aid feature visibility, make measurements or alter image contents.
Incandescent lamp – An electrical lamp in which the filament radiates visible light when heated in a vacuum by an electrical current.
Incident Light – Light which falls directly onto an object. (-)
Index of Refraction – A property of a medium that measures the degree that light bends when passing between it and a vacuum.
Infrared – The region of the electromagnetic spectrum adjacent to the visible spectrum, just beyond red with longer wavelengths.
Infrared Imaging – Image formation using wavelengths just above the visible spectrum. (-)
Intensity – The relative brightness of a portion of the image or illumination source.
Interlaced Scanning – A scanning process in which all odd lines then all even lines are alternately scanned. Adjacent lines belong to different fields.
I/O – An acronym for Input/Output data either entering or leaving a system. (-)
L
LAB – CIELAB color gets its name from a color space that uses three values to describe the precise three-dimensional location of a color inside a visible color space. CIE stands for Commission Internationale de l’Eclairage, an international body of color scientists whose standards make it possible to communicate color information accurately. L describes relative lightness; A represents relative redness-greenness; and B represents relative yellowness-blueness.
Laplacian Operator – The sum of the second derivatives of the image intensity in both the x and y directions is called the Laplacian. The Laplacian operator is used to find edge elements by locating points where the Laplacian is zero.
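At a single pixel the discrete form reduces to the familiar 4-neighbor kernel [[0,1,0],[1,-4,1],[0,1,0]]; a sketch (the `laplacian` helper name is hypothetical):

```python
# Discrete Laplacian at one interior pixel: sum of the four neighbors
# minus four times the center. Zero on uniform regions; it changes sign
# (crosses zero) at edges.
def laplacian(image, y, x):
    return (image[y - 1][x] + image[y + 1][x] +
            image[y][x - 1] + image[y][x + 1] - 4 * image[y][x])

flat = [[5, 5, 5]] * 3
print(laplacian(flat, 1, 1))  # 0 on a uniform region
```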
Laser Illumination – Lighting an object with a laser source for frequency selection, pulse width (strobe) control or for accurate positioning.
Laser Radar – See LIDAR.
LED – Light emitting diode. Often used as a strobe for medium speed objects.
Lens – A transparent piece of material, usually glass or plastic, with curved surfaces which either converge or diverge light rays. Often used in groups for light control and focusing.
Lens Types – The lenses most commonly used in machine vision are: 35mm, CCTV, Copying, Cylindrical, Enlarger, Micrographic, Video, and Wide Angle.
LIDAR – An acronym for Light Detection And Ranging. A system that uses light instead of microwaves for range and tracking measurements. LADAR uses a laser light source to measure velocity, altitude, height, range or profile.
Light Tent – An arrangement of diffusing surfaces above the object to create a horizon to horizon diffuse illumination.
Lightpen – A pen on a cable used to select items from a display screen.
Line(s) of Light – One or more light stripes projected at a known angle onto the object. Deformation of this type of structured light results in 3D information in a 2D image.
Line Scan Camera – A solid state video camera consisting of a single row of pixels. Also called a linear array camera.
Linear Array – see Line Scan Camera.
Lighting – See illumination. (-)
Location – The point in X and Y image space where a recognized object is found.
Look-Up Table (LUT) – High speed digital memory used to transform image input values to outputs for thresholding, windowing and other mappings such as pseudo-color. (-)
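A sketch of the idea for 8-bit pixels: the 256 output values are computed once, after which each pixel is remapped by a single table read (the thresholding mapping below is an illustrative choice; windowing or pseudo-color tables work the same way):

```python
# Precompute all 256 outputs once, then remap pixels with indexed reads.
# Here the LUT implements thresholding at 128.
lut = [255 if v >= 128 else 0 for v in range(256)]

pixels = [12, 200, 127, 128]
mapped = [lut[p] for p in pixels]
print(mapped)  # [0, 255, 0, 255]
```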
Low Angle Illumination – See darkfield. Very useful to enhance and highlight surface texture features.
Low Pass Filter – A digital or optical filter which passes slow changing, low frequency information, while attenuating high frequency, detailed edge information.
M
Machine Vision – The use of devices for optical non-contact sensing to automatically receive and interpret an image of a real scene, in order to obtain information and/or control machines or processes. (-)
Magnification – The relationship between the length of a line or size of a feature in the object plane with the length or size of the same in the image plane.
Mask – 1) Setting portions of an image to a constant value; 2) A filter matrix used as a convolution operator; 3) A logical or physical structure placed in an optical system to prevent viewing or passing of information in a certain spatial or frequency region.
Material Handling – Hardware systems that provide motion, indexing and/or orientation both during manufacture and the inspection process. (-)
Matrix Array Camera – See Area Array Camera.
Median Filter – A method of image smoothing which replaces each pixel value with the median grayscale value of its immediate neighbors.
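A small pure-Python sketch of a 3x3 median filter; because the median ignores outliers, a single noise pixel in a flat region is removed without blurring edges the way an averaging filter would:

```python
import statistics

def median_filter(img):
    """3x3 median filter: each interior pixel becomes the median of its
    3x3 neighborhood. Border pixels are copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

# A single salt-noise pixel (255) in a flat region is removed entirely.
noisy = [[10, 10, 10],
         [10, 255, 10],
         [10, 10, 10]]
print(median_filter(noisy)[1][1])   # -> 10
```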
Memory – The internal, high-speed, large capacity working storage in a computer where data and images may be both stored and retrieved.
Micron – One millionth of a meter; also called a micrometer. (-)
Mirror – A smooth, highly polished surface, for reflecting light. It may be plane or curved. Mirrors are fabricated by depositing a thin coating of silver or aluminum on a glass substrate. First surface mirrors are coated on the top surface, thus avoiding a second ghost image produced when light is reflected off the back surface after passing through the glass twice. (2)
MIPS – Millions of Instructions per Second measure for computer processing speed. (-)
Modulation Transfer Function (MTF) – The ability of a lens or optical system to reproduce (transfer) various levels of detail (modulation) of an object to the image as the frequency (usually sinusoidal) increases.
Moire Interferometry – A method to determine 3D profile information of an object or scene, using interference of light stripes. Two identical gratings of known pitch are used. The first creates a shadow of parallel lines of light projected on the object. The second is placed in the imaging train, and superimposed on the shadow cast by the first grating, forming a moire fringe pattern. Distance between the fringes or dark bands is directly related to range or profile. Varying the gap between the lines changes the sensitivity. (2)
Moire Pattern – A pattern resulting from the interference of light when gratings, screens or regularly spaced patterns are superimposed on one another. Two stacked window screens create this effect.
Moire Topography – A contour mapping technique in which the object is both illuminated and viewed through the same grating. The resulting moire fringes form contour lines of object elevation or profile.
Monochromatic – Refers to light having only one color or a single wavelength of radiation.
Monochrome – Refers to a black and white image with shades of gray but no color. (-)
Morphology – Image algebra group of mathematical operations based on manipulation and recognition of shapes. Also called mathematical morphology. Operations may be performed on either binary or gray scale images.
MOS Array – Metal Oxide Semiconductor camera array sensor with random addressing capability, rows and columns of photodiodes, and charge sent directly from each photodiode to the camera output.
Mouse – A device, thought of as somewhat resembling a mouse in appearance and movement, that allows the user to control cursor movement on a video display screen by rolling the device over a flat surface. It is also used to select commands, designate text blocks, and for other functions.
N
Neural Networks – A computing paradigm which processes information based on biological neural systems. No programming is involved as in artificial intelligence. Rather, decisions are made based on weighted features analyzed by interconnected nodes of simple processing elements using analog computer-like techniques.
Noise – Irrelevant or meaningless data resulting from various causes unrelated to the source. Random, undesired video signals.
Normalized Correlation – Removes the absolute illumination value from a traditional correlation, making the algorithm less sensitive to light variations.
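A minimal sketch of the computation: subtracting each patch's mean and dividing by the norms removes absolute brightness and contrast, so a uniformly brighter copy of the template still scores a perfect match (the patches here are flattened pixel lists, an illustrative simplification):

```python
import math

def normalized_correlation(a, b):
    """Zero-mean normalized correlation of two equal-length pixel lists.
    The score lies in [-1, 1] and is insensitive to uniform illumination
    shifts and contrast scaling."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    da = [v - ma for v in a]
    db = [v - mb for v in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da)) * math.sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

template = [10, 20, 30, 40]
brighter = [60, 70, 80, 90]          # same pattern, +50 illumination
print(round(normalized_correlation(template, brighter), 3))   # -> 1.0
```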
O
Object – The 3D item to be imaged, gauged or inspected.
Object Features – Any characteristic that is descriptive of an image or region, and useful for distinguishing one from another. A feature may be any measurable item such as length, size, number of holes, surface texture amount or center of mass.
Object Plane – An imaginary plane at the object, which is focused by the optical system at the image plane on the sensor.
Oblique Illumination – A lighting direction at an angle which emphasizes object features by shadows produced. (-)
OEM – Original Equipment Manufacturer that supplies components to another for resale. (-)
Off-the-Shelf – Refers to a general purpose system, readily available for immediate shipment, which is not configured for a specific application.
Oil mist – An environmental contaminant which builds up on vision optical surfaces.
Opaqueness – Degree to which an object does not transmit light.
Opening – An erosion followed by a dilation; it is the opposite of the closing morphological operator.
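A pure-Python sketch of a binary opening with a 3x3 structuring element (borders handled crudely for brevity); the erosion deletes features smaller than the structuring element, and the dilation restores the survivors to roughly their original size:

```python
def erode(img):
    """Binary 3x3 erosion: a pixel survives only if its whole 3x3
    neighborhood is 1. Border pixels are set to 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    """Binary 3x3 dilation: a pixel becomes 1 if any 3x3 neighbor is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(img):
    return dilate(erode(img))

# A solid 3x3 blob survives the opening; the isolated speck at the top
# right is removed.
blob = [[0, 0, 0, 0, 0, 1],
        [0, 1, 1, 1, 0, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 0, 0, 0, 0, 0]]
print(opening(blob)[0])   # -> [0, 0, 0, 0, 0, 0]
```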
Optical Computing – Performing operations usually handled by electronic, serial computers with optical or photonic circuits/elements in parallel at near the speed of light. (-)
Orientation – The angle or degree of difference between the object coordinate system major axis relative to a reference axis as defined in a 3D measurement space.
P
Pantone Matching System (PMS) – A system of describing colors by assigning numbers. (-)
Parallax – The change in perspective of an object when viewed from two slightly different positions. The object appears to shift position relative to its background, and also appears to rotate slightly.
Parallel Processor – A redundant hardware design using a number of processors so multiple pixels may be processed at the same time.
Parent – 1) The previous generation of an item or file that is required to create a new record; 2) In data structures, a node on a tree that has a given node as one of its subtrees.
Pattern Recognition – A process which identifies an object based on analysis of its features. (-)
Perceptron – The basic processing element used in neural networks. A simple analog circuit with weighted inputs and a nonlinear decision element such as a hard limiter, threshold logic or sigmoid nonlinearity.
Photodiode – A single photoelectric sensor element, either used stand-alone or a pixel site, part of a larger sensor array.
Photometry – Measurement of light which is visible to the human eye (photopic response). (-)
Photopic Response – The color response of the eye’s retinal cones.
Pinhole – A small, sharp-edged hole that acts as a lens aperture, producing a soft-edged image that is distortion free, with a wide field of view and large depth of field.
Pixel – An acronym for “picture element.” The smallest distinguishable and resolvable area in an image. The discrete location of an individual photo-sensor in a solid state camera.
Pixel Counting – A simple technique for object identification representing the number of pixels contained within its boundaries.
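The technique amounts to a single pass over the image; a minimal sketch, with the gray-level window passed as inclusive bounds:

```python
def count_pixels(img, lo, hi):
    """Count pixels whose gray level falls in [lo, hi] inclusive."""
    return sum(lo <= p <= hi for row in img for p in row)

img = [[12, 200, 45],
       [99, 13, 240]]
print(count_pixels(img, 0, 50))   # -> 3
```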
Polarized Light – Light which has had the vibrations of the electric or magnetic field vector typically restricted to a single direction in a plane perpendicular to its direction of travel. It is created by a type of filter which absorbs one of the two perpendicular light rays. Crossed polarizers theoretically block all light transmission.
Polarizer – An optical device which converts natural or unpolarized light into polarized light by selective absorption of rays in one direction, and passing of rays perpendicular to the polarizing medium. Usually fabricated from stretched plastic sheets with oriented, parallel birefringent crystals. The first polarizers were constructed with parallel wires. (2)
Positioning Equipment – Used to bring the part into the field of view, or to translate when multiple images or views are required.
Precision – The degree of spread or deviation between each measurement of the same part or feature. Repeatability.
Prism – An optical device with two or more non-parallel, polished faces from which light is either reflected or refracted. Often used to redirect light as in binoculars. (2)
Processing Speed – A measure of the time used by a vision system to receive, analyze and interpret image information. Often expressed in parts per minute.
Profile – The 3D contour of an object. (2)
R
Radiometry – Measurement of light within the entire optical spectrum. (-)
RAM – An acronym for Random Access Memory for storage and retrieval of data. (-)
Random Access – The ability to read out chosen lines or windows of information from an imager as needed, without following the RS-170 standards.
Range Measurement – Determination of the distance from a sensor to the object.
Raster Scan – A scanning pattern, generally from left to right while progressing from top to bottom of the imaging sensor or the display monitor. Generally comprised of two fields composed of odd and even lines.
Real Time Processing – In machine vision, the ability of a system to perform a complete analysis and take action on one part before the next one arrives for inspection.
Reflection – The process by which incident light leaves the surface from the same side as it is illuminated. (2)
Refraction – The bending of light rays as they pass from one medium (ie air) to another (ie glass), each with a different index of refraction.
Region – Area of an image. Also called a region of interest for image processing operations.
Registration – The closeness of the part to the actual position expected for image acquisition.
Reject – A mechanism used on a manufacturing line to remove defective or sample product from the main stream or conveyor. Reject design is usually customized to the process.
Repeatability – The ability of a system to reproduce or duplicate the same measurement. See precision. The total range of variation of a dimension is called the 6-sigma repeatability.
Resolution, Pixel Grayscale – The number of resolvable shades of gray (ie 256).
Resolution, Image – The number of rows and columns of pixels in an image.
Resolution, Spatial – A direct function of pixel spacing. Pixel size relative to the image field of view is key.
Resolution, Feature – The smallest object or feature in an image which may be sensed.
Resolution, Measurement – The smallest movement measurable by a vision system.
Reticle – An optical element with a pattern located in the image plane to assist in calibration, measurement or alignment of a system or instrument. Examples are cross lines or grids.
RGB – An acronym for the Red-Green-Blue color space. This three primary color system is used for video color representation. (2)
Ringlight – A circular lamp or bundles of optical fibers arranged around the perimeter of an objective lens to illuminate the object in the field below it. A wide variety of sizes are available on both a stock and custom basis.
RS-170 – The Electronic Industries Association (EIA) standard governing monochrome television studio electrical signals. The broadcast standard of 30 complete images per second.
RS-232-C – The Electronic Industries Association (EIA) standard governing serial communications over a twisted pair. Good to about 150 feet.
RS-330 – Standard governing color television studio electrical signals.
RS-422; RS-423; RS-449 – The Electronic Industries Association (EIA) standards for serial communication protocols intended to gradually replace the widely used RS-232-C standard.
Rotation – Translation of a part about its center axis from the expected orientation in X and Y space. Expressed in degrees. (2)
Run Length Encoding – A data reduction method to code a binary image. For each line in an image, data is stored denoting only the starting location of a blob or object and the length of the run of that line over the object.
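A minimal sketch of the per-line encoding: each run of foreground pixels is stored as a (start, length) pair, so long flat regions compress to two numbers:

```python
def rle_line(row):
    """Run-length encode one binary image line as (start, length) pairs,
    one pair per run of foreground (1) pixels."""
    runs, start = [], None
    for i, p in enumerate(row):
        if p and start is None:
            start = i
        elif not p and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(row) - start))
    return runs

print(rle_line([0, 1, 1, 1, 0, 0, 1, 1]))   # -> [(1, 3), (6, 2)]
```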
S
Saturation – The degree to which a color is free of white. One of the three properties of color perception along with hue and intensity (HSI).
Scanner (galvo & polygon mirror) – An image sensor which uses a swept or scanned beam of light (usually a laser) to generate or acquire a one or two dimensional grayscale reflectance pattern.
Scene – The object and a background, in its simplest form. A portion of space imaged by a vision system for investigation or measurement.
Scattering – Redirection of light reflecting off a surface or through an object. See diffuse. (-)
Scene Analysis – Performing image processing and pattern recognition on an entire image.
Segmentation – The process of dividing a scene into a number of individual objects or contiguous regions, differentiating them from each other and the image background.
Shading – The variation of the brightness or relative illumination over the surface of an object, often caused by color variations or surface curvature.
Shape – An object characteristic, often referring to its spatial contour.
Shape from Shading – A 3D technique that uses shadows from interaction of the object and the light source to determine shape.
Sharpening – An image processing operation which enhances edges. An unsharp mask subtracts a low pass filtered (blurred) copy from the original and adds the difference back, resulting in edge enhancement.
Shutter – An electrical or mechanical device used to control the amount of time the imaging surface is exposed to light. Often used to stop blur from moving objects.
Siblings – In Stanford Research Institute (SRI) terminology, several child objects within a parent object are siblings.
Silhouette – A black and white image of an object illuminated by backlighting.
Simple Lens – A lens with only a single element. (-)
Sinusoidal Projection – Use of a grating in which the dark stripes vary in their density sinusoidally across each one, rather than constant black. Improved profile or range discrimination is possible when used in a moire type configuration.
Size – An object characteristic typically measured by x and y dimensions. Size may be expressed in pixels, the system calibrated units of measure or classes or size groups.
Smart Camera – A new term for a complete vision system contained in the camera body itself, including imaging, image processing and decision making functions.
Sobel Transform – A 3×3 convolution used for edge enhancement and locating.
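A pure-Python sketch of the Sobel operator, combining the horizontal and vertical 3x3 kernels into the common |Gx| + |Gy| magnitude approximation (borders left at zero for brevity):

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| using the 3x3 Sobel
    kernels. Border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# The response peaks on the columns flanking a vertical step edge.
step = [[0, 0, 100, 100]] * 4
print(sobel_magnitude(step)[1])   # -> [0, 400, 400, 0]
```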
Solid-state Camera – A camera which uses a solid state integrated circuit chip to convert incident light or other radiation into an analog electrical signal.
Span – The allowance of gray level acceptance for thresholding, adjustable from black to white from 0 to 100%. (-)
Spatial Light Modulator – (Also SLM) A transparent screen used in optical computer systems to introduce an image into the optical processing path. Similar to liquid crystal computer display screens, their resolution approaches 512×512 and grayscale imaging 8 bits.
Spectral Analysis – Evaluation of the wavelength composition of object irradiance. (-)
Spectral Characteristics – The unique combination of wavelengths of light radiated from a source or transmitter or reflected from an object.
Spectral Response – The characteristic of a sensor to respond to a distribution of light by wavelength in the electromagnetic spectrum.
Specular Reflection – Light rays that are highly redirected at or near the same angle of incidence to a surface. Observation at this angle allows the viewer to “see” the light source.
Speed – An object characteristic expressed in distance moved per unit time. Velocity. Image blur may be caused by high speeds unless strobes or shutters are used to “stop motion.”
SRI Algorithms – A rich set of routines used for geometric analysis and identification developed at the Stanford Research Institute in the early 1970s. Four main steps are: 1) Convert the image to binary; 2) Perform connectivity analysis to identify each blob or object; 3) Calculate the core statistical features for image objects; and 4) Calculate additional user selected features.
Stadimetry – A range measuring technique based on the apparent size measurement of a known size object in the field-of-view.
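Under a thin-lens model the relation is a simple proportion; a hedged sketch (the function name and parameters are illustrative, and the approximation holds when the range is much larger than the focal length):

```python
def stadimetric_range(real_size_mm, image_size_mm, focal_length_mm):
    """Stadimetric range estimate: range = f * (real size / image size),
    a thin-lens approximation valid for ranges much greater than f."""
    return focal_length_mm * real_size_mm / image_size_mm

# A 100 mm part imaged at 1 mm on the sensor through a 25 mm lens
# is about 2.5 m away.
print(stadimetric_range(100.0, 1.0, 25.0))   # -> 2500.0
```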
Statistical (Theoretic) Pattern Recognition – Statistical analysis of object features to perform recognition and classification.
Stereo (Passive) – For imaging, the use of two cameras, offset by a known distance and angle, to image the same object and provide range, depth or 3D information. Active stereo uses a controlled or structured light source to provide 3D data.
Stereo Photogrammetry – See Stereoscopic Approach.
Stereoscopic Approach – The use of triangulation between two or more image views from differing positions. Used to determine range or depth.
Strobe Duration – The amount of time, expressed in microseconds, during which the flash lamp (strobe) is at 90% intensity.
Strobed Light – Brief flashes of light for observing an object during a short interval of time, typically used to “stop” movement and resulting image blur. Strobes may use xenon flash tubes, banks of LEDs or a laser to illuminate the scene.
Structural (Syntactic) Pattern Recognition – Evaluation of the relationship of object features in a specific order, ie decision trees, to perform recognition and classification.
Structured Light – Points, lines, circles, sheets and other projected configurations used to directly determine shape and/or range information by observing their deformation as it intersects the object in a known geometric configuration.
Subpixel Resolution – Mathematical techniques used on gray scale images to resolve an edge location to less than one pixel. A one tenth pixel resolution is reasonable in the factory.
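One common technique (an illustrative choice, not the only one) fits a parabola through the gradient magnitude at the strongest edge pixel and its two neighbors, and takes the parabola's peak as the subpixel edge position:

```python
def subpixel_peak(g_minus, g_peak, g_plus):
    """Parabolic interpolation: given the gradient magnitude at the
    strongest pixel and its two neighbors, return the peak offset in
    pixels (between -0.5 and +0.5) relative to the strongest pixel."""
    denom = g_minus - 2 * g_peak + g_plus
    if denom == 0:
        return 0.0
    return 0.5 * (g_minus - g_plus) / denom

# Gradient samples 30, 80, 50: the edge lies slightly toward the
# larger-valued neighbor, 0.125 pixels past the peak pixel.
print(subpixel_peak(30, 80, 50))   # -> 0.125
```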
Synch Pulse – Timing signals used to control the television scanning and display process. The horizontal synch triggers tracing of a new line from left to right, while the vertical synch initiates the start of a new field.
Synchronous – A camera characteristic denoting operation at a fixed frequency locked to the AC power line (typically 60 or 50Hz).
Syntactic PR – See Structural Pattern Recognition.
System Performance Measures – Accuracy, precision or repeatability, and alpha and beta risk for a given throughput rate specify the performance of a vision system. (-)
Systems Integration – The art of assembling hardware, software, components, mounts and enclosures to produce a system that meets a customer’s specification.
T
Tail End System – The operator interface, I/O and communications blocks of a vision system. Includes all aspects of information display and handling.
TDI Camera – Time Delay Integration. Similar to a line scan, a TDI camera is comprised of a number of rows of pixels. As an object such as a web moves, the charge from one row is passed to the next row, synchronously continuing the integration. Requires far less illumination intensity than the standard line scan.
Template – An artificial model of an object or a region or feature within an object. (-)
Template Matching – A form of correlation used to find out how well two images match.
Texture – The degree of smoothness of an object surface. Texture affects light reflection, and is made more visible by shadows formed by its vertical structures.
Thickness – The measurement in the third dimension (length and width being the other two) from one object surface to another using one or two 3D range sensors or other technique.
Thresholding – The process of converting a gray scale image into a binary image. If a pixel’s value is above the threshold, it is converted to white. If below the threshold, the pixel value is converted to black.
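The operation is a single comparison per pixel; a minimal sketch (here pixels at or above the threshold map to white, one common convention):

```python
def threshold(img, t):
    """Binarize a grayscale image: 255 (white) at or above threshold t,
    0 (black) below it."""
    return [[255 if p >= t else 0 for p in row] for row in img]

gray = [[12, 130],
        [200, 90]]
print(threshold(gray, 128))   # -> [[0, 255], [255, 0]]
```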
Throughput Rate – The maximum parts per minute inspection rate of a system.
Top Hat – A morphological operator comprised of an opening followed by a subtraction of the output image from the original input image.
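A grayscale sketch of the idea on a 1D signal (an illustrative simplification of the 2D case): the opening flattens any peak narrower than the structuring window, so subtracting it from the original isolates exactly those narrow peaks:

```python
def gray_open(sig, k=3):
    """Grayscale opening of a 1D signal: a running minimum (erosion)
    followed by a running maximum (dilation) over a k-sample window."""
    def run(vals, fn):
        r = k // 2
        return [fn(vals[max(0, i - r):i + r + 1]) for i in range(len(vals))]
    return run(run(sig, min), max)

def top_hat(sig, k=3):
    """Top hat: original minus its opening; keeps only peaks narrower
    than the structuring window."""
    return [a - b for a, b in zip(sig, gray_open(sig, k))]

# A narrow spike survives; the flat background is removed entirely.
sig = [5, 5, 5, 90, 5, 5, 5]
print(top_hat(sig))   # -> [0, 0, 0, 85, 0, 0, 0]
```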
Trackball – A stationary ball used as a pointing device to select items from a display screen.
Transition – For an edge in a binary image, the location where pixels change between light and dark. (-)
Translation – Movement in the X and/or Y direction from a known point.
Translucent – An object characteristic in which part of the incident light is reflected and part is transmitted. The transmitted light emerges from the object diffused.
Transmittance – The ratio of the radiant power transmitted by an optical element or object to the incident radiant power.
Transputer – A type of computer architecture with several CPUs connected in parallel.
Triangulation – A method of determining distance by forming a right triangle consisting of a light source, camera and the object. The distance or range can be calculated if the camera-to-light source distance and the incident to reflected beam angle are both known. Based on the Pythagorean relation.
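A hedged sketch of one simple layout (the geometry and names are illustrative assumptions): the laser fires perpendicular to the camera-laser baseline, the camera views the spot at a known angle from the baseline, and the range follows from the tangent:

```python
import math

def triangulation_range(baseline, angle_deg):
    """Range to a laser spot for a right-triangle layout: the laser beam
    is perpendicular to the baseline joining laser and camera, and the
    camera sees the spot at angle_deg measured from the baseline."""
    return baseline * math.tan(math.radians(angle_deg))

# With a 100 mm baseline and a 45 degree viewing angle, the spot is
# 100 mm from the laser along the beam.
print(round(triangulation_range(100.0, 45.0), 6))   # -> 100.0
```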
Tube Type Camera – A camera in which the image is formed on a fluorescent screen, then read out sequentially in a raster scan type pattern by an electron beam for conversion to an analog voltage proportional to incoming light intensity.
U
Ultrasonic Imaging – Use of ultrasound waves as the imaging “illumination” source. (-)
Ultrasound – Low frequency radiated acoustical waves just above human sound perception which are useful for penetration and “illumination” for inspection of solid objects.
Ultraviolet – The region of the electromagnetic spectrum adjacent to the visible spectrum, but of higher frequency (shorter wavelength) than blue, ranging from about 10 to 400 nm. UV A ranges from 320 to 400 nm while UV B falls between 280 and 320 nm.
User Interface – Includes display, operator, user controls and a means to access and modify custom user programming. See operator interface.
V
Validation – A rigid set of tests to verify that a system performs as documented.
Variable Scan Input – Frame grabber capability to accept a variety of non RS-170 input formats from a variety of cameras. Allows operation above the 30 Hz limit.
VESA – Video Electronics Standards Association; also refers to the VESA local bus, a 32 bit interface for display and other hardware cards. (-)
VGA – An acronym for Video Graphics Array. The IBM video display standard of 16 colors.
Video – Visual information encoded in a specific bandwidth and frequency spectrum location originally developed for television and radar imaging. (-)
Vidicon – A generic name for a camera tube of normal light sensitivity. It outputs an analog voltage stream corresponding to the intensity of the incoming light.
Visible Light – The region of the electromagnetic spectrum in which the human retina is sensitive, ranging from about 400 to 750 nm in wavelength.
Vision Engine – Analyzes the image and makes decisions, using a very fast processor inside a computer. It performs dedicated evaluation of the pre-processed image data to find features and make measurements. Unlike a personal computer, the vision engine is built for speed, not flexibility.
W
Wavelength – The distance covered by one cycle of a sinusoidally varying wave as it travels at or near the speed of light. It is inversely proportional to frequency.
Well – A morphological operator comprised of a closing followed by a subtraction of the output image from the original input image.
Window – A selected portion of an image or a narrow range of gray scale values.
Windowing – Performing image processing operations only within a predefined window or area in the image.
X
Xenon Strobe – A gas filled electronic discharge tube, useful for high speed, short duration illumination for inspection.
X-ray – A portion of the electromagnetic spectrum beyond the ultraviolet with higher frequency and shorter wavelengths. Able to penetrate solid objects for internal, non-destructive evaluation.
Z
Zoom Lens – A compound lens which remains in focus as the image size is varied continuously. May be motorized or manually operated. (-)
(-) Definition not in Category Glossary
(2) Defined twice in Category Glossary
Copyright 1997 Visual*Sense*Systems
Phone: (607) 273-6882
Fax: (607) 273-9224
kww3@cornell.edu
(800) 892-8368