A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that yield some quantitative information of interest or that are basic for differentiating one class of objects from another. Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for the recognition of individual objects. So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.
Edge detection. Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level. Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead they are normally affected by one or several of the following effects: blur caused by a finite depth-of-field and finite point spread function; blur caused by shadows created by light sources of non-zero radius; shading at a smooth object edge; and specularities or interreflections in the vicinity of object edges. A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line. To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels: 5 7 6 4 152 148 149. If the intensity difference were smaller between the 4th and the 5th pixels, and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region.
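The intuition in the 1-D example above can be sketched with a few lines of Python: the absolute difference between adjacent pixels peaks between the 4th and 5th pixels, which is exactly where we said the edge should be. This is a minimal illustration, not any particular published detector.

```python
# The 1-D signal from the text; the jump from 4 to 152 is the intuitive edge.
signal = [5, 7, 6, 4, 152, 148, 149]

# Absolute first differences; diffs[i] lies "between" pixel i and pixel i+1
# (0-based), i.e. between the (i+1)-th and (i+2)-th pixels in 1-based terms.
diffs = [abs(b - a) for a, b in zip(signal, signal[1:])]
print(diffs)  # -> [2, 1, 2, 148, 4, 1]

# The strongest difference sits at index 3: between the 4th and 5th pixels.
edge_at = max(range(len(diffs)), key=lambda i: diffs[i])
print(edge_at + 1)  # -> 4 (1-based position of the pixel just before the edge)
```

If the central jump were smaller and the surrounding differences larger, the maximum would no longer stand out, which is the point the text makes about thresholds not being easy to choose.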
Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled. There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection below. As a preprocessing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction). The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions. Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point.
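The first stage of the search-based pipeline described above can be sketched as follows: estimate first-order derivatives in the x- and y-directions (here with the standard Sobel kernels) and combine them into a gradient magnitude. The 8×8 test image and the helper function are illustrative assumptions, not part of the text; smoothing is omitted only because the synthetic image is noise-free.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution of img with a 3x3 kernel."""
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# Standard Sobel derivative kernels for the x- and y-directions.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# Synthetic image: dark left half, bright right half (a vertical step edge).
img = np.zeros((8, 8))
img[:, 4:] = 100.0

gx = convolve2d(img, sobel_x)   # first-order derivative estimate in x
gy = convolve2d(img, sobel_y)   # first-order derivative estimate in y
magnitude = np.hypot(gx, gy)    # gradient magnitude (the edge-strength measure)

# The magnitude peaks along the step and is zero in the flat regions.
print(magnitude[3])  # -> [  0.   0. 400. 400.   0.   0.]
```

A full search-based detector would follow this with non-maximum suppression along the gradient direction; a zero-crossing detector would instead look for sign changes in a second-order expression such as the Laplacian.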
The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges or result in fragmented edges. If the edge thresholding is applied to just the gradient m
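The trade-off between low and high thresholds can be illustrated on the edge-strength values (absolute adjacent differences) of the earlier 1-D signal; the threshold values here are illustrative only.

```python
# Edge strengths for the signal 5 7 6 4 152 148 149 (absolute differences).
strengths = [2, 1, 2, 148, 4, 1]

def detect_edges(strengths, threshold):
    """Return the indices whose edge strength exceeds the threshold."""
    return [i for i, s in enumerate(strengths) if s > threshold]

print(detect_edges(strengths, 1))   # low threshold: small fluctuations also fire -> [0, 2, 3, 4]
print(detect_edges(strengths, 50))  # high threshold: only the genuine edge -> [3]
```

Hysteresis thresholding, as used in the Canny detector, softens this trade-off by combining a high threshold for seeding edges with a low threshold for tracking their continuations.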