Image Analysis
Image analysis is the process of extracting from an image the description needed by higher-level scene analysis methods. Image analysis techniques include computation of perceived brightness and color, partial or complete recovery of three-dimensional data in the scene, location of discontinuities corresponding to objects in the scene, and characterization of the properties of uniform regions in the image.

Image analysis is important in many areas, such as aerial survey photographs, slow-scan television images of the moon or planets, X-ray images, and television images taken from an industrial robot's visual sensor.
The sub-areas of image processing include image enhancement, pattern detection and recognition, and scene analysis and computer vision.

Image Enhancement:

It deals with improving image quality by eliminating noise (extraneous, unwanted, or missing pixels) or by enhancing contrast.
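
As a rough sketch of contrast enhancement (assuming NumPy is available; the helper name stretch_contrast is ours, not from the text), the code below linearly rescales the intensities between two chosen percentiles to the full 0-255 range:

```python
import numpy as np

def stretch_contrast(image, low_pct=2, high_pct=98):
    """Linearly stretch intensities between two percentiles to the full 0-255 range."""
    lo, hi = np.percentile(image, [low_pct, high_pct])
    scaled = (image.astype(float) - lo) / max(hi - lo, 1e-6)
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

# A dim, low-contrast image (values 100-139) spans nearly 0-255 after stretching.
dim = np.random.randint(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = stretch_contrast(dim)
print(dim.min(), dim.max(), "->", enhanced.min(), enhanced.max())
```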

Pattern Detection and Recognition:

It deals with detecting and classifying standard patterns and finding distortions from these patterns, for example in optical character recognition.

Scene Analysis and Computer Vision:

It deals with recognizing and reconstructing 3D models of a scene from several 2D images. An example is an industrial robot sensing the relative sizes, shapes, positions, and colors of objects.

Image Recognition:

To fully recognize an object in an image means knowing that there is an agreement between the sensory projection and the observed image. How an object appears in the image has to do with the spatial configuration of its pixel values. Image recognition can be carried out in the following steps:

Conditioning:

Conditioning estimates the informative pattern on the basis of the observed image. It suppresses noise and uninteresting systematic or patterned variations in order to normalize the image.
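
A minimal sketch of conditioning, assuming NumPy only (the names mean_filter and normalize are illustrative): a small mean filter suppresses pixel noise, and zero-mean, unit-variance normalization removes systematic brightness variation.

```python
import numpy as np

def mean_filter(image, k=3):
    """Suppress noise by replacing each pixel with the mean of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def normalize(image):
    """Normalize to zero mean and unit variance to remove systematic variation."""
    return (image - image.mean()) / (image.std() + 1e-8)

noisy = 128.0 + np.random.normal(0, 20, size=(32, 32))
conditioned = normalize(mean_filter(noisy))
```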

Labeling:

Labeling assumes that the informative pattern has structure as a spatial arrangement of events, each spatial event being a set of connected pixels. The labeling operation determines in what kind of primitive spatial event each pixel participates, edge detection being a typical example.
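
As an illustration of labeling by edge detection (a sketch assuming NumPy; the threshold value and function names are ours), each pixel is labeled 1 if its Sobel gradient magnitude exceeds a threshold, meaning it participates in an "edge" event, and 0 otherwise:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def filter2d(image, kernel):
    """Apply a small kernel at every pixel (cross-correlation with edge padding)."""
    kh, kw = kernel.shape
    pad = kh // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(kh):
        for x in range(kw):
            out += kernel[y, x] * padded[y:y + h, x:x + w]
    return out

def label_edges(image, threshold=100.0):
    """Label each pixel 1 (edge event) or 0 (non-edge) from its gradient magnitude."""
    gx = filter2d(image, SOBEL_X)
    gy = filter2d(image, SOBEL_Y)
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)
```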

Grouping:

Grouping identifies the events by collecting together, or identifying, maximal connected sets of pixels participating in the same kind of event. The grouping operation in which edge pixels are grouped into lines is called line fitting. Grouping involves a change of logical data structure: the observed image, the conditioned image, and the labeled image are all digital image data structures, whereas the result of grouping is a collection of pixel groups.
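
One way to realize grouping, sketched here with NumPy and a breadth-first flood fill (one of several possible grouping methods, not necessarily the text's), is connected-component labeling: every maximal 4-connected set of pixels carrying the same event label receives one group number.

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """Group labeled (nonzero) pixels into maximal 4-connected sets."""
    h, w = binary.shape
    groups = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and groups[sy, sx] == 0:
                current += 1                       # start a new group
                queue = deque([(sy, sx)])
                groups[sy, sx] = current
                while queue:                       # flood-fill the rest of the group
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and groups[ny, nx] == 0:
                            groups[ny, nx] = current
                            queue.append((ny, nx))
    return groups, current
```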

Extracting:

This operation computes, for each group of pixels, a list of properties. The properties might include its centroid, area, orientation, spatial moments, circumscribing circle, inscribing circle, and so on. It can also measure spatial relationships between two or more groupings.
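
A sketch of the extracting step (assuming NumPy; region_properties is an illustrative name): given the group image from the previous step, it computes the area, centroid, and an orientation estimate from second-order central moments for one group.

```python
import numpy as np

def region_properties(groups, group_id):
    """Compute area, centroid and orientation (axis of least inertia) of one pixel group."""
    ys, xs = np.nonzero(groups == group_id)
    area = ys.size
    cy, cx = ys.mean(), xs.mean()
    # Second-order central moments give the orientation of the group.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    orientation = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return {"area": area, "centroid": (cy, cx), "orientation": orientation}
```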

Matching:

The matching operation determines the interpretation of some related set of image events, associating these events with some given three-dimensional object or two-dimensional shape. There is a wide variety of matching operations; the classic example is template matching, which compares the examined pattern with stored models of known patterns and chooses the best match.
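
The sketch below shows the template-matching idea in its simplest form (NumPy assumed; sum of squared differences is only one of several possible match scores): the template is slid over the image and the position with the lowest difference score is reported as the best match.

```python
import numpy as np

def template_match(image, template):
    """Slide the template over the image; return the top-left corner of the best SSD match."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw].astype(float)
            score = np.sum((window - template) ** 2)   # sum of squared differences
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# A 4x4 block of ones embedded at row 5, column 7 is located exactly there.
img = np.zeros((20, 20))
img[5:9, 7:11] = 1.0
print(template_match(img, np.ones((4, 4)))[0])   # (5, 7)
```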