IMAGE SEGMENTATION ALGORITHMS USING MATLAB
“One picture is worth more than ten thousand words” -ANONYMOUS
In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as super pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as colour, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).
The simplest method of image segmentation is called the thresholding method. This method is based on a clip-level (or a threshold value) to turn a gray-scale image into a binary image. The key to this method is selecting the threshold value (or values, when multiple levels are used). Several popular methods are used in industry, including the maximum-entropy method and Otsu's method (maximum variance); k-means clustering can also be used.
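To make the clip-level idea concrete, the sketch below implements Otsu's method in pure Python (an illustrative sketch, not the MATLAB code used in the project; the function name `otsu_threshold` and the toy pixel list are our own). It picks the threshold that maximises the between-class variance of the resulting background/foreground split:

```python
def otsu_threshold(gray, levels=256):
    """Return the clip level maximizing between-class variance (Otsu's method)."""
    # Histogram of gray-level counts.
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    total = len(gray)

    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0      # running sum of intensities in the background class
    w_bg = 0          # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        # Between-class variance for this candidate clip level.
        var = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy "image": two intensity populations, one around 40 and one around 200.
pixels = [38, 40, 42, 45, 41, 198, 200, 202, 205, 199]
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

With two well-separated populations, any threshold between them gives the same (maximal) between-class variance, so the dark pixels map to 0 and the bright pixels to 1.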
The K-means algorithm is an iterative technique that is used to partition an image into K clusters. The basic algorithm is:
1. Pick K cluster centres, either randomly or based on some heuristic
2. Assign each pixel in the image to the cluster that minimizes the distance between the pixel and the cluster centre
3. Re-compute the cluster centres by averaging all of the pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters)
In this case, distance is the squared or absolute difference between a pixel and a cluster centre. The difference is typically based on pixel colour, intensity, texture, and location, or a weighted combination of these factors. K can be selected manually, randomly, or by a heuristic.
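Steps 1–4 above can be sketched for grey-level pixels in a few lines. The fragment below is a pure-Python illustration (not the project's MATLAB code; the helper name `kmeans_segment` and the spread heuristic used for step 1 are our own choices), with distance taken as the squared difference in intensity:

```python
def kmeans_segment(pixels, k, iters=100):
    """Cluster 1-D pixel intensities into k groups (steps 1-4 above).

    Distance here is the squared intensity difference; a real segmenter
    would extend the feature to colour, texture and location. k >= 2.
    """
    # Step 1: pick k centres by a simple heuristic - values evenly
    # spread through the sorted intensities.
    srt = sorted(pixels)
    centres = [srt[(i * (len(srt) - 1)) // (k - 1)] for i in range(k)]
    labels = [0] * len(pixels)
    for _ in range(iters):
        # Step 2: assign each pixel to the nearest centre.
        new_labels = [min(range(k), key=lambda c: (p - centres[c]) ** 2)
                      for p in pixels]
        if new_labels == labels:          # step 4: convergence
            break
        labels = new_labels
        # Step 3: recompute each centre as the mean of its pixels.
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centres[c] = sum(members) / len(members)
    return labels, centres

# Nine pixels forming three obvious intensity groups.
image = [12, 14, 15, 100, 104, 98, 250, 255, 248]
labels, centres = kmeans_segment(image, k=3)
```

On this toy image the three intensity groups come out as three clusters, and each centre converges to the mean of its group.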
This algorithm is guaranteed to converge, but it may not return the optimal solution. The quality of the solution depends on the initial set of clusters and on the value of K.
In statistics and machine learning, the k-means algorithm is a clustering algorithm that partitions n objects into k clusters, where k < n. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that both attempt to find the centres of natural clusters in the data. The model requires that the object attributes correspond to elements of a vector space. The objective is to minimize the total intra-cluster variance, i.e. the squared error function. K-means clustering was first proposed in 1956. The most common form of the algorithm uses an iterative refinement heuristic known as Lloyd's algorithm. Lloyd's algorithm starts by partitioning the input points into k initial sets, either at random or using some heuristic. It then calculates the mean point, or centroid, of each set, and constructs a new partition by associating each point with the closest centroid. The centroids are then recalculated for the new clusters, and the algorithm proceeds by alternating these two steps until convergence, which is reached when the points no longer switch clusters (or, equivalently, the centroids no longer change). Lloyd's algorithm and k-means are often used synonymously, but in reality Lloyd's algorithm is a heuristic for solving the k-means problem: with certain combinations of starting points and centroids, it can converge to the wrong answer. Other variations exist, but Lloyd's algorithm has remained popular because it converges extremely quickly in practice.
In terms of performance, the algorithm is not guaranteed to return a global optimum. The quality of the final solution depends largely on the initial set of clusters and may, in practice, be much poorer than the global optimum. Since the algorithm is extremely fast, a common remedy is to run it several times and return the best clustering found. A drawback of the k-means algorithm is that the number of clusters k is an input parameter; an inappropriate choice of k may yield poor results. The algorithm also assumes that variance is an appropriate measure of cluster scatter.
Two types of algorithms are used in our project. They are described below.
Segmentation is pivotal work in character recognition, especially when hand-written characters are connected. During the past 50 years, many methods have been proposed for segmenting connected characters. The drop fall algorithm is a classical segmentation algorithm often used in character segmentation because of its simplicity and effectiveness in application. First proposed by G. Congedo in 1995, the Drop Fall algorithm mimics the motion of a falling raindrop that starts above the characters, rolls along their contour, and cuts through the contour when it cannot fall further. The raindrop follows a set of movement rules to determine the segmentation trace. Concretely, the Drop Fall algorithm selects one pixel from the neighbours of the current pixel as the next pixel of the segmentation trace. Although the Extended Drop Fall algorithm has been proposed to improve the performance of the basic drop fall algorithm, when the raindrop falls into a concave pixel between small convexities on the contour of a character, these algorithms will treat it as connected strokes and start splitting there. This can split a single character and produce an invalid segmentation. For this case, we introduce the Inertial Drop Fall algorithm, which follows the previous direction of the segmentation. Furthermore, the Big Inertial Drop Fall algorithm has been proposed, which increases the size of the raindrop: when there is not enough free space for the big raindrop to fall, it searches in another direction and can thus avoid falling into the concavity.
TYPES OF DROPFALL ALGORITHM:
Traditional Drop Fall algorithm:
As mentioned above, the basic idea of Traditional Drop Fall (TDF) algorithms is to simulate a "drop-falling" process. The cut trace is determined from the neighbouring pixels of the current drop position. The algorithm considers only five adjacent pixels: the three pixels below the current pixel and the pixels to its left and right. Upward moves are not considered, because the rules are meant to mimic a falling motion.
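The five-neighbour rule can be sketched as follows in pure Python (an illustrative sketch, not the project's MATLAB code: the helper name `drop_fall`, the tiny binary image, and the particular priority ordering of the diagonal moves are our own assumptions; published variants order the moves differently):

```python
def drop_fall(img, start_col):
    """Trace a Traditional Drop Fall cut path on a small binary image.

    img[r][c] == 1 means character (foreground), 0 means background.
    The drop starts on the (assumed background) top row at column
    start_col.  At each step it inspects five neighbours of the current
    pixel: below, below-left, below-right, left and right, in that
    priority order (one common choice - variants differ).  When none of
    the five moves reaches background, the drop "cuts" straight down
    through the stroke.  Upward moves are never taken.
    """
    rows, cols = len(img), len(img[0])
    r, c = 0, start_col
    path = [(r, c)]
    prev = None                      # last position, to avoid bouncing back
    while r < rows - 1:
        moves = [(r + 1, c), (r + 1, c - 1), (r + 1, c + 1),
                 (r, c - 1), (r, c + 1)]
        for nr, nc in moves:
            if 0 <= nc < cols and img[nr][nc] == 0 and (nr, nc) != prev:
                prev = (r, c)
                r, c = nr, nc
                break
        else:
            r += 1                   # stuck: cut down through the stroke
        path.append((r, c))
    return path

# Two touching blobs joined by a one-pixel bridge at row 2, column 3.
img = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
path = drop_fall(img, start_col=3)
```

On this toy image the drop falls into the gap between the two blobs, finds every neighbour blocked at the bridge, and cuts straight down through it, separating the two components.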