
Online Burning Material Pile Detection on Color Clustering and Quaternion based Edge Detection in Boiler

  • Wang, Weixing (College of Information Engineering, Chang'an University) ;
  • Liu, Sheng (College of Information Engineering, Chang'an University)
  • Received : 2014.01.02
  • Accepted : 2014.05.17
  • Published : 2015.01.31

Abstract

In combustion engineering, to decrease pollution, increase production efficiency, and optimally keep the amount of solid burning material in a burner constant online, a smart method is needed to detect the variation of the burning material amount in a high temperature environment. This paper presents an online machine vision system for automatically measuring and detecting the burning material amount inside a burner or a boiler. In the camera-protecting box of the system, a cooling sub-system is constructed using a cooling water circulation technique. The key and intelligent step in the system is to detect the pile profile of the varying burning material, and the algorithm for pile profile tracing was studied based on the combination of grey level (color) discontinuity based and similarity based image segmentation methods; the discontinuity based sub-algorithm is built on quaternion convolution, and the similarity based sub-algorithm is designed as region growing with multi-scale clustering. The results of the two sub-algorithms are fused to delineate the final pile profile. The algorithm has been tested and applied in different industrial burners and boilers, and the experiments show that the proposed algorithm works satisfactorily.

Keywords

1. Introduction

In the combustion industry, there are different types of boilers and burners, such as garbage and waste combustion, sodium, suspension, and oil and diesel combustion burners; some of them are used to burn solid materials such as solid fuel (wood, peat, coal, grains, etc.) for generating energy. In a combustion control procedure, one of the main tasks is to obtain information about the burning material amount inside a burner. The traditional way is to monitor the burning material amount by an operator, which is hard work and is person dependent. There is today no existing technology that gives the online information about the actual combustion in a good way, because the high temperature inside a burner makes online measurements difficult (or impossible).

Many boilers and burners are currently working with too low an effect simply because the combustion is difficult to control and the online burning information is very limited. The problems are not connected to the construction of the boilers and burners, but mainly to the lack of online visual information for monitoring the combustion. Instead of human eyes, image analysis and computer vision can provide non-contact measurements with both high speed and high accuracy, and can give the information needed to control combustion with low excess oxygen, a factor that is vital for minimizing emissions [1-3].

As tested, by using a computer vision system to get the online burning information from a burner, benefits can be obtained in different aspects: (1) the work efficiency is improved; (2) NOx emissions are decreased; (3) the work safety is increased; (4) the production capacity is enlarged; (5) the burner life becomes longer due to the lower excess oxygen; (6) ammonia and urea are eliminated; and (7) ashes are environmentally used for regeneration.

In any case, the new optical monitoring technology can obtain the online visual burning information by checking the variation of the burning material and the flames/temperature directly and automatically. As the following literature review shows, a number of researchers have recently used image analysis and machine vision techniques to obtain the online burning information in different ways.

Most researchers used color information to study a combustion monitoring system. Wellington et al. [4] proposed a digital image-based flame emission spectrometric (DIB-FES) method for quantitative chemical analysis. The method used a webcam to capture the flame images. A novel mathematical model was developed to build DIB-FES analytical curves and estimate figures. In their approach, each image was decomposed into its individual RGB components, whose values were used to define a position vector in the three-dimensional RGB space. As tested, in comparison with traditional flame emission spectrometry (trad-FES), no statistical difference was observed between the results when applying the paired t-test at the 95% confidence level. Hua et al. [5] extracted, analysed and compared the variations of flame oscillation frequency responses in diffusion-like sooty and premixed-like chemiluminescence flame colour entities under external acoustic perturbation using digital colour image processing techniques. The technique made use of both the RGB and HSV colour modelling principles to identify and tag digital image data that conformed to the different flame colour distribution regimes. In reference [6], a novel method was studied to improve the control performance of product quality by applying digital flame color images to control loops. It led to a substantial reduction in oxygen quality variability. With minimal oxygen quality variability, the requirement of excess combustion air in a furnace was minimized, with less loss of heat. An effective real-time flame color detection criterion was proposed by Chen and Bao [7]; the method was based on two-dimensional color space reconstruction and saturation fitting for a manually collected flame-pixel sample database. The flame size and accumulated gray value were then extracted to construct the time series of the video sequences of four combustion experiments. Finally, the oscillation frequency of the flame flickering was calculated by performing the Fourier Transform on these time series.

In addition to colors, some researchers used other image features for combustion analysis. Hernández and Ballester [8] suggested methods capable of converting geometrical and luminous data into reliable information on the state of practical combustion systems. For flame identification, one method used self-organising feature maps and yielded as output the most probable combustion regime among those previously characterised; the other was an adaptation of a speech recognition method and informed about the probability of an unknown state corresponding to the different combustion regimes. Bizon et al. [9] reported on 2D images of combustion-related luminosity taken in two optically accessible automobile engines of the most recent generation. The results were discussed to elucidate physical phenomena in the combustion chambers. Proper orthogonal decomposition (POD) was applied to images of luminosity taken during experiments on optically accessible internal combustion engines. González-Cencerrado et al. [10] developed a new image processing procedure for characterizing a given combustion state through flame visualization. In their advanced vision based system, flame images were recorded and subsequently processed, obtaining both luminous and spectral parameters from the grey values registered by each individual pixel. The acquisition system was based on a CCD camera with a high-speed frame rate. Li et al. [11] presented a flame image-based burning state recognition system using a set of heterogeneous features and fusion technologies. The color feature and the global and local configuration features were used to characterize different aspects of flame images, and the features were extracted from pixel values directly without image segmentation efforts.

With the development of computer vision techniques, three dimensional flame analysis has been carried out. Moinul Hossain et al. [12] presented the design, implementation, and evaluation of an optical fiber imaging based tomographic system for the 3-D visualization and characterization of a burner flame. Eight imaging fiber bundles coupled with two RGB charge-coupled device cameras were used to acquire flame images simultaneously from eight different directions around the burner. A tomographic algorithm that combined the logical filtered back-projection and the simultaneous algebraic reconstruction technique was applied to reconstruct the flame sections from the images.

For the contour feature of flames, Qiu et al. [13] developed an autoadaptive edge-detection algorithm for flame and fire images. As tested, the traditional edge detection algorithms could not be used for flame and fire boundary detection; therefore a new edge detection algorithm was proposed based on the properties of flames and fires. The algorithm first detected coarse and superfluous flame/fire edges, then identified the edges and removed the spurious edges and noise in the flame/fire image.

As the previous work reports, in flame/fire image analysis and computer vision, the colors and edges of flames/fires are basic and significant features. Most of the tracking techniques used to determine the pose of an object in a sequence rely on the fact that silhouettes of flames/fires/burning material piles can be extracted using relatively simple algorithms such as background subtraction or standard edge and gradient based techniques with colors or grey levels [14-15], but in most cases, the traditional edge detectors cannot trace the contours of flames/burning material piles satisfactorily [13].

However, burning material pile detection is more complicated than flame detection, and simple silhouette extraction is rarely sufficient. The pile silhouette extraction methods can be very brittle; they tend to fail in the presence of highly textured objects and clutter, which produce too many irrelevant edges. In such situations, it is advantageous to fuse edges and regions, as done in this study. A general machine vision system for combustion, as studied and developed in this paper, mainly includes two parts: hardware and software. The hardware is used for protecting the cameras from the high temperature environment, and the software is mainly for detecting the online situation in a burner. The studied system is described in the next section; in the system, the burning material pile, which is similar to but more complex than a flame, is detected by combining the quaternion based edge detection result and the color clustering based region detection result.

 

2. System Configuration

In the studied solid fuel boiler combustion, the purpose of a monitoring system is to check the variations of flame and temperature distributions, bed lengths, and burning material amounts. The system hardware consists of four CCD color cameras, an image frame grabber, a PC, and a D/A board. For each of the cameras, a camera protecting and cooling box (see Fig. 1) is specially designed and constructed, which enables the cameras to work inside a burner (where the temperature is up to 1400-1500 °C). The covered and cooled color CCD cameras are mounted at the back wall of a burner in four different places (directions).

Fig. 1. Components of the camera cooling and protecting box: in front of the camera, an air system keeps the camera lens clean in a complicated and high temperature environment

The software in the system is a Windows program (Fig. 2), and hundreds of functions and algorithms are included. The following example shows the software layout. For each measurement and analysis, such as statistical analysis, temperature measurement, bed length check and burning material height estimation, a dialog box lets an operator choose the different online parameters for the online measurement and analysis. In addition to the system parameter calibration, the operator can also define and input special measurement parameters that can be selected based on the basic visual parameters in the system.

Fig. 2. Windows system for monitoring and detecting burning material amount

The computer vision system (online system) enables combustion with lower excess oxygen (air), which can increase the work efficiency and decrease emissions. Solid fuel combustion boilers and burners are well constructed in today's industries, which is helpful for the development of the system for monitoring and controlling the combustion. As evaluated, the system can greatly reduce the cost of combustion production, and can yield a 30% increase in efficiency.

In this paper, as one of the main parts of the system, the online detection of the wood burning material pile is presented. The key technology is the burning material pile delineation based on the fusion of region growing with multi-scale color clustering and edge detection based on quaternion convolution. According to the properties of the images, the work procedure outline for the burning material pile delineation is described in the following section.

 

3. Image Properties and System Work Outline

In general, the studied combustion system first grabs images continuously; then the grabbed images are evaluated to disregard images of poor quality; subsequently a certain number of selected images are processed and analyzed as one group; finally the analysis results are transformed into analog signals to trigger an alarm or alert relay, or enter into the process control system for the automatic adjustment of the production parameters. The analysis results can also be displayed on the computer screen by an online graphic page, and are automatically saved into a database, from which they can be checked and re-displayed. The main work steps are illustrated in Fig. 3. To avoid processing vague images, the captured image is first evaluated according to edge density and image variance; then the rough material regions (rectangles) are marked based on the image color information and pre-knowledge of the burner, to reduce the computing burden for the subsequent image processing procedures. The contents of the three grey boxes are presented in the following sections.
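As a rough illustration of the image quality screening step, the sketch below rejects frames with low edge density or low grey-level variance before further analysis. It is a minimal sketch assuming an OpenCV/NumPy environment; the function name and the thresholds are illustrative placeholders, not the paper's calibrated values.

```python
import cv2
import numpy as np

def is_usable(frame_bgr, edge_density_min=0.02, variance_min=200.0):
    """Rough quality screen: reject vague/smoky frames before further analysis.

    The thresholds are illustrative only.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Edge density: fraction of pixels marked as edges by a quick Canny pass.
    edges = cv2.Canny(gray, 50, 150)
    edge_density = np.count_nonzero(edges) / edges.size

    # Global grey-level variance: very low variance suggests a washed-out image.
    variance = float(gray.var())

    return edge_density >= edge_density_min and variance >= variance_min
```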

Fig. 3. Flow chart of online system work procedure

A sequence of typical images is illustrated in Fig. 4. After the burning material falls down, there is a large pile of burning material in the burner; as the material burns continuously, the material amount decreases gradually. For this kind of sequence, one may ask why not check the image difference to get the information about the material amount variation; but as tested in this way, since there are too many factors affecting the flame image quality (such as smoke, flame strength, ashes and powders, etc.), only limited useful information can be obtained, enough only to roughly judge whether the burning material amount is reduced or increased.

Fig. 4. Burning material amount varies over time, from (a) to (f) (the 2nd, 55th, 96th, 163rd, 214th, and 266th frames from a video)

The dynamic estimation of the burning material amount is a hard task in an online combustion system (see Fig. 4), because the smoke often makes the pile contour of the burning material unclear. A normal edge detector or a traditional similarity based algorithm cannot work satisfactorily for this kind of image (see the next two sections); therefore, a number of special algorithms were studied and developed for this purpose, and the algorithm presented in this paper is the combination of different image segmentation technologies and pre-knowledge from the real applications. For the measurements, in order to avoid the effects of smoke and feeding materials, the system assumes that there is no large change in the image quality and the burning parameters within a short time; it compares the measurement results from image to image taken by a video camera, removes the badly polluted, poor quality images, and calculates the average values of the same measurement parameters from the different remaining images.

The purpose of the burning material pile detection is to monitor the variation of the burning material amount in a boiler/burner. The detected information is sent to the combustion control center, and the center then optimally controls the door (for adding burning materials), determining the door's open time and open gaps. In a boiler/burner, if there is too much burning material, the material cannot be fully burned, which produces pollution and increases the production cost; if there is too little material, the temperature inside the boiler/burner will be lower than what is needed, which increases the production cost too. Hence, the study and development of a good algorithm for burning material detection is very important for optimal combustion production.

In Fig. 5, three typical burning material images and their processing results are presented. In the first image, the pile is a mountain-like pile, the colors and intensities differ in different image parts, and a lot of white and black noise is present in the image; 90% of the pile profile is detected by an edge detection operator, and the region delineation result by a new sub-algorithm is good in most parts. To finalize the burning material pile profile, according to the above two different kinds of detection results, the fusion uses the detected edge information to match the detected region boundaries, to mend the non-detected parts, and to link the profile gaps. The final result of the pile profile tracing is displayed clearly.

Fig. 5. Burning material pile profile tracing for three typical images by combining quaternion convolution based edge detection and region growing with multi-scale clustering; each column presents one image's processing results

The second image is more blurred; it shows a two-peak mountain-like pile, and both the left and right parts of the pile are unclear, with dark and blurred regions. Part of the left profile edge of the pile is missing in the edge detection result; the region delineation is fair in general, but the pile profile location is not exact. The combination of the two unsatisfactory results can produce a good pile profile tracing result.

The last image presents another kind of burning material pile; the image was taken by a video camera located above the burning material pile, the colors on the pile vary greatly from place to place, and the differences are caused mainly by flames/fires and smoke. After the edge detection, there are many edges that do not lie on the pile profile boundary, and the region detection result looks good overall, but the extracted profile location is not satisfactory. If the edge detection result is used as a clue to trace the pile profile based on the region detection result, together with the original image information, the result is promising.

In a real combustion application, since the situations are very different, a traditional image edge detection algorithm cannot be applied in most of the situations (see the next two sections). To treat the variable situations, in this study, a special algorithm for burning material pile detection (combining edge detection and region delineation) is presented. Fig. 5 shows the robustness of the new algorithm, and the details of the algorithm (the fusion of the two sub-algorithms) are described in the following two sections.

Three other similar images are presented in Fig. 6. Fig. 6 includes three original burning wood material pile images: Fig. 6(a) shows a mountain-like pile where both the left and the right parts are vague and the boundary of the pile cannot be seen clearly; on the top part, the smoke makes the image fuzzy. Fig. 6(b) presents a darker, two-peak pile image; the contour of the pile is difficult to detect using a traditional edge based or region similarity based algorithm. Fig. 6(c) illustrates an image which is quite different from the images in Figs. 6(a-b), where the left part is much lighter than the right part, the pile is a two-stair pile, and there is a lot of smoke over the pile.

Fig. 6. Burning material pile profile tracing for three other typical images by combining quaternion convolution based edge detection and region growing with multi-scale clustering; each column presents one image's processing results

In any case, as shown in Figs. 5(b-c) and 6(b-c), the detected edges cannot fully cover the pile boundary, and the region detection result only partly separates the pile from the non-pile part, so the fusion of the two results is meaningful. To do this, the color edge detection result is compared with the color region growing and multi-scale clustering results. The main edge parts are used as part of the pile boundary. The gaps in the main edges (Fig. 5(b)) are filled/linked by using the information of the region boundaries (Figs. 5(c)-6(c)).
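To make this fusion step concrete, the sketch below shows one possible way to combine a binary edge map with the boundary of the detected pile region: edge pixels close to the region boundary are kept, and remaining gaps are bridged with the region boundary itself. This is a minimal sketch assuming 8-bit OpenCV binary masks; the function name, the tolerance band, and the parameters are illustrative rather than the system's exact fusion rules.

```python
import cv2
import numpy as np

def fuse_profile(edge_map, region_mask, gap_search_radius=7):
    """Hypothetical fusion of a binary edge map (quaternion-convolution detector)
    with a binary pile-region mask (clustering step); both masks are uint8 (0/255)."""
    kernel = np.ones((3, 3), np.uint8)

    # Boundary of the region mask (morphological gradient gives a thin contour).
    region_boundary = cv2.morphologyEx(region_mask, cv2.MORPH_GRADIENT, kernel)

    # Keep edge pixels close to the region boundary: dilate the boundary into a
    # tolerance band, then intersect with the edge map.
    band = cv2.dilate(region_boundary, kernel, iterations=gap_search_radius // 2)
    confirmed_edges = cv2.bitwise_and(edge_map, band)

    # Where no confirmed edge exists nearby, fall back to the region boundary to
    # bridge the gaps in the pile profile.
    covered = cv2.dilate(confirmed_edges, kernel, iterations=gap_search_radius // 2)
    gap_filler = cv2.bitwise_and(region_boundary, cv2.bitwise_not(covered))

    return cv2.bitwise_or(confirmed_edges, gap_filler)
```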

 

4. Color Edge Detection Using Quaternion Convolution

In image processing and computer vision, edge detection is a fundamental operation, because it provides basic information for object delineation and recognition. A large body of literature has dealt with this topic over the last 30 years. As is well known, Marr [16] described a method for determining edges using the zero-crossings of the Laplacian of Gaussian of an image. Similarly, Haralick [17] determined edges by fitting polynomial functions to local image intensities and finding the zero-crossings of the second directional derivative of the functions. Canny [18] detected edges by an optimization process and proposed an approximation to the optimal detector as the maxima of the gradient magnitude of a Gaussian-smoothed image [19]. Among the edge detection methods proposed so far, the Canny edge detector is the most rigorously defined operator and is widely used.

In addition, Clark [20] developed a method to filter out false edges obtained by the Laplacian of Gaussian operator. Bergholm [21] introduced the concept of edge focusing and tracked edges from coarse to fine to mask weak and noisy edges. Elder and Zucker [22] marked edges at multiple scales. For a survey and comparison of edge detectors, the reader can refer to [23-24].

In any case, it is hard to design a general edge detection algorithm which performs well in many contexts and meets the requirements of the subsequent processing stages. Conceptually, the most commonly proposed scheme for edge detection includes three operations: differentiation, smoothing and labeling. The goal of the edge detection in this study is to quickly and clearly detect the burning material amount, i.e. the pile profile variation. For the situations in Figs. 5-6, this paper mainly uses an edge-based algorithm to detect burning material boundaries (profiles), and it is based on quaternion convolution [25-26], described as follows.

Given a set of n independent observations {xi}, i = 1, 2, ..., n, with unknown density f(x), the nonparametric kernel probability density estimate computed at location x is given by

where h is the kernel bandwidth and d is the number of dimensions (equal to 3 for an RGB color image). A common choice for the kernel function K(x) is the multivariate Gaussian, defined for a d-dimensional vector x:

In general, Eqs. (1) and (2) are used to estimate the density locally from the data within a spatial window, for example, in a rectangular window W of size n = 3 × 3. Two kinds of pixel distributions occur within a 3 × 3 mask: in one, the window is located entirely over a single coherent region, the pixels form a unimodal distribution, and the density is expected to be high; in the other, the window straddles the boundary between two neighboring regions, the pixels form a bimodal distribution, and the density is expected to be low in the space between the two modes.
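For reference, a standard form of the kernel density estimate and of the multivariate Gaussian kernel described above is the following; the exact normalization used in Eqs. (1) and (2) may differ slightly.

```latex
% Standard multivariate kernel density estimate (cf. Eq. (1)):
\hat{f}(\mathbf{x}) = \frac{1}{n h^{d}} \sum_{i=1}^{n}
    K\!\left( \frac{\mathbf{x} - \mathbf{x}_i}{h} \right)

% Multivariate Gaussian kernel in d dimensions (cf. Eq. (2)):
K(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}} \exp\!\left( -\tfrac{1}{2}\,\mathbf{x}^{\top}\mathbf{x} \right)
```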

This method is suited to a 1-D signal. In a spatial window W, the signal samples can form a unimodal or bimodal distribution. In the case of a 1-D edge signal, some samples lie in the left mode and the others in the right mode. Let P1 and P2 be the two peak density values of the two modes, respectively. To maximize the edge detection probability, the edge output is taken at the saddle (valley) point [25-27].

Finding this position, the lowest density sampling point xT, is not easy to accomplish, since saddle point detection in a multivariate space carries a considerable computational load. To avoid iterative computing, an optimal thresholding approach is utilised and the location of xT is given by

where μ1 is the mean of the left mode, μ2 is the mean of the right mode, and σ2 is the global variance. The above formula is for the scalar case, but a multivariate extension for RGB color pixels is straightforward.
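As a point of reference, for two Gaussian modes of equal variance σ² with peak values P1 and P2, the classical minimum-error threshold has the closed form below; it is given only as a plausible reading of the formula described above, not as the paper's exact expression.

```latex
% Classical minimum-error threshold for two Gaussian modes of equal variance
% (peaks P_1, P_2, means \mu_1, \mu_2, global variance \sigma^2):
x_T = \frac{\mu_1 + \mu_2}{2}
      + \frac{\sigma^{2}}{\mu_1 - \mu_2}\,\ln\!\left( \frac{P_2}{P_1} \right)
```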

Any row of image pixels in the horizontal direction can also be regarded as a 1-D signal, and the scale is decided by the window W′ of size N × 1; N is thus used to decide the edge detection scale. The edge detector output can be rewritten as

After all the rows of image pixels have been processed for edge detection in the horizontal direction, the same method is applied to detect the edges in the vertical direction, with the window W′ of size 1 × N applied to all the columns of image pixels. The edge detection output in the vertical direction is

To get the angle with the horizontal direction for non-maxima suppression, a sign is assigned to the horizontal and vertical edge detection outputs, respectively. The sign can be defined as:

when P2 ≥ P1 in the horizontal direction, the sign is positive, otherwise it is negative, and the same judgement is made in the vertical direction. At each scale, which is determined by the window length N, the modulus at (x, y) is proportional to

The angle can be given by

The edges are the points (x, y) where the modulus MNf(x, y) has a local maximum in the direction of the gradient given by ANf(x, y).
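The sketch below illustrates how the signed horizontal and vertical responses could be combined into a modulus and angle and then thinned by non-maximum suppression along the gradient direction. It is a minimal sketch, not the paper's implementation: the arrays eh and ev stand for the horizontal and vertical detector outputs, and the quantization of the angle into four directions follows the usual convention.

```python
import numpy as np

def modulus_and_angle(eh, ev):
    """Combine signed horizontal (eh) and vertical (ev) edge responses into a
    gradient-like modulus and the angle with the horizontal direction."""
    modulus = np.hypot(eh, ev)
    angle = np.arctan2(ev, eh)
    return modulus, angle

def non_maximum_suppression(modulus, angle):
    """Keep only pixels whose modulus is a local maximum along the gradient
    direction (quantized to 0, 45, 90, 135 degrees)."""
    h, w = modulus.shape
    out = np.zeros_like(modulus)
    ang = (np.degrees(angle) + 180.0) % 180.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = ang[y, x]
            if a < 22.5 or a >= 157.5:            # gradient roughly horizontal
                n1, n2 = modulus[y, x - 1], modulus[y, x + 1]
            elif a < 67.5:                         # 45 degrees
                n1, n2 = modulus[y - 1, x + 1], modulus[y + 1, x - 1]
            elif a < 112.5:                        # gradient roughly vertical
                n1, n2 = modulus[y - 1, x], modulus[y + 1, x]
            else:                                  # 135 degrees
                n1, n2 = modulus[y - 1, x - 1], modulus[y + 1, x + 1]
            if modulus[y, x] >= n1 and modulus[y, x] >= n2:
                out[y, x] = modulus[y, x]
    return out
```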

As tested, for an image with a resolution of 512 × 512, the procedure can process 25 images per second on a PC (2 GHz, Windows XP). Figures 7 and 8 show examples of using different algorithms [19] for edge detection of the burning material pile; the best detection result is obtained by the new edge detection algorithm (Figs. 7(f), 8(f)). Since the burning material pile images are vague, the Roberts and Laplacian operators cannot detect the weak edges; the Sobel operator can produce detailed edges, but the edges are too rough and too complicated, which makes it difficult to binarize the gradient magnitude image; the Canny operator [18] can detect edges clearly and produce a binary result, but there are too many noise (e.g., spurious) edges, so the useful and complete edges are hard to pick out; the new edge detection algorithm can detect the real edges with less noise.
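A comparison like the one in Figs. 7-8 can be reproduced with the standard operators; the sketch below, assuming an OpenCV/NumPy environment, computes Sobel, Roberts, Laplacian, and Canny edge maps for a grey-level frame. The thresholds and kernel sizes are illustrative only, and the Roberts operator is applied through explicit 2 × 2 kernels since it is not a built-in OpenCV function.

```python
import cv2
import numpy as np

def classical_edge_maps(gray):
    """Produce Sobel, Roberts, Laplacian, and Canny edge maps of a uint8
    grey-level image for visual comparison; parameters are illustrative."""
    results = {}

    # Sobel gradient magnitude.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    results["sobel"] = cv2.convertScaleAbs(np.hypot(gx, gy))

    # Roberts cross operator via explicit 2x2 kernels.
    kx = np.array([[1, 0], [0, -1]], dtype=np.float64)
    ky = np.array([[0, 1], [-1, 0]], dtype=np.float64)
    rx = cv2.filter2D(gray.astype(np.float64), -1, kx)
    ry = cv2.filter2D(gray.astype(np.float64), -1, ky)
    results["roberts"] = cv2.convertScaleAbs(np.hypot(rx, ry))

    # Laplacian magnitude and Canny binary edges.
    results["laplacian"] = cv2.convertScaleAbs(cv2.Laplacian(gray, cv2.CV_64F))
    results["canny"] = cv2.Canny(gray, 50, 150)

    return results
```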

Fig. 7. Procedure of the edge based algorithm for burning material pile detection. (a) Original image #1; (b) Sobel result; (c) Roberts result; (d) Laplacian result; (e) Canny result; and (f) Result of edge detection using quaternion convolution

Fig. 8. More examples of the edge based algorithm for burning material pile detection. (a) Original image #2; (b) Sobel result; (c) Roberts result; (d) Laplacian result; (e) Canny result; and (f) Result of edge detection using quaternion convolution

 

5. Region Growing and Multi-scale Clustering

5.1 Color Space Determination

In a color image, RGB is one of the most widely used color spaces, but in this study it is converted into other special color spaces for particular applications; as is well known, the HSI model is used in color enhancement applications and the YIQ model is used in NTSC color television. These special color models provide a much better match to human visual perception than the RGB model.

Normally, the colors inside a MacAdam ellipse are visually indistinguishable. Any color lying on the perimeter of a MacAdam ellipse is just noticeably different from the color at the center of the ellipse. The size and shape of these ellipses vary in the color models used in image processing, including the HSI and YIQ models. That is to say, equal distances in different parts of the color space denote different amounts of perceived color shift. So it is not appropriate to do the clustering in these color spaces [28].

Recently, non-linear transformation methods have been used to operate in a color space called geodesic chromaticity, which gives almost equally perceived color shifts throughout the space. The equations for the non-linear transformation are given by MacAdam [28]:

and X, Y, Z are the tristimulus CIE color coordinates derived from RGB tristimulus values by means of the following transformation matrix:
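As a representative example, a commonly used RGB-to-XYZ transformation (NTSC primaries, CIE illuminant C) is the following; the coefficients actually used in the paper may differ.

```latex
% A commonly used RGB-to-XYZ transformation (NTSC primaries, illuminant C);
% the matrix used in the paper may have different coefficients:
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
  0.607 & 0.174 & 0.200 \\
  0.299 & 0.587 & 0.114 \\
  0.000 & 0.066 & 1.116
\end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}
```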

5.2 Multi-Scale Clustering (MSC)

Generally, most clustering methods require the number of clusters to be known or given. The MSC solves this problem by providing an objective measure for selecting an appropriate number of clusters, so it segments a color image into distinct color regions without requiring an operator to give the number of clusters. The idea of this algorithm is briefly presented in this section; more details can be found in reference [29].

A set of n color points Xk, k = 1, 2, ..., n, of dimension r is given. A scale-space representation of these points can be realized by convolving them with a Gaussian kernel ϕσ(x) of scale size σ

to generate the following function:

where r is the dimension of x, σ is the scale size, and ψσ changes as σ varies. ψσ is set as a potential field function and c(σ) is the number of clusters at scale size σ. The cluster prototypes vi (i ∈ [1, c(σ)]) are obtained by setting the gradient ∇ψσ(vi) to zero and requiring the Hessian matrix Hψσ(vi) to be positive semi-definite [29].
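One common way to write the kernel and the resulting potential field in scale-space clustering is shown below; it is consistent with the description above (prototypes are stationary points of ψσ with a positive semi-definite Hessian), but it is only a plausible form, not necessarily the exact expression used in [29].

```latex
% One common formulation of the scale-space potential field (cf. [29]);
% the exact constants may differ:
\phi_{\sigma}(\mathbf{x}) = \exp\!\left( -\frac{\lVert \mathbf{x} \rVert^{2}}{2\sigma^{2}} \right),
\qquad
\psi_{\sigma}(\mathbf{x}) = -\frac{1}{n} \sum_{k=1}^{n} \phi_{\sigma}(\mathbf{x} - \mathbf{X}_k)
```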

As analysed above, the number of clusters and the locations of the prototypes are affected by the scale size σ. The developed MSC algorithm consists of two parts: (1) an inter-cluster representation of the data based on a structural criterion called lifetime; (2) an intra-cluster representation of the data based on another structural criterion called drift speed. These concepts originate from scale-space theory [29]. Here the first part of the MSC algorithm is used to find an appropriate number of color clusters. The locations of the prototypes are obtained by minimizing the trace of the within-cluster scatter matrix.

The term lifetime is defined as follows:

where σmax(c) and σmin(c) denote the maximum and minimum scale sizes at which the number of clusters is c. The number of clusters changes with the scale size; the longer the lifetime, the steadier the corresponding clustering. The clustering is most stable when the lifetime reaches its maximum:

where c* denotes the number of clusters at which the lifetime is maximal, and this is taken as the optimal cluster number.
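Written out from the definitions above, the lifetime and the optimal cluster number can be expressed as:

```latex
\mathrm{lifetime}(c) = \sigma_{\max}(c) - \sigma_{\min}(c),
\qquad
c^{*} = \arg\max_{c}\ \mathrm{lifetime}(c)
```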

5.3 Clustering based Image Segmentation Algorithm

The procedure of the clustering based image segmentation algorithm is as follows:

(1) The image is quantized according to the results of the MSC; that is to say, the colors in the same cluster are quantized to the same value.

(2) The algorithm searches for unlabeled pixels in the processed image and finds the current core pixel. The search order is from the top left corner to the bottom right corner of the image. (The rule for affirming a core pixel is introduced later.)

(3) If a core pixel p is found, a new cluster is created. Then the algorithm iteratively searches for unlabeled pixels which are density-connected with p, and labels these pixels with the same cluster label.

(4) If core pixels still exist in the image, go to (3).

(5) The pixels which are not included in any cluster are merged into the cluster adjacent to them.

The rule for affirming a core pixel: in a region D around a pixel p, if D contains at least a minimum number, MinP, of pixels in the same cluster as p, then p is called a core pixel. The size of the region D and the minimum number MinP are given. Generally, the region D is defined as a square region with side length N, and MinP is half the number of pixels in D (MinP = N×N/2), where N changes with the resolution of the image. In this way, the algorithm avoids defining pixels on a borderline, or noise pixels, as a separate region, which overcomes the over-segmentation problem.

Definition of density-connectivity: if a is density-connected with b, then a and b are in the same cluster and a is spatially connected with b. Without the requirement for spatial connectivity, a pixel very far away could also be classified into the same area, so the spatial connectivity is very helpful for noise suppression. A code sketch of this procedure is given below.
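The sketch below is a minimal interpretation of steps (1)-(5): it grows spatially connected regions seeded at core pixels over a map of MSC cluster indices. The function name, the 4-connectivity, and the parameter N are illustrative, and step (5) (merging leftover pixels) is only noted in a comment.

```python
import numpy as np
from collections import deque

def cluster_region_growing(cluster_map, N=5):
    """cluster_map holds the MSC cluster index of every pixel; pixels are grown
    into spatially connected regions seeded at core pixels."""
    h, w = cluster_map.shape
    labels = np.zeros((h, w), dtype=np.int32)   # 0 = unlabeled
    min_p = (N * N) // 2                        # MinP = N*N/2
    r = N // 2
    next_label = 0

    def is_core(y, x):
        # Core pixel: its NxN neighborhood D contains at least MinP pixels
        # belonging to the same MSC cluster as the pixel itself.
        window = cluster_map[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        return np.count_nonzero(window == cluster_map[y, x]) >= min_p

    for y in range(h):
        for x in range(w):
            if labels[y, x] or not is_core(y, x):
                continue
            next_label += 1
            labels[y, x] = next_label
            queue = deque([(y, x)])
            while queue:                        # grow density-connected pixels
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                            and cluster_map[ny, nx] == cluster_map[cy, cx]):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    # Step (5), merging leftover unlabeled pixels into adjacent regions, is omitted.
    return labels
```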

5.4 Experiments and Analyses

Normally, a part of the regions in a boiler are burning material areas; the material zones are close to or neighboring the boiler wall, and therefore the areas of interest in the image can be defined first, so that the studied algorithm only examines these areas. In most cases, the burning material zones can be separated into three to five zones; the obvious material zones where the average gray values are lower (see Figs. 9-10) can be obtained easily, but the others may be uncertain or difficult to detect. Hence the algorithm has to do the fine detection with the edge detection algorithm described in the above section. To compare the new algorithm with others, four existing and widely used image segmentation algorithms are applied: (1) Otsu thresholding [30]; (2) Region growing [31]; (3) Multiple thresholding [14]; and (4) Object region delineation on Canny edge detection [31].

Fig. 9. Burning material pile detection using different algorithms. (a) Original image; (b) Otsu thresholding; (c) Region growing; (d) Multiple thresholding; (e) Canny based segmentation; and (f) Clustering based image segmentation algorithm

Fig. 10. Comparison of three algorithms for burning material pile detection. The images on the first row are obtained from the image in Fig. 7(a), and the images on the second row from the image in Fig. 8(a). (a) and (d) are by Region growing; (b) and (e) are by Multiple thresholding; (c) and (f) are by the Clustering based image segmentation algorithm

In Fig. 9, the original image is a burning material pile image with different light intensities and colors and with noise, and the bottom part of the pile is connected to a non-material part. The five algorithms are applied to the same image. The Otsu thresholding gives a good result in the right part of the image, since the contrast between flames and burning materials is high in that part, but the result in the left part is bad because the colors of the materials are very close to those of the boiler wall. The Region growing divides the whole image into different zones from top to bottom, and the material pile cannot be exactly extracted. The Multiple thresholding result is better than that of the simple Otsu thresholding, and the black zone covers most of the burning material pile, but the pile boundary localization is not satisfactory. The Canny edge detection based algorithm divides the image into two main zones because the edges between the two regions are obvious, but a more exact detection function is needed for the pile profile delineation. The clustering based algorithm separates the image into five zones, and the two zones at the bottom of the image represent the whole material pile satisfactorily.

Similar results have been achieved for the other burning material pile images. Fig. 10 illustrates the processing results for the two images from Fig. 7(a) and Fig. 8(a).

 

6. Conclusion

The presented burning material amount detection algorithm was developed for a combustion monitoring system that has been applied in the combustion industry. In the system, the key part of the hardware is a cooling tube for the camera, which makes it possible to use cameras in a high temperature environment. The software is a Windows program which includes a number of algorithms and functions for the different measurements in a burner or a boiler. The developed algorithms (procedures) for the burning material pile combine different image processing technologies and provide measurements with high speed and high accuracy.

The algorithm for the pile detection was studied based on the combination of edge detection using quaternion convolution and Region growing with Multi-scale clustering. It uses the detected region boundary information to mend pile edges and thus complete the pile profile tracing. The algorithm and its two sub-algorithms were tested on a number of burning material images taken from burners and boilers. In addition, to prove the robustness of the new sub-algorithms, four widely used edge detection algorithms were applied to the same images for comparison with the quaternion convolution based edge detection algorithm, and four similarity based algorithms were used for comparison with the studied Region growing with Multi-scale clustering algorithm; the testing and comparison results illustrated that both of the studied sub-algorithms worked better for the flame images than the others. To delineate the burning material pile accurately, the two sub-algorithms were combined into a pile detection algorithm, which delineated pile boundaries exactly and worked satisfactorily.

Since the images in this study were taken from Swedish wood material burners and boilers, the testing work was limited, because the images taken from different boilers or burners vary greatly. To make the studied algorithm applicable to other online flame images, more research and testing work is required to improve the color based image segmentation algorithm.

References

  1. Pu Han, Zhang Xin, Wang Bing, Pan Wei-hua, "Interactive Method of Furnace Flame Image Recognition Based on Neural Networks," Proceedings of the CSEE, vol. 28, no. 20, pp. 22-26, Jul. 15, 2008. http://www.pcsee.org/EN/abstract/abstract20083.shtml.
  2. Li Weitao and Kezhi Mao and Xiaojie Zhou and Tianyou Chai and Hong Zhang, "Eigen-flame image-based robust recognition of burning states for sintering process control of rotary kiln," in Proc. of Joint 48th IEEE Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, P.R. China, pp. 398-403, December 16-18, 2009.
  3. Sujatha K. et al., "Combustion Quality Estimation in Power Station Boilers using Median Threshold Clustering Algorithms," International Journal of Engineering Science and Technology, vol. 2, no. 7, pp. 2623-2631, 2010. http://130.203.133.150/viewdoc/similar;jsessionid=6D64B20A057737DCD29AF55C05940BCF?doi=10.1.1.176.2475&type=ab.
  4. Wellington Silva Lyra, et al., "Digital image-based flame emission spectrometry," Talanta, vol. 77, no. 5, pp. 1584-1589, March 15, 2009. https://doi.org/10.1016/j.talanta.2008.09.057
  5. Hua Wei Huang, et al., "Characterisation of external acoustic excitation on diffusion flames using digital colour image processing," Fuel, vol. 94, pp. 102-109, April 2012. https://doi.org/10.1016/j.fuel.2011.12.034
  6. Junghui Chen et al., "Design of image-based control loops for industrial combustion processes," Applied Energy, vol. 94, pp. 13-21, June 2012. https://doi.org/10.1016/j.apenergy.2011.12.080
  7. Chen Juan and Bao Qifu, "Digital image processing based fire flame color and oscillation frequency analysis," in Proc. of International Symposium on Safety Science and Technology, Procedia Engineering, vol. 45, pp. 595-601, 2012. https://doi.org/10.1016/j.proeng.2012.08.209
  8. R. Hernandez, J. Ballester, "Flame imaging as a diagnostic tool for industrial combustion," Combustion and Flame, vol. 155, no. 3, pp. 509-528, November 2008. https://doi.org/10.1016/j.combustflame.2008.06.010
  9. K. Bizon, et al., "POD-based analysis of combustion images in optically accessible engines," Combustion and Flame, vol. 157, no. 4, pp. 632-640, April 2010. https://doi.org/10.1016/j.combustflame.2009.12.013
  10. A. Gonzalez-Cencerrado, B. Pezalez, A. Gil, "Coal flame characterization by means of digital image processing in a semi-industrial scale PF swirl burner," Applied Energy, vol. 94, pp. 375-384, June 2012. https://doi.org/10.1016/j.apenergy.2012.01.059
  11. Weitao Li, Dianhui Wang, and Tianyou Chai, "Flame Image-Based Burning State Recognition for Sintering Process of Rotary Kiln Using Heterogeneous Features and Fuzzy Integral," IEEE Transactions on Industrial Informatics, vol. 8, no. 4, pp. 780-790, November 2012. https://doi.org/10.1109/TII.2012.2189224
  12. Md. Moinul Hossain, Gang Lu and Yong Yan, "Optical Fiber Imaging Based Tomographic Reconstruction of Burner Flames," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 5, pp. 1417-1425, May 2012. https://doi.org/10.1109/TIM.2012.2186477
  13. Tian Qiu, Yong Yan and Gang Lu, "An Autoadaptive Edge-Detection Algorithm for Flame and Fire Image Processing," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 5, pp. 1486-1493, May 2012. https://doi.org/10.1109/TIM.2011.2175833
  14. Tseng Shou-Yi, "Motion estimation using a frame-based adaptive thresholding approach," Real-Time Imaging, vol. 10, no. 1, pp. 1-7, February 2004. https://doi.org/10.1016/j.rti.2003.09.014
  15. Yang JF, Chang YC, Chen CU, "Computation reduction for motion search in low rate video codes," IEEE Transactions on Circuits System Video Technology, vol. 12, no. 10, pp. 948-51, Oct. 2002. https://doi.org/10.1109/TCSVT.2002.804892
  16. Marr D. and E. Hildreth, "Theory of Edge detection," in Proc. of Royal Society of London, vol. B-207, no. 1167, pp. 187-217, February 29, 1980.
  17. Haralick R., "Digital Step Edges from Zero Crossing of Second Directional Derivatives," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, no. 1, pp. 58-68, Jan. 1984.
  18. Canny J., "A computational Approach to Edge Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, Nov. 1986.
  19. Zhao Xiangmo, Weixing Wang and Liping Wang, "Parameter optimal determination for Canny edge detection," International Journal: Imaging Science Journal, vol. 59, no. 6, pp. 332-341, November 2011. https://doi.org/10.1179/136821910X12867873897517
  20. Clark J.J., "Authenticating Edges Produced by Zero Crossing Algorithms," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 11, no. 1, pp. 43-57, Jan. 1989. https://doi.org/10.1109/34.23112
  21. Bergholm F., "Edge focusing," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, no. 6, pp. 726-741, Nov. 1987.
  22. Elder J.H. and S.W. Zucker, "Local Scale Control for Edge Detection and Blur Estimation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 699-716, July 1998. https://doi.org/10.1109/34.689301
  23. Wang Weixing, "Fragment Size Estimation without Image Segmentation," International Journal: Imaging Science Journal, vol. 56, no. 2, pp. 91-96, April 2008. https://doi.org/10.1179/174313108X268312
  24. Wang W X, W S Li and X Yu, "Fractional differential algorithms for rock fracture images," The Imaging Science Journal, vol. 60, no. 2, pp. 103-111, April 2012. https://doi.org/10.1179/1743131X11Y.0000000012
  25. Xu Jiangyan, Weixing Wang, and Linning Ye, "Rock fracture edge detection based on quaternion convolution by scale multiplication," International Journal: Optical Engineering: Opt. Eng. vol. 48, no. 9, 097001, Sep. 3, 2009.
  26. Sangwine S. J., "Colour image edge detection based on quaternion convolution," Electronics Letters, vol. 34, no. 10, pp. 969-971, May 1998. https://doi.org/10.1049/el:19980697
  27. Papari G., N. Petkov, "Adaptive pseudo dilation for gestalt edge grouping and contour detection," IEEE transactions on image processing, vol. 17, no. 10, pp. 1950-1962, October 2008. https://doi.org/10.1109/TIP.2008.2002306
  28. MacAdam D., Color measurement, theme and variations, Springer-Verlag, 1981. http://www.tandfonline.com/doi/abs/10.1080/713820848.
  29. Kehtarnavaz N., Nakamura E., "Generalization of the EM algorithm for mixture density estimation," Pattern Recognition Letters. vol. 19, no. 2, pp. 133-140, February 1998. https://doi.org/10.1016/S0167-8655(97)00173-6
  30. Otsu N., "A threshold selection method from gray-level histogram," IEEE Trans. Systems Man Cybernet, vol. 9, no. 1, pp. 62-66, January 1979. https://doi.org/10.1109/TSMC.1979.4310076
  31. Wang Weixing, "Colony image acquisition system and segmentation algorithms," Optical Engineering, vol. 50, no. 12, Nov 29, 2011.
