Discriminatory Projection of Camouflaged Texture Through Line Masks

  • Bhajantri, Nagappa (Dept. of Computer Science and Engg, Government Engineering College) ;
  • Pradeep, Kumar R. (Adithya Institute of Technology) ;
  • Nagabhushan, P. (University of Mysore, Department of Studies in Computer Science)
  • Received : 2012.10.31
  • Accepted : 2013.10.01
  • Published : 2013.12.31

Abstract

The blending of a defective texture with the ambient texture results in camouflage. The gray value or color distribution pattern of camouflaged images fails to reflect considerable deviations between the camouflaged object and the blending background, which demands improved strategies for texture analysis. In this research, we propose an initial enhancement of the image that employs line masks, which results in better discrimination of the camouflaged portion. Finally, the gray value distribution patterns of the enhanced image are analyzed to locate the camouflaged portions.

1. INTRODUCTION

Camouflage is a method that helps visible organisms or objects to remain indiscernible from their surrounding environment. In other words, camouflage is an entity's form of natural or unnatural deception to hide itself from visual systems for various reasons. Most of the time, the reason has been the prevention of attack from predators or enemies. Camouflage works because it creates visual confusion. It does not make things invisible; it simply disguises the recognizable form by breaking up the visual outline (i.e., in an array of arranged elements, one element could merge with a neighbouring element, or there could be a change in orientation or a slight misalignment).

In daily experiences, camouflage could be described as disguising one thing as two, two things as one, three as two, and so on. Most often, it causes confusion between an object and its background (called “blending”), between one kind of object and another (mimetic resemblance), or it causes a single object to divide into meaningless fragments (dazzle) (Roy 2006).

Even today, the only way practiced by and large to detect such defects/camouflaging is by visual inspection, which is highly erroneous, because once the visual system is in a state of fatigue, the inspector's concentration level gradually decreases and fails to detect the defects. Thus, we find defects that go unnoticed during the manufacturing process (Smith et al. 1997). Even at the operational level certain defects remain camouflaged. In fact, the automatic identification of defects in industrial parts has potential applications and may be a cost-effective alternative (Smith et al. 1997). The wider application of an automated surface defect detection system would seem to offer several advantages, such as reduced labour cost and the elimination of subjective judgment, just to name a few (Smith et al. 1997).

Defence is another important field where camouflage is prominently found. Here, camouflaging provides protective concealment for soldiers to guard themselves against enemy attacks (Juliana et al. 2002). Although modern warfare employs increasingly complex and deadly weapon systems and highly sophisticated electronic surveillance devices, the necessity and importance of deceiving the enemy remain (Juliana et al. 2002). To a major extent, success depends upon the ability to remain undetected by the enemy. Thus, a camouflaging strategy is purposefully employed in military applications.

Even in nature, there is a strong evolutionary pressure for entities like prey species to blend into their environment by concealing their shape, color, or texture within the background or surroundings to avoid predators. Natural camouflage is one method that animals use to meet these aims.

The process of automatically finding a solution for spotting these types of unobservable defects or camouflage has brought up many challenging issues. This paper proposes a technique for decamouflaging based on discriminatory projections through line masks and histogram based classification. Section 2 deals with the building of models, which is inspired by much of the literature on areas that are related to this topic. Section 3 and Section 4 deal with texture enhancement through line masks and histogram based classification, respectively. Section 5 provides the experimental validation of our proposed model.

 

2. BUILDING MODELS

Few decamouflage models have been proposed in the literature to identify a camouflaged region. Most of these models, however, work on recognizing motion camouflage; that is, they attempt to detect an object that tries to camouflage or obscure itself while it is in motion. Juliana et al. (2002) have proposed automatically monitoring the animal population in a natural forest environment. They suggested employing spatial and motion fields to automatically decamouflage a monkey's movements in the foreground against the trees and other vegetation in the background. Anderson and McOwan (2002) worked towards building an autonomous motion camouflage control system; in their work, they provide a sensorimotor controller for biologically inspired motion camouflage.

In this paper we address the problem of detecting the camouflaged portion, which by itself is a very small region, in a static image, in contrast to the motion camouflage problem (Anderson and McOwan 2002, Juliana et al. 2002) described above. Perhaps this is a more difficult problem, if the decamouflaging has to be carried out in a completely unsupervised way. By the term "unsupervised" we mean that we do not have any knowledge about either the normal background or the camouflage. In our proposed method, we worked in a semi-supervised environment, in the sense that we assumed that we were generally aware of the properties of the region or surface under normal conditions and that a very small percentage of that region represents camouflage or defects.

The camouflaged regions generally go unnoticed, and hence, the detection of this type of region in static images is a difficult task. The camouflaged region is usually very small, and its texture uniformity with the surrounding/background is normally very high. Moreover, the discriminating sensitivity is lower when we observe the camouflaged image from a distance. Hence, from the automation point of view, texture analysis carried out at a global/coarser perspective will not be helpful in recognizing the defect. Therefore, the first computational clue could be that a higher-level texture analysis model, which is able to probe deeper/finer, would be more helpful. The second clue could be based on the fact that the camouflaged texture should exhibit a contrast in smaller regions, because local properties in smaller blocks will contribute to better discrimination capabilities. Thus, in our proposed work, the camouflaged regions are detected by splitting the entire region into smaller blocks and by performing texture analysis on each of the smaller regions. There is always a relatively appreciable quantum of deviation in the texture in those smaller/finer regions that contain camouflaged portions or defects.

The concept of decomposing a region into smaller regions is recorded in many bodies of research on the disjoint partitioning of images. For example, in recent work, Lalitha and Nagabhushan (2004) argue that recognizing and locating the minor classes in a satellite image, which have a relatively very low spatial occupancy as compared to the other classes, is almost impossible, because such classes are absorbed amongst the major classes. Therefore, it is disadvantageous to go for global cluster analysis. Hence, in the work by Lalitha and Nagabhushan (2004), the image frame is split into many adjacently placed non-overlapping small blocks, and cluster analysis or classification is performed at the block level. This procedure of capturing the local information at the block level can effectively recognize the presence of even very minor classes. In a similar manner, to determine the number of texture segments, Joseph and Tay (2001) proposed the use of a block-wise feature extraction scheme. The local/finer texture parameters make it possible to discriminate these smaller camouflaged defects from the background. With this motivation, the idea of decomposing a region into smaller blocks has been successfully extended in this paper to detect the camouflaged portions.
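The block decomposition described above can be sketched in a few lines. This is an illustrative stand-in (in Python with NumPy, not the original Matlab implementation); the frame and block sizes match the synthetic experiments described later.

```python
import numpy as np

def split_into_blocks(image, block):
    """Split a 2-D image into adjacent, non-overlapping block x block tiles.

    Rows/columns that do not fill a complete block are discarded, mirroring
    the disjoint decomposition described in the text.
    """
    h, w = image.shape
    rows, cols = h // block, w // block
    blocks = (image[:rows * block, :cols * block]
              .reshape(rows, block, cols, block)
              .swapaxes(1, 2)
              .reshape(-1, block, block))
    return blocks

# A 120x120 frame decomposed into 40x40 blocks yields a 3x3 grid of 9 blocks.
frame = np.zeros((120, 120))
print(split_into_blocks(frame, 40).shape)  # (9, 40, 40)
```

Texture analysis is then performed on each tile independently, so a small camouflaged patch dominates the statistics of the one or two blocks it occupies.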

Texture analysis has been proposed by many researchers for the inspection of textures in textiles (Newman and Jain 1992), wood (Schael and Burkhardt 2000), paper (Iivarinen and Visa 1998), and leather (Serafim 1992). Song et al. (1992) presented a Wigner filter based approach to detect synthetic cracks in random and regular patterns. Defect detection in random color texture is addressed in the work by Song et al. (1996), and a survey on defect detection in texture is given by Song et al. (1992). Central moments based features are used in numerous texture analysis applications and have been studied for image recognition and computer vision since the 1960s.

The above discussion calls for an image manipulation mechanism to enhance the discriminatory properties of the camouflaged segments or objects with respect to the background at a finer resolution. These discriminatory properties could then be summarized into appropriate features for classification. The discriminatory properties could be based on the texture orientation, shape formation, or color properties of the camouflaged object. In this paper, we target the problem of texture obscureness against a similar background, which is the most challenging problem of them all.

We propose a methodology for enhancing the discrimination of camouflaged texture through line masks. Once these camouflaged regions are projected through enhancement, we propose extracting histogram features from the resultant image for classification, after which the decamouflaging process takes place. The histogram feature based classification uses a regression based histogram distance measure (Pradeep and Nagabhushan 2006, Sanjay and Nagabhushan 2004). The classification of the image regions results in two classes: (i) defective/camouflaged regions and (ii) non-defective/uncamouflaged regions. The classification phase also includes knowledge extraction from hierarchical dendrograms and a knowledge fusion stage (Pradeep and Nagabhushan 2005). The overall view of the model is portrayed in Fig. 2.1, and the following sections illustrate each of the blocks in detail.

Fig. 2.1. Overall view of the proposed model

 

3. EXPLORATION WITH LINE MASKS

Before proceeding further, let us first distinguish between hidden objects and camouflaged objects. Hidden objects, which are also called "occluded objects," are placed completely beneath/behind an opaque object (i.e., they do not appear as surface objects in an image). On the other hand, camouflaged objects are surface objects whose characteristics are obscured by merging their appearance with the nature of a similar background. The nature of the background includes the background color, texture, shape, motion, etc. Thus, camouflaged objects are obscure but not hidden. If the camouflaging were ideal, the objects would acquire the characteristics of hidden objects. But in nature, ideal camouflaging is very rare, which means that there are minute discriminatory properties/patterns that will still be evident if observed closely and at a finer resolution.

So the first requirement is to transform these finer discriminatory characteristics into prominently observable discriminatory properties. The features extracted from the enhanced visual/image could then be more powerful in distinguishing between the camouflaged object and the background. This section proposes a discrimination enhancement technique that is based on line masks. Line masks are M×M kernels, which can derive the existence of lines that are oriented at a particular angle in an image. The size of the kernels varies depending on the orientation of the lines that are extracted. A detailed design procedure for the generation of line masks or kernels is shown below.

The design of masks for different orientations: A few masks of different sizes for different orientations are shown in Fig 3.1(b) to (d), and Fig 3.1(a) represents a generic 3×3 mask structure.

The line or edge detection approach is to compute the sum of the products between the mask coefficients and the intensities of the pixels under the mask at a specific location in the image. The response S of the mask is:

S = W1Y1 + W2Y2 + … + WnYn     (Eq. 1)

where W1, W2, …, Wn are the mask coefficients (weights) and Y1, Y2, …, Yn are the correspondingly associated gray levels of the pixels under the mask. The response of the mask is referred to its center pixel. The 3×3 mask has its own limitations and will not be able to generate every particular orientation; for example, a 30° mask. In such cases we should go for a bigger mask, as shown in Table 3.3.

If the first mask shown in Fig 3.1(b) were moved around an image, it would respond more strongly to lines (one pixel thick) that are oriented horizontally. With a constant background, the maximum response would result when the line passes through the middle row of the mask. This is easily verified by sketching a simple array of 1's with a line of a different gray level running horizontally through the array. A similar experiment would reveal that the second mask, shown in Fig 3.1(c), responds best to lines oriented at 45°, and that the third mask, shown in Fig 3.1(d), responds best to vertical lines. These directions can also be established by noting that the preferred direction of each mask is weighted with larger coefficients than the other possible directions, as shown in Fig 3.1(b) to (d). (The sum of the weights should be zero.)

Let S1, S2, S3, … denote the responses of the masks, where the S's are given by Eq. 1. Suppose that all the masks are run through an image. If, at a certain point in the image, |Si| > |Sj| for all j ≠ i, that point is said to be more likely to be associated with a line in the direction of mask i. For example, if at a point in the image |S1| > |Sj| for j = 2, 3, …, n, that point is more likely to be associated with a horizontal line. The operation of the masks or kernels over the image is brought out (Michael et al. 2000) in Fig 3.2.
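As an illustration, the mask response of Eq. 1 and the largest-|S| rule can be sketched as follows. The three 3×3 kernels are the classical line detection masks from the image processing literature; the window contents are made-up values, and Python/NumPy stands in for the original Matlab implementation.

```python
import numpy as np

# Classical 3x3 line masks (the weights of each mask sum to zero,
# as the text requires).
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),
    "45 degree":  np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),
}

def mask_response(window, mask):
    """Eq. 1: S = sum of mask weights times the gray levels under the mask."""
    return float(np.sum(window * mask))

def dominant_orientation(window):
    """Return the mask i with the largest |S_i| for a 3x3 neighbourhood."""
    return max(MASKS, key=lambda name: abs(mask_response(window, MASKS[name])))

# A one-pixel-thick bright horizontal line through a dark background
window = np.array([[0, 0, 0], [9, 9, 9], [0, 0, 0]])
print(dominant_orientation(window))  # horizontal
```

For this window the horizontal mask responds with S = 54 while the 45° and vertical masks respond with S = 0, which verifies the sketching experiment described above.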

Fig. 3.1. (a) A generic 3×3 mask structure; (b) to (d) line masks of different sizes and orientations

Mask size and orientation: Before proceeding to the derivation of a kernel, the possible sizes of a mask supporting different orientations are listed in Table 3.3.

Deciding the mask coefficients/weights: Table 3.3 shows the mask sizes that can derive particular orientations. Let us consider a line that connects the pixels (2, 1) and (4, 5) in a 5×5 mask, as shown in Fig 3.4(a). The orientation of this line is less than 45°. The equation of the line is y = 2x − 3, which corresponds to θ ≈ 26° with respect to the vertical. Hence, this has produced an approximately 30° line.

Now, to create a 30° line from a 5×5 mask, we propose the following method to determine the weights.

The line y = 2x − 3 passes through the shaded pixels (cells), as shown in Fig 3.4(a). However, all of these pixels may not lie exactly on the line. For instance, the pixel (3, 4), when substituted in the equation y = 2x − 3, gives y = 6 − 3 = 3 (i.e., the error is 4 − 3 = 1 against the expected value of 4). Therefore, the quantisation error is 1/4 (25%), or the amount of quantisation acceptance is 75%. The quantisation acceptance values of the pixels that fall on the line y = 2x − 3 are shown in Fig 3.4(b).

Fig. 3.2. Mask operations over an image

Table 3.3. Y = yes, A = approximately, N = not possible

The line passes through 7 pixels; therefore, the sum of the weights covering these 7 pixels should be equal to the number of pixels in the mask through which the line does not pass (= 18 in the case of a 5×5 mask). Now, based on the knowledge of the quantisation acceptance values, the total weight of 18 is distributed amongst the 7 pixels, as shown in Fig 3.4(c).
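The weight-assignment procedure can be sketched as follows. Note that the exact set of seven shaded cells comes from Fig 3.4(a); the list used here is an illustrative assumption consistent with a digitisation of y = 2x − 3 between (2, 1) and (4, 5), and the proportional distribution of the total weight over the acceptance values is also an assumption.

```python
# Hedged sketch of the weight-assignment scheme for the approximate
# 30-degree mask. The seven shaded cells are an assumed digitisation of
# y = 2x - 3; the exact cells are given in Fig 3.4(a).
cells = [(2, 1), (2, 2), (3, 2), (3, 3), (3, 4), (4, 4), (4, 5)]

def acceptance(x, y):
    # Quantisation acceptance: 1 - |y_observed - y_on_line| / y_observed,
    # matching the (3, 4) example worked out in the text.
    return 1.0 - abs(y - (2 * x - 3)) / y

# Total weight to distribute equals the number of off-line cells: 25 - 7 = 18.
total = 25 - len(cells)
acc = [acceptance(x, y) for x, y in cells]
weights = [total * a / sum(acc) for a in acc]

print(round(acceptance(3, 4), 2))  # 0.75, as computed in the text
print(round(sum(weights), 6))      # 18.0
```

By construction the on-line weights sum to 18, so the full 5×5 kernel (with −1 in each of the 18 off-line cells) sums to zero, as required of a line mask.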

Mask size and the amount of camouflage: We have conducted experimentation on synthetic images. Synthetically creating a defective surface (Nagabhushan and Bhajantri 2003) involves making use of the shapes of alphabets or numerals that highly resemble each other in geometrical structure, for example, doping V in U or vice-versa. Fig. 3.7(a) illustrates a normal surface synthesized using an assemblage of U elements. In Fig. 3.7(b), a small portion of the U elements is replaced by V elements. When such camouflaged images were shown to many human observers without providing any clues about the images, all of them failed to recognize the defective images at a first perusal of the image.

The above procedure has been experimented on with the image shown in Fig 3.7. The corresponding results of the experimental analysis are shown in Fig 3.5 and Fig 3.6. As indicated in the previous discussions, the discriminatory features are finer in nature. Thus, the camouflaged portion requires much more local analysis and processing to enhance it. It can be clearly observed from the experimentation graphs that the smaller sized kernels perform better than the larger sized kernels. But the disadvantage of using a smaller sized kernel (for example, a 3×3 mask) is that it cannot derive lines of any angles except 0°, 45°, 90°, 135°, and 180°. The larger sized kernels can capture lines with different orientations, if not accurately, then at least approximately. This results in a tradeoff between the number of orientations and the size of the mask. On the other hand, there also exists a tradeoff between the size of the mask and the enhancement of the finer discriminatory features: larger masks require more computation and may fail when the camouflaged area is smaller than the kernel size. Thus, we propose to experiment on the images shown in Fig 3.7 with smaller kernel sizes and approximate line orientations.

The decamouflaging of obscure regions based on texture is a tricky task. Different textures respond to different line masks with corresponding orientation features. Thus, the discriminatory features are enhanced with multiple line masks, and the knowledge derived from all these orientations is later fused to classify the camouflaged and uncamouflaged regions.

Fig. 3.4. A 5×5 mask derived for slope 26° (approximately a 30° line mask)

Fig. 3.5. Different sizes and orientations of a line mask over a UV image

Fig. 3.6. 3×3, 5×5, 7×7, and 9×9 masks with different orientations

 

4. CAMOUFLAGE DETECTION

Once the finer discriminatory features are enhanced through the line masks, appropriate features have to be extracted for the classification of camouflaged and uncamouflaged regions. Conventionally, simple texture parameters such as the mean and variance are employed. These texture features are not sufficient to project the discrimination, especially not under this kind of difficult situation (Nagabhushan et al. 2006). In image analysis, histogram features are commonly extracted to act as discriminatory components for image classification, segmentation, clustering, etc. Histogram features provide the distribution of the gray level components that are present in an image or image block. Until recently, the computational complexity of the distance measure between two histograms has been high, proportional to the number of bins. Pradeep and Nagabhushan (2006) introduced the AB regression line based histogram distance measure, which drastically reduces the computational complexity.

In this paper we transform the histogram features into regression line features for low-complexity distance computation (Pradeep and Nagabhushan 2006). As indicated in the model building phase, the image is initially divided into smaller disjoint blocks and then the regression line features are extracted for the individual blocks. The distances between these blocks are computed to obtain the distance matrix, from which hierarchical clusters are generated. We employed a linkage clustering algorithm for clustering (Jain and Dubes 1988).
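A minimal sketch of this feature-extraction and clustering step follows, with NumPy/SciPy standing in for the original Matlab implementation. The block contents, block size, and bin count are illustrative; the contrasting ninth block is a stand-in for a camouflaged region.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def regression_line_feature(block, bins=16):
    """Summarise a block's gray-level histogram by the slope and intercept
    of a least-squares regression line fitted through the bin counts."""
    counts, _ = np.histogram(block, bins=bins, range=(0, 256))
    slope, intercept = np.polyfit(np.arange(bins), counts, deg=1)
    return slope, intercept

rng = np.random.default_rng(0)
# 8 smooth background blocks and 1 contrasting block (a stand-in for camouflage)
blocks = [rng.integers(100, 120, (40, 40)) for _ in range(8)]
blocks.append(rng.integers(0, 256, (40, 40)))

features = np.array([regression_line_feature(b) for b in blocks])
# Complete linkage hierarchical clustering over the (slope, intercept) pairs
Z = linkage(features, method="complete")
```

Comparing two blocks then costs only a distance between two (slope, intercept) pairs, instead of a bin-by-bin comparison of full histograms.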

Fig. 3.7. Experimentation with normal and defective UV images

Let us assume that the hierarchical clustering spans the length L to complete the clustering. As per our assumptions, the uncamouflaged blocks are very similar in their characteristics. These assumptions are based on observations of the different camouflaging scenarios found in nature. Gestalt's law states that human perception is governed by the laws of similarity and proximity.

The law of similarity: When objects look similar to one another, they are often perceived to be part of a pattern. This is applicable to color, texture, shape, and size.

The law of proximity: Objects that are positioned close to one another are often not seen as separate parts, but rather as one coherent whole object.

Based on the above laws, camouflaging is possible only when the object is disguised in a way that is similar to its surroundings. That means that the surrounding textures are very similar to each other. Thus, when the image is divided into blocks for camouflage analysis, the surrounding/uncamouflaged blocks are expected to cluster within a short span in the hierarchical clustering. Now that the finer discriminatory properties of the camouflaged portion are enhanced, the blocks belonging to the camouflaged regions join the hierarchical clusters at a later span. Thus, the inference is that the homogenous blocks belonging to the surroundings get clustered faster, within a particular span S. The experimentation and simulation results have shown that most of the time this span S is within 30% of the total span L. So we mark this span S as a 30% threshold (T) point and we call it the "30% T mark." Fig. 4.1 illustrates the 30% T concept by using the UV image that is shown in Fig. 3.7(b).
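The 30% T rule can be sketched as follows, again with SciPy standing in for the original implementation and with made-up (slope, intercept) features: eight near-identical background blocks and one outlier.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cut_at_30_percent(features):
    """Cut the complete-linkage dendrogram at 30% of its total merge span L
    and flag every block outside the largest cluster as suspected camouflage."""
    Z = linkage(features, method="complete")
    span = Z[:, 2].max()                      # total span L of the dendrogram
    labels = fcluster(Z, t=0.3 * span, criterion="distance")
    biggest = np.bincount(labels).argmax()    # majority cluster = background
    return [i for i, lab in enumerate(labels) if lab != biggest]

# Eight near-identical (slope, intercept) pairs and one outlier block
features = np.array([(0.1, 5.0)] * 4 + [(0.12, 5.1)] * 4 + [(3.0, 40.0)])
print(cut_at_30_percent(features))  # [8]
```

The similar blocks merge at small distances, well inside the 30% cut, while the outlier joins only at the top of the dendrogram and is therefore isolated as the suspected camouflaged block.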

Fig. 4.1. Enhanced images and dendrograms

At the 30% T span, let the clusters that are recorded be C1, C2, C3, …, with n1, n2, n3, … respectively as the number of samples (i.e., the number of blocks) belonging to each of the clusters. As the number of surrounding blocks is expected to be greater than the number of camouflaged blocks, the cluster Ck with the highest number of samples/blocks is considered to be the uncamouflaged or non-defective cluster. The remaining clusters are considered to be holding the defective/camouflaged blocks.

Let the blocks belonging to the uncamouflaged cluster be collected in the US (Uncamouflaged Set) and the remaining blocks, from the camouflaged clusters, be collected in the CS (Camouflaged Set). Thus, for K line masks exposing camouflage at K orientations, we obtain K sets of the US and CS. Each pair of US and CS represents the knowledge that was extracted at a particular line mask orientation. This is shown in Table 4.2. The knowledge packets extracted at the K orientations are fused with adjacently located blocks to obtain the final comprehensive classification results. The fusion is a process of merging the homogenous image regions, where the intersection of the K sets of the US results in the uncamouflaged/non-defective regions:

US_final = US1 ∩ US2 ∩ … ∩ USK     (Eq. 2)

and the union of the K sets of the CS results in the camouflaged/defective blocks:

CS_final = CS1 ∪ CS2 ∪ … ∪ CSK     (Eq. 3)
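The fusion of Eq. 2 and Eq. 3 reduces to set intersection and union over the K orientation-wise results. The block indices below are illustrative, not taken from any experiment in the paper.

```python
# Fusion stage: blocks that every orientation agrees are normal stay normal
# (Eq. 2); a block flagged by any orientation is treated as camouflaged (Eq. 3).
def fuse(us_sets, cs_sets):
    uncamouflaged = set.intersection(*us_sets)  # Eq. 2: intersection of US sets
    camouflaged = set.union(*cs_sets)           # Eq. 3: union of CS sets
    return uncamouflaged, camouflaged

# K = 2 orientations (say 0 and 90 degrees) over 9 blocks, made-up results
us_0,  cs_0  = {0, 1, 2, 3, 6, 7, 8}, {4, 5}
us_90, cs_90 = {0, 1, 2, 3, 4, 7, 8}, {5, 6}

us_final, cs_final = fuse([us_0, us_90], [cs_0, cs_90])
print(sorted(cs_final))  # [4, 5, 6]
```

Taking the union on the camouflaged side makes the fusion sensitive: it is enough for one orientation's mask to expose a block for that block to be reported as defective.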

Table 4.2.Knowledge extraction and fusion

Comprehensive Algorithm:
1. Split the whole image into L×L disjoint blocks.
2. Derive the masks of different orientations and run them over each block.
3. Compute the histogram and its regression line; store the slope and the intercept of the regression line of each block.
4. Perform hierarchical complete linkage clustering and represent it by a dendrogram.
5. Extract the knowledge from the dendrograms and fuse it with the adjacently located homogenous blocks using a 30% T span.
6. Segment the camouflaged portion.
Algorithm ends.

 

5. EXPERIMENTAL VALIDATIONS

We have experimented on synthetic and real images in order to reveal the capability of the proposed criteria. The method was implemented in Matlab using the Image Processing Toolbox on a P-IV, 2.99 GHz Windows PC with 512 MB RAM.

Experiment 1: In this experiment we used a synthetic data set that was made up of closely resembling shapes: the | (pipe symbol) and the ! (exclamation mark). Fig 5(a) shows the image constructed with the pipe symbol and the exclamation mark. The image shown in Fig 5(a) reveals the obscureness of the camouflaged portion: it is difficult to distinguish the two symbols from each other. The image was split into R² disjoint uniform blocks. For R = 3, 9 blocks, each of size 40×40, were processed for an image that is 120×120 in size. Fig 5(b) shows the decomposition of the image into 3×3 blocks. The 0° and 90° masks were convolved over the defective image. (One can easily understand that these shapes respond better to the 0° and 90° masks.) The dendrograms were obtained by the application of a complete linkage hierarchical clustering algorithm (Jain and Dubes 1988). The process of conducting knowledge fusion by using a 30% T span is shown in Fig 5(c). It shows that blocks 4, 5, and 6 formed the separate cluster C2 under the 90° mask, and these were suspected to be defective. Each defective block was subjected to region-based segmentation (Gonzalez and Woods 2005), and finally the adjacently situated camouflaged sectors were fused together (Nagabhushan et al. 2006). Fig. 5(d) shows the camouflaged segment.

Experiment 2: In this experiment, a real-life case study of a ceramic plate containing a defect, which was studied by Nagabhushan et al. (2006) and Pradeep and Nagabhushan (2006), was considered. We employed our method to successfully trace the defective region. The image surface was decomposed into 5×5 blocks, as shown in Fig 6(b). The size of a texel is a single pixel. For an image frame that was 200×200 in size, as shown in Fig. 6(a), the total number of blocks was 25, each 40×40 in size. The histogram regression line feature space obtained for the 25 blocks was subjected to classification, which resulted in two classes.

Fig. 5.

Fig. 6

The 45° mask was convolved over the tile image. Cutting the dendrogram at a 30% T span showed 7 blocks in the defective class and 18 blocks in the normal class. The dendrogram in Fig. 6(c) shows that blocks 14, 15, 16, 17, 18, 19, and 20 were defective. To trace the actual extent of the defect, we performed a region-based segmentation (Gonzalez and Woods 2005) process on the defective blocks. The stretch of the defect has been mapped out in Fig. 6(d). The results tally with the results presented in the study by Song et al. (1992).

Experiment 3: Shaded caps are used by the armed forces to camouflage themselves against the backdrop of vegetation. We conducted three experiments with these caps by using them as data elements. We considered a rectangular arrangement (array) of caps, as shown in Fig. 7(a).

In Case (i), one cap was slightly defective. In Case (ii), one cap was placed slightly skewed, which resulted in rotation camouflage. In Case (iii), one cap was slightly laterally shifted to cause alignment camouflage. In each case, for experimental convenience, the rectangular array of caps was decomposed into blocks in such a way that each block corresponds to the enclosure of one cap image. The 45° mask was employed. The histogram regression line features were computed for each block. The dendrogram was cut off at a 30% T span. The 5th cap came out as a defective one. The dendrograms that were obtained corresponding to these three cases are shown in Fig. 7(b). In Case (i), the 5th cap has a defective textured top. In Case (ii) and Case (iii), the 5th cap was the cause of rotation camouflage and alignment camouflage, respectively. In Case (ii), the 5th cap was rotated by an angle of 5° as compared to the other caps in the rectangular arrangement. In Case (iii), the 5th cap was shifted to the left by 2 pixels. Our algorithm efficiently recognized these highly unobservable defects.

Fig. 7.

Fig. 8.(a) A normal shaded image, (b) The surface after it was split into 16 blocks (the unnumbered blocks are defective), (c) Dendrogram due to clustering

Experiment 4: In this experiment an interesting image frame, which was a modified version of an image made available by John (1999) and contained shades of gray, was considered, as shown in Fig. 8(a). The image was decomposed into 4×4 blocks that were each 64×64 in size; the entire image size was 256×256, as shown in Fig. 8(b). The 45° mask was convolved over the image. The histogram regression line was obtained for each block. We used the complete linkage hierarchical clustering algorithm to produce the dendrogram, which is shown in Fig 8(c). The 30% T span cut-off shows that the defective blocks were 11, 13, and 16. The defects in these blocks were due to variations in the shading patterns.

Experiment 5: This is a typical experiment where images of the tops of matchboxes were used. The market is flooded with substandard or duplicate products, which are packed in such a way that there is hardly any deviation between the packages that contain duplicates and the ones that contain standard products. Only very close observation reveals the difference between the duplicate wrapper and the original wrapper. Fig. 9(a) shows the image of original and duplicate matchboxes arranged together. It was decomposed into 16 blocks, as shown in Fig. 9(b). Experimentally, we found that the 45° mask gives the best discrimination of the camouflaged portion. The clustering of the histogram regression lines indicated that the 8th and 11th matchboxes do not belong to the class of true or original matchboxes. It is natural that anyone would be deceived and would fail to recognize the duplicate matchbox.

For the purpose of computational ease, we chose a block size that corresponds to the size of a matchbox. The defective matchboxes were then effectively isolated. This can be observed from the dendrogram that is presented in Fig. 9(c).

Fig. 9.(a) A defective matchbox image split into 16 blocks (the 8th and 11th are defective), (b) Image split into 16 blocks, (c) The dendrogram, due to clustering

 

6. CONCLUSION

In this paper we have proposed a novel method to detect camouflaged/ defective regions. The obscure camouflaged regions are projected through discriminatory feature enhancement techniques. Using a line mask with different orientations enhanced the finer discriminatory features. (In some experiments, we found the best orientations and just worked with the corresponding mask.) The image was convolved with a line mask to obtain visibly enhanced texture variations. This enhanced image was further processed to obtain the histogram regression line features. The discrimination knowledge was generated from these regression features through hierarchical clustering techniques.

An automatic identification of texels could be a very good feature enhancement strategy for hard natural image decamouflage. The experimental results on several datasets have revealed that this approach is capable of correctly spotting camouflaged regions.

References

  1. Anderson, A. J. & McOwan, P. W., 2002, Towards an autonomous motion camouflage control system, International Joint Conference on Neural Networks, Hawaii, pp. 2006-2011.
  2. Gonzalez, R. C. & Woods, R. E., 2005, Digital Image Processing, 2nd edition, PHI.
  3. Iivarinen, J. & Visa, A., 1998, An Adaptive Texture and Shape Based Defect Classification, In Proc. International Conf. on Pattern Recognition, Vol. 2, pp. 117-123.
  4. Jain, A. K. & Dubes, R. C., 1988, Algorithms for Clustering Data, Prentice Hall, Englewood Cliffs.
  5. John C. Russ, 1999, The Image Processing Handbook, 3rd edition, CRC Press.
  6. Joseph P. Havlicek & Peter C. Tay, 2001, Determination of the Number of Texture Segments Using Wavelets, 16th Conference on Applied Mathematics, University of Central Oklahoma, Electronic Journal of Differential Equations, Conf. 07, pp. 61-70.
  7. Juliana F. Camapum Wanderley & Mark H. Fisher, 2002, Spatial-Feature Parametric Clustering Applied to Motion-Based Segmentation in Camouflage, Computer Vision and Image Understanding, 85, pp. 144-157. https://doi.org/10.1006/cviu.2001.0944
  8. Lalitha, R. & Nagabhushan, P., 2004, Content Driven Dimensionality Reduction at Block Level in the Design of an Efficient Classifier for Spatial Multi-spectral Images, Pattern Recognition Letters, 25, pp. 1833-1844.
  9. Michael S., Lawrence O. & Michael J. S., 2000, Practical Algorithms for Image Analysis (Description, Examples and Code), Chapter 3 - Gray Scale Image Analysis, Cambridge University Press.
  10. Nagabhushan, P., Nagappa Bhajantri & Shekar, B. H., 2006, Identification of Camouflaged Defect through Central Moments and Hierarchical Segmentation, Communicated to International Journal of Tomography and Statistics.
  11. Nagabhushan, P. & Pradeep Kumar, R., 2005, Multiresolution Knowledge Mining using Wavelet Transform, Proceedings of ICCR, pp. 781-792.
  12. Nagabhushan, P. & Nagappa U. Bhajantri, 2003, Visualization of Camouflaged Image Segment: Experiment with Synthetic Images, Proc. of National Conference on Document Analysis & Recognition, 11-12th July, PESCE, Mandya.
  13. Newman, T. S. & Anil K. Jain, 1992, A Survey of Automated Visual Inspection, Computer Vision and Image Understanding, 67, pp. 231-262.
  14. Pradeep Kumar, R. & Nagabhushan, P., 2006, AB Distance Based Histogram Clustering for Mining Multi-Channel EEG Data Using Wavesim Transform, IEEE International Conference on Cognitive Informatics, July, Beijing, China.
  15. Roy R. Behrens, 2006, An introductory address at the international Camouflage conference, April 22, University of Northern Iowa.
  16. Sanjay Pande, M. B. (Supervisor: P. Nagabhushan), 2004, An Algorithmic Model for Exploratory Analysis of Trace Elements in Cognition and Recognition of Neurological Disorders, Ph.D. Thesis, Department of Studies in Computer Science, University of Mysore.
  17. Schael, M. & Burkhardt, H., 2000, Automatic Detection of Errors in Textures Using Invariant Grayscale Features and Polynomial Classifiers, In M. K. Pietikäinen (editor), Texture Analysis in Machine Vision, Vol. 40, pp. 219-229, World Scientific.
  18. Serafim, A., 1992, Segmentation of Natural Images Based on Multiresolution Pyramid Linking: Application to Leather Defects Detection, In Proc. International Conf. on Pattern Recognition, pp. 41-44.
  19. Smith, M. L., Hill, T. & Smith, G., 1997, Surface Texture Analysis Based upon the Visually Acquired Perturbation of Surface Normals, Image and Vision Computing Journal, Vol. 15, No. 12, pp. 949-955. https://doi.org/10.1016/S0262-8856(97)00050-4
  20. Song, K. Y., Kittler, J. & Petrou, M., 1996, Defect Detection in Random Color Textures, Image and Vision Computing, 14, pp. 667-683. https://doi.org/10.1016/0262-8856(96)84491-X
  21. Song, K. Y., Kittler, J. & Petrou, M., 1992, Texture Defect Detection: A Review, In SPIE Applications of Artificial Intelligence X: Machine Vision and Robotics, Vol. 1708, pp. 99-106, SPIE.
  22. Song, K. Y., Petrou, M. & Kittler, J., 1992, Wigner Based Crack Detection in Texture Images, In Fourth IEE International Conference on Image Processing and its Applications, pp. 315-318.
