• Title/Summary/Keyword: Canny Edge Detection


Reconstruction of internal structures and numerical simulation for concrete composites at mesoscale

  • Du, Chengbin; Jiang, Shouyan; Qin, Wu; Xu, Hairong; Lei, Dong
    • Computers and Concrete, v.10 no.2, pp.135-147, 2012
  • At mesoscale, concrete is considered a three-phase composite material consisting of the aggregate particles, the cement matrix, and the interfacial transition zone (ITZ). Reconstructing the internal structures of concrete composites requires identifying the boundaries between the aggregate particles and the cement matrix using digital imaging technology, followed by post-processing in MATLAB. A parameter study covers the subsection transformation, median filtering, and the opening and closing operations applied to the digital image sample in order to obtain optimal parameters for the image processing. The subsection transformation is performed using the grey histogram of the digital image samples with a threshold window of [120, 210], followed by median filtering with a 16×16 square module sized according to the dimensions of the aggregate particles and their internal impurities. We then select a "disk" structuring element with a specific radius, with which the opening and closing operations are performed on the images. The edges of the aggregate particles (close to those in the original digital images) are obtained using the Canny edge detection method. A finite element model at mesoscale can then be established using the proposed image processing technology. The crack location determined by the numerical method is identical to the experimental result, and the numerically computed load-displacement curve is in close agreement with the experimental one. These comparisons show that the proposed image processing technology is highly effective in reconstructing the internal structures of concrete composites.
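
A minimal Python/OpenCV sketch of the segmentation pipeline this abstract describes (the original work used MATLAB). The threshold window [120, 210] and the 16×16 median module follow the abstract; the file name, disk radius, and Canny thresholds are assumptions for illustration.

```python
import cv2
import numpy as np

img = cv2.imread("concrete_section.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image

# Subsection (piecewise) transformation: keep grey levels inside [120, 210]
# as aggregate candidates, suppress everything else.
mask = cv2.inRange(img, 120, 210)

# Median filtering with a square window sized to the aggregates and impurities.
# OpenCV's medianBlur needs an odd kernel, so 15 approximates the 16x16 module.
smoothed = cv2.medianBlur(mask, 15)

# Morphological opening and closing with a disk-shaped structuring element.
disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))  # radius assumed
opened = cv2.morphologyEx(smoothed, cv2.MORPH_OPEN, disk)
cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, disk)

# Canny edge detection recovers the aggregate boundaries used for meshing.
edges = cv2.Canny(cleaned, 50, 150)
cv2.imwrite("aggregate_edges.png", edges)
```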

Automatic detection of discontinuity trace maps: A study of image processing techniques in building stone mines

  • Mojtaba Taghizadeh; Reza Khalou Kakaee; Hossein Mirzaee Nasirabad; Farhan A. Alenizi
    • Geomechanics and Engineering, v.36 no.3, pp.205-215, 2024
  • Manually mapping fractures in construction stone mines is challenging, time-consuming, and hazardous, and physical access to every point is not possible. In contrast, digital image processing offers a safe, cost-effective, and fast alternative with the capability to map all joints. In this study, two methods for detecting discontinuity traces in construction stone mines using image processing are presented: a modified Hough transform algorithm and the degree-of-neighborhood technique. We first introduce a procedure for selecting the best edge detector and smoothing algorithm; the Canny detector and the median smoother were identified as the most efficient tools. To trace discontinuities with the two methods, common preprocessing steps were first applied to the image, after which each algorithm followed a distinct approach. The Hough transform algorithm was applied to the image and the traces were represented as line drawings; the Hough transform results were then refined using fuzzy clustering and reduced clustering algorithms, along with a novel algorithm referred to as the farthest-points algorithm. In the second method, the degree of neighborhood, developed specifically for detecting discontinuity traces in construction stones, a thinning operation was performed on the preprocessed image and the degree of neighborhood of lineament pixels was determined; short lines were then removed and the discontinuities were determined from the degree of neighborhood. In the final step, previously separated lines were connected using the method described in the paper. A comparison of the results demonstrates that image processing is a suitable tool for identifying rock mass discontinuity traces. Finally, a comparison of two images from different construction stone mines, presented at the end of the study, shows that in images with fewer discontinuity traces and a softer texture, both algorithms detect the discontinuity traces effectively.
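
A hedged sketch of the Canny + Hough stage described in this abstract (the clustering, farthest-points, and degree-of-neighborhood refinements are not reproduced). The file name and all thresholds are assumptions, not values from the paper.

```python
import cv2
import numpy as np

img = cv2.imread("stone_face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mine-face image

# Preprocessing: median smoothing was identified as the most efficient smoother.
smoothed = cv2.medianBlur(img, 5)

# Canny was identified as the most efficient edge detector.
edges = cv2.Canny(smoothed, 50, 150)

# Probabilistic Hough transform represents discontinuity traces as line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)

# Draw the detected traces over the original image for inspection.
trace_map = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(trace_map, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
cv2.imwrite("trace_map.png", trace_map)
```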

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan; Kim, Hongrae; Hong, Min
    • Journal of Internet Computing and Services, v.16 no.2, pp.49-55, 2015
  • According to traffic accident statistics for the most recent five years, more accidents occurred at night than during the day. Accidents have many causes, and one of the major causes is inappropriate or missing street lighting, which impairs the driver's vision. In this paper, we designed and implemented a smartphone lane luminance measurement application that stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without any street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves its running speed over code written in Java or other languages. To measure road luminance, the input image in the RGB color space is converted to the YCbCr color space, and the Y component gives the luminance of the road. The application detects the road lane and sends the calculated lane luminance to the database server. It receives the road video from the smartphone camera and reduces the computational cost by processing only a region of interest (ROI) of each input image. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. A Hough line transform is then applied to obtain the candidate lane group, and the two sides of the lane are selected by a lane detection algorithm that uses the gradients of the candidate lanes. Once both lane boundaries are detected, a triangular area is set up extending 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area. The Y value is calculated from the R, G, and B values of each pixel in the triangle. The average Y value of the pixels is scaled to a range of 0 to 100 to report the road luminance, and each pixel value is rendered with a color between black and green. After analyzing the lane video, together with the road luminance about 60 meters ahead, the application stores the car's location from the smartphone's GPS sensor to the database server over wireless communication every 10 minutes. We expect that the collected road luminance information can warn drivers for safer driving or effectively improve renovation plans for road luminance management.
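
A minimal Python/OpenCV sketch of the lane-luminance idea described above (the original app is native C/C++ via the Android NDK). The frame source, ROI bounds, thresholds, and the rectangular patch standing in for the triangle below the lane intersection are all assumptions for illustration.

```python
import cv2
import numpy as np

frame = cv2.imread("road_frame.png")  # hypothetical dashcam frame (BGR)

# Restrict processing to a lower-half ROI to cut the computational cost.
h, w = frame.shape[:2]
roi = frame[h // 2:, :]

# Canny on the grayscale ROI, then a Hough line transform for lane candidates.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 80, 160)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=30, maxLineGap=20)
print("candidate lane segments:", 0 if lines is None else len(lines))

# Luminance of the road surface: Y channel of the YCbCr (YCrCb in OpenCV) ROI.
ycrcb = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)
y_channel = ycrcb[:, :, 0].astype(np.float32)

# Average Y over a small central patch (stand-in for the triangle area),
# rescaled from 0-255 to the 0-100 range used in the paper.
patch = y_channel[:20, w // 2 - 20: w // 2 + 20]
luminance_0_100 = float(patch.mean()) * 100.0 / 255.0
print(f"estimated road luminance: {luminance_0_100:.1f} / 100")
```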

3D Film Image Inspection Based on the Width of Optimized Height of Histogram (히스토그램의 최적 높이의 폭에 기반한 3차원 필름 영상 검사)

  • Jae-Eun Lee; Jong-Nam Kim
    • Journal of the Institute of Convergence Signal Processing, v.23 no.2, pp.107-114, 2022
  • In order to classify 3D film images as right or wrong, it is necessary to detect the pattern in the 3D film image. However, if the pixel contrast in a 3D film image is low, the pattern may not be clear and it is not easy to classify the image as right or wrong. In this paper, we propose a method of classifying 3D film images as right or wrong by obtaining the histogram of each image and comparing the histogram widths at a specific frequency. Since the classification uses only the width of the histogram, the analysis process is not complicated. In the experiments, the histograms of right and wrong 3D film images were distinctly different; the proposed algorithm reflects this feature and classified all 3D film images accurately at a specific frequency of the histogram. The performance of the proposed algorithm was verified to be the best through comparison with other methods such as image subtraction, Otsu thresholding, Canny edge detection, morphological geodesic active contours, and support vector machines, showing that excellent classification accuracy can be obtained without detecting the patterns in the 3D film images.
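
A hedged sketch of the histogram-width feature this abstract describes: compute the grey-level histogram and measure how many bins exceed a chosen frequency. The frequency threshold, the decision boundary, and the direction of the comparison are assumptions, not the paper's values.

```python
import cv2
import numpy as np

def histogram_width(image_path: str, freq_threshold: int = 200) -> int:
    """Number of grey levels whose histogram count exceeds the given frequency."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel()
    return int(np.count_nonzero(hist > freq_threshold))

# Classify by comparing the width against a boundary tuned on labeled samples.
WIDTH_BOUNDARY = 80  # hypothetical value
width = histogram_width("film_sample.png")
label = "right" if width < WIDTH_BOUNDARY else "wrong"
print(f"histogram width = {width}, classified as {label}")
```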

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn, Jason W.; Mansur, David B.; Monroe, James I.; Drzymala, Robert E.; Jin, Ho-Sang; Suh, Tae-Suk; Dempsey, James F.; Klein, Eric E.
    • Progress in Medical Physics, v.17 no.1, pp.24-31, 2006
  • Automated analysis software was developed to measure the magnitude of the intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256×256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. On each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations (σ) of the intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin required to include the clinical target volume (CTV) at the 95% confidence level was calculated as 2 × (1.96σ). To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm. However, PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal radiation treatment or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce the large PTVs required for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software such as that described in this report to process and take advantage of the large number of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort.
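
A worked example of the margin formula quoted in this abstract, PTV margin = 2 × (1.96σ), i.e. twice the 95% half-width of the measured error distribution. The sample standard deviations below are illustrative assumptions, not the paper's measured data.

```python
def ptv_margin_mm(sigma_mm: float) -> float:
    """PTV margin covering the CTV at the 95% confidence level: 2 * 1.96 * sigma."""
    return 2.0 * 1.96 * sigma_mm

# Hypothetical example sigmas for the two error types discussed above.
for label, sigma in [("intrafractional (breathing)", 0.8),
                     ("interfractional (setup)", 5.0)]:
    print(f"{label}: sigma = {sigma} mm -> PTV margin ~ {ptv_margin_mm(sigma):.1f} mm")
```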
