• Title/Summary/Keyword: RGB camera


Comparison of Clinical Characteristics of Fluorescence in Quantitative Light-Induced Fluorescence Images according to the Maturation Level of Dental Plaque

  • Jung, Eun-Ha;Oh, Hye-Young
    • Journal of dental hygiene science / v.21 no.4 / pp.219-226 / 2021
  • Background: Proper detection and management of dental plaque are essential for individual oral health. We aimed to evaluate the maturation level of dental plaque using a two-tone disclosing agent and to compare it with the fluorescence of dental plaque in quantitative light-induced fluorescence (QLF) images, to obtain primary data for the development of a new dental plaque scoring system. Methods: Twenty-eight subjects who consented to participate after understanding the purpose of the study were screened. Images of the anterior teeth were obtained using the QLF device. Subsequently, dental plaque was stained with a two-tone disclosing solution and a photograph was taken with a digital single-lens reflex (DSLR) camera. Staining scores were assigned as follows: 0 for no staining, 1 for pink staining, and 2 for blue staining. Marked points on the DSLR images were selected for RGB color analysis. The relationship between dental plaque maturation and the red/green (R/G) ratio was evaluated using Spearman's rank correlation. Additionally, differences in red fluorescence according to dental plaque accumulation were assessed using one-way analysis of variance followed by Scheffe's post-hoc test to identify statistically significant differences between the groups. Results: A comparison of red fluorescence intensity according to the maturation of the two-tone stained dental plaque confirmed that the R/G ratio in the QLF images increased with plaque maturation (p<0.001). Correlation analysis between the stained dental plaque and the red fluorescence intensity in the QLF images confirmed an excellent positive correlation (p<0.001). Conclusion: A new plaque scoring system can be developed based on these results, which may also help with dental plaque management in the clinical setting.
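For readers who want to reproduce this kind of analysis, the sketch below (not the authors' code; the file name, pixel coordinates, and staining scores are hypothetical) samples the R/G ratio at marked points of a QLF image and correlates it with two-tone staining scores using Spearman's rank correlation.

```python
# Minimal sketch: relate two-tone staining scores to the red/green (R/G) ratio
# sampled at the same marked points on an RGB QLF image.
import numpy as np
from scipy.stats import spearmanr
from PIL import Image

def rg_ratio_at_points(qlf_image_path, points):
    """Return the R/G ratio at each (x, y) point of an RGB image."""
    img = np.asarray(Image.open(qlf_image_path).convert("RGB"), dtype=float)
    ratios = []
    for x, y in points:
        r, g, _ = img[y, x]
        ratios.append(r / g if g > 0 else np.nan)
    return np.array(ratios)

# Hypothetical inputs: staining scores (0/1/2) and matching pixel coordinates.
staining_scores = [0, 1, 2, 1, 2, 0]
points = [(120, 340), (150, 360), (200, 355), (240, 350), (300, 348), (330, 352)]

ratios = rg_ratio_at_points("qlf_anterior.png", points)   # hypothetical file
rho, p = spearmanr(staining_scores, ratios, nan_policy="omit")
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```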

Estimating the Spatial Distribution of Rumex acetosella L. on Hill Pasture using UAV Monitoring System and Digital Camera (무인기와 디지털카메라를 이용한 산지초지에서의 애기수영 분포도 제작)

  • Lee, Hyo-Jin;Lee, Hyowon;Go, Han Jong
    • Journal of The Korean Society of Grassland and Forage Science / v.36 no.4 / pp.365-369 / 2016
  • Red sorrel (Rumex acetosella L.), one of the exotic weeds in Korea, dominates grasslands and reduces forage quality. Improving current pasture productivity through precision management requires practical tools to collect site-specific pasture weed data. Recent developments in unmanned aerial vehicle (UAV) technology offer cost-effective, real-time means of site-specific data collection. To map red sorrel on a hill pasture, we tested the potential use of a UAV system with digital cameras (visible and near-infrared (NIR)). Field measurements were conducted on a grazed hill pasture at the Hanwoo Improvement Office, Seosan City, Chungcheongnam-do Province, Korea, on May 17, 2014. Plant samples were obtained at 20 sites. The UAV system was used to obtain aerial photos from a height of approximately 50 m (approximately 30 cm spatial resolution). Normalized digital number values of the Red, Green, Blue, and NIR channels were extracted from the aerial photos. Multiple linear regression analysis showed that the correlation coefficient between Rumex content and the four bands of the UAV image was 0.96, with a root mean square error of 9.3. Therefore, a UAV monitoring system can be a quick and cost-effective tool for mapping the spatial distribution of red sorrel for the precision management of hilly grazing pastures.
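As an illustration of the regression step described in the abstract, the sketch below fits a multiple linear regression of Rumex content on the four normalized band values; the data here are synthetic stand-ins, not the study's measurements.

```python
# Illustrative sketch: regress Rumex content on normalized digital numbers of
# the R, G, B and NIR channels extracted at the ground-sampling sites.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in data, one row per site (the study used 20 sites):
# columns are normalized R, G, B, NIR; y is measured Rumex content (%).
X = rng.uniform(0.0, 1.0, size=(20, 4))
y = 60 * X[:, 0] - 40 * X[:, 3] + rng.normal(0, 3, 20)   # toy relationship

model = LinearRegression().fit(X, y)
pred = model.predict(X)
r = np.corrcoef(y, pred)[0, 1]                # multiple correlation coefficient
rmse = mean_squared_error(y, pred) ** 0.5
print(f"R = {r:.2f}, RMSE = {rmse:.1f}")      # the paper reports R = 0.96, RMSE = 9.3
```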

Image Processing System for Color Analysis of Food (식품의 색채 분석을 위한 영상 처리 시스템)

  • Kim, Kyung-Man;Seo, Dong-Wook;Chun, Jae-Kun
    • Korean Journal of Food Science and Technology / v.28 no.4 / pp.786-789 / 1996
  • An image processing system was built to evaluate the color properties of apples and meat. The system consisted of a video camera, a video card, a 32-bit microcomputer, and an optical illuminator. The operating software was developed to capture, analyze, display, and store 8-bit digitized images of food. Images of apples at various maturing stages were investigated to obtain the color histograms of R, G, B, and Hunter values. The RGB histogram showed a major difference in the G value (35.01), a minor change in the R value (6.16), and a negligible difference in the B value. The image of a beef cut was separated into two parts, fat and lean tissue, by applying a threshold method based on the digital color values. The threshold for fat was an R value over 240, and for lean tissue an R value under 230. The resulting non-fat image showed a 2% lower color difference value, ${\Delta}E$, than the whole meat cut.
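The fat/lean separation can be sketched directly from the reported R-channel thresholds (fat above 240, lean below 230); the snippet below is illustrative only, and the file name is hypothetical.

```python
# Rough sketch of the reported segmentation rule on a beef-cut image.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("beef_cut.png").convert("RGB"), dtype=np.uint8)
r = img[:, :, 0].astype(int)

fat_mask = r > 240        # threshold for fat reported in the abstract
lean_mask = r < 230       # threshold for lean tissue

print(f"fat pixels:  {fat_mask.mean() * 100:.1f}%")
print(f"lean pixels: {lean_mask.mean() * 100:.1f}%")

# Build a "non-fat" image by blanking out fat pixels before color analysis.
non_fat = img.copy()
non_fat[fat_mask] = 0
Image.fromarray(non_fat).save("beef_non_fat.png")
```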


A Study on Pipe Model Registration for Augmented Reality Based O&M Environment Improving (증강현실 기반의 O&M 환경 개선을 위한 배관 모델 정합에 관한 연구)

  • Lee, Won-Hyuk;Lee, Kyung-Ho;Lee, Jae-Joon;Nam, Byeong-Wook
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.3 / pp.191-197 / 2019
  • As the shipbuilding and offshore plant industries grow larger and more complex, their maintenance and inspection systems become more important. Recently, maintenance and inspection systems based on augmented reality (AR) have attracted much attention for improving workers' understanding of their tasks and their efficiency, but such systems are often difficult to use because accurate matching between the augmented model and the real scene is not achieved. To solve this problem, marker-based AR attaches a specific image to the model. However, markers are easily damaged under the working conditions of the shipbuilding and offshore plant industries, and the camera must be able to see the entire marker clearly, which requires sufficient space between the operator and the marker. To overcome these limitations of existing AR systems, this study adopted markerless AR and proposes a matching methodology to accurately recognize the actual model of the piping system, which accounts for the largest share of processes in the shipbuilding and offshore plant industries. This system is expected to reduce the misalignment of the augmented model caused by the worker's posture and the constrained working environment.
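The abstract does not detail the matching method, so the following is only a generic markerless registration sketch (ORB features plus PnP with RANSAC) of the kind commonly used for model-to-camera alignment; it is not the authors' methodology, and all inputs and names are illustrative.

```python
# Generic markerless registration sketch, NOT the authors' method: match ORB
# features between a stored reference view of the pipe model and the live camera
# frame, then recover the camera pose with PnP + RANSAC. All inputs are assumed.
import cv2
import numpy as np

def estimate_pose(reference_img, live_img, ref_points_3d, K, dist_coeffs):
    """ref_points_3d: assumed lookup from reference keypoint index to the
    corresponding 3D point on the pipe CAD model; K, dist_coeffs: camera intrinsics."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(reference_img, None)
    kp_live, des_live = orb.detectAndCompute(live_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_live), key=lambda m: m.distance)[:200]

    obj_pts = np.float32([ref_points_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([kp_live[m.trainIdx].pt for m in matches])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist_coeffs)
    return (rvec, tvec) if ok else (None, None)   # pose used to overlay the AR model
```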

Changes in the Hyperspectral Characteristics of Wheat Plants According to N Top-dressing Rates at Various Growth Stages (밀에서 질소 시비 조건에 따른 생육 단계별 초분광 특성 변화)

  • Jung, Jae Gyeong;Lee, Yeong Hun;Choi, Jae Eun;Song, Gi Eun;Ko, Jong Han;Lee, Kyung Do;Shim, Sang In
    • KOREAN JOURNAL OF CROP SCIENCE / v.65 no.4 / pp.377-385 / 2020
  • Recently, wheat consumption has been increasing in Korea, requiring increased production. Nitrogen fertilization is a critical determinant of crop yield; therefore, it is necessary to optimize the nitrogen fertilization regime under current trends that emphasize minimizing the impact of nitrogen fertilizer on the environment. In this study, both nondestructive spectral analysis using a hyperspectral camera and growth analysis were performed to determine the optimal N top-dressing rates after heading. The nitrogen application regimes consisted of three conditions according to the secondary top-dressing rate: N4:3:0 (0 kg/10a), N4:3:3 (2.73 kg/10a), and N4:3:6 (5.46 kg/10a). Growth and physiological investigations were performed at the jointing, heading, and ripening stages of wheat, and spectral investigations were conducted. On April 29, as the nitrogen fertilization rate increased to N4:3:3 and N4:3:6, plant height and grain yield increased by 4% and 8%, and by 8% and 52%, respectively, compared to N4:3:0. The leaf area index and SPAD value also increased by 13% and 24%, and by 32% and 43%, respectively. The R (red), G (green), and B (blue) values of leaf color were lower by 15, 11, and 4 in N4:3:3 and by 44, 34, and 18 in N4:3:6, respectively, compared to the control. Grain yield was highest under high top-dressing (N4:3:6); however, there was no difference between no top-dressing (N4:3:0) and intermediate top-dressing (N4:3:3). The reflectance measured with the hyperspectral camera showed a difference in the near-infrared (NIR) region on March 19, and on April 29 there were differences both in the visible region above 550 nm and in the NIR region. Vegetation indices differed according to the fertilization regime, except for the greenness index (GI). These results show that not only growth and physiological analysis but also spectral indices can be used to optimize the nitrogen top-dressing rate.
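As a rough illustration of how spectral indices such as NDVI and GI are derived from hyperspectral reflectance, the sketch below uses commonly cited band positions and a toy spectrum; the exact index definitions and wavelengths used in the study may differ.

```python
# Minimal sketch: compute NDVI and a greenness index (GI) from reflectance at
# selected wavelengths of a hyperspectral plot spectrum.
import numpy as np

def band(reflectance, wavelengths, target_nm):
    """Return reflectance at the wavelength closest to target_nm."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
    return reflectance[idx]

def vegetation_indices(reflectance, wavelengths):
    red = band(reflectance, wavelengths, 670)
    green = band(reflectance, wavelengths, 550)
    nir = band(reflectance, wavelengths, 800)
    ndvi = (nir - red) / (nir + red)
    gi = green / red                      # one common form of the greenness index
    return {"NDVI": ndvi, "GI": gi}

# Toy spectrum: 400-1000 nm at 10 nm steps with synthetic reflectance values.
wl = np.arange(400, 1001, 10)
refl = np.interp(wl, [400, 550, 670, 800, 1000], [0.04, 0.12, 0.05, 0.45, 0.43])
print(vegetation_indices(refl, wl))
```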

Matching Points Filtering Applied Panorama Image Processing Using SURF and RANSAC Algorithm (SURF와 RANSAC 알고리즘을 이용한 대응점 필터링 적용 파노라마 이미지 처리)

  • Kim, Jeongho;Kim, Daewon
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.4 / pp.144-159 / 2014
  • Techniques for making a single panoramic image from multiple pictures are widely studied in areas such as computer vision and computer graphics. Panoramic images can be applied to fields like virtual reality and robot vision, which require wide-angle shots, as a useful way to overcome the limitations in picture angle, resolution, and internal information of an image taken from a single camera. A panoramic image is particularly valuable because it usually provides a stronger feeling of immersion than a plain image. Although there are many ways to build a panoramic image, most of them extract feature points and matching points from each image, apply the RANSAC (RANdom SAmple Consensus) algorithm to the matching points, and use a homography matrix to transform the images. The SURF (Speeded Up Robust Features) algorithm, used in this paper to extract feature points, relies on an image's grayscale information and local spatial information. SURF is widely used because it is robust to changes in image scale and viewpoint and is faster than the SIFT (Scale Invariant Feature Transform) algorithm. However, SURF has a shortcoming: errors in the extracted feature points slow down the RANSAC algorithm and may increase CPU usage. Errors in detecting matching points can critically degrade the accuracy and clarity of the panoramic image. In this paper, to minimize matching-point errors, we used the RGB pixel values of the $3{\times}3$ region around each matching point's coordinates to perform an intermediate filtering process that removes incorrect matching points. We also present analysis and evaluation results for panorama-generation speed, CPU usage, the reduction rate of extracted matching points, and accuracy.
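The pipeline described above can be sketched with OpenCV as follows; SURF requires the opencv-contrib (non-free) build, and the ratio test, patch-difference threshold, and file names are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: SURF matching, 3x3-patch color filtering of matches, RANSAC homography,
# and a simple two-image stitch.
import cv2
import numpy as np

def patch_mean(img, pt):
    """Mean color of the 3x3 region around a point (clipped to image bounds)."""
    x = int(np.clip(pt[0], 1, img.shape[1] - 2))
    y = int(np.clip(pt[1], 1, img.shape[0] - 2))
    return img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3).mean(axis=0)

def patch_ok(img_a, img_b, pt_a, pt_b, thresh=30):
    """Reject matched pairs whose surrounding 3x3 colors differ too much."""
    return np.abs(patch_mean(img_a, pt_a) - patch_mean(img_b, pt_b)).max() < thresh

left = cv2.imread("left.jpg")
right = cv2.imread("right.jpg")

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # needs opencv-contrib
kp_l, des_l = surf.detectAndCompute(cv2.cvtColor(left, cv2.COLOR_BGR2GRAY), None)
kp_r, des_r = surf.detectAndCompute(cv2.cvtColor(right, cv2.COLOR_BGR2GRAY), None)

raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_r, des_l, k=2)   # right -> left
good = [m for m, n in raw
        if m.distance < 0.7 * n.distance
        and patch_ok(right, left, kp_r[m.queryIdx].pt, kp_l[m.trainIdx].pt)]

src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_l[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)           # RANSAC rejects outliers

# Warp the right image into the left image's frame, then paste the left image.
pano = cv2.warpPerspective(right, H, (left.shape[1] + right.shape[1], left.shape[0]))
pano[:left.shape[0], :left.shape[1]] = left
cv2.imwrite("panorama.jpg", pano)
```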

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the past five years, more traffic accidents occurred at night than during the day. Among the various causes, one of the major ones is inappropriate or missing street lights, which confuse the driver's sight and lead to accidents. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street light facilities and areas without any street lights. The application is implemented in native C/C++ using the Android NDK, which improves execution speed compared with code written in Java or other languages. To measure road luminance, the RGB input image is converted to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lane, computes the lane luminance, and sends it to the database server. It also receives the road video from the smartphone's camera and reduces computational cost by processing only a region of interest (ROI) of the input images. The ROI is converted to a grayscale image, and the Canny edge detector is applied to extract the outlines of the lanes. A Hough line transform is then applied to obtain the candidate lane group, and both sides of the lane are selected by a lane detection algorithm that utilizes the gradients of the candidate lanes. When both lanes are detected, a triangular area extending 20 pixels down from the intersection of the lanes is set up, and the road luminance is estimated from this area: the Y value is calculated from the R, G, and B values of each pixel in the triangle, the average Y value is scaled to a range of 0 to 100 to express road luminance, and pixel values are rendered with colors between black and green. After analyzing the lane video together with the road luminance about 60 meters ahead, the car's location from the smartphone's GPS sensor is stored in the database server over a wireless connection every 10 minutes. We expect that the collected road luminance information can warn drivers to drive safely and effectively improve road lighting renovation plans.
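A desktop prototype of the described measurement pipeline might look like the sketch below (the actual application is written in C/C++ via the Android NDK); the ROI choice, Hough parameters, triangle placement, and 0-100 scaling are simplifying assumptions about what the abstract describes.

```python
# Illustrative prototype: ROI -> Canny -> Hough lines -> lane pair -> triangle
# below the lane intersection -> mean Y (luminance) scaled to 0-100.
import cv2
import numpy as np

def road_luminance(frame_bgr):
    """Return an estimated road luminance on a 0-100 scale, or None if no lanes."""
    h, w = frame_bgr.shape[:2]
    roi = frame_bgr[h // 2:, :]                       # assume the lower half as ROI

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # lane outlines
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None

    # Pick one left-leaning and one right-leaning candidate by gradient sign.
    left = right = None
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3 and left is None:
            left = (x1, y1, x2, y2)
        elif slope > 0.3 and right is None:
            right = (x1, y1, x2, y2)
    if left is None or right is None:
        return None

    # Triangle extending 20 pixels down from the (approximate) lane intersection.
    apex_x = (left[0] + right[0]) // 2
    apex_y = min(left[1], right[1], left[3], right[3])
    tri = np.array([[apex_x, apex_y],
                    [apex_x - 40, apex_y + 20],
                    [apex_x + 40, apex_y + 20]], dtype=np.int32)

    mask = np.zeros(roi.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [tri], 255)
    y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    return y_channel[mask == 255].mean() * 100.0 / 255.0
```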

Estimation of Rice Heading Date of Paddy Rice from Slanted and Top-view Images Using Deep Learning Classification Model (딥 러닝 분류 모델을 이용한 직하방과 경사각 영상 기반의 벼 출수기 판별)

  • Hyeok-jin Bak;Wan-Gyu Sang;Sungyul Chang;Dongwon Kwon;Woo-jin Im;Ji-hyeon Lee;Nam-jin Chung;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.337-345 / 2023
  • Estimating the rice heading date is one of the most crucial agricultural tasks related to productivity. However, due to abnormal climates around the world, it is becoming increasingly challenging to estimate the rice heading date, and a more objective classification method is needed than the existing ones. In this study, we aimed to classify the rice heading stage from various images using a CNN classification model. We collected top-view images taken from a drone and a phenotyping tower, as well as slanted-view images captured with an RGB camera. The collected images underwent preprocessing to prepare them as input data for the CNN models. The CNN architectures employed were ResNet50, InceptionV3, and VGG19, which are commonly used in image classification. All models showed an accuracy of 0.98 or higher, regardless of architecture and image type. We also used Grad-CAM to visually check which image features the models attended to when classifying, and then verified that our models accurately estimate the rice heading date in paddy fields. The estimated heading dates differed from the observed dates by approximately one day on average across the four paddy fields. These results suggest that the heading stage can be estimated automatically and quantitatively from various paddy field monitoring images.
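A minimal transfer-learning sketch of one of the named architectures (ResNet50) is shown below; the directory layout, image size, and binary pre-heading/heading labelling are assumptions for illustration, not the authors' training setup.

```python
# Sketch: two-class (pre-heading vs. heading) ResNet50 classifier in Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_images/train", image_size=IMG_SIZE, batch_size=32)   # hypothetical paths
val_ds = tf.keras.utils.image_dataset_from_directory(
    "rice_images/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False                                          # freeze the backbone

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)              # heading / not yet heading
model = models.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```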

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang;Yeongbin Park;Jaeyeop Kwon;Sangheon Lee;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1353-1369 / 2023
  • In the event of a disaster at sea, the scale of damage varies with weather effects such as wind, currents, and tidal waves, and it is essential to minimize the damage by establishing appropriate control plans through quick on-site identification. Among pollutants discharged into the sea, those that exist as a thin film on the sea surface are particularly difficult to identify because of their relatively low viscosity and surface tension. Therefore, this study aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images using imaging equipment that can easily be used in the field, and to evaluate the performance of the algorithm using input data obtained from actual waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; through histogram analysis, the background threshold is found, suspended solids other than pollutants are removed, and finally the pollutants are classified. To evaluate the performance of the developed algorithm, a real-sea test using substitute materials was performed; most of the floating marine pollutants were detected, but false detections occurred in areas with strong waves. Nevertheless, the detection results were about three times better than those of the single-threshold method used in the existing algorithm. The results of this R&D are expected to support on-site control and response activities by detecting floating marine pollutants that were previously difficult to identify with the naked eye.
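The enhancement-threshold-filtering flow can be sketched as follows; CLAHE and Otsu's method are used here only as stand-ins for the paper's enhancement and histogram-based threshold search, and the file name and area threshold are arbitrary assumptions.

```python
# Illustrative sketch: boost local contrast, pick a background threshold from the
# histogram, then drop small non-pollutant blobs.
import cv2
import numpy as np

img = cv2.imread("sea_surface_rgb.jpg")                     # hypothetical file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1) Contrast enhancement (CLAHE as a stand-in technique).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# 2) Histogram-based background threshold (Otsu as a stand-in for the
#    histogram analysis described in the abstract).
_, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3) Remove small connected components (floating debris other than the slick).
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
mask = np.zeros_like(binary)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] >= 500:                   # area threshold (assumed)
        mask[labels == i] = 255

cv2.imwrite("pollutant_mask.png", mask)
```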

Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar;Ryu, Chan-Seok;Kang, Ye-Seong;Kim, Seong-Heon;Jeon, Sae-Rom;Jang, Si-Hyeong;Park, Jun-Woo;Kim, Suk-Gu;Kim, Hyun-Jin
    • Journal of Biosystems Engineering / v.43 no.2 / pp.148-159 / 2018
  • Purpose: Unmanned aerial vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of rice protein content for monitoring grain quality and proper management practice. Methods: Image acquisition was carried out using NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on a UAV. Sampling was done synchronously at the geo-referenced points, and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, then dehulled and milled to 92% milling yield, and the protein content was measured by near-infrared spectroscopy. Results: The artificial neural network showed the best performance, with an $R^2$ (coefficient of determination) of 0.740, NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and RMSE (root mean square error) of 0.187% over all 54 samples, compared with the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least squares regression). The PLSR calibration models showed results similar to PR, with $R^2$ of 0.663 and RMSE of 0.169% for cloud-free samples and $R^2$ of 0.491 and RMSE of 0.217% for cloud-shadowed samples. However, the validation models performed poorly. This study revealed a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2=0.553$ and RMSE = 0.210%, and for the cloud-shadowed samples, $R^2=0.479$ and RMSE = 0.225%. Conclusion: There is a significant correlation between spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained higher precision, with a mean absolute relative error (MARE) of 2.18% and an RMSE of 0.187%.
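As an illustration of combining a vegetation index with a sigmoid-hidden-layer neural network for protein prediction, the sketch below uses synthetic data with the study's sample count; it is not the authors' model or dataset.

```python
# Sketch: derive NDVI from the Red and NIR bands, then fit a small neural network
# with a sigmoid (logistic) hidden layer to predict grain protein content.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)

# Synthetic stand-in: mean band reflectance per plot (the study used 54 samples).
red = rng.uniform(0.03, 0.10, 54)
nir = rng.uniform(0.30, 0.55, 54)
ndvi = (nir - red) / (nir + red)
protein = 9.0 - 3.0 * ndvi + rng.normal(0, 0.15, 54)        # toy relationship (%)

X = np.column_stack([red, nir, ndvi])
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   max_iter=5000, random_state=0).fit(X, protein)

pred = ann.predict(X)
print(f"R^2 = {r2_score(protein, pred):.3f}, "
      f"RMSE = {mean_squared_error(protein, pred) ** 0.5:.3f}%")
```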