• Title/Summary/Keyword: camera image


Development of non-destructive freshness measurement system for eggs using PLC control and image processing (PLC제어와 영상처리를 이용한 계란의 비파괴 신선도 측정 시스템 개발)

  • Kim, Tae-Jung;Kim, Sun-Jung;Lee, Dong-Goo;Lee, Jeong-Ho;Lee, Young-Seok;Hwang, Heon;Choi, Sun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.2 / pp.162-169 / 2019
  • Non-destructive freshness measurement using spectroscopy has been attempted several times, but research on visually measuring and quantifying freshness has not been conducted. The purpose of this study is therefore to develop a system for visually measuring and quantifying the air sac inside an egg by a non-destructive method. The experimental setup, a small chamber, consisted of two 850 nm IR lasers, an IR camera, and two servo motors for acquiring air sac images. When the air sac volume ratio was 2.9% or less and the density was 0.9800 or more, the Haugh unit value was 60 or more and the egg was judged to be fresh, of grade B or higher. These results show that, by combining the weight measurement, the non-destructive decision system, and the freshness-evaluation algorithm, marketable eggs of grade B or higher can be distinguished without destructive methods.
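
A minimal sketch of the freshness decision rule reported in the abstract above, assuming the air sac volume ratio and egg density have already been obtained from the imaging and weighing steps; the thresholds are those quoted in the abstract, while the function and variable names are illustrative:

```python
def is_marketable_egg(air_sac_volume_ratio: float, density: float) -> bool:
    """Apply the freshness thresholds reported in the abstract.

    air_sac_volume_ratio: air sac volume as a percentage of egg volume
    density: egg density in g/cm^3
    Returns True if the egg is expected to be grade B or higher
    (Haugh unit >= 60 according to the reported correlation).
    """
    return air_sac_volume_ratio <= 2.9 and density >= 0.9800


# Example: an egg with a 2.5% air sac and density 0.985 g/cm^3
print(is_marketable_egg(2.5, 0.985))  # True
```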

3D Model Generation and Accuracy Evaluation using Unmanned Aerial Oblique Image (무인항공 경사사진을 이용한 3차원 모델 생성 및 정확도 평가)

  • Park, Joon-Kyu;Jung, Kap-Yong
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.3 / pp.587-593 / 2019
  • The field of geospatial information is changing rapidly due to the development of sensors and data processing technologies that can acquire location information, and demand is increasing in various related industries and social activities. The construction and utilization of three-dimensional geospatial information that is easy to understand and use can be an essential element for improving the quality and reliability of related services. In recent years, 3D laser scanners have been widely used for constructing 3D geospatial information. However, 3D laser scanners may leave shadow areas where data acquisition is not possible when objects are large or complex in shape. In this study, a 3D model of an object was created by acquiring oblique images with an unmanned aerial vehicle and processing the data. A study area was selected, oblique images were acquired with an unmanned aerial vehicle, and a point cloud type 3D model with 0.02 m point spacing was created through data processing. The maximum error of the 3D model was 0.19 m and the average error was 0.11 m. In the future, if accuracy is evaluated according to shooting and data processing methods, and 3D model construction and accuracy evaluation are performed according to camera type, the accuracy of the 3D model can be further improved. The point cloud type 3D model supports cross-section generation, drawing of objects, and similar tasks, so it can improve the work efficiency of spatial information services and related work.
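
The abstract reports maximum and average model errors; a plausible way to obtain such figures is to compare surveyed checkpoints against their nearest neighbours in the reconstructed point cloud. The sketch below uses hypothetical data and a SciPy KD-tree as a stand-in for the authors' actual accuracy-evaluation workflow:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical surveyed checkpoint coordinates (X, Y, Z in metres)
checkpoints = np.array([
    [10.02, 20.01, 5.03],
    [15.11, 22.40, 5.10],
    [18.75, 25.33, 4.98],
])

# Hypothetical point cloud produced from the oblique images (N x 3)
point_cloud = np.random.rand(100_000, 3) * np.array([30.0, 30.0, 6.0])

# For each checkpoint, find the distance to the nearest reconstructed point
tree = cKDTree(point_cloud)
errors, _ = tree.query(checkpoints, k=1)

print(f"max error:  {errors.max():.2f} m")
print(f"mean error: {errors.mean():.2f} m")
```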

3-dimensional Modeling and Mining Analysis for Open-pit Limestone Mine Stope Using a Rotary-wing Unmanned Aerial Vehicle (회전익 무인항공기를 이용한 노천석회석광산 채굴장 3차원 모델링 및 채굴량 분석)

  • Kang, Seong-Seung;Lee, Geon-Ju;Noh, Jeongdu;Jang, Hyeongdoo;Kim, Sun-Myung;Ko, Chin-Surk
    • The Journal of Engineering Geology / v.28 no.4 / pp.701-714 / 2018
  • The purpose of this study is to demonstrate the feasibility of 3-dimensional modeling of an open-pit limestone mine using a rotary-wing unmanned aerial vehicle (drone), and to estimate the amount of material mined before and after blasting of the limestone. Analysis of the image overlap over the mine showed that high image quality could be achieved. Analysis of the error along each axis at the shooting positions, after correcting distortion through camera calibration, showed values within the allowable range. By estimating the mined amount before and after blasting, it was possible to estimate the mined volume over a wide area quickly and accurately. In conclusion, a rotary-wing unmanned aerial vehicle can be usefully applied to monitoring open-pit limestone mines and estimating the amount of mining. Furthermore, this method is expected to be utilized for periodic monitoring of construction sites and road slopes as well as open-pit mines in the future.
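
The before/after mining estimate described above amounts to differencing two surface models of the pit. A minimal sketch of that idea, assuming the before- and after-blasting surveys have been gridded into DEMs on the same cell layout; the arrays and cell size here are hypothetical, not the study's data:

```python
import numpy as np

CELL_SIZE = 0.5  # metres per grid cell side (hypothetical ground sampling)

# Hypothetical DEMs (elevation in metres) gridded from the drone point clouds
dem_before = np.random.uniform(100.0, 110.0, size=(400, 400))
dem_after = dem_before - np.random.uniform(0.0, 3.0, size=(400, 400))

# Excavated volume: sum of elevation loss per cell times cell area
elevation_loss = np.clip(dem_before - dem_after, 0.0, None)
mined_volume = elevation_loss.sum() * CELL_SIZE ** 2

print(f"estimated mined volume: {mined_volume:,.0f} m^3")
```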

Land Cover Classification of High-Spatial Resolution Imagery using Fixed-Wing UAV (고정익 UAV를 이용한 고해상도 영상의 토지피복분류)

  • Yang, Sung-Ryong;Lee, Hak-Sool
    • Journal of the Society of Disaster Information / v.14 no.4 / pp.501-509 / 2018
  • Purpose: UAV-based photogrammetry is being studied in the spatial information field because it is not only cost-effective compared to conventional aerial imaging but also makes it easy to obtain high-resolution data at the desired time and location. In this study, UAV-based high-resolution images were used to perform land cover classification. Method: An RGB camera was used to obtain high-resolution images, and a multispectral camera was additionally used to photograph the same areas in order to classify vegetated areas accurately. Land cover classification was then carried out for a total of seven classes using the orthoimages created from the RGB and multispectral cameras, a DSM (Digital Surface Model), NDVI (Normalized Difference Vegetation Index), and GLCM (Gray-Level Co-occurrence Matrix) features, with Random Forest (RF), a representative supervised classification method. Results: To assess the accuracy of the classification, an accuracy assessment based on the error matrix was conducted, and the results verified that the proposed method could classify the classes in the region more effectively than classification using RGB images only. Conclusion: When the orthoimage, multispectral image, NDVI, and GLCM proposed in this study were added, accuracy was higher than with the conventional orthoimage alone. Future research will attempt to improve classification accuracy through the development of additional input data.
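
A rough sketch of the classification step described in the Method section, assuming co-registered feature rasters (orthoimage bands, DSM, NDVI, and a precomputed GLCM texture band) and illustrative training labels; scikit-learn's RandomForestClassifier stands in for the RF classifier used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical co-registered rasters (H x W), resampled to the ortho grid
H, W = 200, 200
red = np.random.rand(H, W)
nir = np.random.rand(H, W)
dsm = np.random.rand(H, W) * 50        # surface height in metres
glcm_texture = np.random.rand(H, W)    # precomputed GLCM texture band

# NDVI = (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / (nir + red + 1e-9)

# Stack per-pixel features and draw hypothetical labels for 7 classes
features = np.stack([red, nir, ndvi, dsm, glcm_texture], axis=-1).reshape(-1, 5)
labels = np.random.randint(0, 7, size=H * W)

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(features[:5000], labels[:5000])        # small training subset
pred = rf.predict(features[5000:10000])

print(confusion_matrix(labels[5000:10000], pred))   # error matrix
print("overall accuracy:", accuracy_score(labels[5000:10000], pred))
```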

A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

  • Ni, Yi-Qing;Wang, You-Wu;Liao, Wei-Yang;Chen, Wei-Huan
    • Smart Structures and Systems / v.24 no.6 / pp.769-781 / 2019
  • The dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring image signals, the proposed system traces only the coordinates of the target points, thereby enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m high Canton Tower. To facilitate the verification, a GPS system is used to calibrate/verify the structural displacement responses measured by the vision-based system. Meanwhile, an accelerometer deployed in the vicinity of the target point also provides frequency-domain information for comparison. Special attention has been given to understanding the influence of the surrounding light on the monitoring results. For this purpose, the experimental tests were conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver, but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the data obtained by the vision-based system are all in good agreement with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model. In particular, the vision-based system placed at the bottom of the enclosed elevator shaft offers better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are readily decomposed into two parts: a quasi-static component primarily resulting from temperature variation and a dynamic component mainly caused by fluctuating wind load.
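
A simplified sketch of the two ingredients the abstract describes: tracing a target's pixel coordinates frame by frame, and separating the displacement history into quasi-static and dynamic parts with a wavelet filter. OpenCV template matching and PyWavelets are stand-ins for the authors' system; the scale factor and the signal below are hypothetical:

```python
import numpy as np
import cv2
import pywt

SCALE_MM_PER_PX = 2.5  # hypothetical calibration at the target distance

def track_target(frame_gray, template):
    """Return the (x, y) pixel location of the best template match in a frame."""
    result = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc  # top-left corner of the matched region

# Hypothetical per-frame target x-coordinates (pixels) collected over time
pixel_x = np.cumsum(np.random.randn(1024)) * 0.05
displacement_mm = pixel_x * SCALE_MM_PER_PX

# Wavelet decomposition: approximation = quasi-static part, details = dynamic part
coeffs = pywt.wavedec(displacement_mm, "db4", level=5)
quasi_static = pywt.waverec(
    [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4"
)[: len(displacement_mm)]
dynamic = displacement_mm - quasi_static
```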

Measurement of Bubble Size in Flotation Column using Image Analysis System (이미지 분석시스템을 이용한 부선컬럼에서 기포크기의 측정)

  • An, Ki-Seon;Jeon, Ho-Seok;Park, Chul-Hyun
    • Resources Recycling / v.29 no.6 / pp.104-113 / 2020
  • Bubble size in froth flotation has long been recognized as a key factor affecting bubble residence time, bubble surface area flux (Sb), and carrying rate (Cr). This paper presents a method of bubble size measurement and the relationship between operating variables and gas dispersion properties in a flotation column. Using a high-speed camera and an image analysis system, bubble size was measured directly as a function of operating parameters (e.g., superficial gas rate (Jg), superficial wash water rate (Jw), and frother concentration) in the flotation column. The measured and estimated bubble sizes agreed within an error range of ±15~20%, and the mean bubble size was 0.718 mm. From this system, an empirical relationship for controlling bubble size and distribution was developed under operating conditions of Jg of 0.65~1.3 cm/s, Jw of 0.13~0.52 cm/s, and frother concentration of 60~200 ppm. Surface tension and bubble size decreased as frother concentration increased. The critical coalescence concentration (CCC) of the bubbles appeared to be 200 ppm, at which the surface tension was lowest (49.24 mN/m). Bubble size tended to increase when the superficial gas rate (Jg) decreased and when the superficial wash water rate (Jw) and frother concentration increased. Gas holdup was proportional to the superficial gas rate, as well as to the frother concentration and superficial wash water rate (at a fixed superficial gas rate).
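
A sketch of how circle-equivalent bubble diameters, the Sauter mean diameter d32, and the bubble surface area flux might be computed from backlit bubble images; the OpenCV pipeline and the spatial calibration are illustrative, and Sb = 6·Jg/d32 is the relation commonly used in flotation work rather than a formula quoted from the paper:

```python
import numpy as np
import cv2

MM_PER_PIXEL = 0.01  # hypothetical spatial calibration of the high-speed camera

def bubble_diameters(image_gray):
    """Threshold a backlit bubble image and return equivalent diameters in mm."""
    _, binary = cv2.threshold(
        image_gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU
    )
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = np.array([cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 5])
    return 2.0 * np.sqrt(areas / np.pi) * MM_PER_PIXEL  # circle-equivalent diameter

def sauter_mean(diameters):
    """Sauter mean diameter d32 = sum(d^3) / sum(d^2)."""
    return (diameters ** 3).sum() / (diameters ** 2).sum()

def surface_area_flux(jg_cm_s, d32_mm):
    """Bubble surface area flux Sb = 6 * Jg / d32, with Jg in cm/s and d32 in cm."""
    return 6.0 * jg_cm_s / (d32_mm / 10.0)
```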

Analysis on Mapping Accuracy of a Drone Composite Sensor: Focusing on Pre-calibration According to the Circumstances of Data Acquisition Area (드론 탑재 복합센서의 매핑 정확도 분석: 데이터 취득 환경에 따른 사전 캘리브레이션 여부를 중심으로)

  • Jeon, Ilseo;Ham, Sangwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.577-589 / 2021
  • Drone mapping systems can be applied to many fields such as disaster damage investigation, environmental monitoring, and construction process monitoring. To integrate the individual sensors attached to a drone, it used to be necessary to undergo complicated procedures including time synchronization. Recently, a variety of composite sensors have been released that consist of visual sensors and GPS/INS. Composite sensors integrate multi-sensor data internally and provide geotagged image files to users. Therefore, to use composite sensors in drone mapping systems, the mapping accuracy obtainable from composite sensors should be examined. In this study, we analyzed the mapping accuracy of a composite sensor, focusing on the data acquisition area and the effect of pre-calibration. In the first experiment, we analyzed how mapping accuracy varies with the number of ground control points. When 2 GCPs were used for mapping, the total RMSE was reduced by 40 cm, from more than 1 m to about 60 cm. In the second experiment, we assessed mapping accuracy based on whether pre-calibration was conducted or not. When a few ground control points were used, pre-calibration did not affect mapping accuracy. When the image sequences formed weak geometry, however, pre-calibration proved essential for reducing possible mapping errors. In the absence of ground control points, pre-calibration also improved mapping accuracy. Based on this study, we expect future drone mapping systems using composite sensors to contribute to streamlining survey and calibration processes depending on the data acquisition circumstances.
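
Since the abstract notes that composite sensors hand the user geotagged image files, a small sketch of reading such a geotag with Pillow is shown below; the file name is hypothetical and the `get_ifd` call assumes a reasonably recent Pillow release:

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS information IFD

def read_geotag(path):
    """Return the GPS tags embedded in a geotagged image as a readable dict."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPS_IFD)
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Hypothetical geotagged frame written by the composite sensor
print(read_geotag("composite_sensor_frame_0001.jpg"))
```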

A Study on the Expression of Sense of Space in 3D Architectural Visualization Animation (3D 건축 시각화 애니메이션의 공간감 표현에 관한 연구)

  • Kim, Jong Kouk
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.369-376 / 2021
  • 3D architectural visualization animation has become more important in architectural presentations due to the rapid development of digital technology. Unlike games and movies, architectural visualization animation focuses most on delivering visual information, and it aims to express the sense of space that viewers would feel in an architectural space, rather than simply providing images of buildings to view. The sense of space is affected not only by the physical elements of architecture but also by immaterial elements such as light, time, and human activity, and it is more advantageous to express it in animation, which can contain temporality, than in a fixed image. Therefore, the purpose of this study is to identify elements that effectively convey a sense of space in architectural visualization animation. To this end, publicly available works by renowned architectural visualization artists were selected and observed in order to find the elements that effectively convey a sense of space to viewers. The elements conveying a sense of space that are common to the investigated architectural animations can be classified into the movement and manipulation of the camera, the movement of surrounding objects, changes in the light environment, changes in the weather, the control of time, and the insertion of surreal scenes. This will be followed by a discussion on immersion in architectural content.

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies on human falls have focused on analyzing fall motions using recurrent neural networks (RNNs), and deep learning approaches have shown good results in detecting 2D human poses from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of positional change, using skeletal keypoint information extracted with PoseNet from images obtained with a low-cost 2D RGB camera, thereby increasing the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of post-fall posture within the fall motion-analysis method. A public data set was used to extract human skeletal features, and in an experiment to find a feature extraction method that achieves high classification accuracy, the proposed method showed a 99.8% success rate in detecting falls, more effective than a conventional method that uses the raw skeletal data.
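
A minimal sketch of a GRU classifier over per-frame keypoint features such as head and shoulder positions and their change rates extracted with PoseNet; the feature dimension, sequence length, and layer sizes are illustrative and not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class FallGRU(nn.Module):
    """GRU classifier over per-frame keypoint features (fall vs. no fall)."""

    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):                # x: (batch, frames, n_features)
        _, h = self.gru(x)               # h: (1, batch, hidden)
        return self.head(h.squeeze(0))   # logits: (batch, 2)

# Hypothetical input: 30-frame sequences of head/shoulder (x, y) coordinates
# plus frame-to-frame change rates derived from PoseNet keypoints
batch = torch.randn(8, 30, 6)
model = FallGRU()
print(model(batch).shape)  # torch.Size([8, 2])
```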

Contactless User Identification System using Multi-channel Palm Images Facilitated by Triple Attention U-Net and CNN Classifier Ensemble Models

  • Kim, Inki;Kim, Beomjun;Woo, Sunghee;Gwak, Jeonghwan
    • Journal of the Korea Society of Computer and Information / v.27 no.3 / pp.33-43 / 2022
  • In this paper, we propose an ensemble model facilitated by multi-channel palm images, using attention U-Net models and pretrained convolutional neural networks (CNNs), for establishing a contactless palm-based user identification system with conventional inexpensive camera sensors. The attention U-Net models are used to extract the areas of interest, including the hand (i.e., with fingers), the palm (i.e., without fingers), and the palm lines, which are combined into three channels and fed into the ensemble classifier. The proposed palm information-based user identification system then predicts the class using a classifier ensemble of three outperforming pretrained CNN models. The proposed model achieves a classification accuracy, precision, recall, and F1-score of 98.60%, 98.61%, 98.61%, and 98.61%, respectively, which indicates that it is effective even with very inexpensive image sensors. We believe that under the COVID-19 pandemic circumstances, the proposed palm-based contactless user identification system can be a safe and reliable alternative to the currently predominant contact-based systems.
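
A rough sketch of the three-channel composition and soft-voting ensemble described above, assuming the attention U-Net masks are already available; torchvision ResNet-18 backbones stand in for the paper's three pretrained CNNs, the number of enrolled users is hypothetical, and the `weights=None` argument assumes a recent torchvision:

```python
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 100  # hypothetical number of enrolled users

def make_backbone():
    """A CNN backbone with its classifier head resized for user identification."""
    net = models.resnet18(weights=None)
    net.fc = torch.nn.Linear(net.fc.in_features, NUM_CLASSES)
    return net

def compose_channels(hand_mask, palm_mask, line_mask):
    """Stack the three U-Net outputs (H x W each) into one 3-channel input."""
    return torch.stack([hand_mask, palm_mask, line_mask], dim=0).unsqueeze(0)

def ensemble_predict(x, nets):
    """Soft-voting ensemble: average the softmax outputs of all backbones."""
    probs = torch.stack([F.softmax(net(x), dim=1) for net in nets]).mean(dim=0)
    return probs.argmax(dim=1)

nets = [make_backbone().eval() for _ in range(3)]
x = compose_channels(*(torch.rand(224, 224) for _ in range(3)))
with torch.no_grad():
    print(ensemble_predict(x, nets))  # predicted user id for the sample
```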