• Title/Summary/Keyword: camera image


Measurement of rivulet movement and thickness on inclined cable using videogrammetry

  • Jing, Haiquan;Xia, Yong;Xu, Youlin;Li, Yongle
    • Smart Structures and Systems, v.18 no.3, pp.485-500, 2016
  • Stay cables in some cable-stayed bridges suffer large-amplitude vibrations under the simultaneous occurrence of rain and wind. This phenomenon is called rain-wind-induced vibration (RWIV). The upper rivulet oscillating circumferentially on the inclined cable surface plays an important role in this phenomenon. However, its small size and high sensitivity to wind flow make measuring the rivulet's size and movement challenging. Moreover, the distribution of the rivulet along the entire cable has not been measured. This paper applies the videogrammetric technique to measure the movement and geometric dimensions of the upper rivulet along the entire cable during RWIV. A cable model is tested in an open-jet wind tunnel with artificial rain, and RWIV is successfully reproduced. Only one digital video camera is employed, installed on the cable during the experiment. The camera records video clips of the upper rivulet and cable movements. The video clips are then converted into a series of images, from which the positions of the cable and the upper rivulet at each time instant are identified by image processing. The thickness of the upper rivulet is also estimated. The oscillation amplitude, equilibrium position, and dominant frequency of the rivulet are presented, and the relationship between cable and rivulet variations is also investigated. Results demonstrate that this non-contact, non-intrusive measurement method has good resolution and is cost effective.
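The abstract does not give the authors' image-processing pipeline in detail; the following is a minimal sketch of the general idea only: converting a video clip into frames, locating a bright rivulet band by thresholding, and estimating its dominant oscillation frequency. The file name, brightness threshold, and fallback frame rate are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): per-frame rivulet position from video
# and its dominant oscillation frequency via an FFT of the position history.
import cv2
import numpy as np

cap = cv2.VideoCapture("rivulet_clip.avi")          # hypothetical clip from the cable-mounted camera
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0             # fall back to an assumed 30 fps
positions = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Assume the rivulet shows up as a bright ridge; keep only bright pixels.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if xs.size:
        positions.append(xs.mean())                 # mean column as a circumferential-position proxy
cap.release()

if positions:
    pos = np.asarray(positions) - np.mean(positions)
    spectrum = np.abs(np.fft.rfft(pos))
    freqs = np.fft.rfftfreq(len(pos), d=1.0 / fps)
    print("dominant rivulet frequency ~ %.2f Hz" % freqs[spectrum.argmax()])
```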

Making of View Finder for Drone Photography (드론 촬영을 위한 뷰파인더 제작)

  • Park, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.12, pp.1645-1652, 2018
  • Drones, first developed for military purposes, have expanded into various civilian areas as the technology has advanced. Among the drones developed for such diverse purposes, the photography drone carries a camera and is actively used to produce a variety of image content beyond film and broadcasting. A photography drone makes it possible to capture vivid, dynamic images that were difficult to obtain with conventional photography technology. This study produced a viewfinder that helps a drone operator control the drone while directly viewing the object being shot with the drone camera. The drone viewfinder is a glasses-type device; it was developed by printing the parts modeled in 3D MAX on a 3D printer and installing an ultra-small LCD monitor. The viewfinder enables safe drone flight and accurate framing of the object to be shot.

An Environment Information Management System for Cultivation in Agricultural Facilities using Augmented Reality (증강현실 기반 농업용 환경 정보 관리 시스템)

  • Kim, Min-ji;Kim, Jong-Ho;Koh, Jin-Gwang;Lee, Sung-Keun;Lee, Jae-Hak
    • The Journal of Bigdata, v.3 no.2, pp.113-121, 2018
  • In this study, an augmented reality (AR)-based environment information management system for agricultural facilities is proposed. Using a variety of sensed data transmitted over LoRa-based wireless networks deployed at the agricultural facility, the system augments the sensed data and displays it on the user's smartphone screen to provide visualized information. When users point their smartphone camera at the agricultural facility, the environment information collected from the numerous sensors installed at the facility is visualized and appears on the screen. Unlike traditional systems, which require the user to search for a specific facility and then select sensor(s) to obtain the environment information, the proposed system shows the information on the smartphone screen by augmenting it onto the real image captured by the camera, without a series of time-consuming selection steps. Since information is acquired through images or video, the system contributes to convenient monitoring and efficient management of agricultural facilities.
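As a simplified illustration of the overlay idea (the paper's actual system is a smartphone AR application fed by LoRa sensor nodes), the sketch below draws placeholder sensor readings onto a live camera frame with OpenCV. The readings, device index, and screen coordinates are assumptions.

```python
# Simplified desktop illustration of the AR overlay idea, not the paper's app.
import cv2

readings = {"temperature": "24.3 C", "humidity": "61 %", "CO2": "430 ppm"}  # placeholder sensor values

cap = cv2.VideoCapture(0)                       # local webcam standing in for the smartphone camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    y = 30
    for name, value in readings.items():
        cv2.putText(frame, f"{name}: {value}", (10, y),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        y += 30
    cv2.imshow("facility overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```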

A Construction of Web Application Platform for Detection and Identification of Various Diseases in Tomato Plants Using a Deep Learning Algorithm (딥러닝 알고리즘을 이용한 토마토에서 발생하는 여러가지 병해충의 탐지와 식별에 대한 웹응용 플렛폼의 구축)

  • Na, Myung Hwan;Cho, Wanhyun;Kim, SangKyoon
    • Journal of Korean Society for Quality Management, v.48 no.4, pp.581-596, 2020
  • Purpose: The purpose of this study was to propose a web application platform that can detect and discriminate various diseases and pests of tomato plants based on the large amount of disease image data observed in facilities or the open field. Methods: The deep learning algorithms used in the web application platform combine Faster R-CNN with pre-trained convolutional neural network (CNN) models such as SSD_mobilenet v1, Inception v2, Resnet50, and Resnet101. To evaluate the proposed web application platform, we collected 850 images of four diseases and pests that occur most frequently in tomato plants: bacterial canker, late blight, leaf miners, and powdery mildew. Of these, 750 images were used to train the algorithms, and the remaining 100 images were used to evaluate them. Results: In the experiments, the deep learning algorithms combining Faster R-CNN with SSD_mobilenet v1, Inception v2, Resnet50, and Resnet101 showed detection accuracies of 31.0%, 87.7%, 84.4%, and 90.8%, respectively. Finally, we constructed a web application platform that can detect and discriminate various tomato diseases using the best-performing deep learning algorithm. When farmers upload an image captured with a digital camera, such as a smartphone camera or a DSLR (Digital Single Lens Reflex) camera, they can receive detection, identification, and disease-control information about the captured tomato disease through the proposed web application platform.
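The abstract describes detection with Faster R-CNN-style models; a hedged sketch of the inference step alone is given below, assuming a TensorFlow 2 object-detection SavedModel. The model path, image file, label map, and score threshold are placeholders, not the authors' artifacts.

```python
# Hedged inference sketch: run a trained detection SavedModel on an uploaded leaf photo.
import numpy as np
import tensorflow as tf
from PIL import Image

LABELS = {1: "Bacterial canker", 2: "Late blight", 3: "Leaf miner", 4: "Powdery mildew"}  # assumed label map

detect_fn = tf.saved_model.load("tomato_frcnn_resnet101/saved_model")   # hypothetical model path

image = np.array(Image.open("uploaded_leaf.jpg").convert("RGB"))
batch = tf.convert_to_tensor(image)[tf.newaxis, ...]                    # (1, H, W, 3) uint8 batch

outputs = detect_fn(batch)
scores = outputs["detection_scores"][0].numpy()
classes = outputs["detection_classes"][0].numpy().astype(int)
boxes = outputs["detection_boxes"][0].numpy()                           # normalized [ymin, xmin, ymax, xmax]

for score, cls, box in zip(scores, classes, boxes):
    if score >= 0.5:                                                    # assumed confidence threshold
        print(f"{LABELS.get(cls, 'unknown')}: {score:.2f} at {box.round(3)}")
```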

Deep Learning Based On-Device Augmented Reality System using Multiple Images (다중영상을 이용한 딥러닝 기반 온디바이스 증강현실 시스템)

  • Jeong, Taehyeon;Park, In Kyu
    • Journal of Broadcast Engineering, v.27 no.3, pp.341-350, 2022
  • In this paper, we propose a deep learning based on-device augmented reality (AR) system in which multiple input images are used to implement correct occlusion in a real environment. The proposed system is composed of three technical steps: camera pose estimation, depth estimation, and object augmentation. Each step employs various mobile frameworks to optimize processing in the on-device environment. First, in the camera pose estimation stage, the massive computation involved in feature extraction is parallelized using OpenCL, a GPU parallelization framework. Next, in depth estimation, monocular and multi-image depth inference is accelerated using the mobile deep learning framework TensorFlow Lite. Finally, object augmentation and occlusion handling are performed on the OpenGL ES mobile graphics framework. The proposed augmented reality system is implemented as an application in the Android environment. We evaluate the performance of the proposed system in terms of augmentation accuracy and processing time in both mobile and PC environments.
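The depth-estimation step runs on TensorFlow Lite on the device; as a hedged illustration, the sketch below invokes a TFLite model through the Python interpreter API instead of the Android runtime. The model file name and the random stand-in frame are assumptions.

```python
# Illustrative TensorFlow Lite inference for the depth-estimation step (Python API,
# not the paper's Android implementation). "monodepth.tflite" is a placeholder model.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="monodepth.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

h, w = inp["shape"][1], inp["shape"][2]
frame = np.random.rand(1, h, w, 3).astype(np.float32)   # stand-in for a preprocessed camera frame

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
depth = interpreter.get_tensor(out["index"])[0]          # per-pixel depth estimate

# Occlusion idea from the abstract: draw a virtual object at a pixel only when its
# depth is smaller than the estimated scene depth there.
print("depth map shape:", depth.shape)
```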

Development of Application to guide Putting Aiming using Object Detection Technology (객체 인지 기술을 이용한 퍼팅 조준 가이드 애플리케이션 개발)

  • Jae-Moon Lee;Kitae Hwang;Inhwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.23 no.2, pp.21-27, 2023
  • This paper is a study on the development of an app that assists with putting alignment in golf. The proposed app measures the position and size of the hole cup on the green to provide the distance between the hole cup and the aiming point. To achieve this, artificial intelligence object recognition technology was applied in the development process. The app measures the position and size of the hole cup in real time by applying object recognition to the smartphone camera image, and then displays the distance between the aiming point and the hole cup on the camera image to assist putting alignment. The proposed app was developed for iOS on the iPhone. Performance testing showed that the app can recognize the hole cup in real time and accurately display the distance, providing helpful information for putting alignment.
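The app itself is an iOS application; as a language-neutral illustration of how a detected hole cup's apparent size can be turned into a distance, the sketch below uses the pinhole-camera relation. The focal length value is an assumption; the hole-cup diameter is the regulation 10.8 cm.

```python
# Hedged sketch: estimate distance to the hole cup from its detected width in pixels.
HOLE_DIAMETER_M = 0.108           # regulation hole cup diameter (10.8 cm)
FOCAL_LENGTH_PX = 1500.0          # assumed camera focal length in pixels

def distance_to_cup(bbox_width_px: float) -> float:
    """Distance in meters from the apparent width of the hole cup in the image."""
    return HOLE_DIAMETER_M * FOCAL_LENGTH_PX / bbox_width_px

print(f"~{distance_to_cup(90.0):.2f} m to the hole cup")   # example detection width of 90 px
```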

Automatic analysis of golf swing from single-camera video sequences (단일 카메라 영상으로부터 골프 스윙의 자동 분석)

  • Kim, Pyeoung-Kee
    • Journal of Korea Society of Industrial Information Systems, v.14 no.5, pp.139-148, 2009
  • In this paper, I propose an automatic analysis method for golf swings from single-camera video sequences. I define the swing features necessary for automatic swing analysis in a two-dimensional environment and present efficient swing analysis methods using various image processing techniques, including line and edge detection. The proposed method has two characteristics compared with previous swing analysis systems and related studies. First, it enables automatic swing analysis in two dimensions, whereas previous systems require a three-dimensional environment that is relatively complex and expensive to operate. Second, the swing analysis is performed automatically without human intervention, whereas other two-dimensional systems require analysis by a golf expert. I tested the method on 20 swing video sequences and found that it works effectively for automatic analysis of golf swings.
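The abstract mentions line and edge detection; a minimal sketch of that kind of step is shown below, finding straight-line segments (for example, a club-shaft candidate) in a single swing frame with Canny edges and a probabilistic Hough transform. The file name and thresholds are assumptions.

```python
# Hedged sketch: detect straight-line segments in a swing frame as club-shaft candidates.
import cv2
import numpy as np

frame = cv2.imread("swing_frame.jpg")                  # hypothetical frame from the swing video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=60, maxLineGap=10)
if lines is not None:
    # Keep the longest segment as a crude shaft candidate.
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    print("shaft candidate:", (x1, y1), "->", (x2, y2))
```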

Three-dimensional Reconstruction of X-ray Imagery Using Photogrammetric Technique (사진측량기법을 이용한 엑스선영상의 3차원 모형화)

  • Kim, Eui Myoung
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.2D, pp.277-285, 2008
  • X-ray images are widely used in medical applications and can detect scoliosis, which appears during the growth of the human skeleton, more efficiently than other modalities. This research focuses on the calibration of X-ray images and the three-dimensional coordinate determination of objects. Three-dimensional coordinates of objects imaged by X-ray are determined by a two-step procedure. First, interior and exterior orientation parameters are determined by camera calibration using a Primary Calibration Object (PCO), which has two faces with embedded radiopaque steel balls. Second, the coordinates of a calibration cage, composed of two acrylic sheets perpendicular to the X-ray source, are determined using these parameters. The three-dimensional coordinates of the calibration cage determined by the photogrammetric technique are compared with those measured by a Coordinate Measuring Machine (CMM). Through the accuracy analysis, the error values in the X direction, which is parallel to the X-ray source, are found to be relatively higher than those in the Y and Z directions. However, the accuracies in the Y and Z axes are approximately -3 mm to 3 mm. From these results, the photogrammetric technique can be applied to determine three-dimensional coordinates of patients or to assist in making medical devices.
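The calibration step in the abstract recovers orientation parameters from known control points; one common way to express that step is the Direct Linear Transformation (DLT), sketched below with synthetic control points standing in for the PCO's radiopaque steel balls. The point values are placeholders, and the DLT shown here illustrates the idea rather than the authors' exact formulation.

```python
# Hedged DLT sketch: estimate a 3x4 projection matrix from >= 6 point correspondences.
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Least-squares projection matrix from 3D control points and their image points."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Synthetic control points (object space, mm) projected by a known camera for demonstration.
obj = np.random.rand(8, 3) * 100.0
P_true = np.hstack([np.eye(3), np.array([[5.0], [2.0], [100.0]])])
img_h = (P_true @ np.c_[obj, np.ones(8)].T).T
img = img_h[:, :2] / img_h[:, 2:3]

P = dlt_projection_matrix(obj, img)
print("recovered projection matrix (up to scale):\n", P / P[-1, -1])
```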

Proximate Content Monitoring of Black Soldier Fly Larval (Hermetia illucens) Dry Matter for Feed Material using Short-Wave Infrared Hyperspectral Imaging

  • Juntae Kim;Hary Kurniawan;Mohammad Akbar Faqeerzada;Geonwoo Kim;Hoonsoo Lee;Moon Sung Kim;Insuck Baek;Byoung-Kwan Cho
    • Food Science of Animal Resources, v.43 no.6, pp.1150-1169, 2023
  • Edible insects are gaining popularity as a potential future food source because of their high protein content and efficient use of space. Black soldier fly larvae (BSFL) are noteworthy because they can be used as feed for various animals including reptiles, dogs, fish, chickens, and pigs. However, if the edible insect industry is to advance, we should use automation to reduce labor and increase production. Consequently, there is a growing demand for sensing technologies that can automate the evaluation of insect quality. This study used short-wave infrared (SWIR) hyperspectral imaging to predict the proximate composition of dried BSFL, including moisture, crude protein, crude fat, crude fiber, and crude ash content. The larvae were dried at various temperatures and times, and images were captured using an SWIR camera. A partial least-squares regression (PLSR) model was developed to predict the proximate content. The SWIR-based hyperspectral camera accurately predicted the proximate composition of BSFL from the best preprocessing model; moisture, crude protein, crude fat, crude fiber, and crude ash content were predicted with high accuracy, with R2 values of 0.89 or more, and root mean square error of prediction values were within 2%. Among preprocessing methods, mean normalization and max normalization methods were effective in proximate prediction models. Therefore, SWIR-based hyperspectral cameras can be used to create automated quality management systems for BSFL.
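The modeling step above is a partial least-squares regression from preprocessed spectra to a proximate value; a hedged sketch with scikit-learn is given below. The synthetic spectra, band count, and reference values are placeholders, not the study's data.

```python
# Hedged PLSR sketch: mean-normalized spectra -> one proximate value (e.g., crude protein %).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((200, 224))                                   # 200 samples x 224 SWIR bands (assumed)
y = X[:, 60:70].mean(axis=1) * 40 + rng.normal(0, 0.5, 200)  # synthetic reference values

X = X / X.mean(axis=1, keepdims=True)                        # mean-normalization preprocessing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmsep = float(np.sqrt(mean_squared_error(y_te, y_hat)))
print(f"R2 = {r2_score(y_te, y_hat):.2f}, RMSEP = {rmsep:.2f}")
```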

Detection of the co-planar feature points in the three dimensional space (3차원 공간에서 동일 평면 상에 존재하는 특징점 검출 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.6, pp.499-508, 2023
  • In this paper, we propose a technique to estimate the coordinates of feature points lying on a 2D planar object in three-dimensional space. The proposed method detects multiple 3D feature points in the image and excludes those that are not located on the plane. The technique estimates the planar homography between the planar object in 3D space and the camera image plane, and computes the back-projection error of each feature point on the planar object. Feature points with large errors are then considered off-plane points and are excluded from the feature estimation phase. The proposed method is achieved on the basis of the planar homography alone, without any additional sensors or optimization algorithms. In the experiments, it was confirmed that the proposed method runs at more than 40 frames per second. In addition, compared with an RGB-D camera, there was no significant difference in processing speed, and the frame rate was verified to be unaffected even as the number of detected feature points continuously increased.
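A minimal sketch of the filtering idea described above: estimate the plane-to-image homography, back-project the planar points, and discard points whose error exceeds a threshold as off-plane points. The synthetic point sets, the ground-truth homography, and the pixel threshold are placeholders.

```python
# Hedged sketch: homography-based rejection of off-plane feature points.
import cv2
import numpy as np

rng = np.random.default_rng(1)
plane_pts = (rng.random((50, 2)) * 100).astype(np.float32)       # points on the planar object

# Synthetic ground-truth homography mapping the plane into the image (for demonstration).
H_true = np.array([[1.2, 0.1, 40.0],
                   [0.05, 1.1, 25.0],
                   [1e-4, 2e-4, 1.0]])
image_pts = cv2.perspectiveTransform(plane_pts.reshape(-1, 1, 2), H_true).reshape(-1, 2)
image_pts[:10] += rng.normal(0, 15, (10, 2)).astype(np.float32)  # first 10 points behave as off-plane

H, _ = cv2.findHomography(plane_pts, image_pts, cv2.RANSAC, 3.0)

projected = cv2.perspectiveTransform(plane_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
errors = np.linalg.norm(projected - image_pts, axis=1)
on_plane = errors < 3.0                                          # assumed back-projection error threshold (px)
print(f"{on_plane.sum()} of {len(errors)} points kept as on-plane")
```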