• Title/Summary/Keyword: Camera Model


A Study for Utilization and constitution of MMSS (MMSS 시스템 구성 및 활용에 대한 연구)

  • Kim, Kwang-Yong;Yeun, Yeo-Sang;Choi, Jong-Hyun;Kim, Min-Soo;Kim, Kyoung-Ok
    • Journal of Korea Spatial Information System Society / v.3 no.1 s.5 / pp.117-126 / 2001
  • We have developed the Mobile Multi Sensor System (MMSS) to construct data for 4S applications and to acquire the basic technology of mobile mapping systems in Korea. Using the MMSS, we will collect information on roads and road facilities for DB creation and also construct a Digital Elevation Model (DEM) as ancillary data in urban areas. The MMSS consists of an integrated navigation sensor (DGPS and IMU) and a set of digital CCD cameras. On the software side, we developed post-processing components for extracting 3D coordinate information (spatial information) and a client program for the MMSS user group. In this paper, we give an overview of the MMSS constitution and the post-processing program, and introduce a plan for utilizing the MMSS.
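
As a rough illustration of how 3D coordinates can be extracted from a calibrated pair of cameras of this kind, the sketch below triangulates one matched point with OpenCV. It is not the MMSS post-processing software; the intrinsics, the 1.5 m baseline, and the pixel measurements are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical calibrated stereo pair: identical intrinsics, 1.5 m horizontal baseline.
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 480.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.5], [0.0], [0.0]])])

# The same road-facility point measured in both images (pixel coordinates).
pts1 = np.array([[700.0], [500.0]])
pts2 = np.array([[640.0], [500.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                    # 3D point in the reference camera frame
print(X)  # this point would then be georeferenced using the DGPS/IMU pose
```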

Estimation of two-dimensional position of soybean crop for developing weeding robot (제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출)

  • SooHyun Cho;ChungYeol Lee;HeeJong Jeong;SeungWoo Kang;DaeHyun Lee
    • Journal of Drive and Control / v.20 no.2 / pp.15-23 / 2023
  • In this study, the two-dimensional positions of crops were detected for automatic weeding using deep learning. To construct a dataset for soybean detection, an image-capturing system was built from a mono camera and a single-board computer and mounted on a weeding robot to collect soybean images. The dataset was constructed by extracting regions of interest (RoI) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that provides object localization using only image-level labels. The soybean area is localized and visualized via class activation maps (CAM), and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming the pixel coordinates to world coordinates. The estimates were evaluated against the actual positions, determined manually as pixel coordinates in the image; in world coordinates, the MSE was 6.6 (X-axis) and 5.1 (Y-axis) and the RMSE was 1.2 (X-axis) and 2.2 (Y-axis). From these results, we confirmed that the center position of the soybean area derived through deep learning is sufficiently accurate for use in automatic weeding systems.
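
A minimal sketch of the localization step described above: threshold the CAM, cluster the activated pixels, and map the cluster centre to world coordinates. The threshold, the single-cluster assumption, and the pixel-to-ground homography are assumptions for illustration, not the authors' values.

```python
import cv2
import numpy as np

def soybean_world_position(cam_map, H_pix2world, thresh=0.6):
    """Cluster high-activation CAM pixels and map the cluster centre to world coords."""
    ys, xs = np.where(cam_map >= thresh * cam_map.max())      # pixels assigned to the crop
    if len(xs) == 0:
        return None
    pts = np.stack([xs, ys], axis=1).astype(np.float32)
    # A single plant per RoI is assumed here; k > 1 would separate several plants.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(pts, 1, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    cx, cy = centers[0]
    # Homography from image plane to the ground plane (world X-Y), e.g. from calibration.
    p = H_pix2world @ np.array([cx, cy, 1.0])
    return p[:2] / p[2]
```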

The correction of Lens distortion based on Image division using Artificial Neural Network (영상분할 방법 기반의 인공신경망을 적용한 카메라의 렌즈왜곡 보정)

  • Shin, Ki-Young;Bae, Jang-Han;Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.31-38 / 2009
  • Lens distortion is an inevitable phenomenon in machine vision systems, and it becomes more pronounced as lenses are chosen to minimize cost and system size, so correcting it is a critical issue. However, previous correction methods based on camera models suffer from nonlinearity and complicated computation, and recent correction methods based on neural networks also have accuracy and efficiency problems. In this study, we propose a new algorithm for correcting lens distortion: the distorted image is divided into regions according to the amount of distortion using k-means, and each region is corrected with its own neural network. As a result, the proposed algorithm achieves better accuracy than previous methods that do not use image division.
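
A minimal sketch of the region-wise idea under two assumptions: correspondences between distorted and true pixel positions are available (e.g. from a calibration grid), and radial distance from the image centre is used as a stand-in for the distortion quantity on which the paper clusters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def fit_regionwise_correction(distorted, undistorted, center, n_regions=4):
    """Divide points by radial distance with k-means; fit one MLP per region."""
    radius = np.linalg.norm(distorted - center, axis=1, keepdims=True)
    km = KMeans(n_clusters=n_regions, n_init=10).fit(radius)
    models = {}
    for r in range(n_regions):
        idx = km.labels_ == r
        mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
        mlp.fit(distorted[idx], undistorted[idx])   # distorted -> corrected coordinates
        models[r] = mlp
    return km, models

def correct_points(points, center, km, models):
    """Correct new points with the model of the region they fall into."""
    radius = np.linalg.norm(points - center, axis=1, keepdims=True)
    labels = km.predict(radius)
    out = np.empty_like(points, dtype=float)
    for r, mlp in models.items():
        mask = labels == r
        if mask.any():
            out[mask] = mlp.predict(points[mask])
    return out
```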

Multi-Region based Radial GCN algorithm for Human action Recognition (행동인식을 위한 다중 영역 기반 방사형 GCN 알고리즘)

  • Jang, Han Byul;Lee, Chil Woo
    • Smart Media Journal / v.11 no.1 / pp.46-57 / 2022
  • In this paper, we describe a multi-region based Radial Graph Convolutional Network (MRGCN) algorithm that performs end-to-end action recognition using the optical flow and gradient of the input image. Because the method does not rely on skeleton information, which is difficult to acquire and complicated to estimate, it can be used in ordinary CCTV environments where only a video camera is available. The novelty of MRGCN is twofold: it expresses the optical flow and gradient of the input image as directional histograms and converts them into six feature vectors to reduce the computational load, and it uses a newly developed radial network model to hierarchically propagate the deformation and shape changes of the human body in spatio-temporal space. Another important feature is that the input regions are arranged to overlap one another, so that information is not spatially disconnected between input nodes. In an evaluation on 30 actions, MRGCN achieved a Top-1 accuracy of 84.78%, which is superior to existing GCN-based action recognition methods that use skeleton data as input.
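
A sketch of one ingredient named above: converting dense optical flow into per-region directional histograms. The 2x3 grid (giving six regional feature vectors) and the bin count are assumptions for illustration, not the MRGCN specification.

```python
import cv2
import numpy as np

def directional_histograms(prev_gray, curr_gray, grid=(2, 3), bins=8):
    """Dense optical flow -> one magnitude-weighted direction histogram per region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])    # angles in radians
    h, w = mag.shape
    feats = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            ys = slice(gy * h // grid[0], (gy + 1) * h // grid[0])
            xs = slice(gx * w // grid[1], (gx + 1) * w // grid[1])
            hist, _ = np.histogram(ang[ys, xs], bins=bins, range=(0, 2 * np.pi),
                                   weights=mag[ys, xs])
            feats.append(hist / (hist.sum() + 1e-8))          # normalised histogram
    return np.stack(feats)                                    # shape: (regions, bins)
```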

Panorama Image Stitching Using Synthetic Fisheye Image (Synthetic fisheye 이미지를 이용한 360° 파노라마 이미지 스티칭)

  • Kweon, Hyeok-Joon;Cho, Donghyeon
    • Journal of Broadcast Engineering / v.27 no.1 / pp.20-30 / 2022
  • Recently, as VR (Virtual Reality) technology has been in the spotlight, 360° panorama images, which allow viewers to experience lifelike VR content, are attracting a lot of attention. Image stitching is a key technology for producing 360° panorama images, and many studies on it are being actively conducted. Typical stitching algorithms are based on feature points, but conventional feature point-based methods have the problem that the stitching result is strongly affected by the detected feature points. To address this, deep learning-based image stitching has recently been studied, but many problems remain when the overlap between images is small or the parallax is large. In addition, fully supervised learning is limited because labeled ground-truth panorama images cannot be obtained in a real environment. We therefore produced three fisheye images with different camera centers and the corresponding ground-truth image using the CARLA simulator, which is widely used in the autonomous driving field, and we propose an image stitching model that creates a 360° panorama image from the produced fisheye images. The final experiments, on a virtual dataset configured to resemble real environments, verify that the stitching results are robust to various environments and large parallax.
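
Not the learned stitching model proposed above, but a classical building block for handling such data: warping a single equidistant fisheye image into equirectangular coordinates. The field of view, output size, and equidistant projection model are assumptions.

```python
import cv2
import numpy as np

def fisheye_to_equirect(fish, fov_deg=180.0, out_h=512, out_w=1024):
    """Remap an equidistant fisheye image onto an equirectangular grid."""
    fh, fw = fish.shape[:2]
    cx, cy = fw / 2.0, fh / 2.0
    f = (fw / 2.0) / np.deg2rad(fov_deg / 2.0)        # equidistant model: r = f * theta

    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (j / out_w) * 2.0 * np.pi - np.pi           # longitude of each output pixel
    lat = np.pi / 2.0 - (i / out_h) * np.pi           # latitude of each output pixel

    x = np.cos(lat) * np.sin(lon)                     # viewing ray in the camera frame
    y = -np.sin(lat)                                  # image v axis points down
    z = np.cos(lat) * np.cos(lon)                     # optical axis of the fisheye
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)

    r = f * theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    out = cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    out[theta > np.deg2rad(fov_deg / 2.0)] = 0        # outside the fisheye's field of view
    return out
```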

Surface exposure age of (25143) Itokawa estimated from the number of mottles on the boulder

  • Jin, Sunho;Ishiguro, Masateru
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.45.2-46 / 2020
  • Various processes, such as space weathering and granular convection, occur on asteroid surfaces, and estimating the surface exposure timescale is essential for understanding them. The Hayabusa mission target asteroid, (25143) Itokawa (Sq-type), is the only asteroid whose age has been estimated from remote sensing observations as well as from sample analyses in laboratories. There is, however, an unignorable discrepancy between the timescales derived from these different techniques. The ages estimated from the solar flare track density and the weathered rim thickness of regolith samples range between 10² and 10⁴ years [1][2]. On the contrary, the ages estimated from the crater size distributions and the spectra cover 10⁶ to 10⁷ years [3][4]. It is important to note that both age estimation methods share a common drawback: since evidence of regolith migration is found on the surface of Itokawa [5], the surficial particles would be rejuvenated by granular convection, and at the same time the erasure of craters by regolith migration would affect the crater size distribution. We propose a new technique to estimate the surface exposure age, focusing on the bright mottles on large boulders; this technique is less prone to granular convection. The mottles are expected to be formed by impacts of mm- to cm-sized interplanetary particles. Together with the well-known flux model of interplanetary dust particles (e.g., Grün, 1985 [6]), we investigated the timescale required to form such mottles before they darken again through space weathering. In this work, we used three AMICA (Asteroid Multi-band Imaging Camera) v-band images taken on 2005 November 12 during the close approach to the asteroid. As a result, we found that the surface exposure timescales of these boulders are on the order of 10⁶ years. In this meeting, we will introduce our data analysis technique and evaluate the consistency with previous research for a better understanding of the evolution of this near-Earth asteroid.
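
The age estimate has the back-of-the-envelope form age ≈ (number of mottles) / (impactor flux × imaged boulder area). Every number below is a hypothetical placeholder, not a value from the paper or from Grün (1985); they are chosen only to show how an order-of-10⁶-year result would arise.

```python
n_mottles = 50                  # bright mottles counted on a boulder face (placeholder)
boulder_area_m2 = 4.0           # imaged area of the boulder face, m^2 (placeholder)
flux_per_m2_per_yr = 1.2e-5     # mm-to-cm-sized impacts per m^2 per year (placeholder)

exposure_age_yr = n_mottles / (flux_per_m2_per_yr * boulder_area_m2)
print(f"{exposure_age_yr:.2e} yr")   # ~1e6 yr with these placeholder numbers
```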

Quadruped Robot for Walking on the Uneven Terrain and Object Detection using Deep Learning (딥러닝을 이용한 객체검출과 비평탄 지형 보행을 위한 4족 로봇)

  • Myeong Suk Pak;Seong Min Ha;Sang Hoon Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.237-242 / 2023
  • Research on high-performance walking robots is being actively conducted, and quadruped walking robots are receiving a lot of attention for their excellent mobility and adaptability on uneven terrain, but their high cost makes them difficult to adopt and utilize. In this paper, to increase the utility of a low-cost quadruped robot by adding intelligent functions, we present a method that improves its ability to overcome uneven terrain by running an IMU and reinforcement learning on an embedded board, and that automatically detects objects using a camera and deep learning. The robot's legs are modeled after those of a quadruped mammal, and each leg has three degrees of freedom. We train the robot on complex terrain in a simulation environment with a designed 3D model and apply the result to the real robot. Applying this method, we confirmed that there was no significant difference in walking ability between flat and uneven terrain, and that the robot could perform person detection in real time under limited experimental conditions.
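
A minimal real-time detection loop of the kind that could run on such a robot. The paper uses a deep learning detector; this sketch substitutes OpenCV's classical HOG person detector so it stays self-contained, and the camera index is an assumption.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)                       # on-board camera (index assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("person detection", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```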

A Case Study on Quality Improvement of Electric Vehicle Hairpin Winding Motor Using Deep Learning AI Solution (딥러닝 AI 솔루션을 활용한 전기자동차 헤어핀 권선 모터의 용접 품질향상에 관한 사례연구)

  • Lee, Seungzoon;Sim, Jinsup;Choi, Jeongil
    • Journal of Korean Society for Quality Management / v.51 no.2 / pp.283-296 / 2023
  • Purpose: The purpose of this study is to implement and verify whether welding defects can be detected in real time by applying a deep learning AI solution to the welding process of electric vehicle hairpin winding motors. Methods: An artificial-neural-network-based AI solution was applied to the existing laser welding process for electric vehicle hairpin winding motors, together with dedicated hardware built to detect laser welding defects. Results: When the solution was tested on the welding process of the electric vehicle hairpin winding motor, defects in the welded parts were detected in real time. Weld detection reached an accuracy of 0.99 in terms of mAP@95, while detection of defective parts reached 1.18 in terms of FB-Score 1.5, which fell short of the target; this will be supplemented in the future with additional lighting, camera settings, and enhancement techniques. Conclusion: This study is significant in that it improves the welding quality of electric vehicle hairpin winding motors by applying a domestic artificial intelligence solution to their laser welding operations. Defects on the manufacturing line can be corrected immediately through automatic weld inspection after laser welding, reducing the waste caused by welding failures at the final stage, lowering input costs, and increasing product output.
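
If "FB-Score 1.5" denotes the F-beta measure with beta = 1.5 (our assumption; the abstract does not define it), the defect-detection score would be computed as follows, with made-up precision and recall values.

```python
def f_beta(precision: float, recall: float, beta: float = 1.5) -> float:
    """Weighted harmonic mean of precision and recall (recall weighted by beta)."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.92, 0.88), 3))   # example with placeholder precision/recall
```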

Application Analysis of Digital Photogrammetry and Optical Scanning Technique for Cultural Heritages Restoration (문화재 원형복원을 위한 수치사진측량과 광학스캐닝기법의 응용분석)

  • Han, Seung Hee;Bae, Yeon Soung;Bae, Sang Ho
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.5D / pp.869-876 / 2006
  • In the case of earthenware cultural heritage found in the form of fragments, the major task is quick and precise restoration. The existing method, which relies on trial and error, is not only very time-consuming but also lacks precision. If this work could be done with three-dimensional scanning, the pieces could be matched with remarkable efficiency. In this study, the original earthenware was modeled through three-dimensional pattern scanning and photogrammetry, and each of the fragments was scanned and modeled. To obtain images for photogrammetry, we calibrated and used a Canon EOS 1DS camera. We analyzed the relationships among the sections of the resulting models, compounded them efficiently, and analyzed the errors through residuals and a color error map. We also built a user-centered, web-based three-dimensional simulation environment for a virtual museum.
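
The abstract states that the camera was calibrated but not how; below is a standard OpenCV chessboard calibration sketch, with the board size and the image file pattern as assumptions.

```python
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners of the chessboard (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.jpg"):            # hypothetical calibration shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```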

Vehicle Type Classification Model based on Deep Learning for Smart Traffic Control Systems (스마트 교통 단속 시스템을 위한 딥러닝 기반 차종 분류 모델)

  • Kim, Doyeong;Jang, Sungjin;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.469-472 / 2022
  • With the recent development of intelligent transportation systems, various technologies applying deep learning are being used. To crack down on illegal and criminal vehicles on the road, a vehicle type classification system capable of accurately determining the type of vehicle is required. This study proposes a vehicle type classification system optimized for mobile traffic control systems using YOLO (You Only Look Once). The system uses the one-stage object detection algorithm YOLOv5 to detect vehicles and classify them into six classes: passenger cars; subcompact, compact, midsize, and full-size vans; trucks; motorcycles; special vehicles; and construction machinery. About 5,000 domestic vehicle images built by the Korea Institute of Science and Technology for the development of artificial intelligence technology were used as training data. We also propose a lane-designation control system that applies a vehicle type classification algorithm capable of recognizing both front and side views with a single camera.
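
The study trains YOLOv5 on Korean vehicle imagery; as a stand-in, this sketch runs the public pretrained YOLOv5s model from the Ultralytics hub, so the detected class names are COCO's (car, truck, bus, motorcycle) rather than the paper's vehicle classes.

```python
import torch

# Pretrained YOLOv5s from the public hub (stand-in for the study's custom model).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("traffic_scene.jpg")        # hypothetical roadside image
detections = results.pandas().xyxy[0]       # columns: xmin, ymin, xmax, ymax, confidence, class, name
vehicles = detections[detections["name"].isin(["car", "truck", "bus", "motorcycle"])]
print(vehicles[["name", "confidence"]])
```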
