• Title/Summary/Keyword: Top-View Image


Fast and Accurate Visual Place Recognition Using Street-View Images

  • Lee, Keundong;Lee, Seungjae;Jung, Won Jo;Kim, Kee Tae
    • ETRI Journal / v.39 no.1 / pp.97-107 / 2017
  • A fast and accurate building-level visual place recognition method built on an image-retrieval scheme using street-view images is proposed. Reference images generated from street-view images usually depict multiple buildings and confusing regions, such as roads, sky, and vehicles, which degrade retrieval accuracy and cause matching ambiguity. The proposed practical database refinement method uses informative reference image and keypoint selection. For database refinement, the method uses the spatial layout of the buildings in the reference image, specifically a building-identification mask image obtained from a prebuilt three-dimensional model of the site. A global-positioning-system-aware retrieval structure is also incorporated. To evaluate the method, we constructed a dataset over an area of $0.26 km^2$ comprising 38,700 reference images and corresponding building-identification mask images. The proposed method removed 25% of the database images through informative reference image selection. It achieved 85.6% recall for the top five candidates in 1.25 s of full processing, thus attaining high accuracy at low computational complexity.
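As a rough sketch of how a GPS-aware retrieval structure can prune the reference database before descriptor matching (a minimal illustration; the dictionary fields, query tuple, and 100 m radius are assumptions, not details from the paper):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two GPS fixes.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gps_filter(query, references, radius_m=100.0):
    # Keep only reference images near the query's GPS fix, so that
    # descriptor matching runs on a small candidate set.
    return [ref for ref in references
            if haversine_m(query[0], query[1], ref["lat"], ref["lon"]) <= radius_m]
```

The descriptor index is then queried only against the surviving candidates, which is where most of the speed-up of a position-aware retrieval structure comes from.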

Accuracy Comparison of TOA and TOC Reflectance Products of KOMPSAT-3, WorldView-2 and Pléiades-1A Image Sets Using RadCalNet BTCN and BSCN Data

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.21-32 / 2022
  • The question of how well the Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance of high-resolution satellite images match actual atmospheric and surface reflectance is a classical but still important theme. Based on Radiometric Calibration Network (RadCalNet) BTCN and BSCN data, this study compared the accuracy of TOA and TOC reflectance products of currently available optical satellites, including KOMPSAT-3, WorldView-2, and Pléiades-1A image sets, computed with the absolute atmospheric correction function of the Orfeo Toolbox (OTB). The comparison experiment used data from 2018 and 2019, and Landsat-8 image sets from the same period were included. The results showed that the TOA and TOC reflectance products obtained from the three image sets were highly consistent with the RadCalNet data, implying that any of these imagery may be used when high-resolution reflectance products are required for a given application. Meanwhile, the results processed with the OTB tool and those produced by the Apparent Reflection method of another tool for WorldView-2 images were nearly identical. However, in some cases the reflectance products of Landsat-8 images provided by USGS showed lower consistency with the RadCalNet BTCN and BSCN reference data than those computed by the OTB tool. Continuous experiments on actively vegetated areas beyond the RadCalNet sites are needed to obtain generalized results.
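The TOA reflectance being compared follows the standard conversion from at-sensor radiance; a minimal sketch of that formula (the band values in the usage below are made-up numbers, not RadCalNet data):

```python
import math

def toa_reflectance(radiance, esun, sun_elevation_deg, d_au=1.0):
    # Standard TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(theta_s)),
    # where L is at-sensor radiance, ESUN the band solar irradiance, d the
    # Earth-Sun distance in AU, and theta_s the solar zenith angle
    # (90 deg minus the sun elevation).
    theta_s = math.radians(90.0 - sun_elevation_deg)
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))
```

TOC reflectance additionally removes atmospheric effects (as OTB's absolute atmospheric correction does); the TOA step above is the common starting point for both products.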

Image Processing Algorithm for Weight Estimation of Dairy Cattle

  • Seo, Kwang-Wook;Kim, Hyeon-Tae;Lee, Dae-Weon;Yoon, Yong-Cheol;Choi, Dong-Yoon
    • Journal of Biosystems Engineering / v.36 no.1 / pp.48-57 / 2011
  • A computer vision system was designed and constructed to measure the weight of dairy cattle. Its development integrated image capture, image preprocessing, image-processing algorithms, and control into one program. Experiments were conducted in two ways, with a model dairy cow and with real dairy cattle. The first experiment used an indoor vision system built to measure the model cow in the laboratory. The second used an outdoor vision system built to measure 229 cows in cattle facilities. The vision system proved to be reliable in a performance test with 15 real cows in the cattle facilities. Indirect weight estimation with four methods was conducted using the same image processing system used to measure body parameters. The error of the transform equation using chest girth was 30%, which was attributed to error accumulated through manual measurement; estimating cow weight from a transform equation calculated from pixel values of the chest girth was therefore not appropriate. Weight estimated by a multiple regression equation from top- and side-view images had a relatively small error of 5%. When weight was estimated indirectly from the image surface area in pixels of the top- and side-view images, the maximum error was 11.7%. When weight was estimated from image volume, the maximum weight error was 57 kg; generally, the weight error was within 30 kg, with a maximum error of 10.7%. Of the four methods, the volume transform method had the minimum weight error of 21.8 kg.
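The multiple-regression estimate from top- and side-view features can be sketched with ordinary least squares via the normal equations (a hypothetical illustration on synthetic areas and weights; the paper's actual regressors and coefficients are not given in the abstract):

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_weight_model(tops, sides, weights):
    # Least-squares fit of weight = b0 + b1*top_area + b2*side_area
    # via the normal equations X^T X b = X^T y.
    X = [[1.0, t, s] for t, s in zip(tops, sides)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(X[k][i] * weights[k] for k in range(len(X))) for i in range(3)]
    return solve(XtX, Xty)
```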

Improved Polynomial Model for Multi-View Image Color Correction

  • Jung, Jae-Il;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.10 / pp.881-886 / 2013
  • Even though a multi-view camera system can capture multiple images at different viewpoints, the color distributions of the captured multi-view images can be inconsistent. This problem decreases the quality of multi-view images and the performance of subsequent image processing. In this paper, we propose an improved polynomial model for effectively correcting the color inconsistency problem. The algorithm is fully automatic, requires no pre-processing, and accounts for occlusion regions of the multi-view image. We use a 5th-order polynomial model to define a relative mapping curve between the reference and source views. The estimated curve can be seriously distorted when the dynamic range of the extracted correspondences is low, so we additionally estimate first-order polynomial models for the bottom and top regions of the dynamic range. The colors of the source view are then modified via these models. The proposed algorithm shows good subjective results and better objective quality than conventional color correction algorithms.
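The piecewise mapping idea — a 5th-order polynomial over most of the dynamic range, first-order models at its bottom and top — can be sketched as follows (the coefficients and thresholds in the usage are hypothetical placeholders, not fitted values from the paper):

```python
def correct_intensity(v, poly, lin_low, lin_high, lo=16, hi=240):
    # Piecewise color mapping: a 5th-order polynomial inside the observed
    # dynamic range, linear (first-order) models at the bottom and top
    # where the polynomial fit is unreliable.
    if v < lo:
        a, b = lin_low
        return a + b * v
    if v > hi:
        a, b = lin_high
        return a + b * v
    # Horner evaluation; poly lists coefficients from highest order down.
    r = 0.0
    for c in poly:
        r = r * v + c
    return r
```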

Vision-based Walking Guidance System Using Top-view Transform and Beam-ray Model

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.93-102 / 2011
  • This paper presents a walking guidance system for blind pedestrians in an outdoor environment using just a single camera. Unlike many existing travel-aid systems that rely on stereo vision, the proposed system obtains the necessary information about the road environment from a single camera fixed at the belly of the user. To achieve this, a top-view image of the road is used, on which obstacles are detected by first extracting local extreme points and then verifying them with a polar edge histogram. Meanwhile, user motion is estimated using optical flow in an area close to the user. Based on this information extracted from the image domain, an audio message generation scheme is proposed to deliver guidance instructions to the blind user via synthetic voice. Experiments with several sidewalk video clips show that the proposed walking guidance system can provide useful guidance instructions in certain sidewalk environments.
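The top-view transform maps road-plane pixels through a homography; applying such a 3x3 matrix to a point can be sketched as follows (the matrices in the usage are toy examples, not the calibration used in the paper):

```python
def warp_point(H, x, y):
    # Apply a 3x3 homography (row-major nested list) to an image point in
    # homogeneous coordinates, as done when remapping a forward-looking
    # road view onto a top view.
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w
```

On the resulting top view, distances along the road become roughly proportional to pixel distances, which is what makes the subsequent obstacle localization tractable.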

High Accurate Cup Positioning System for a Coffee Printer

  • Kim, Heeseung;Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1950-1956 / 2017
  • In the food-printing field, a precise positioning technique for the printing object is very important. In this paper, we propose a cup-positioning method for a latte-art printer based on image processing. A camera sensor is installed on the upper side of the printer, and the image obtained from it is projected and converted into a top-view image. The edge lines of the image are detected first, and then the center coordinates and radius of the cup are found through a circular Hough transformation. Performance evaluation shows an image processing time of 0.1 to 0.125 s and a cup detection rate of 92.26%, meaning that a cup is detected almost perfectly without affecting the overall latte-art printing time. The center coordinates and radius values of the cups detected by the proposed method show very small errors, averaging less than 1.5 mm. The printing position error problem therefore appears to be solved.
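A minimal sketch of the circular Hough voting used for cup detection (synthetic edge points on a toy grid; the real system first projects the camera image to a top view and extracts edges):

```python
import math

def hough_circle(edges, radii, width, height):
    # Vote each edge point onto candidate centers (a, b) for every radius r:
    # a circle passing through many edge points accumulates many votes at
    # its (a, b, r) cell, so the argmax is the detected circle.
    acc = {}
    for (x, y) in edges:
        for r in radii:
            for t in range(0, 360, 5):
                a = int(round(x - r * math.cos(math.radians(t))))
                b = int(round(y - r * math.sin(math.radians(t))))
                if 0 <= a < width and 0 <= b < height:
                    key = (a, b, r)
                    acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)
```

The returned center and radius are in top-view pixels; converting them to millimeters requires the known scale of the top-view projection.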

A Crosswalk and Stop Line Recognition System for Autonomous Vehicles

  • Park, Tae-Jun;Cho, Tai-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.2 / pp.154-160 / 2012
  • Recently, technologies for autonomous vehicles have been actively developed. This paper proposes a computer vision system that recognizes lanes, crosswalks, and stop lines for autonomous vehicles. The system first recognizes the lanes required for autonomous driving using the RANSAC algorithm and the Kalman filter, then changes the viewpoint from the perspective view of the street to a top view, exploiting the fact that the lanes are parallel. In the reconstructed top-view image, the system recognizes a crosswalk based on its geometric characteristics and searches for a stop line within a region of interest in front of the recognized crosswalk. Experimental results show excellent performance of the proposed vision system in recognizing lanes, crosswalks, and stop lines.
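The RANSAC lane-fitting step can be sketched as follows (a generic two-point line RANSAC on synthetic points; the threshold, iteration count, and seed are assumptions, not values from the paper):

```python
import random

def ransac_line(points, iters=200, tol=2.0, seed=1):
    # Fit a line y = m*x + c robustly: repeatedly hypothesize a line from
    # two random points and keep the model with the largest inlier set,
    # so stray edge pixels do not pull the lane estimate off the markings.
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical hypotheses in this simple slope form
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        inliers = sum(1 for (x, y) in points if abs(m * x + c - y) <= tol)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best
```

In the actual pipeline, a Kalman filter would then smooth the fitted lane parameters across frames.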

Semiautomatic 3D Virtual Fish Modeling based on 2D Texture

  • Nakajima, Masayuki;Hagiwara, Hisaya;Kong, Wai-Ming;Takahashi, Hiroki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06b / pp.18-21 / 1996
  • In the field of virtual reality, many studies have been reported, especially on generating virtual creatures on computer systems. In this paper, we propose an algorithm to automatically generate 3D fish models from 2D images printed in illustrated books, pictures, or hand drawings. First, 2D fish images are captured with an image scanner. Next, the fish image is separated from the background and segmented into several parts, such as the body, anal fin, dorsal fin, pectoral fin, and ventral fin, using the proposed "Active Balloon model". Users then choose a front-view model and a top-view model from six samples each. A 3D model is automatically generated from the separated body, fins, and the two chosen view models. The number of patches is decreased, without affecting the accuracy of the generated 3D model, to reduce the time cost of texture mapping. As a result, various kinds of 3D fish models can be obtained.

Vehicle Manufacturer Recognition using Deep Learning and Perspective Transformation

  • Ansari, Israfil;Shim, Jaechang
    • Journal of Multimedia Information System / v.6 no.4 / pp.235-238 / 2019
  • Object detection in real-world images is an active research topic aimed at understanding the different objects in an image. Different models have been presented in the past with significant results. In this paper, we present vehicle logo detection using existing object detection models, You Only Look Once (YOLO) and Faster Region-based CNN (F-RCNN). Both front and rear views of the vehicles were used for training and testing the proposed method. In addition to deep learning, an image pre-processing step based on perspective transformation is proposed for all test images: top-view images are transformed into front-view images. This pre-processing yields a higher detection rate than raw images. Furthermore, the YOLO model gives better results than the F-RCNN model.
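A perspective transformation applied to an image works by inverse-mapping each output pixel through the inverse homography; a minimal nearest-neighbor sketch (the matrices and the tiny "image" in the usage are toy values, not the transform used in the paper):

```python
def warp_image(img, Hinv, out_w, out_h):
    # Inverse-map each destination pixel through Hinv and sample the source
    # by nearest neighbor: the usual way a perspective warp is applied.
    h, w = len(img), len(img[0])
    out = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            xh = Hinv[0][0] * x + Hinv[0][1] * y + Hinv[0][2]
            yh = Hinv[1][0] * x + Hinv[1][1] * y + Hinv[1][2]
            wz = Hinv[2][0] * x + Hinv[2][1] * y + Hinv[2][2]
            sx, sy = int(round(xh / wz)), int(round(yh / wz))
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = img[sy][sx]
    return out
```

Production code would estimate the homography from four point correspondences and use bilinear interpolation, but the inverse-mapping loop is the core of the operation.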

Development of Measurement System for Contact Angle and Evaporation Characteristics of a Micro-droplet on a Substrate

  • Kwon, Kye-Si;An, Seung-Hyun;Jang, Min Hyuck
    • Journal of the Korean Society for Precision Engineering / v.30 no.4 / pp.414-420 / 2013
  • We developed an inkjet-based measurement system for micro-droplet behavior on a substrate. Using the inkjet dispenser, a droplet as small as a few picoliters in volume can be jetted, and its amount can be controlled. After jetting, an image of the droplet on the substrate is acquired from a side-view camera, and the droplet profile is extracted to measure the droplet volume, contact angle, and evaporation characteristics. A top-view image of the droplet is also acquired for a better understanding of the droplet shape. Previous contact angle measurement methods have limitations, since they mainly measure the ratio of the height to the contact diameter of the droplet on the substrate. Unlike previous measurement systems, the proposed method can effectively analyze various behaviors of a droplet on a substrate by extracting the full droplet profile.
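The height-to-contact-diameter relation underlying the conventional contact-angle estimate can be written down under a spherical-cap assumption (a standard approximation, not the paper's full profile-extraction method):

```python
import math

def contact_angle_deg(height, contact_diameter):
    # Spherical-cap approximation: theta = 2 * atan(2h / d), relating the
    # droplet height h and contact diameter d to the contact angle.
    return math.degrees(2.0 * math.atan(2.0 * height / contact_diameter))
```

A hemispherical droplet (h = d/2) gives exactly 90 degrees; profile-based methods like the one proposed avoid the spherical-cap assumption entirely.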