• Title/Summary/Keyword: camera vision


Evaluation of Possibility for the Classification of River Habitat Using Imagery Information (영상정보를 활용한 하천 서식처 분류 가능성 평가)

  • Lee, Geun-Sang;Lee, Hyun-Seok
    • Journal of the Korean Association of Geographic Information Studies, v.15 no.3, pp.91-102, 2012
  • As a basis for environmental and ecological river management, this research developed a habitat classification method using imagery information to understand the distribution characteristics of fish living in a natural river. First, a topographic survey and measurements of discharge and water temperature were carried out to analyze the hydraulic characteristics of the fish habitat, and unmanned aerial photography was used to acquire river imagery at the time of observation. Riffle, pool, and glide regions were selected as the river habitat types for analyzing fish distribution characteristics. The analysis showed that the standard deviation of the RGB values on the riffle is higher than on the pool and glide because of the fast stream flow. In the classification accuracy assessment for the riffle region according to resolution and kernel size, based on this standard deviation of RGB, the highest classification accuracy was 77.17% at a resolution of 30 cm and a kernel size of 11. Water temperatures observed on the pool and glide with an infrared camera were 19.6~21.3°C and 15.5~16.5°C respectively, a difference of 4~5°C, so the pool and glide regions can be classified using infrared imagery. Habitat classification for determining fish distribution can therefore be carried out more efficiently if an unmanned aerial photography system with both RGB and infrared bands is applied.
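
The abstract does not spell out the exact classification rule; the following is a minimal Python/OpenCV sketch of the kernel-based RGB standard-deviation feature it describes. The 30 cm resolution is a property of the input image, and the threshold value and file name are assumptions for illustration only.

```python
import numpy as np
import cv2

def local_rgb_std(image_bgr, kernel_size=11):
    """Per-pixel standard deviation of each color channel over a square kernel."""
    img = image_bgr.astype(np.float32)
    k = (kernel_size, kernel_size)
    mean = cv2.blur(img, k)                  # local mean per channel
    mean_sq = cv2.blur(img * img, k)         # local mean of squares
    var = np.clip(mean_sq - mean * mean, 0, None)
    return np.sqrt(var).mean(axis=2)         # average std over the 3 channels

def classify_riffle(image_bgr, kernel_size=11, std_threshold=25.0):
    """Mark pixels whose local RGB variability exceeds a threshold as riffle."""
    std_map = local_rgb_std(image_bgr, kernel_size)
    return std_map > std_threshold           # boolean riffle mask

# usage (hypothetical file name):
# mask = classify_riffle(cv2.imread("river_orthophoto_30cm.png"), kernel_size=11)
```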

Analysis of the application of image quality assessment method for mobile tunnel scanning system (이동식 터널 스캐닝 시스템의 이미지 품질 평가 기법의 적용성 분석)

  • Chulhee Lee;Dongku Kim;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association, v.26 no.4, pp.365-384, 2024
  • The development of scanning technology is accelerating to enable safer and more efficient automated inspection than human-based inspection, and research on automatically detecting facility damage from collected images using computer vision technology is also increasing. The pixel size, quality, and quantity of an image can affect the performance of deep learning or image processing for automatic damage detection. This study is basic research on acquiring high-quality raw image data and assessing the camera performance of a mobile tunnel scanning system for deep-learning-based automatic damage detection, and it proposes a method to evaluate image quality quantitatively. A test chart was attached to a panel device capable of simulating a moving speed of 40 km/h, and an indoor test was performed using the international standard ISO 12233 method. Existing image quality assessment methods were applied to evaluate the quality of the images obtained in the indoor experiments. The shutter speed of the camera was found to be closely related to the motion blur that occurs in the image. The modulation transfer function (MTF), one of the image quality assessment methods, can evaluate image quality objectively and was judged to be consistent with visual observation.
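
The paper's full ISO 12233 slanted-edge procedure is more involved; as a rough illustration only, an MTF curve and its MTF50 summary value can be estimated from an already-extracted 1-D edge profile as sketched below (the windowing and interpolation choices are assumptions).

```python
import numpy as np

def mtf_from_edge_profile(edge_profile, sample_pitch=1.0):
    """Estimate the MTF from a 1-D edge spread function (ESF).

    Simplified illustration: the full ISO 12233 method first projects a slanted
    edge to build an oversampled ESF before this step.
    """
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)                     # line spread function
    lsf = lsf * np.hanning(lsf.size)           # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                              # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)  # cycles per pixel
    return freqs, mtf

def mtf50(freqs, mtf):
    """Spatial frequency at which the MTF drops to 0.5 (linear interpolation)."""
    below = np.where(mtf < 0.5)[0]
    if below.size == 0:
        return freqs[-1]
    i = below[0]
    f0, f1, m0, m1 = freqs[i - 1], freqs[i], mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
```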

Range finding algorithm of equidistance stereo catadioptric mirror (등거리 스테레오 전방위 렌즈 영상에 대한 위치 측정 알고리즘)

  • Choi, Young-Ho
    • Journal of Internet Computing and Services, v.6 no.6, pp.149-161, 2005
  • Catadioptric mirrors are widely used in automatic surveillance systems. The major drawback of a catadioptric mirror is its unequal image resolution, and an equidistance catadioptric mirror can be the solution to this problem. A double panoramic structure can even generate stereo images with a single camera, so the two images obtained from a double panoramic equidistance catadioptric mirror can be used to find the depth and height of object points. However, compared to a single catadioptric mirror, the image size of the double panoramic system is relatively small, which leads to a severe accuracy problem in the estimation. Errors in axial alignment and mirror mounting are sources that can be avoided through exact alignment and mounting, but focal length variation is inevitable. In this paper, the effects of focal length variation on the computation of the depth and height of object points are explained, and an effective focal length finding algorithm is presented, based on the assumption that an object's viewing angles are almost the same in the two stereo images.
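
The paper's exact mirror geometry and focal-length estimation algorithm are not given in the abstract; the sketch below only illustrates the generic two-viewpoint triangulation such a double panoramic system relies on, under the equidistance projection assumption r = f·θ and an assumed vertical baseline between the two virtual viewpoints.

```python
import math

def elevation_angle(r_pixels, focal_length):
    """Equidistance projection model: radial image distance r = f * theta."""
    return r_pixels / focal_length

def triangulate(theta_lower, theta_upper, baseline):
    """Depth and height of a point seen from two virtual viewpoints separated
    vertically by `baseline` (upper viewpoint sits `baseline` above the lower one).
    Angles are elevations from the horizontal; assumes the two angles differ."""
    t_lo, t_up = math.tan(theta_lower), math.tan(theta_upper)
    depth = baseline / (t_lo - t_up)     # horizontal distance to the point
    height = depth * t_lo                # height above the lower viewpoint
    return depth, height

# usage (all numbers are made up for illustration):
# th_lo = elevation_angle(r_pixels=120, focal_length=300)
# th_up = elevation_angle(r_pixels=100, focal_length=300)
# depth, height = triangulate(th_lo, th_up, baseline=0.05)
```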


A Robust Real-Time Lane Detection for Sloping Roads (경사진 도로 환경에서도 강인한 실시간 차선 검출방법)

  • Heo, Hwan;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering, v.2 no.6, pp.413-422, 2013
  • In this paper, we propose a novel method for real-time lane detection that is robust on sloping roads and does not require camera parameters, using the inverse perspective transform of the image and a proposed lane filter. After finding the vanishing point in the first frame and storing the region surrounding it as a template area (TA), the method predicts the lanes by scanning downward from the vanishing point and obtains an image with the perspective effect removed, using inverse perspective transform coefficients extracted from the predicted lanes. To determine lanes robustly on sloping roads, the vanishing point is recalculated by tracking the area similar to the TA (the SA) in each input image through template matching, so that the method adapts to changes in road conditions. Lanes are then detected by applying the proposed lane detection filter to the perspective-corrected image. With this approach, the processing region is reduced and the processing procedure is simplified, producing satisfactory lane detection results at about 40 frames per second.
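
The proposed lane filter itself is not specified in the abstract; the sketch below illustrates only the two reusable building blocks the method relies on, the inverse perspective (bird's-eye) transform and template matching around the vanishing-point region, with the source quadrilateral and output size chosen arbitrarily.

```python
import cv2
import numpy as np

def birds_eye_view(frame, src_pts, out_size=(400, 600)):
    """Remove the perspective effect: map a road-plane quadrilateral (ordered
    top-left, top-right, bottom-right, bottom-left in the image) to a rectangle."""
    w, h = out_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, M, (w, h))

def track_vanishing_region(frame_gray, template):
    """Re-locate the region around the vanishing point by template matching."""
    res = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return max_loc, max_val   # top-left corner of the best match and its score
```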

The 3D Depth Extraction Method by Edge Information Analysis in Extended Depth of Focus Algorithm (확장된 피사계 심도 알고리즘에서 엣지 정보 분석에 의한 3차원 깊이 정보 추출 방법)

  • Kang, Sunwoo;Kim, Joon Seek;Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems, v.22 no.2, pp.139-146, 2016
  • Recently, the popularity of 3D technology has grown significantly, and it has many applications in various fields of industry. To overcome the limitations of machine vision technologies based on 2D images, 3D measurement technologies are needed. There are many 3D measurement methods, such as scanning probe microscopy, phase shifting interferometry, confocal scanning microscopy, and white-light scanning interferometry. In this paper, we use the extended depth of focus (EDF) algorithm, which extracts 3D information from the 2D images acquired by a short-range depth camera. We propose an EDF algorithm that uses the edge information of the images and the average values of all pixels along the z-axis to improve on the performance of the conventional method. To verify the performance of the proposed method, we use various synthetic images generated with a point spread function (PSF) algorithm; because the depth information of these synthetic images is known, the proposed and conventional methods can be compared correctly. The experimental results show that the PSNR of the proposed algorithm is improved by about 1~30 dB over the conventional method.
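
The authors' edge-information variant is not detailed in the abstract; a minimal depth-from-focus baseline of the kind it builds on might look like the following, where the Laplacian-energy focus measure and kernel size are assumptions.

```python
import numpy as np
import cv2

def depth_from_focus(stack, ksize=9):
    """Depth map from a z-stack of grayscale images: for each pixel, pick the
    slice index where a local focus measure (smoothed squared Laplacian) peaks."""
    focus = []
    for img in stack:                                      # list of 2-D images
        lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
        focus.append(cv2.blur(lap * lap, (ksize, ksize)))  # local focus energy
    focus = np.stack(focus, axis=0)                        # shape: (z, H, W)
    return np.argmax(focus, axis=0)                        # per-pixel best-focus index
```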

Automated Bar Placing Model Generation for Augmented Reality Using Recognition of Reinforced Concrete Details (부재 일람표 도면 인식을 활용한 증강현실 배근모델 자동 생성)

  • Park, U-Yeol;An, Sung-Hoon
    • Journal of the Korea Institute of Building Construction, v.20 no.3, pp.289-296, 2020
  • This study suggests a methodology for automatically extracting placing information from 2D reinforced concrete detail drawings and generating a 3D reinforcement placing model, in order to develop a mobile augmented reality application for bar placing work. To make it easier for users to acquire placing information, users photograph the structural drawings with the camera built into a mobile device, and the placing information is extracted using vision recognition and an OCR (Optical Character Recognition) tool. In addition, an augmented reality app is implemented using a game engine, which allows users to automatically generate a 3D reinforcement placing model and review it by superimposing it on real images. Details of applying the proposed methodology with previously developed programming tools are described, and the results of implementing reinforcement augmented reality models for typical members at construction sites are reviewed. The methodology presented here is expected to be usable for learning bar placing work or for construction review.
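
The abstract does not name a specific OCR engine; as one possible realization of the capture-and-recognize step, a photographed member schedule could be binarized and passed to an off-the-shelf OCR library such as Tesseract (via pytesseract), as sketched below. The file name is hypothetical, and parsing the recognized text into bar marks is application-specific.

```python
import cv2
import pytesseract  # requires the Tesseract OCR engine to be installed

def read_schedule_text(photo_path):
    """Binarize a photographed drawing and run OCR on it."""
    gray = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    return pytesseract.image_to_string(binary)

# text = read_schedule_text("column_schedule.jpg")   # hypothetical file name
```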

An image enhancement Method for extracting multi-license plate region

  • Yun, Jong-Ho;Choi, Myung-Ryul;Lee, Sang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.6, pp.3188-3207, 2017
  • In this paper, we propose an image enhancement algorithm to improve the license plate extraction rate in various environments (daytime streets, nighttime streets, underground parking lots, etc.). The proposed algorithm is composed of an image enhancement step and a license plate extraction step. The image enhancement method improves the quality of a degraded image using histogram information and the overall gray-level distribution of the image. The algorithm employs an interpolated probability distribution value (PDV) to control sudden changes in image brightness; the probability distribution value is calculated from the cumulative distribution function (CDF) and the probability density function (PDF) of the captured image, which are obtained from its brightness distribution. In addition, by adjusting the image enhancement factor of each sub-region based on the image pixel information, the gradation of the image can be adjusted in finer detail. The processed gray image is converted into a binary image, in which morphology operations fuse narrow breaks and long thin gulfs, eliminate small holes, and fill gaps in the contours. The license plate region is then detected based on the aspect ratio and license plate size of the bounding boxes drawn around connected license plate areas. The images were captured by a video camera or a personal image recorder installed in front of the cars and include several license plates on multilane roads. Simulations were carried out using OpenCV and MATLAB, and the results show that the extraction success rate is improved over conventional algorithms.
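
The paper's interpolated PDV and per-region enhancement factors are not reproduced here; the sketch below shows only the underlying CDF-based gray-level mapping (plain histogram equalization) and an aspect-ratio filter over connected components of an already-binarized image, with the morphology kernel, area, and ratio bounds chosen as assumptions.

```python
import cv2
import numpy as np

def enhance_gray(gray):
    """Histogram-based enhancement: map gray levels through the image's own CDF.
    (The paper interpolates a PDV for smoother control; this is the plain version.)"""
    hist = np.bincount(gray.ravel(), minlength=256)
    pdf = hist / hist.sum()
    cdf = np.cumsum(pdf)
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[gray]

def plate_candidates(binary, min_area=500, ar_range=(2.0, 6.0)):
    """Bounding boxes of connected components whose aspect ratio looks like a plate."""
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((3, 15), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
    boxes = []
    for x, y, w, h, area in stats[1:]:          # skip the background label 0
        if area >= min_area and ar_range[0] <= w / h <= ar_range[1]:
            boxes.append((x, y, w, h))
    return boxes
```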

Boundary Depth Estimation Using Hough Transform and Focus Measure (허프 변환과 초점정보를 이용한 경계면 깊이 추정)

  • Kwon, Dae-Sun;Lee, Dae-Jong;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.1, pp.78-84, 2015
  • Depth estimation is often required for robot vision, 3D modeling, and motion control. A previous method is based on focus measures calculated for a series of images taken by a single camera at different distances between the camera and the object. This method, however, has the disadvantage of taking a long time to calculate the focus measure, since a mask operation is performed for every pixel in the image. In this paper, we estimate depth by using the focus measure of only the boundary pixels located between objects, in order to minimize the depth estimation time. To detect the boundaries of objects consisting of straight lines and circles, we use the Hough transform, and we then estimate depth using the focus measure. We performed various experiments on PCB images and obtained more effective depth estimation results than previous methods.
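
The abstract covers both straight-line and circular boundaries; the sketch below illustrates the line case only, using a probabilistic Hough transform and an absolute-Laplacian focus measure sampled along the detected segments (the focus-measure choice and all thresholds are assumptions).

```python
import cv2
import numpy as np

def boundary_focus_measure(image_gray):
    """Focus measure evaluated only at Hough-detected boundary-line pixels
    of an 8-bit grayscale image."""
    edges = cv2.Canny(image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return 0.0
    lap = np.abs(cv2.Laplacian(image_gray.astype(np.float32), cv2.CV_32F))
    values = []
    for x1, y1, x2, y2 in lines[:, 0]:
        n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1
        xs = np.linspace(x1, x2, n).astype(int)   # sample pixels along the segment
        ys = np.linspace(y1, y2, n).astype(int)
        values.append(lap[ys, xs])
    return float(np.concatenate(values).mean())

# Depth-from-focus: compute this measure for each image in the focus series and
# take the lens position with the maximum value as the boundary depth estimate.
```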

The Creation of Dental Radiology Multimedia Electronic Textbook (멀티미디어기술을 이용한 치과방사선학 전자 교과서 제작에 관한 연구)

  • Kim Eun-Kyung;Cha Sang-Yun;Han Won-Jeong;Hong Byeong-Hee
    • Imaging Science in Dentistry, v.30 no.1, pp.55-62, 2000
  • Purpose: This study was performed to develop an electronic textbook (CD-ROM title) on the preclinical practice of oral and maxillofacial radiology, using multimedia technology in an interactive environment. Materials and Methods: After comparing three multimedia authoring approaches, i.e. a programming language, a multimedia authoring tool, and a web authoring tool, we chose the web authoring tool for our electronic textbook. An Intel Pentium II 350 MHz IBM-compatible personal computer with 128 megabytes of RAM, a Umax Powerlook flatbed scanner with a transparency unit, an Olympus Camedia 1400L digital camera, an ESS 1686 sound card, a Sony 8 mm Handycam, a PC Vision 97 Pro capture board, Namo Web Editor 3.0, Photoshop 3.0, ThumbNailer, RealPlayer 7 Basic, and RealProducer G2 were used to create the text documents, diagrams, figures, X-ray images, video files, and sound files. We made use of JavaScript for the tree menu structure, moving text bar, link buttons, spread list menus, image maps, etc. After creating all the files and hyperlinking them, we burned a prototype CD-ROM title containing all of the above multimedia data, Netscape Communicator, and the plug-in programs. Results and Conclusions: We developed a dental radiology electronic textbook which has 9 chapters and consists of 155 text documents, 26 figures, 150 X-ray image files, 20 video files, 20 sound files, and 50 questions with answers. We expect that this CD-ROM title can be used in intranet and internet environments and that continuous updates can be performed easily.


Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services, v.21 no.3, pp.113-121, 2020
  • Human pose estimation (HPE), which localizes the human body joints, has high potential for high-level applications in the field of computer vision. The main challenges of real-time HPE are occlusion, illumination change, and the diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce the computation cost, since it requires only a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges, due to the inherent limitations of color and texture. On the other hand, depth information, which locates the human body parts in 3D coordinates, can be fed into an HPE framework to address these challenges; however, depth-based HPE requires a depth-dependent device, which has space constraints and is costly. In particular, the results of depth-based HPE are less reliable because of the need for pose initialization and the lower stability of frame tracking. Therefore, this paper proposes a new HPE method that is robust in estimating self-occlusion. Many human parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method is a combination of an RGB image-based HPE framework and a depth information-based HPE framework. We evaluated the performance of the proposed method using the COCO Object Keypoint Similarity library. By taking advantage of both the RGB image-based and the depth information-based HPE methods, our RGB-D-based HPE method achieved an mAP of 0.903 and an mAR of 0.938, showing that it outperforms both RGB-based and depth-based HPE.
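
The paper's fusion of the two frameworks for handling head self-occlusion is not detailed in the abstract; the sketch below shows only the generic step of lifting 2D joints from an RGB pose estimator to 3D using an aligned depth map and assumed pinhole intrinsics (fx, fy, cx, cy).

```python
import numpy as np

def keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D joint locations to camera-space 3D points using an
    aligned depth map and pinhole intrinsics (all intrinsics are assumed known)."""
    joints_3d = []
    for u, v in keypoints_2d:                 # pixel coordinates from an RGB pose estimator
        z = float(depth_map[int(v), int(u)])  # depth at that pixel (e.g. metres)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        joints_3d.append((x, y, z))
    return np.array(joints_3d)
```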