• Title/Summary/Keyword: camera image


A STUDY ON THE OPTIMAL ILLUMINATION POWER OF DIFOTI (DIFOTI 영상 최적화를 위한 광량에 대한 연구)

  • Kim, Jong-Bin;Kim, Jong-Soo;Yoo, Seung-Hoon;Kim, Yong-Kee
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.37 no.1
    • /
    • pp.13-23
    • /
    • 2010
  • This study was performed to compare image-processing quality between a newly developed prototype using a light-emitting diode (LED) and the conventional DIFOTI™ system (EOS Inc., USA). To estimate the optimal light-emitting power for improved images, primary enamel surfaces treated with Carbopol 907 demineralizing solution were imaged daily with both DIFOTI systems over a 20-day experimental period. The results of the comparative analyses of the images from both systems, with polarized images as the gold standard, can be summarized as follows: 1. Trans-illumination indices of images taken from primary enamel surfaces decreased over time in both systems. 2. The difference in luminance intensity between sound and demineralized enamel surfaces was relatively smaller in the prototype DIFOTI system than in the conventional DIFOTI™ system. 3. With polarized images as the gold standard, the luminance-intensity difference between sound and demineralized enamel surfaces in the DIFOTI™ system correlated more strongly with the polarized images than that of the prototype system. Along with the optimal LED emitting power, control of the digital camera aperture is considered another key factor in improving DIFOTI images. For the best image quality and analysis, improved image-processing software needs to be developed.
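
A minimal sketch of the kind of luminance comparison the abstract relies on: the mean gray level of a sound-enamel region versus a demineralized region in a DIFOTI frame. The ROI coordinates and file name are placeholders for illustration, not values from the study.

```python
# Compare mean luminance of a sound vs. a demineralized enamel ROI.
# ROI coordinates and the file name are hypothetical.
import cv2

def luminance_difference(image_path, sound_roi, demin_roi):
    """Each ROI is (x, y, w, h) in pixel coordinates of a grayscale DIFOTI frame."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    def mean_roi(roi):
        x, y, w, h = roi
        return float(gray[y:y + h, x:x + w].mean())

    # A larger difference indicates stronger contrast between sound and lesion areas.
    return mean_roi(sound_roi) - mean_roi(demin_roi)

# e.g. track the contrast daily over the 20-day demineralization period
print(luminance_difference("day_01.png", sound_roi=(10, 10, 40, 40), demin_roi=(80, 10, 40, 40)))
```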

Face recognition using PCA and face direction information (PCA와 얼굴방향 정보를 이용한 얼굴인식)

  • Kim, Seung-Jae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.6
    • /
    • pp.609-616
    • /
    • 2017
  • In this paper, we propose an algorithm that achieves a more stable and higher recognition rate in face recognition by using the left and right rotation information of the input image. The proposed algorithm takes facial images from a web camera as input, reduces the image size, and normalizes brightness and color to improve the recognition rate. Principal Component Analysis (PCA) is applied to the detected candidate regions to obtain feature vectors and classify faces. In addition, to narrow the error range of the recognition rate, a data set with left and right 45° rotation information is constructed to account for the orientation of the input face image, and a feature vector is obtained for each with PCA. To obtain a stable recognition rate from these feature vectors, they are projected into the eigenspace and the final face is recognized by comparing the Euclidean distance to each feature. The PCA-based feature vector is low dimensional yet sufficient to represent the face, and recognition can be fast because of the small amount of computation. The proposed method improves the stability and accuracy of recognition, is faster than other algorithms, and can be used in real-time recognition systems.
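
A minimal sketch of the PCA-plus-Euclidean-distance matching described above, using eigenfaces computed with NumPy; the number of components and the array shapes are illustrative assumptions, not the paper's configuration.

```python
# Eigenface-style PCA followed by nearest-neighbor matching in eigenspace.
import numpy as np

def fit_pca(X, n_components=50):
    """X: (n_samples, n_pixels) flattened, brightness-normalized face images."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal directions via SVD of the centered data (eigenfaces).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    return (X - mean) @ components.T            # feature vectors in eigenspace

def recognize(probe, gallery_feats, gallery_labels, mean, components):
    f = project(probe[None, :], mean, components)[0]
    d = np.linalg.norm(gallery_feats - f, axis=1)   # Euclidean distance to each gallery face
    return gallery_labels[int(np.argmin(d))]        # label of the nearest face
```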

Optical System Design of Compact Head-Up Display (HUD) using Micro Display (마이크로 디스플레이를 이용한 소형 헤드업 디스플레이 광학계 설계)

  • Han, Dong-Jin;Kim, Hyun-Hee
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.9
    • /
    • pp.6227-6235
    • /
    • 2015
  • As a see-through information display device, the HUD has recently been downsized thanks to developments in micro-display and LED technology, and its application areas are gradually expanding. In this paper, a compact head-up display (HUD) optical system using a DLP micro-display device was designed for biocular observation of a 5-inch image display area. Each design element of the optical system was analyzed in order to design a compact HUD, and the design approach and characteristics of the DLP, the projection optical system, and the concave image combiner were discussed. Through an analysis of how the optical subsystems are connected, detailed design specifications were established and the optical system was designed in detail. A folded configuration with a white diffuse reflector was placed between the projection lens and the concave image combiner so that the two could be designed independently. The distance of the projected image is adjustable from approximately 2 m to infinity, and the observation distance is 1 m. A resolution of 1-2 pixels in the HD class (1,280 × 720 pixels) could be recognized, and various characters and symbols could be read. In addition, a color navigation map and imagery from daytime video and thermal cameras can be displayed.
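
As a rough plausibility check on the numbers quoted above (an HD image of about 5 inches viewed from 1 m), the sketch below estimates how much of the visual field one pixel subtends; treating the 5 inches as the image diagonal and assuming a 16:9 split are illustrative assumptions.

```python
# Back-of-the-envelope angular size of one pixel for a 5-inch, 1280x720 image at 1 m.
import math

diag_m = 5 * 0.0254                     # 5-inch diagonal in meters (assumed diagonal)
aspect = (1280, 720)
diag_px = math.hypot(*aspect)
width_m = diag_m * aspect[0] / diag_px  # physical width of the displayed image
pixel_m = width_m / aspect[0]           # physical size of one pixel
view_dist_m = 1.0                       # observation distance from the abstract

pixel_arcmin = math.degrees(math.atan(pixel_m / view_dist_m)) * 60
print(f"one pixel subtends ~{pixel_arcmin:.2f} arcmin at 1 m")
```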

Production of Low-illuminated Image Sets based on Spectral Data for Color Constancy Research (색 항등성을 위한 분광 데이터 기반의 저조도 영상 집합 생성)

  • Kim, Dal-Hyoun;Lee, Woo-Ram;Hwang, Dong-Guk;Jun, Byoung-Min
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.7
    • /
    • pp.3207-3213
    • /
    • 2011
  • Most color-constancy methods, that is, methods for determining object color regardless of the scene illuminant, have fallen short of expectations, especially for low-illuminated scenes. Higher-performance methods need to be developed, but above all we must first obtain experimental images for analyzing the relevant conditions and evaluating the methods. Therefore, this paper produces new image sets that can be used in the development of color-constancy methods suitable for low-illuminated scenes. The sets are composed of two parts: images synthesized from the spectral power distribution (SPD) of the illuminants, the spectral reflectance curves of the surfaces, and the sensor response functions of the camera; and images in which the intensity of each image is adjusted at a uniform rate. In experiments, using these sets has the advantage that the resulting images can be analyzed and evaluated quantitatively because their ground-truth data are known in advance.
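
A minimal sketch of the spectral synthesis step described above: per-pixel camera responses are obtained by combining the illuminant SPD, the surface reflectance spectra, and the camera sensor response functions, and low-illumination variants are made by uniform intensity scaling. The array shapes and wavelength sampling are assumptions.

```python
# Synthesize an RGB image from spectral data, then dim it uniformly.
import numpy as np

def synthesize(spd, reflectance, sensitivity):
    """
    spd:          (n_wavelengths,)       illuminant spectral power distribution
    reflectance:  (H, W, n_wavelengths)  per-pixel spectral reflectance
    sensitivity:  (n_wavelengths, 3)     camera RGB sensor response functions
    returns:      (H, W, 3)              linear RGB image in [0, 1]
    """
    radiance = reflectance * spd        # light reflected toward the camera at each pixel
    rgb = radiance @ sensitivity        # integrate over wavelength per color channel
    return rgb / rgb.max()

def dim(image, rate):
    """Scale intensity at a uniform rate to simulate a low-illumination version."""
    return np.clip(image * rate, 0.0, 1.0)
```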

Analysis of Heat Environment in Nursery Pig Behavior (자돈의 행동에 미치는 열환경 분석)

  • Sang, J.I.;Choi, H.L.;Jeon, J.H.;Jeon, B.S.;Kang, H.S.;Lee, E.S.;Park, K.H.
    • Journal of Animal Environmental Science
    • /
    • v.15 no.2
    • /
    • pp.131-138
    • /
    • 2009
  • This study was conducted to find ways of controlling the environment using the difference between body temperature and background temperature, as reflected in swine activity, and to apply the findings to the environmental control system of swine barns. The results are as follows. 1. Swine activity in relation to background temperature was captured as color images, and activity status was categorized into cold, comfortable, and hot periods with a visualization (thermal imaging) system. 2. The thermal imaging system consisted of an infrared CCD camera, an image-processing board (DIF TH3100), and a main computer (400 MHz, 128 MB, 586 Pentium model) with a C++ program installed. 3. The thermal imaging system categorizing temperatures into cold, comfortable, and hot periods was applicable to the environmental control system of swine barns. 4. Feed intake was higher in the cold period, while finishing weight and daily weight gain were lower in the cold period than in the other periods (p < 0.05).
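
One way the body-to-background temperature difference mentioned above could be turned into a cold/comfortable/hot label is sketched below; the segmentation threshold and the period cut-offs are hypothetical values, not the study's.

```python
# Classify the thermal environment from a single thermal frame.
# Thresholds are hypothetical and would need calibration against observed behavior.
import numpy as np

def classify_period(thermal, pig_threshold_c=30.0, cold_gap=14.0, hot_gap=8.0):
    """thermal: (H, W) array of temperatures in degrees Celsius."""
    pig_mask = thermal > pig_threshold_c          # warm pixels ~ pig bodies
    body_t = thermal[pig_mask].mean()
    background_t = thermal[~pig_mask].mean()
    gap = body_t - background_t                   # body-to-background difference
    if gap > cold_gap:
        return "cold"          # large gap: cold background (pigs tend to huddle)
    if gap < hot_gap:
        return "hot"           # small gap: warm background (pigs spread out)
    return "comfortable"
```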


Study on vision-based object recognition to improve performance of industrial manipulator (산업용 매니퓰레이터의 작업 성능 향상을 위한 영상 기반 물체 인식에 관한 연구)

  • Park, In-Cheol;Park, Jong-Ho;Ryu, Ji-Hyoung;Kim, Hyoung-Ju;Chong, Kil-To
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.4
    • /
    • pp.358-365
    • /
    • 2017
  • In this paper, we propose an object recognition method that uses image information to improve the efficiency of visual servoing for industrial manipulators. It is an image-processing method for responding in real time to abnormal situations or to changes in the external environment around a work object by using the camera-image information of an industrial manipulator. To improve the recognition rate of the existing Harris corner algorithm, the proposed method applies Otsu thresholding to the V and S channels of the HSV color space, from which the background is easy to separate. With this approach, when the work object is not placed in the correct position or is rotated due to external factors, its position is calculated and provided to the industrial manipulator.
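
A minimal sketch of the preprocessing idea described above: Otsu thresholding on the S and V channels of HSV to mask out the background before Harris corner detection. How the two masks are combined and the corner-response threshold are assumptions, not the paper's exact settings.

```python
# Otsu on HSV channels as a background mask, then Harris corners on the object only.
import cv2
import numpy as np

def detect_object_corners(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s_mask = cv2.threshold(hsv[:, :, 1], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, v_mask = cv2.threshold(hsv[:, :, 2], 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Combined foreground mask; which side of each threshold is "object" is scene-dependent.
    mask = cv2.bitwise_and(s_mask, v_mask)

    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = (harris > 0.01 * harris.max()) & (mask > 0)   # keep corners on the object only
    return np.argwhere(corners)                             # (row, col) corner locations
```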

A Method for Effective Homography Estimation Applying a Depth Image-Based Filter (깊이 영상 기반 필터를 적용한 효과적인 호모그래피 추정 방법)

  • Joo, Yong-Joon;Hong, Myung-Duk;Yoon, Ui-Nyoung;Go, Seung-Hyun;Jo, Geun-Sik
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.61-66
    • /
    • 2019
  • Augmented reality is a technology that makes a virtual object appear to exist in reality by compositing it in real time with the image captured by the camera. In order to augment a virtual object onto an object existing in reality, the homography between images is used to estimate the position and orientation of the object. The homography can be estimated by applying the RANSAC algorithm to the feature points of the images, but this approach cannot estimate an accurate homography when there are many feature points in the background. In this paper, we propose a method that filters out background feature points when the object is near and the background is relatively far away. We classify the depth image into a relatively near region and a distant region using Otsu's method, and improve homography estimation performance by filtering out feature points in the distant region. In experiments, processing time was shortened by 71.7% compared with a conventional homography estimation method, the number of RANSAC iterations was reduced by 69.4%, and the inlier rate increased by 16.9%.
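
A minimal sketch of the depth-based filtering described above: Otsu's method splits the depth map into near and far regions, matches that land in the far region are discarded, and the homography is then estimated with RANSAC. The ORB/BFMatcher feature pipeline and the assumption that larger depth values mean "farther" are illustrative choices, not the paper's exact setup.

```python
# Depth-filtered feature matching followed by RANSAC homography estimation.
import cv2
import numpy as np

def estimate_homography(query_img, scene_img, scene_depth):
    """scene_depth: 8-bit single-channel depth image aligned with scene_img."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(query_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)

    # Otsu on the depth map; pixels above the threshold are treated as "far".
    _, far_mask = cv2.threshold(scene_depth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    near = [m for m in matches
            if far_mask[int(kp2[m.trainIdx].pt[1]), int(kp2[m.trainIdx].pt[0])] == 0]

    src = np.float32([kp1[m.queryIdx].pt for m in near]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in near]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```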

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.7
    • /
    • pp.1015-1026
    • /
    • 2019
  • Objective: The objective of this study was to evaluate three different degrees of white striping (WS), addressing their automatic assessment and consumer acceptance. The WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features. It was also verified by consumer acceptance and purchase intent. Methods: The samples for image analysis were classified by trained specialists according to severity degree, considering visual and firmness aspects. Sample images were obtained with a digital camera, and 25 features were extracted from these images. ML algorithms were applied to induce a model capable of classifying the samples into three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, no single type of image feature was enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate in normal (NORM) sample predictions. The sensory acceptance analysis verified that WS myopathy negatively affects the texture of broiler breast fillets when grilled and the appearance of the raw samples, which influenced the purchase intention scores of the raw samples. Conclusion: The proposed system proved to be adequate (fast and accurate) for the classification of WS samples. The sensory acceptance analysis showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance of the raw samples influenced purchase intentions.
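
A minimal sketch of the classification pipeline outlined above: rank image features by information gain (approximated here by mutual information) and train a random forest on the top-ranked ones. The feature files, label coding, and number of selected features are placeholders, not the paper's actual setup.

```python
# Feature ranking by mutual information and random-forest classification of WS degree.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X = np.load("ws_features.npy")   # (n_samples, 25) image features (placeholder file)
y = np.load("ws_labels.npy")     # severity degree labels, e.g. 0 = NORM (placeholder coding)

# Rank features by mutual information with the class label (information-gain stand-in).
gain = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(gain)[::-1]
print("top features:", ranking[:13])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X[:, ranking[:13]], y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```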

Development of Surface Velocity Measurement Technique without Reference Points Using UAV Image (드론 정사영상을 이용한 무참조점 표면유속 산정 기법 개발)

  • Lee, Jun Hyeong;Yoon, Byung Man;Kim, Seo Jun
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.1
    • /
    • pp.22-31
    • /
    • 2021
  • Surface image velocimetry (SIV) is a noncontact velocimetry technique based on images. Recently, studies have been conducted on surface velocity measurements using drones to measure a wide range of velocities and discharges. However, when measuring the surface velocity using a drone, reference points must be included in the image for image correction and the calculation of the ground sample distance, which limits the flight altitude and shooting area of the drone. A technique for calculating the surface velocity that does not require reference points must be developed to maximize spatial freedom, which is the advantage of velocity measurements using drone images. In this study, a technique for calculating the surface velocity that uses only the drone position and the specifications of the drone-mounted camera, without reference points, was developed. To verify the developed surface velocity calculation technique, surface velocities were calculated at the Andong River Experiment Center and then measured with a FlowTracker. The surface velocities measured by conventional SIV using reference points and those calculated by the developed SIV method without reference points were compared. The results confirmed an average difference of approximately 4.70% from the velocity obtained by the conventional SIV and approximately 4.60% from the velocity measured by FlowTracker. The proposed technique can accurately measure the surface velocity using a drone regardless of the flight altitude, shooting area, and analysis area.
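
A minimal sketch of the reference-point-free idea: compute the ground sample distance (GSD) from the flight altitude and the camera specifications, then convert pixel displacements between frames into surface velocities. The camera parameters and numbers in the example are illustrative, not the study's hardware.

```python
# GSD from drone altitude and camera specs, then pixel displacement -> velocity.
def ground_sample_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Metres on the water surface covered by one image pixel (nadir view assumed)."""
    return (altitude_m * sensor_width_mm) / (focal_length_mm * image_width_px)

def surface_velocity(pixel_displacement, frame_interval_s, gsd_m):
    """Surface velocity (m/s) from a pixel displacement between two frames."""
    return pixel_displacement * gsd_m / frame_interval_s

# Illustrative values only: 50 m altitude, 8.8 mm focal length, 13.2 mm sensor, 5472 px width.
gsd = ground_sample_distance(altitude_m=50.0, focal_length_mm=8.8,
                             sensor_width_mm=13.2, image_width_px=5472)
v = surface_velocity(pixel_displacement=3.0, frame_interval_s=1 / 30, gsd_m=gsd)
print(f"GSD = {gsd:.4f} m/px, velocity = {v:.2f} m/s")
```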

A Study on the Comparison of Detected Vein Images by NIR LED Quantity of Vein Detector (정맥검출기의 NIR LED 수량에 따른 검출된 정맥 이미지 비교에 관한 연구)

  • Jae-Hyun, Jo;Jin-Hyoung, Jeong;Seung-Hun, Kim;Sang-Sik, Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.6
    • /
    • pp.485-491
    • /
    • 2022
  • Intravenous (IV) injection is the most frequent invasive procedure for inpatients and is widely used for administering parenteral nutrition and blood products; more than 1 billion peripheral catheter insertions, blood collections, and other IV therapies are performed per year. IV injection is a difficult procedure that should be performed only by nurses trained in it, and failure can lead to thrombosis, hematoma, or nerve damage around the vein. Accordingly, studies on auxiliary equipment that can visualize the vein structure of the back of the hand or the arm are being published to reduce errors during IV injection. This study examines the performance difference according to the number of LEDs emitting in the 850 nm wavelength band in a vein detector that visualizes veins during IV injection. The detector irradiates the skin, acquires images through CCD and CMOS camera lenses fitted with NIR filters, sharpens the acquired images using image-processing algorithms, and projects the sharpened images back onto the skin; four LED PCBs were produced for comparison. Each PCB was attached in turn to the front end of the vein detector to capture vein images, and a performance-comparison questionnaire based on the obtained images was created for the performance evaluation. The survey was conducted with 20 nurses working at K Hospital.
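
The abstract only says the acquired NIR images are sharpened with image-processing algorithms; the sketch below shows one plausible enhancement step (CLAHE followed by unsharp masking), not the paper's actual pipeline.

```python
# Enhance a NIR frame so that veins stand out before projection onto the skin.
import cv2

def enhance_vein_image(nir_gray):
    """nir_gray: 8-bit grayscale frame from the NIR-filtered camera."""
    # Contrast-limited adaptive histogram equalization brings out vein contrast.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(nir_gray)

    # Unsharp masking: subtract a blurred copy to emphasize vein edges.
    blurred = cv2.GaussianBlur(equalized, (0, 0), sigmaX=3)
    return cv2.addWeighted(equalized, 1.5, blurred, -0.5, 0)
```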