• Title/Summary/Keyword: camera image


Design of Deep Learning-Based Automatic Drone Landing Technique Using Google Maps API (구글 맵 API를 이용한 딥러닝 기반의 드론 자동 착륙 기법 설계)

  • Lee, Ji-Eun;Mun, Hyung-Jin
    • Journal of Industrial Convergence / v.18 no.1 / pp.79-85 / 2020
  • Recently, the RPAS (Remotely Piloted Aircraft System), operated by remote control and autonomous navigation, has attracted growing interest and use across industries and public organizations, including delivery, firefighting, ambulance, and agricultural drones. The stability of autonomously controlled unmanned drones is also the biggest challenge to be solved as the drone industry develops. Drones should be able to fly the path specified by the autonomous flight control system and automatically perform an accurate landing at the destination. This study proposes a technique that confirms arrival from images of the landing point and controls the landing onto the correct point, compensating for errors in the drone's sensor and GPS location data. The system receives imagery of the destination from the Google Maps API and learns it; a drone equipped with a NAVIO2, a Raspberry Pi, and a camera then captures images of the landing point and sends them to a server, which adjusts the drone's position according to a threshold so that the drone can land automatically at the landing point.
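
The arrival check above hinges on locating the learned landing-point image inside the live camera frame. A minimal sketch of that step, using normalized cross-correlation to find the reference patch and report its offset from the frame centre (the paper's server-side matching and threshold logic are not specified, so the function name and offset convention here are illustrative assumptions):

```python
import numpy as np

def landing_offset(frame: np.ndarray, ref: np.ndarray):
    """Find the reference landing-point patch in a grayscale camera frame
    via normalized cross-correlation; return (best score, (dy, dx)) where
    (dy, dx) is the match position relative to the frame centre.
    A nonzero offset would drive a position correction of the drone."""
    fh, fw = frame.shape
    rh, rw = ref.shape
    ref_n = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_pos = -np.inf, (0, 0)
    for y in range(fh - rh + 1):
        for x in range(fw - rw + 1):
            win = frame[y:y + rh, x:x + rw]
            win_n = (win - win.mean()) / (win.std() + 1e-9)
            score = float((ref_n * win_n).mean())  # in [-1, 1]
            if score > best:
                best, best_pos = score, (y, x)
    cy, cx = (fh - rh) // 2, (fw - rw) // 2
    return best, (best_pos[0] - cy, best_pos[1] - cx)
```

In practice this brute-force scan would be replaced by an optimized template matcher, but the offset-from-centre output is the quantity a landing controller would act on.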

Properties of Defective Regions Observed by Photoluminescence Imaging for GaN-Based Light-Emitting Diode Epi-Wafers

  • Kim, Jongseok;Kim, HyungTae;Kim, Seungtaek;Jeong, Hoon;Cho, In-Sung;Noh, Min Soo;Jung, Hyundon;Jin, Kyung Chan
    • Journal of the Optical Society of Korea / v.19 no.6 / pp.687-694 / 2015
  • A photoluminescence (PL) imaging method using a vision camera was employed to inspect InGaN/GaN quantum-well light-emitting diode (LED) epi-wafers. The PL image revealed dark spot defective regions (DSDRs) as well as a spatial map of the integrated PL intensity of the epi-wafer. The Shockley-Read-Hall (SRH) nonradiative recombination coefficient increased with the size of the DSDRs. The high nonradiative recombination rates of the DSDRs degraded the optical properties of LED chips fabricated at the defective regions. Abnormal current-voltage characteristics with large forward leakages were also observed for LED chips with DSDRs, which could be due to parallel resistances bypassing the junction and/or tunneling through defects in the active region. The SRH nonradiative recombination process was found to be dominant in the voltage range where the forward leakage by tunneling was observed. The results indicate that the DSDRs observed by PL imaging of LED epi-wafers are high-density SRH nonradiative recombination centers that can affect the optical and electrical properties of the LED chips. PL imaging can thus serve as an inspection method for evaluating epi-wafers and estimating the properties of LED chips before fabrication.
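
Segmenting the DSDRs from a PL intensity map is essentially a low-intensity thresholding step. A toy sketch (the threshold rule, a fixed fraction of the map mean, is an illustrative assumption, not the paper's procedure):

```python
import numpy as np

def dark_spot_fraction(pl_map: np.ndarray, k: float = 0.5):
    """Flag pixels whose PL intensity falls below k times the map mean,
    as a simple stand-in for segmenting dark spot defective regions
    (DSDRs); returns the boolean mask and the defective-area fraction."""
    mask = pl_map < k * pl_map.mean()
    return mask, float(mask.mean())
```

The defective-area fraction gives a single wafer-level figure that could be tracked against the SRH coefficient or chip yield.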

Phase-Shifting System Using Zero-Crossing Detection for use in Fiber-Optic ESPI (영점검출을 이용한 광섬유형 전자 스페클 패턴 간섭계의 위상이동)

  • Park, Hyoung-Jun;Song, Min-Ho;Lee, Jun-Ho
    • Korean Journal of Optics and Photonics / v.16 no.6 / pp.516-520 / 2005
  • We proposed an efficient phase-stepping method for use in fiber-optic ESPI. To improve phase-stepping accuracy and efficiency, a fiber-optic Michelson interferometer was phase-modulated by a ramp-driven fiber stretcher, producing a 4π phase excursion in the photodetector interference signal. The zero-crossing points of the signal, which are separated by consecutive π phase differences, were carefully detected and used to generate trigger signals for the CCD camera. In experiments using this algorithm, the π/2 phase-stepping errors between the speckle patterns were measured to be less than 0.6 mrad at a 100 Hz image-capture rate. It was also shown that errors from nonlinear phase modulation and environmental perturbations could be minimized without any feedback algorithm.
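
The trigger-generation idea reduces to finding sign changes in the sampled interference signal: for a sinusoidal signal, consecutive zero crossings are exactly π apart in phase. A minimal sketch (sample counts and the synthetic signal are illustrative, not the paper's hardware values):

```python
import numpy as np

def zero_crossing_indices(signal: np.ndarray) -> np.ndarray:
    """Return sample indices just after each sign change of the signal.
    For a sinusoidal interference signal these crossings are pi apart
    in phase and can serve as CCD trigger points."""
    s = np.sign(signal)
    s[s == 0] = 1  # treat exact zeros as positive to avoid double counts
    return np.where(np.diff(s) != 0)[0] + 1

# One ramp period producing a 4*pi phase excursion: sin(2t) over [0, 2*pi)
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
triggers = zero_crossing_indices(np.sin(2 * t))
```

With a 4π excursion per ramp, the interior crossings yield the evenly phase-spaced trigger instants the camera needs; real signals would first be band-filtered to keep noise from creating spurious crossings.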

Relation between Thermal Emissivities and Alignment Degrees of Graphite Flakes Coated on an Aluminum Substrate (알루미늄 기판에 코팅된 흑연입자의 배향도 변화와 열방사율 변화의 관계)

  • Kang, Dong Su;Lee, Sang Min;Kim, Suk Hwan;Lee, Sang Woo;Roh, Jae Seung
    • Korean Journal of Materials Research / v.24 no.3 / pp.159-165 / 2014
  • This study investigates the thermal emissivity as a function of the alignment degree of graphite flakes. Samples were manufactured by dip-coating an aluminum substrate with a slurry of natural graphite flakes and organic binder. The alignment degree was controlled by applying magnetic fields (0, 1, and 3 kG) to the coated samples and was measured by XRD. The thermal emissivity was measured with an infrared thermal imaging camera at 100 °C. The alignment degrees were 0.04, 0.11, and 0.17, and the thermal emissivities were 0.829, 0.837, and 0.844, for applied magnetic fields of 0, 1, and 3 kG, respectively. The correlation coefficient R² between thermal emissivity and alignment degree was 0.997; it was therefore concluded that the thermal emissivity correlates with the alignment degree of the graphite flakes.
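
The reported R² can be checked directly from the three tabulated (alignment, emissivity) pairs; a straightforward Pearson-correlation computation gives a value in close agreement with the stated 0.997 (small differences are attributable to rounding of the published figures):

```python
import numpy as np

# Reported measurements at applied fields of 0, 1, and 3 kG
alignment = np.array([0.04, 0.11, 0.17])       # alignment degree (XRD)
emissivity = np.array([0.829, 0.837, 0.844])   # thermal emissivity at 100 °C

r = np.corrcoef(alignment, emissivity)[0, 1]   # Pearson correlation
r_squared = r ** 2
```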

Performance Comparison of Skin Color Detection Algorithms by the Changes of Backgrounds (배경의 변화에 따른 피부색상 검출 알고리즘의 성능 비교)

  • Jang, Seok-Woo
    • Journal of the Korea Society of Computer and Information / v.15 no.3 / pp.27-35 / 2010
  • Accurately extracting skin color regions is very important in areas such as face recognition and tracking, facial expression recognition, adult image identification, and health care. In this paper, we evaluate the performance of several skin color detection algorithms in indoor environments while varying both the distance between the camera and the object and the background color of the object. The distance ranges from 60 cm to 120 cm, and the background colors are white, black, orange, pink, and yellow. The algorithms used for the evaluation are the Peer algorithm, NNYUV, NNHSV, LutYUV, and the Kimset algorithm. The experimental results show that NNHSV, NNYUV, and LutYUV are stable, while the other algorithms are somewhat sensitive to background changes. We expect the comparative results of this paper to be useful when developing new skin color extraction algorithms that are robust to dynamic real environments.
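
The compared detectors all reduce to classifying each pixel's chrominance. A minimal sketch of one common rule, a fixed box in the Cb/Cr plane, used here as an illustrative stand-in for the LUT-based YUV detectors in the comparison (the box bounds are a widely used heuristic, not values from the paper):

```python
import numpy as np

def skin_mask_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Classify pixels of an H x W x 3 float RGB image as skin when their
    (Cb, Cr) chrominance falls inside a fixed box (77<=Cb<=127,
    133<=Cr<=173) -- a common YCbCr skin heuristic."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

Because the rule ignores luminance entirely, it is a good baseline for studying exactly the background and distance sensitivity the paper measures.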

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system of smile facial expressions for smile treatment training. The proposed system detects face candidate regions in camera images using Haar-like features, then verifies whether each detected candidate is a face using SVM (Support Vector Machine) classification. For the detected face image, it applies illumination normalization based on histogram matching to minimize the effect of illumination change. In the facial expression recognition step, it computes a facial feature vector using PCA (Principal Component Analysis) and recognizes the smile expression with a multilayer perceptron neural network. The system lets the user train smile expressions by recognizing the user's smile in real time and displaying the degree of smiling. Experimental results show that the proposed system improves the correct recognition rate by using SVM-based face region verification and histogram-matching-based illumination normalization.
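
The illumination-normalization step maps each face image's grey-level distribution onto a reference distribution. A compact sketch of the standard CDF-matching construction that histogram matching uses (the paper does not give its exact implementation, so this is the textbook form):

```python
import numpy as np

def match_histogram(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Remap the grey levels of `src` so that its histogram matches that
    of `ref`: each source level is sent to the reference level with the
    same cumulative-distribution (CDF) value."""
    src_vals, src_idx, src_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)  # invert ref CDF
    return mapped[src_idx].reshape(src.shape)
```

Applied to a face crop with a well-lit reference face as `ref`, this removes most global brightness and contrast variation before the PCA feature extraction.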

Extraction of Spatial Information of Tree Using LIDAR Data in Urban Area (라이다 자료를 이용한 도시지역의 수목공간정보 추출)

  • Cho, Du-Young;Kim, Eui-Myoung
    • Journal of Korean Society for Geospatial Information Science / v.18 no.4 / pp.11-20 / 2010
  • With carbon dioxide emissions increasing due to urbanization, urban green space is being promoted as one solution to this problem. In urban areas, trees reduce carbon dioxide and also provide aesthetic value. In this study, we propose a methodology that uses only LIDAR data to extract tree information effectively. To improve efficiency, the proposed methodology combines multiple data representations: point, polygon, and raster. Because the conventional NDSM (Normalized Digital Surface Model) contains both building and tree information, extracting trees from it requires highly complex data processing. To alleviate this, this study used a modified NDSM from which estimated building regions were removed. To evaluate the proposed methodology, three zones within urban areas where buildings and trees coexist were selected, and the accuracy of the extracted trees was assessed against images taken by a digital camera.
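
In raster terms, the modified-NDSM idea is: subtract the terrain model from the surface model to get object heights, zero out the estimated building cells, and keep the remaining tall cells as tree candidates. A minimal sketch (the height threshold and masking rule are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def tree_candidates(dsm: np.ndarray, dtm: np.ndarray,
                    building_mask: np.ndarray,
                    min_height: float = 2.0) -> np.ndarray:
    """Boolean raster of tree candidates from a modified NDSM:
    NDSM = DSM - DTM (object height above ground), with estimated
    building cells removed before thresholding."""
    ndsm = dsm - dtm
    ndsm = np.where(building_mask, 0.0, ndsm)  # drop building heights
    return ndsm > min_height
```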

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.64-70 / 2014
  • In this paper, we propose a method for giving motion commands to a mobile robot by recognizing human hand gestures. Previous hand-gesture robot-control systems used a set of pre-arranged gestures, so the commanded motions felt unnatural, and users had to learn the gesture set, which was inconvenient. To solve this problem, much research has sought other ways for machines to recognize hand movement. In this paper, we use a 3D (depth) camera to obtain color and depth data, from which the human hand is located and its movement recognized. We use an HMM to recognize the movement, and the recognition result is then transferred to the robot, making it move in the intended direction.
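
HMM-based gesture recognition scores each observed hand-trajectory symbol sequence against a per-gesture model and picks the best-scoring gesture. A sketch of the forward-algorithm likelihood that underlies this scoring (the toy two-state parameters below are illustrative; the paper's actual models are not specified):

```python
import numpy as np

def hmm_likelihood(obs, pi, A, B) -> float:
    """Forward-algorithm likelihood P(obs | model) for a discrete HMM.
    pi: initial state probs, A: state transition matrix,
    B: emission matrix (B[state, symbol]), obs: symbol index sequence."""
    alpha = pi * B[:, obs[0]]          # initialize forward variables
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and emit
    return float(alpha.sum())
```

To classify, the observed sequence is scored under every gesture's HMM and the argmax is taken; long sequences would use log-space scaling to avoid underflow.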

A Study on Hand-signal Recognition System in 3-dimensional Space (3차원 공간상의 수신호 인식 시스템에 대한 연구)

  • 장효영;김대진;김정배;변증남
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.103-114 / 2004
  • This paper deals with a system capable of recognizing hand signals in 3-dimensional space. The system uses two color cameras as input devices. Vision-based gesture recognition is known to be user-friendly because of its contact-free characteristic, but as with other camera-based applications, it faces difficulties under complex backgrounds and varying illumination. To detect the hand region robustly from an input image under various conditions, without special gloves or markers, the paper uses previous position information and an adaptive hand color model. The paper defines a hand signal as a combination of two basic elements, 'hand pose' and 'hand trajectory'. As a scalable hand-pose classification method, the paper proposes a two-stage classification scheme based on a 'small group concept'. The paper also suggests a complementary feature selection method that combines the images from the two color cameras. We verified the method with a hand-signal application in our driving simulator.
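
One simple way to realize the adaptive hand color model mentioned above is an exponential update of the model's mean color toward pixels sampled from the most recently detected hand region. This sketch is an assumed form, the paper does not specify its update rule, and the learning rate `alpha` is a hypothetical parameter:

```python
import numpy as np

def update_hand_color_model(model_mean: np.ndarray,
                            sample_pixels: np.ndarray,
                            alpha: float = 0.1) -> np.ndarray:
    """Exponentially adapt the hand-color model mean (e.g. an RGB or
    chrominance vector) toward the mean of N x 3 pixels sampled from
    the latest detected hand region, tracking illumination drift."""
    return (1 - alpha) * model_mean + alpha * sample_pixels.mean(axis=0)
```

Combined with the previous hand position as a spatial prior, the slowly adapting model keeps detection stable as lighting changes between frames.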

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.5 / pp.535-543 / 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera on top of the monitor. We consider only facial rotation and translation, not eye movement. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately; combining the two yields the final gaze point on the monitor screen. We detect the rotation-induced gaze point with a neural network (a multilayer perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and estimate the facial translation with real-time image processing algorithms. Experimental results show a gaze detection accuracy of about 2.11 inches RMS error between the computed and actual positions when the user sits about 50-70 cm from a 19-inch monitor. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240 pixel images.
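
The final combination step above is additive: the rotation-only gaze estimate is shifted by the measured facial translation expressed in screen units. A trivial sketch of that decomposition (the function name and the translation-to-screen `scale` factor are illustrative assumptions):

```python
def combined_gaze(rotation_gaze: tuple, translation: tuple,
                  scale: float = 1.0) -> tuple:
    """Final on-screen gaze point: the MLP's rotation-only gaze estimate
    shifted by the measured facial translation, with `scale` converting
    translation units to screen units."""
    rx, ry = rotation_gaze
    tx, ty = translation
    return (rx + scale * tx, ry + scale * ty)
```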