• Title/Summary/Keyword: camera image

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong; Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence, v.8 no.2, pp.47-58, 2018
  • In this paper, we propose a method for analyzing skin condition using the camera module embedded in a smartphone, without a separate skin diagnosis device. The skin conditions detected in facial images taken by the smartphone are acne, pigmentation, blemishes, and flush. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flush were extracted by setting a range on the hue component image, pigmentation was computed from a factor relating the minimum and maximum values of the corresponding skin pixels in the R component image, and blemishes were detected using adaptive thresholds on gray-scale images. Experimental results show that the proposed skin condition analysis effectively detects acne, pigmentation, blemishes, and flush.
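
As a rough illustration of the color-model step described above, a minimal OpenCV sketch of skin-region masking with YCbCr and HSV ranges, plus adaptive thresholding for blemish candidates, is given below. The threshold values are common illustrative defaults, not the parameters used in the paper.

```python
import cv2

def skin_mask(bgr_image):
    """Combine YCbCr and HSV range masks to isolate skin pixels (illustrative thresholds)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Typical skin ranges in Cr/Cb and H/S; the paper's exact ranges are not given here.
    mask_ycrcb = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask_hsv = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    mask = cv2.bitwise_and(mask_ycrcb, mask_hsv)
    # Remove small speckles before the per-condition analysis.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def blemish_candidates(bgr_image, mask):
    """Blemish candidates via adaptive thresholding on the gray-scale image, restricted to skin."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 5)
    return cv2.bitwise_and(th, mask)
```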

Implementation of the Panoramic System Using Feature-Based Image Stitching (특징점 기반 이미지 스티칭을 이용한 파노라마 시스템 구현)

  • Choi, Jaehak; Lee, Yonghwan; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology, v.16 no.2, pp.61-65, 2017
  • Recently, interest and research in 360-degree cameras and 360-degree image production have been expanding. In this paper, we describe the feature extraction, alignment, and image blending stages that make up a feature-based stitching system, and we review a representative algorithm for each stage. The system was implemented using the OpenCV library. In the implemented system, the two input images differ in brightness, which produces a noticeable seam in the resulting image. We will study appropriate preprocessing to adjust the brightness values and improve the accuracy and seamlessness of the feature-based stitching system.
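
The sketch below illustrates the pipeline outlined in the abstract (feature extraction, alignment, blending) using the OpenCV library it mentions; ORB features, brute-force matching, and a simple overwrite blend are assumptions, not necessarily the algorithms the authors implemented.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Feature-based stitching: detect/match features, estimate a homography, warp and blend."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:300]

    # Homography mapping right-image coordinates into the left image's frame.
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right image and paste the left image on top of the shared canvas.
    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
    canvas[:h, :w] = img_left
    return canvas
```

The brightness mismatch the authors observe would show up as a visible seam along the pasted boundary; exposure compensation or multi-band blending before the paste step is the usual remedy.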

Analysis of Affine Motion Compensation for Light Field Image Compression (라이트필드 영상 압축을 위한 Affine 움직임 보상 분석)

  • Huu, Thuc Nguyen; Duong, Vinh Van; Xu, Motong; Jeon, Byeungwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2019.06a, pp.216-217, 2019
  • A light field (LF) image can be understood as a set of images captured simultaneously by a multi-view camera array. The changes among views can be modeled by a general motion model such as the affine motion model. In this paper, we study the impact of the affine coding tool of Versatile Video Coding (VVC) on LF image compression. Our experimental results show that the affine coding tool contributes only a small gain of roughly 0.2%-0.4% to overall LF image compression.
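
For context, the 4-parameter affine tool in VVC derives a motion vector for each sample (in practice, for each 4x4 subblock) from two control-point motion vectors at the block's top-left and top-right corners. A small sketch of that derivation, with made-up numbers, is shown below.

```python
def affine_mv_4param(mv0, mv1, width, x, y):
    """4-parameter affine motion vector at sample (x, y).

    mv0: control-point MV at the block's top-left corner.
    mv1: control-point MV at the block's top-right corner.
    """
    a = (mv1[0] - mv0[0]) / width   # shared scaling/rotation term (horizontal)
    b = (mv1[1] - mv0[1]) / width   # shared scaling/rotation term (vertical)
    mvx = a * x - b * y + mv0[0]
    mvy = b * x + a * y + mv0[1]
    return mvx, mvy

# Example: a mild zoom/rotation between adjacent light-field views on a 16-wide block.
print(affine_mv_4param(mv0=(1.0, 0.0), mv1=(1.5, 0.25), width=16, x=8, y=8))
```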

POOL MONITORING IN GMAW

  • Absi Alfaro, S.C.; de Carvallio, G.C.; Motta, J.M.
    • Proceedings of the KWS Conference, 2002.10a, pp.307-313, 2002
  • This paper describes a weld pool monitoring technique based on weld pool image analysis. The proposed image analysis algorithm uses machine vision techniques to extract geometrical information from the weld pool image, such as the maximum weld pool width, the gap width, and the misalignment between the joint longitudinal axis and the welding wire. These quantities can be related to the welding parameters (welding voltage and current, wire feed speed, and standoff) to produce the control actions necessary to ensure that the required weld quality is achieved. The experiments have shown that the algorithm produces good estimates of the weld pool geometry; however, the adjustment of the camera parameters affects the image quality and consequently has a great influence on the estimates.
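
A hedged sketch of one of the geometric measurements described (maximum weld pool width from a segmented pool image), using OpenCV contours; the fixed threshold and the millimetre-per-pixel calibration factor below are placeholders, not values from the paper.

```python
import cv2

def max_pool_width_mm(gray_pool_image, mm_per_pixel=0.1, thresh=128):
    """Estimate the maximum weld pool width from a gray-scale pool image.

    mm_per_pixel and thresh are illustrative; a real system calibrates both.
    """
    _, binary = cv2.threshold(gray_pool_image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    pool = max(contours, key=cv2.contourArea)   # assume the largest bright blob is the pool
    _, _, w, _ = cv2.boundingRect(pool)         # w = widest horizontal extent in pixels
    return w * mm_per_pixel
```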

A Study on the In-Process Measurement of Rotary Body by Optical Technique (광학적 기법을 이용한 회전체 인프로세스 측정에 관한 연구)

  • So, Eui-Yeorl; Im, Young-Ho; Ryu, Bong-Hwan
    • Journal of the Korean Society for Precision Engineering, v.13 no.5, pp.148-156, 1996
  • Automated production systems are steadily increasing with the development of industry, and on-line measurement plays an important role in the economy and effectiveness of industrial production systems. A synchronization system was developed to measure a rotating screw thread. In-process measurement of a rotating body involves many difficulties, even with methods such as high-speed cameras, so we suggest a new method that is not as expensive. In this study, digital measurements were produced from the acquired original images through an image processing algorithm. Experiments show good agreement between the values calculated from the image contour and those obtained with a profile projector.
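
The abstract does not detail the image processing algorithm, but the step from an acquired image to a digital value can be illustrated with a minimal sketch that measures an outer diameter from a backlit silhouette; the backlit setup, the column-wise scan, and the calibration step are all assumptions made for illustration.

```python
import numpy as np

def outer_diameter_px(binary_silhouette):
    """Largest top-to-bottom extent of a backlit part's silhouette, in pixels.

    Multiply by a calibrated mm-per-pixel factor (e.g. from a gauge block)
    to obtain a physical diameter comparable to a profile projector reading.
    """
    diameters = []
    for col in binary_silhouette.T:      # scan each image column
        idx = np.nonzero(col)[0]         # rows covered by the part in this column
        if idx.size:
            diameters.append(idx[-1] - idx[0] + 1)
    return max(diameters) if diameters else 0
```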

High-sensitivity NIR Sensing with Stacked Photodiode Architecture

  • Hyunjoon Sung; Yunkyung Kim
    • Current Optics and Photonics, v.7 no.2, pp.200-206, 2023
  • Near-infrared (NIR) sensing technology using CMOS image sensors is used in many applications, including automobiles, biological inspection, surveillance, and mobile devices. An intuitive way to improve NIR sensitivity is to thicken the light absorption layer (silicon). However, even thickened silicon provides limited NIR sensitivity and has other disadvantages, such as diminished optical performance (e.g., crosstalk) and difficulty in processing. In this paper, a pixel structure for NIR sensing using a stacked CMOS image sensor is introduced. The stacked CMOS image sensor has two photodetection layers, a conventional photodiode and a bottom photodiode, and the bottom photodiode is used as the NIR absorption layer. Therefore, the suggested pixel structure does not change the thickness of the conventional photodiode. To verify the suggested pixel structure, sensitivity was simulated using an optical simulator. As a result, sensitivity improved by a maximum of 130% and 160% at wavelengths of 850 nm and 940 nm, respectively, with a pixel size of 1.2 ㎛. The proposed pixel structure is therefore useful for NIR sensing without thickening the silicon.

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim; Ga-Bin Nam; Young-Seop Kim
    • Journal of the Semiconductor & Display Technology, v.22 no.3, pp.149-154, 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only fuses multi-focus images into a single all-in-focus image effectively but also offers more efficient and robust focus fusion than existing methods.
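
As an illustration of the SPP idea mentioned for the feature extraction module, a generic Spatial Pyramid Pooling layer in PyTorch is sketched below; the pooling levels and everything around them are illustrative and not taken from the LFFCNN architecture in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPyramidPooling(nn.Module):
    """Pool a feature map at several grid sizes and concatenate the results,
    producing a fixed-length descriptor regardless of the input resolution."""

    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):                       # x: (N, C, H, W)
        n, c = x.shape[:2]
        pooled = [F.adaptive_max_pool2d(x, output_size=k).view(n, c * k * k)
                  for k in self.levels]
        return torch.cat(pooled, dim=1)         # (N, C * sum(k*k for k in levels))

# Example: a 1x1 + 2x2 + 4x4 pyramid over a 64-channel feature map.
feat = torch.randn(2, 64, 48, 48)
print(SpatialPyramidPooling()(feat).shape)      # torch.Size([2, 1344]) = 64 * (1 + 4 + 16)
```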

Moire Reduction in Digital Still Camera by Using Inflection Point in Frequency Domain (주파수 도메인의 변곡점을 이용한 디지털 카메라의 moire 제거 방법)

  • Kim, Dae-Chul; Kyung, Wang-Jun; Lee, Cheol-Hee; Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.1, pp.152-157, 2014
  • Digital still cameras generally use an optical low-pass filter (OLPF) to enhance image quality, because it removes the high spatial frequencies that cause aliasing. However, the use of an OLPF causes some loss of detail. On the other hand, when images are captured without an OLPF, moiré generally appears in the high-spatial-frequency regions of the image. Therefore, this paper suggests a moiré reduction method for the case without an OLPF. To detect the moiré, the spatial frequency response (SFR) of the camera is first analyzed using the ISO 12233 resolution chart. Moiré regions are then detected using patterns related to the SFR of the camera, and each region is analyzed in the frequency domain. The moiré is reduced by removing its frequency component, which corresponds to the inflection point between the high-frequency and DC components. Experimental results show that the proposed method achieves moiré reduction while preserving detail.
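
A hedged sketch of the frequency-domain suppression step on a detected moiré region, using NumPy FFTs; the annulus radii stand in for the inflection-point frequency that the paper derives from the camera's SFR, and the 0.1 attenuation factor is an illustrative choice.

```python
import numpy as np

def suppress_band(gray, r_low, r_high):
    """Attenuate spectral energy in an annulus [r_low, r_high) of normalized radius.

    gray: 2D float image region flagged as containing moire.
    r_low, r_high: radii in cycles/pixel standing in for the SFR inflection point.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.hypot(fx, fy)
    # Soft notch: keep DC and low frequencies, damp the moire band.
    mask = np.where((radius >= r_low) & (radius < r_high), 0.1, 1.0)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```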

DEVELOPMENT OF AN AMPHIBIOUS ROBOT FOR VISUAL INSPECTION OF APR1400 NPP IRWST STRAINER ASSEMBLY

  • Jang, You Hyun; Kim, Jong Seog
    • Nuclear Engineering and Technology, v.46 no.3, pp.439-446, 2014
  • An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainers in APR1400 instead of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 sub-assembly strainer fin modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has six thrusters for underwater travel and four legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for precise walking on top of the IRWST strainer. The IRWST strainer has several top cross braces that protrude from the top of the strainer to maintain its frame, and these can be obstacles when walking on the strainer; a robot leg therefore has to land at a position beside a top cross brace. For this reason, we use an image processing technique to find the top cross brace in the sole camera image: the sole camera image is processed in real time with a cross edge detection algorithm to determine whether a top cross brace is present. A 5-DOF robot arm with multiple camera modules for simultaneous inspection of both sides can penetrate the narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC together with camera angles and positions so that the images can be synthesized and merged. The synthesized images are then mapped onto a 3D CAD model of the IRWST strainer using the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important that the robot arrive at the designated position for inserting the robot arm into all of the gaps, but exact position control without an anchor under water is not easy. Therefore, we designed the multi-leg robot to serve both anchoring and positioning roles. The quadruped design with sole cameras is a new approach for exact and stable position control on the IRWST strainer, unlike traditional robots for underwater facility inspection. The developed robot will be used in practice to enhance the efficiency and reliability of the inspection of nuclear power plant components.
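
A rough sketch of how the real-time cross-edge check on the sole camera image might be realized with standard OpenCV calls (Canny edges plus a probabilistic Hough line search for the long straight brace edge); the detector choice and all thresholds are assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def brace_edge_present(sole_bgr, min_line_px=120):
    """Return True if a long straight edge (candidate top cross brace) is visible."""
    gray = cv2.cvtColor(sole_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_line_px, maxLineGap=10)
    return lines is not None and len(lines) > 0
```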

High Resolution Video Synthesis with a Hybrid Camera (하이브리드 카메라를 이용한 고해상도 비디오 합성)

  • Kim, Jong-Won; Kyung, Min-Ho
    • Journal of the Korea Computer Graphics Society, v.13 no.4, pp.7-12, 2007
  • With the advent of digital cinema, more and more movies are digitally produced, distributed via digital media such as hard drives and networks, and finally projected using a digital projector. However, digital cameras capable of shooting at 2K or higher resolution for digital cinema are still very expensive and bulky, which impedes a rapid transition to digital production. As a low-cost solution for acquiring high-resolution digital video, we propose a hybrid camera consisting of a low-resolution CCD for capturing video and a high-resolution CCD for capturing still images at regular intervals. From the output of the hybrid camera, we can synthesize high-resolution video in software as follows: for each frame, (1) find pixel correspondences from the current frame to the previous and subsequent keyframes associated with high-resolution still images, (2) synthesize a high-resolution image for the current frame by copying the image blocks associated with the corresponding pixels from the high-resolution keyframe images, and (3) complete the synthesis by filling holes in the synthesized image. This framework can be extended to producing NPR video effects and capturing HDR video.
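
A condensed sketch of the three listed steps, using dense optical flow for the correspondences, block copying from the high-resolution keyframe, and inpainting for the remaining holes; Farnebäck flow, a single keyframe, and OpenCV inpainting are assumptions here rather than the authors' exact method.

```python
import cv2
import numpy as np

def upscale_frame(lo_frame, lo_key, hi_key, block=8):
    """Synthesize a high-resolution frame from a low-res frame and one high-res keyframe."""
    scale = hi_key.shape[1] // lo_key.shape[1]
    h, w = lo_frame.shape[:2]

    # 1. Pixel correspondences from the current low-res frame to the low-res keyframe.
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(lo_frame, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(lo_key, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)

    out = np.zeros((h * scale, w * scale, 3), hi_key.dtype)
    hole = np.full((h * scale, w * scale), 255, np.uint8)   # 255 marks unfilled pixels

    # 2. Copy matching high-res blocks from the keyframe into the output frame.
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            dx, dy = flow[y, x]
            sx, sy = int(round((x + dx) * scale)), int(round((y + dy) * scale))
            if 0 <= sx <= hi_key.shape[1] - block * scale and \
               0 <= sy <= hi_key.shape[0] - block * scale:
                out[y * scale:(y + block) * scale, x * scale:(x + block) * scale] = \
                    hi_key[sy:sy + block * scale, sx:sx + block * scale]
                hole[y * scale:(y + block) * scale, x * scale:(x + block) * scale] = 0

    # 3. Fill the holes left where no reliable correspondence was found.
    return cv2.inpaint(out, hole, 3, cv2.INPAINT_TELEA)
```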
