• Title/Summary/Keyword: camera module (카메라 모듈)


Development of a Deep Learning Network for Quality Inspection in a Multi-Camera Inline Inspection System for Pharmaceutical Containers (의약 용기의 다중 카메라 인라인 검사 시스템에서의 품질 검사를 위한 딥러닝 네트워크 개발)

  • Tae-Yoon Lee;Seok-Moon Yoon;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.28 no.3
    • /
    • pp.474-478
    • /
    • 2024
  • In this paper, we propose a deep learning network for quality inspection in a multi-camera inline inspection system for pharmaceutical containers. The proposed deep learning network is specifically designed for pharmaceutical containers by using data produced in real manufacturing environments, leading to more accurate quality inspection. Additionally, the use of an inline-capable deep learning network allows for an increase in inspection speed. The development of the deep learning network for quality inspection in the multi-camera inline inspection system consists of three steps. First, a dataset of approximately 10,000 images is constructed from the production site using one line camera for foreign substance inspection and three area cameras for dimensional inspection. Second, the pharmaceutical container data is preprocessed by designating regions of interest (ROI) in areas where defects are likely to occur, tailored for foreign substance and dimensional inspections. Third, the preprocessed data is used to train the deep learning network. The network improves inference speed by reducing the number of channels and eliminating the use of linear layers, while accuracy is enhanced by applying PReLU and residual learning. This results in four deep learning modules, one tailored to the dataset built from each of the four cameras. The performance of the proposed network was evaluated through experiments conducted by a certified testing agency. The results show that the deep learning modules achieved a classification accuracy of 99.4%, exceeding the world-class level of 95%, and an average classification speed of 0.947 seconds, which is superior to the world-class level of 1 second. Therefore, the effectiveness of the proposed deep learning network for quality inspection in a multi-camera inline inspection system for pharmaceutical containers has been demonstrated.
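The abstract describes a network that gains speed by reducing channel counts and avoiding linear (fully connected) layers, and gains accuracy through PReLU activations and residual learning. Below is a minimal PyTorch-style sketch of such a block and a convolution-only classification head; the layer sizes, depth, and input shape are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative sketch only: a narrow residual block with PReLU and a
# convolutional (linear-layer-free) classification head. Channel counts
# and depth are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class SlimResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.act1 = nn.PReLU(channels)           # learnable negative slope
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.act2 = nn.PReLU(channels)

    def forward(self, x):
        out = self.act1(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act2(out + x)                # residual connection

class InspectionNet(nn.Module):
    """Binary good/defect classifier with no fully connected layers."""
    def __init__(self, in_ch=1, width=16, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(width), nn.PReLU(width))
        self.blocks = nn.Sequential(*[SlimResidualBlock(width) for _ in range(3)])
        # 1x1 conv + global average pooling replaces a linear classifier head
        self.head = nn.Conv2d(width, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.head(x)
        return x.mean(dim=(2, 3))                # per-class logits

logits = InspectionNet()(torch.randn(1, 1, 224, 224))
```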

Development of Device Driver for Image Capture and Storage by Using VGA Camera Module Based on Windows CE (WINDOWS CE 기반 VGA 카메라 모듈의 영상 획득과 저장을 위한 디바이스 드라이버 개발)

  • Kim, Seung-Hwan;Ham, Woon-Chul;Lee, Jung-Hwan;Lee, Ju-Yun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.44 no.4 s.316
    • /
    • pp.27-34
    • /
    • 2007
  • In this paper, a device driver for camera capture in a handheld mobile system is implemented on the Microsoft Windows CE operating system. We also study a storage device driver based on the FAT file system, using NAND flash memory as the storage device. The image-capture hardware is implemented on an MBA2440 PDA board with a CMOS camera module produced by PixelPlus, which has VGA (640×480) resolution. We also develop an application program that works with the device driver to test its performance, for example the image capture speed and the quality of the captured images. We verify that the application works correctly not only with the camera capture device driver but also with the FAT file system device driver designed specifically for the NAND flash memory.

Thermal Design and On-Orbit Thermal Analysis of 6U Nano-Satellite High Resolution Video and Image (HiREV) (6U급 초소형 위성 HiREV(High Resolution Video and Image)의 광학 카메라의 열 설계 및 궤도 열 해석)

  • Han-Seop Shin;Hae-Dong Kim
    • Journal of Space Technology and Applications
    • /
    • v.3 no.3
    • /
    • pp.257-279
    • /
    • 2023
  • The Korea Aerospace Research Institute has developed the 6U nano-satellite HiREV (High Resolution Video and Image) to build up core technologies for deep space exploration. The 6U HiREV nano-satellite has the mission of acquiring high-resolution images and video for Earth observation, and a thermal pointing error between the lens and the camera module can occur due to the high temperature of the camera module in mission mode. Because this thermal pointing error strongly degrades the resolution, it must be addressed by the thermal design: the HiREV optical camera is built from industrial-grade commercial products that perform best at room temperature, so a dedicated thermal design is required for operation in space. In this paper, three passive thermal designs were performed for the camera mission payload, and their effectiveness was demonstrated through on-orbit thermal analysis.
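As context for what an on-orbit thermal analysis balances at first order, the sketch below computes a steady-state radiative equilibrium temperature for a simple panel from absorbed solar flux, internal dissipation, and radiated heat. All numbers (solar constant aside) are generic placeholder assumptions, not HiREV design parameters or the authors' analysis.

```python
# First-order steady-state radiative balance for a flat panel in sunlight:
#   alpha * S * A_sun + Q_internal = epsilon * sigma * A_rad * T^4
# All values below are generic placeholders, not HiREV values.
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant [W/m^2/K^4]
S_SOLAR = 1361.0            # solar constant near Earth [W/m^2]

def equilibrium_temperature(alpha, epsilon, area_sun_m2, area_rad_m2, q_internal_w):
    absorbed = alpha * S_SOLAR * area_sun_m2 + q_internal_w
    return (absorbed / (epsilon * SIGMA * area_rad_m2)) ** 0.25

# Example: 0.02 m^2 sun-facing area, 0.06 m^2 radiating area, 2 W dissipation
t_kelvin = equilibrium_temperature(alpha=0.3, epsilon=0.85,
                                   area_sun_m2=0.02, area_rad_m2=0.06,
                                   q_internal_w=2.0)
print(f"equilibrium temperature ≈ {t_kelvin - 273.15:.1f} °C")
```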

A Study on the Design and Implementation of a Thermal Imaging Temperature Screening System for Monitoring the Risk of Infectious Diseases in Enclosed Indoor Spaces (밀폐공간 내 감염병 위험도 모니터링을 위한 열화상 온도 스크리닝 시스템 설계 및 구현에 대한 연구)

  • Jae-Young, Jung;You-Jin, Kim
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.2
    • /
    • pp.85-92
    • /
    • 2023
  • Respiratory infections such as COVID-19 mainly spread within enclosed spaces. Abnormal symptoms of respiratory infectious disease are judged from initial symptoms such as fever, cough, sneezing, and difficulty breathing, so constant monitoring of these early symptoms is required. In this paper, image matching correction was performed between the RGB camera module and the thermal imaging camera module, and the temperature reading of the thermal imaging camera module was calibrated for the measurement environment using a blackbody. To detect the target recommended by the standard, a deep-learning-based object recognition algorithm and an inner-canthus recognition model were developed, and the model accuracy was derived from a dataset of 100 experimenters. In addition, the error caused by the measurement distance was corrected through object distance measurement with a LiDAR module and a linear regression correction module. To measure the performance of the proposed model, an experimental environment consisting of a motor stage, an infrared thermography temperature screening system, and a blackbody was established, and temperature measurements over distances varying between 1 m and 3.5 m showed an error within 0.28℃.
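The distance-dependent correction described above can be illustrated with an ordinary least-squares fit of the temperature offset against distance. The sketch below uses numpy with invented calibration pairs, since the paper's actual calibration data and model form are not given here.

```python
# Hedged sketch: correcting a thermal camera reading for subject distance
# with a simple linear regression. Calibration numbers are invented for
# illustration; they are not the paper's measurements.
import numpy as np

# (distance in meters, observed temperature drop vs. blackbody reference in °C)
distance_m = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
temp_drop_c = np.array([0.05, 0.18, 0.31, 0.46, 0.58, 0.72])

# Fit: drop ≈ a * distance + b
a, b = np.polyfit(distance_m, temp_drop_c, deg=1)

def corrected_temperature(measured_c: float, distance: float) -> float:
    """Add back the estimated distance-dependent drop."""
    return measured_c + (a * distance + b)

print(round(corrected_temperature(36.1, distance=2.8), 2))
```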

Design and Implementation of Digital Photo Kiosk System with Auto Color Correction Module (자동 컬러 보정 모듈을 가진 디지털 포토 키오스크 시스템의 설계 및 구현)

  • Park Tae-Yong;Lee Myong-Young;Park Kee-Hyon;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.48-56
    • /
    • 2006
  • This paper presents the design and implementation of a digital photo kiosk (photo printing) system with an automatic color correction module that accounts for the gamut difference between the touch-screen display and the output device (a digital photo printer) in order to deliver user-preferred media. The module performs media correction to produce high-quality output for images captured by digital cameras and mobile phone cameras. Since it is implemented as a LUT for real-time processing, the system can offer a one-touch interface to the user. The implemented kiosk provides user-favored photos through black-and-white and sepia modes as well as brightness and contrast adjustment, and the automatic color correction module yields smooth tone transitions and prints whose colors closely match the captured image.
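As a rough illustration of LUT-based correction (the paper's actual LUT construction from device gamut measurements is not reproduced here), the sketch below builds a per-channel 256-entry lookup table and applies it to an 8-bit image with numpy. The tone curves used are arbitrary assumptions.

```python
# Hedged sketch: per-channel 1-D LUT applied to an 8-bit RGB image.
# The gamma-like tone curves are placeholders; a kiosk LUT would be built
# from measured display/printer gamut data instead.
import numpy as np

levels = np.arange(256, dtype=np.float64) / 255.0
lut = {
    "r": np.clip(255.0 * levels ** 0.95, 0, 255).astype(np.uint8),
    "g": np.clip(255.0 * levels ** 1.00, 0, 255).astype(np.uint8),
    "b": np.clip(255.0 * levels ** 1.05, 0, 255).astype(np.uint8),
}

def apply_lut(image_rgb: np.ndarray) -> np.ndarray:
    """image_rgb: HxWx3 uint8 array; returns the LUT-corrected copy."""
    out = np.empty_like(image_rgb)
    for i, ch in enumerate(("r", "g", "b")):
        out[..., i] = lut[ch][image_rgb[..., i]]   # O(1) per pixel: table lookup
    return out

corrected = apply_lut(np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8))
```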

Design of Image Recognition Module for Face and Iris Area based on Pixel with Eye Blinking (눈 깜박임 화소 값 기반의 안면과 홍채영역 영상인식용 모듈설계)

  • Kang, Mingoo
    • Journal of Internet Computing and Services
    • /
    • v.18 no.1
    • /
    • pp.21-26
    • /
    • 2017
  • In this paper, a USB-OTG (Universal Serial Bus On-the-Go) interface module was designed to provide iris information for personal identification. An image recognition algorithm that locates the face and iris areas is proposed, based on pixel differences caused by eye blinking: several facial images are captured and processed without any user action such as pressing a button on the smartphone. The pupil and iris region can be segmented quickly by computing pixel-value frame differences between two adjacent open-eye and closed-eye images. The proposed iris recognition is accelerated by choosing a suitable grid size for the eye region and by restricting the search area for the face and iris locations in the frames delivered from the camera module over the USB-OTG interface. As a result, the time needed to detect the iris location can be reduced, and the module is expected to eliminate the standby time for eye-open detection.
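The blink-based localization described above rests on differencing consecutive frames. A minimal numpy sketch of that step is given below; the threshold and the bounding-box heuristic are assumptions for illustration, not the authors' algorithm.

```python
# Hedged sketch: locate the region that changes between an open-eye and a
# closed-eye frame by absolute frame differencing. Threshold values are
# illustrative assumptions.
import numpy as np

def blink_region(frame_open: np.ndarray, frame_closed: np.ndarray,
                 threshold: int = 30):
    """Both inputs are HxW uint8 grayscale frames of the same size.

    Returns (y0, y1, x0, x1), the bounding box of pixels whose intensity
    changed by more than `threshold`, i.e. roughly the eye area.
    """
    diff = np.abs(frame_open.astype(np.int16) - frame_closed.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

# The iris search can then be restricted to this box instead of the full frame.
```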

Study on a Smart Cane for the Visually Impaired utilizing ESP32-CAM for Enhanced Safety (안전성 강화를 위한 ESP32-CAM을 활용한 시각장애인용 스마트지팡이에 대한 연구)

  • Doo-Hyeon-Hong;Jong-Hwan-Lim;Jun-Sun-Yu;Seung-Hyeop-Beak;Jae-Wook Kim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1379-1386
    • /
    • 2023
  • In this paper, research was conducted to prevent various safety accidents involving baby carriages carrying children and to make baby carriages easier to use. To keep the carriage from rolling away unattended, a brake function is applied to the carriage wheels using a pressure sensor and a servo motor. A pressure sensor and an LCD are then used to determine whether the seat belt is fastened, preventing the child from falling out of the carriage. In addition, the system uses the LCD and an LED to turn on a warning light when the temperature or humidity exceeds a set level, so that infants remain in a comfortable environment while riding.

MIPI CSI-2 & D-PHY Camera Controller Design for Future Mobile Platform (차세대 모바일 단말 플랫폼을 위한 MIPI CSI-2 & D-PHY 카메라 컨트롤러 구현)

  • Hyun, Eu-Gin;Kwon, Soon;Jung, Woo-Young
    • The KIPS Transactions:PartA
    • /
    • v.14A no.7
    • /
    • pp.391-398
    • /
    • 2007
  • In this paper, we design a camera standard interface for future mobile platforms based on the MIPI CSI-2 and D-PHY specifications. The proposed CSI-2 has an efficient multi-lane management layer in which the independent buffers of the individual lanes are merged into a single buffer. This scheme can flexibly manage data across multiple lanes even when the number of lanes supported by the camera-processor transmitter and the host processor is mismatched. The proposed CSI-2 and D-PHY are verified with a test bench, evaluated on an FPGA-based test-bed, and implemented on a mobile handset. The proposed CSI-2 & D-PHY module can be used both as a bridge chip and as a camera processor IP for future SoCs.
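In CSI-2, packet bytes are distributed round-robin across the active data lanes, so a multi-lane receiver has to re-interleave the per-lane streams. The sketch below shows that merge step in plain Python as a conceptual illustration of lane merging; it is not the buffer architecture proposed in the paper.

```python
# Hedged sketch: merging per-lane byte streams back into a single packet
# stream, assuming the transmitter distributed bytes round-robin across
# the active lanes (as in MIPI CSI-2 lane distribution).
from typing import List

def merge_lanes(lane_buffers: List[bytes]) -> bytes:
    """Re-interleave bytes from N lane buffers into one stream."""
    n_lanes = len(lane_buffers)
    total = sum(len(b) for b in lane_buffers)
    merged = bytearray(total)
    for lane_idx, buf in enumerate(lane_buffers):
        # byte k of lane i was byte (k * n_lanes + i) of the original stream
        merged[lane_idx::n_lanes] = buf
    return bytes(merged)

# Example: a 2-lane link carrying the bytes 0..7
lane0, lane1 = bytes([0, 2, 4, 6]), bytes([1, 3, 5, 7])
assert merge_lanes([lane0, lane1]) == bytes(range(8))
```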

Remote Robot Control System based on Around View (어라운드 뷰 기반의 원격 로봇 제어 시스템)

  • Kim, Hyo-Bin;Jung, Woo-Sung;Jeon, Se-Woong
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2012.07a
    • /
    • pp.449-452
    • /
    • 2012
  • In this paper, a user-viewpoint around-view system based on multiple cameras was developed to provide visual information that lets the operator directly grasp the state of the environment. Images are acquired from four downward-tilted cameras and calibrated: lens distortion is corrected and a homography matrix is computed to transform each image into a view parallel to the ground plane. The result is a satellite-image-like perspective that makes it easy for the user to obtain comprehensive situational information. To overcome the hardware limitations of using four cameras simultaneously, an embedded camera module capable of on-board image processing was developed. For user-robot interaction, the interface was built on a touch pad so that control commands can be entered through natural gestures instead of mechanical input devices such as buttons or joysticks. The developed system overcomes limits of time and space by acquiring the robot's situational information remotely and enabling user-friendly robot control. To verify this, comparative experiments against an existing system were conducted under the same environmental conditions, and the results demonstrate the effectiveness of the proposed system.
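The ground-plane transformation described above is typically done by undistorting each camera image and warping it with a homography estimated from known ground points; the OpenCV sketch below shows those two calls. The camera parameters and point correspondences are placeholders, not the calibration from the paper.

```python
# Hedged sketch: lens-distortion correction followed by a homography warp
# to a top-down (ground-parallel) view, as in a typical around-view chain.
# Camera matrix, distortion coefficients, and point pairs are placeholders.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                 # intrinsic matrix (assumed)
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # distortion terms (assumed)

# Four ground points seen in the image and their desired top-down positions
src_pts = np.float32([[120, 400], [520, 400], [600, 470], [40, 470]])
dst_pts = np.float32([[100, 100], [540, 100], [540, 460], [100, 460]])

def to_birds_eye(frame: np.ndarray) -> np.ndarray:
    undistorted = cv2.undistort(frame, K, dist)
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)
    return cv2.warpPerspective(undistorted, H, (640, 480))

# Warped views from the four cameras can then be stitched into one around view.
```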


Blind Helper program development by using Wireless Camera and Window Phone (무선 카메라 모듈과 Window Phone을 이용한 시각장애인 보조 프로그램 개발)

  • Kim, Yoeng-Woon;Park, Jong-Ki;Yu, Jae-Hoon;Hwang, Young-Sup;Heo, Jeong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2012.11a
    • /
    • pp.474-477
    • /
    • 2012
  • Modern society provides insufficient welfare for the visually impaired: damaged tactile guide blocks and ambiguous Braille markings on banknotes make even the facilities built for the visually impaired hard to use. We therefore started this project to relieve these inconveniences using a wireless camera and a Windows Phone. Guide Line Detection finds tactile guide blocks in the images from the wireless camera and announces the distance to them by voice. Bill Recognition identifies banknotes and reads them out by voice. The route guidance function lets guidance information be registered at specific points along a route so that a visually impaired user can receive directions in real time. To improve accessibility for users who have difficulty operating the device, voice recognition allows every function of the Windows Phone application to be used with only shaking gestures and voice commands. There was much trial and error because of the wireless camera's image quality and the irregular GPS error of the Windows Phone, but the project was completed by substituting a webcam for the wireless camera and virtual GPS coordinates from the BingMap API for the GPS.
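The abstract does not spell out how Guide Line Detection works; as one common, purely illustrative approach (not necessarily the authors' method), the sketch below thresholds the yellow color of tactile guide blocks in HSV space with OpenCV. The HSV range and the proximity heuristic are assumptions.

```python
# Hedged sketch (not the authors' method): detect yellow tactile guide
# blocks by HSV color thresholding and estimate rough proximity from how
# low in the frame the detected region reaches. All thresholds are assumed.
import numpy as np
import cv2

LOWER_YELLOW = np.array([20, 80, 80])     # assumed HSV lower bound
UPPER_YELLOW = np.array([35, 255, 255])   # assumed HSV upper bound

def guide_block_mask(frame_bgr: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_YELLOW, UPPER_YELLOW)
    # remove small speckles before using the mask
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def is_block_near(mask: np.ndarray, bottom_fraction: float = 0.2) -> bool:
    """Treat the block as 'near' if it appears in the bottom part of the frame."""
    h = mask.shape[0]
    return bool(mask[int(h * (1 - bottom_fraction)):, :].any())
```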