• Title/Summary/Keyword: 웹 카메라 (web camera)


The Smart Door with the full body mirror to help you get ready to go out. (외출준비를 도와주는 전신거울형 안면인식 스마트 도어)

  • Kim, Jinsoo; Lee, Sangeun; Min, Chaeeun; Kim, Jinwook; Choi, Byoungjo
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.501-503 / 2018
  • This paper proposes a smart door concept that helps users prepare to go out by displaying, on the door itself, the information each individual needs before leaving. When the smart door is activated, face recognition is performed on an image captured by the built-in web camera, and the resulting personal ID is used to query the database. The personal checklist data in the database is managed through a mobile app, which also provides an additional door-lock control function. Besides the personal checklist, the user's schedule, the day's weather, and traffic information are shown on the smart door's LCD and announced by voice at the same time. The smart door presented in this paper is designed with a half-mirror over the LCD, so in addition to displaying information it also serves as a full-body mirror. Because it adds useful functions to a door found in any home, the smart door makes efficient use of space and helps users leave without forgetting the items they need.
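The face-recognition step described above can be approximated with off-the-shelf OpenCV components. Below is a minimal sketch, assuming the opencv-contrib package, a pre-trained LBPH model file ("members.yml"), and a built-in web camera; the model file and any downstream database lookup are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch: identify the person at the door from one webcam frame.
# Requires opencv-contrib-python for cv2.face; "members.yml" is a hypothetical
# pre-trained LBPH model of the household members.
import cv2

def identify_user(frame, recognizer, cascade):
    """Detect the largest face in the frame and return (user_id, confidence), or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Pick the largest detection, assuming it is the person standing at the door.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    user_id, confidence = recognizer.predict(gray[y:y + h, x:x + w])
    return user_id, confidence

if __name__ == "__main__":
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("members.yml")        # hypothetical pre-trained model file
    cap = cv2.VideoCapture(0)             # built-in web camera
    ok, frame = cap.read()
    if ok:
        print("recognized:", identify_user(frame, recognizer, cascade))
    cap.release()
```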

Automatic Punching System using Machine Vision for FPC (비전을 이용한 FPC 필름용 자동펀칭 시스템)

  • Lee Seong-Cheol; Lee Young-Choon; Kim Seong-Min; Sim Ki-Jung
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.10a / pp.976-979 / 2005
  • This paper describes the development of an automatic FPC (flexible printed circuit) punching machine intended to improve working conditions and reduce cost. FPCs are used to detect contact positions in keypads and buttons, for example in cellular phones. Depending on the quality of the printed ink and the position of the reference punching point on the FPC, the resistance and current can drift to malfunctioning values. The reference punching point is 2 mm or larger in diameter. Because the punching operation has been performed manually, punching accuracy varies with the operator's condition. To improve on this manual operation, an automatic FPC punching system is introduced. Test algorithms and programs produced good results on the designed system, leading to increased productivity and substantial savings in raw material such as FPC by avoiding defective output.
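As a rough illustration of the machine-vision step, the sketch below locates a circular reference punching mark of roughly 2 mm diameter with OpenCV's Hough circle transform. The image file, pixel-to-millimetre calibration, and threshold values are assumptions for illustration, not values from the paper.

```python
# Locate a circular reference punching mark in a grayscale camera frame.
import cv2

MM_PER_PIXEL = 0.05   # assumed camera calibration: 2 mm ~ 40 px diameter

img = cv2.imread("fpc_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical capture
if img is None:
    raise FileNotFoundError("fpc_frame.png")
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Search for circles in the expected radius range of the reference mark.
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
    param1=100, param2=30, minRadius=18, maxRadius=30)

if circles is not None:
    cx, cy, r = circles[0][0]
    print(f"reference mark at ({cx:.1f}, {cy:.1f}) px, "
          f"diameter {2 * r * MM_PER_PIXEL:.2f} mm")
else:
    print("no reference mark found")
```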


Implementation of an Immersive Game System Using Facial Feature Tracking for People with Physical Disabilities (신체 장애우를 위한 얼굴 특징 추적을 이용한 실감형 게임 시스템 구현)

  • Ju, Jin-Sun; Shin, Yun-Hee; Kim, Eun-Yi
    • Proceedings of the Korean Information Science Society Conference / 2006.10a / pp.475-478 / 2006
  • Immersive games aim for realism by reflecting the player's body movements and senses as fully as possible. Existing immersive games were designed for non-disabled players and therefore require a wide range of motion, which makes them difficult for people with physical disabilities to use. This paper proposes an immersive game system that can be played on a PC using only minimal facial movement. The proposed system extracts the eye region from webcam images with a neural-network-based texture classifier. The extracted eye region is tracked in real time with the Mean-shift algorithm, and the tracking result drives the mouse cursor, so a linked Flash game can be controlled entirely by eye movement. To verify the effectiveness of the proposed system, its performance was evaluated separately with disabled and non-disabled participants. The results show that the system can be used conveniently and intuitively by people with physical disabilities, and that robust face tracking allows the immersive game to run even in cluttered environments.
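The tracking loop can be sketched with OpenCV as below. The paper's neural-network texture classifier is replaced here by OpenCV's Haar eye cascade purely to initialise the tracker; the Mean-shift update and the cursor-position output follow the idea described above.

```python
# Sketch: initialise an eye region once, then follow it with meanShift and
# print the centre that would drive the mouse cursor.
import cv2

cap = cv2.VideoCapture(0)
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read from the web camera")
eyes = eye_cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
if len(eyes) == 0:
    raise RuntimeError("no eye found to initialise the tracker")
track_window = tuple(int(v) for v in eyes[0])   # (x, y, w, h)

# Hue-histogram back-projection model of the tracked region.
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

for _ in range(300):                            # bounded loop for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, criteria)
    x, y, w, h = track_window
    print("cursor target:", x + w // 2, y + h // 2)   # would be mapped to a mouse move
cap.release()
```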


A Technology of Greenhouse Management System based on USN (USN 기반의 그린하우스 관리 기술)

  • Rhee, Inbaum; Jeon, Byeong-chan; An, Young-chang; Lee, Jong-kyo; Bae, Tae-hyun; Park, Ju-hee; Ryu, Daehyun
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.1162-1165 / 2011
  • The purpose of this study is to make crop cultivation more convenient by remotely monitoring and controlling the environment inside a greenhouse, and to build a database of the collected data from which optimal growing conditions can be derived. To this end, a two-span greenhouse was built and fitted with several kinds of sensors and cameras, and the sensed data were collected and archived remotely. For user convenience, a web page was created for real-time data retrieval and control, and some functions were also made available on mobile devices. The stability of data collection and transmission, and of user-driven control of the greenhouse environment, was verified experimentally through long-term field tests. The system offers convenience to growers who cultivate crops in greenhouses, giving them considerable flexibility with respect to time and place, and it could also be extended to similar facilities such as factories, offices, and homes.
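A bare-bones sketch of the collection side of such a system follows, assuming a hypothetical read_sensors() interface in place of the actual USN hardware; it simply polls readings at a fixed interval and appends them to a CSV file that a web page or later analysis step could consume.

```python
# Poll greenhouse sensors periodically and append readings to a local CSV log.
import csv
import random
import time
from datetime import datetime

def read_sensors():
    """Placeholder returning fake temperature/humidity values (stand-in for the USN)."""
    return {"temp_c": 20 + random.random() * 10,
            "humidity_pct": 50 + random.random() * 30}

def collect(path="greenhouse_log.csv", interval_s=60, samples=3):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            reading = read_sensors()
            writer.writerow([datetime.now().isoformat(),
                             round(reading["temp_c"], 2),
                             round(reading["humidity_pct"], 2)])
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    collect(interval_s=1)   # short interval just for a quick local test
```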

Expressway Falling Object recognition system using Deep Learning (딥러닝을 이용한 고속도로 낙하물 객체 인식 시스템)

  • Sang-min Choi; Min-gyun Kim; Seung-yeop Lee; Seong-Kyoo Kim; Jae-wook Shin; Woo-jin Kim; Seong-oh Choo; Yang-woo Park
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.451-452 / 2023
  • A fallen object on an expressway must be removed immediately to prevent accidents, but it is hard to find quickly before a patrol car spots it or a report comes in, and many drivers simply pass by without reporting. To address this, we developed a system that uses a drone and YOLO to detect objects dropped on the road and transmit information about them. The real-time object detection algorithm YOLOv5 was deployed on a desktop PC, and a drone capable of filming the road in real time was built in-house on an F450 frame fitted with a Pixhawk flight controller, supporting modules, and a camera. The developed system provides detection results and information about fallen objects, which can be checked through the ground control system and the web. Because fallen objects can be found faster with less manpower, quicker responses can be expected.
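A short sketch of the YOLOv5 inference step is given below. It loads the public pretrained model through torch.hub; the paper's falling-object classes would require custom-trained weights, indicated here by the hypothetical file name falling_objects.pt, and the drone video source is replaced by a single still image.

```python
# Run YOLOv5 on one road image and print the detected objects.
import torch

# Stock COCO model; swap in custom weights for falling-object classes if available.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# model = torch.hub.load("ultralytics/yolov5", "custom", path="falling_objects.pt")

results = model("road_frame.jpg")          # hypothetical frame from the drone camera
detections = results.pandas().xyxy[0]      # one row per detected object
for _, det in detections.iterrows():
    print(f"{det['name']}: confidence {det['confidence']:.2f} "
          f"at ({det['xmin']:.0f}, {det['ymin']:.0f})")
```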


Subimage Detection of Window Image Using AdaBoost (AdaBoost를 이용한 윈도우 영상의 하위 영상 검출)

  • Gil, Jong In; Kim, Manbae
    • Journal of Broadcast Engineering / v.19 no.5 / pp.578-589 / 2014
  • A window image is what is displayed on a monitor screen when application programs run on a computer; it may contain a webpage, a video player, and many other applications. Compared with other applications, a webpage delivers a wide variety of information in diverse forms. Unlike a natural image captured by a camera, a window image such as a webpage contains heterogeneous components, including text, logos, icons, and subimages, each conveying a different kind of information to the user. Because text and images are presented in many different forms, components with different characteristics need to be separated locally. In this paper, we divide window images into sub-blocks and classify each divided region as background, text, or subimage. The detected subimages can be applied to 2D-to-3D conversion, image retrieval, image browsing, and so forth. Among the many possible classification methods, we adopt AdaBoost to verify that a machine-learning-based algorithm can be effective for subimage detection. In the experiments, the subimage detection rate was 93.4 % with a false alarm rate of 13 %.
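The block-classification idea can be sketched with scikit-learn's AdaBoostClassifier as follows. The per-block features (mean, standard deviation, edge density), the block size, and the synthetic training labels are illustrative assumptions rather than the descriptors used in the paper.

```python
# Split a window image into blocks, compute simple features, classify with AdaBoost.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

BLOCK = 32   # assumed block size in pixels

def block_features(gray):
    """Return one feature vector per BLOCK x BLOCK block of a grayscale image."""
    edges = cv2.Canny(gray, 50, 150)
    feats = []
    for y in range(0, gray.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, gray.shape[1] - BLOCK + 1, BLOCK):
            b = gray[y:y + BLOCK, x:x + BLOCK]
            e = edges[y:y + BLOCK, x:x + BLOCK]
            feats.append([b.mean() / 255, b.std() / 255, (e > 0).mean()])
    return np.array(feats)

# Hypothetical labelled blocks: 0 = background, 1 = text, 2 = subimage.
X_train = np.random.rand(300, 3)
y_train = np.random.randint(0, 3, 300)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

screen = cv2.imread("window_capture.png", cv2.IMREAD_GRAYSCALE)   # hypothetical screenshot
if screen is not None:
    labels = clf.predict(block_features(screen))
    print("predicted block labels:", labels[:20])
```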

Automated Training Database Development through Image Web Crawling for Construction Site Monitoring (건설현장 영상 분석을 위한 웹 크롤링 기반 학습 데이터베이스 구축 자동화)

  • Hwang, Jeongbin; Kim, Jinwoo; Chi, Seokho; Seo, JoonOh
    • KSCE Journal of Civil and Environmental Engineering Research / v.39 no.6 / pp.887-892 / 2019
  • Many researchers have developed vision-based technologies to monitor construction sites automatically. To achieve high performance with vision-based technologies, it is essential to build a large, high-quality training image database (DB). To do that, researchers usually visit construction sites, install cameras at the jobsites, and collect images for the training DB. However, such a human- and site-dependent approach requires a huge amount of time and cost, and it is difficult to represent the range of characteristics of different construction sites and resources. To address these problems, this paper proposes a framework that automatically constructs a training image DB using web crawling techniques. For validation, the authors conducted two different experiments with the automatically generated DB: construction work type classification and equipment classification. The results showed that the method could successfully build the training image DB for the two classification problems, and the findings of this study can be used to reduce the time and effort required to develop vision-based technologies for construction sites.
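A simplified sketch of the crawling step is shown below: given a list of image URLs gathered for a keyword (for example "excavator"), it downloads them into a class-named folder to seed a training DB. The URL list, folder layout, and class name are illustrative assumptions, not the paper's pipeline.

```python
# Download a list of image URLs into a per-class folder of a training DB.
import os
import requests

def download_images(urls, class_name, out_dir="training_db"):
    folder = os.path.join(out_dir, class_name)
    os.makedirs(folder, exist_ok=True)
    for i, url in enumerate(urls):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException as err:
            print(f"skipped {url}: {err}")
            continue
        with open(os.path.join(folder, f"{class_name}_{i:05d}.jpg"), "wb") as f:
            f.write(resp.content)

if __name__ == "__main__":
    example_urls = ["https://example.com/excavator1.jpg"]   # placeholder URL list
    download_images(example_urls, "excavator")
```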

Development of Greenhouse Environment Monitoring & Control System Based on Web and Smart Phone (웹과 스마트폰 기반의 온실 환경 제어 시스템 개발)

  • Kim, D.E.; Lee, W.Y.; Kang, D.H.; Kang, I.C.; Hong, S.J.; Woo, Y.H.
    • Journal of Practical Agriculture & Fisheries Research / v.18 no.1 / pp.101-112 / 2016
  • Monitoring and control of the greenhouse environment play a decisive role in greenhouse crop production. A network system for greenhouse control was developed using recent networking and wireless communication technologies. In this paper, a remote monitoring and control system for greenhouses based on a smartphone and an internet-connected computer is presented. The system provides a real-time, integrated greenhouse management service that collects environmental information and controls greenhouse facilities through a network of sensors and equipment. The graphical user interface of the integrated management system was designed on an HMI basis, and the experiments showed that sensor data and device status were collected and integrated in real time. Because the sensor data and device status can be displayed on a web page, they can be transmitted by the server program to a remote computer and a mobile smartphone at the same time. The monitored data can be downloaded, analyzed, and saved from the server program in real time via mobile phone or the internet at a remote location. Performance tests confirmed that the greenhouse control system works correctly under the prescribed operating conditions, and data collection, display, event actions, and crop and equipment monitoring all showed reliable results.
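The web-serving side of such a system might look like the minimal Flask sketch below, which exposes the latest greenhouse readings as JSON for a browser or smartphone client to poll. The get_latest_reading() function is a hypothetical stand-in for the paper's server and HMI layer.

```python
# Tiny Flask app serving the latest greenhouse readings as JSON.
from datetime import datetime
from flask import Flask, jsonify

app = Flask(__name__)

def get_latest_reading():
    """Placeholder for a query against the real sensor database."""
    return {"timestamp": datetime.now().isoformat(),
            "temp_c": 24.3, "humidity_pct": 61.0, "vent_open": True}

@app.route("/api/greenhouse/latest")
def latest():
    return jsonify(get_latest_reading())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```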

An Implementation of Gaze Recognition System Based on SVM (SVM 기반의 시선 인식 시스템의 구현)

  • Lee, Kue-Bum; Kim, Dong-Ju; Hong, Kwang-Seok
    • The KIPS Transactions: Part B / v.17B no.1 / pp.1-8 / 2010
  • Research on gaze recognition, which estimates where the user is currently looking, has developed rapidly and has many applications. Existing gaze-recognition studies suffer from the need for expensive equipment such as infrared (IR) LEDs, IR cameras, and head-mounted devices. This study proposes and implements an SVM-based gaze-recognition system that uses only a single PC web camera. The proposed system divides the screen's 36 gaze cells into 4 or 9 regions and recognizes the user's gaze in 4 or 9 directions. An image-filtering method based on difference image entropy is also applied to improve recognition performance. To evaluate the proposed system, experiments were carried out comparing it with a gaze-recognition system using the eye corners and eye center and with a PCA-based system. In the experiments, the recognition rate of the proposed SVM-based system was 94.42 % for 4 directions and 81.33 % for 9 directions; with the difference-image-entropy filtering applied, the rates were 95.37 % and 82.25 %, respectively. These results demonstrate higher performance than existing gaze-recognition systems.
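The SVM classification step can be illustrated with scikit-learn as below. Flattened grayscale eye crops labelled with one of nine gaze directions are fed to an RBF-kernel SVM; the synthetic data, crop size, and hyperparameters are placeholders rather than the paper's actual settings, and the difference-image-entropy filter is omitted.

```python
# Train an SVM to map flattened eye-region images to one of nine gaze directions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

N_DIRECTIONS = 9          # 3 x 3 grid of gaze targets
EYE_SHAPE = (24, 48)      # assumed size of the normalised eye crop

# Placeholder data: in the real system each row would be a normalised webcam eye crop.
rng = np.random.default_rng(0)
X = rng.random((450, EYE_SHAPE[0] * EYE_SHAPE[1]))
y = rng.integers(0, N_DIRECTIONS, 450)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale")
clf.fit(X_train, y_train)
print("held-out accuracy on placeholder data:", clf.score(X_test, y_test))
```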

Non-Marker Based Mobile Augmented Reality Technology Using Image Recognition (이미지 인식을 이용한 비마커 기반 모바일 증강현실 기법 연구)

  • Jo, Hui-Joon; Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.258-266 / 2011
  • AR (Augmented Reality) technology is now commonplace, and its application areas have spread into many forms as its use has become generalized and diversified. Existing camera-vision-based AR has relied on marker-based methods rather than information from the real world, which limits the application areas and the environments in which a user can become immersed in an application. In this paper, we propose a novel AR method in which objects are recognized from real-world data and the related 3D content is displayed, using image processing and the embedded camera of a smart mobile device, without any markers. Objects are recognized by comparison with pre-registered reference images, and the similarity computation is kept small to improve processing speed given the limited resources of smart mobile devices. After the 3D content is displayed on the screen, the proposed method supports interaction through touch events on the device, so the user can then retrieve object-related information in a web browser according to their choice. Using the system described in this paper, we analyzed and compared the recognition rate, processing speed, and recognition errors against existing AR technologies. The experimental results, obtained and verified in a smart mobile environment, show that the proposed approach is a suitable alternative AR technique.
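The image-recognition step behind such marker-less AR can be sketched with ORB feature matching in OpenCV, as below. The file names, distance cut-off, and match-count threshold are illustrative assumptions; in a full system a successful match would trigger rendering of the linked 3D content.

```python
# Match ORB features between a pre-registered reference image and a camera frame.
import cv2

reference = cv2.imread("registered_object.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
if reference is None or frame is None:
    raise FileNotFoundError("reference or camera frame image missing")

orb = cv2.ORB_create(nfeatures=500)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)
good = [m for m in matches if m.distance < 40]   # assumed distance cut-off

print(f"{len(good)} good matches")
if len(good) > 25:                               # assumed recognition threshold
    print("object recognised: overlay the linked 3D content here")
```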