• Title/Summary/Keyword: camera application


An Android Application to Guide Waste Sorting using a Deep Learning Image Classifier (딥러닝 사진 분류기를 활용한 분리배출 가이드 안드로이드 응용)

  • Kim, So-Yeong;Park, So-Hui;Kim, Min-Ji;Lee, Je-min;Kim, Hyung-Shin
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.99-101 / 2021
  • Amid a waste crisis and environmental destruction, only about half of actually recyclable waste is recycled. To raise the recycling rate, an easy and convenient way to find the correct waste-sorting method is needed. This paper proposes a waste-sorting classification service to improve the recycling rate through correct separate disposal. We design a system that uses a ResNet-34 model to predict the waste-sorting class of an image taken with an Android camera and provides the corresponding disposal guide. In future work, we plan to separate the on-device and server models and personalize the model to improve its accuracy.

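The classify-then-guide step described in the abstract above can be sketched as follows: a classifier (ResNet-34 in the paper) produces per-class scores, and the app maps the predicted waste class to a disposal guide. The class names and guide texts below are hypothetical placeholders, not the paper's actual categories.

```python
# Hypothetical disposal-guide table (not from the paper).
DISPOSAL_GUIDES = {
    "plastic": "Empty, rinse, remove labels, and place in the plastics bin.",
    "glass": "Rinse and place in the glass bin; remove metal caps.",
    "paper": "Flatten boxes and keep paper dry before recycling.",
    "metal": "Rinse cans and place them in the metal bin.",
}
CLASSES = list(DISPOSAL_GUIDES)

def predict_class(logits):
    """Return the class whose model score is highest (argmax)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return CLASSES[best]

def guide_for(logits):
    """Map the predicted class to its disposal guide."""
    return DISPOSAL_GUIDES[predict_class(logits)]

# Example: scores as they might come from the on-device model.
scores = [0.1, 2.3, 0.4, -1.0]      # highest score -> "glass"
print(predict_class(scores))        # → glass
```

The actual model inference would replace the hand-written score list; the lookup stage stays the same.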

A Study on how to selectively apply a filter effect to mask wearers (마스크 착용 여부에 따른 얼굴 필터 효과 부분 적용 기술)

  • Park, Shin Wi;Lee, Eui Chul
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.772-774 / 2021
  • As mask wearing has become essential due to COVID-19, people increasingly take face photos while wearing masks. However, camera applications with face-recognition-based retouching and filtering cannot tell whether a person is wearing a mask, so they apply filter and tone effects even to the area covered by the mask. To solve this problem, this study implements a technique that determines, within the detected face region, whether a mask is worn and where it lies, and applies the filtering effect only to the remaining face region.
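The selective application idea can be illustrated with a minimal sketch: given a boolean map marking the pixels covered by the mask, the effect is applied only to the uncovered face pixels. The brightening "filter" here is a stand-in for the real retouching effect, not the paper's implementation.

```python
def apply_filter_outside_mask(image, mask_region,
                              effect=lambda p: min(p + 40, 255)):
    """image: 2D list of grayscale pixels; mask_region: 2D list of bools
    (True = covered by the face mask, so the effect is skipped there)."""
    return [
        [pix if covered else effect(pix)
         for pix, covered in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask_region)
    ]

face = [[100, 100], [100, 100]]
mask = [[False, False], [True, True]]   # lower half covered by the mask
print(apply_filter_outside_mask(face, mask))  # → [[140, 140], [100, 100]]
```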

A Study for the Accessibility of Camera-Based Mobile Applications on Touch Screen Devices for Blind People (스마트기기에서 시각장애인을 위한 카메라기반 인식 소프트웨어 인터페이스의 접근성 연구)

  • Choi, Yoonjung;Hong, Ki-Hyung
    • Journal of the HCI Society of Korea / v.7 no.2 / pp.49-56 / 2012
  • Camera-based mobile applications such as color, pattern, and object reading can improve the quality of life of blind people. However, currently available camera-based applications are inconvenient for the blind, since they do not reflect the accessibility requirements of blind users, especially on touch screens. We investigated accessibility requirements for rapidly growing camera-based mobile applications on touch-screen devices for the blind. To identify these requirements, we conducted a usability test of color reading applications with three different types of interfaces on Android OS. The results were as follows: (1) users preferred a shallow menu hierarchy; (2) initial audio help was more useful than just-in-time help; (3) users needed both manual and automatic camera shooting modes, although they preferred manual mode; (4) users wanted the OS-supported screen reader turned off while the color reading application was running; and (5) users required tactile feedback to identify the touch-screen boundary. We designed a new user interface for blind people by applying the identified accessibility requirements. A usability test of the new interface with 10 blind people showed that the identified requirements serve as useful accessibility guidelines for camera-based mobile applications.


Android-Based Devices Control System Using Web Server (웹 서버를 이용한 안드로이드 기반 기기 제어 시스템)

  • Jung, Chee-Oh;Kim, Wung-Jun;Jung, Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.741-746 / 2015
  • Recently, as the mobile operating system market and wireless communication technology have developed rapidly, many devices such as smartphones, air conditioners, smart TVs, cleaning robots, and cameras have become available with the Android operating system. Accordingly, collecting a variety of information through network-connected everyday devices is now possible. However, in the current market most devices are controlled by individually developed applications, and there is a growing need for a master application that can control multiple devices. In this paper, we propose and implement a system that can control multiple Android-based devices on a wired/wireless router (AP) registered through a web server. We expect this work to contribute to future IoT research.
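The master-application idea can be sketched as a registry that looks up devices registered with the server by name and sends them commands through one interface. The device names and command strings are hypothetical; the paper's actual server protocol is not reproduced here.

```python
class DeviceRegistry:
    """Toy stand-in for the web server's device table."""

    def __init__(self):
        self._devices = {}

    def register(self, name, handler):
        """handler: callable taking a command string, returning a status."""
        self._devices[name] = handler

    def control(self, name, command):
        """Dispatch a command to a registered device by name."""
        if name not in self._devices:
            return "unknown device"
        return self._devices[name](command)

registry = DeviceRegistry()
registry.register("camera", lambda cmd: f"camera: {cmd} ok")
registry.register("tv", lambda cmd: f"tv: {cmd} ok")
print(registry.control("camera", "capture"))  # → camera: capture ok
```

In the paper's setting the handlers would be HTTP requests routed over the AP rather than local callables.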

Design and Implementation of M2M-based Smart Factory Management Systems that controls with Smart Phone (스마트폰과 연동되는 M2M 기반 스마트 팩토리 관리시스템의 설계 및 구현)

  • Park, Byoung-Seob
    • Journal of the Korea Society of Computer and Information / v.16 no.4 / pp.189-196 / 2011
  • The main issues in this research area are monitoring environments such as weather or temperature variation and natural accidents, sensor gateways with mobile devices, and applications for mobile health care. In this paper, we propose the SFMS (Smart Factory Management System), which can effectively monitor and manage a green smart factory area based on M2M services and a smartphone with the Android OS platform. The proposed system runs on the TinyOS-based IEEE 802.15.4 protocol stack. To validate system functionality, we built sensor network environments equipped with four application sensors: temperature/humidity, PIR, door, and camera sensors. We also built and tested the SFMS to provide a novel model for event detection systems with a smartphone.

M-Learning Application for Ubiquitous Learning Based on Android Web Platform (안드로이드 웹 플랫폼 기반 U-Learning을 위한 M-Learning 애플리케이션)

  • Kim, Hye-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.12 / pp.5564-5569 / 2011
  • In this paper, we introduce Augmented Reality (AR) on the Android platform for ubiquitous learning (u-learning). Android is breaking new ground in mobile computing and open technologies. It is versatile, as it is not limited to mobile phones but can be installed on various devices, and it gives developers the opportunity to leverage their development skills while building an exciting and active community. AR is poised to be part of most future apps; all it takes is a decent processor, a camera, a compass, and a GPS, all of which are increasingly common on smartphones. Through AR we can build educational tools that give individuals full flexibility to receive, send, and review training and detailed product information through an increasingly ubiquitous web-enabled communication device. We propose an AR application for species identification using an Android smartphone. The study is suited to the field of biology and is useful in students' outdoor experimental activities, for example while visiting a zoo or botanical garden.

An Automatic Cosmetic Ingredient Analysis System based on Text Recognition Techniques (텍스트 인식 기법에 기반한 화장품 성분 자동 분석 시스템)

  • Ye-Won Kim;Sun-Mi Hong;Seong-Yong Ohm
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.565-570 / 2023
  • Some people, such as pregnant women and patients with skin diseases, are sensitive to cosmetic ingredients, and others experience side effects from cosmetics. Searching for harmful ingredients one by one while shopping is cumbersome, and knowing and remembering the functional ingredients that suit you helps when purchasing new cosmetics. A system is therefore needed that identifies cosmetic ingredients on the spot through photography. In this paper, we introduce a smartphone application, <Hwa Ahn>, which identifies cosmetic ingredients immediately by photographing the ingredient list displayed on the product. The system is more effective and convenient than existing systems in that it automatically recognizes and classifies the ingredients when the camera is pointed at the ingredient label, or when a photo of the label is retrieved from the album. If widely used, the system is expected to prevent skin diseases caused by cosmetics in daily life and to reduce purchases of unsuitable cosmetics.
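The classification step that follows text recognition can be sketched as normalizing the OCR'd ingredient list and checking it against a watch list. The watch-list entries below are hypothetical examples, not the application's actual ingredient database.

```python
WATCH_LIST = {"parfum", "triclosan", "oxybenzone"}  # hypothetical entries

def flag_ingredients(ocr_text):
    """Split an OCR'd ingredient line and return the flagged entries."""
    ingredients = [item.strip().lower() for item in ocr_text.split(",")]
    return sorted(set(ingredients) & WATCH_LIST)

label = "Aqua, Glycerin, Parfum, Tocopherol, Triclosan"
print(flag_ingredients(label))  # → ['parfum', 'triclosan']
```

A real pipeline would also need fuzzy matching to tolerate OCR errors in ingredient names.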

User Motion Recognition Healthcare System Using Smart-Band (스마트밴드를 이용한 사용자 모션인식 헬스 케어 시스템 구현)

  • Park, Jin-Tae;Hwang, Hyun-Seo;Yun, Jun-Soo;Park, Gyung-Soo;Moon, Il-Young
    • Journal of Advanced Navigation Technology / v.18 no.6 / pp.619-624 / 2014
  • With the development of smartphones, various smart devices have appeared, and wearable computing devices that can be attached to the human body have been in the spotlight. In this paper, we develop a watch-type wearable device that detects the user's movement, together with a system that connects the device to smart TVs or smartphones so that users can store and manage their physical information on those devices. Existing health-care wearables store information by connecting to smartphones, and smart TV health applications usually rely on camera-based motion detection. However, there are limits when connecting smartphone systems to devices from different manufacturers, and smart TVs without cameras cannot support camera-based detection. With the proposed system, user information collected through the wearable device and smartphone allows exercise to be tracked and managed anywhere, and the information can also be checked through the smart TV application. Building on this system, future work can measure user behavior more accurately with recognition technology and other devices.

Development of Recognition Application of Facial Expression for Laughter Theraphy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung;Li, Yu-Jie;Song, Won-Chang;Kim, Young-Un;Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.494-503 / 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on a smartphone. It detects the face region from the smartphone's front-camera image using the AdaBoost face detection algorithm, then detects the lip region within the detected face. From the next frame on, it does not re-detect the face but tracks the lip region found in the previous frame using the three-step block matching algorithm. Because the size of the detected lip image varies with the distance between the camera and the user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, the system computes lip eigenvectors using PCA (Principal Component Analysis) and recognizes the laughter expression with a multilayer perceptron neural network. Experimental results show that the proposed method processes 16.7 frames/s and that the proposed illumination normalization reduces illumination variation better than existing methods, improving recognition performance.
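The three-step block matching search used for lip tracking can be sketched as follows: starting from the previous lip position, test nine candidate displacements, move to the best one by sum-of-absolute-differences (SAD), halve the step, and repeat. Frames are plain 2D grayscale lists for illustration; the paper's actual implementation details (block size, search range) are not reproduced.

```python
def sad(frame, top, left, block):
    """Sum of absolute differences between `block` and the same-sized
    region of `frame` whose top-left corner is (top, left)."""
    h, w = len(block), len(block[0])
    return sum(
        abs(frame[top + r][left + c] - block[r][c])
        for r in range(h) for c in range(w)
    )

def three_step_search(frame, block, start, step=4):
    """Return the (top, left) position in `frame` best matching `block`."""
    h, w = len(block), len(block[0])
    best = start
    while step >= 1:
        # nine candidates around the current best position
        candidates = [
            (best[0] + dr, best[1] + dc)
            for dr in (-step, 0, step) for dc in (-step, 0, step)
        ]
        # keep only candidates that stay fully inside the frame
        candidates = [
            (r, c) for r, c in candidates
            if 0 <= r <= len(frame) - h and 0 <= c <= len(frame[0]) - w
        ]
        best = min(candidates, key=lambda p: sad(frame, p[0], p[1], block))
        step //= 2
    return best

# Demo: a distinctive 2x2 block placed at (6, 5) in an empty frame,
# searched for starting from the "previous frame" position (6, 4).
frame = [[0] * 16 for _ in range(16)]
block = [[9, 8], [7, 6]]
frame[6][5], frame[6][6] = 9, 8
frame[7][5], frame[7][6] = 7, 6
print(three_step_search(frame, block, (6, 4)))  # → (6, 5)
```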

Online Monitoring System based notifications on Mobile devices with Kinect V2 (키넥트와 모바일 장치 알림 기반 온라인 모니터링 시스템)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1183-1188 / 2016
  • The Kinect sensor version 2 is a camera released by Microsoft as a computer vision device and natural user interface for game consoles like the Xbox One. It acquires color images, depth images, audio input, and skeletal data at a high frame rate. In this paper, using depth images, we present a surveillance system for a certain area within the Kinect's field of view. Using a computer vision library (Emgu CV), when an object is detected in the target area, it is tracked and the Kinect camera takes an RGB image and sends it to a database server. A mobile application on the Android platform was developed to notify the user that the Kinect has sensed unusual motion in the target region and to display the RGB image of the scene. The user receives the notification in real time and can react appropriately when valuables are in the monitored area or in other cases involving a restricted zone.
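The depth-based detection idea can be sketched as comparing the current depth frame with a reference frame of the empty scene and reporting motion when enough pixels change beyond a tolerance. The threshold values below are illustrative assumptions, not values taken from the paper.

```python
def motion_detected(reference, current, depth_tol=30, min_pixels=3):
    """reference/current: 2D lists of depth values (e.g. millimeters).
    Returns True when at least `min_pixels` pixels changed by more
    than `depth_tol`."""
    changed = sum(
        1
        for ref_row, cur_row in zip(reference, current)
        for ref, cur in zip(ref_row, cur_row)
        if abs(ref - cur) > depth_tol
    )
    return changed >= min_pixels

empty_scene = [[2000] * 4 for _ in range(4)]
with_object = [row[:] for row in empty_scene]
for r in (1, 2):                  # an object 1 m closer to the sensor
    for c in (1, 2):
        with_object[r][c] = 1000
print(motion_detected(empty_scene, with_object))  # → True
```

In the described system, a positive detection would trigger the RGB capture, database upload, and push notification to the Android client.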