• Title/Summary/Keyword: Smartphone Sensor

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • According to traffic accident statistics for the last five years, more accidents occurred at night than during the day. Accidents have various causes, and one of the major causes is inappropriate or missing street lighting, which confuses drivers' vision and leads to accidents. In this paper, we design and implement a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, so that inadequate street-light facilities and unlit areas can be identified. The application is implemented in native C/C++ using the Android NDK, which runs faster than equivalent code written in Java or other languages. To measure road luminance, the input image is converted from the RGB color space to the YCbCr color space, and the Y channel gives the luminance of the road. The application detects the road lanes, computes the lane luminance, and uploads it to the database server. It captures road video through the smartphone camera and reduces computational cost by restricting processing to a region of interest (ROI) of each input image. The ROI is converted to grayscale, and the Canny edge detector is applied to extract the lane outlines. A Hough line transform is then applied to obtain a group of candidate lanes, and the two lane boundaries are selected by a lane detection algorithm that uses the gradients of the candidate lanes. Once both lanes are detected, a triangular area is set up extending 20 pixels down from the intersection point of the lanes, and the road luminance is estimated from this triangle. The Y value is computed from the R, G, and B values of each pixel in the triangle, and the average Y value is scaled to a range of 0 to 100 to report the road luminance, with pixel values visualized on a scale from black to green. The car's location, obtained from the smartphone's GPS sensor, is stored in the database server over a wireless connection every 10 minutes, after analyzing the road video for the luminance of the road about 60 meters ahead. We expect that the collected road luminance information can warn drivers for safer driving and effectively improve renovation plans for road lighting management.
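
The abstract above walks through a concrete pipeline (ROI selection, grayscale conversion, Canny edges, Hough lines, lane pairing by slope, and Y-channel averaging over a triangle 20 pixels below the lane intersection). The following Python/OpenCV sketch shows one way such a pipeline could be assembled; the lower-half ROI, slope thresholds, and Canny/Hough parameters are illustrative assumptions, not values taken from the paper.

```python
# Sketch of a lane-luminance estimate: Canny + Hough lane detection, then the
# mean Y (YCbCr) over a triangle just below the lane intersection, scaled 0..100.
import cv2
import numpy as np

def estimate_road_luminance(frame_bgr):
    """Return road luminance in 0..100 from one dashcam frame, or None."""
    h = frame_bgr.shape[0]
    roi = frame_bgr[h // 2:, :]                      # assume lower half as ROI

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None

    # Group candidate lines into left/right lanes by slope sign, keep the steepest.
    left, right = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x2 == x1:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:
            left.append((slope, x1, y1))
        elif slope > 0.3:
            right.append((slope, x1, y1))
    if not left or not right:
        return None
    ls, lx, ly = max(left, key=lambda t: abs(t[0]))
    rs, rx, ry = max(right, key=lambda t: abs(t[0]))

    # Intersection of the two lane lines (vanishing point), using y = s*(x - x0) + y0.
    xi = (ls * lx - rs * rx + ry - ly) / (ls - rs)
    yi = ls * (xi - lx) + ly

    # Triangle reaching 20 pixels below the intersection, bounded by both lanes.
    y_base = yi + 20
    xl = lx + (y_base - ly) / ls
    xr = rx + (y_base - ry) / rs
    triangle = np.array([[xi, yi], [xl, y_base], [xr, y_base]], dtype=np.int32)

    mask = np.zeros(roi.shape[:2], np.uint8)
    cv2.fillPoly(mask, [triangle], 255)

    # Luminance = mean of the Y channel (YCbCr) inside the triangle, scaled to 0..100.
    y_channel = cv2.cvtColor(roi, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    mean_y = cv2.mean(y_channel, mask=mask)[0]
    return mean_y / 255.0 * 100.0
```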

Implementation of Smart Shopping Cart using Object Detection Method based on Deep Learning (딥러닝 객체 탐지 기술을 사용한 스마트 쇼핑카트의 구현)

  • Oh, Jin-Seon;Chun, In-Gook
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.262-269 / 2020
  • Recently, many attempts have been made to reduce the time required for payment in various shopping environments. At the same time, in the era of the Fourth Industrial Revolution, artificial intelligence is advancing and Internet of Things (IoT) devices are becoming more compact and cheaper, so integrating the two technologies has made it easier to build unmanned environments that save people time. In this paper, we propose a smart shopping cart system based on low-cost IoT equipment and deep-learning object detection. The proposed smart cart system consists of a camera for real-time product detection, an ultrasonic sensor that acts as a trigger, a weight sensor to determine whether a product is put into or taken out of the cart, a smartphone application that provides the user interface for a virtual shopping cart, and a deep learning server that stores the trained product data. The modules communicate over TCP/IP and the Hypertext Transfer Protocol (HTTP), and the server recognizes products with an object detection system built on the You Only Look Once (YOLO) darknet library. The user can check the list of items placed in the smart cart through the smartphone app and pay for them automatically. The smart cart system proposed in this paper can be applied to unmanned stores with high cost-effectiveness.
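
As a rough illustration of the cart-side flow described above (ultrasonic trigger, camera capture, HTTP request to the YOLO detection server, and a weight-change check to decide add versus remove), here is a minimal Python sketch. The endpoint URL, response schema, and sensor-reading helpers are hypothetical placeholders, not the authors' interfaces.

```python
# Sketch of a smart-cart loop: trigger -> capture frame -> HTTP detection
# request -> use the weight delta to add or remove the item in a virtual cart.
import cv2
import requests

DETECT_URL = "http://detection-server.local:8080/detect"   # hypothetical endpoint

def read_weight_grams():              # placeholder for the load-cell driver
    raise NotImplementedError

def wait_for_ultrasonic_trigger():    # placeholder for the ultrasonic sensor
    raise NotImplementedError

def detect_product(frame):
    """POST a JPEG frame to the detection server and return the top label."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    resp = requests.post(DETECT_URL, files={"image": jpeg.tobytes()}, timeout=5)
    detections = resp.json().get("detections", [])          # assumed schema
    if not detections:
        return None
    return max(detections, key=lambda d: d["confidence"])["label"]

def cart_loop(camera_index=0):
    cart = {}                                  # virtual cart: label -> count
    cap = cv2.VideoCapture(camera_index)
    while True:
        wait_for_ultrasonic_trigger()
        weight_before = read_weight_grams()
        ok, frame = cap.read()
        if not ok:
            continue
        label = detect_product(frame)
        weight_after = read_weight_grams()
        if label is None:
            continue
        if weight_after > weight_before:       # product placed into the cart
            cart[label] = cart.get(label, 0) + 1
        elif weight_after < weight_before:     # product taken out of the cart
            cart[label] = max(cart.get(label, 0) - 1, 0)
        print(cart)                            # would be pushed to the smartphone app
```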

Self-Organizing Middleware Platform Based on Overlay Network for Real-Time Transmission of Mobile Patients Vital Signal Stream (이동 환자 생체신호의 실시간 전달을 위한 오버레이 네트워크 기반 자율군집형 미들웨어 플랫폼)

  • Kang, Ho-Young;Jeong, Seol-Young;Ahn, Cheol-Soo;Park, Yu-Jin;Kang, Soon-Ju
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.7 / pp.630-642 / 2013
  • Transmitting the vital sign streams of mobile patients remotely requires mobility for both patient and caregiver, detection of the patient's abnormal symptoms, and self-organizing binding of the related computing resources into a service. In existing research, the vital sign stream is transmitted through a centralized approach, which exposes a single point of failure and routes data traffic through a central server even when the service is localized. The proposed self-organizing middleware platform, based on a heterogeneous overlay network, transmits real-time data from sensor devices (including vital sign measurement devices) to smartphones, TVs, PCs, and external systems through an overlay network with a self-organizing mechanism. It can transmit and store vital sign streams from sensor devices autonomously, without the arbitration of a management server, and several receiving devices can simultaneously receive and display the stream in real time through the interaction of the nodes.
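
To make the decentralized streaming idea concrete, the sketch below sends one sensor's samples to any number of receivers at once with no central relay, using plain UDP multicast as a stand-in for the paper's self-organizing overlay mechanism. The group address, port, and sample format are assumptions.

```python
# Sketch: one publisher streams vital-sign samples to many subscribers directly,
# with no central server, via UDP multicast (a stand-in for the overlay network).
import json
import socket
import struct
import time

GROUP, PORT = "239.1.1.1", 5005      # hypothetical multicast group

def publish_vital_signs(read_sample, interval_s=1.0):
    """Send one JSON-encoded sample per interval to every subscribed node."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    while True:
        sample = read_sample()                       # e.g. {"hr": 72, "spo2": 98}
        sock.sendto(json.dumps(sample).encode(), (GROUP, PORT))
        time.sleep(interval_s)

def subscribe_vital_signs(on_sample):
    """Join the multicast group and hand each received sample to a callback."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(4096)
        on_sample(json.loads(data))
```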

A Study on the Development of 3D Virtual Reality Campus Tour System for the Adaptation of University Life to Freshmen in Non-face-to-face Situation - Autonomous Operation of Campus Surrounding Environment and University Information Guide Screen Design Using Visual Focus Movement - (비대면 상황에서 신입생 대학생활적응을 위한 3차원 가상현실 캠퍼스 투어시스템 개발연구 - 시야초점의 움직임을 활용한 캠퍼스주변 환경의 자유로운 이동과 대학정보안내화면 GUI설계 -)

  • Lim, Jang-Hoon
    • Journal of Information Technology Applications and Management / v.28 no.3 / pp.59-75 / 2021
  • This study aims to establish a foundation for free movement around campus and the delivery of rich university information in an HCI environment, within a VR environment where college freshmen can freely tour campus facilities. The purpose of this study is to develop a three-dimensional VR campus tour system that provides a media environment for delivering rich university information services to freshmen in non-face-to-face situations. The system is designed to solve the problem of discontinuity, in which VR360 footage does not connect spaces the way real space does, and to reduce the VR production expertise required by constructing an integrated production environment for the tour system. It also aims to solve the inefficiency of movement in virtual space, which otherwise demands a large amount of user effort, by constructing a GUI that uses the movement of the visual focus. The campus environment was designed as a three-dimensional virtual reality using 3D graphic design. In non-face-to-face situations, college freshmen can freely switch among an HMD VR device viewer, a smartphone gyroscope-sensor viewer, and an FPS operation mode. The design elements of the three-dimensional virtual reality campus tour system were classified as ①visualization of factual experiences, ②continuity of space movement, ③operation and automatic operation modes, ④natural landscape animation, ⑤animation according to wind direction, ⑥actual space movement mode, ⑦informatization of spatial understanding, ⑧GUI by experience environment, ⑨text GUI by building, ⑩VR360/3D360 studio environment, ⑪three-dimensional virtual space coupling block module, ⑫3D360-3D virtual space transmedia zone, and ⑬transformable GUI (VR device dual viewer, gyro-sensor full viewer, FPS operation viewer), and an integrated production environment was established for each element. The system is launched online (http://vautu.com/u1), with a GUI for free-movement mode and university information screens to help freshmen adapt to college life, and is designed to be usable simultaneously on current media such as PCs, Android devices, and iPads. User research was conducted, a development presentation and a forum on excellence in university innovation support projects were held, and the system was applied on the website of a particular university. College freshmen will be able to experience university information directly, from the web and app to the virtual reality campus environment.
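
The gaze-focus GUI described above can be pictured with a small dwell-selection loop: when the view direction stays within a few degrees of a navigation hotspot for long enough, the system moves to the linked space. The sketch below is only a schematic of that interaction; the angle threshold, dwell time, and hotspot data are assumptions rather than details from the paper.

```python
# Sketch of dwell-based gaze navigation: hold the view focus on a hotspot for a
# short time to move to the linked space, so no controller input is required.
import math
import time

DWELL_SECONDS = 1.5          # how long the focus must rest on a hotspot
ANGLE_THRESHOLD_DEG = 5.0    # angular radius of the focus area

def angle_between(yaw_pitch_a, yaw_pitch_b):
    """Angular distance in degrees between two (yaw, pitch) view directions."""
    def to_vec(yaw, pitch):
        y, p = math.radians(yaw), math.radians(pitch)
        return (math.cos(p) * math.cos(y), math.cos(p) * math.sin(y), math.sin(p))
    a, b = to_vec(*yaw_pitch_a), to_vec(*yaw_pitch_b)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

def gaze_navigation_loop(read_view_direction, hotspots, move_to):
    """hotspots: {space_id: (yaw, pitch)}; read_view_direction() -> (yaw, pitch)."""
    focused, since = None, None
    while True:
        view = read_view_direction()             # e.g. from the gyroscope sensor
        target = next((sid for sid, d in hotspots.items()
                       if angle_between(view, d) < ANGLE_THRESHOLD_DEG), None)
        if target != focused:
            focused, since = target, time.time()
        elif focused is not None and time.time() - since >= DWELL_SECONDS:
            move_to(focused)                      # jump to the linked 3D/VR360 space
            focused, since = None, None
        time.sleep(0.05)
```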

Implementation of Air Pollutant Monitoring System using UAV with Automatic Navigation Flight

  • Shin, Sang-Hoon;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.77-84 / 2022
  • In this paper, we propose a system for monitoring air pollutants such as fine dust using an unmanned aerial vehicle (UAV) capable of autonomous navigation. Existing air quality management systems collect information either through fixed sensor boxes or through measurement sensors on a drone flown with a controller, which has the disadvantage that data collection and transmission require additional procedures and are limited in coverage. To overcome this problem, a GPS module for location information and a PMS7003 module for fine dust measurement are mounted on a UAV that navigates autonomously along a designated flight plan. The collected data are stored on an SD module during flight, and after the flight is completed the user presses a transmit button so that the data are uploaded to a remote database through a smartphone app connected via Bluetooth, forming a one-stop pipeline. In addition, an HTML5-based web monitoring page is provided to interested users for real-time monitoring. The results of this study can be used in UAV-based environmental monitoring systems, and in the future sensors for other pollutants such as sulfur dioxide and carbon dioxide will be added to develop the system into a total environmental control system.
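
As an illustration of the on-board data pipeline (PMS7003 readout, GPS tagging, and logging to the SD card for later transmission), here is a hedged Python sketch. The serial port name, GPS reader, and log path are assumptions; the frame parsing follows the commonly documented PMS7003 format (0x42 0x4D header, 32-byte frame, big-endian 16-bit fields, trailing checksum) rather than anything specified in the abstract.

```python
# Sketch: read one PMS7003 frame over serial, tag it with a GPS fix, and append
# a timestamped row to a CSV log on the SD card.
import csv
import struct
import time
import serial   # pyserial

def read_pms7003_frame(port):
    """Return (pm1_0, pm2_5, pm10) in ug/m3 from one valid frame, or None."""
    if port.read(1) != b"\x42" or port.read(1) != b"\x4d":
        return None
    body = port.read(30)                       # length(2) + data(26) + checksum(2)
    if len(body) != 30:
        return None
    checksum = 0x42 + 0x4D + sum(body[:-2])
    if (checksum & 0xFFFF) != struct.unpack(">H", body[-2:])[0]:
        return None
    # Atmospheric-environment PM values: fields 4..6 of the data block.
    pm1_0, pm2_5, pm10 = struct.unpack(">HHH", body[8:14])
    return pm1_0, pm2_5, pm10

def log_loop(gps_fix, log_path="/mnt/sd/dust_log.csv", device="/dev/ttyS0"):
    """gps_fix() -> (lat, lon); appends timestamped rows to the SD-card log."""
    with serial.Serial(device, 9600, timeout=2) as port, \
         open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            frame = read_pms7003_frame(port)
            if frame:
                lat, lon = gps_fix()
                writer.writerow([time.time(), lat, lon, *frame])
                f.flush()
```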

IoT Open-Source and AI based Automatic Door Lock Access Control Solution

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Young, Ko Eun;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.8-14 / 2020
  • Recently, there has been increasing demand for an integrated access control system capable of user recognition, door control, and facility operations control for smart building automation. Commercially available door lock access control solutions need to improve on the current level of security, where security is compromised as soon as a password or digital key is exposed to a stranger. At present, access control solution providers focus on developing automatic access control systems using RF-based technologies such as Bluetooth and WiFi, but all existing automatic door access control technologies require an additional hardware interface and remain vulnerable to security threats. This paper proposes a user identification and authentication solution for automatic door lock control using camera-based visible light communication (VLC). The proposed approach uses the cameras installed in the building facility, the users' smart devices, and LED lights driven by IoT open-source controllers installed in the building infrastructure. The building's IoT-controlled LED lights transmit the authorized user and facility information as a color grid code; the smart device camera decodes the user information, verifies it against the stored user information, indicates the authentication status to the user, and sends an authentication acknowledgement to the camera integrated with the facility door lock to control the door lock operation. The camera-based VLC receiver uses artificial intelligence (AI) methods to decode the VLC data and improve VLC performance. The paper implements a testbed using IoT open-source controlled LED lights, a CCTV camera, and user smartphones. The experimental results are verified with a custom convolutional neural network (CNN)-based VLC decoding method on smart devices and a PC-based CCTV monitoring solution. The results confirm that the proposed solution is effective and robust for automatic door access control.
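
To illustrate the color-grid VLC idea, the sketch below samples an assumed 4x4 grid of cells from the detected LED panel region of a camera frame and maps each cell's mean color to a 2-bit symbol by nearest palette color. This simple rule stands in for the paper's CNN-based decoder, and the grid size, palette, and symbol mapping are assumptions.

```python
# Sketch: decode a color grid code from a camera frame by averaging each grid
# cell's color and mapping it to the nearest palette symbol.
import cv2
import numpy as np

GRID = 4                                       # assumed 4x4 color grid
PALETTE = {                                    # assumed symbol colors (BGR)
    "00": (0, 0, 255),      # red
    "01": (0, 255, 0),      # green
    "10": (255, 0, 0),      # blue
    "11": (255, 255, 255),  # white
}

def decode_color_grid(frame_bgr, roi):
    """roi = (x, y, w, h) of the detected LED panel; returns a bit string."""
    x, y, w, h = roi
    panel = frame_bgr[y:y + h, x:x + w]
    bits = []
    for row in range(GRID):
        for col in range(GRID):
            cell = panel[row * h // GRID:(row + 1) * h // GRID,
                         col * w // GRID:(col + 1) * w // GRID]
            mean_bgr = cell.reshape(-1, 3).mean(axis=0)
            # Nearest palette color decides the 2-bit symbol for this cell.
            symbol = min(PALETTE, key=lambda s: np.linalg.norm(mean_bgr - PALETTE[s]))
            bits.append(symbol)
    return "".join(bits)
```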

A Study on Intuitive IoT Interface System using 3D Depth Camera (3D 깊이 카메라를 활용한 직관적인 사물인터넷 인터페이스 시스템에 관한 연구)

  • Park, Jongsub;Hong, June Seok;Kim, Wooju
    • The Journal of Society for e-Business Studies / v.22 no.2 / pp.137-152 / 2017
  • The decline in the price of IT devices and the development of the Internet have created a new field called the Internet of Things (IoT). IoT, which creates new services by connecting everyday objects to the Internet, is pioneering new forms of business in combination with Big Data, and its potential uses can be said to be unlimited. Standardization organizations are also actively working on the smooth interconnection of IoT devices. However, there is an aspect these efforts overlook: to control IoT equipment or acquire information from it, the interworking details (IP address, Wi-Fi, Bluetooth, NFC, etc.) and the related application software or apps must be developed separately. To address this, existing research has explored augmented reality approaches using GPS or markers, but these have the drawback that a separate marker is required and is recognized only at close range. In studies that use a GPS address with a 2D camera, it was difficult to implement an active interface because the distance to the target device could not be determined. In this study, we use a 3D depth camera installed on a smartphone and compute spatial coordinates automatically by combining the distance measurement with the phone's sensor information, without a separate marker. A coordinate lookup then identifies the corresponding IoT device and enables information acquisition from, and control of, that device. From the user's point of view, this reduces the burden of interworking with IoT equipment and installing apps. Furthermore, if this technology is applied to public services and smart glasses, it will reduce duplicate investment in software development and expand public services.
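
The coordinate-lookup step can be sketched as follows: project a point along the phone's viewing direction using the depth camera's distance reading, then match it against a registry of known IoT device positions. The device registry, sensor readings, and matching tolerance below are illustrative assumptions, not part of the paper.

```python
# Sketch: estimate the 3D point the phone is aimed at from its position,
# orientation, and the depth reading, then find the nearest registered device.
import math

# Hypothetical registry of IoT devices and their indoor coordinates (meters).
DEVICE_REGISTRY = {
    "lamp-1": (2.0, 3.5, 1.2),
    "tv-living-room": (5.1, 0.8, 1.0),
    "aircon-office": (8.4, 2.2, 2.3),
}

def target_point(phone_pos, yaw_deg, pitch_deg, depth_m):
    """Project a point depth_m meters along the phone's viewing direction."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    x0, y0, z0 = phone_pos
    return (x0 + depth_m * dx, y0 + depth_m * dy, z0 + depth_m * dz)

def find_device(point, tolerance_m=0.5):
    """Return the registered device closest to the aimed point, if any."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    name, d = min(((n, dist(point, p)) for n, p in DEVICE_REGISTRY.items()),
                  key=lambda t: t[1])
    return name if d <= tolerance_m else None

# Example: phone held at (0, 0, 1.5), aimed slightly downward, depth 4.04 m.
aimed = target_point((0.0, 0.0, 1.5), yaw_deg=60.3, pitch_deg=-4.3, depth_m=4.04)
print(find_device(aimed))   # -> "lamp-1"
```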