• Title/Summary/Keyword: Information input device

567 search results

Input Device for Immersive Virtual Education (몰입형 가상교육을 위한 입력장치)

  • Jeong, GooCheol;Im, SungMin;Kim, Sang-Youn
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.34-39 / 2013
  • This paper proposes an input device that allows a user not only to interact naturally with educational content in a virtual environment but also to feel haptic feedback corresponding to that interaction. The proposed system measures the user's motion and then creates haptic feedback based on the measured position. To generate haptic information in response to the user's interaction with educational content in the virtual environment, we developed a motion input device consisting of a motion controller, a haptic actuator, a wireless communication module, and a motion sensor. An accelerometer is used as the motion sensor to measure the user's motion input. Experiments show that the proposed system creates a continuous haptic sensation without any jerky motion or vibration. (An illustrative sketch follows this entry.)

  • PDF
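As a rough editorial illustration of the control loop such a device might run, the sketch below reads acceleration samples, integrates them into a one-dimensional position estimate, and drives the haptic actuator with a low-pass-filtered level so that the output stays continuous rather than jerky. The driver functions read_accelerometer and set_actuator_level are hypothetical placeholders, not APIs from the paper.

```python
import time

ALPHA = 0.2  # low-pass filter coefficient used to keep the haptic output smooth

def read_accelerometer():
    """Hypothetical motion-sensor driver; returns acceleration in m/s^2."""
    raise NotImplementedError

def set_actuator_level(level):
    """Hypothetical haptic-actuator driver; level lies in [0.0, 1.0]."""
    raise NotImplementedError

def haptic_loop(duration_s=10.0, dt=0.01):
    velocity = position = smoothed = 0.0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        accel = read_accelerometer()
        velocity += accel * dt                              # integrate acceleration to velocity
        position += velocity * dt                           # and velocity to a position estimate
        target = min(abs(position), 1.0)                    # map position magnitude to a drive level
        smoothed = ALPHA * target + (1 - ALPHA) * smoothed  # low-pass filter against jerk
        set_actuator_level(smoothed)
        time.sleep(dt)
```

In a real device the naive double integration would drift and need filtering or re-zeroing; the point here is only the position-to-feedback mapping and the smoothing step.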

A Mobile Tour Guide System using Wearable See-through Device and Hand-held Device based on Shared Touring Context (여행자 상황 정보 기반 안경형 웨어러블 디바이스 및 핸드헬드 디바이스 투어 가이드 시스템)

  • Kim, Doyeon;Seo, Daeil;Yoo, Byounghyun;Ko, Heedong
    • Journal of the HCI Society of Korea / v.11 no.1 / pp.29-38 / 2016
  • Mobile tour guide applications help tourists search for and visit points of interest (POIs) around their location and obtain guide information about them. With the spread of wearable devices such as smart watches and wearable glasses, more people are using multiple mobile devices; a tourist may use a hand-held device, a wearable device, or both to obtain tour information. However, most mobile tour guides provide tour information with little consideration of the distinct characteristics of hand-held and wearable devices. In particular, a tourist who searches for tour information on multiple mobile devices must enter the same intention separately on each device. To alleviate these problems, we propose a mobile tour guide system with two features: it reduces redundant user input by sharing the touring context between hand-held and wearable devices, and it presents tour information according to the capabilities and usage patterns of each device. The proposed system guides tourists by compensating for the disadvantages of each device and minimizes the interaction required between the application and the tourist. (An illustrative sketch follows this entry.)
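As a speculative sketch of the shared touring context, the snippet below defines a small context record that both the hand-held and the wearable client could serialize and exchange, so a POI selected on one device does not have to be re-entered on the other. The schema and field names are illustrative assumptions, not the paper's data model.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TouringContext:
    """Context shared between hand-held and wearable clients (hypothetical schema)."""
    location: tuple                      # (latitude, longitude) of the tourist
    selected_poi: str = ""               # POI chosen on either device
    visited_pois: list = field(default_factory=list)

    def to_message(self) -> str:
        """Serialize for whatever sync channel the two devices share."""
        return json.dumps(asdict(self))

# A POI selected on the hand-held device ...
ctx = TouringContext(location=(37.5665, 126.9780))
ctx.selected_poi = "Gyeongbokgung Palace"

# ... is shared with the wearable device, which renders only a lightweight overlay view.
shared = json.loads(ctx.to_message())
print(f"Wearable overlay: heading to {shared['selected_poi']}")
```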

Object Recognition Algorithm with Partial Information

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.229-235 / 2019
  • With the development of video and optical technology, video equipment is used in a variety of fields such as identification, security, and factory automation systems that produce goods. In this paper, we investigate an algorithm that effectively recognizes an experimental object in an input image from which part of the object is missing because of a mechanical problem with the imaging device. The proposed object recognition algorithm moves and rotates the vertices constituting the outline of the experimental object onto the positions of the vertices constituting the outline of each DB model. Discordance values between the moved and rotated experimental object and the corresponding DB model are then calculated, and the minimum discordance value is selected as the final discordance between the experimental object and that DB model. The DB model with the smallest final discordance value is chosen as the recognition result for the experimental object. The proposed method obtains satisfactory recognition results using only partial information about the experimental object. (An illustrative sketch follows this entry.)
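The sketch below illustrates the discordance-minimization idea described in the abstract: the outline vertices of the experimental object are rotated and translated onto each vertex of a DB model's outline, a mismatch score is computed for every alignment, and the smallest score is kept as the final discordance. The specific discordance measure (mean nearest-vertex distance) and the rotation sampling are assumptions made for illustration, not the paper's exact formulation.

```python
import math

def align(points, ref, target, angle):
    """Rotate outline vertices about a reference vertex and move that vertex onto a target vertex."""
    c, s = math.cos(angle), math.sin(angle)
    (rx, ry), (tx, ty) = ref, target
    return [(c * (x - rx) - s * (y - ry) + tx,
             s * (x - rx) + c * (y - ry) + ty) for x, y in points]

def discordance(obj_pts, model_pts):
    """Mismatch score: mean distance from each object vertex to its nearest model vertex (assumed measure)."""
    return sum(min(math.hypot(ox - mx, oy - my) for mx, my in model_pts)
               for ox, oy in obj_pts) / len(obj_pts)

def final_discordance(obj_pts, model_pts, angle_steps=36):
    """Try every model vertex as an anchor over a set of rotations and keep the minimum discordance."""
    best = float("inf")
    for target in model_pts:
        for k in range(angle_steps):
            angle = 2 * math.pi * k / angle_steps
            moved = align(obj_pts, obj_pts[0], target, angle)
            best = min(best, discordance(moved, model_pts))
    return best

def recognize(obj_pts, db_models):
    """Pick the DB model whose final discordance value with the experimental object is smallest."""
    return min(db_models, key=lambda name: final_discordance(obj_pts, db_models[name]))
```

Because the score depends only on the vertices that are actually present, the same comparison still works when part of the object's outline is missing, which matches the partial-information setting of the paper.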

A Study on the Automata for Hangul Input of Gesture Type (제스처 형태의 한글입력을 위한 오토마타에 관한 연구)

  • Lim, Yang-Won;Lim, Han-Kyu
    • Journal of Korea Society of Industrial Information Systems / v.16 no.2 / pp.49-58 / 2011
  • With the spread of smart devices that use touch screens, Korean (Hangul) input methods have diversified. In this paper, we investigate and analyze Hangul input methods to find one that is well suited to smart devices. Using automata theory, we propose simple and efficient automata that can be used for gesture-based Hangul input. (An illustrative sketch follows this entry.)
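As a simplified illustration of such automata, the sketch below composes Hangul syllables with a three-state machine (initial consonant, vowel, optional final consonant) using the standard Unicode composition rule. The gesture-recognition front end is out of scope, the jamo tables are deliberately partial, and complex finals or carrying a final consonant into the next syllable are omitted, so this is not the automaton proposed in the paper, only a minimal example of the idea.

```python
# Partial jamo index tables for the Unicode composition rule
# (syllable = 0xAC00 + (cho * 21 + jung) * 28 + jong); only a few jamo are listed for brevity.
CHO = {'ㄱ': 0, 'ㄴ': 2, 'ㄷ': 3, 'ㄹ': 5, 'ㅁ': 6, 'ㅂ': 7, 'ㅅ': 9, 'ㅇ': 11, 'ㅈ': 12, 'ㅎ': 18}
JUNG = {'ㅏ': 0, 'ㅓ': 4, 'ㅗ': 8, 'ㅜ': 13, 'ㅡ': 18, 'ㅣ': 20}
JONG = {'': 0, 'ㄱ': 1, 'ㄴ': 4, 'ㄹ': 8, 'ㅁ': 16, 'ㅂ': 17, 'ㅅ': 19, 'ㅇ': 21}

def compose(cho, jung, jong=''):
    """Compose one Hangul syllable from its jamo."""
    return chr(0xAC00 + (CHO[cho] * 21 + JUNG[jung]) * 28 + JONG[jong])

def hangul_automaton(jamo_seq):
    """Three-state automaton: expect an initial consonant, then a vowel, then an optional final."""
    out, state, cho, jung = [], 'CHO', None, None
    for j in jamo_seq:
        if state == 'CHO' and j in CHO:
            cho, state = j, 'JUNG'
        elif state == 'JUNG' and j in JUNG:
            jung, state = j, 'JONG'
        elif state == 'JONG':
            if j in JONG:                      # greedily treat the jamo as a final consonant
                out.append(compose(cho, jung, j)); state = 'CHO'
            elif j in CHO:                     # no final: flush the syllable, start a new one
                out.append(compose(cho, jung)); cho, state = j, 'JUNG'
    if state == 'JONG':
        out.append(compose(cho, jung))
    return ''.join(out)

print(hangul_automaton(['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅡ', 'ㄹ']))  # -> '한글' under these simplifications
```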

A Handwritten Document Digitalization Framework based Defect Management System in Educational Facilities (수기문서 전자화 프레임워크 기반의 교육시설 하자관리 시스템)

  • Son, Bong-Ki
    • The Journal of Sustainable Design and Educational Environment Research / v.9 no.3 / pp.1-11 / 2010
  • In the construction industry, IT-based information systems have been applied in various ways to increase productivity. Although IT devices such as PDAs, RFID tags, barcodes, wireless networks, and web cameras have been introduced to gather information on construction sites, their effect is limited because they create additional work for engineers. In this paper, we propose a defect management system based on a handwritten document digitalization framework that demonstrates the applicability of a new IT device, the digital pen. With the proposed system, defect information can be gathered and entered into the defect management system effectively using a digital pen and paper, just as in the conventional way of working. Applying the digital pen as a data-gathering device to defect management can increase productivity by improving the work process and by building and utilizing a high-quality defect information database.

Implementation of an Autostereoscopic Virtual 3D Button in Non-contact Manner Using Simple Deep Learning Network

  • You, Sang-Hee;Hwang, Min;Kim, Ki-Hoon;Cho, Chang-Suk
    • Journal of Information Processing Systems / v.17 no.3 / pp.505-517 / 2021
  • This research presents the implementation of an autostereoscopic virtual three-dimensional (3D) button device operated in a non-contact manner. The proposed device is characterized by its visual presentation, non-contact use, and artificial intelligence (AI) engine. The device was designed to be contactless to prevent virus contamination and consists of 3D buttons shown in a virtual stereoscopic view. To identify the button pressed virtually by fingertip pointing, a simple two-stage deep learning network without convolution filters was designed. As the experiment confirms, if the composition of the input data is clearly designed, the deep learning network does not need to be configured in a complex manner. In testing and evaluation by a certification institute, the proposed button device showed high reliability and stability. (An illustrative sketch follows this entry.)
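A minimal sketch of a two-stage network without convolution filters is shown below, assuming PyTorch. The input representation (a fingertip position and pointing direction vector), the layer sizes, and the number of buttons are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

NUM_BUTTONS = 12  # assumed number of virtual 3D buttons
INPUT_DIM = 6     # assumed input: fingertip position (x, y, z) and pointing direction
HIDDEN_DIM = 32   # assumed width of the intermediate stage

class TwoStageButtonNet(nn.Module):
    """Two fully connected stages and no convolution filters, as the abstract describes."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(INPUT_DIM, HIDDEN_DIM)
        self.stage2 = nn.Linear(HIDDEN_DIM, NUM_BUTTONS)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.stage2(self.act(self.stage1(x)))  # logits over the virtual buttons

model = TwoStageButtonNet()
features = torch.randn(1, INPUT_DIM)            # placeholder fingertip features
pressed = model(features).argmax(dim=1).item()  # index of the button judged as pressed
print(f"Virtual button {pressed} selected")
```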

Development of Data Transfer Program Using USB Interface (USB 인터페이스를 이용한 데이터 전송프로그램 개발)

  • Jeon, Se-Il;Lee, Du-Bok
    • The Transactions of the Korea Information Processing Society / v.7 no.5 / pp.1553-1558 / 2000
  • Recent developments in computer and communication technology have changed automation systems that use communication networks, and USB, which replaces serial communication, has already been developed and become popular. This paper introduces the design of a high-speed data transfer system using the USB interface, together with a communication application simulated for this setting. Based on USB, additional functions can be used efficiently alongside existing field devices. The 'Winsock Connection USB Terminal,' designed for hardware simulation, controls the field device connected by USB and provides a way to control the field device remotely over a Telnet connection through TCP/IP. This scheme handles direct user input and actuates the functions of the field device using USB packet transmission. Our experiments show that the communication application operates the field device as well as previous field-device setups, and that the sufficient bandwidth guaranteed by USB yields high speed and high performance while reducing the load on the system. (An illustrative sketch follows this entry.)

  • PDF
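The sketch below illustrates the remote-control side of such a system in a generic way: a small TCP server accepts line-based commands (as a Telnet client would send them) and forwards each command toward the USB-attached field device. The USB transmission itself is left as a stub because the original system's drivers and packet format are not described in the abstract; the port number and the acknowledgement protocol are likewise assumptions.

```python
import socket

HOST, PORT = "0.0.0.0", 2323  # assumed listening address for Telnet-style clients

def send_usb_packet(payload: bytes) -> None:
    """Stub for the USB packet transmission to the field device (driver-specific)."""
    raise NotImplementedError

def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()         # a remote operator connects over TCP/IP
        with conn:
            buffer = b""
            while True:
                data = conn.recv(1024)
                if not data:
                    break
                buffer += data
                while b"\n" in buffer:     # each received line is one device command
                    line, buffer = buffer.split(b"\n", 1)
                    send_usb_packet(line.strip())
                    conn.sendall(b"OK\n")  # acknowledge back to the remote client

if __name__ == "__main__":
    serve()
```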

Design and Development of Intelligent Input Device for Students with Physical Disabilities (지체장애학생을 위한 지능형 입력 장치의 설계와 구현)

  • Jeon, Byung-Un;Go, Dung-Young
    • The Journal of the Korea Contents Association / v.7 no.4 / pp.199-205 / 2007
  • Most information and communication assistive devices for people with disabilities are devices and software for people with visual impairments, and most of them can be applied only to a specific disability type or a specific affected body part. Special keyboards and special mouse devices for people with relatively severe disabilities form the mainstream, yet their adoption rate is reported to be very low compared with the total population of people with disabilities. This study proposes a new approach, an intelligent input device that can be applied to various disability types and affected body parts, and designs and implements it to a standard at which it could actually be manufactured. In addition, the result is offered as a basis for the design of a universal assistive input device, together with suggestions on implementation and directions for future study.

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we present an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and the gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; however, some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specification, are not valuable to the application, so the application has to analyze only the region of interest and the specific character types to extract valuable information. We adopted CNN (convolutional neural network) based object detection and CRNN (convolutional recurrent neural network) technology for selective optical character recognition, which analyzes only the region of interest for selective character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings by time-series analysis, mapping feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request from the mobile device into an input queue with a FIFO (first in, first out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU. The slave process continually polls the input queue for recognition requests; when a request from the master process is present, the slave process converts the image in the input queue into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with a noise signal, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively. (A minimal sketch of the master-slave queue structure follows this entry.)
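The sketch below mirrors the master-slave FIFO structure described above, using Python's standard queue module in place of the actual AWS infrastructure; the recognition step is a stub standing in for the region-of-interest detector, the CNN feature encoder, and the bidirectional LSTM decoder.

```python
import queue
import threading

input_queue = queue.Queue()   # FIFO of reading requests pushed by the master
output_queue = queue.Queue()  # recognition results returned by the slave

def recognize(image):
    """Stub for the three-network pipeline (ROI detector + CNN encoder + BiLSTM decoder)."""
    return {"device_id": "000000000000", "usage": "0000", "positions": []}

def slave():
    """Polls the input queue (the GPU worker in the real system) and returns results."""
    while True:
        request = input_queue.get()       # blocks until a request arrives
        if request is None:               # shutdown sentinel
            break
        output_queue.put((request["id"], recognize(request["image"])))

def master(images):
    """Pushes requests coming from mobile devices and collects the results for delivery."""
    worker = threading.Thread(target=slave, daemon=True)
    worker.start()
    for i, img in enumerate(images):
        input_queue.put({"id": i, "image": img})
    results = [output_queue.get() for _ in images]
    input_queue.put(None)                 # stop the worker
    return results

print(master(["image-bytes-1", "image-bytes-2"]))
```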

Tangible Interaction : Application for A New Interface Method for Mobile Device -Focused on development of virtual keyboard using camera input - (체감형 인터랙션 : 모바일 기기의 새로운 인터페이스 방법으로서의 활용 -카메라 인식에 의한 가상 키보드입력 방식의 개발을 중심으로 -)

  • 변재형;김명석
    • Archives of design research / v.17 no.3 / pp.441-448 / 2004
  • Mobile devices such as mobile phones and PDAs are considered main interface tools in a ubiquitous computing environment. To search for information on a mobile device, the user should be able to input text as well as control a cursor for navigation, so an efficient interface method for text input within the limited dimensions of mobile devices is needed. This study suggests a new approach to mobile interaction using a camera-based virtual keyboard for text input on mobile devices. We developed a camera-based virtual keyboard prototype using a PC camera and a small LCD display. The user can move the prototype in the air to control the cursor over a keyboard layout on the screen and input text by pressing a button. The new interaction method is evaluated as competitive with a mobile phone keypad in text input efficiency. Moreover, it can be operated with one hand and makes it possible to design smaller devices by eliminating the keyboard. The method can be applied to text input for mobile devices that require especially small dimensions, and it can be adapted as a selection and navigation method for wireless internet content on small-screen devices. (An illustrative sketch follows this entry.)

  • PDF
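The sketch below, assuming OpenCV, illustrates the camera-as-pointing-device idea in a generic way: the frame-to-frame shift of the camera image is estimated with phase correlation and used to move a cursor over a virtual keyboard grid, and a keyboard key stands in for the device's select button. The paper's actual motion-tracking method, key layout, and hardware are not specified in the abstract, so all of these choices are illustrative.

```python
import cv2
import numpy as np

KEYS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]  # illustrative on-screen keyboard layout
cursor = np.array([0.0, 0.0])                  # cursor position over the key grid

def key_at(pos, cell=40.0):
    """Map the cursor position to a key in the grid (cell size is an assumption)."""
    row = int(np.clip(pos[1] // cell, 0, len(KEYS) - 1))
    col = int(np.clip(pos[0] // cell, 0, len(KEYS[row]) - 1))
    return KEYS[row][col]

cap = cv2.VideoCapture(0)
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        (dx, dy), _ = cv2.phaseCorrelate(prev, gray)  # estimated camera shift between frames
        cursor += (dx, dy)                            # move the cursor accordingly
    prev = gray
    cv2.imshow("virtual keyboard view", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord(" "):                               # space bar stands in for the select button
        print("selected key:", key_at(cursor))
    elif key == 27:                                   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```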