• Title/Summary/Keyword: Touch gesture


Implementation of Multi-touch Tabletop Display for Human Computer Interaction (HCI 를 위한 멀티터치 테이블-탑 디스플레이 시스템 구현)

  • Kim, Song-Gook;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.553-560 / 2007
  • This paper describes a tabletop display system that recognizes two-handed touch for real-time interaction, together with its implementation algorithms. The proposed system is built on the FTIR (Frustrated Total Internal Reflection) mechanism and supports multi-touch, multi-user hand-gesture input. The system consists of a beam projector for image projection, an acrylic screen with infrared LEDs attached, a diffuser, and an infrared camera for image acquisition. The set of gesture commands needed to control the system was defined by analyzing the input and output degrees of freedom of an interactive table and considering convenience, communicability, consistency, and completeness. The defined gestures are subdivided according to the number, positions, and movement changes of the fingers the user places on the screen for interaction. Images captured by the infrared camera undergo simple morphological operations for noise removal and finger-region detection before entering the recognition stage. In recognition, the input gesture commands are compared with predefined hand-gesture models: the system first counts the fingers touching the screen and determines their regions, then extracts the centroids of those regions and computes their angles and Euclidean distances, and finally compares the positional changes of the multi-touch points with the information in the predefined models. The effectiveness of the proposed system is demonstrated by using it to control Google Earth.
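
The recognition step described (finger-region centroids, then distance changes between touch points) can be sketched minimally. This is an illustrative sketch, not the paper's code; the function names and the `threshold` parameter are assumptions:

```python
import math

def centroid(points):
    # Mean position of one finger region's pixel coordinates.
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify_two_finger_gesture(frame_a, frame_b, threshold=10.0):
    """Compare the Euclidean distance between two touch points across
    consecutive frames to tell zooming from panning (hypothetical rule)."""
    d0 = math.dist(frame_a[0], frame_a[1])
    d1 = math.dist(frame_b[0], frame_b[1])
    if d1 - d0 > threshold:
        return "zoom-in"    # fingers moved apart
    if d0 - d1 > threshold:
        return "zoom-out"   # fingers moved together
    return "pan"            # spacing roughly unchanged
```

A full implementation would first label connected finger regions in the thresholded infrared image and feed their centroids into such a comparison.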


User Interface Design Platform based on Usage Log Analysis (사용성 로그 분석 기반의 사용자 인터페이스 설계 플랫폼)

  • Kim, Ahyoung;Lee, Junwoo;Kim, Mucheol
    • The Journal of Society for e-Business Studies / v.21 no.4 / pp.151-159 / 2016
  • The user interface is an important factor in providing efficient services to application users. In particular, mobile applications, which can be executed anytime and anywhere, give usability a higher priority than applications in other domains. Previous studies used prototype and storyboard methods to improve the usability of applications, but this approach has limitations in continuously identifying and improving the usability problems of a particular application. Therefore, in this paper, we propose a usability analysis method using touch gesture data. By grasping the user's intention after the application is deployed, it can continuously identify and improve the UI/UX problems of the application.
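
One way such touch-gesture logs can surface usability problems is by flagging elements that receive rapid repeated taps, a common frustration signal. A minimal sketch, assuming a hypothetical log schema of `(timestamp_seconds, element_id)` events:

```python
from collections import defaultdict

def flag_problem_elements(touch_log, min_repeats=3, window=1.0):
    """Flag UI elements that receive `min_repeats` taps within `window`
    seconds ('rage taps'); thresholds here are illustrative defaults."""
    taps = defaultdict(list)
    for ts, elem in touch_log:
        taps[elem].append(ts)
    flagged = set()
    for elem, times in taps.items():
        times.sort()
        # Slide a window of min_repeats consecutive taps over the series.
        for i in range(len(times) - min_repeats + 1):
            if times[i + min_repeats - 1] - times[i] <= window:
                flagged.add(elem)
                break
    return flagged
```

A production log analysis would of course combine several such signals (abandoned flows, mis-taps near targets) rather than a single heuristic.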

Building Plan Research of Meeting System based on Multi-Touch Interface (멀티터치 인터페이스 회의시스템 구축 방안 연구)

  • Jang, Suk-Joo;Bak, Seon-Hui;Choi, Tae-Jun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.5 / pp.255-261 / 2014
  • The development of the IT industry has brought major changes to modern society. These changes apply in all areas: life becomes ever more convenient, and work is handled more quickly and efficiently. The interface is a major factor at the center of these changes, and it continues to develop; NUI technology is currently in use. NUI technology requires no discrete input device and instead uses natural behaviors such as touch and gesture. Among devices applying NUI technology, the smartphone is representative, but NUI is also applied to kiosks and large tables and is used in various fields such as the culture, defense, and advertising industries. In this research, we develop a multi-touch meeting system based on a multi-touch table and propose possible improvements over existing meeting systems.

Direction of Touch Gestures and Perception of Inner Scroll in Smartphone UI (스마트폰 UI에서 터치 제스처의 방향성과 이너 스크롤의 인지)

  • Lee, Young-Ju
    • Journal of Digital Convergence / v.19 no.2 / pp.409-414 / 2021
  • In this paper, we investigated touch gestures for scrolling small, long UIs, which arise from the device characteristics of the now-popular and widely used smartphone environment. Touch gestures are performed and directed by triggers such as metaphors and affordances based on past experience. Different touch gestures are used depending on the type of navigation, motion, or transformation gesture, and scrolling is the most frequently used among them. Scrolling is generally vertical, but recently design patterns that also scroll left and right internally have been arranged in ways that cause cognitive dissonance in users. With an inner scroll that scrolls left and right while partially covering the content on the right, mixing in non-scrollable design patterns becomes a factor demanding extra attention from the user. We found that the use of triggers and of consistent design patterns can enhance the user experience even in an inner-scroll environment.

A Study on Gesture Interface through User Experience (사용자 경험을 통한 제스처 인터페이스에 관한 연구)

  • Yoon, Ki Tae;Cho, Eel Hea;Lee, Jooyoup
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.839-849 / 2017
  • Recently, the role of the kitchen has evolved from a space for mere survival into a space that reflects present life and culture. Along with these changes, the use of IoT technology is spreading, and new smart devices for the kitchen are being developed and diffused. The user experience of these smart devices is also becoming important. For natural interaction between a user and a computer, better interactions can be expected based on context awareness. This paper examines a Natural User Interface (NUI) that does not require touching the device, based on the user interface (UI) of a smart device used in the kitchen. In this method, we use image processing technology to recognize the user's hand gestures with the camera attached to the device and apply the recognized hand shapes to the interface. The gestures used in this study are proposed according to the user's context and situation, and five kinds of gestures are classified and used in the interface.
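
A context-dependent gesture-to-action binding like the one described can be sketched as a small dispatch table. The gesture names, contexts, and actions below are hypothetical; the abstract does not enumerate the five gestures:

```python
# Hypothetical bindings: context-specific entries first, then
# context-free fallbacks keyed on the pseudo-context "any".
ACTIONS = {
    ("recipe", "swipe_left"): "next_step",
    ("recipe", "swipe_right"): "prev_step",
    ("timer", "palm_open"): "pause_timer",
    ("timer", "fist"): "resume_timer",
    ("any", "point"): "select",
}

def dispatch(context, gesture):
    """Resolve a recognized hand gesture to a UI action, preferring the
    current context and falling back to context-free bindings."""
    return ACTIONS.get((context, gesture)) or ACTIONS.get(("any", gesture))
```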

A Study on the Motion and Voice Recognition Smart Mirror Using Grove Gesture Sensor (그로브 제스처 센서를 활용한 모션 및 음성 인식 스마트 미러에 관한 연구)

  • Hui-Tae Choi;Chang-Hoon Go;Ji-Min Jeong;Ye-Seul Shin;Hyoung-Keun Park
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1313-1320 / 2023
  • This paper presents the development of a smart mirror that allows control of its display through Grove gesture-sensor input and integrates voice recognition functionality. The hardware of the smart mirror consists of an LCD monitor combined with an acrylic panel onto which a semi-mirror film with a reflectance of 37% and a transmittance of 36% is attached, enabling it to function as both a mirror and a display. The proposed smart mirror eliminates the need for users to physically touch the mirror or operate a keyboard, as it implements gesture control through the Grove gesture sensor. Additionally, it incorporates voice recognition and integrates Google Assistant to display results on the screen corresponding to the user's voice commands.

Development of a Tiled Display GOCI Observation Satellite Imagery Visualization System (타일드 디스플레이 천리안 해양관측 위성 영상 가시화 시스템 개발)

  • Park, Chan-sol;Lee, Kwan-ju;Kim, Nak-hoon;Lee, Sang-ho;Seo, Ki-young;Park, Kyoung Shin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.641-642 / 2013
  • This research implemented a Geostationary Ocean Color Imager (GOCI) observation satellite imagery visualization system on a large high-resolution tiled display. The system is designed to help users observe and analyze satellite imagery more effectively on the tiled display using multi-touch and Kinect motion-gesture interaction. We developed a multi-scale image loading and rendering technique for managing the massive amount of memory and for smooth rendering of GOCI satellite imagery on the tiled display. In this system, the satellite images corresponding to the selected date are displayed on the screen sequentially in time order. Users can zoom in, zoom out, and pan the imagery, and can select buttons to trigger functions, using either multi-touch or Kinect gesture interaction.
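
Multi-scale loading of this kind rests on computing which tiles of an image-pyramid level intersect the current viewport, so only visible tiles are fetched and rendered. A minimal sketch; the tile size, coordinate convention, and function name are assumptions:

```python
def visible_tiles(viewport, level, tile_size=256):
    """Return (level, col, row) tiles intersecting a viewport given as
    (x, y, w, h) in pixel coordinates of that pyramid level (sketch)."""
    x, y, w, h = viewport
    c0, r0 = x // tile_size, y // tile_size          # top-left tile
    c1 = (x + w - 1) // tile_size                    # bottom-right tile,
    r1 = (y + h - 1) // tile_size                    # inclusive
    return [(level, c, r)
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)]
```

On zoom-in the renderer would switch `level` and re-run this query, evicting tiles that fall outside the result to bound memory use.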


Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that the performance of vision and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying acceleration signal patterns of the 26 English alphabet letters, one of the essential repertoires for robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern; the most difficult comparable problem previously undertaken was recognizing acceleration signal patterns of 10 handwritten digits. Most previous studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition.
To promote discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this, online incremental learning is applied to make the system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; it runs periodically to remove reference patterns with a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo Wii remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion; major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), N (88%) and W (100%).
Though W was recalled perfectly, it contributed much to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To prove its effectiveness, the children took a test after experiencing an English teaching service; those who played with the gesture-interface-based robot content scored 10% better than those given conventional teaching. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g. touch screen, vision, and voice.
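
The reference-pattern pruning idea described in this abstract can be sketched as a 1-NN instance-based classifier that counts each stored pattern's positive and negative contributions. This is a simplified illustration under assumed thresholds, not the paper's implementation:

```python
import math

class PrunedIBL:
    """1-NN instance-based classifier whose reference set is periodically
    pruned of patterns that cause false positives (sketch; features are
    fixed-length trajectory vectors)."""

    def __init__(self):
        self.refs = []  # each entry: [vector, label, pos_count, neg_count]

    def add(self, vec, label):
        self.refs.append([vec, label, 0, 0])

    def classify(self, vec, true_label=None):
        best = min(self.refs, key=lambda r: math.dist(r[0], vec))
        if true_label is not None:      # online feedback, when available
            if best[1] == true_label:
                best[2] += 1            # contributed to a correct call
            else:
                best[3] += 1            # contributed to a false positive
        return best[1]

    def prune(self, max_neg=2):
        # Drop references that repeatedly cause misclassifications.
        self.refs = [r for r in self.refs if r[3] <= max_neg]
```

The paper's algorithm also removes patterns with very low positive contribution; a real system would weigh both counters rather than apply a single cutoff.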

DEVS Modeling for Interactive Motion-based Mobile Contents Authoring Tool (모바일 기기 환경의 인터렉티브 모션 기반 콘텐츠 개발 도구와 DEVS 모델링)

  • Ju, Seunghwan;Choi, Yohan;Lim, Yongsoo;Seo, Heesuk
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.2 / pp.121-129 / 2015
  • Interactive media is a method of communication in which the output of the media depends on input from the users; the user interacts with, and talks back to, the media. Interactive media works through the user's participation: the media keeps the same purpose, but the user's input adds interaction and brings engaging features to the system for better enjoyment. We need digital content that uses the dynamic motion and gestures of mobile devices, so we made an authoring tool with which content producers can easily create interactive content, taking advantage of interaction through the touch screen and the gravity sensor of the mobile device. This interaction leads the user to participate in the content and can serve as a key means of supporting engagement. Furthermore, our authoring tool can be applied to various fields of publishing content.

A Gesture Interface based on Hologram and Haptics Environments for Interactive and Immersive Experiences (상호작용과 몰입 향상을 위한 홀로그램과 햅틱 환경 기반의 동작 인터페이스)

  • Pyun, Hae-Gul;An, Haeng-A;Yuk, Seongmin;Park, Jinho
    • Journal of Korea Game Society / v.15 no.1 / pp.27-34 / 2015
  • This paper proposes a user interface that enhances immersiveness and usability by combining a hologram and a haptic device with the common Leap Motion. While the Leap Motion delivers the physical motion of the user's hand to control a virtual environment, it is limited to handling virtual hands on screen and interacting with the virtual environment one-way. In our system, a hologram is coupled with the Leap Motion to improve user immersion by placing the real and virtual hands in the same location. Moreover, we provide a prototype of tactile interaction by designing a haptic device that conveys touch sensations from the virtual environment to the user's hand.
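
The haptic coupling described, delivering a touch sensation when the tracked hand meets a virtual surface, can be sketched as a depth-to-intensity mapping. The coordinate convention, units, and `max_depth` parameter are assumptions for illustration:

```python
def haptic_intensity(hand_z, surface_z, max_depth=20.0):
    """Map penetration depth past a virtual surface to a haptic output
    level in [0, 1]; zero while the hand is still above the surface."""
    depth = surface_z - hand_z      # > 0 once the hand passes the surface
    if depth <= 0:
        return 0.0
    return min(depth / max_depth, 1.0)
```

A driver loop would sample the tracker each frame and feed this value to the haptic actuator, saturating at full intensity for deep contact.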