• Title/Abstract/Keywords: User Interaction Data

362 search results (processing time: 0.022 s)

양방향 DMB 서비스를 위한 사용자 이벤트 분석 모듈 (User Event Analyzer for Bidirectional DMB Data Service)

  • 이송록;라잉수킨;김상욱
    • 한국HCI학회:학술대회논문집
    • /
    • 한국HCI학회 2007년도 학술대회 1부
    • /
    • pp.624-629
    • /
    • 2007
  • Digital Multimedia Broadcasting (DMB) is a digital radio transmission system for sending multimedia such as radio, TV, and data casting to mobile devices. DMB specifications are now a major standard for digital broadcasting, and a bidirectional service based on the MPEG-4 system has been standardized. So far, however, only a few simple demonstration systems exist for this bidirectional service. In this paper, we introduce a bidirectional DMB data service system that provides interaction between the user and the DMB server without additional equipment such as a web server. The proposed system can capture user interaction information, send it and the corresponding response through the existing DMB transmission channel, and finally update the original contents. The action event from the user is the most important element in developing a bidirectional DMB system; therefore, capturing event data from the user is the first step required for the bidirectional DMB service. In this paper, we propose an interaction manager module for user events. The system extracts user events and plans the update of the original scene with the server's reaction information.


Emotional Communication on Interactive Typography System

  • Lim, Sooyeon
    • International Journal of Contents
    • /
    • Vol. 14, No. 2
    • /
    • pp.41-44
    • /
    • 2018
  • In this paper, we propose a novel method for developing an expressive typography authoring tool that conveys personal emotions. Our goal is to implement an interactive typography system that does not rely on any particular language, provides an easy and natural user interface, and allows immediate interaction. For this purpose, we convert the text entered by a user into image data, which is then used for interaction with the user. The image data are synchronized with the user's skeleton information obtained from a depth camera. We decompose the characters using the formal structure of the language to provide typographical movement that responds more dynamically to the user's motion. Thus, the system provides interaction at the level of character components rather than whole characters, allowing the user emotional and aesthetic immersion in his or her creation.

Discernment of Android User Interaction Data Distribution Using Deep Learning

  • Ho, Jun-Won
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 14, No. 3
    • /
    • pp.143-148
    • /
    • 2022
  • In this paper, we employ a deep neural network (DNN) to discern the distribution of Android user interaction data from artificial data distributions. We use a real Android user interaction trace dataset collected from [1] to evaluate our DNN design. In particular, we use a sequential model with four dense hidden layers and one dense output layer in TensorFlow and Keras. We deploy a sigmoid activation function for the dense output layer with one neuron and a ReLU activation function for each dense hidden layer with 32 neurons. Our evaluation shows that our DNN design achieves a high test accuracy of at least 0.9955 and a low test loss of at most 0.0116 across all artificial data distributions.
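The architecture described in the abstract (four ReLU hidden layers of 32 neurons and one sigmoid output neuron) can be sketched as a plain numpy forward pass; the feature dimensionality and the random weights below are illustrative assumptions, since the paper's trained model and input encoding are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_dense(n_in, n_out):
    # Small random weights stand in for trained parameters.
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Four dense hidden layers (32 neurons, ReLU) and one sigmoid output neuron,
# matching the layer layout described in the abstract.
n_features = 8  # hypothetical dimensionality of one interaction record
layers = [init_dense(n_features, 32)]
layers += [init_dense(32, 32) for _ in range(3)]
w_out, b_out = init_dense(32, 1)

def predict(x):
    h = x
    for w, b in layers:
        h = relu(h @ w + b)
    # Output in (0, 1): probability that the sample is real user data.
    return sigmoid(h @ w_out + b_out)

batch = rng.normal(size=(4, n_features))  # synthetic stand-in inputs
probs = predict(batch)
print(probs.shape)  # (4, 1)
```

In the paper this network is built with the Keras `Sequential` API and trained as a binary classifier; the sketch only shows the shape of the computation.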

홈 네트워크에서 UI 디자인을 위한 사용자 데이터 구조화에 관한 연구 (A Structured Method of User Data for User Interface Design in Home Network)

  • 정지홍;김영철;반영환
    • 대한인간공학회지
    • /
    • Vol. 26, No. 2
    • /
    • pp.61-66
    • /
    • 2007
  • The networked home is connected to the external world through a high-speed network, and the devices inside the house are connected through wired and wireless networks. Acquiring user data is an essential step for designing the user interface in user-centered design. In a networked home, the number of use cases increases exponentially because connected use cases must also be considered. Because user data for the networked home are so complicated, they must be acquired and analyzed with a structured methodology. We surveyed 40 people to acquire home context data and analyzed the data using 5W1H (Who, Where, What, When, Why, How). We established a framework for the user data based on tasks, user, time, space, objects, and environment, and the home context data were structured with this framework. The framework simplifies the home context and is helpful for user interface design in the home network.
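A record in the paper's framework could be sketched as a small data structure with one field per axis the authors list (tasks, user, time, space, objects, environment); the field names and the sample values are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class HomeContextRecord:
    """One observed home use case, structured along the six axes."""
    task: str                  # what the user is trying to do
    user: str                  # who performs the task
    time: str                  # when it happens
    space: str                 # where in the home
    objects: list = field(default_factory=list)  # devices involved
    environment: str = ""      # ambient conditions

# A hypothetical survey observation structured by the framework.
record = HomeContextRecord(
    task="watch a movie",
    user="parent",
    time="evening",
    space="living room",
    objects=["TV", "set-top box", "lights"],
    environment="dim lighting",
)
print(record.task, len(record.objects))
```

Structuring each observation this way is what makes the exponentially many connected use cases comparable and filterable during UI design.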

MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 5, No. 4
    • /
    • pp.267-273
    • /
    • 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated with the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method initially detects the positions and lengths of all extended fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents the user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.
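The classification step described above (hand count plus number of folded fingers selecting a posture pattern) can be sketched as a lookup; the pattern names below are hypothetical placeholders, not the actual MPEG-U AUI pattern identifiers.

```python
def classify_posture(hands):
    """hands: list of per-hand finger states, True = finger extended."""
    folded = sum(not f for hand in hands for f in hand)
    key = (len(hands), folded)
    patterns = {                     # illustrative mapping only
        (1, 0): "open-hand",
        (1, 5): "fist",
        (2, 0): "both-open",
        (2, 10): "both-fists",
    }
    return patterns.get(key, "unknown")

# One open hand; two fully folded hands.
print(classify_posture([[True] * 5]))                # open-hand
print(classify_posture([[False] * 5, [False] * 5]))  # both-fists
```

In the paper, the recognized pattern is then serialized into the MPEG-U part 2 schema so that any compliant device can consume it.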

Interactive Typography System using Combined Corner and Contour Detection

  • Lim, Sooyeon;Kim, Sangwook
    • International Journal of Contents
    • /
    • Vol. 13, No. 1
    • /
    • pp.68-75
    • /
    • 2017
  • Interactive typography is a process in which a user communicates by interacting with text as a moving element. This research covers interactive typography that responds to a user's gestures in real time. To make the system language-independent, the entered text data are preprocessed into image data. The image data are then recognized and interaction points are set using computer vision techniques such as the Harris corner detector and contour detection. User interaction is achieved through skeleton information tracked by a depth camera. By synchronizing the user's skeleton information acquired by Kinect (a depth camera) with the typography components (interaction points), all user gestures are linked with the typography in real time. In an experiment conducted in both English and Korean, users reported an 81% satisfaction level with an interactive typography system whose text components moved discretely in accordance with the users' gestures. The experiment also showed that the perceived sensibility varied with the size and speed of the text and with the interactive alteration. The results show that interactive typography can potentially be an accurate communication tool, not merely a uniform text transmission system.
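The Harris corner response used to place interaction points can be sketched in plain numpy on a synthetic image (a bright square, whose four corners should respond strongly); in practice a library routine such as OpenCV's `cornerHarris` would be used, and the window size and `k` below are common defaults, not the paper's settings.

```python
import numpy as np

def box_smooth(a, r=1):
    """Average each pixel over a (2r+1) x (2r+1) window."""
    padded = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def harris_response(img, k=0.04):
    Iy, Ix = np.gradient(img)      # image gradients
    Sxx = box_smooth(Ix * Ix)      # smoothed structure tensor entries
    Syy = box_smooth(Iy * Iy)
    Sxy = box_smooth(Ix * Iy)
    # Response is large only where gradient energy exists in two directions.
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy) ** 2

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0              # a square: four corners, four edges
R = harris_response(img)
print(R[5, 5] > 0, R[5, 10] < 0)   # → True True (corner positive, edge negative)
```

Each positive peak of `R` on the rendered text image becomes an interaction point that is later bound to a tracked skeleton joint.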

Robust Sentiment Classification of Metaverse Services Using a Pre-trained Language Model with Soft Voting

  • Haein Lee;Hae Sun Jung;Seon Hong Lee;Jang Hyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17, No. 9
    • /
    • pp.2334-2347
    • /
    • 2023
  • Metaverse services generate text data, a form of ubiquitous-computing data, in real time, and analyzing user emotions in these data is an important task for such services. This study classifies user sentiment using deep learning and pre-trained language models based on the transformer architecture. Whereas previous studies collected data from a single platform, the current study incorporates review data retrieved with the keyword "Metaverse" from both the YouTube and Google Play Store platforms for more general applicability. As a result, the Bidirectional Encoder Representations from Transformers (BERT) and Robustly Optimized BERT Approach (RoBERTa) models combined with a soft voting mechanism achieved the highest accuracy of 88.57%. In addition, the area under the curve (AUC) score of the ensemble model comprising RoBERTa, BERT, and A Lite BERT (ALBERT) was 0.9458. The results demonstrate that ensembles including the RoBERTa model perform well, so the RoBERTa model can be applied on platforms that provide metaverse services. The findings contribute to the advancement of natural language processing techniques in metaverse services, which are increasingly important in digital platforms and virtual environments. Overall, this study provides empirical evidence that sentiment analysis using deep learning and pre-trained language models is a promising approach to improving user experiences in metaverse services.
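Soft voting as used above reduces to averaging the class probabilities of the member models and taking the argmax; the probability tables below are hypothetical stand-ins for BERT, RoBERTa, and ALBERT outputs on two reviews, not values from the paper.

```python
import numpy as np

# Per-model class probabilities for two reviews
# (columns: negative, positive) -- illustrative numbers only.
bert    = np.array([[0.30, 0.70], [0.60, 0.40]])
roberta = np.array([[0.20, 0.80], [0.55, 0.45]])
albert  = np.array([[0.40, 0.60], [0.70, 0.30]])

# Soft voting: average the probabilities, then pick the winning class.
mean_probs = np.mean([bert, roberta, albert], axis=0)
labels = mean_probs.argmax(axis=1)  # 0 = negative, 1 = positive
print(labels.tolist())  # → [1, 0]
```

Because the average uses the full probability vectors rather than hard votes, a confident minority model can still sway the ensemble, which is the usual argument for soft over hard voting.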

Behavior recognition system based fog cloud computing

  • Lee, Seok-Woo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence
    • /
    • Vol. 6, No. 3
    • /
    • pp.29-37
    • /
    • 2017
  • Current behavior recognition systems do not reconcile the differing formats of sensor data measured by users' sensor modules or devices. To process large volumes of sensor data in heterogeneous formats, it is therefore necessary to support data processing, sharing, and collaboration services between users and the behavior recognition system, as well as real-time interaction between them. To solve this problem, we propose a fog-cloud-based behavior recognition system for processing human body sensor data. The system resolves the heterogeneity of sensor data measured by the user's sensor module or device by standardizing data formats in a DBaaS (Database as a Service) cloud accessed through the fog cloud. In addition, placing the fog cloud between users and the cloud increases the proximity between users and servers, allowing real-time interaction. On this basis, we propose a behavior recognition system that recognizes the user's behavior and serves observers in a collaborative environment. The proposed system addresses server overload caused by large sensor data and the loss of real-time interaction caused by the distance between users and servers, and it demonstrates the delivery of consistent behavior recognition services capable of real-time interaction.

링크드 데이터를 이용한 인터랙티브 요리 비디오 질의 서비스 시스템 (An Interactive Cooking Video Query Service System with Linked Data)

  • 박우리;오경진;홍명덕;조근식
    • 지능정보연구
    • /
    • Vol. 20, No. 3
    • /
    • pp.59-76
    • /
    • 2014
  • With the development of smart media devices, video can now be watched without temporal or spatial constraints, and users' viewing behavior continues to shift from passive to active viewing. While watching a video, users not only view it but also search for detailed information about content of interest. As a result, interaction between the user and the media device has emerged as a major concern. In this environment, users have come to recognize the need for a way to obtain the information they want easily and quickly, without having to find it themselves through web searches, and the demand for performing such interaction directly has grown accordingly. Obtaining accurate information amid a flood of information has also become an important issue. Satisfying these requirements calls for a system that provides user interaction functions and applies Linked Data. In this paper, we chose cooking, one of the fields people are most interested in, identified its problems, and examined ways to improve them. Cooking is a field of sustained public interest. Cooking-related information such as recipes, videos, and text has grown continuously into a part of big data, but methods and functions for providing interaction between users and cooking content are lacking, and the information is often inaccurate. Users can easily watch cooking videos, but because videos deliver information in only one direction, it is difficult to satisfy users' requirements or to obtain accurate information through search. To solve these problems, this paper presents an environment designed for user convenience through a UI (User Interface) and UX (User Experience) that provide information while the user watches a cooking video, and proposes a cooking assistance service system that uses Linked Data to support interaction between the user and the video and to provide accurate, context-appropriate information.

User Identification Using Real Environmental Human Computer Interaction Behavior

  • Wu, Tong;Zheng, Kangfeng;Wu, Chunhua;Wang, Xiujuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 6
    • /
    • pp.3055-3073
    • /
    • 2019
  • In this paper, a new user identification method is presented that uses human-computer interaction (HCI) behavior data collected in a real environment to improve usability. The user behavior data in this paper are collected continuously, without fixing experimental conditions such as text length or number of actions. To illustrate the characteristics of real environmental HCI data, the probability density distributions and performance of keyboard and mouse data are analyzed through random sampling and the Support Vector Machine (SVM) algorithm. Based on this analysis, the Multiple Kernel Learning (MKL) method is applied, for the first time, to user HCI behavior identification, owing to the heterogeneity of keyboard and mouse data. All candidate kernels are compared to determine the MKL algorithm's parameters and to ensure its robustness. The data analysis shows that keyboard data have a narrower probability density distribution than mouse data; keyboard data perform best with a 1-min time window, while mouse data perform best with a 10-min time window. Finally, experiments using the MKL algorithm with three global polynomial kernels and ten local Gaussian kernels achieve a user identification accuracy of 83.03% on a real environmental HCI dataset, demonstrating that the proposed method achieves encouraging performance.
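The core of MKL is that the effective kernel is a weighted combination of base kernels; a minimal sketch with one polynomial and two Gaussian (RBF) kernels follows, where the weights, degree, and bandwidths are illustrative, not the paper's fitted values (which use three polynomial and ten Gaussian kernels).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))  # 6 hypothetical HCI feature vectors

def poly_kernel(X, degree=3, c=1.0):
    """Global polynomial kernel: (x.y + c)^degree."""
    return (X @ X.T + c) ** degree

def rbf_kernel(X, gamma=0.5):
    """Local Gaussian kernel: exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

kernels = [poly_kernel(X), rbf_kernel(X, 0.1), rbf_kernel(X, 1.0)]
weights = np.array([0.5, 0.3, 0.2])  # learned by the MKL solver in practice

# Combined Gram matrix: a weighted sum of valid kernels is a valid kernel.
K = sum(w * k for w, k in zip(weights, kernels))
print(K.shape)  # (6, 6)
```

In the paper, the kernel weights are optimized jointly with an SVM-style classifier so that the keyboard-oriented and mouse-oriented kernels each contribute according to their discriminative power.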