• Title/Summary/Keyword: natural user interface

Search results: 226

UX Evaluation of Financial Service Chatbot Interactions (금융 서비스 챗봇의 인터렉션 유형별 UX 평가)

  • Cho, Gukae; Yun, Jae Young
    • Journal of the HCI Society of Korea / v.14 no.2 / pp.61-69 / 2019
  • Recently, as a new ICT trend, chatbots are being actively introduced in the field of finance. A chatbot delivers services through conversational interaction with users. The purpose of this study is to investigate the effect of interaction dialogue type on the efficiency, usability, emotional response, and perceived security of financial service chatbots. Based on theoretical considerations, chatbot conversation styles were divided into closed, open, and mixed types according to the implementation method. Three financial chatbot prototypes were built, and experiments were conducted on account inquiry, account transfer, and Q&A financial tasks. Analysis of the experimental results showed that a chatbot's interaction dialogue type affects efficiency and usability. Users found the closed and mixed conversation types to be intuitive interfaces that allow financial services to be operated easily and without error. This study can serve as a resource for improving the user experience of financial chatbots, which requires a deep understanding of users and consideration of both the emotional aspects of artificial intelligence that provides services through natural conversation and the functional aspects of performing financial tasks.

Virtual Go to School (VG2S): University Support Course System with Physical Time and Space Restrictions in a Distance Learning Environment

  • Fujita, Koji
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.137-142 / 2021
  • Distance learning universities provide online course content, mainly on-demand or live-streamed, so students are not restricted by time or space: they can take courses anytime and anywhere, with no commute to campus. Despite this convenience, the attendance and graduation rates of distance learning universities tend to be lower than those of commuter universities. Although the course environment is not the only factor, students cannot obtain a bachelor's degree unless they fulfill the graduation requirements, and in both commuter and distance learning universities, taking classes is essential to earning credits. Distance learning students face fewer time and space constraints than commuting students and can easily take classes at their own timing, so learning should be easier for them than for students who commute under restrictions. In practice, however, students at commuter universities with face-to-face classes take courses more readily. I hypothesized that this is because commuting is part of the process of taking classes for commuting students: the act of going to school increases the willingness and motivation to attend, so inconvenient constraints may actually encourage students to take the course. In this research, I therefore focused on the act of commuting to school and applied analogous physical time constraints to the distance learning environment, implementing a course restriction method intended to promote students' willingness and attitude.
Accordingly, this paper presents a virtual school system called "virtual go to school (VG2S)" that reflects the actual route to school.

The Extraction of Face Regions based on Optimal Facial Color and Motion Information in Image Sequences (동영상에서 최적의 얼굴색 정보와 움직임 정보에 기반한 얼굴 영역 추출)

  • Park, Hyung-Chul; Jun, Byung-Hwan
    • Journal of KIISE: Software and Applications / v.27 no.2 / pp.193-200 / 2000
  • The extraction of face regions is required for a head-gesture interface, a type of natural user interface. Recently, many researchers have become interested in using color information to detect face regions in image sequences. Two widely used color models, HSI and YIQ, were selected for this study; specifically, the H component of HSI and the I component of YIQ were used. Given this difference in color component, the study aimed to compare the face region detection performance of the two models. First, we searched for the optimal facial color range of each color component by examining the detection accuracy of facial color regions over varying threshold ranges. We then compared the accuracy of the resulting face box for both color models using the optimal facial color and motion information. As a result, a range of 0°~14° in the H component and a range of -22°~-2° in the I component proved to be the optimal ranges for extracting face regions. When the optimal facial color range is used, the I component outperforms the H component by about 10% in face region extraction accuracy; when optimal facial color and motion information are used together, the I component is still better by about 3%.

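The reported thresholds can be illustrated with a small sketch. Note that the paper works in the HSI and YIQ spaces; here the hue is approximated with the HSV hue from Python's colorsys module (both measure the same angle), and the I-component range is interpreted as the phase angle of the (I, Q) chrominance pair, which is an assumption on our part since the abstract does not spell out its angular convention.

```python
import colorsys
import math

# Angular ranges reported in the paper (degrees)
H_RANGE = (0.0, 14.0)    # H component of HSI
I_RANGE = (-22.0, -2.0)  # I component of YIQ (interpreted as IQ phase)

def hue_degrees(r, g, b):
    """Approximate the HSI hue with the HSV hue (both measure the same angle)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

def iq_phase_degrees(r, g, b):
    """Phase angle of the chrominance pair (I, Q) in the YIQ space."""
    _, i, q = colorsys.rgb_to_yiq(r / 255.0, g / 255.0, b / 255.0)
    return math.degrees(math.atan2(q, i))

def in_range(value, lo_hi):
    lo, hi = lo_hi
    return lo <= value <= hi

# A reddish, skin-like pixel falls inside the reported H range.
print(in_range(hue_degrees(200, 120, 100), H_RANGE))
```

In a full detector, such a per-pixel test would be combined with the motion information the paper describes before forming the face box.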

Object VR-based 2.5D Virtual Textile Wearing System : Viewpoint Vector Estimation and Textile Texture Mapping (오브젝트 VR 기반 2.5D 가상 직물 착의 시스템 : 시점 벡터 추정 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan; Kwak, No-Yoon
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.19-26 / 2008
  • This paper presents a technology that gives the user a 360-degree viewpoint of a virtual wearing object: an object VR (Virtual Reality)-based 2.5D virtual textile wearing system using viewpoint vector estimation and textile texture mapping. The proposed system virtually wears a new textile pattern selected by the user onto the clothing shape section segmented from multi-view 2D images of a clothes model for object VR, and displays the virtual wearing appearance three-dimensionally from any 360-degree viewpoint of the object. Regardless of the color or intensity of the model clothes, the system can virtually change the textile pattern while preserving the illumination and shading properties of the selected clothing shape section, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it processes in real time in various digital environments, creates comparatively natural and realistic virtual wearing styles, and supports semi-automatic processing that reduces manual work to a minimum. It can thus stimulate designers' creative activity by simulating the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments and, by helping purchasers make decisions, promote B2B and B2C e-commerce.


Development and Efficacy Validation of an ICF-Based Chatbot System to Enhance Community Participation of Elderly Individuals with Mild Dementia in South Korea (우리나라 경도 치매 노인의 지역사회 참여 증진을 위한 ICF 기반 Decision Tree for Chatbot 시스템 개발과 효과성 검증)

  • Haewon Byeon
    • Journal of Advanced Technology Convergence / v.3 no.3 / pp.17-27 / 2024
  • This study focuses on the development and evaluation of a chatbot system based on the International Classification of Functioning, Disability, and Health (ICF) framework to enhance community participation among elderly individuals with mild dementia in South Korea. The study involved 12 elderly participants who were living alone and had been diagnosed with mild dementia, along with 15 caregivers who were actively involved in their daily care. The development process included a comprehensive needs assessment, system design, content creation, natural language processing using a Transformer attention mechanism, and usability testing. The chatbot is designed to offer personalized activity recommendations, reminders, and information that support physical, social, and cognitive engagement. Usability testing revealed high levels of user satisfaction and perceived usefulness, with significant improvements in community activities and social interactions. Quantitative analysis showed a 92% increase in weekly community activities and an 84% increase in social interactions. Qualitative feedback highlighted the chatbot's user-friendly interface, the relevance of suggested activities, and its role in reducing caregiver burden. The study demonstrates that an ICF-based chatbot system can effectively promote community participation and improve the quality of life of elderly individuals with mild dementia. Future research should focus on refining the system and evaluating its long-term impact.
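The decision-tree dialogue flow named in the title can be sketched minimally. Everything below — the node structure, the questions, and the activity suggestions — is hypothetical and for illustration only; the paper does not publish its actual ICF-based tree.

```python
# Minimal decision-tree dialogue sketch. All node contents are invented;
# the actual ICF-based tree is not published in the abstract.

TREE = {
    "root": {
        "question": "Do you feel up to going outside today?",
        "yes": "outdoor", "no": "indoor",
    },
    "outdoor": {
        "question": "Would you prefer a group activity?",
        "yes": "Visit the community senior center (social participation).",
        "no": "Take a short walk in the neighborhood park (physical activity).",
    },
    "indoor": {
        "question": "Would you like to talk with someone?",
        "yes": "Call a family member or caregiver (social interaction).",
        "no": "Try a picture-matching memory game (cognitive engagement).",
    },
}

def recommend(answers):
    """Walk the tree with a list of 'yes'/'no' answers; return a suggestion."""
    node = "root"
    for answer in answers:
        nxt = TREE[node][answer]
        if nxt not in TREE:          # reached a leaf: an activity suggestion
            return nxt
        node = nxt
    return TREE[node]["question"]    # ran out of answers: ask the next question

print(recommend(["yes", "no"]))
```

In the deployed system, the Transformer-based NLP layer would map free-text user replies onto these branch decisions rather than literal "yes"/"no" strings.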

Dynamic Behavior Modelling of Augmented Objects with Haptic Interaction (햅틱 상호작용에 의한 증강 객체의 동적 움직임 모델링)

  • Lee, Seonho; Chun, Junchul
    • Journal of Internet Computing and Services / v.15 no.1 / pp.171-178 / 2014
  • This paper presents dynamic modelling of a virtual object in augmented reality environments when external forces are applied to the object in real time. To simulate natural object behavior, we employ Newtonian physics to construct the object's equation of motion according to the varying external forces applied to the AR object. In the dynamic modelling process, physical interaction takes place between the augmented object and a physical object such as a haptic input device, through which the external forces are transferred. The augmented object is modeled as either rigid or elastically deformable (non-rigid). For a rigid object, the dynamic motion is simulated by considering linear and angular momentum when the haptic stick collides with the object. For a non-rigid object, a physics-based simulation approach is adopted, since elastically deformable models respond naturally to external or internal forces and constraints. Depending on the force applied by the user through the haptic interface and the model's intrinsic properties, the virtual elastic object in AR deforms naturally. In the simulation, we model deformable objects with the standard mass-spring-damper differential equation, i.e., Newton's second law of motion. The experiments show that the behavior of rigid and non-rigid virtual objects in AR can be successfully visualized according to physical laws when the haptic device interacts with them.
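The mass-spring-damper equation the abstract names, m·x'' = -k·x - c·x' + F_ext, can be integrated with a simple semi-implicit Euler step. The parameter values below are arbitrary, chosen only to show the damped response; the paper's AR simulation couples such elements into a mesh.

```python
def simulate(m=1.0, k=10.0, c=0.8, x0=1.0, dt=0.01, steps=2000, f_ext=0.0):
    """Semi-implicit Euler integration of m*x'' = -k*x - c*x' + f_ext."""
    x, v = x0, 0.0
    for _ in range(steps):
        a = (-k * x - c * v + f_ext) / m   # Newton's second law
        v += a * dt                        # update velocity first...
        x += v * dt                        # ...then position (semi-implicit)
    return x

# With damping, the displacement decays toward the rest position.
print(abs(simulate()) < 0.05)
```

Semi-implicit (symplectic) Euler is the usual choice for mass-spring systems because, unlike explicit Euler, it does not inject energy into the oscillation at these step sizes.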

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin; Kwon, Do Young; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and this content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as an open API, search tools, a DB-to-DB interface, or purchased content. The second phase is pre-processing to generate useful material for meaningful analysis.
If garbage data is not removed, social media analysis will not yield meaningful and useful business insights; natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user ID, content ID, hit counts, reviews or replies, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis; there are also various applications such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, its deliverables should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds 66.5% market share and has kept the No. 1 position in the Korean "ramen" business for several decades. We collected a total of 11,869 pieces of content, including blog posts, forum content, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and classified content into more detailed categories such as marketing features, environment, and reputation.
In this phase, we used freeware such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with the open library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map explains the movement of sentiment or volume in a category-by-time matrix, showing density of color over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" of a business situation, since a tree map can present buzz volume and sentiment in a hierarchical structure for a certain period. This case study offers real-world business insights from market sensing, demonstrating to practically minded business users how they can use these results for timely decision making in response to ongoing market changes. We believe our approach provides a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
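The collect → qualify → analyze → visualize cycle described above can be condensed into a toy sentiment pass. The mini lexicon and sample posts below are made up for illustration; the actual case study used Korean-language resources and the R packages named in the abstract.

```python
from collections import Counter

# Hypothetical mini lexicon; the paper built Korean domain-specific lexicons.
LEXICON = {"delicious": 1, "great": 1, "spicy": 0, "bland": -1, "expensive": -1}

def qualify(posts):
    """Qualifying phase: drop empty/garbage entries before analysis."""
    return [p.lower() for p in posts if p and p.strip()]

def analyze(posts):
    """Analyzing phase: score each post by summing lexicon polarities."""
    return [sum(LEXICON.get(w, 0) for w in post.split()) for post in posts]

def visualize(scores):
    """Visualizing phase: reduce scores to counts a chart could plot."""
    return Counter("positive" if s > 0 else "negative" if s < 0 else "neutral"
                   for s in scores)

posts = ["Delicious and great", "", "too expensive", "spicy noodles"]
print(visualize(analyze(qualify(posts))))
```

The final counts are the kind of aggregate that the paper's volume/sentiment graphs and heat maps render, with the collecting phase (API access, crawling) happening upstream of `qualify`.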

Color Image Segmentation and Textile Texture Mapping of 2D Virtual Wearing System (2D 가상 착의 시스템의 컬러 영상 분할 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan; Kwak, No-Yoon
    • Journal of KIISE: Computer Systems and Theory / v.35 no.5 / pp.213-222 / 2008
  • This paper concerns color image segmentation and textile texture mapping for a 2D virtual wearing system. The proposed system virtually wears a new textile pattern selected by the user onto a clothing shape section, segmented from a 2D clothes model image by a color image segmentation technique, based on the section's intensity difference map. Regardless of the color or intensity of the model clothes, the system can virtually change the textile pattern or color while preserving the illumination and shading properties of the selected clothing shape section, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it processes in real time in various digital environments, creates comparatively natural and realistic virtual wearing styles, and supports semi-automatic processing that reduces manual work to a minimum. It can thus stimulate designers' creative activity by simulating the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments and, by helping purchasers make decisions, promote B2B and B2C e-commerce.
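The idea of repainting a segmented region with a new textile while keeping its shading can be sketched with the intensity difference map the abstract mentions: each pixel keeps its deviation from the region's mean intensity, which is added back on top of the new texture value. The grayscale representation and the clamping scheme here are illustrative assumptions, not the paper's exact formulation.

```python
def texture_map(region, texture_value):
    """Replace a region's base color with texture_value while keeping shading.

    region: 2D list of grayscale intensities (0-255) for the clothing section.
    texture_value: base intensity of the new textile pattern.
    """
    pixels = [p for row in region for p in row]
    mean = sum(pixels) / len(pixels)
    # Intensity difference map: the per-pixel deviation from the region mean
    # carries the illumination and shading of the original clothes.
    return [[max(0, min(255, round(texture_value + (p - mean))))
             for p in row] for row in region]

# A shaded region (darker on the left) re-textured with a lighter fabric:
region = [[100, 120, 140],
          [100, 120, 140]]
print(texture_map(region, 200))  # [[180, 200, 220], [180, 200, 220]]
```

The shading gradient survives the re-texturing: the left column stays darker than the right, which is what makes the virtual wearing result look natural rather than flat.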

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min; Kim, Ig-Jae; Ahn, Sang-Chul; Ko, Han-Seok; Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, aimed at enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to a navigation device that controls the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
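The stereo geometry underlying such a device can be reduced to its simplest case: for rectified cameras, a feature's depth follows from its disparity via Z = f·B/d. This sketch is textbook triangulation, not the paper's full 6-DOF pipeline (which adds color/motion cues, Kalman filtering, and PCA); the numeric values are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two rectified cameras: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def point_3d(u, v, cx, cy, focal_px, baseline_m, disparity_px):
    """Back-project a left-image pixel (u, v) to camera coordinates (X, Y, Z)."""
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)

# 500 px focal length, 10 cm baseline, 10 px disparity -> about 5 m away.
print(depth_from_disparity(500.0, 0.10, 10.0))
```

Tracking such back-projected points over frames is what turns the two image streams into the 6-DOF motion signal the device reports.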

Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric; Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has emerged as a very useful tool in the computer vision field. In this paper, exploiting the Kinect depth sensor and its high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive the surrounding environment as humans do in order to detect objects in their path. The Kinect depth sensor is used to detect objects in its field of view, and the system measures the distance from those objects to the vision sensor. Each detected object is verified to determine whether it is a real object or pixel noise, reducing processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques along with the OpenCV library for image processing, we can identify the objects present within the Kinect camera's field of view and measure their distance from the sensor. Tests show promising results, suggesting that the system can also serve autonomous vehicles equipped with the Kinect camera as a low-cost range sensor, triggering further processing, depending on the application type, once they come within a certain distance of detected objects.

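The detect-then-filter-noise step can be sketched without the Kinect SDK or OpenCV: threshold a depth map into foreground pixels, group them into connected components, discard components too small to be real objects, and report each object's nearest distance. The grid, range limit, and minimum component size below are illustrative assumptions.

```python
from collections import deque

def object_distances(depth, max_range=4000, min_pixels=3):
    """Connected-component pass over a depth map (mm); returns the nearest
    depth of each component large enough to count as a real object."""
    rows, cols = len(depth), len(depth[0])
    seen = [[False] * cols for _ in range(rows)]
    distances = []
    for r in range(rows):
        for c in range(cols):
            if seen[r][c] or not (0 < depth[r][c] < max_range):
                continue
            # BFS flood fill over 4-connected foreground pixels
            queue, component = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                component.append(depth[y][x])
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols and not seen[ny][nx]
                            and 0 < depth[ny][nx] < max_range):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(component) >= min_pixels:   # smaller blobs are pixel noise
                distances.append(min(component))
    return sorted(distances)

depth = [[0,    0,    1200, 1250, 0],
         [0,    0,    1210, 0,    0],
         [2500, 0,    0,    0,    0],   # lone pixel: treated as noise
         [0,    900,  910,  905,  0]]
print(object_distances(depth))  # [900, 1200]
```

In the paper's setting, OpenCV's contour or connected-component routines play the role of this flood fill, and the per-object minimum depth is what the vehicle compares against its stopping distance.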