• Title/Summary/Keyword: Human-computer interactions

A Framework for Supporting Virtual Engineering Services Using Ubiquitous and Context-Aware Computing (가상공학 서비스를 위한 유비쿼터스 및 상황인식 컴퓨팅 프레임워크)

  • Seo D.W.;Kim H.;Kim K.S.;Lee J.Y.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.10 no.6
    • /
    • pp.402-411
    • /
    • 2005
  • Context-aware engineering services in ubiquitous environments are emerging as a viable alternative to traditional engineering services. Most previous approaches are computer-centered rather than human-centered. In this paper, we present a Ubiquitous and Context-Aware computing Framework for collaborative virtual Engineering (U-CAFÉ) services. The proposed approach utilizes BPEL-based (Business Process Execution Language) process templates for engineering service orchestration and choreography, and adopts semantic web-based context-awareness to provide human-centered engineering services. The paper discusses how to utilize engineering contexts and share this knowledge in support of collaborative virtual engineering services and service interfaces. It also discusses how Web services and JINI (Java Intelligent Network Infrastructure) services are utilized to support engineering service federations and seamless interactions among persons, devices, and various kinds of engineering services.
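
As a toy illustration of the human-centered, context-aware service selection that U-CAFÉ performs, the Python sketch below matches a user's context against service requirements; the service names, context keys, and matching rule are hypothetical stand-ins, not the framework's actual interfaces.

```python
# Toy context-aware service selection (hypothetical names and schema).
services = [
    {"name": "cad-review", "needs": {"role": "engineer", "device": "workstation"}},
    {"name": "mobile-viewer", "needs": {"device": "phone"}},
]

def select_services(context, services):
    """Return services whose requirements the user's context satisfies."""
    return [s["name"] for s in services
            if all(context.get(k) == v for k, v in s["needs"].items())]

user_context = {"role": "engineer", "device": "phone", "location": "lab"}
print(select_services(user_context, services))   # -> ['mobile-viewer']
```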

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.2
    • /
    • pp.751-770
    • /
    • 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the high performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (region-based convolutional neural network), a state-of-the-art detector. We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments: simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment accuracy reached 81%. Compared to supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
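
The active semi-supervised loop described in this abstract can be sketched generically. Below, a simple classifier stands in for the paper's VGG16/Faster R-CNN pipeline, and `oracle`, `query_k`, and `conf_thresh` are assumed names and parameters, not the authors' code.

```python
# Illustrative active + semi-supervised training loop (a sketch, not EHL).
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_semi_supervised(X_lab, y_lab, X_pool, oracle,
                           rounds=5, query_k=10, conf_thresh=0.95):
    """oracle(i) returns a human label for pool sample i (current indexing)."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        model.fit(X_lab, y_lab)
        probs = model.predict_proba(X_pool)
        conf = probs.max(axis=1)
        # Active step: send the least confident samples to the annotator.
        query = np.argsort(conf)[:query_k]
        # Semi-supervised step: pseudo-label the most confident samples.
        pseudo = np.setdiff1d(np.where(conf >= conf_thresh)[0], query)
        labels = {i: oracle(i) for i in query}
        labels.update({i: model.classes_[probs[i].argmax()] for i in pseudo})
        # Move the newly labeled samples from the pool to the training set.
        idx = np.array(sorted(labels))
        X_lab = np.vstack([X_lab, X_pool[idx]])
        y_lab = np.concatenate([y_lab, [labels[i] for i in idx]])
        X_pool = np.delete(X_pool, idx, axis=0)
    return model
```

Querying only low-confidence samples is what shrinks the labeling time the abstract reports: the annotator never sees samples the model can already pseudo-label reliably.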

Opportunities and Future Directions of Human-Metaverse Interaction (휴먼-메타버스 인터랙션의 기회와 발전방향)

  • Yoon, Hyoseok;Park, ChangJu;Park, Jung Yeon
    • Smart Media Journal
    • /
    • v.11 no.6
    • /
    • pp.9-17
    • /
    • 2022
  • In the COVID-19 pandemic era, demand for non-contact services grew, and the use of extended reality and metaverse services increased rapidly across various applications. In this paper, we analyze the Gather.town, ifland, Roblox, and ZEPETO metaverse platforms in terms of user interaction, avatar-based interaction, and virtual-world authoring. In particular, we distinguish among user input techniques that occur in the real world, avatar representation techniques that represent users in the virtual world, and interaction types that create a virtual world through user participation. Based on this work, we highlight the current trends and needs of human-metaverse interaction and forecast future opportunities and research directions.

Smartphone Application Interface for Intelligent Human-Robot Interactions (인간-로봇 지능형 상호작용 지원을 위한 스마트폰 응용 인터페이스)

  • Kwak, Byul-Saim;Lee, Jae-Ho
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06c
    • /
    • pp.399-403
    • /
    • 2010
  • An intelligent robot is an intelligent system that recognizes its surroundings and situation through various sensors in a complex, dynamically changing environment and, based on this, continuously provides useful services to the user. However, because of the limits of sensing devices and software algorithms, the robot may fail to recognize important information or may recognize it incorrectly, and as a result it may misunderstand the user's intention or perform unintended actions. In this paper, we address this problem by proposing robot-user cooperation through a smartphone. Since users carry their smartphones at all times, they can cooperate with the robot whenever needed; through this device, the robot's recognition results are presented effectively and an intuitive task-command interface is provided, so that the robot can perform the correct actions matching the user's intention.
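
A minimal sketch of the confirm-or-correct loop this abstract describes, assuming a hypothetical phone prompt (`ask_user`) and action hook (`act`); nothing here is the paper's actual interface.

```python
# Toy robot-user cooperation loop: low-confidence recognitions are sent
# to the phone for confirmation before the robot acts (names hypothetical).
def cooperate(recognitions, ask_user, act, threshold=0.8):
    for label, confidence in recognitions:
        if confidence < threshold:
            label = ask_user(label)   # phone UI: confirm or correct the guess
        act(label)

cooperate([("cup", 0.95), ("door handle", 0.55)],
          ask_user=lambda guess: input(f"Did you mean '{guess}'? ") or guess,
          act=print)
```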

Emulearner: Deep Learning Library for Utilizing Emulab

  • Song, Gi-Beom;Lee, Man-Hee
    • Journal of information and communication convergence engineering
    • /
    • v.16 no.4
    • /
    • pp.235-241
    • /
    • 2018
  • Recently, deep learning has been actively studied and applied in various fields, even to novel writing and painting in ways we could not imagine before. A key enabler is that high-performance computing devices, especially CUDA-enabled GPUs, support this trend. Researchers who have difficulty accessing such systems fall behind in this fast-changing field. In this study, we propose and implement a library called Emulearner that helps users utilize Emulab with ease. Emulab is a research framework equipped with up to thousands of nodes, developed by the University of Utah. However, using Emulab nodes for deep learning requires many manual interactions. To solve this problem, Emulearner completely automates operations from Emulab log-in authentication and node creation to deep-learning configuration and training. By installing Emulearner with a legitimate Emulab account, users can focus on their deep-learning research without hassle.
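
The kind of automation Emulearner provides can be sketched as scripted SSH sessions against an allocated node. The account, host name, and commands below are assumptions for illustration, not Emulearner's actual API.

```python
# Scripted SSH automation in the spirit of Emulearner (hypothetical
# account, host name, and commands; not the library's actual API).
import subprocess

EMULAB_USER = "alice"                       # assumption: your Emulab account
NODE = "node0.myexp.myproj.emulab.net"      # assumption: an allocated node

def ssh(cmd):
    """Run one command on the node over SSH and return its stdout."""
    return subprocess.run(
        ["ssh", f"{EMULAB_USER}@{NODE}", cmd],
        check=True, capture_output=True, text=True,
    ).stdout

# Configure the deep-learning environment, then launch training detached.
ssh("pip install --user torch torchvision")
ssh("nohup python3 train.py --epochs 10 > train.log 2>&1 &")
```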

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper, we introduce a technique to retarget human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions, such as holding an object by hand or avoiding obstacles. In addition, we assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, a retargeting technique that considers only the change of body shape cannot preserve the context of the interactions shown in the original motion data. Our approach is to separate the problem into two smaller problems and solve them independently: retargeting the motion data to a new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body, ignoring the interaction with the environment. Then, we deform the shape of the environmental model to match the humanoid motion so that the original interaction is reproduced. Finally, we set spatial constraints between the humanoid body and the environmental model, and restore the environmental model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can help with the humanoid motion-tracking problem in the future.
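
The three-stage pipeline in this abstract can be outlined as follows; every helper is a trivially runnable placeholder standing in for the authors' solvers, so this is a structural sketch only.

```python
# Schematic sketch of the three-stage pipeline described above; each
# helper is a hypothetical placeholder, not the authors' implementation.

def retarget_skeleton(human_motion, humanoid):
    return human_motion            # placeholder: joint-angle remapping

def deform_environment(env, motion):
    return env, []                 # placeholder: mesh deformation + contact list

def bind_contacts(motion, env, contacts):
    return contacts                # placeholder: spatial constraint setup

def solve_with_constraints(motion, env, constraints):
    return motion                  # placeholder: constrained optimization

def retarget_with_interaction(human_motion, humanoid, env):
    motion = retarget_skeleton(human_motion, humanoid)        # step 1
    deformed_env, contacts = deform_environment(env, motion)  # step 2
    constraints = bind_contacts(motion, deformed_env, contacts)
    # Step 3: restore the original environment while the constraints
    # drag the motion along, so the original interactions survive.
    return solve_with_constraints(motion, env, constraints)
```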

Inter-space Interaction Issues Impacting Middleware Architecture of Ubiquitous Pervasive Computing

  • Lim, Shin-Young;Helal, Sumi
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.1
    • /
    • pp.42-51
    • /
    • 2008
  • We believe that smart spaces offering pervasive services will proliferate. At present, however, these islands of smart spaces must be joined seamlessly with each other. As users move about, they will have to roam from one autonomous smart space to another. When they move into a new island of smart space, they must either set up their devices and services manually or lose access to the services available in their home spaces. Sometimes there will be conflicts between users when they try to occupy the same space or use a specific device at the same time. This will also be critical for elderly people who suffer from Alzheimer's disease or other cognitive impairments when they travel from their own smart space to other, visited spaces (e.g., grocery stores, museums). Furthermore, our experience in building the Gator Tech Smart House reveals that home residents generally do not want to lose or be denied the features or services they have come to expect simply because they move to a new smart space. The seamless inter-space interaction requirements and issues arise as soon as the ubiquitous pervasive computing system tries to establish the user's service environment by allocating relevant resources after the user moves to a new location with no prior settings for the new environment. In this paper, we raise and discuss several critical inter-space interaction issues impacting middleware architecture design for ubiquitous pervasive computing. We propose requirements for resolving these issues for seamless inter-space operation. We also illustrate our approach and ideas via a service scenario spanning two smart spaces.
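
A toy sketch of the inter-space handoff problem raised above: when a user roams into a new space, the middleware tries to re-create their service profile and reports conflicts on devices another occupant already holds. The data model is a hypothetical simplification, not the paper's architecture.

```python
# Toy inter-space roaming handoff with conflict detection.
def roam(user, profile, space):
    """profile: service -> required device; space: device -> current holder."""
    granted, conflicts = [], []
    for service, device in profile.items():
        if space.get(device) in (None, user):
            space[device] = user           # free (or already ours): claim it
            granted.append(service)
        else:                              # exclusive device held by someone else
            conflicts.append((service, device, space[device]))
    return granted, conflicts

space = {"tv": "bob"}                      # 'bob' already holds the TV
print(roam("alice", {"news": "tv", "lights": "lamp"}, space))
# -> (['lights'], [('news', 'tv', 'bob')])
```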

Systematical Analysis of Cutaneous Squamous Cell Carcinoma Network of microRNAs, Transcription Factors, and Target and Host Genes

  • Wang, Ning;Xu, Zhi-Wen;Wang, Kun-Hao
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.23
    • /
    • pp.10355-10361
    • /
    • 2015
  • Background: MicroRNAs (miRNAs) are small non-coding RNA molecules found in multicellular eukaryotes that are implicated in the development of cancers, including cutaneous squamous cell carcinoma (cSCC). Their expression is controlled by transcription factors (TFs) that bind to specific DNA sequences, thereby controlling the flow (or transcription) of genetic information from DNA to messenger RNA. These interactions form biological signal-control networks. Materials and Methods: Molecular components involved in cSCC were assembled at the abnormally expressed, related, and global levels. Networks at these three levels were constructed from the corresponding biological factors in terms of interactions between miRNAs and target genes, between TFs and miRNAs, and between host genes and miRNAs. Up/down-regulation or mutation of the factors was considered in the context of this regulation, and significant patterns were extracted. Results: Participants in the networks were evaluated based on their expression and their regulation of other factors. Sub-networks centered on two core TFs, TP53 and EIF2C2, were identified; these share self-adaptive feedback regulation in which a mutual restraint exists. Up- or down-regulation of certain genes and miRNAs is discussed. Some findings, for example the expression of MMP13, were in line with expectations, while others, including FGFR3, require further investigation of their unexpected behavior. Conclusions: The present research suggests that dozens of components, including miRNAs, TFs, target genes, and host genes, unite as networks through their regulation to function systematically in human cSCC. Networks built from the currently available sources reveal critical signal-control pathways and frequent patterns. Inappropriate control-signal flow from abnormal expression of key TFs may push the system into an uncontrollable state and thereby contribute to cSCC development.
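
The three relation types in this network (TF to miRNA, miRNA to target gene, host gene to miRNA) map naturally onto a directed graph. The sketch below uses networkx with toy edges (the host-gene name is invented); it is not the authors' pipeline.

```python
# Illustrative construction of the three-relation regulation network
# described above (toy edges, not the paper's data).
import networkx as nx

G = nx.DiGraph()
# TF -> miRNA (transcriptional regulation)
G.add_edge("TP53", "miR-205", relation="TF-miRNA")
# miRNA -> target gene (post-transcriptional repression)
G.add_edge("miR-205", "MMP13", relation="miRNA-target")
# host gene -> miRNA (the miRNA is encoded within the host gene)
G.add_edge("HOST_GENE_X", "miR-205", relation="host-miRNA")

# Degree centrality surfaces hub regulators such as the two core TFs.
print(nx.degree_centrality(G))
```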

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.1
    • /
    • pp.101-110
    • /
    • 2004
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions should occur naturally. It is desirable for a robot to carry out human following as one such human-affinitive movement. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and is projected onto an image plane to form a geometric constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the human's linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
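
A minimal constant-velocity Kalman filter of the kind the abstract describes for compensating point-object position noise; the sampling period and noise covariances below are illustrative assumptions, not the paper's values.

```python
# Constant-velocity Kalman filter over state (x, y, vx, vy), observing (x, y).
import numpy as np

dt = 0.1                                    # camera sampling period [s] (assumed)
F = np.array([[1, 0, dt, 0],                # state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],                 # we only observe position
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                        # process noise (model uncertainty)
R = 0.25 * np.eye(2)                        # measurement noise (point-object error)

x = np.zeros(4)                             # initial state
P = np.eye(4)                               # initial covariance

def kf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured image-plane position z = (x, y).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P                             # x[2:] are the velocity estimates

for z in [np.array([0.1, 0.0]), np.array([0.2, 0.05])]:
    x, P = kf_step(x, P, z)
```

The filtered velocity components are exactly what the trajectory generator needs to plan the shortest-time following path.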

Comparing Initiating and Responding Joint Attention as a Social Learning Mechanism: A Study Using Human-Avatar Head/Hand Interaction (사회 학습 기제로서 IJA와 RJA의 비교: 인간-아바타 머리/손 상호작용을 이용한 연구)

  • Kim, Mingyu;Kim, So-Yeon;Kim, Kwanguk
    • Journal of KIISE
    • /
    • v.43 no.6
    • /
    • pp.645-652
    • /
    • 2016
  • Joint attention (JA) is known to play a key role in human social learning. However, the relative impact of different interaction types has yet to be rigorously examined, because existing methodologies are limited in their ability to simulate human-to-human interaction. In the present study, we designed a new JA paradigm that emulates such interaction using human-avatar interaction and virtual-reality technologies, and we tested the paradigm in two experiments with healthy adults. Our results indicate that the initiating JA (IJA) condition was more effective than the responding JA (RJA) condition for social learning in both head and hand interactions. Moreover, the hand interaction involved better information processing than the head interaction. The implications of the results, the validity of the new paradigm, and the limitations of this study are discussed.