• Title/Summary/Keyword: Human computer

Search results: 5,011

Silhouette-Edge-Based Descriptor for Human Action Representation and Recognition

  • Odoyo, Wilfred O.;Choi, Jae-Ho;Moon, In-Kyu;Cho, Beom-Joon
    • Journal of information and communication convergence engineering, v.11 no.2, pp.124-131, 2013
  • The extraction and representation of postures and gestures from human activity in video has been a focus of action recognition research. With applications emerging across many fields, this paper seeks to improve the performance of action recognition systems by proposing a shape-based silhouette-edge descriptor for the human body. Information entropy, a measure of the randomness of a sequence of symbols, is used to select vital key postures from the video frames. Morphological operations then extract and stack edges so that different actions are uniquely represented by shape. To classify an action in a new input video, a Hausdorff distance is computed between the gallery representations and the query images formed by the proposed procedure. The method is validated on known public databases, yielding an effective method for human action annotation and description.
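The classification step compares a query representation to each gallery template with a Hausdorff distance. A minimal pure-Python sketch of the symmetric measure on toy 2-D point sets (the point sets here are illustrative, not the paper's edge stacks):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    def directed(u, v):
        # worst-case nearest-neighbour distance from u to v
        return max(min(math.dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

# toy shapes: a unit square's corners vs. the same square shifted right by 0.5
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(x + 0.5, y) for x, y in square]

print(hausdorff(square, square))   # identical sets -> 0.0
print(hausdorff(square, shifted))  # -> 0.5
```

In the paper's setting, the two sets would be edge-pixel coordinates of the query and a gallery template, and the query is assigned the action of the nearest template.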

A Multi-Scale Parallel Convolutional Neural Network Based Intelligent Human Identification Using Face Information

  • Li, Chen;Liang, Mengti;Song, Wei;Xiao, Ke
    • Journal of Information Processing Systems, v.14 no.6, pp.1494-1507, 2018
  • Intelligent human identification using face information has been a research hotspot, with applications ranging from the Internet of Things (IoT), intelligent self-service banking, and intelligent surveillance to public safety and intelligent access control. Because 2D face images are usually captured from a long distance in an unconstrained environment, fully exploiting this advantage and making face recognition suitable for wider intelligent applications with higher security and convenience must contend with several key difficulties: gray-scale change caused by illumination variance, occlusion by glasses, hair, or a scarf, and self-occlusion and deformation caused by pose or expression variation. Many solutions have been proposed, but most improve recognition performance under only one influence factor, which still cannot meet real face recognition scenarios. In this paper we propose a multi-scale parallel convolutional neural network architecture to extract deep, robust facial features with high discriminative ability. Extensive experiments are conducted on the CMU-PIE, extended FERET, and AR databases, and the results show that the proposed algorithm exhibits excellent discriminative ability compared with other existing algorithms.
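The core architectural idea, parallel convolutional branches at different scales whose features are concatenated, can be sketched in one dimension (a toy pure-Python sketch; the paper's network uses 2-D convolutional branches over face images, and the kernel sizes below are arbitrary):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (no kernel flip, i.e. cross-correlation)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multi_scale_features(signal, kernels):
    """Run parallel 'branches' with kernels of different sizes and
    concatenate their outputs into one feature vector."""
    feats = []
    for kern in kernels:          # each kernel = one parallel branch
        feats.extend(conv1d(signal, kern))
    return feats

print(multi_scale_features([1, 2, 3, 4], [[1], [1, 1]]))  # -> [1, 2, 3, 4, 3, 5, 7]
```

Concatenating branch outputs lets the downstream classifier see both fine- and coarse-scale structure at once, which is what makes the representation robust to factors such as pose and illumination.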

Word Sense Disambiguation Using Knowledge Embedding (지식 임베딩 심층학습을 이용한 단어 의미 중의성 해소)

  • Oh, Dongsuk;Yang, Kisu;Kim, Kuekyeng;Whang, Taesun;Lim, Heuiseok
    • Annual Conference on Human and Language Technology, 2019.10a, pp.272-275, 2019
  • Word sense disambiguation (WSD) methods fall into knowledge-based approaches, which solve the problem using knowledge resources, and supervised approaches, which use machine learning models. Supervised methods achieve high performance but require large amounts of curated training data; knowledge-based methods need no such data but cannot be expected to perform as well. Recent work compensates for these weaknesses by training machine learning models on both knowledge-resource information and curated training data. The most widely used knowledge is the gloss (sense definition) information carried by hypernyms, hyponyms, and synonyms: its representation is used alongside the sentence representation to identify the sense of an ambiguous word. However, an accurate sentence representation requires good word representations, and existing approaches capture only the contextual information within the sentence, limiting how accurately they reflect word meaning. In this paper, to build word representations that carry both semantic and contextual information, we embed syntactic information and a semantic-relation graph using a graph convolutional network (GCN); incorporated into an existing model, these embeddings outperform word representations that use contextual information alone.
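The GCN propagation rule behind such graph embeddings, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), can be sketched in pure Python (a generic single GCN layer on a toy two-node graph, not the paper's actual syntax/semantic-relation model):

```python
import math

def matmul(x, y):
    """Naive dense matrix product for small list-of-lists matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(len(y)))
             for j in range(len(y[0]))] for i in range(len(x))]

def gcn_layer(adj, feats, weight):
    """One GCN layer: add self-loops, symmetrically normalise the
    adjacency, propagate features, apply a linear map and ReLU."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]                                   # A + I
    deg = [sum(row) for row in a_hat]                             # degrees of A + I
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j])
             for j in range(n)] for i in range(n)]                # D^-1/2 (A+I) D^-1/2
    h = matmul(matmul(norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]              # ReLU

# two connected nodes with 1-D features and an identity weight
print(gcn_layer([[0, 1], [1, 0]], [[1.0], [0.0]], [[1.0]]))  # -> [[0.5], [0.5]]
```

Each node's new representation mixes in its neighbours' features, which is how relation-graph structure ends up encoded in the word embeddings.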


Ko-ATOMIC 2.0: Constructing Commonsense Knowledge Graph in Korean (Ko-ATOMIC 2.0: 한국어 상식 지식 그래프 구축)

  • Jaewook Lee;Jaehyung Seo;Dahyun Jung;Chanjun Park;Imatitikua Aiyanyo;Heuiseok Lim
    • Annual Conference on Human and Language Technology, 2023.10a, pp.319-323, 2023
  • A commonsense knowledge graph is a knowledge representation that collects and structures the general commonsense contained in large corpora. It models the forms of and relations among diverse commonsense facts and is mainly used in downstream natural language processing tasks such as question answering and commonsense reasoning. The best-known commonsense knowledge graphs are ConceptNet [1] and ATOMIC [2]. Although Korean commonsense knowledge graphs have been studied, the existing resources are not sufficient for use in NLP tasks. This study presents a methodology for effectively constructing a Korean commonsense knowledge graph using a large language model and prompting, and validates the quality of the resulting graph against an existing Korean commonsense graph both quantitatively and qualitatively.


Empirical Study on the Hallucination of Large Language Models Derived by the Sentence-Closing Ending (어체에 따른 초거대언어모델의 한국어 환각 현상 분석)

  • Hyeonseok Moon;Sugyeong Eo;Jaehyung Seo;Chanjun Park;Yuna Hur;Heuiseok Lim
    • Annual Conference on Human and Language Technology, 2023.10a, pp.677-682, 2023
  • Large language models (LLMs) perform a target task without any training, simply by adding exemplars to the input. This approach, called in-context learning (ICL), has become the de facto standard for using LLMs. However, studies report many situations in which such models show practical limitations, including hallucination. In this work, we find that when LLMs are applied to Korean tasks, even a very simple change of the sentence-closing ending produces a very large variance in performance. Through our analysis, we find that an LLM's effectiveness changes substantially with the ending style of the in-context exemplars and of the inference target, and we analyze this effect. Based on these experimental results, we further propose that Korean datasets should be constructed with a consistent ending style.


Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques (컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트)

  • Jo, Ik Hyun;Park, Geo Tae;Jung, Soon Ki
    • Journal of Korea Multimedia Society, v.21 no.1, pp.51-60, 2018
  • We present a real-time interactive particle art piece that responds to human motion using computer vision techniques, reducing the amount of equipment required for media-art appreciation. We analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction locates the audience, and the audience image is converted into particles over grid cells. Optical flow detects the audience's motion and drives the particle effects, and we define a virtual button for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
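The first two pipeline stages, background subtraction to isolate the audience and a grid-cell mapping to particles, can be sketched on toy grayscale frames (pure Python with a naive difference-based background model; a real implementation would use a camera stream and a proper background subtractor):

```python
def subtract_background(frame, background, threshold=30):
    """Foreground mask by per-pixel absolute difference (toy background model)."""
    return [[1 if abs(f - b) > threshold else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def grid_particles(mask, cell=2):
    """A grid cell spawns a particle if any pixel inside it is foreground."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[y][x]
                      for y in range(gy, min(gy + cell, h))
                      for x in range(gx, min(gx + cell, w))) else 0
             for gx in range(0, w, cell)]
            for gy in range(0, h, cell)]

background = [[0] * 4 for _ in range(4)]
frame = [row[:] for row in background]
frame[1][1] = 100                      # the "audience" enters one pixel
mask = subtract_background(frame, background)
print(grid_particles(mask))            # -> [[1, 0], [0, 0]]
```

The remaining stage, optical flow over successive masks, would then give each spawned particle a velocity matching the audience's motion.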

Q&A Chatbot in Arabic Language about Prophet's Biography

  • Somaya Yassin Taher;Mohammad Zubair Khan
    • International Journal of Computer Science & Network Security, v.24 no.3, pp.211-223, 2024
  • Chatbots have become very popular and are used in several fields. Their emergence has created a new way of communicating in human-computer interaction. A chatbot, also called a "chatter robot" or conversational agent (CA), is a software application that mimics human conversation in its natural format, covering both textual and oral communication, using artificial intelligence (AI) techniques. Generally, there are two types of chatbot: rule-based and smart machine-based. Over the years, chatbots have been designed in many languages to serve various fields such as medicine, entertainment, and education; unfortunately, little work has been done on Arabic chatbots. In this paper, we develop a useful chatbot in the Arabic language that educates people about the Prophet's biography, providing them with useful information through natural language processing.
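A rule-based chatbot of the kind described boils down to matching the user's question against known patterns and returning a canned answer. A minimal sketch using Python's difflib for fuzzy matching (the patterns and answers below are illustrative placeholders, not entries from the paper's actual knowledge base):

```python
import difflib

# toy rule base: question patterns mapped to canned answers
RULES = {
    "where was the prophet born": "He was born in Mecca.",
    "when did the hijra take place": "The Hijra took place in 622 CE.",
}

def answer(question, cutoff=0.6):
    """Fuzzy-match the question to the closest known pattern; fall back politely."""
    key = question.lower().strip("?! ")
    match = difflib.get_close_matches(key, RULES, n=1, cutoff=cutoff)
    return RULES[match[0]] if match else "Sorry, I don't know that yet."

print(answer("Where was the Prophet born?"))  # -> "He was born in Mecca."
```

Fuzzy matching tolerates small wording differences, while the cutoff keeps unrelated questions from being forced onto the nearest pattern.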

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering, v.2 no.4, pp.285-297, 2000
  • One research objective in multimedia human-computer interaction is applying artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogue. Although current human-computer communication methods include keyboards, mice, and other traditional devices, the two basic ways people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it to a location specified by the user through spoken words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words; upon processing the images and sounds, it responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to, then uses the user's voice commands to fine-tune the location and change the camera's zoom if requested. The image of the location is captured by the pan/tilt camera and displayed on a color TV monitor. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands much as another person would. The advantage of this approach is the elimination of the traditional input devices the user must otherwise operate to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
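The interaction loop, coarse localisation by hand pointing followed by voice commands that refine pan, tilt, and zoom, can be sketched as a simple command dispatcher (the command vocabulary and step sizes are invented for illustration; the actual system drives real camera hardware from recognised speech):

```python
def fine_tune(pan, tilt, zoom, command):
    """Apply one recognised voice command to the camera pose that hand
    pointing established; unknown commands leave the pose unchanged."""
    moves = {
        "left": (-5, 0, 0), "right": (5, 0, 0),
        "up": (0, 5, 0), "down": (0, -5, 0),
        "zoom in": (0, 0, 1), "zoom out": (0, 0, -1),
    }
    dp, dt, dz = moves.get(command, (0, 0, 0))
    return pan + dp, tilt + dt, zoom + dz

pose = (30, 10, 1)                     # coarse pose from the pointing gesture
for cmd in ["left", "up", "zoom in"]:  # the user's spoken refinements
    pose = fine_tune(*pose, cmd)
print(pose)                            # -> (25, 15, 2)
```

Separating the coarse gesture stage from the fine voice stage is what lets each modality do what it is best at: gestures for rough direction, words for precise adjustment.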


BackTranScription (BTS)-based Jeju Automatic Speech Recognition Post-processor Research (BackTranScription (BTS)기반 제주어 음성인식 후처리기 연구)

  • Park, Chanjun;Seo, Jaehyung;Lee, Seolhwa;Moon, Heonseok;Eo, Sugyeong;Jang, Yoonna;Lim, Heuiseok
    • Annual Conference on Human and Language Technology, 2021.10a, pp.178-185, 2021
  • Building training data for a sequence-to-sequence (S2S) speech recognition post-processor requires a parallel corpus of (speech recognition output, sentence corrected by a human transcriber), which demands considerable human labor. BackTranScription (BTS) is a data construction methodology proposed to alleviate this limitation of S2S-based post-processors: it combines text-to-speech (TTS) and speech-to-text (STT) technology to generate a pseudo parallel corpus. Because it removes the transcriber's role and can automatically generate massive amounts of training data, it saves both time and cost in data construction. Based on BTS, this paper compares a model-centric approach, which improves a Jeju-dialect speech recognition post-processor through model modification, with a data-centric approach, which improves performance without model changes by considering the quantity and quality of the data. Experiments show that applying the data-centric approach without modifying the model is more helpful for performance, and we analyze the negative results of the model-centric approach.
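The BTS idea, round-tripping clean text through TTS and then STT to obtain (noisy recognition output, clean reference) pairs with no human transcriber, can be sketched with a stand-in noise model (random character drops simulate recognition errors; real BTS calls actual TTS and STT engines):

```python
import random

def simulate_tts_stt(sentence, drop_rate=0.1, seed=0):
    """Stand-in for the TTS -> STT round trip: randomly drops characters
    to mimic recognition errors (illustrative noise model only)."""
    rng = random.Random(seed)
    return "".join(ch for ch in sentence if rng.random() > drop_rate)

def build_pseudo_corpus(sentences):
    """Pair each noisy 'recognised' sentence with its clean original:
    (ASR output, reference) training pairs with no manual transcription."""
    return [(simulate_tts_stt(s, seed=i), s) for i, s in enumerate(sentences)]

corpus = build_pseudo_corpus(["the weather in jeju is windy today"])
```

The post-processor is then trained to map the first element of each pair back to the second, exactly the correction a human transcriber would otherwise supply.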


Discriminant Analysis of Human's Implicit Intent based on Eyeball Movement (안구운동 기반의 사용자 묵시적 의도 판별 분석 모델)

  • Jang, Young-Min;Mallipeddi, Rammohan;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.6, pp.212-220, 2013
  • Recently, there has been a tremendous increase in human-computer/machine interaction systems, where the goal is to provide an appropriate service to the user at the right time with minimal human input, as in human augmented cognition systems. To develop an efficient human augmented cognition system based on human-computer/machine interaction, it is important to interpret the user's implicit intention, which is vague, in addition to the explicit intention. According to cognitive visual-motor theory, human eye movements and pupillary responses are rich sources of information about human intention and behavior. In this paper, we propose a novel approach to identifying implicit visual search intention based on eye movement patterns and pupillary analysis: pupil size, the gradient of pupil-size variation, and fixation length/count for the area of interest. The proposed model classifies implicit intention into three types: navigational intent generation, informational intent generation, and informational intent disappearance. Navigational intent refers to searching for something interesting in an input scene with no specific instructions, while informational intent refers to searching for a particular target object at a specific location in the input scene. Based on the eye movement patterns and pupillary analysis, we use a hierarchical support vector machine that can detect the transitions between the different implicit intents: from navigational intent generation to informational intent generation, and on to informational intent disappearance.
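The hierarchical decision structure, one classifier separating navigational from informational intent and a second separating informational-intent generation from disappearance, can be sketched with hand-picked thresholds standing in for the trained SVMs (the feature names and threshold values are invented for illustration only):

```python
def classify_intent(pupil_grad, fixation_len):
    """Two-stage decision over pupillary/fixation features.
    Stage 1: navigational vs. informational intent;
    stage 2: is the informational intent being generated or disappearing?"""
    if pupil_grad > 0.1 and fixation_len > 0.5:      # stage 1: informational
        if fixation_len > 1.0:                       # stage 2: sustained gaze
            return "informational intent generation"
        return "informational intent disappearance"
    return "navigational intent generation"

print(classify_intent(0.05, 0.3))  # -> "navigational intent generation"
print(classify_intent(0.3, 1.5))   # -> "informational intent generation"
```

In the paper each stage is a trained SVM over the full feature set rather than fixed thresholds; the sketch only shows how chaining two binary decisions yields the three intent classes.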