• Title/Summary/Keyword: Imagined Speech (상상 음성)


Drone controller using motion imagery brainwave and voice recognition (동작 상상뇌파와 음성인식을 이용한 드론 컨트롤러)

  • Park, Myeong-Chul;Oh, Dae-Sung;Han, Ji-Hun;Oh, Hyo-Jun;Kim, Yu-Sin;Jeong, Jin-Yong;Park, Sang-Uk;Son, Yeong-Woong
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.257-258 / 2020
  • Conventional drone operation is difficult for beginners. Novice pilots frequently crash their drones or snag them on obstacles, damaging propellers and other parts. This study applies a drone controller technology based on EEG signals, with voice recognition as an auxiliary input, so that beginners can operate a drone more easily without worrying about damaging it. Most commercially available drones include a hovering function that reduces the risk of crashing. Even so, beginners who are not yet skilled at handling a fast drone struggle to avoid collisions with obstacles and airframe damage during landing. To address these problems, this paper proposes a 'drone controller using motor imagery EEG and voice recognition' that replaces the conventional controller with the motor imagery brainwaves evoked when a specific movement is imagined, combined with voice input. Unlike a conventional controller, the system processes EEG data with machine learning, a big-data processing technique, and controls the drone by comparing incoming EEG values against that data. In addition, voice recognition serves as an auxiliary input for situations in which the EEG signal is unstable, minimizing damage to the airframe.

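The abstract describes the control scheme only at a high level. The sketch below illustrates the general idea under stated assumptions: a classifier trained on motor-imagery EEG features produces a drone command, and a voice-recognition result is substituted whenever the EEG prediction is low-confidence. The feature extractor, the command set, and the recognize_voice stub are hypothetical, and synthetic data stands in for recorded EEG.

```python
import numpy as np
from sklearn.svm import SVC

COMMANDS = ["hover", "forward", "left", "right"]  # hypothetical command set

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """Toy feature: log power per channel (the paper does not specify features)."""
    return np.log(np.mean(epoch ** 2, axis=1))

def recognize_voice() -> str:
    """Stand-in for the auxiliary voice-recognition input described in the abstract."""
    return "hover"  # a real system would call a speech-recognition engine here

# Train on previously recorded motor-imagery epochs (synthetic data here).
rng = np.random.default_rng(0)
train_epochs = rng.normal(size=(200, 32, 128))      # 200 epochs, 32 channels, 128 samples
train_labels = rng.integers(0, len(COMMANDS), 200)  # imagined-movement labels
clf = SVC(probability=True).fit(
    np.array([extract_features(e) for e in train_epochs]), train_labels
)

def next_command(epoch: np.ndarray, threshold: float = 0.6) -> str:
    """Compare the incoming EEG against the trained model; fall back to voice."""
    proba = clf.predict_proba(extract_features(epoch).reshape(1, -1))[0]
    if proba.max() < threshold:          # EEG too unstable -> use auxiliary input
        return recognize_voice()
    return COMMANDS[int(clf.classes_[proba.argmax()])]

print(next_command(rng.normal(size=(32, 128))))
```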

Wireless Energy Transfer Technology (무선 에너지 전송 기술)

  • Kang, S.Y.;Kim, Y.H.;Lee, M.L.;Zyung, T.H.
    • Electronics and Telecommunications Trends / v.23 no.6 / pp.59-69 / 2008
  • Wireless communication technology, the foundation of the IT revolution, has enabled person-to-person communication, making not only simple data transfer but also voice and video calls possible anytime, anywhere. Yet the power that drives these devices is still supplied by wire or by charging batteries. If wireless energy transfer became possible alongside wireless communication, IT technology would take another leap forward. Wireless energy transfer has been a human dream ever since Tesla a century ago. This article introduces wireless energy transfer technologies from Tesla, who envisioned them in the early twentieth century, through a variety of recent approaches. It surveys techniques based on electromagnetic radiation and on electromagnetic induction, and then introduces MIT's wireless energy transfer technology, which goes beyond these existing techniques by transferring energy non-radiatively at short range using resonant coupling.
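The abstract surveys the technology without formulas. One standard way to reason about a resonant link of the kind attributed to MIT is the figure of merit U = k·sqrt(Q1·Q2), where k is the coupling coefficient and Q1, Q2 are the resonator quality factors; the maximum achievable link efficiency is then U²/(1 + sqrt(1 + U²))². A minimal sketch follows; the example numbers are illustrative, not taken from the article.

```python
import math

def wpt_max_efficiency(k: float, q1: float, q2: float) -> float:
    """Maximum link efficiency of two coupled resonators.

    Uses the standard figure of merit U = k * sqrt(Q1 * Q2);
    eta_max = U^2 / (1 + sqrt(1 + U^2))^2.
    """
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

# Loosely coupled (k = 0.01) but high-Q (Q ~ 1000) resonators, as in
# mid-range resonant experiments, can still reach useful efficiency:
print(f"{wpt_max_efficiency(0.01, 1000, 1000):.2%}")  # ~82%
```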

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet (EEG기반 언어 인식 시스템을 위한 국제음성기호를 이용한 모음 특징 추출 연구)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.90-95 / 2014
  • Research on brain-computer interfaces, which connect humans to machines, has produced user-assistance devices such as wheelchair controllers and character-input systems. Recent studies have attempted to build speech recognition systems based on brainwaves, aiming at silent communication. In this paper we study how to extract vowel features based on the International Phonetic Alphabet (IPA) as a foundational step toward a speech recognition system based on electroencephalogram (EEG). We conducted a two-step experiment with three healthy male subjects: the first step used speech imagery with a single vowel and the second used imagery with two successive vowels. From the 64 acquired channels we selected 32 covering the frontal lobe, associated with thinking, and the temporal lobe, associated with speech function. Eigenvalues of the signal were used as the feature vector, and a support vector machine (SVM) was used for classification. In the first step we found that a feature vector of order greater than 10 is needed to analyze speech-related EEG; with an 11th-order feature vector, the highest average classification rate was 95.63%, between /a/ and /o/, and the lowest was 86.85%, between /a/ and /u/. In the second step we studied how speech-imagery signals differ between single and two successive vowels.
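The abstract specifies the pipeline (channel selection, eigenvalue features of a fixed order, SVM classification) but no code. Below is a minimal sketch of that pipeline, assuming the eigenvalues come from the channel covariance matrix and using synthetic data in place of the recorded 32-channel epochs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

ORDER = 11  # the abstract reports the best results with an 11th-order feature vector

def eigen_features(epoch: np.ndarray, order: int = ORDER) -> np.ndarray:
    """Largest eigenvalues of the channel covariance matrix of one EEG epoch."""
    cov = np.cov(epoch)                  # (channels x channels) covariance
    eigvals = np.linalg.eigvalsh(cov)    # eigenvalues in ascending order
    return eigvals[-order:][::-1]        # keep the 'order' largest

# Synthetic stand-in for 32-channel speech-imagery epochs of two vowel classes.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(120, 32, 256))  # 120 epochs, 32 channels, 256 samples
labels = np.repeat([0, 1], 60)            # e.g. /a/ vs /o/

X = np.array([eigen_features(e) for e in epochs])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2%}")
```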

Vowel Classification of Imagined Speech in an Electroencephalogram using the Deep Belief Network (Deep Belief Network를 이용한 뇌파의 음성 상상 모음 분류)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.59-64 / 2015
  • In this paper we demonstrate the usefulness of the deep belief network (DBN) in the field of brain-computer interfaces (BCI), particularly for imagined speech. In recent years, growing interest in BCI has led to a number of useful applications, such as robot control, game interfaces, and exoskeleton limbs. Imagined speech, which could serve communication or military-purpose devices, is one of the most exciting BCI applications, but several problems remain in implementing such a system. In previous work we addressed some issues of imagined speech using the International Phonetic Alphabet (IPA), though the approach needed to be extended to multi-class classification. This paper therefore provides a solution for multi-class vowel classification of imagined speech. We used the DBN, a deep learning algorithm, for multi-class vowel classification and selected four vowel pronunciations from the IPA: /a/, /i/, /o/, /u/. For the experiment we acquired 32-channel raw electroencephalogram (EEG) data from three male subjects, with electrodes placed over the frontal lobe and both temporal lobes, which are related to thinking and verbal function. Eigenvalues of the covariance matrix of the EEG data were used as the feature vector for each vowel. For comparison, we also report the classification results of a back-propagation artificial neural network (BP-ANN). The BP-ANN achieved 52.04% while the DBN achieved 87.96%, i.e. the DBN performed 35.92 percentage points better on multi-class imagined speech classification; the DBN also required much less total computation time. In conclusion, the DBN algorithm is efficient for BCI system implementation.
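The paper reports results but no implementation. A DBN is commonly approximated by greedy layer-wise RBM pretraining followed by a supervised read-out; the sketch below follows that pattern in scikit-learn, with an MLP standing in for the BP-ANN baseline. Layer sizes and hyperparameters are assumptions, and synthetic features replace the EEG eigenvalue vectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM, MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for eigenvalue feature vectors of four vowel classes.
rng = np.random.default_rng(0)
X = rng.random((400, 16))
y = np.repeat([0, 1, 2, 3], 100)  # /a/, /i/, /o/, /u/
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Stacked RBMs approximate DBN pretraining; BernoulliRBM expects inputs in [0, 1],
# and each RBM's hidden-unit probabilities feed the next layer.
dbn_like = Pipeline([
    ("scale", MinMaxScaler()),
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# A plain MLP trained by backpropagation corresponds roughly to the BP-ANN baseline.
bp_ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)

for name, model in [("DBN-like", dbn_like), ("BP-ANN", bp_ann)]:
    model.fit(Xtr, ytr)
    print(name, f"{model.score(Xte, yte):.2%}")
```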

A Study on the Creation of Digital Self-portrait with Intertextuality (상호텍스트성을 활용한 디지털 자화상 창작)

  • Lim, Sooyeon
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.427-434 / 2022
  • The purpose of this study is to create a self-portrait that draws the viewer into the problem of self-awareness through an immersive experience. We propose a method for implementing an interactive self-portrait using audio and image information obtained from viewers. The viewer's voice is converted into text and visualized, with the viewer's face image supplying the pixel information that composes the text. Text is a mixture of one's own emotions, imaginings, and intentions grounded in personal experiences and memories, and different people interpret the same text in different ways. By exploiting the intertextuality of text, the proposed digital self-portrait not only reproduces the viewer's self-consciousness on the inner level but also expands the meanings inherent in the text. Intertextuality in the broad sense refers to the totality of knowledge that arises between text and text and between subject and subject; the self-portrait expressed in text thus develops various relationships between viewer and text, viewer and viewer, and text and text. This study also shows that, on the outer level, the proposed self-portrait confirms the formative quality of text and re-creates spatiality and temporality. The dynamic self-portrait reflects viewers' interests in real time and is continuously updated and re-created.
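The abstract describes the rendering idea (the viewer's face image supplies the pixels that compose the recognized text) without implementation detail. A minimal sketch of that rendering step using Pillow follows; speech recognition is omitted, and the input file name and text layout are assumptions.

```python
from PIL import Image, ImageDraw, ImageFont

def text_from_face(face_path: str, text: str, out_path: str = "portrait.png") -> None:
    """Render 'text' so that its glyph pixels are filled with the face image."""
    face = Image.open(face_path).convert("RGB")
    # Draw the text as a mask: white glyphs on a black background.
    mask = Image.new("L", face.size, 0)
    draw = ImageDraw.Draw(mask)
    font = ImageFont.load_default()
    draw.multiline_text((10, 10), text, fill=255, font=font)
    # Keep face pixels only where the text mask is set.
    black = Image.new("RGB", face.size, (0, 0, 0))
    Image.composite(face, black, mask).save(out_path)

# A real installation would obtain 'text' from speech recognition of the
# viewer's voice; here a fixed string and a hypothetical input file are used.
text_from_face("viewer_face.png", "who am I?\n" * 20)
```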