• Title/Summary/Keyword: Auditory Interface

Search Results: 47

Perception Ability of Synthetic Vowels in Cochlear Implanted Children (모음의 포먼트 변형에 따른 인공와우 이식 아동의 청각적 인지변화)

  • Huh, Myung-Jin
    • MALSORI
    • /
    • no.64
    • /
    • pp.1-14
    • /
    • 2007
  • The purpose of this study was to examine differences in acoustic perception produced by formant changes in profoundly hearing-impaired children with cochlear implants. The subjects were 10 children with at least 15 months of experience with the implant; their mean chronological age was 8.4 years (standard deviation 2.9 years). Auditory perception ability was assessed using acoustically synthesized vowels. Each synthetic vowel combined F1, F2, and F3, and 42 synthetic sounds were produced using a Speech GUI (Graphical User Interface) program. The data were analyzed with clustering analysis and on-line analytical processing to assess perception of the acoustic synthetic vowels. The results showed that the auditory perception scores of cochlear-implanted children were higher for F2 synthetic vowels than for F1 vowels. It was also found that the children perceived differences between vowels in terms of the distance ratio between F1 and F2 within a specific vowel.

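
The formant-synthesis step this abstract describes (combining F1, F2, and F3 into a single vowel stimulus) can be sketched with a simple source-filter model. The Speech GUI program itself is not available, so the pitch, bandwidth, and formant values below are illustrative assumptions, not the study's actual parameters.

```python
import math

def resonator(x, f, bw, fs):
    """Two-pole digital resonator centered at f Hz with bandwidth bw Hz."""
    r = math.exp(-math.pi * bw / fs)               # pole radius from bandwidth
    c1 = 2.0 * r * math.cos(2.0 * math.pi * f / fs)
    c2 = -r * r
    y0 = y1 = 0.0
    out = []
    for s in x:
        y = (1.0 - r) * s + c1 * y0 + c2 * y1
        out.append(y)
        y1, y0 = y0, y
    return out

def formant_vowel(f1, f2, f3, fs=16000, dur=0.3, f0=120, bw=90.0):
    """Crude vowel: an impulse train at pitch f0 is passed through
    cascaded resonators at the three formant frequencies (all in Hz)."""
    n = int(fs * dur)
    src = [0.0] * n
    for i in range(0, n, fs // f0):                # glottal impulse train
        src[i] = 1.0
    out = src
    for f in (f1, f2, f3):
        out = resonator(out, f, bw, fs)
    peak = max(abs(v) for v in out)
    return [v / peak for v in out]                 # normalize to [-1, 1]

wave = formant_vowel(800, 1200, 2500)              # /a/-like formants (illustrative)
```

Varying only F1 or only F2 across such stimuli, as in the study, changes the vowel quality the listener perceives while holding the pitch constant.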

32-Channel EEG and Evoked Potential Mapping System (32채널 뇌파 및 뇌유발전위 Mapping 시스템)

  • Ahn, Chang-Beom;Park, Dae-Jun
    • Journal of Biomedical Engineering Research
    • /
    • v.17 no.2
    • /
    • pp.179-188
    • /
    • 1996
  • A clinically oriented 32-channel electroencephalogram (EEG) and evoked potential (EP) mapping system has been developed. EEG and EP signals acquired from 32 electrodes attached to the head surface are amplified by a pre-amplifier, which is separated from the main amplifier and located near the patient to reduce signal attenuation and noise contamination between the electrodes and the amplifier. The amplified signals are further amplified by a main amplifier, where various filtering and gain control are performed. An automatic artifact rejection scheme is employed using a neural-network-based EEG and artifact classifier, by which examination time is substantially reduced. The continuously measured EEG signals are used for spectral mapping, and auditory and visual evoked potentials measured synchronously with the auditory and visual stimuli are used for temporal evoked potential mapping. A user-friendly graphical interface based on Microsoft Windows 3.1 was developed for operating the system. Statistical databases for group and individual comparisons are included to support statistically based diagnosis.

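
The stimulus-locked averaging used to extract evoked potentials from continuous EEG can be sketched as follows; the toy waveform, noise level, and epoch count are illustrative, not data from the system. Background EEG that is not phase-locked to the stimulus averages toward zero, while the evoked response survives.

```python
import math, random

def average_epochs(epochs):
    """Average stimulus-locked epochs: uncorrelated background activity
    cancels (noise shrinks roughly as 1/sqrt(N)), the EP remains."""
    n = len(epochs)
    m = len(epochs[0])
    return [sum(e[i] for e in epochs) / n for i in range(m)]

# toy demonstration: a fixed evoked waveform buried in random noise
true_ep = [math.sin(2 * math.pi * i / 50.0) for i in range(200)]
rng = random.Random(0)
epochs = [[v + rng.gauss(0.0, 1.0) for v in true_ep] for _ in range(64)]
avg_ep = average_epochs(epochs)
```

With 64 epochs, the residual noise in `avg_ep` is about one eighth of a single epoch's, which is why EP mapping systems record many stimulus repetitions.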

Brain Correlates of Emotion for XR Auditory Content (XR 음향 콘텐츠 활용을 위한 감성-뇌연결성 분석 연구)

  • Park, Sangin;Kim, Jonghwa;Park, Soon Yong;Mun, Sungchul
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.738-750
    • /
    • 2022
  • In this study, we reviewed and discussed whether short auditory stimuli can evoke emotion-related neurological responses. The findings imply that if personalized soundtracks are provided to XR users based on machine learning or probabilistic network models, user experiences in XR environments can be enhanced. We also found that the arousal-relaxation factor evoked by short auditory stimuli produces distinct patterns of functional connectivity characterized from background EEG signals: coherence in the right hemisphere increases in the sound-evoked arousal state and decreases in the relaxed state. Our findings can be practically utilized in developing XR sound biofeedback systems that provide preferred sounds to users for highly immersive XR experiences.
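
The coherence measure underlying such functional-connectivity analyses can be sketched as a generic magnitude-squared coherence estimate; this is not the authors' pipeline, and the sampling rate, segment count, and test signals below are illustrative.

```python
import numpy as np

def msc(x, y, fs, nseg=8):
    """Magnitude-squared coherence |Sxy|^2 / (Sxx * Syy), estimated by
    averaging windowed FFT spectra over nseg non-overlapping segments."""
    seg = len(x) // nseg
    win = np.hanning(seg)
    sxx = syy = 0.0
    sxy = 0.0 + 0.0j
    for k in range(nseg):
        xf = np.fft.rfft(x[k * seg:(k + 1) * seg] * win)
        yf = np.fft.rfft(y[k * seg:(k + 1) * seg] * win)
        sxx = sxx + np.abs(xf) ** 2
        syy = syy + np.abs(yf) ** 2
        sxy = sxy + xf * np.conj(yf)
    freqs = np.fft.rfftfreq(seg, 1.0 / fs)
    return freqs, np.abs(sxy) ** 2 / (sxx * syy)

# two "channels" sharing a 10 Hz component plus independent noise
fs = 128
rng = np.random.default_rng(0)
t = np.arange(8 * fs) / fs
shared = np.sin(2 * np.pi * 10 * t)
ch1 = shared + 0.5 * rng.standard_normal(t.size)
ch2 = shared + 0.5 * rng.standard_normal(t.size)
freqs, coh = msc(ch1, ch2, fs)
```

Coherence near 1 at a frequency indicates the two channels are strongly phase-coupled there, which is the quantity compared between hemispheres and emotional states in studies like this one.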

Pattern classification of the synchronized EEG records by an auditory stimulus for human-computer interface (인간-컴퓨터 인터페이스를 위한 청각 동기방식 뇌파신호의 패턴 분류)

  • Lee, Yong-Hee;Choi, Chun-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2349-2356
    • /
    • 2008
  • In this paper, we present a method to effectively extract and classify the EEG produced solely by brain activity while a normal subject performs a mental task. We measure EEG synchronized to an auditory event while the subject thinks of a specific task, then shift the baseline and reduce the effect of biological artifacts in the measured EEG. Finally, we extract only the mental-task signal by averaging and recognize the extracted signal by computing its AR coefficients. In the experiment, an auditory stimulus was used as the event, and the EEG was recorded from three channels: $C_3-A_1$, $C_4-A_2$, and $P_Z-A_1$. After averaging 16 trials for each channel output, we extracted the features of specific mental tasks by modeling the output with 12th-order AR coefficients. The resulting 36 coefficients were used as input parameters to a neural network, and training data were collected 50 times per task. On data not used for training, the task recognition rate was 34-92 percent for two tasks and 38-54 percent for four tasks.
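
The feature-extraction step (a 12th-order AR model per channel, three channels concatenated into a 36-dimensional vector) can be sketched with a Yule-Walker fit. The synthetic AR(1) "EEG" below stands in for real recordings, and the model order and channel count follow the abstract.

```python
import numpy as np

def ar_coeffs(x, order=12):
    """Yule-Walker AR fit: solve the Toeplitz system R a = r built
    from the biased autocorrelation of the de-meaned signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    acf = np.correlate(x, x, mode="full")[n - 1:] / n
    R = np.array([[acf[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, acf[1:order + 1])

# one 12th-order model per channel, three channels -> 36 features
rng = np.random.default_rng(1)
channels = []
for _ in range(3):
    e = rng.standard_normal(512)
    ch = np.zeros(512)
    for i in range(1, 512):
        ch[i] = 0.9 * ch[i - 1] + e[i]         # toy AR(1) "EEG"
    channels.append(ch)
features = np.concatenate([ar_coeffs(ch) for ch in channels])
```

The resulting 36-element vector is what would be fed to the neural-network classifier described in the abstract.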

Interactive information process image with minute hand gestures

  • Lim, Chan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.04a
    • /
    • pp.799-802
    • /
    • 2016
  • Working with V4 to create various contents that combine different interfaces, such as 3D graphics and multimedia (video, audio, and camera input), is certainly an interesting task. Moreover, beyond other interfaces, because it can address many sensory channels, such as visual, auditory, and touch effects, it allows more developed models to be built freely. We intended users to feel a sense of pleasure and interaction rather than merely viewing the work as media art.

Tele-Manipulation of ROBHAZ-DT2 for Hazard Environment Applications

  • Ryu, Dong-Seok;Lee, Jong-Wha;Yoon, Seong-Sik;Kang, Sung-Chul;Song, Jae-Bok;Kim, Mun-Sang
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.2051-2056
    • /
    • 2003
  • In this paper, tele-manipulation in explosive ordnance disposal (EOD) applications is discussed. The ROBHAZ-DT2 was developed as a teleoperated mobile manipulator for EOD. In general, it has been thought that the robot must have functions and accuracy appropriate enough to handle complicated and dangerous missions. However, research on the ROBHAZ-DT2 revealed that teleoperation imposes additional restrictions and difficulties in EOD missions. To solve this problem, a novel user interface for the ROBHAZ-DT2 was developed, through which the operator can interact using various human senses (i.e., visual, auditory, and haptic). It enables an operator to control the ROBHAZ-DT2 simply and intuitively. A tele-manipulation control scheme for the ROBHAZ-DT2 is also proposed, including compliance control via force feedback, which makes the robot adapt itself to circumstances while faithfully following the operator's commands. This paper gives a detailed description of the user interface and the tele-manipulation control of the ROBHAZ-DT2. An EOD demonstration was conducted to verify the validity of the proposed interface and control scheme.

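
The compliance control via force feedback mentioned in the abstract is commonly realized as an admittance law; here is a minimal sketch assuming a virtual mass-damper-spring model. The gains are illustrative, not the ROBHAZ-DT2's actual parameters.

```python
def admittance_step(x, v, f_ext, dt=0.01, m=2.0, b=20.0, k=200.0):
    """One semi-implicit Euler step of a virtual mass-damper-spring,
    m*a + b*v + k*x = f_ext, so the tool yields under contact force."""
    a = (f_ext - b * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# under a constant 10 N contact force, the position offset settles
# near f/k = 0.05 m, the compliance equilibrium of the virtual spring
x = v = 0.0
for _ in range(2000):
    x, v = admittance_step(x, v, 10.0)
```

Adding this offset to the operator's commanded position lets the manipulator follow the command while yielding to unexpected contact, which is the behavior the abstract describes.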

Encoding Method for Olfactory Information (후각 정보의 부호화 방법)

  • Lee, Keun-Hee;Lee, Sang-Wook;Kim, Eung-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.7
    • /
    • pp.1275-1282
    • /
    • 2007
  • With the rapid spread of smart mobile products and the practical use of wireless networks, it has become possible to offer a computing environment that people can use anytime, anywhere, and portable computers are expected to become the next generation of practical computing. Such computers will be expected to have new interfaces that provide realistic services through all five human senses, in addition to interfaces based on the visual and auditory senses. Accordingly, in this study we discuss technology for expressing and reproducing olfactory information, which among the human senses is closely related to memory, in order to embody a lifelike user interface.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.4
    • /
    • pp.285-297
    • /
    • 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. This work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera to point it at a location specified by the user through uttered words and pointing of the hand. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to, and then uses voice commands provided by the user to fine-tune the location and change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.

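
The coarse-to-fine pointing scheme (hand gesture for the general position, voice commands to fine-tune) can be sketched as below. The field of view, image resolution, and voice vocabulary are hypothetical assumptions for illustration, not the paper's actual values.

```python
def point_camera(hand_px, words, pan, tilt, fov=(60.0, 40.0), res=(640, 480)):
    """Coarse-to-fine camera pointing: the detected hand position gives
    an initial pan/tilt offset, then recognized words nudge the aim.
    Geometry and vocabulary here are illustrative assumptions."""
    # coarse: map pixel offset from image center to degrees of pan/tilt
    pan += (hand_px[0] / res[0] - 0.5) * fov[0]
    tilt += (0.5 - hand_px[1] / res[1]) * fov[1]
    # fine: fixed-size nudges from recognized voice commands
    nudge = {"left": (-2.0, 0.0), "right": (2.0, 0.0),
             "up": (0.0, 2.0), "down": (0.0, -2.0)}
    for w in words:
        dp, dt = nudge.get(w, (0.0, 0.0))
        pan, tilt = pan + dp, tilt + dt
    return pan, tilt

pan, tilt = point_camera((480, 240), ["left", "up"], 0.0, 0.0)
```

The two stages mirror how a person would direct a colleague: a pointed hand for "over there," then words for the final adjustment.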

Evaluation of Haptic Seat for Vehicle Navigation System (자동차 네비게이션 시스템을 위한 햅틱 시트의 평가에 관한 연구)

  • Chang, Won-suk;Kim, Seok-Hwan;Pyun, Jong-Kweon;Ji, Yong-Gu
    • Journal of the Ergonomics Society of Korea
    • /
    • v.29 no.4
    • /
    • pp.625-629
    • /
    • 2010
  • This study examined the subjective positive and negative aspects a driver feels when a haptic seat is applied to a vehicle to support the navigation system. Our experiment with a total of twenty subjects shows that reaction time (RT) was better with the haptic interface than with visual or auditory interfaces, although subjective satisfaction and workload ratings were less favorable in the simulator environment. Although individual differences and unfamiliarity were relatively high, as expected in an experiment with an entirely new technology, overall satisfaction with the haptic seat was high. The results provide considerations and directions for implementing a haptic seat and meaningfully confirm its potential. We expect improvements in driver-vehicle interaction and in safety through haptic seats applied to actual vehicles.

Sensibility Evaluation of Function Sounds on Mobile Phones (휴대폰 기능음별 감성평가)

  • Kim, Jae-Kuk;Cho, Am
    • Journal of the Ergonomics Society of Korea
    • /
    • v.27 no.3
    • /
    • pp.61-69
    • /
    • 2008
  • The purpose of this study was to examine the effects of sensibility and compatibility of the function sounds of mobile phones. For this purpose, the study extracted sensibility adjectives and carried out a sensibility evaluation to identify the sensibility factors in mobile phone function sounds, investigating 65 sound sources in six categories with 27 subjects. The results showed that the two dominant sensibility factors were the duration and the melody pattern of the sound sources. The compatibility of opening and closing sounds was rated low when they had short duration and quick tempo. In particular, most subjects preferred opening sounds that were more attractive and longer than other function sounds. Another finding was that short sounds with melodies were preferred for cancellations and alerts, unlike the sound sources commonly used in industry and in other traditional electronic products.