• Title/Summary/Keyword: voice recognition sensor

Voice Recognition Sensor Driven Elevator for High-rise Vertical Shift (동굴관광용 고층수직이동 승강기의 긴급 음성구동 제어)

  • Choi, Byong-Seob;Kang, Tae-Hyun;Yun, Yeo-Hoon;Jang, Hoon-Gyou;Soh, Dea-Wha
    • Journal of the Speleological Society of Korea
    • /
    • no.88
    • /
    • pp.1-7
    • /
    • 2008
  • Voice recognition is one of the technologies of greatest interest in Human-Computer Interaction (HCI): in science-fiction films, for example, people routinely talk to computers. Human language, however, differs from machine language, so the challenge is connecting the two; scientists have worked on it for some 30 years, and it remains difficult. The goal of this project is to enable a processor to understand the human voice. First, the signal from the voice sensor is converted into binary code (BCD). The elevator carries people up and down, so the design is directly tied to passenger safety. For motor control we use PWM on an ATmega16 and drive a DC motor so that the elevator runs at a constant speed. With a voice-identification module, the voice-sensor-driven elevator operates reliably up and down from the 1st to the 10th floor under ATmega16 PWM control, and the approach should be clearly useful for voice-driven high-rise vertical transport.
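The constant-speed PWM drive described above can be sketched numerically. This is a hedged illustration rather than the paper's firmware: the 8 MHz clock, /64 prescaler, 8-bit fast-PWM mode, and 70% duty figure are assumptions chosen to match a typical ATmega16 timer setup.

```python
# Illustrative sketch: computing an 8-bit PWM compare value for a
# constant-speed DC elevator motor, as on an ATmega16 8-bit timer.
# Clock, prescaler, and duty figures below are assumptions.

F_CPU = 8_000_000   # assumed 8 MHz system clock
PRESCALER = 64      # assumed timer prescaler
TOP = 255           # 8-bit fast PWM counts 0..255

def pwm_frequency(f_cpu: int, prescaler: int, top: int) -> float:
    """PWM frequency of 8-bit fast PWM: f_cpu / (prescaler * (top + 1))."""
    return f_cpu / (prescaler * (top + 1))

def compare_value(duty_pct: float, top: int = TOP) -> int:
    """Output-compare value that yields the requested duty cycle (0-100%)."""
    if not 0 <= duty_pct <= 100:
        raise ValueError("duty cycle must be 0-100%")
    return round(duty_pct / 100 * top)

ocr = compare_value(70)                        # 178 of 255, ~70% duty
freq = pwm_frequency(F_CPU, PRESCALER, TOP)    # ~488 Hz PWM frequency
```

Holding the compare value fixed is what gives the cabin its regulated speed; a closed-loop design would adjust it against a speed sensor instead.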

Kiosk for the Visually Impaired using Voice Recognition (음성인식 기능을 이용한 시각장애인용 키오스크)

  • Kim, Dae-Young;Lee, Ah-Hyun;Lee, Gun-Haeng;Kim, Se-Hyun;Lee, Boong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.5
    • /
    • pp.873-882
    • /
    • 2022
  • In this paper, we study a voice-recognition kiosk designed to relieve the difficulties that visually impaired users face with the kiosks now widespread in modern society. Using an ultrasonic sensor and a PIR (Passive Infrared) sensor, the kiosk detects a visually impaired user in the 40-80 cm range, introduces itself through an MP3 module, and invites the user to come closer; once the user is within 40 cm, the MP3 module announces the product descriptions and guides the order. We studied a recording-based voice-recognition system and a kiosk that dispenses the selected items through servo motors. The kiosk was built and refined through operation and optimization experiments on the PIR, ultrasonic, voice-recognition, and shock sensors. Finally, we confirmed that security can be strengthened by adding a shock sensor and an emergency bell.
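The two-stage proximity behavior above can be sketched as a small distance classifier. Only the 40 cm and 80 cm boundaries come from the abstract; the echo-time conversion and all names are illustrative assumptions.

```python
# Hedged sketch of the kiosk's proximity logic: convert an ultrasonic
# echo round-trip time to a distance, then pick which MP3 guidance to
# play. Zone boundaries (40 cm / 80 cm) are from the abstract.

SPEED_OF_SOUND_CM_PER_US = 0.0343   # ~343 m/s at room temperature

def echo_to_distance_cm(echo_us: float) -> float:
    """Distance from a round-trip echo time in microseconds (halve it)."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def guidance_zone(distance_cm: float) -> str:
    """Choose what the MP3 module should announce at this distance."""
    if distance_cm <= 40:
        return "menu"       # within 40 cm: describe products, take the order
    if distance_cm <= 80:
        return "approach"   # 40-80 cm: introduce the kiosk, invite closer
    return "idle"

print(guidance_zone(echo_to_distance_cm(2915)))  # ~50 cm -> "approach"
```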

Implementation of Motorized Wheelchair using Speaker Independent Voice Recognition Chip and Wireless Microphone (화자 독립 방식의 음성 인식 칩 및 무선 마이크를 이용한 전동 휄체어의 구현)

  • Song, Byung-Seop;Lee, Jung-Hyun;Park, Jung-Jae;Park, Hee-Joon;Kim, Myoung-Nam
    • Journal of Sensor Science and Technology
    • /
    • v.13 no.1
    • /
    • pp.20-26
    • /
    • 2004
  • For disabled persons who cannot use their limbs, a motorized wheelchair activated by a speaker-independent voice recognition module was implemented. A wireless voice-transfer device was designed and added for user convenience, and the wheelchair can be operated by either voice or keypad at the user's choice, so a keypad remains available when needed. A speaker-independent recognition method was adopted so that anyone, including an assistant, can operate the wheelchair. Tests of the implemented wheelchair showed a voice recognition rate above 97% and correct movements.
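The voice-or-keypad operation described above amounts to one command set fed from two input sources. The command names and fail-safe policy below are assumptions for illustration, not the paper's actual firmware.

```python
# Illustrative dual-input dispatcher: the same motor commands are
# accepted from the voice-recognition module or the keypad.

COMMANDS = {"forward", "backward", "left", "right", "stop"}

def dispatch(source: str, token: str) -> str:
    """Map a recognized voice token or keypad token to a motor command."""
    if source not in ("voice", "keypad"):
        raise ValueError("unknown input source")
    token = token.lower()
    if token not in COMMANDS:
        return "stop"   # fail safe: unrecognized input halts the chair
    return token

assert dispatch("voice", "FORWARD") == "forward"
```

Falling back to "stop" on an unrecognized token is the conservative choice for a safety-critical mobility device.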

Performance Evaluation of Real-time Voice Traffic over IEEE 802.15.4 Beacon-enabled Mode (IEEE 802.15.4 비컨 가용 방식에 의한 실시간 음성 트래픽 성능 평가)

  • Hur, Yun-Kang;Kim, You-Jin;Huh, Jae-Doo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.2 no.1
    • /
    • pp.43-52
    • /
    • 2007
  • The IEEE 802.15.4 specification, which defines the low-rate wireless personal area network (LR-WPAN), is applied to home and building automation, remote control and sensing, intelligent management, environmental monitoring, and so on. Recently it has also been considered as an alternative technology for multimedia services such as voice-controlled automation, wireless headsets, and wireless surveillance cameras. To evaluate the capacity for voice traffic over an IEEE 802.15.4 LR-WPAN, we considered two scenarios: voice traffic only, and voice and sensing traffic coexisting. In both cases we examined delay and packet loss rate with and without acknowledgements, and for various beacon periods obtained by varying the beacon-order and superframe-order values. With voice devices only, the LR-WPAN could support up to five voice devices; in the coexistence case, one voice device could share the network with about 60 sensor devices.
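The beacon periods the authors vary are fixed by the IEEE 802.15.4 superframe structure: the beacon interval is aBaseSuperframeDuration × 2^BO symbols and the active superframe duration is aBaseSuperframeDuration × 2^SO symbols, with aBaseSuperframeDuration = 960 symbols and a 16 µs symbol at 2.4 GHz. A small sketch of that arithmetic:

```python
# IEEE 802.15.4 beacon-enabled timing (2.4 GHz O-QPSK PHY).
A_BASE_SUPERFRAME_SYMBOLS = 960
SYMBOL_US = 16  # one symbol = 16 microseconds at 2.4 GHz

def beacon_interval_ms(bo: int) -> float:
    """Beacon interval in milliseconds for beacon order BO (0..14)."""
    if not 0 <= bo <= 14:
        raise ValueError("BO must be 0..14 in beacon-enabled mode")
    return A_BASE_SUPERFRAME_SYMBOLS * (2 ** bo) * SYMBOL_US / 1000

def superframe_duration_ms(so: int, bo: int) -> float:
    """Active (superframe) duration; the spec requires 0 <= SO <= BO."""
    if not 0 <= so <= bo:
        raise ValueError("SO must satisfy 0 <= SO <= BO")
    return A_BASE_SUPERFRAME_SYMBOLS * (2 ** so) * SYMBOL_US / 1000

print(beacon_interval_ms(0))   # 15.36 ms, the shortest beacon period
print(beacon_interval_ms(3))   # 122.88 ms; each BO increment doubles it
```

Short beacon periods reduce voice delay at the cost of beacon overhead, which is why delay and loss were measured across a range of BO/SO values.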

Multi-Modal Biometrics System for Ubiquitous Sensor Network Environment (유비쿼터스 센서 네트워크 환경을 위한 다중 생체인식 시스템)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.4 s.316
    • /
    • pp.36-44
    • /
    • 2007
  • In this paper, we implement a speech and face recognition system to support various ubiquitous sensor network application services, such as switch control and authentication, over wireless audio and image interfaces. The proposed system consists of hardware with audio and image sensors and software comprising a speech recognition algorithm based on a psychoacoustic model and a face recognition algorithm using PCA (Principal Component Analysis), together with LDPC (Low-Density Parity-Check) coding. The speech and face recognition modules run on a host PC to use the sensor energy efficiently, and an FEC (Forward Error Correction) system is employed to improve recognition accuracy. We also optimized the simulation coefficients and test environment to remove wireless channel noise and correct wireless channel errors effectively. As a result, when the distance between the audio sensor and the voice source is less than 1.5 m, the FAR and FRR are 0.126% and 7.5%, respectively; with the face recognition algorithm limited to two attempts, the GAR and FAR are 98.5% and 0.036%.
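The FAR/FRR figures quoted above follow the standard biometric definitions: FAR is the fraction of impostor attempts falsely accepted, FRR the fraction of genuine attempts falsely rejected. The trial counts below are illustrative, not the paper's data.

```python
# Standard biometric error rates, as percentages.

def far(false_accepts: int, impostor_attempts: int) -> float:
    """False Acceptance Rate: impostors wrongly accepted."""
    return 100 * false_accepts / impostor_attempts

def frr(false_rejects: int, genuine_attempts: int) -> float:
    """False Rejection Rate: genuine users wrongly rejected."""
    return 100 * false_rejects / genuine_attempts

# e.g. 3 genuine users rejected out of 40 trials gives 7.5% FRR,
# the same order as the figure quoted above
print(frr(3, 40))   # 7.5
```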

A Study on the Motion and Voice Recognition Smart Mirror Using Grove Gesture Sensor (그로브 제스처 센서를 활용한 모션 및 음성 인식 스마트 미러에 관한 연구)

  • Hui-Tae Choi;Chang-Hoon Go;Ji-Min Jeong;Ye-Seul Shin;Hyoung-Keun Park
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1313-1320
    • /
    • 2023
  • This paper presents the development of a smart mirror whose display is controlled through a Grove gesture sensor and which integrates voice recognition functionality. The hardware consists of an LCD monitor combined with an acrylic panel, onto which a semi-mirror film with a reflectance of 37% and a transmittance of 36% is attached, enabling it to act as both a mirror and a display. The proposed smart mirror eliminates the need for users to physically touch the mirror or operate a keyboard by implementing gesture control with the Grove gesture sensor. It also incorporates voice recognition and integrates Google Assistant to display on-screen results corresponding to the user's voice commands.
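Gesture control of the kind described above typically reduces to mapping the sensor's reported gesture codes onto display actions. The codes and action names below are assumptions for this sketch, not the paper's implementation (Grove gesture sensors based on the PAJ7620 report swipes and approach/retreat as small integer codes).

```python
# Illustrative mapping from gesture-sensor codes to mirror actions.
# Codes and action names are assumed for the sketch.

GESTURE_ACTIONS = {
    1: "next_widget",     # right swipe
    2: "prev_widget",     # left swipe
    3: "scroll_up",       # up swipe
    4: "scroll_down",     # down swipe
    5: "wake_display",    # hand moves toward the sensor
    6: "sleep_display",   # hand moves away from the sensor
}

def handle_gesture(code: int) -> str:
    """Resolve a sensor code to a display action; ignore unknown codes."""
    return GESTURE_ACTIONS.get(code, "noop")

assert handle_gesture(3) == "scroll_up"
```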

HearCAM Embedded Platform Design (히어 캠 임베디드 플랫폼 설계)

  • Hong, Seon Hack;Cho, Kyung Soon
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.4
    • /
    • pp.79-87
    • /
    • 2014
  • In this paper, we implemented the HearCAM platform on the Raspberry Pi B+, an open-source platform. The Raspberry Pi B+ features a dual step-down (buck) power supply with polarity and hot-swap protection, a Broadcom BCM2835 SoC running at 700 MHz with 512 MB of RAM soldered on top of the Broadcom chip, and a Pi camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV software, and extended the speech capability with SVOX TTS (text-to-speech), speaking the matching result back to the user. The HearCAM thus identifies the voice and pattern characteristics of a target image scanned with the Pi camera while gathering temperature sensor data in an IoT environment. Speech recognition, pattern matching, and temperature-sensor data logging operate over Wi-Fi wireless communication, and we designed and fabricated the HearCAM enclosure directly with 3D printing technology.

Wearable Computing System for Blind Persons (시각 장애우를 위한 Wearable Computing System)

  • Kim, Hyung-Ho;Choi, Sun-Hee;Jo, Tea-Jong;Kim, Soon-Ju;Jang, Jea-In
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.261-263
    • /
    • 2006
  • Technologies such as RFID and sensor networks are making our lives ever more comfortable. In this paper we propose a wearable computing system for blind and deaf persons, who can easily be left behind by such technology. The system consists of an embedded board for data processing, ultrasonic sensors for measuring distance, and motors whose vibration signals a deaf user to look at the screen. It delivers environmental information as text and voice: for example, the distance to an obstacle is calculated by a data-compounding module from the sensed ultrasonic reflection time, converted to text or voice by the main processing module, and presented to the user. We plan to extend the system with a voice recognition module and a text-to-speech converter to help blind and deaf persons communicate.
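One natural way to present the obstacle distance haptically, in the spirit of the system above, is to make closer obstacles vibrate harder. The linear mapping and the 200 cm range below are assumptions for illustration only.

```python
# Hedged sketch: map an obstacle distance to a vibration-motor duty
# cycle, full strength at contact and off beyond an assumed max range.

def vibration_duty(distance_cm: float, max_range_cm: float = 200.0) -> float:
    """Return a 0..1 motor duty: 1.0 at contact, 0.0 beyond max range."""
    if distance_cm >= max_range_cm:
        return 0.0
    return 1.0 - distance_cm / max_range_cm

print(round(vibration_duty(50), 2))   # 0.75: a close obstacle buzzes hard
print(vibration_duty(250))            # 0.0: nothing within range
```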

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.1076-1094
    • /
    • 2022
  • Emotion recognition technology is an essential part of human personality analysis. Existing methods define personality characteristics through surveys, but much communication cannot take place without considering emotion, so emotion recognition is essential for communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically through facial expressions, speech, and biometric responses, so emotions can be recognized from images, voice signals, or physiological signals; physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types and subdivided the existing binary arousal-valence scheme into four levels per axis to classify emotions in more detail: starting from the current High/Low classification, the model was extended to multiple levels. Signal features were extracted with a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are commonly used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated on measurements from actual sensors.
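The core operation of the 1-D CNN mentioned above is a sliding dot product along the signal axis. The kernel and signal below are illustrative; the paper's network stacks many such layers (with learned kernels) before the sixteen-class classifier.

```python
# Hedged sketch of a single valid-mode 1-D convolution pass over a
# physiological signal, the building block of a 1-D CNN layer.
import numpy as np

def conv1d_valid(signal: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 1-D cross-correlation, as computed by CNN conv layers."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

sig = np.array([0., 1., 2., 3., 2., 1., 0.])   # toy signal with one peak
edge = np.array([1., 0., -1.])                 # derivative-style kernel
print(conv1d_valid(sig, edge))  # responds at the rise and fall of the peak
```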

Wireless Speech Recognition System using Psychoacoustic Model (심리음향 모델을 이용한 무선 음성인식 시스템)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.110-116
    • /
    • 2006
  • In this paper, we implement a speech recognition system that supports ubiquitous sensor network application services, such as switch control and authentication, using wireless audio sensors. The proposed system consists of the wireless audio sensor, a speech recognition algorithm based on a psychoacoustic model, and LDPC (Low-Density Parity-Check) coding for error correction. The speech recognition module runs on a host PC to use the sensor energy effectively, and an FEC (Forward Error Correction) system is used to improve recognition accuracy. We also optimized the simulation coefficients and test environment to remove wireless channel noise and correct wireless channel errors effectively. As a result, when the distance between the sensor and the voice source is less than 1.0 m, the FAR and FRR are 0.126% and 7.5%, respectively.