• Title/Summary/Keyword: Wi-Fi audio

Search Result 8

Low-Delay, Low-Power, and Real-Time Audio Remote Transmission System over Wi-Fi

  • Hong, Jinwoo;Yoo, Jeongju;Hong, Jeongkyu
    • Journal of Information and Communication Convergence Engineering / v.18 no.2 / pp.115-122 / 2020
  • Audiovisual (AV) facilities such as TVs and signage screens are installed in many public places. However, their audio is often muted to prevent noise and interference to bystanders, which causes viewers to lose concentration on, and understanding of, the AV content. To address this problem, this paper proposes a complete technique for remotely listening to the audio of audiovisual facilities with clean sound quality on personal smart mobile devices while maintaining video lip-sync. Experimental results verify that the proposed scheme reduces system power consumption by 8% to 16% and provides real-time processing with a low latency of 120 ms. Because it can provide remote audio services in many settings, such as express, intercity, and wide-area buses, trains, and public waiting rooms, the system described in this paper will contribute to the adoption of audio tele-hearing services and various application services.
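The low-latency, lip-synced playback described above typically rests on a fixed playout delay: each packet carries a capture timestamp and is scheduled for capture time plus a fixed budget, so any network jitter below the budget never causes gaps or reordering. The sketch below illustrates that idea only; the class, names, and use of the 120 ms figure as the budget are assumptions for illustration, not the paper's implementation.

```python
# Illustrative fixed-playout-delay jitter buffer for streamed audio packets.
# Packets are scheduled relative to capture time, not arrival time, so
# late-but-within-budget packets still play in order.

import heapq

PLAYOUT_DELAY = 0.120  # seconds; chosen to match the paper's reported latency

class JitterBuffer:
    def __init__(self, delay=PLAYOUT_DELAY):
        self.delay = delay
        self.heap = []  # (play_time, seq, payload), ordered by play time

    def push(self, seq, capture_time, payload):
        # Schedule the packet for capture_time + delay.
        heapq.heappush(self.heap, (capture_time + self.delay, seq, payload))

    def pop_due(self, now):
        """Return payloads whose scheduled play time has arrived, in order."""
        due = []
        while self.heap and self.heap[0][0] <= now:
            due.append(heapq.heappop(self.heap)[2])
        return due

# Packets arriving out of order within the jitter budget come out reordered.
buf = JitterBuffer()
buf.push(seq=2, capture_time=0.020, payload=b"B")
buf.push(seq=1, capture_time=0.000, payload=b"A")
print(buf.pop_due(now=0.150))  # both packets are due by 150 ms
```

A larger delay tolerates more jitter at the cost of end-to-end latency; 120 ms sits comfortably inside the roughly 200 ms window usually quoted for acceptable lip-sync.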

Data Transmission Method using Broadcasting in Bluetooth Low Energy Environment (저전력 블루투스 환경에서 브로드캐스팅을 이용한 데이터전송 방법)

  • Jang, Rae-Young;Lee, Jae-Ung;Jung, Sung-Jae;Soh, Woo-Young
    • Journal of Digital Contents Society / v.19 no.5 / pp.963-969 / 2018
  • Wi-Fi and Bluetooth are perhaps the most prominent wireless communication technologies used in the Internet of Things (IoT). Compared with the widely used Wi-Fi, classic Bluetooth has several limitations, including its one-way 1:1 master-slave connection, slow transmission, and limited range, so it is used mainly for connecting audio devices. The release of Bluetooth Low Energy (BLE) improved some of these shortcomings, but BLE has still not become a competitive alternative to Wi-Fi. This paper presents a method of data transmission through broadcasting in BLE and demonstrates its performance with one-to-many data transfer results. The connection-free data transmission proposed in this paper can be utilized in special circumstances that require 1:N data transmission, such as disaster safety networks.
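Connection-free BLE broadcasting works by embedding application data directly in the advertising payload, which any number of nearby scanners can receive without pairing. A minimal sketch of that packing, assuming the Manufacturer Specific Data AD type and the 31-byte legacy advertising limit from the Bluetooth Core Specification (the paper's actual payload layout is not given):

```python
# Pack application data into a BLE legacy advertising payload.
# Each AD structure is [length][AD type][data], where length counts the
# AD type byte plus the data. 0xFFFF is the Bluetooth SIG's reserved
# test company ID; a real product would use its own assigned ID.

ADV_MAX = 31             # legacy advertising payload limit, in bytes
AD_TYPE_MFG_DATA = 0xFF  # Manufacturer Specific Data AD type

def build_adv_payload(data: bytes, company_id: int = 0xFFFF) -> bytes:
    body = company_id.to_bytes(2, "little") + data
    ad = bytes([len(body) + 1, AD_TYPE_MFG_DATA]) + body
    if len(ad) > ADV_MAX:
        raise ValueError("payload exceeds 31-byte legacy advertising limit")
    return ad

payload = build_adv_payload(b"sensor:42")
print(payload.hex())
```

The 31-byte cap is what makes this transport suitable for small sensor readings or alerts rather than bulk transfer, which matches the 1:N disaster-network use case the paper targets.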

The Design and Implementation Android OS Based Portable Navigation System For Visually Impaired Person and N : N Service (시각 장애인을 위한 Android OS 기반의 Portable Navigation System 설계 및 구현 과 N : N Service)

  • Kong, Sung-Hun;Kim, Young-Kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2012.05a / pp.327-330 / 2012
  • With the rapid growth of cities, roads carry heavy traffic and many buildings are under construction. Such environments make it harder for visually impaired people to walk comfortably. To alleviate this problem, we introduce an Android-based portable navigation system that assists visually impaired pedestrians and allows a service center to monitor them in real time. The system uses GPS, a camera, audio, and Wi-Fi (wireless fidelity), so GPS location and camera image information can be sent to the service center over a Wi-Fi network. Specifically, the transmitted GPS information enables the service center to determine the user's whereabouts and mark the location on a map, while the delivered camera images let the center monitor the user's view. The center can also offer live spoken guidance through the audio channel. In sum, the Android-based portable navigation system is a specialized navigation system that enables more comfortable walking for visually impaired people.

  • PDF
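The GPS-reporting side of such a system reduces to sending small location records from the phone to the service center over Wi-Fi. A hypothetical sketch, assuming newline-delimited JSON over a TCP socket (the field names and framing are illustrative, not the paper's actual protocol):

```python
# Encode a GPS fix as a newline-terminated JSON line and send it to the
# service center over a short-lived TCP connection.

import json
import socket

def encode_fix(lat: float, lon: float, ts: float) -> bytes:
    """Frame one GPS fix as a newline-terminated JSON line."""
    return (json.dumps({"lat": lat, "lon": lon, "ts": ts}) + "\n").encode()

def decode_fix(line: bytes) -> dict:
    """Parse one framed GPS fix back into a dict (service-center side)."""
    return json.loads(line.decode())

def send_fix(host: str, port: int, lat: float, lon: float, ts: float) -> None:
    # One short-lived connection per report keeps the client trivial;
    # a production system would likely keep a persistent connection.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(encode_fix(lat, lon, ts))

msg = encode_fix(35.1796, 129.0756, 1700000000.0)  # example coordinates
print(decode_fix(msg)["lat"])
```

The camera stream would need a separate, higher-bandwidth channel; only the compact per-fix record is sketched here.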

The Design and Implementation Navigation System For Visually Impaired Person (시각 장애인을 위한 Navigation System의 설계 및 구현)

  • Kong, Sung-Hun;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.12 / pp.2702-2707 / 2012
  • With the rapid growth of cities, roads carry heavy traffic and many buildings are under construction. Such environments make it harder for visually impaired people to walk comfortably. To alleviate this problem, we introduce a navigation system that assists visually impaired pedestrians and allows a service center to monitor them in real time. The system uses GPS, a camera, audio, and Wi-Fi (wireless fidelity), so GPS location and camera image information can be sent to the service center over a Wi-Fi network. Specifically, the transmitted GPS information enables the service center to determine the user's whereabouts and mark the location on a map, while the delivered camera images let the center monitor the user's view. The center can also offer live spoken guidance through the audio channel. In sum, this is a specialized navigation system that enables more comfortable walking for visually impaired people.

Implementation of Automotive Multimedia Interface Supporting Multi-Channel Display in Multi-Screen Display (다채널 다중 화면 디스플레이를 지원하는 차량용 멀티미디어 인터페이스 구현)

  • Jeon, Young-Joon;Song, Bong-Gi;Kim, Jang-Ju;Park, Jang-Sik;Yu, Yun-Sik
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.1 / pp.199-206 / 2013
  • Recently, drivers' demands on in-vehicle infotainment systems have been increasing rapidly, and these systems are being equipped with more convenient, human-friendly high-tech features. In this paper, we design and implement an embedded in-vehicle multimedia infotainment system that supports various multimedia sources. The proposed system drives an independent display on each screen for multi-channel multimedia sources using a single processor (one CPU), which reduces cost compared with other systems. It plays not only video and audio from storage devices but also real-time content from a camera (CAM), T-DMB, and DVB-T, and it can display multimedia from a smartphone on multiple screens over Wi-Fi. We expect that in-vehicle infotainment systems such as AVN (audio video navigation) and RSE (rear-seat entertainment) can be used in various applications at reduced cost.

Web Storage Application for In-Vehicle Infortainment System (차량용 인포테인먼트 시스템을 위한 웹 저장소 연동 응용 개발)

  • Jeon, Boo-Sun;Han, Tae-Man
    • Proceedings of the Korean Information Science Society Conference / 2012.06d / pp.118-120 / 2012
  • In-vehicle head units are evolving from AVN (audio, video, navigation) devices into in-vehicle infotainment (IVI) systems that provide converged IT services by interworking with external devices and the Internet. An IVI system can connect to the Internet through 3G, Wi-Fi, or Bluetooth, and on top of this connectivity it can offer a variety of Internet services. Among these, a web storage application service lets users share content across their various devices: it provides functions for sharing photos, music, videos, and documents through a connection to a server. Because such an application is used inside a vehicle, it must offer a user-friendly, intuitive interface that does not distract the driver. This paper proposes an in-vehicle web storage application that satisfies these requirements.

A Review of Assistive Listening Device and Digital Wireless Technology for Hearing Instruments

  • Kim, Jin Sook;Kim, Chun Hyeok
    • Korean Journal of Audiology / v.18 no.3 / pp.105-111 / 2014
  • Assistive listening devices (ALDs) are various types of amplification equipment designed to improve communication for individuals who are hard of hearing by enhancing access to the speech signal when individual hearing instruments are not sufficient. Many types of ALDs exist to overcome the triangle of speech-to-noise ratio (SNR) problems: noise, distance, and reverberation. ALDs vary in their internal electronic mechanisms, ranging from simple hard-wired microphone-amplifier units to more sophisticated broadcasting systems. They usually use microphones to capture an audio source and broadcast it wirelessly over frequency modulation (FM), infrared, induction loop, or other transmission techniques. Seven types of ALDs are introduced: hard-wired devices, FM sound systems, infrared sound systems, induction loop systems, telephone listening devices, television devices, and alert/alarm systems. Further development of digital wireless technology in hearing instruments will soon enable direct communication with ALDs without any accessories. Two technology solutions improve SNR and convenience in digital wireless hearing instruments: near-field magnetic induction combined with Bluetooth or proprietary radio-frequency (RF) transmission, and proprietary RF transmission alone. A recently launched digital wireless hearing aid applying this new technology can communicate with personal computers, phones, Wi-Fi, alert systems, and ALDs via iPhone, iPad, and iPod. However, it comes with its own iOS application offering a range of features, and there is currently no option for Android users.

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data in a short time. In this paper, we propose a deep learning method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as part of the user's interaction behavior: whether the user is accompanying an acquaintance at close distance and actively communicating with that acquaintance.

    We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation synchronizes the timestamps of data collected from different sensors, normalization is performed on each x, y, and z axis of the sensor data, and sequence data are generated with the sliding-window method. The sequence data are then fed to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of three convolutional layers and has no pooling layer, so as to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features; the LSTM consists of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier.

    The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. On these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively; both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor synchronization methods that minimize timestamp differences, and on transfer learning methods that allow models trained on the training data to transfer to evaluation data drawn from a different distribution. We expect to obtain a model that exhibits robust recognition performance against data changes not considered at training time.
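The preprocessing pipeline in the abstract (nearest-timestamp synchronization, per-axis normalization, sliding-window sequence generation) can be sketched as below. Window and stride sizes, and the helper names, are illustrative assumptions rather than the paper's settings:

```python
# Sketch of the multimodal preprocessing pipeline: align sensors by
# nearest timestamp, normalize each axis, and cut sliding windows.

import numpy as np

def sync_nearest(ref_ts, ts, values):
    """Resample `values` (sampled at `ts`) onto `ref_ts` by nearest timestamp."""
    idx = np.abs(ts[None, :] - ref_ts[:, None]).argmin(axis=1)
    return values[idx]

def normalize_axes(x):
    """Zero-mean, unit-variance normalization per axis (column)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def sliding_windows(x, window, stride):
    """Cut a (T, C) array into overlapping (window, C) sequences."""
    return np.stack(
        [x[i:i + window] for i in range(0, len(x) - window + 1, stride)]
    )

# Toy example: accelerometer sampled at integer times, gyroscope offset by 0.3 s.
acc_ts = np.arange(8, dtype=float)
gyr_ts = acc_ts + 0.3
gyr = np.random.randn(8, 3)
gyr_synced = sync_nearest(acc_ts, gyr_ts, gyr)  # now aligned to acc_ts
acc = normalize_axes(np.random.randn(8, 3))
windows = sliding_windows(np.hstack([acc, gyr_synced]), window=4, stride=2)
print(windows.shape)  # (3, 4, 6): 3 sequences of 4 timesteps, 6 channels
```

Each resulting (window, channels) sequence is what would be fed to the CNN front end, whose three pooling-free convolutional layers preserve the window's temporal axis for the LSTM.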