• Title/Summary/Keyword: audio visual


Effects of Audio-visual Entertainment and Soft Tissue Mobilization on Pressure Pain Thresholds, Psychophysiological parameters, and Brain waves in University Students with Tension-type Headache (긴장성 두통이 있는 대학생들에게 시청각적 엔터테인먼트와 연부조직 가동술이 압력통각역치, 바이오피드백, 뇌파에 미치는 영향)

  • Jung, Dae-In;Lee, Eun-Sang;Kim, Hyun-Joong
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.7
    • /
    • pp.539-548
    • /
    • 2020
  • TTH (tension-type headache) is the most common primary headache among adults. Long-lasting headaches can become chronic and negatively affect daily life. The purpose of this study is to compare the effects on TTH of AVE (audio-visual entertainment) and STM (soft tissue mobilization), which are suited to managing the psychogenic and pathogenic factors of TTH, respectively. The participants were 30 people who had complained of intermittent or persistent headaches for more than 6 months, assigned 10 each to an AVE group, an STM group, and an AVE plus STM group. Each group completed a total of 12 sessions, three times a week for 4 weeks after the baseline, followed by a post-test. Outcome measures were PPTs (pressure pain thresholds), psychophysiological parameters, and EEG (electroencephalogram). The results were analyzed for time-by-group interaction with a two-way rmANOVA (repeated-measures analysis of variance). For the PPTs, an interaction was found for the right trapezius (p<.05), with greater improvement in the AVE group. Thus AVE, which addresses psychological factors rather than acting directly on the muscles involved in pathogenic factors, had a positive effect on the PPTs, whereas the changes in the mean values of the psychophysiological parameters and brain waves were not statistically significant. These findings suggest that audio-visual stimulation could be considered in the management of TTH.

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei;Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.3
    • /
    • pp.164-170
    • /
    • 2013
  • Recent years have seen growing demand for automatic speech recognition (ASR) systems that can operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built from separately trained audio and visual HMMs, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to adjust the weights of the video and audio streams automatically according to differences in the noise environment. Experimental results show that, compared with other bimodal speech recognition approaches, this approach obtains better recognition performance.
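The stream-weighting idea in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function names, the sigmoid SNR-to-weight mapping, and the example scores are all assumptions.

```python
import math

def combine_stream_scores(audio_loglik, visual_loglik, snr_db):
    """Combine per-state log-likelihoods from separately trained audio
    and visual HMMs, in the spirit of weighted multi-stream decoding.

    The SNR-to-weight mapping below is a hypothetical choice: a sigmoid
    that trusts the audio stream more as the estimated SNR rises."""
    lam = 1.0 / (1.0 + math.exp(-snr_db / 10.0))
    return lam * audio_loglik + (1.0 - lam) * visual_loglik

# With clean audio (high SNR) the audio stream dominates the score;
# in heavy noise (low SNR) the visual stream takes over.
clean = combine_stream_scores(-10.0, -50.0, snr_db=20.0)
noisy = combine_stream_scores(-10.0, -50.0, snr_db=-20.0)
```

The point of an automatic weight is exactly what the sketch shows: the same pair of stream scores yields a combined score dominated by whichever stream the noise estimate says to trust.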

The Development of Virtual Reality Telemedicine System for Treatment of Acrophobia (고소공포증 치료를 위한 가상현실 원격진료 시스템의 개발)

  • Ryu Jong Hyun;Beack Seung Hwa;Paek Seung Eun;Hong Sung Chan
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.4
    • /
    • pp.252-257
    • /
    • 2003
  • Acrophobia is an abnormal fear of heights. Medication and cognitive-behavioral methods have mainly been used as treatments. Lately, virtual reality technology has been applied to this kind of anxiety disorder. A virtual environment provides the patient with stimuli that arouse the phobia, and exposure to that environment helps the patient overcome the fear. With a telemedicine system, the patient can now be diagnosed by a medical doctor at a distance. The hospital and doctors can obtain medical data, audio, video, and signals from the actual examination or operating room via a live interactive system. This system requires audio-visual and multimedia conference services, an online questionnaire, an ECG signal transfer system, and an update system. A virtual reality simulation system, composed of a position sensor, a head-mounted display, and an audio system, is also included in this telemedicine system. In this study, we applied this system to acrophobia patients at a distance.

Visual Image Effects on Sound Localization in Peripheral Region under Dynamic Multimedia Conditions

  • Kono, Yoshinori;Hasegawa, Hiroshi;Ayama, Miyoshi;Kasuga, Masao;Matsumoto, Shuichi;Koike, Atsushi;Takagi, Koichi
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.702-705
    • /
    • 2002
  • This paper describes the effects of visual information on sound localization in the peripheral visual field under dynamic conditions. Presentation experiments with an audio-visual stimulus were carried out using a movie of a moving patrol car and its siren sound. The following results were obtained: first, the sound image at the beginning of the presentation was more strongly captured by the visual image than at the end, i.e., a "beginning effect" occurred; second, in the peripheral regions, the "beginning effect" appeared strongly near the fixation point of the eyes.


Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.754-771
    • /
    • 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modes, and the influence of each mode on the emotional state also differs. This paper therefore studies the fusion of the two most important modes in emotion recognition (voice and facial expression) and proposes a dual-modal emotion recognition method that combines an attention mechanism with an improved AlexNet network. After simple preprocessing of the audio and video signals, prior knowledge is first used to extract audio features. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function is used to mitigate the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which is superior to several comparative algorithms.
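The concordance correlation coefficient used as the evaluation metric in this abstract is a standard formula (Lin's CCC). A minimal pure-Python sketch, using made-up example traces rather than the paper's data:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between a predicted
    emotion trace and a gold-standard trace (population statistics)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Made-up arousal traces: predictions that track the gold standard
# closely score near 1.0.
pred = [0.1, 0.4, 0.35, 0.8]
gold = [0.0, 0.5, 0.30, 0.9]
score = ccc(pred, gold)
```

Unlike plain Pearson correlation, the `(mx - my) ** 2` term in the denominator penalizes a systematic offset between prediction and ground truth, which is why CCC is preferred for arousal/valence regression.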

Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5936-5954
    • /
    • 2018
  • Eight of the world's ten largest technology companies have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has driven remarkable growth in mobile VR technology in both academia and industry. Accordingly, reproducing realistic acoustic cues is increasingly important, because auditory cues can enhance the perception of a complicated surrounding environment beyond what the visual system alone provides in VR. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound rendering pipeline that can reproduce reality-like sound but requires high computational cost. The proposed system is verified on an FPGA platform, with special focus on hardware architectural design for low power and real-time operation. The results show that the proposed system on an FPGA can downmix up to 5 sources at a real-time rate (52 FPS) with a low power consumption of 382 mW. Furthermore, user evaluation confirmed satisfactory sound quality for the 3D sound generated by the proposed system.
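As a rough software illustration of the downmixing stage that this paper accelerates in hardware, the sketch below mixes positioned mono sources into a stereo frame using distance attenuation and constant-power panning. The 2D geometry, the gain law, and the azimuth-to-pan mapping are simplifying assumptions, not the paper's design.

```python
import math

def downmix_stereo(sources, listener=(0.0, 0.0)):
    """Downmix several positioned mono sources to one stereo frame.
    Each source is (sample, x, y) relative to the listener; a software
    sketch of what the paper's system computes per sample in hardware."""
    left = right = 0.0
    for sample, x, y in sources:
        dx, dy = x - listener[0], y - listener[1]
        dist = math.hypot(dx, dy) or 1e-6      # avoid division by zero
        gain = 1.0 / (1.0 + dist)              # simple distance attenuation
        az = math.atan2(dx, dy)                # azimuth: 0 = front, +pi/2 = right
        pan = (az / math.pi + 1.0) / 2.0       # map [-pi, pi] -> [0, 1]
        left += sample * gain * math.cos(pan * math.pi / 2.0)   # constant-power pan
        right += sample * gain * math.sin(pan * math.pi / 2.0)
    return left, right

# A source directly to the right of the listener lands mostly in the
# right channel; a source straight ahead is split evenly.
l, r = downmix_stereo([(1.0, 2.0, 0.0)])
```

Constant-power panning (cosine/sine gains) keeps perceived loudness roughly constant as a source sweeps across the stereo field, which is why it is the usual choice over linear panning.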

Effects of distractions such as audio, audiovisual, and hand-use on food intake and satiety ratings

  • Sukkyung Shin
    • Journal of Nutrition and Health
    • /
    • v.57 no.3
    • /
    • pp.275-281
    • /
    • 2024
  • Purpose: Various forms of distraction can have different effects on food intake. Distraction can draw attention away from the food being consumed and inhibit monitoring of food intake. This study examined the effects of different levels of distraction on eating behaviors. Methods: The study used a repeated-measures design. The participants (10 males, 13 females) were served test meals of the same volume (curry rice, 800 g) at lunch for 4 weeks. Eating behaviors were analyzed during 4 distraction sessions: the first session (without distraction), the second session (audio distraction, radio), the third session (audiovisual distraction, television), and the fourth session (audiovisual distraction and hand-use, smartphone). Satiety ratings were measured on a 100 mm visual analog scale. Results: The participants consumed more food during the fourth session than during the other sessions. In addition, the mealtime duration in the fourth session was longer than in the other sessions (audiovisual distraction and hand-use, 13.74 minutes vs. without distraction, 10.36 minutes; audio distraction, 8.31 minutes; and audiovisual distraction, 9.61 minutes; p < 0.05). According to the satiety ratings obtained before and after consumption of the test meals in each session, participants felt significantly more satiated 30 minutes after consuming the test meal in the first session than in the other sessions (without distraction, 84.23 mm vs. audio distraction, 76.07 mm; audiovisual distraction, 68.93 mm; and audiovisual distraction and hand-use, 74.70 mm; p < 0.05). Conclusion: Different levels of distraction can have different effects on eating behaviors, and as distractions become more diverse and selectable, food intake may be affected by them.
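The within-subject comparisons reported in this abstract (p < 0.05 across sessions) rest on paired statistics. A minimal sketch of the paired t statistic, using hypothetical VAS ratings rather than the study's data:

```python
import math

def paired_t(before, after):
    """t statistic for paired samples, e.g. one participant's satiety
    VAS rating in two distraction sessions (sample variance, df = n-1)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical ratings (mm on a 100 mm VAS) for five participants:
with_tv = [70, 66, 72, 65, 69]         # satiety with audiovisual distraction
no_distraction = [84, 80, 88, 79, 85]  # satiety without distraction
t_stat = paired_t(with_tv, no_distraction)
```

A t statistic beyond the critical value for the given degrees of freedom (2.776 at df = 4, two-tailed, α = .05) corresponds to the kind of p < 0.05 result the study reports; the actual study used 23 participants and four sessions.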

Evaluation of the Usefulness of the Respiratory Guidance System in the Respiratory Gating Radiation Therapy (호흡동조 방사선치료 시 호흡유도시스템의 유용성 평가)

  • Lee, Yeong-Cheol;Kim, Sun-Myung;Do, Gyeong-Min;Park, Geun-Yong;Kim, Gun-Oh;Kim, Young-Bum
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.24 no.2
    • /
    • pp.167-174
    • /
    • 2012
  • Purpose: Respiration is one of the most important factors in respiratory gating radiation therapy (RGRT). We developed a unique respiratory guidance system using an audio-visual system to support and stabilize each patient's respiration, and evaluated its usefulness. Materials and Methods: Seven patients received RGRT at our clinic from June 2011 to April 2012. After breathing exercises with the audio-visual system, we measured their spontaneous respiration and their respiration with the audio-visual system, respectively. From the measured data, we computed standard deviations of the areas of the respiratory cycles and functions and analyzed them to examine changes in breathing before and after the therapy. Results: The standard deviations of the PTP (peak to peak) for free breathing, the audio guidance system, and the respiratory guidance system were 0.343, 0.148, and 0.078, respectively; those of the respiratory cycles were 0.645, 0.345, and 0.171; and those of the areas of the respiratory functions were 2.591, 1.008, and 0.877. The average differences in the standard deviations between the CT room and the therapy room across all patients were 0.425 for the PTP, 1.566 for the respiratory cycles, and 3.671 for the respiratory areas. Comparing before and after application of the respiratory guidance system, the standard deviation of the PTP was 0.265 and that of the respiratory cycles was 0.474. The t-tests comparing free breathing with the audio-visual guidance system gave a P-value of 0.035 for the PTP, 0.009 for the cycles, and 0.010 for the respiratory areas. Conclusion: Respiratory control can be one of the most important factors determining the success or failure of RGRT. We achieved more stable breathing with the audio-visual respiratory guidance system than with free breathing or auditory guidance alone. In particular, the system was excellent at reproducing respiratory cycles in care units. Such a system makes it possible to reduce time lost to unstable breathing and to perform more precise and detailed treatment.
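The stability measure compared in this abstract can be illustrated by computing each cycle's peak-to-peak amplitude and then the spread of those amplitudes: a lower spread means more stable breathing. A simplified sketch with synthetic traces; the fixed cycle length and the signals themselves are assumptions, not the study's data.

```python
import math

def ptp_stability(signal, cycle_len):
    """Split a respiration trace into cycles of cycle_len samples,
    compute each cycle's peak-to-peak amplitude, and return the
    standard deviation of those amplitudes (lower = more stable)."""
    ptps = []
    for i in range(0, len(signal) - cycle_len + 1, cycle_len):
        cycle = signal[i:i + cycle_len]
        ptps.append(max(cycle) - min(cycle))
    mean = sum(ptps) / len(ptps)
    return math.sqrt(sum((p - mean) ** 2 for p in ptps) / len(ptps))

# Guided breathing: uniform cycles.  Free breathing: depth drifts
# upward by 10% per cycle, so the peak-to-peak amplitudes scatter.
guided = [math.sin(2 * math.pi * i / 20) for i in range(100)]
free = [(1 + 0.1 * (i // 20)) * math.sin(2 * math.pi * i / 20) for i in range(100)]
```

On these synthetic traces the guided signal's peak-to-peak spread is essentially zero while the drifting signal's is not, mirroring the drop in standard deviation (0.343 to 0.078) the study observed with audio-visual guidance.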


Visual Sharing: A View Sharing Technique for Multi-party Collaboration Environments (Visual Sharing: 다자간 원격 협업 환경에서의 View 공유 기술)

  • Kim, Nam-Gon;Kim, Jong-Won
    • Korean HCI Society: Conference Proceedings (한국HCI학회:학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.643-647
    • /
    • 2008
  • ACE (Advanced Collaboration Environment) [1] aims to let people collaborate remotely as if they were in the same space. Providing tele-presence in remote collaboration requires high-quality audio and a shared view that can show the overall environment. Visual Sharing focuses on providing interactive view sharing among remote participants: a user can see the remote collaboration space from any direction and can share his or her working screen view with others. In this paper, we summarize the development plan for Visual Sharing and the relevant elementary techniques for developing it.


An Analysis of the Standardization Framework and Research Trends of the ATM Forum (ATM Forum의 표준화수행체계 및 연구 동향 분석)

  • 오행석;박기식
    • TTA Journal
    • /
    • s.60
    • /
    • pp.79-94
    • /
    • 1998
  • The DAVIC (Digital Audio Visual Council) forum was established in 1994 to promote the spread of interactive digital audio-visual services by standardizing open interfaces and protocols that maximize interoperability. This article surveys and analyzes the organization and mission of the DAVIC forum, its standardization framework, and its standardization achievements and future plans, and describes Korea's response strategy in light of the positions of major countries on the contested technologies.
