• Title/Summary/Keyword: Sound Stimuli

Effect of Multimodal cues on Tactile Mental Imagery and Attitude-Purchase Intention Towards the Product (다중 감각 단서가 촉각적 심상과 제품에 대한 태도-구매 의사에 미치는 영향)

  • Lee, Yea Jin;Han, Kwanghee
    • Science of Emotion and Sensibility
    • /
    • v.24 no.3
    • /
    • pp.41-60
    • /
    • 2021
  • The purpose of this research was to determine whether multimodal cues in an online shopping environment could enhance consumers' tactile mental imagery, attitudes, and purchase intentions towards an apparel product. One limitation of online retail is that consumers cannot physically touch the items. Because tactile information plays an important role in consumer decisions, especially for apparel, this study investigated whether multimodal cues can compensate for the lack of tactile stimuli. In experiment 1, participants were randomly assigned to one of four product-exploration conditions (picture only, video without sound, video with corresponding sound, and video with discordant sound), after which tactile mental imagery vividness, ease of imagination, attitude, and purchase intention were measured. The video with discordant sound yielded the lowest mean scores on all dependent variables. Experiment 2 used a within-participants design in which all participants explored the same product in the four conditions in random order, believing they were visiting four different brands on a price-comparison website. After the same variables as in experiment 1, plus the need for touch, were measured, repeated-measures ANCOVA revealed that the video with corresponding sound significantly enhanced tactile mental imagery vividness, attitude, and purchase intention compared with the other conditions, whereas the discordant condition produced significantly lower attitudes and purchase intentions. A dual mediation analysis further showed that the multimodal cue conditions significantly predicted attitudes and purchase intentions by sequentially mediating imagery vividness and ease of imagination. In sum, vivid tactile mental imagery triggered by audio-visual stimuli can benefit consumer decision making by making it easier to imagine a situation in which consumers could touch and use the product.
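The serial (dual) mediation the abstract reports, condition → imagery vividness → ease of imagination → attitude, can be sketched with ordinary least squares on simulated data. Everything below (variable names, effect sizes, sample size) is hypothetical and is not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data (hypothetical): condition code X, imagery vividness M1,
# ease of imagination M2, attitude Y -- mirroring the serial mediation
# X -> M1 -> M2 -> Y described in the abstract.
X = rng.integers(0, 2, n).astype(float)                   # 0 = picture only, 1 = video+sound
M1 = 0.8 * X + rng.normal(0, 1, n)                        # vividness
M2 = 0.6 * M1 + 0.2 * X + rng.normal(0, 1, n)             # ease of imagination
Y = 0.5 * M2 + 0.1 * M1 + 0.1 * X + rng.normal(0, 1, n)   # attitude

def ols(y, *cols):
    """OLS coefficients for y regressed on an intercept plus the given columns."""
    Xmat = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
    return beta

a1 = ols(M1, X)[1]        # X -> M1
d21 = ols(M2, X, M1)[2]   # M1 -> M2, controlling for X
b2 = ols(Y, X, M1, M2)[3] # M2 -> Y, controlling for X and M1

serial_indirect = a1 * d21 * b2
print(f"serial indirect effect (X -> M1 -> M2 -> Y): {serial_indirect:.3f}")
```

In practice the indirect effect would be tested with bootstrap confidence intervals rather than read off a point estimate.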

An Analysis of the Acoustical Quality of Rooms Using Auditory Perception Tests (청감실험에 의한 실의 음향 특성분석)

  • Jeon, Jin-Yong
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.57-60
    • /
    • 1998
  • This paper adopts a psychoacoustical methodology to evaluate the acoustical quality of rooms and describes some results of an attempt to develop such a test. To investigate the effect of the hall response on subjects, a subjective experiment was performed in artificially simulated sound fields. Pairs of sounds differing in duration or frequency were presented to subjects, and their responses were recorded. The stimuli were varied through an auralization system that simulated three different rooms. Duration and frequency discrimination were found to be influenced by the room conditions, and these discrimination procedures may form the basis for an assessment of room acoustics.

Differences of Perceptual Correctness in the Place of Articulation Between Korean Plosives According to Two Phonation Types (두 가지 발성 유형에 따른 한국어 파열음의 조음 위치 인지도(認知度) 차이)

  • Suh, Seung-Wan
    • Proceedings of the KSPS conference
    • /
    • 2007.05a
    • /
    • pp.84-87
    • /
    • 2007
  • The purpose of this paper is to show that, in a noisy environment, there is a significant difference in the perceptibility of the place of articulation between fortis and aspirated plosives in Korean. A perception experiment was conducted in which two groups of subjects heard stimuli mixed with noise and were asked to identify which sound they had heard. The results show that, in noise, aspirated plosives cannot be heard clearly, whereas fortis plosives can.

Analyzing animation techniques used in webtoons and their potential issues (웹툰 연출의 애니메이션 기법활용과 문제점 분석)

  • Kim, Yu-mi
    • Cartoon and Animation Studies
    • /
    • s.46
    • /
    • pp.85-106
    • /
    • 2017
  • With the media's shift into the digital era in the 2000s, comic book publishers attempted a transition into the new medium by establishing a distribution structure over internet networks, but that effort stopped short of escaping the parallel-page reading structure of traditional comics. Webtoons, on the other hand, show diverse changes that redesign the structure of traditional sequential art media: they separate and allot spaces according to the vertical-scroll reading method of the internet browser and include animations, sound effects, and background music. This trend accords with the preferences of modern readers. Modern society has a complicated social structure shaped by the development of various media; the public is therefore exposed to different stimuli and shows differentiated perceptions. In other words, whereas traditional comics, like other print media, call for an appreciation of withdrawal and immersion, webtoons are more immediate and entertaining, inserting sounds and using moving text and characters in specific frames. Motion in webtoons is applied selectively, for dramatic tension or for an effective expression of action; for example, hand-drawn animation techniques are adopted to express motion by dividing a movement into many layered images. Sound is likewise utilized: background music with episode-related lyrics, melodies, ambient sound, and motion-related sound effects. In addition, webtoons give readers new amusement through tactile stimuli via the vibration of a smartphone. The vertical orientation, the time-based nature of animated motion, and the tactile stimuli used in webtoons thus differentiate them from published comics. However, webtoons' use of these innovative techniques has not yet reached its full potential.
Beyond the operational complexity of the software used for webtoon effects, this is a transitional phenomenon stemming from a general lack of technical understanding of how animation and sound should be applied. For example, a sound may be programmed to play when a specific frame scrolls into view on the monitor, but if the reader scrolls faster or slower than the author intended, the sound can end before or after the whole image is seen. The motion of each frame is triggered in a similar fashion, so a reader's scroll speed governs a motion's timing; motions can therefore miss their intended timing and look unnatural because they play out of context, and sound effects that have already finished can disturb readers' concentration. These problems stem from a lack of continuity; solving them requires naturally activated, continuous sounds and animations, such as the simple rotation of joints when a character moves.

Sound-Field Speech Evoked Auditory Brainstem Response in Cochlear-Implant Recipients

  • Jarollahi, Farnoush;Valadbeigi, Ayub;Jalaei, Bahram;Maarefvand, Mohammad;Zarandy, Masoud Motasaddi;Haghani, Hamid;Shirzhiyan, Zahra
    • Korean Journal of Audiology
    • /
    • v.24 no.2
    • /
    • pp.71-78
    • /
    • 2020
  • Background and Objectives: Limited information is currently available on speech-stimulus processing at the subcortical level in cochlear implant (CI) recipients. Speech processing at the brainstem level is measured using the speech-evoked auditory brainstem response (S-ABR). The purpose of the present study was to measure S-ABR components under sound-field presentation in CI recipients and compare them with those of normal-hearing (NH) children. Subjects and Methods: In this descriptive-analytical study, participants were divided into two groups: CI patients and an NH group. The CI group consisted of 20 children with prelingual hearing impairment (mean age = 8.90±0.79 years) and ipsilateral (right-side) CIs. The control group consisted of 20 healthy NH children with comparable age and sex distribution. The S-ABR was evoked by a 40-ms synthesized /da/ syllable presented in the sound field. Results: Sound-field S-ABRs measured in the CI recipients showed significantly delayed latencies compared with the NH group. The frequency-following response peak amplitude was significantly higher in CI recipients than in their NH counterparts (p<0.05), while neural phase locking was significantly lower in CI recipients (p<0.05). Conclusions: The sound-field S-ABR findings demonstrate that CI recipients have neural encoding deficits in the temporal and spectral domains at the brainstem level; the sound-field S-ABR can therefore be considered an efficient clinical procedure for assessing speech processing in CI recipients.
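The frequency-following response (FFR) amplitude the abstract compares between groups can be illustrated with a minimal spectral sketch: measure the spectrum of an averaged response at the stimulus fundamental. The sampling rate, F0, and signal parameters below are assumed for illustration and are not the study's values.

```python
import numpy as np

# Hypothetical sketch: estimating FFR amplitude at the stimulus F0
# from an averaged brainstem response via the FFT.
fs = 10_000                      # sampling rate, Hz (assumed)
t = np.arange(0, 0.040, 1 / fs)  # 40-ms window, matching the /da/ stimulus length
f0 = 100.0                       # assumed fundamental frequency of the syllable

rng = np.random.default_rng(1)
# Toy "response": phase-locked energy at F0 buried in background noise.
response = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.2, t.size)

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0_bin = np.argmin(np.abs(freqs - f0))
ffr_amplitude = spectrum[f0_bin]
print(f"FFR amplitude at {f0:.0f} Hz: {ffr_amplitude:.3f}")
```

A clinical pipeline would average many sweeps and also quantify phase locking across trials; this sketch only shows the single-trace spectral measure.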

Vibration Stimulus Generation using Sound Detection Algorithm for Improved Sound Experience (사운드 실감성 증진을 위한 사운드 감지 알고리즘 기반 촉각진동자극 생성)

  • Ji, Dong-Ju;Oh, Sung-Jin;Jun, Kyung-Koo;Sung, Mee-Young
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.158-162
    • /
    • 2009
  • Sound effects accompanied by appropriate tactile stimuli can strengthen the sense of reality; for example, gunfire in games and movies is more impressive when accompanied by a vibration effect. On the same principle, adding vibration information to an existing sound file and generating vibration effects through a haptic interface during playback can augment the sound experience. In this paper, we propose a method to generate this vibration information by analyzing the sound. The vibration information consists of vibration patterns and their timing within a sound file. Adding it manually is labor-intensive, so we propose a sound detection algorithm that searches a sound file for the moments when specific sounds occur, together with a method to create vibration effects at those moments. The detection algorithm compares the frequency characteristics of the specific sounds against the sound file and finds the moments with similar frequency characteristics. Its detection ratio was 98% for five different kinds of gunfire. We also developed a GUI-based vibration-pattern editor to perform the sound search and vibration generation easily.
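A minimal sketch of the frequency-matching idea the abstract describes, assuming short-time magnitude spectra compared by cosine similarity against a template sound (the paper's exact features, window sizes, and thresholds are not given, so the values below are illustrative):

```python
import numpy as np

fs = 8_000    # sampling rate, Hz (assumed)
win = 256     # analysis window length (assumed)

def spectrum(frame):
    """Hann-windowed magnitude spectrum of one frame."""
    return np.abs(np.fft.rfft(frame * np.hanning(len(frame))))

def detect(signal, template, threshold=0.7):
    """Return sample offsets whose windowed spectrum is cosine-similar
    to the template's spectrum above `threshold`."""
    ref = spectrum(template)
    ref /= np.linalg.norm(ref)
    hits = []
    for start in range(0, len(signal) - win, win // 2):  # 50% hop
        spec = spectrum(signal[start:start + win])
        norm = np.linalg.norm(spec)
        if norm and ref @ (spec / norm) > threshold:
            hits.append(start)
    return hits

# Toy example: a 440 Hz burst embedded in noise stands in for a gunshot.
t = np.arange(win) / fs
burst = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(2)
audio = rng.normal(0, 0.05, fs)       # 1 s of background noise
audio[3072:3072 + win] += burst       # event inserted at sample 3072

hits = detect(audio, burst)
print(hits)                           # offsets at/near sample 3072
```

Each detected offset would then be paired with a vibration pattern and timestamp in the vibration track, which is the step the paper's editor automates.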

An Investigation of Perceived and Performed Sound Durations

  • Jeon, Jin-Yong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.3E
    • /
    • pp.86-94
    • /
    • 1996
  • The aims of this study were to describe the way in which sound durations are perceived and to explain the mechanisms underlying duration perception in music performance. Three experiments were carried out to determine the difference limen for the perception of sound duration and to find the effects of frequency and intensity on duration discrimination. For short tones ranging from 25 to 100 msec, discrimination judgements improved linearly with increasing signal duration. The JND was constant for durations between 100 msec and 2 sec; for longer durations (more than 2 sec), however, the JND again improved linearly. Subjects were also presented with pairs of high- and low-frequency pure tones and asked to discriminate differences in the durations of the two tones while ignoring differences in their frequency; they perceived the higher-frequency tone as longer in duration. An experiment on the effect of intensity showed that a 20-phon difference makes subjects perceive the louder stimulus as longer than the quieter one. Finally, in a performance study, an analysis of musical performances revealed the corresponding effect of frequency: musicians played higher notes shorter than lower notes, which agrees with the perception results above.
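A duration-discrimination trial of the kind described can be sketched as stimulus generation: a standard tone plus a comparison lengthened by a fixed fraction. The sampling rate, frequency, ramp length, and 10% increment below are assumed, not the study's values.

```python
import numpy as np

fs = 44_100  # sampling rate, Hz (assumed)

def tone(freq_hz, dur_s, fs=fs):
    """Pure tone with a 5-ms raised-cosine ramp at each end to avoid clicks."""
    n = int(round(dur_s * fs))
    x = np.sin(2 * np.pi * freq_hz * np.arange(n) / fs)
    ramp = int(0.005 * fs)
    env = np.ones(n)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return x * env

standard = tone(1000, 0.100)           # 100-ms standard
comparison = tone(1000, 0.100 * 1.10)  # comparison 10% longer
print(len(standard), len(comparison))  # 4410 4851 samples
```

An adaptive procedure would shrink the duration increment toward the JND over trials; the frequency of the comparison could likewise be varied to probe the frequency bias reported above.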

Effects on Electrophysiologic Responses to the Transcutaneous Electrical Nerve Stimulation and Ultra Sound (경피신경전기자극과 초음파가 전기생리학적 반응에 미치는 영향)

  • Baek Su-Jeong;Lee Mi-Ae;Kim Jin-Sang;Choi Jin-ho
    • The Journal of Korean Physical Therapy
    • /
    • v.12 no.1
    • /
    • pp.49-56
    • /
    • 2000
  • The purpose of this study was to investigate the influence of afferent stimuli, transcutaneous electrical nerve stimulation (TENS) and ultrasound (US), on the electrodiagnostic findings of normal subjects. Electrodiagnostic studies were performed on 18 healthy female volunteers before and after the application of afferent stimulation to the right popliteal fossa. After TENS, there was no significant change in the latencies and amplitudes of the SEP, H-reflex, peroneal nerve F-wave, or sensory nerve conduction; the same held after ultrasound. The tibial nerve F-wave and motor nerve conduction showed prolonged latencies after TENS and US (p<0.01). Ultrasound may act by a mechanism similar to that of TENS, exerting localized inhibitory effects on the peripheral nerve. Further investigation is needed to establish the mechanism of action and the precise relevance of the stimulation modality.

The perceptual judgment of sound prolongation: Equal-appearing interval and direct magnitude estimation (연장음 길이에 따른 비유창성 정도 평가: 등간척도와 직접크기평정 비교 연구)

  • Jin Park;Hwajung Cha;Sejin Bae
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.59-67
    • /
    • 2023
  • This study aimed to propose an appropriate evaluation method for the perceived level of speech disfluency based on sound prolongation (i.e., increased segment duration). To this end, 34 Korean-speaking adults (9 males, 25 females; average age 32.9 years) participated as raters. The participants listened to sentences containing 25 stimuli in which the Korean voiceless fricative /s/ was lengthened in 80-ms increments (i.e., 285 ms, 365 ms, ..., 2,125 ms, 2,205 ms) and evaluated them on an equal-appearing interval (EAI) scale (1-7 points, where 1 represents "normal" and 7 represents "severe"). Based on the interval-scale results, the sentence stimuli whose prolonged fricative corresponded to the mild-to-moderate level (rated 4 points) were then selected as the reference modulus for direct magnitude estimation (DME). After scatter plots were created for the two sets of ratings, the relationship between the two mean values was analyzed by curve estimation, with the model yielding the highest R²-value determining whether a linear or a curvilinear approximation fit the data better. The results indicated a curvilinear relationship, suggesting that DME is a more appropriate evaluation method than the EAI scale for assessing the perceived level of disfluency based on sound prolongation.
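The curve-estimation step, comparing a linear and a curvilinear (here, power-function) fit by R², can be sketched on simulated mean ratings; the exponent, noise level, and scaling below are assumed, not the study's data.

```python
import numpy as np

# Simulated mean ratings (hypothetical): DME values follow a power law
# of the mean EAI ratings, as in Stevens-style magnitude scaling.
rng = np.random.default_rng(3)
eai = np.linspace(1, 7, 25)                                # mean EAI per stimulus
dme = 20 * eai ** 2.0 * rng.lognormal(0, 0.05, eai.size)   # mean DME per stimulus

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Linear fit: dme ~ a + b * eai
b, a = np.polyfit(eai, dme, 1)
r2_linear = r_squared(dme, a + b * eai)

# Power (curvilinear) fit via log-log regression: dme ~ c * eai**k
k, log_c = np.polyfit(np.log(eai), np.log(dme), 1)
r2_power = r_squared(dme, np.exp(log_c) * eai ** k)

print(f"linear R^2 = {r2_linear:.3f}, power R^2 = {r2_power:.3f}")
```

With power-law data the curvilinear model wins on R², which mirrors the paper's conclusion that the EAI-DME relationship is curvilinear rather than linear.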